I have a roadnetwork table containing the column
COLUMN geom geometry(LineString,4326);
I also have a sub_polygon table containing the column
COLUMN geom geometry(MultiPolygon,4326);
I want to subtract the polygons stored in sub_polygon from the linestrings stored in the roadnetwork table, and update that table with the resulting geometries.
I tried the following query:
WITH RESULTS as (
SELECT ST_Difference(public.roadnetwork.geom, public.sub_polygon.geom)
FROM public.roadnetwork, public.sub_polygon
),
FILTERED_RESULTS as (
SELECT RESULTS.st_difference FROM RESULTS where GeometryType(RESULTS.st_difference) <> 'GEOMETRYCOLLECTION'
)
UPDATE public.roadnetwork
SET geom = FILTERED_RESULTS.st_difference
FROM FILTERED_RESULTS;
but I get the following error:
ERROR: Geometry type (MultiLineString) does not match column type (LineString)
I modified the query to inspect the results as text:
WITH RESULTS as (
SELECT ST_Difference(public.roadnetwork.geom, public.sub_polygon.geom)
FROM public.roadnetwork, public.sub_polygon
),
FILTERED_RESULTS as (
SELECT ST_AsText(RESULTS.st_difference) FROM RESULTS where GeometryType(RESULTS.st_difference) <> 'GEOMETRYCOLLECTION'
)
SELECT * from FILTERED_RESULTS;
and I can see that the results contain some MULTILINESTRING values, which cannot be copied into the roadnetwork.geom column because the types are not consistent:
...
MULTILINESTRING((51.5054201 25.3462475,51.505411 25.3462656,51.5052981 25.3464467,51.5051894 25.3466039,51.5049763 25.3469023,51.5048058 25.347141,51.5046538 25.347324,51.5044476 25.3475493,51.5041983 25.3478035,51.5038722 25.3481104,51.5035605 25.3483885,51.509695 25.3489269,51.5026179 25.3492445,51.5022888 25.349556),(51.5022888 25.349556,51.5022898 25.3495551),(51.5022888 25.349556,51.5017303 25.3500517))
LINESTRING(51.5017303 25.3500517,51.5014725 25.3502989,51.5013472 25.3504121)
LINESTRING(51.5013472 25.3504121,51.501175 25.3505679)
...
How can I change my query to convert each MULTILINESTRING into LINESTRINGs so I can update my table successfully?
You could use ST_Dump to expand MultiLineStrings into LineStrings.
Something like this:
WITH RESULTS as (
SELECT ST_Difference(public.roadnetwork.geom, public.sub_polygon.geom)
FROM public.roadnetwork, public.sub_polygon
),
FILTERED_RESULTS as (
SELECT RESULTS.st_difference FROM RESULTS where GeometryType(RESULTS.st_difference) <> 'GEOMETRYCOLLECTION'
),
expanded_results as (
SELECT (ST_Dump(FILTERED_RESULTS.st_difference)).geom
FROM FILTERED_RESULTS
)
SELECT * from expanded_results;
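As a plain-Python sketch of what that dump step does (illustrative tuples and names, not the PostGIS API): each MULTILINESTRING is expanded into its component LINESTRINGs, while single-part geometries pass through unchanged.

```python
# Illustrative sketch only: a geometry is modeled as a (type, coords) pair,
# where a MULTILINESTRING's coords is a list of LINESTRING coordinate lists.

def expand(geometries):
    """Mirror (ST_Dump(geom)).geom: split each MULTILINESTRING into its
    LINESTRING parts; pass single-part geometries through unchanged."""
    out = []
    for gtype, coords in geometries:
        if gtype == "MULTILINESTRING":
            out.extend(("LINESTRING", part) for part in coords)
        else:
            out.append((gtype, coords))
    return out

results = [
    ("MULTILINESTRING", [[(51.5054, 25.3462), (51.5023, 25.3496)],
                         [(51.5023, 25.3496), (51.5017, 25.3501)]]),
    ("LINESTRING", [(51.5017, 25.3501), (51.5013, 25.3504)]),
]
print(expand(results))  # three LINESTRING entries, all safe for the column
```

Every row produced this way has a single-part type, so it no longer violates the LineString column constraint.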
I have this query, which currently returns only 10 rows:
select *
FROM Table1 as o
inner join Table2 as t on t.Field1 = o.Field2
where Code = 123456 and t.FakeData is not null
Now, if I want to parse the field FakeData (which, unfortunately, can contain different kinds of data, from dates to surnames; it is nvarchar(70)) for display and/or filtering:
select *, TRY_PARSE(t.FakeData as date USING 'en-GB') as RealDate
FROM Table1 as o
inner join Table2 as t on t.Field1 = o.Field2
where Code = 123456 and t.FakeData is not null
the query takes about ten times longer to execute.
What am I doing wrong? How can I speed it up?
I can't modify the database; I'm just a customer who reads the data.
The T-SQL documentation for TRY_PARSE makes the following observation:
Keep in mind that there is a certain performance overhead in parsing the string value.
NB: I am assuming your typical date format would be dd/mm/yyyy.
The following is something of a shot in the dark that might help. By progressively assessing whether the nvarchar column is a candidate date, it is possible to reduce the number of calls to that function. Note that a value established in one APPLY can be referenced in a subsequent APPLY:
CREATE TABLE mytable(
FakeData NVARCHAR(60) NOT NULL
);
INSERT INTO mytable(FakeData) VALUES (N'oiwsuhd ouhw dcouhw oduch woidhc owihdc oiwhd cowihc');
INSERT INTO mytable(FakeData) VALUES (N'9603200-0297r2-0--824');
INSERT INTO mytable(FakeData) VALUES (N'12/03/1967');
INSERT INTO mytable(FakeData) VALUES (N'12/3/2012');
INSERT INTO mytable(FakeData) VALUES (N'3/3/1812');
INSERT INTO mytable(FakeData) VALUES (N'ohsw dciuh iuh pswiuh piwsuh cpiuwhs dcpiuhws ipdcu wsiu');
select
t.FakeData, oa3.RealDate
from mytable as t
outer apply (
select len(FakeData) as fd_len
) oa1
outer apply (
select case when oa1.fd_len > 10 then 0
when len(replace(FakeData,'/','')) + 2 = oa1.fd_len then 1
else 0
end as is_candidate
) oa2
outer apply (
select case when oa2.is_candidate = 1 then TRY_PARSE(t.FakeData as date USING 'en-GB') end as RealDate
) oa3
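A Python sketch of the same staged idea (the helper name and the day-first format are assumptions, not part of the answer's SQL): run the cheap length and slash-count guards first, and only pay for the actual parse when a string survives them.

```python
from datetime import datetime

def try_parse_date(s):
    """Roughly mimic TRY_PARSE(... as date USING 'en-GB') with the same
    progressive guards as the OUTER APPLY stages above."""
    if len(s) > 10:                            # oa1/oa2: too long to be a date
        return None
    if len(s.replace("/", "")) + 2 != len(s):  # oa2: needs exactly two '/'
        return None
    try:                                       # oa3: the expensive parse
        return datetime.strptime(s, "%d/%m/%Y").date()
    except ValueError:
        return None

for s in ["oiwsuhd ouhw dcouhw", "12/03/1967", "12/3/2012", "3/3/1812"]:
    print(s, "->", try_parse_date(s))
```

Strings that fail the cheap guards never reach the parser, which is where the savings come from.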
FakeData                                                  RealDate
--------------------------------------------------------  ----------
oiwsuhd ouhw dcouhw oduch woidhc owihdc oiwhd cowihc      null
9603200-0297r2-0--824                                     null
12/03/1967                                                1967-03-12
12/3/2012                                                 2012-03-12
3/3/1812                                                  1812-03-03
ohsw dciuh iuh pswiuh piwsuh cpiuwhs dcpiuhws ipdcu wsiu  null
db<>fiddle here
I need to perform a query that outputs JSON.
SQL Server has the FOR JSON PATH clause, but its output would be:
[{"key_1":"value_1","key_2":"value_2"},{"key_1":"value_3","key_2":"value_4"}]
But the output I need is:
{ "key_1": ["value_1","value_3"],
"key_2": ["value_2","value_4"]
}
Can this be done?
Add , WITHOUT_ARRAY_WRAPPER after FOR JSON PATH
https://learn.microsoft.com/en-us/sql/relational-databases/json/remove-square-brackets-from-json-without-array-wrapper-option?view=sql-server-ver15
UPDATE:
This isn't the prettiest solution, but it works. I tried to recreate some data based on the original format of your output. So the answer is based on that assumption.
-- Test data (based on what I could figure out from your question)
CREATE TABLE dbo.jsonvalues
(
key_1 NVARCHAR(MAX)
,key_2 NVARCHAR(MAX)
);
INSERT INTO dbo.jsonvalues
(
key_1
,key_2
)
VALUES
(
N'value_1'
,N'value_2'
)
,(
N'value_3'
,N'value_4'
);
To get the data in the right format you would first have to unpivot the data (or store it differently). Then you can create an array in the output by using a sub-query in the SELECT.
SELECT
*
INTO
#newformat
FROM
dbo.jsonvalues AS J
UNPIVOT
(
vals FOR keys IN (key_1, key_2)
) AS u;
SELECT
N.keys
,(SELECT N2.vals FROM #newformat AS N2 WHERE N2.keys = N.keys FOR JSON AUTO) AS vals
FROM
(SELECT DISTINCT
keys
FROM
#newformat
) AS N
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER;
OUTPUT:
{
"keys":"key_1","vals":[{"vals":"value_1"},{"vals":"value_3"}]
},
{
"keys":"key_2","vals":[{"vals":"value_2"},{"vals":"value_4"}]
}
This won't give you exactly what you want, but it is the closest I could get without manual tweaking. I included the tweaks I made below to reach your requested output format. Just be warned that this is brittle and might need additional tweaks for your data.
SELECT
REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE((
SELECT
N.keys
,(SELECT N2.vals FROM #newformat AS N2 WHERE N2.keys = N.keys FOR JSON AUTO) AS vals
FROM
(SELECT DISTINCT
keys
FROM
#newformat
) AS N
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER
),'"keys":',''),'"vals"',''),',:',':'),'},{:',','),'{:',''),'}]}',']');
OUTPUT:
{
"key_1":["value_1","value_3"]
,"key_2":["value_2","value_4"]
}
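For comparison, the same reshaping (unpivot the rows, then group the values per key) is straightforward outside SQL; a Python sketch using the question's sample values:

```python
import json
from collections import defaultdict

# Rows as they come out of the table in the question.
rows = [
    {"key_1": "value_1", "key_2": "value_2"},
    {"key_1": "value_3", "key_2": "value_4"},
]

# Unpivot each row into (key, value) pairs, then collect values per key;
# this is the reshaping the UNPIVOT plus sub-query performs on the SQL side.
grouped = defaultdict(list)
for row in rows:
    for key, value in row.items():
        grouped[key].append(value)

print(json.dumps(grouped))
# {"key_1": ["value_1", "value_3"], "key_2": ["value_2", "value_4"]}
```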
I want to retrieve data from one table, where the data returned is based on conditions.
I've now added the COUNT section to my query, but it's not executing, and the following error occurred:
Column 'DailyStockHistory.Stc_Item_Desc' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause
The query is:
Select
*
From
(
Select
dsh.Stc_Item_Desc, dsh.Imported_Date, dsh.Store_ID, dsh.Quantity_On_Hand,Count(inv.Sim_Network_Type) counterr
From DailyStockHistory dsh,
Temp_DailyInventory inv
Where Stc_Item_Desc = 'SAWA/QuickNet prepaid triosim'
And dsh.Store_ID Like 'S%'
And (Imported_Date = '3-15-2017' Or Imported_Date = '3-22-2017')
) AS Data
PIVOT
(
SUM(Quantity_On_Hand)
FOR Imported_Date IN ([3-15-2017],[3-22-2017])
) AS PIVOTData
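The error itself is general: every column in the SELECT list that is not aggregated must appear in a GROUP BY clause. A minimal sqlite3 sketch (hypothetical table and column names, not the tables from the question) of a pivot done with conditional aggregation and a proper GROUP BY:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stock (item TEXT, day TEXT, qty INTEGER)")
con.executemany("INSERT INTO stock VALUES (?, ?, ?)", [
    ("triosim", "2017-03-15", 5),
    ("triosim", "2017-03-22", 7),
])

# Pivot via conditional aggregation; the non-aggregated column (item)
# appears in GROUP BY, which is exactly what the error message demands.
rows = con.execute("""
    SELECT item,
           SUM(CASE WHEN day = '2017-03-15' THEN qty END) AS d0315,
           SUM(CASE WHEN day = '2017-03-22' THEN qty END) AS d0322,
           COUNT(*) AS counterr
    FROM stock
    GROUP BY item
""").fetchall()
print(rows)  # [('triosim', 5, 7, 2)]
```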
I have a query which generates a result set such as the one in the first image below (images not reproduced here).
I want a result set like the one in the second image, along with the pivoted columns shown below it.
I have used the below query for the pivot:
select distinct
ptbl.PdaId,
ptbl.BusinessUnitName,
ptbl.SalesAreaName,
ptbl.regionName,
ptbl.territoryName,
ptbl.ResponseType,
[11-04-2016],
[11-17-2016],
[11-18-2016],
[11-20-2016]
from
(select
#data.PdaId,
#data.UserName,
#data.BusinessUnitName,
#data.SalesAreaName,
#data.regionName,
#data.territoryName,
#data.ResponseType,
#data.TOTAL [TOTAL],
#data.TOTAL [DateBreakUpTOTAL],
MyTable.InterviewedDate
from
#data
join #InProgressResponses
on #InProgressResponses.PdaId=#data.PdaId
left join MyTable
on MyTable.PdaId=#data.PdaId
) tbl1
pivot
(
sum([DateBreakUpTOTAL])
for InterviewedDate in ([11-04-2016], [11-17-2016], [11-18-2016], [11-20-2016])
) ptbl
but the break-up data is NULL instead of the above total count being divided across the respective dates.
Assume #data holds the dataset depicted in the first image, and MyTable has the dates mentioned in the pivot.
What is wrong with my query?
I am creating reports in SSIS using datasets and have the following SQL requirement:
The sql is returning three rows:
a
b
c
Is there any way I can have the SQL return an additional row without adding data to the table?
Thanks in advance,
Bruce
select MyCol from MyTable
union all
select 'something' as MyCol
You can use a UNION ALL to include a new row.
SELECT *
FROM yourTable
UNION ALL
SELECT 'newRow'
The number of columns needs to be the same between the top query and the bottom one. So if your first query has one column, then the second would also need one column.
If you need to add multiple rows, there is a less common syntax available:
declare @Footy as VarChar(16) = 'soccer'
select 'a' as Thing, 42 as Thingosity -- Your original SELECT goes here.
union all
select *
from ( values ( 'b', 2 ), ( 'c', 3 ), ( @Footy, Len( @Footy ) ) ) as Placeholder ( Thing, Thingosity )
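The UNION ALL technique can be demonstrated end to end with an in-memory sqlite3 database (illustrative table and column names):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTable (MyCol TEXT)")
con.executemany("INSERT INTO MyTable VALUES (?)", [("a",), ("b",), ("c",)])

# UNION ALL appends the rows produced by the second SELECT;
# nothing is written to the table itself.
rows = con.execute("""
    SELECT MyCol FROM MyTable
    UNION ALL
    SELECT 'something'
""").fetchall()
print([r[0] for r in rows])  # four values: a, b, c plus 'something'
```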