I am trying to merge and split polygons in PostGIS. Merging works, but I don't have any idea how to divide a polygon into two parts.
I have tried the following query, but it does not divide the polygon:
insert into edited values ('2', '338',
  (SELECT ST_AsText(geom_part2) FROM (
    WITH RECURSIVE ref(geom, env) AS (
      SELECT geom,
             ST_Envelope(geom) As env,
             ST_Area(geom)/2 As targ_area,
             1000 As nit
      FROM plots
      WHERE plot_no = 338 limit 1
    ),
    T(n, overlap) AS (
      VALUES (CAST(0 As Float), CAST(0 As Float))
      UNION ALL
      SELECT n + nit, ST_Area(ST_Intersection(geom, ST_Translate(env, n+nit, 0)))
      FROM T CROSS JOIN ref
      WHERE ST_Area(ST_Intersection(geom, ST_Translate(env, n+nit, 0))) > ref.targ_area
    ),
    bi(n) AS (
      SELECT n
      FROM T
      ORDER BY n DESC LIMIT 1
    )
    SELECT bi.n,
           ST_Difference(geom, ST_Translate(ref.env, n, 0)) As geom_part1,
           ST_Intersection(geom, ST_Translate(ref.env, n, 0)) As geom_part2
    FROM bi CROSS JOIN ref) AS TT));
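For reference, here is a minimal, untested sketch of the same envelope-translation idea with the CTE column lists matching their SELECT lists (PostgreSQL rejects ref(geom, env) when the SELECT returns four columns). The plots table and columns are taken from the query above; the sweep step is only illustrative:

WITH RECURSIVE ref(geom, env, targ_area, step) AS (
    SELECT geom,
           ST_Envelope(geom),
           ST_Area(geom) / 2,
           (ST_XMax(geom) - ST_XMin(geom)) / 1000.0  -- sweep the envelope in 1/1000ths of the width
    FROM plots
    WHERE plot_no = 338
    LIMIT 1
),
t(n, overlap) AS (
    VALUES (0::float, 0::float)
    UNION ALL
    SELECT n + step,
           ST_Area(ST_Intersection(geom, ST_Translate(env, n + step, 0)))
    FROM t CROSS JOIN ref
    WHERE ST_Area(ST_Intersection(geom, ST_Translate(env, n + step, 0))) > targ_area
),
bi(n) AS (
    SELECT n FROM t ORDER BY n DESC LIMIT 1  -- last offset where the overlap still exceeds half the area
)
SELECT ST_Difference(ref.geom, ST_Translate(ref.env, bi.n, 0))   AS geom_part1,
       ST_Intersection(ref.geom, ST_Translate(ref.env, bi.n, 0)) AS geom_part2  -- the piece just over half
FROM bi CROSS JOIN ref;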
I have a dataset as shown below:
ProductionOrder RootID SalesOrder Line Quantity
829602 60124786_7275 60124786 7375 1
829603 60124786_7275 60124786 7400 1
109051 60126867_10000 60126867 10000 3
109058 60126867_10000 60126867 10050 3
109063 60126867_10000 60126867 10075 3
109071 60126867_10000 60126867 10125 3
109076 60126867_10000 60126867 10150 3
I was wondering if it would be possible to "explode" this view out into its individual components for each quantity. For example, the last row (ProductionOrder: 109076) would look like this instead:
ProductionOrder RootID SalesOrder Line QtyID
109076 60126867_10000 60126867 10150 1 of 3
109076 60126867_10000 60126867 10150 2 of 3
109076 60126867_10000 60126867 10150 3 of 3
And this would be done for every line, dynamically based on that total quantity. I can achieve this with a loop, but there are thousands and thousands of rows, so I was wondering if anyone could help me with a CTE-based example of this. I am trying to wrap my head around it, but it has proven to be difficult. Any ideas?
This can be achieved very easily with a tally table. If you have very small values, then a JOIN to a small VALUES clause works:
--Sample data
WITH YourTable AS(
SELECT *
FROM (VALUES(829602,'60124786_7275',60124786,7375,1),
(829603,'60124786_7275',60124786,7400,1),
(109051,'60126867_10000',60126867,10000,3),
(109058,'60126867_10000',60126867,10050,3),
(109063,'60126867_10000',60126867,10075,3),
(109071,'60126867_10000',60126867,10125,3),
(109076,'60126867_10000',60126867,10150,3))V(ProductionOrder,RootID,SalesOrder,Line,Quantity))
--Solution
SELECT ProductionOrder,
RootID,
SalesOrder,
Line,
CONCAT(V.I,' of ',YT.Quantity) AS QtyID
FROM YourTable YT
JOIN (VALUES(1),(2),(3),(4),(5),(6),(7),(8),(9),(10))V(I) ON V.I <= YT.Quantity
If, however, you have much larger values for Quantity, then a larger tally will be needed:
--Sample data
WITH YourTable AS(
SELECT *
FROM (VALUES(829602,'60124786_7275',60124786,7375,1),
(829603,'60124786_7275',60124786,7400,1),
(109051,'60126867_10000',60126867,10000,3),
(109058,'60126867_10000',60126867,10050,3),
(109063,'60126867_10000',60126867,10075,3),
(109071,'60126867_10000',60126867,10125,3),
(109076,'60126867_10000',60126867,10150,101))V(ProductionOrder,RootID,SalesOrder,Line,Quantity)),
--Solution
N AS(
SELECT N
FROM(VALUES(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL),(NULL))N(N)),
Tally AS(
SELECT TOP (SELECT MAX(Quantity) FROM YourTable)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS I
FROM N N1, N N2, N N3) --1,000 rows, add more N's for more rows
SELECT ProductionOrder,
RootID,
SalesOrder,
Line,
CONCAT(T.I,' of ',YT.Quantity) AS QtyID
FROM YourTable YT
JOIN Tally T ON T.I <= YT.Quantity
ORDER BY ProductionOrder,
T.I;
This is more of a sketch than a drop-in query, but it should be enough to give you an idea of how to use recursion to replicate rows a set number of times.
;WITH Normalized AS
(
    -- Number the source rows so each one can be tracked through the recursion
    SELECT *, RowNumber = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM YourData
)
,ReplicateAmount AS
(
    SELECT ProductionOrder, RootID, SalesOrder, Line, Quantity, RowNumber
    FROM Normalized
    UNION ALL
    -- Emit another copy of the row, counting Quantity down, until it reaches 1
    SELECT R.ProductionOrder, R.RootID, R.SalesOrder, R.Line, Quantity = (R.Quantity - 1), R.RowNumber
    FROM ReplicateAmount R INNER JOIN Normalized N ON R.RowNumber = N.RowNumber
    WHERE R.Quantity > 1
)
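-- A possible final SELECT (my assumption, not part of the original answer):
-- Quantity counts down to 1 within each RowNumber, so it can double as the "n" in "n of total",
-- with a window MAX recovering the original total per source row.
SELECT ProductionOrder,
       RootID,
       SalesOrder,
       Line,
       CONCAT(Quantity, ' of ', MAX(Quantity) OVER (PARTITION BY RowNumber)) AS QtyID
FROM ReplicateAmount
ORDER BY ProductionOrder, Quantity
OPTION (MAXRECURSION 0); -- the default limit of 100 recursions would fail for larger Quantity values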
I'm trying to get some individual stats from a score keeping system. In essence, teams are scheduled into matches:
Match
---------
Matchid (uniqueidentifier)
SessionId (int)
WeekNum (int)
Those matches are broken into sets, where two particular players (one from each team) play each other:
MatchSet
-----------
SetId (int)
Matchid (uniqueidentifier)
HomePlayer (int)
AwayPlayer (int)
WinningPlayer (int)
LosingPlayer (int)
WinningPoints (int)
LosingPoints (int)
MatchEndTime (datetime)
In order to allow for player absences, players are allowed to play twice per Match. The points from each set will count for their team totals, but for the individual awards, only the first time that a player plays should be counted.
I had been trying to make use of a CTE to number the rows:
;WITH cte AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY MatchId ORDER BY MatchEndTime) AS rn
    FROM
        (SELECT
             SetId, MS.MatchId, WinningPlayer, LosingPlayer,
             HomePlayer, AwayPlayer, WinningPoints, LosingPoints, MatchEndTime
         FROM
             MatchSet MS
         INNER JOIN
             [Match] M ON M.MatchId = MS.MatchId AND M.[Session] = @SessionId
        ) AS s
but I'm struggling, as the player could be either the home player or the away player in a given set (and could also be either the winner or the loser).
Ideally, this result could then be joined back to the players table based on either WinningPlayer or LosingPlayer, which would let me get a list of individual standings.
I think the first step is to write a couple CTEs that get the data into a structure where you can evaluate player points regardless of win/loss. Here's a possible start:
;with PlayersPoints as
(
select m.MatchId
,m.SessionId
,m.WeekNum
,ms.SetId
,ms.WinningPlayer as PlayerId
,ms.WinningPoints as Points
,'W' as Outcome
,ms.MatchEndTime
from MatchSet ms
join Match m on ms.MatchId = m.MatchId
and m.SessionId = @SessionId
union all
select m.MatchId
,m.SessionId
,m.WeekNum
,ms.SetId
,ms.LosingPlayer as PlayerId
,ms.LosingPoints as Points
,'L' as Outcome
,ms.MatchEndTime
from MatchSet ms
join Match m on ms.MatchId = m.MatchId
and m.SessionId = @SessionId
)
, PlayerMatch as
(
select SetId
,WeekNum
,MatchId
,PlayerId
,row_number() over (partition by PlayerId, WeekNum order by MatchEndTime) as PlayerMatchSequence
from PlayersPoints
)
....
The first CTE pulls out the points for each player, and the second CTE identifies which match it is. So for calculating individual points, you'd look for PlayerMatchSequence = 1.
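To finish the thought, here is a minimal sketch of what the final query might look like; the join keys and the idea of summing only PlayerMatchSequence = 1 rows are my assumptions based on the columns above, not part of the original answer:

select pp.PlayerId
      ,sum(pp.Points) as IndividualPoints
      ,sum(case when pp.Outcome = 'W' then 1 else 0 end) as Wins
from PlayerMatch pm
join PlayersPoints pp
  on pp.SetId = pm.SetId
 and pp.PlayerId = pm.PlayerId
where pm.PlayerMatchSequence = 1   -- only the first set a player plays each week counts
group by pp.PlayerId;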
Perhaps you could virtualize a normalized view of your data and key off of it instead of the MatchSet table.
;WITH TeamPlayerMatch AS
(
    SELECT TeamID, PlayerID = WinningPlayer, MS.MatchID, MS.SetId,
           Points = MS.WinningPoints, MS.MatchEndTime, IsWinner = 1
    FROM MatchSet MS
    INNER JOIN TeamPlayer T ON T.PlayerID = HomePlayer
    UNION ALL
    SELECT TeamID, PlayerID = LosingPlayer, MS.MatchID, MS.SetId,
           Points = MS.LosingPoints, MS.MatchEndTime, IsWinner = 0
    FROM MatchSet MS
    INNER JOIN TeamPlayer T ON T.PlayerID = AwayPlayer
)
,cte AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY MatchId ORDER BY MatchEndTime) AS rn
    FROM
        (SELECT
             SetId, MS.MatchId, PlayerID, TeamID, Points, MatchEndTime, IsWinner
         FROM
             TeamPlayerMatch MS
         INNER JOIN
             [Match] M ON M.MatchId = MS.MatchId AND M.[Session] = @SessionId
         WHERE
             IsWinner = 1
        ) AS s
)
SELECT *
FROM cte;  -- e.g. filter on rn = 1 for "first time played"
I have a data table with destinations and LAT/LON data (~100K records)
DESTINATIONS {
id,
lat,
lon,
...
}
Now I need to insert distances into a new table...
DISTANCES {
id_a,
id_b,
distance
}
What's the best way to do that?
I don't need all data (cartesian product), only the 100 closest.
No duplicates (a_id+b_id == b_id+a_id), e.g. [NYC:Chicago] == [Chicago:NYC] (same distance)
Not by itself (a_id != b_id), because it is 0 miles from [NYC:NYC] ;)
This is the calculation (in kilometers/meters):
ROUND(111045
* DEGREES(ACOS(COS(RADIANS(A.lat))
* COS(RADIANS(B.lat))
* COS(RADIANS(A.lon) - RADIANS(B.lon))
+ SIN(RADIANS(A.lat))
* SIN(RADIANS(B.lat)))),0)
AS 'distance'
Okay, the JOIN is no problem, but how can I implement the three "filters"?
Maybe with a WHILE loop and SUBSELECT LIMIT/TOP 100 ORDER BY distance ASC?
Or is it also possible to INSERT by JOIN?
Does somebody have an idea?
Pseudocode:
INSERT INTO [newTable] (ColumnList...)
SELECT TOP 100 a.id, b.id, DistanceFormula(a.id, b.id)
FROM Destination a
CROSS JOIN Destination b
WHERE a.id<b.id
ORDER BY DistanceFormula(a.id, b.id) ASC
EDIT: to get 100 b's for every a:
INSERT INTO [newTable] (ColumnList...)
SELECT a.id, b.id, DistanceFormula(a.id, b.id)
FROM Destination a
INNER JOIN Destination b
ON b.id IN (
SELECT TOP 100 c.id
FROM Destination c
WHERE a.id<c.id
ORDER BY DistanceFormula(a.id, c.id) ASC
)
I've simplified it (stubbed out the distance calculation with a constant)...
INSERT INTO [DISTANCES] (id_a, id_b, distance)
SELECT
A.id,
B.id,
25 /*ROUND(111045 * DEGREES(ACOS(COS(RADIANS(A.geo_lat)) * COS(RADIANS(B.geo_lat)) * COS(RADIANS(A.geo_lon) - RADIANS(B.geo_lon)) + SIN(RADIANS(A.geo_lat)) * SIN(RADIANS(B.geo_lat)))),0)*/
FROM [DESTINATIONS] AS A
INNER JOIN [DESTINATIONS] AS B
ON b.id IN(
SELECT TOP 100
C.id
FROM [DESTINATIONS] AS C
WHERE
A.id < C.id
ORDER BY A.id /*ROUND(111045 * DEGREES(ACOS(COS(RADIANS(A.geo_lat)) * COS(RADIANS(C.geo_lat)) * COS(RADIANS(A.geo_lon) - RADIANS(C.geo_lon)) + SIN(RADIANS(A.geo_lat)) * SIN(RADIANS(C.geo_lat)))),0)*/ ASC
)
You mean like this?
Okay. That works. :)
But it is definitely too slow!
I'll program a routine that returns only the 100 nearest results on request.
And another (sub)routine will insert/update these (application-side) results with a timestamp into the distances table, so that any existing results can be accessed on the next call.
But thank you very very much! :)
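As an aside, if this is SQL Server (the question mixes LIMIT and TOP), the "100 nearest b for every a" step can also be written with CROSS APPLY. This is only a sketch built on the DESTINATIONS columns and formula from the question, with no claim about how it compares to the application-side routine described above:

INSERT INTO DISTANCES (id_a, id_b, distance)
SELECT A.id, N.id, N.distance
FROM DESTINATIONS AS A
CROSS APPLY (
    SELECT TOP 100
           B.id,
           ROUND(111045
                 * DEGREES(ACOS(COS(RADIANS(A.lat))
                 * COS(RADIANS(B.lat))
                 * COS(RADIANS(A.lon) - RADIANS(B.lon))
                 + SIN(RADIANS(A.lat))
                 * SIN(RADIANS(B.lat)))), 0) AS distance
    FROM DESTINATIONS AS B
    WHERE A.id < B.id               -- no self-pairs, no (b,a) duplicates of (a,b)
    ORDER BY distance               -- nearest 100 per A (restricted to ids greater than A.id)
) AS N;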
I have two queries which do the same job here.
Query 1 :
SELECT MP.MemberName FROM MemberProfile MP
LEFT JOIN [Order] O ON MP.MemberID = O.MemberID
WHERE O.TotalAmount >= 100
UNION
SELECT MP.MemberName FROM MemberProfile MP
LEFT JOIN [Order] O ON MP.MemberID = O.MemberID
WHERE O.Quantity >= 10
Query 2 :
;WITH cte AS (
SELECT MP.MemberName, O.Quantity, O.TotalAmount FROM MemberProfile MP
LEFT JOIN [Order] O ON MP.MemberID = O.MemberID
)
SELECT MemberName FROM cte WHERE TotalAmount >= 100
UNION
SELECT MemberName FROM cte WHERE Quantity >= 10
In the real environment the queries will be more complicated; these are just simplified versions for others to read.
Question:
Is it better to use a CTE instead of repeating the JOIN every single time, based on performance and also redundancy?
Is there a better way to do this UNION (or even UNION ALL) query other than these two ways?
I don't think there will be any big difference between the CTE and the JOIN in your case. The SQL Server optimizer will convert both into something completely different, and these two queries will probably produce the same result. You can try adding indexes on TotalAmount and Quantity if their selectivity is high.
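For what it's worth, since both branches scan the same join, one alternative is a single pass with an OR plus DISTINCT to mimic UNION's de-duplication. This is just a sketch against the simplified schema above; whether it beats the UNION depends on the indexes just mentioned:

SELECT DISTINCT MP.MemberName
FROM MemberProfile MP
JOIN [Order] O ON MP.MemberID = O.MemberID   -- the WHERE conditions make the LEFT JOIN effectively inner anyway
WHERE O.TotalAmount >= 100
   OR O.Quantity >= 10;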
I have one table, vwuser. I want to join this table with the table-valued function fnuserrank(userID), so I need to CROSS APPLY the table-valued function:
SELECT *
FROM vwuser AS a
CROSS APPLY fnuserrank(a.userid)
For each userID it generates multiple records. I only want the last record for each empid that does not have a Rank of Term(inated). How can I do this?
Data:
HistoryID empid Rank MonitorDate
1 A1 E1 2012-8-9
2 A1 E2 2012-9-12
3 A1 Term 2012-10-13
4 A2 E3 2011-10-09
5 A2 TERM 2012-11-9
From this, the 2nd and 4th records must be selected.
In SQL Server 2005+ you can use this Common Table Expression (CTE) to determine the latest record by MonitorDate that doesn't have a Rank of 'Term':
WITH EmployeeData AS
(
SELECT *
, ROW_NUMBER() OVER (PARTITION BY empId ORDER BY MonitorDate DESC) AS RowNumber
FROM vwuser AS a
CROSS APPLY fnuserrank(a.userid)
WHERE Rank != 'Term'
)
SELECT *
FROM EmployeeData AS ed
WHERE ed.RowNumber = 1;
Note: The statement before this CTE will need to end in a semi-colon. Because of this, I have seen many people write them like ;WITH EmployeeData AS...
You'll have to play with this; I'm having trouble mocking up your schema on SQL Fiddle.
Select foo.*
from
(
SELECT *
FROM vwuser AS a
CROSS APPLY fnuserrank(a.userid)
where rank != 'TERM'
) foo
left join
(
SELECT *
FROM vwuser AS b
CROSS APPLY fnuserrank(b.userid)
where rank != 'TERM'
) bar
on foo.empId = bar.empId
and foo.MonitorDate < bar.MonitorDate
where bar.empid is null
I always need to test out left outer joins on dates being higher. The way it works is you do a left outer join: every row EXCEPT one per user has row(s) with a higher monitor date, and that one row is the one you want. I usually use an example from my code, but I'm on the wrong laptop. To get it working you can select foo.*, bar.*, look at the results, spot the row you want, and make the condition correct.
You could also do this, which is easier to remember
select foo.*
from
(
SELECT *
FROM vwuser AS a
CROSS APPLY fnuserrank(a.userid)
) foo
join
(
select empid, max(monitordate) maxdate
FROM vwuser AS b
CROSS APPLY fnuserrank(b.userid)
where rank != 'TERM'
) bar
on foo.empid = bar.empid
and foo.monitordate = bar.maxdate
I usually prefer to use set-based logic over aggregate functions, but whatever works. You can also tweak it by caching the results of your TVF join in a table variable.
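A rough sketch of that caching idea, with column names taken from the sample data; the types and the final join are my assumptions, not part of the original answer:

-- Materialize the TVF output once, then run the max(monitordate) logic against the table variable
DECLARE @ranks TABLE (empid varchar(20), [rank] varchar(10), monitordate datetime);

INSERT INTO @ranks (empid, rank, monitordate)
SELECT f.empid, f.rank, f.monitordate
FROM vwuser AS a
CROSS APPLY fnuserrank(a.userid) AS f;

SELECT r.*
FROM @ranks AS r
JOIN (SELECT empid, MAX(monitordate) AS maxdate
      FROM @ranks
      WHERE rank != 'TERM'
      GROUP BY empid) AS m
  ON m.empid = r.empid
 AND m.maxdate = r.monitordate;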
EDIT:
http://www.sqlfiddle.com/#!3/613e4/17 - I mocked up your TVF here. Apparently sqlfiddle didn't like "go".
select foo.*, bar.*
from
(
SELECT f.*
FROM vwuser AS a
join fnuserrank f
on a.empid = f.empid
where rank != 'TERM'
) foo
left join
(
SELECT f1.empid [barempid], f1.monitordate [barmonitordate]
FROM vwuser AS b
join fnuserrank f1
on b.empid = f1.empid
where rank != 'TERM'
) bar
on foo.empId = bar.barempid
and foo.MonitorDate < bar.barmonitordate
where bar.barempid is null