I am trying to write a query that partitions based on the value 90. Below is my table:
create table #temp(StudentID char(2), Status int)
insert #temp values('S1',75 )
insert #temp values('S1',85 )
insert #temp values('S1',90)
insert #temp values('S1',85)
insert #temp values('S1',83)
insert #temp values('S1',90 )
insert #temp values('S1',85)
insert #temp values('S1',90)
insert #temp values('S1',93 )
insert #temp values('S1',93 )
insert #temp values('S1',93 )
Required output:
ID Status Result
S1 75 0
S1 85 0
S1 90 0
S1 85 1
S1 83 1
S1 90 1
S1 85 2
S1 90 2
S1 93 3
S1 93 3
S1 93 3
Does anyone have a solution to partition based on Status value 90? Result should increment (0, 1, 2, 3, ...) each time a 90 is passed.
Assuming that the actual question is "How can I find ranges/islands of incrementing values", the answer could use LAG to compare the current Status value with the previous one based on some order. If the previous value is 90, you have a new island:
create table #temp (ID int identity PRIMARY KEY, StudentID char(2), Status int);
insert into #temp (StudentID,Status)
values
('S1',75),
('S1',85),
('S1',90),
('S1',85),
('S1',83),
('S1',90),
('S1',85),
('S1',90),
('S1',93),
('S1',93),
('S1',93);
select
* ,
case LAG(Status,1,0) OVER (PARTITION BY StudentID ORDER BY ID)
when 90 then 1 else 0 end as NewIsland
from #temp
This returns:
+----+-----------+--------+-----------+
| ID | StudentID | Status | NewIsland |
+----+-----------+--------+-----------+
| 1 | S1 | 75 | 0 |
| 2 | S1 | 85 | 0 |
| 3 | S1 | 90 | 0 |
| 4 | S1 | 85 | 1 |
| 5 | S1 | 83 | 0 |
| 6 | S1 | 90 | 0 |
| 7 | S1 | 85 | 1 |
| 8 | S1 | 90 | 0 |
| 9 | S1 | 93 | 1 |
| 10 | S1 | 93 | 0 |
| 11 | S1 | 93 | 0 |
+----+-----------+--------+-----------+
You can create an Island ID from this by summing all NewIsland values up to and including the current row, using SUM with the ROWS clause of OVER:
with islands as
(
select
* ,
case LAG(Status,1,0) OVER (PARTITION BY StudentID ORDER BY ID)
when 90 then 1 else 0 end as NewIsland
from #temp
)
select * ,
    SUM(NewIsland) OVER (PARTITION BY StudentID ORDER BY ID ROWS UNBOUNDED PRECEDING) AS Result
from islands
This produces:
+----+-----------+--------+-----------+--------+
| ID | StudentID | Status | NewIsland | Result |
+----+-----------+--------+-----------+--------+
| 1 | S1 | 75 | 0 | 0 |
| 2 | S1 | 85 | 0 | 0 |
| 3 | S1 | 90 | 0 | 0 |
| 4 | S1 | 85 | 1 | 1 |
| 5 | S1 | 83 | 0 | 1 |
| 6 | S1 | 90 | 0 | 1 |
| 7 | S1 | 85 | 1 | 2 |
| 8 | S1 | 90 | 0 | 2 |
| 9 | S1 | 93 | 1 | 3 |
| 10 | S1 | 93 | 0 | 3 |
| 11 | S1 | 93 | 0 | 3 |
+----+-----------+--------+-----------+--------+
BTW this is a case of the wider Gaps & Islands problem in SQL.
UPDATE
LAG and OVER are available in all supported SQL Server versions, i.e. SQL Server 2012 and later. OVER is also available in SQL Server 2008, but LAG is not. In those versions, different, slower techniques were used to calculate islands; see The SQL of Gaps and Islands in Sequences.
In most cases ROW_NUMBER() is used to calculate the row ordering, which results in one extra CTE. This can be avoided if the desired ordering is the same as the ID, or any other unique incrementing column. The following query returns the same results as the query that uses LAG:
select
    * ,
    case when exists (select ID
                      from #temp t1
                      where t1.StudentID = t2.StudentID
                        and t1.ID = t2.ID - 1
                        and t1.Status = 90) then 1
         else 0 end
        as NewIsland
from #temp t2
This query returns 1 if there is a row with the same StudentID, a Status of 90, and an ID (or ROW_NUMBER) one less than the current row's, i.e. the same as LAG(Status, 1).
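For reference, a sketch of that ROW_NUMBER() variant, for tables without such a column (SomeOrderCol is a hypothetical column that defines the desired order):
;with numbered as
(
    select * ,
           ROW_NUMBER() OVER (PARTITION BY StudentID ORDER BY SomeOrderCol) as rn
    from #temp
)
select * ,
       case when exists (select 1
                         from numbered t1
                         where t1.StudentID = t2.StudentID
                           and t1.rn = t2.rn - 1
                           and t1.Status = 90) then 1
            else 0 end as NewIsland
from numbered t2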
After that, we just need to SUM the previous values. While SUM OVER was available in 2008, it only supported PARTITION BY, so we need to use another subquery:
;with islands as
(
select
* ,
case when exists (select ID from #temp t1 where t1.StudentID=t2.StudentID and t1.ID=t2.ID-1 and t1.Status=90) then 1
else 0 end
as NewIsland
from #temp t2
)
select * ,
(select ISNULL(SUM(NewIsland),0)
 from islands i1
 where i1.StudentID = i2.StudentID and i1.ID < i2.ID) AS Result
from islands i2
This sums all NewIsland values for rows of the same student with an ID less than the current row's.
Performance
All those subqueries result in a lot of repeated scans. Surprisingly though, the older query is faster than the query with LAG, because the LAG query has to sort intermediate results multiple times and filter by Status: roughly 45% vs 55% relative cost in the execution plans.
Things change dramatically when an index is added:
create table #temp (ID int identity PRIMARY KEY, StudentID char(2), Status int);
create index IX_TMP on #temp (StudentID, ID, Status);
The multiple sorts disappear and the relative costs become 80% vs 20% in favor of the LAG query, which now just scans the index once without sorting intermediate results. The subquery version wasn't able to take advantage of the index.
UPDATE 2
uzi suggested that removing LAG and summing only up to the previous row would be better:
select * ,
    -- the frame is empty for the first row, so wrap in ISNULL to get 0 instead of NULL
    ISNULL(SUM(case when Status = 90 then 1 else 0 end)
           OVER (PARTITION BY StudentID
                 ORDER BY ID
                 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS Result
from #temp;
Semantically, this is the same thing - for each row find all previous ones, calculate 1 for the 90s and 0 for the other rows, and sum them.
The server generates similar execution plans in both cases. The LAG version used two stream aggregate operators, while the version without it used one. For this limited data set the end result was essentially the same, though.
For a larger data set the results may be different, eg if the server has to spool data to tempdb because they didn't fit in memory.
Perhaps this is not a very good solution, but it works.
SELECT StudentID ID
     , Status
     , CASE
           WHEN Status = 90
               THEN SUM(q) OVER(order by row) - 1
           ELSE SUM(q) OVER(order by row)
       END Result
FROM (
    SELECT row_number() OVER(order by StudentID desc) row
           -- note: the question's #temp has no ordering column, so this
           -- row_number is not guaranteed to follow insert order
         , *
         , CASE
               WHEN Status = 90
                   THEN 1
               ELSE 0
           END q
    FROM #temp
) a
You could simply use a subquery:
select *,
       coalesce((select sum(case when Status = 90 then 1 else 0 end)
                 from #temp
                 where StudentID = t.StudentID and
                       ? < t.?), 0) as Result
from #temp t;
Here, ? stands for your actual data-ordering column (e.g. an id).
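For instance, assuming the table had an identity column named ID supplying the order (not present in the question's #temp, so purely illustrative), the query would read:
select *,
       coalesce((select sum(case when Status = 90 then 1 else 0 end)
                 from #temp t1
                 where t1.StudentID = t.StudentID and
                       t1.ID < t.ID), 0) as Result
from #temp t;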
Related
I have a bunch of value pairs (Before, After) by users in a table. In ideal scenarios these values should form an unbroken chain. e.g.
| UserId | Before | After |
|--------|--------|-------|
| 1 | 0 | 10 |
| 1 | 10 | 20 |
| 1 | 20 | 30 |
| 1 | 30 | 40 |
| 1 | 40 | 30 |
| 1 | 30 | 52 |
| 1 | 52 | 0 |
Unfortunately, these records originate in multiple different tables and are imported into my investigation table. The other values in the table do not lend themselves to ordering (e.g. CreatedDate) due to some quirks in the system saving them out of order.
I need to produce a list of users with gaps in their data. e.g.
| UserId | Before | After |
|--------|--------|-------|
| 1 | 0 | 10 |
| 1 | 10 | 20 |
| 1 | 20 | 30 |
// Row Deleted (30->40)
| 1 | 40 | 30 |
| 1 | 30 | 52 |
| 1 | 52 | 0 |
I've looked at the other daisy-chaining questions on SO (and online in general), but they all appear to address a problem space where one value in the pair is always lower than the other in a predictable fashion. In my case, there can be increases or decreases.
Is there a way to quickly calculate the longest chain that can be created? I do have a CreatedAt column that would provide some (very rough) relative ordering (when the dates are more than about 10 seconds apart, we could consider them orderable).
Are you not therefore simply after this to get the first row where the "chain" is broken?
SELECT UserID, Before, After
FROM dbo.YourTable YT
WHERE NOT EXISTS (SELECT 1
FROM dbo.YourTable NE
WHERE NE.After = YT.Before)
AND YT.Before != 0;
If you want the last row where the "chain" is broken, just swap the aliases on the columns in the WHERE clause of the NOT EXISTS.
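For reference, the swapped version would look like this (a sketch; note that when a value recurs within a chain, as 30 does in the sample data, the two variants can behave differently):
SELECT UserID, Before, After
FROM dbo.YourTable YT
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.YourTable NE
                  WHERE NE.Before = YT.After)
AND YT.After != 0;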
The following performs hierarchical recursion on your example data and calculates a "chain" count column called h_level.
;with recur_cte([UserId], [Before], [After], h_level) as (
select [UserId], [Before], [After], 0
from dbo.test_table
where [Before] is null
union all
select tt.[UserId], tt.[Before], tt.[After], rc.h_level+1
from dbo.test_table tt join recur_cte rc on tt.UserId=rc.UserId
and tt.[Before]=rc.[After]
where tt.[Before]<tt.[after])
select * from recur_cte;
Results:
UserId Before After h_level
1 NULL 10 0
1 10 20 1
1 20 30 2
1 30 40 3
1 30 52 3
Is this helpful? Could you further define which rows to exclude?
If you want users that have more than one chain:
select t.UserID
from <T> as t left outer join <T> as t2
on t2.UserID = t.UserID and t2.Before = t.After
where t2.UserID is null
group by t.UserID
having count(*) > 1;
I am looking for some advice or pointers on how to construct this. I have spent the last year self-learning SQL. I am at work and I only have access to the query interface in Report Builder, which for me means no procedures, no CREATE TABLE, and no IDE :(. So those are the limitations!
I am trying to reconstruct account balances. I have no intervening balances; only the current balance and a table full of the transaction history.
My current approach is to sum the transactions by posting week (which I have done) in my CTE named
[SUMTRANSREF]
+--------------+------------+-----------+
| TNCY-SYS-REF | POSTING-WK | SUM-TRANS |
+--------------+------------+-----------+
| 1 | 47 | 37.95 |
| 1 | 46 | 37.95 |
| 1 | 45 | 37.95 |
| 2 | 47 | 50.00 |
| 2 | 46 | 25.00 |
| 2 | 45 | 25.00 |
+--------------+------------+-----------+
I then get the current balances in another CTE called
[CBAL]
+--------------+-------------+-----------+
| TNCY-SYS-REF | CUR-BALANCE | CURR-WEEK |
+--------------+-------------+-----------+
| 1 | 27.52 | 47 |
| 2 | 52.00 | 47 |
+--------------+-------------+-----------+
Now I am assuming I could create intervening CTEs to sum and then splice those all together, but is there a smarter (more automated) way?
Ideally my result should be
+--------------+-------------+----------+----------+
| TNCY-SYS-REF | CUR-BALANCE | BAL-WK46 | BAL-WK45 |
+--------------+-------------+----------+----------+
| 1 | 27.52 | -10.43 | -48.38 |
| 2 | 52.00 | 2.00 | -48.00 |
+--------------+-------------+----------+----------+
I am just uncertain because each column requires the sum of the intervening transactions:
So BAL-WK46 is (CUR-BALANCE) - SUM(transactions from week 47)
BAL-WK45 is (CUR-BALANCE) - SUM(transactions from weeks 46+47)
BAL-WK44 is (CUR-BALANCE) - SUM(transactions from weeks 45+46+47)
and so on.
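For example, for TNCY-SYS-REF 1: BAL-WK46 = 27.52 - 37.95 = -10.43, and BAL-WK45 = 27.52 - 37.95 - 37.95 = -48.38, which matches the desired output above.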
Normally I have an idea where to start but I am flummoxed by this one.
Any help you can give would be appreciated. Thank you
Here is some T-SQL that gets the result you require. Should be easy enough to play with to get what you want.
It makes use of a recursive CTE and a PIVOT:
IF OBJECT_ID('Tempdb..#SUMTRANSREF') IS NOT NULL
DROP TABLE #SUMTRANSREF
IF OBJECT_ID('Tempdb..#CBAL') IS NOT NULL
DROP TABLE #CBAL
IF OBJECT_ID('Tempdb..#TEMP') IS NOT NULL
DROP TABLE #TEMP
CREATE TABLE #SUMTRANSREF
(
[TNCY-SYS-REF] int,
[POSTING-WK] int,
[SUM-TRANS] float
)
CREATE TABLE #CBAL
(
[TNCY-SYS-REF] int ,
[CUR-BALANCE] float , [CURR-WEEK] int
)
INSERT INTO #SUMTRANSREF
VALUES (1 ,47 , 37.95),
(1 ,46 , 37.95),
(1 ,45 , 37.95),
(2 ,47 , 50.00),
(2 ,46 , 25.00),
(2 ,45 , 25.00 )
INSERT INTO #CBAL
VALUES (1,27.52,47),(2,52.00,47);
WITH CBAL AS
    (SELECT * FROM #CBAL),
SUMTRANSREF AS
    (SELECT * FROM #SUMTRANSREF),
RecursiveTotals ([TNCY-SYS-REF], [CURR-WEEK], [CUR-BALANCE], RunningBalance)
AS
(
    SELECT C.[TNCY-SYS-REF], C.[CURR-WEEK], C.[CUR-BALANCE],
           C.[CUR-BALANCE] + S.RunningTotal AS RunningBalance
    FROM CBAL C
    JOIN (SELECT *, -SUM([SUM-TRANS]) OVER (PARTITION BY [TNCY-SYS-REF] ORDER BY [POSTING-WK] DESC) AS RunningTotal
          FROM SUMTRANSREF) S
      ON C.[CURR-WEEK] = S.[POSTING-WK] AND C.[TNCY-SYS-REF] = S.[TNCY-SYS-REF]
    UNION ALL
    SELECT RT.[TNCY-SYS-REF], RT.[CURR-WEEK] - 1 AS [CURR-WEEK], RT.[CUR-BALANCE],
           RT.[CUR-BALANCE] + S.RunningTotal AS RunningBalance
    FROM RecursiveTotals RT
    JOIN (SELECT *, -SUM([SUM-TRANS]) OVER (PARTITION BY [TNCY-SYS-REF] ORDER BY [POSTING-WK] DESC) AS RunningTotal
          FROM #SUMTRANSREF) S
      ON RT.[TNCY-SYS-REF] = S.[TNCY-SYS-REF] AND RT.[CURR-WEEK] - 1 = S.[POSTING-WK]
)
SELECT [TNCY-SYS-REF], [CUR-BALANCE],
       [46] AS [BAL-WK46], [45] AS [BAL-WK45], [44] AS [BAL-WK44]
FROM (
    SELECT [TNCY-SYS-REF], [CUR-BALANCE], RunningBalance,
           [CURR-WEEK] - 1 AS BalanceWeek
    FROM RecursiveTotals
) AS SOURCETABLE
PIVOT
(
    AVG(RunningBalance)
    FOR BalanceWeek IN ([46], [45], [44])
) AS PVT
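For comparison, a non-recursive sketch of the same running-balance idea (my assumption: each account has a contiguous run of weekly rows ending at its CURR-WEEK). It returns one row per week rather than pivoted columns:
SELECT c.[TNCY-SYS-REF],
       c.[CUR-BALANCE],
       s.[POSTING-WK] - 1 AS BalanceWeek,
       -- balance at the end of the prior week = current balance minus
       -- everything posted from that week onward
       c.[CUR-BALANCE] - SUM(s.[SUM-TRANS]) OVER (PARTITION BY s.[TNCY-SYS-REF]
                                                  ORDER BY s.[POSTING-WK] DESC) AS Balance
FROM #CBAL c
JOIN #SUMTRANSREF s
  ON s.[TNCY-SYS-REF] = c.[TNCY-SYS-REF]
ORDER BY c.[TNCY-SYS-REF], BalanceWeek DESC;
For TNCY-SYS-REF 1 this yields -10.43 for week 46, -48.38 for week 45, and -86.33 for week 44.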
I'm trying to add rank by sales and also change the date column to a 'month end' field that would have one month end date per month - if that makes sense?
Would you alter table and add column or could you just rename the date field and use set and case to make all March dates = 3-31-18 and all April 4-30-18?
I got this far:
UPDATE table1
SET DATE=EOMONTH(DATE) AS MONTH_END;
ALTER TABLE table1
ADD COLUMN RANK INT AFTER sales;
UPDATE table1
SET RANK=
RANK() OVER(PARTITION BY cust ORDER BY sales DESC);
LIMIT 2
Can I do two SETs in a row like that without adding another UPDATE? I'm looking for the top 2 within each month - would this work? I feel like this is the right and most efficient query, but it's not working - any help appreciated!
orig table
+------+----------+-------+--+
| CUST | DATE | SALES | |
+------+----------+-------+--+
| 36 | 3-5-2018 | 50 | |
| 37 | 3-15-18 | 100 | |
| 38 | 3-25-18 | 65 | |
| 37 | 4-5-18 | 95 | |
| 39 | 4-21-18 | 500 | |
| 40 | 4-25-18 | 199 | |
+------+----------+-------+--+
desired output
+------+-----------+-------+------+
| CUST | Month End | SALES | Rank |
+------+-----------+-------+------+
| 37 | 3-31-18 | 100 | 1 |
| 38 | 3-31-18 | 65 | 2 |
| 39 | 4-30-18 | 500 | 1 |
| 40 | 4-30-18 | 199 | 2 |
+------+-----------+-------+------+
Based on your expected output I think this may work as well.
create table Salesdate (Cust int, Dates date, Sales int)
insert into Salesdate values
(36 , '2018-03-05' , 50 )
,(37 , '2018-03-15' , 100 )
,(38 , '2018-03-25' , 65 )
,(37 , '2018-04-05' , 95 )
,(40 , '2018-04-25' , 199 )
,(39 , '2018-04-21' , 500 )
Update the Dates column to the last day of the month (EOMONTH gives the last day of the month); you can add a separate column or update the existing one in place, as you prefer.
Update Salesdate
set Dates = eomonth(Dates)
Add a column called rank to the table.
Alter table Salesdate
add rank int
Update the rank column that was just added.
update Salesdate
set Salesdate.[rank] = tbl.Ranked from
(select Cust, Sales, Dates , rank() over (Partition by Dates order by Sales Desc)
Ranked from Salesdate ) tbl
where tbl.Cust = salesdate.Cust
and tbl.Sales = salesdate.Sales
and tbl.dates = salesdate.Dates
-- Not sure if this step is necessary: if you want your final table to have only ranks 1 and 2, you can delete the other rows, or just filter them out in the SELECT instead. Note that RANK() may skip numbers if the sales amounts within a month are not unique.
;With cte as (
select * from Salesdate)
delete from cte
where [RANK] > 2
select * from Salesdate
order by dates, [RANK]
Output
Cust Dates Sales rank
37 2018-03-31 100 1
38 2018-03-31 65 2
39 2018-04-30 500 1
40 2018-04-30 199 2
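If you don't actually need to persist the month end and rank, a single SELECT avoids the UPDATE and ALTER steps entirely. A sketch of that (my adaptation of the above; assumes SQL Server 2012+ for EOMONTH):
SELECT Cust,
       EOMONTH(Dates) AS [Month End],
       Sales,
       rnk AS [Rank]
FROM (SELECT *,
             RANK() OVER (PARTITION BY EOMONTH(Dates) ORDER BY Sales DESC) AS rnk
      FROM Salesdate) ranked
WHERE rnk <= 2
ORDER BY [Month End], rnk;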
I'm trying to get a "lineage" or similar, and also information about the first and last links (at least; all would be good), out of a table that has self-referential links between rows that have been "replaced" and rows that have replaced them. The table has a structure along these lines:
CREATE TABLE Thing (
Id INT PRIMARY KEY,
TStamp DATETIME,
Replaces INT NULL,
ReplacedBy INT NULL
);
I'm stuck with this structure. :-) It's sort of doubly-linked (yes, it's a bit silly): Each row has a unique Id, and then a row that has been "replaced" by another will have a non-NULL ReplacedBy giving the Id of the replacement row, and the replacement row will also have a link back to what it replaces in Replaces. So we can use either Replaces or ReplacedBy (or both) if we like.
Here's some sample data:
INSERT INTO Thing
(Id, TStamp, Replaces, ReplacedBy)
VALUES
(1, '2017-01-01', NULL, 11),
(2, '2017-01-02', NULL, 12),
(3, '2017-01-03', NULL, NULL),
(4, '2017-01-04', NULL, NULL),
(11, '2017-01-11', 1, NULL),
(12, '2017-01-12', 2, 22),
(22, '2017-01-22', 12, NULL);
So 1 was replaced by 11, 2 was replaced by 12, and 12 was replaced by 22.
I'd like to get the following information for each chain of links from this table in a reasonable way:
Details of the row that started the chain
Details of the final row in the chain
Details of the links in-between or at least how many links (total) there are in the chain
...filtered by a date range applied to the last row in the chain.
In an ideal universe, I'd get back something like this:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 2017−01−01 |
| 1 | 11 | 11 | 2 | 2017−01−11 |
| 2 | 22 | 2 | 3 | 2017−01−02 |
| 2 | 22 | 12 | 3 | 2017−01−12 |
| 2 | 22 | 22 | 3 | 2017−01−22 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−−−−−−+
So far I have this query, which I could post-process to get the above:
WITH Data AS (
SELECT Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
UNION ALL
SELECT Thing.Id, Thing.TStamp, Thing.Replaces, Thing.ReplacedBy, Depth + 1
FROM Data
JOIN Thing
ON Thing.Replaces = Data.Id
)
SELECT *
FROM Data
WHERE ReplacedBy IS NOT NULL OR Depth > 0
ORDER BY
Id, Depth;
That gives me:
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
| Id | TStamp | Replaces | ReplacedBy | Depth |
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
| 1 | 2017−01−01 | NULL | 11 | 0 |
| 2 | 2017−01−02 | NULL | 12 | 0 |
| 11 | 2017−01−11 | 1 | NULL | 1 |
| 12 | 2017−01−12 | 2 | 22 | 0 |
| 12 | 2017−01−12 | 2 | 22 | 1 |
| 22 | 2017−01−22 | 12 | NULL | 1 |
| 22 | 2017−01−22 | 12 | NULL | 2 |
+−−−−+−−−−−−−−−−−−+−−−−−−−−−−+−−−−−−−−−−−−+−−−−−−−+
And I could use something like this to figure out (for instance) the final row of each chain:
WITH Data AS (
SELECT Id, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
UNION ALL
SELECT Thing.Id, Thing.Replaces, Thing.ReplacedBy, Depth + 1
FROM Data
JOIN Thing
ON Thing.Replaces = Data.Id
),
MaxData AS (
SELECT Data.Id, Data.Depth
FROM Data
JOIN (
SELECT Id, MAX(Depth) AS MaxDepth
FROM Data
GROUP BY Id
) j ON data.Id = j.Id AND Data.Depth = j.MaxDepth
WHERE Depth > 0
)
SELECT *
FROM MaxData
ORDER BY
Id;
...which gives me:
+−−−−+−−−−−−−+
| Id | Depth |
+−−−−+−−−−−−−+
| 11 | 1 |
| 12 | 1 |
| 22 | 2 |
+−−−−+−−−−−−−+
...but I've lost the starting point and the points along the way.
I have the strong feeling I'm missing something really straight-forward — but clever — that would let me get this largely with the query rather than post-processing, some kind of join with a "min" and "max" query (but not like my one above). What would it be?
The table doesn't have any indexes on Replaces or ReplacedBy, but we could add any needed. The table is only lightly used (roughly 300k rows and probably only a couple of hundred updates/inserts a day).
I'm limited to SQL Server 2008 features.
Inspired by Gordon Linoff's answer and HABO's comment which highlighted something Gordon was doing that was critical, I:
Removed the SQL Server 2012+ FIRST_VALUE function, replacing it with a CROSS JOIN on an "overview" query of the data
Included the Links count in the overview query
Removed the reliance on t in Gordon's WHERE NOT EXISTS (SELECT 1 FROM Thing t2 WHERE t2.ReplacedBy = t.id), which (at least on SQL Server 2008) wasn't bound to anything
Filtered out rows that weren't replaced
Below, I also add the date filtering mentioned in the question
...filtered by a date range applied to the last row in the chain.
...which Gordon didn't cover at all, and changes our approach, but only in terms of the arrow of time.
So, first, without the date criteria, sticking fairly close to Gordon's answer:
WITH Data AS (
SELECT Id AS FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
UNION ALL
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d
JOIN Thing t ON t.Replaces = d.Id
),
Overview AS (
SELECT FirstId, MAX(Id) AS LastId, COUNT(*) AS Links
FROM Data
GROUP BY
FirstId
)
SELECT d.FirstId, o.LastId, d.Id, o.Links, d.Depth, d.TStamp
FROM Data d
CROSS APPLY (
SELECT LastId, Links
FROM Overview
WHERE FirstId = d.FirstId
) o
ORDER BY
d.FirstId, d.Depth
;
The critical parts of that are grabbing the seed Id as FirstId here:
SELECT Id AS FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
and then propagating it through the results of the recursive join:
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d
JOIN Thing t ON t.Replaces = d.Id
Just adding that to my original query gives us most of what I wanted. Then we add a second query to get the LastId for each FirstId (Gordon did it as a FIRST_VALUE over a partition, but I can't do that in SQL Server 2008) and using an overview query also lets me grab the number of links. We cross-apply that on the basis of the FirstId value to get the overall results I wanted.
The query above returns the following for the sample data:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 0 | 2017-01-01 |
| 1 | 11 | 11 | 2 | 1 | 2017-01-11 |
| 2 | 22 | 2 | 3 | 0 | 2017-01-02 |
| 2 | 22 | 12 | 3 | 1 | 2017-01-12 |
| 2 | 22 | 22 | 3 | 2 | 2017-01-22 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
...i.e., exactly what I wanted, plus Depth if I want (so I know what order the intermediary links were in).
If we wanted to include rows that were never replaced, we'd just change
WHERE Replaces IS NULL AND ReplacedBy IS NOT NULL
to
WHERE Replaces IS NULL
Giving us:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 1 | 11 | 1 | 2 | 0 | 2017-01-01 |
| 1 | 11 | 11 | 2 | 1 | 2017-01-11 |
| 2 | 22 | 2 | 3 | 0 | 2017-01-02 |
| 2 | 22 | 12 | 3 | 1 | 2017-01-12 |
| 2 | 22 | 22 | 3 | 2 | 2017-01-22 |
| 3 | 3 | 3 | 1 | 0 | 2017-01-03 |
| 4 | 4 | 4 | 1 | 0 | 2017-01-04 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
But we've ignored the date criteria required by the question:
...filtered by a date range applied to the last row in the chain.
To do that without building a massive temporary result set, we have to work backward: Instead of selecting the starting point (the first entry in a chain, Replaces IS NULL), we need to select the ending point (the last entry in a chain, ReplacedBy IS NULL), and then invert our logic working back through the chain. It's largely a matter of:
Swapping FirstId with LastId
Swapping Replaces with ReplacedBy (convenient the table had both!)
Using MIN to get the first ID in the chain rather than MAX to get the last
Using d.Depth - 1 rather than d.Depth + 1
Then fixing-up Depth based on Links once we know it in our final select, to get those nice values where 0 = first link rather than some varying negative number: o.Links + d.Depth - 1 AS Depth
All of which gives us:
WITH Data AS (
SELECT Id AS LastId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing
WHERE ReplacedBy IS NULL AND Replaces IS NOT NULL
-- Filtering by date of last entry would go here
UNION ALL
SELECT d.LastId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth - 1
FROM Data d
JOIN Thing t ON t.ReplacedBy = d.Id
),
Overview AS (
SELECT LastId, MIN(Id) AS FirstId, COUNT(*) AS Links
FROM Data
GROUP BY
LastId
)
SELECT o.FirstId, d.LastId, d.Id, o.Links, o.Links + d.Depth - 1 AS Depth, d.TStamp
FROM Data d
CROSS APPLY (
SELECT FirstId, Links
FROM Overview
WHERE LastId = d.LastId
) o
ORDER BY
o.FirstId, d.Depth
;
So for instance, if we used
AND TStamp BETWEEN '2017-01-12' AND '2017-02-01'
where I have
-- Filtering by date of last entry would go here
above, with our sample data we'd get this result:
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| FirstId | LastId | Id | Links | Depth | TStamp |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
| 2 | 22 | 2 | 3 | 0 | 2017−01−02 |
| 2 | 22 | 12 | 3 | 1 | 2017−01−12 |
| 2 | 22 | 22 | 3 | 2 | 2017−01−22 |
+−−−−−−−−−+−−−−−−−−+−−−−+−−−−−−−+−−−−−−−+−−−−−−−−−−−−+
...because the last link in the Id = 1 chain is outside the date range, so we don't include that chain.
This is a little tricky. Arrange the CTE to start at the beginning of each list. That makes the subsequent processing easier:
WITH Data AS (
SELECT Id as FirstId, Id, TStamp, Replaces, ReplacedBy, 0 AS Depth
FROM Thing t
WHERE NOT EXISTS (SELECT 1 FROM Thing t2 WHERE t2.ReplacedBy = t.id)
UNION ALL
SELECT d.FirstId, t.Id, t.TStamp, t.Replaces, t.ReplacedBy, d.Depth + 1
FROM Data d JOIN
Thing t
ON t.Replaces = d.Id
)
SELECT d.*,
FIRST_VALUE(id) OVER (PARTITION BY FirstId ORDER BY Depth DESC) as LastId
FROM Data d;
Then, you can use FIRST_VALUE() with a reverse sort to get the last value in the chain.
This returns chains that have no links. You can add a filter to remove these.
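One way to add that filter, keeping the same Data CTE and reusing the WHERE clause from the question's own first query (window functions are evaluated after the WHERE, and every row of a genuine chain passes the filter, so LastId stays correct):
SELECT d.*,
       FIRST_VALUE(id) OVER (PARTITION BY FirstId ORDER BY Depth DESC) as LastId
FROM Data d
WHERE d.ReplacedBy IS NOT NULL OR d.Depth > 0;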
I can use a traditional subquery approach to count the occurrences in the last ten minutes. For example, this:
drop table if exists [dbo].[readings]
go
create table [dbo].[readings](
[server] [int] NOT NULL,
[sampled] [datetime] NOT NULL
)
go
insert into readings
values
(1,'20170101 08:00'),
(1,'20170101 08:02'),
(1,'20170101 08:05'),
(1,'20170101 08:30'),
(1,'20170101 08:31'),
(1,'20170101 08:37'),
(1,'20170101 08:40'),
(1,'20170101 08:41'),
(1,'20170101 09:07'),
(1,'20170101 09:08'),
(1,'20170101 09:09'),
(1,'20170101 09:11')
go
-- Count in the last 10 minutes - example periods 08:31 to 08:40, 09:12 to 09:21
select server, sampled,
       (select count(*)
        from readings r2
        where r2.server = r1.server
          and r2.sampled <= r1.sampled
          and r2.sampled > dateadd(minute, -10, r1.sampled)) as countinlast10minutes
from readings r1
order by server, sampled
from readings r1
order by server,sampled
go
How can I use a window function to obtain the same result ? I've tried this:
select server,sampled,
count(case when sampled <= r1.sampled and sampled > dateadd(minute,-10,r1.sampled) then 1 else null end) over (partition by server order by sampled rows between unbounded preceding and current row) as countinlast10minutes
-- count(case when currentrow.sampled <= r1.sampled and currentrow.sampled > dateadd(minute,-10,r1.sampled) then 1 else null end) over (partition by server order by sampled rows between unbounded preceding and current row) as countinlast10minutes
from readings r1
order by server,sampled
But the result is just the running count. Is there any system variable that refers to the current row? Something like currentrow.sampled?
This isn't a very pleasing answer, but one possibility is to first create a helper table with all the minutes:
CREATE TABLE #DateTimes(datetime datetime primary key);
WITH E1(N) AS
(
SELECT 1 FROM (VALUES(1),(1),(1),(1),(1),
(1),(1),(1),(1),(1)) V(N)
) -- 1*10^1 or 10 rows
, E2(N) AS (SELECT 1 FROM E1 a, E1 b) -- 1*10^2 or 100 rows
, E4(N) AS (SELECT 1 FROM E2 a, E2 b) -- 1*10^4 or 10,000 rows
, E8(N) AS (SELECT 1 FROM E4 a, E4 b) -- 1*10^8 or 100,000,000 rows
,R(StartRange, EndRange)
AS (SELECT MIN(sampled),
MAX(sampled)
FROM readings)
,N(N)
AS (SELECT ROW_NUMBER()
OVER (
ORDER BY (SELECT NULL)) AS N
FROM E8)
INSERT INTO #DateTimes
SELECT TOP (SELECT 1 + DATEDIFF(MINUTE, StartRange, EndRange) FROM R) DATEADD(MINUTE, N.N - 1, StartRange)
FROM N,
R;
And then with that in place you could use ROWS BETWEEN 9 PRECEDING AND CURRENT ROW:
WITH T1 AS
( SELECT Server,
MIN(sampled) AS StartRange,
MAX(sampled) AS EndRange
FROM readings
GROUP BY Server )
SELECT Server,
sampled,
Cnt
FROM T1
CROSS APPLY
( SELECT r.sampled,
COUNT(r.sampled) OVER (ORDER BY N.datetime ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) AS Cnt
FROM #DateTimes N
LEFT JOIN readings r
ON r.sampled = N.datetime
AND r.server = T1.server
WHERE N.datetime BETWEEN StartRange AND EndRange ) CA
WHERE CA.sampled IS NOT NULL
ORDER BY sampled
The above assumes that there is at most one sample per minute and that all the times are exact minutes. If this isn't true it would need another table expression pre-aggregating by datetimes rounded to the minute.
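A sketch of that pre-aggregation (my adaptation, untested; it reuses the #DateTimes helper table from above and SUMs the per-minute counts instead of counting rows):
WITH PerMinute AS
(
    -- collapse readings to one row per server per minute
    SELECT server,
           DATEADD(MINUTE, DATEDIFF(MINUTE, 0, sampled), 0) AS sampled_minute,
           COUNT(*) AS cnt
    FROM readings
    GROUP BY server, DATEADD(MINUTE, DATEDIFF(MINUTE, 0, sampled), 0)
),
T1 AS
(
    SELECT server,
           MIN(sampled_minute) AS StartRange,
           MAX(sampled_minute) AS EndRange
    FROM PerMinute
    GROUP BY server
)
SELECT T1.server, CA.sampled_minute, CA.Cnt
FROM T1
CROSS APPLY
(   SELECT p.sampled_minute,
           SUM(p.cnt) OVER (ORDER BY N.datetime ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) AS Cnt
    FROM #DateTimes N
    LEFT JOIN PerMinute p
           ON p.sampled_minute = N.datetime
          AND p.server = T1.server
    WHERE N.datetime BETWEEN T1.StartRange AND T1.EndRange ) CA
WHERE CA.sampled_minute IS NOT NULL
ORDER BY T1.server, CA.sampled_minute;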
As far as I know, there is not a simple exact replacement for your subquery using window functions.
Window functions operate on a set of rows and allow you to work with them based on partitions and order.
What you are trying to do isn't the type of partitioning that we can work with in window functions.
Generating the partitions we would need in order to use window functions in this instance would just result in overly complicated code.
I would suggest cross apply() as an alternative to your subquery.
I am not sure if you meant to restrict your results to within 9 minutes (the boundary reading from exactly 10 minutes ago is excluded), but with sampled > dateadd(...) that is what is happening in your original subquery.
Here is what a window function could look like based on partitioning your samples into 10 minute windows, along with a cross apply() version.
select
r.server
, r.sampled
, CrossApply = x.CountRecent
, OriginalSubquery = (
select count(*)
from readings s
where s.server=r.server
and s.sampled <= r.sampled
/* doesn't include 10 minutes ago */
and s.sampled > dateadd(minute,-10,r.sampled)
)
, Slices = count(*) over(
/* partition by server, 10 minute slices, not the same thing*/
partition by server, dateadd(minute,datediff(minute,0,sampled)/10*10,0)
order by sampled
)
from readings r
cross apply (
select CountRecent=count(*)
from readings i
where i.server=r.server
/* changed to >= */
and i.sampled >= dateadd(minute,-10,r.sampled)
and i.sampled <= r.sampled
) as x
order by server,sampled
results: http://rextester.com/BMMF46402
+--------+---------------------+------------+------------------+--------+
| server | sampled | CrossApply | OriginalSubquery | Slices |
+--------+---------------------+------------+------------------+--------+
| 1 | 01.01.2017 08:00:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 08:02:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 08:05:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 08:30:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 08:31:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 08:37:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 08:40:00 | 4 | 3 | 1 |
| 1 | 01.01.2017 08:41:00 | 4 | 3 | 2 |
| 1 | 01.01.2017 09:07:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 09:08:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 09:09:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 09:11:00 | 4 | 4 | 1 |
+--------+---------------------+------------+------------------+--------+
Thanks, Martin and SqlZim, for your answers. I'm going to raise a Connect enhancement request for something like %%currentrow that can be used in window aggregates. I'm thinking this would lead to much simpler and more natural SQL:
select count(case when sampled <= %%currentrow.sampled and sampled > dateadd(minute,-10,%%currentrow.sampled) then 1 else null end) over (...whatever the window is...)
We can already use expressions like this:
select count(case when sampled <= getdate() and sampled > dateadd(minute,-10,getdate()) then 1 else null end) over (...whatever the window is...)
so I'm thinking it would be great if we could reference a column that's in the current row.
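(For what it's worth, this is expressible in standard SQL as a value-based frame, e.g. RANGE BETWEEN INTERVAL '10' MINUTE PRECEDING AND CURRENT ROW; SQL Server supports RANGE only with UNBOUNDED and CURRENT ROW bounds, so it can't express such a window directly today.)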