Select all rows for the first N distinct child table rows - sql-server

It seems like this should be a common problem with a simple solution, but I haven't found it.
I would like to compute child_order, the order of appearance of distinct child table rows, as shown in the following data:
child_order  PK1  PK2      ACCESS    ACCESS_ID
1            99   Al       NULL      NULL
2            55   Charles  Accounts  1
2            55   Charles  Desktop   2
2            55   Charles  Printer   3
2            55   Charles  Servers   4
2            55   Charles  VMs       5
3            66   Charles  Desktop   2
3            66   Charles  VMs       5
4            22   Chris    Desktop   2
4            22   Chris    Printer   3
4            22   Chris    Servers   4
5            89   Evan     Desktop   2
Retrieved by a query like:
SELECT sub1.*
FROM (
    SELECT ??? AS child_order, sub2.*
    FROM (
        SELECT ct.PK1, ct.PK2, pt1.ACCESS, pt1.ACCESS_ID
        FROM child_table ct
        LEFT JOIN some_linktable lt ON lt.child_id = ct.id
        LEFT JOIN parent_table1 pt1 ON lt.parent_id = pt1.id
        WHERE ct.PK2 IN ('Charles', 'Evan', 'Al', 'Chris')
        ORDER BY ct.PK2, pt1.ACCESS -- Order must be preserved
    ) sub2
) sub1
WHERE child_order < 10 AND (other_conditions)
I can use subqueries, aggregates, analytics, etc. but not really CTEs/"WITH" statements or temporary tables because of the complexity of generating SQL for them dynamically.
Specifically, I am generating pagination SQL (for several DBMSs) for search results from a query joining several tables.
I am trying to figure out how to simply show the top N rows, not counting repeats due to a join (e.g. Chris counts as only one row. Access shows "Desktop, Printer, Servers").
I've tried DENSE_RANK() OVER (ORDER BY PK1, PK2), but of course I get ranking in PK1 PK2 order, which is useless for the WHERE clause. Al, for example, would get a value higher than 1.
I've tried DENSE_RANK() OVER (ORDER BY PK2, ACCESS), but it enumerates only the search terms, not the child table rows.
I've tried DENSE_RANK() OVER (PARTITION BY PK2, ACCESS ORDER BY (SELECT NULL)) (to get DENSE_RANK to use the row order it is given, which is how I want to rank values) but only "1" is returned.
I'll omit my other "try random stuff"-phase attempts.
I would like to avoid a SELECT DISTINCT PK1, PK2 WHERE (search) ORDER BY (sortorder) subquery, because there may be zero or very many primary key fields, which would make dynamic SQL generation tricky; additionally, I suspect the performance would suffer with all the WHERE sub3.field1 = sub2.field1 AND sub3.field2 = sub2.field2 ... checks.
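One pattern that reproduces the sample output without a DISTINCT subquery is DENSE_RANK over the display sort key plus the key columns. This is only a sketch, assuming the effective display order is PK2 then PK1 as in the sample data; a dynamic sort order would need its columns mirrored inside the OVER clause:

SELECT sub1.*
FROM (
    -- Duplicate rows produced by the join share a (PK1, PK2) pair, so they
    -- share a rank; each new distinct child row advances the rank by one.
    SELECT DENSE_RANK() OVER (ORDER BY sub2.PK2, sub2.PK1) AS child_order, sub2.*
    FROM (
        SELECT ct.PK1, ct.PK2, pt1.ACCESS, pt1.ACCESS_ID
        FROM child_table ct
        LEFT JOIN some_linktable lt ON lt.child_id = ct.id
        LEFT JOIN parent_table1 pt1 ON lt.parent_id = pt1.id
        WHERE ct.PK2 IN ('Charles', 'Evan', 'Al', 'Chris')
    ) sub2
) sub1
WHERE sub1.child_order < 10
ORDER BY sub1.child_order, sub1.ACCESS;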

Despite your misgivings about SELECT DISTINCT, this might be the best choice for a subquery (written here with TOP, since SQL Server has no LIMIT):
SELECT row_number() OVER (ORDER BY PK2) AS child_order, PK1, PK2
FROM (
    SELECT DISTINCT TOP (9) PK1, PK2
    FROM child_table
    WHERE PK2 IN ('Charles', 'Evan', 'Al', 'Chris')
    ORDER BY PK2
) sub2
The child_order value depends only on the table child_table, and you want only 9 rows of it, so compute child_order in a subquery on that table alone. After you have that, you can join to the other tables. If you have an index on child_table(PK1, PK2), this should be a very fast index-only scan. It takes some of the filtering and the limiting inside, so the enveloping query is much simpler:
SELECT sub1.child_order, ct.PK1, ct.PK2, pt1.ACCESS, pt1.ACCESS_ID
FROM child_table ct
JOIN (
    SELECT row_number() OVER (ORDER BY PK2) AS child_order, PK1, PK2
    FROM (
        SELECT DISTINCT TOP (9) PK1, PK2
        FROM child_table
        WHERE PK2 IN ('Charles', 'Evan', 'Al', 'Chris')
        ORDER BY PK2
    ) sub2
) sub1 ON sub1.PK1 = ct.PK1 AND sub1.PK2 = ct.PK2
LEFT JOIN some_linktable lt ON lt.child_id = ct.id
LEFT JOIN parent_table1 pt1 ON lt.parent_id = pt1.id
WHERE <other conditions>
ORDER BY sub1.child_order, pt1.ACCESS; -- Faster to order by int
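For the cross-DBMS pagination mentioned in the question, the TOP (9) line is the only vendor-specific piece. On SQL Server 2012+ (and in standard SQL) the inner subquery could instead use an OFFSET ... FETCH clause, sketched here assuming the newer version is available:
SELECT DISTINCT PK1, PK2
FROM child_table
WHERE PK2 IN ('Charles', 'Evan', 'Al', 'Chris')
ORDER BY PK2
OFFSET 0 ROWS FETCH NEXT 9 ROWS ONLY;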

Related

How does order by work when all column values are identical?

I use SQL Server 2016. Below are the rows in the table test_account. You can see the values of updDtm and fileCreatedTime are identical. id is the primary key.
id  accno      updDtm                       fileCreatedTime
--  ---------  ---------------------------  ---------------------------
1   123456789  2022-07-27 09:41:10.0000000  2022-07-27 11:33:33.8300000
2   123456789  2022-07-27 09:41:10.0000000  2022-07-27 11:33:33.8300000
3   123456789  2022-07-27 09:41:10.0000000  2022-07-27 11:33:33.8300000
I want to query the latest account id whose accno is 123456789, ordered by updDtm, fileCreatedTime.
When I run the following SQL, the output is id = 1:
SELECT t.id
FROM
(SELECT
ROW_NUMBER() OVER(PARTITION BY a.accno ORDER BY a.updDtm desc, a.fileCreatedTime DESC) AS seq,
a.id, a.accno, a.updDtm, a.fileCreatedTime
FROM
test_account a) AS t
WHERE t.seq = 1
My question is: is the query result repeatable and reliable (always outputting id = 1 whether run once or many times) when the values of the columns updDtm and fileCreatedTime are identical, or does it output a random id?
I read some articles and learned that for MySQL and Oracle the query result is not reliable or reproducible. How about SQL Server?
The context of this documentation reference is ORDER BY usage with OFFSET and FETCH, but the same considerations apply to all ORDER BY usage, including windowing functions like ROW_NUMBER(). In summary,
To achieve stable results between query requests, the following conditions must be met:
The underlying data that is used by the query must not change.
The ORDER BY clause contains a column or combination of columns that are guaranteed to be unique.
I'm trying to find a case to test whether the query would output a result other than id = 1, but with no luck.
The ordering of rows when duplicate ORDER BY values exist is undefined (a.k.a. non-deterministic and arbitrary) because it depends on the execution plan (which may vary due to available indexes, stats, and the optimizer), parallelism, database engine internals, and even physical data storage. The example below yields different results due to a parallel plan on my test instance.
DROP TABLE IF EXISTS dbo.test_account;
CREATE TABLE dbo.test_account(
id int NOT NULL
CONSTRAINT pk_test_account PRIMARY KEY CLUSTERED
, accno int NOT NULL
, updDtm datetime2 NOT NULL
, fileCreatedTime datetime2 NOT NULL
);
--insert 100K rows
WITH
t10 AS (SELECT n FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) t(n))
,t1k AS (SELECT 0 AS n FROM t10 AS a CROSS JOIN t10 AS b CROSS JOIN t10 AS c)
,t1g AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS num FROM t1k AS a CROSS JOIN t1k AS b CROSS JOIN t1k AS c)
INSERT INTO dbo.test_account (id, accno, updDtm, fileCreatedTime)
SELECT num, 123456789, '2022-07-27 09:41:10.0000000', '2022-07-27 11:33:33.8300000'
FROM t1g
WHERE num <= 100000;
GO
--run query 10 times
SELECT t.id
FROM
(SELECT
ROW_NUMBER() OVER(PARTITION BY a.accno ORDER BY a.updDtm desc, a.fileCreatedTime DESC) AS seq,
a.id, a.accno, a.updDtm, a.fileCreatedTime
FROM
test_account a) AS t
WHERE t.seq = 1;
GO 10
Example results:
1
27001
25945
57071
62813
1
1
1
36450
78805
The simple solution is to add the primary key as the last column to the ORDER BY clause to break ties. This returns the same id value (1) in every iteration regardless of the execution plan and indexes.
SELECT t.id
FROM
(SELECT
ROW_NUMBER() OVER(PARTITION BY a.accno ORDER BY a.updDtm desc, a.fileCreatedTime DESC, a.id) AS seq,
a.id, a.accno, a.updDtm, a.fileCreatedTime
FROM
test_account a) AS t
WHERE t.seq = 1;
GO 10
On a side note, this index will optimize the query:
CREATE NONCLUSTERED INDEX idx ON dbo.test_account(accno, updDtm DESC, fileCreatedTime DESC, id);

How do I exclude rows when an incremental value starts over?

I am a newbie poster but have spent a lot of time researching answers here. I can't quite figure out how to create a SQL result set using SQL Server 2008 R2 that should probably use LEAD/LAG from more modern versions. I am trying to aggregate data based on the sequencing of one column, but there can be a varying number of instances in each sequence. The only way I know a sequence has ended is when the next row has a lower sequence number. So it may go 1-2, 1-2-3-4, 1-2-3, and I have to figure out how to make three aggregates out of that.
Source data is joined tables that look like this:
recordID  instanceDate    moduleID  iResult  interactionNum
1356      10/6/15 16:14   1         68       1
1357      10/7/15 16:22   1         100      2
1434      10/9/15 16:58   1         52       1
1435      10/11/15 17:00  1         60       2
1436      10/15/15 16:57  1         100      3
1437      10/15/15 16:59  1         100      4
I need to find a way to separate the first 2 rows from the last 4 rows in this example, based on values in the last column.
What I would love to ultimately get is a result set that looks like this, which averages the iResult column based on the grouping and takes the first instanceDate from the grouping:
instanceDate  moduleID  iResult
10/6/15       1         84
10/9/15       1         78
I can aggregate to get this result using MIN and AVG if I can just find a way to separate the groups. The data is ordered by instanceDate (please ignore the date formatting here) then interactionNum, and the group separation should happen when the query finds a row whose interactionNum is <= the previous row's (it will usually restart at 1, but not always, so I prefer to separate on a lower-or-equal integer value).
Here is the query I have so far (includes the joins that give the above data set):
SELECT
X.*
FROM
(SELECT TOP 100 PERCENT
instanceDate, b.ModuleID, iResult, b.interactionNum
FROM
(firstTable a
INNER JOIN
secondTable b ON b.someID = a.someID)
WHERE
a.someID = 2
AND b.otherID LIKE 'xyz'
AND a.ModuleID = 1
ORDER BY
instanceDate) AS X
OUTER APPLY
(SELECT TOP 1
*
FROM
(SELECT
instanceDate, d.ModuleID, iResult, d.interactionNum
FROM
(firstTable c
INNER JOIN
secondTable d ON d.someID = c.someID)
WHERE
c.someID = 2
AND d.otherID LIKE 'xyz'
AND c.ModuleID = 1
AND d.interactionNum = X.interactionNum
AND c.instanceDate < X.instanceDate) X2
ORDER BY
instanceDate DESC) Y
WHERE
NOT EXISTS (SELECT Y.interactionNum INTERSECT SELECT X.interactionNum)
But this is returning an interim result set like this:
instanceDate    ModuleID  iResult  interactionNum
10/6/15 16:10   1         68       1
10/6/15 16:14   1         100      2
10/15/15 16:57  1         100      3
10/15/15 16:59  1         100      4
and the problem is that the rows with interactionNum 3 and 4 do not belong in this result set. They would go in the next result set when I loop over this query. How do I keep them out of the result set in this iteration? I need the result set from this query to include just the first two rows, 'seeing' that row 3 of the source data has a lower interactionNum than row 2.
Not sure how ModuleID was supposed to be used, but I guess you're looking for something like this:
select min(instanceDate), [moduleID], avg([iResult])
from (
    select *, row_number() over (partition by [moduleID] order by instanceDate) as RN
    from Table1
) X
group by [moduleID], RN - [interactionNum]
The idea here is to create a running number with row_number for each moduleid, and then use the difference between that and InteractionNum as grouping criteria.
Example in SQL Fiddle
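To see why the difference works as a grouping key, a quick inspection query (assuming the same Table1 from the fiddle) makes it visible:
-- For the sample rows, RN counts 1..6 in instanceDate order while
-- interactionNum restarts, so RN - interactionNum yields 0,0,2,2,2,2:
-- constant within each run, jumping whenever a new run starts.
select recordID, interactionNum,
       row_number() over (partition by [moduleID] order by instanceDate) as RN,
       row_number() over (partition by [moduleID] order by instanceDate)
           - [interactionNum] as grp
from Table1
order by instanceDate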
Here is my solution, although it should be said, I think @JamesZ's answer is cleaner.
I created a new field called newinstance, which is 1 wherever your interactionNum is 1. I then created a rolling sum of newinstance, called rollinginstance, to group on.
Change the last select to SELECT * FROM cte2 to show all the fields I added.
IF OBJECT_ID('tempdb..#tmpData') IS NOT NULL
DROP TABLE #tmpData
CREATE TABLE #tmpData (recordID INT, instanceDate DATETIME, moduleID INT, iResult INT, interactionNum INT)
INSERT INTO #tmpData
SELECT 1356,'10/6/15 16:14',1,68,1 UNION
SELECT 1357,'10/7/15 16:22',1,100,2 UNION
SELECT 1434,'10/9/15 16:58',1,52,1 UNION
SELECT 1435,'10/11/15 17:00',1,60,2 UNION
SELECT 1436,'10/15/15 16:57',1,100,3 UNION
SELECT 1437,'10/15/15 16:59',1,100,4
;WITH cte1 AS
(
SELECT *,
CASE WHEN interactionNum=1 THEN 1 ELSE 0 END AS newinstance,
ROW_NUMBER() OVER(ORDER BY recordID) as rowid
FROM #tmpData
), cte2 AS
(
SELECT *,
(select SUM(newinstance) from cte1 b where b.rowid<=a.rowid) as rollinginstance
FROM cte1 a
)
SELECT MIN(instanceDate) AS instanceDate, moduleID, AVG(iResult) AS iResult
FROM cte2
GROUP BY moduleID, rollinginstance
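As a side note beyond the 2008 R2 constraint in the question: on SQL Server 2012+ the correlated subquery in cte2 can be replaced by a windowed running sum. A minimal sketch against the same #tmpData:
;WITH cte1 AS
(
    SELECT *,
        CASE WHEN interactionNum = 1 THEN 1 ELSE 0 END AS newinstance
    FROM #tmpData
)
SELECT MIN(instanceDate) AS instanceDate, moduleID, AVG(iResult) AS iResult
FROM (
    SELECT *,
        -- running sum of group starts replaces the row-by-row subquery
        SUM(newinstance) OVER (ORDER BY recordID
            ROWS UNBOUNDED PRECEDING) AS rollinginstance
    FROM cte1
) AS x
GROUP BY moduleID, rollinginstance;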

SQL Server pick random (or first) value with aggregation

How can I get SQL Server to return the first value (any one, I don't care, it just needs to be fast) it comes across when aggregating?
For example, let's say I have:
ID Group
1 A
2 A
3 A
4 B
5 B
and I need to get any one of the ID's for each group. I can do this as follows:
select max(id), [group]
from [Table]
group by [group]
which returns
ID Group
3 A
5 B
That does the job, but it seems stupid to me to ask SQL Server to calculate the highest ID when all it really needs to do is to pick the first ID it comes across.
Thanks
PS - the fields are indexed, so maybe it doesn't really make a difference?
There is an undocumented aggregate called ANY. It is not valid syntax to write directly, but it is possible to get it to appear in your execution plans. It does not provide any performance advantage, however.
Assuming the following table and index structure
CREATE TABLE T
(
    id int identity primary key,
    [group] char(1)
)

CREATE NONCLUSTERED INDEX ix ON T([group])

INSERT INTO T
SELECT TOP 1000000 CHAR(65 + ROW_NUMBER() OVER (ORDER BY @@SPID) % 3)
FROM sys.all_objects o1, sys.all_objects o2, sys.all_objects o3
I have populated it with sample data such that there are many rows per group.
Your original query
SELECT MAX(id),
[group]
FROM T
GROUP BY [group]
Gives Table 'T'. Scan count 1, logical reads 1367 and the plan
|--Stream Aggregate(GROUP BY:([T].[group]) DEFINE:([Expr1003]=MAX([T].[id])))
   |--Index Scan(OBJECT:([T].[ix]), ORDERED FORWARD)
Rewritten to get the ANY aggregate...
;WITH cte AS
(
SELECT *,
ROW_NUMBER() OVER (PARTITION BY [group] ORDER BY [group] ) AS RN
FROM T)
SELECT id,
[group]
FROM cte
WHERE RN=1
Gives Table 'T'. Scan count 1, logical reads 1367 and the plan
|--Stream Aggregate(GROUP BY:([T].[group]) DEFINE:([T].[id]=ANY([T].[id])))
   |--Index Scan(OBJECT:([T].[ix]), ORDERED FORWARD)
Even though potentially SQL Server could stop processing the group as soon as the first value is found and skip to the next one it doesn't. It still processes all rows and the logical reads are the same.
For this particular example with many rows in the group a more efficient version would be a recursive CTE.
WITH RecursiveCTE
AS (
SELECT TOP 1 id, [group]
FROM T
ORDER BY [group]
UNION ALL
SELECT R.id, R.[group]
FROM (
SELECT T.*,
rn = ROW_NUMBER() OVER (ORDER BY T.[group]) -- order by [group] so rn = 1 lands in the nearest following group
FROM T
JOIN RecursiveCTE R
ON R.[group] < T.[group]
) R
WHERE R.rn = 1
)
SELECT *
FROM RecursiveCTE
OPTION (MAXRECURSION 0);
Which gives
Table 'Worktable'. Scan count 2, logical reads 19
Table 'T'. Scan count 4, logical reads 12
The logical reads are much less as it retrieves the first row per group then seeks into the next group rather than reading a load of records that don't contribute to the final result.

T-SQL SELECT DISTINCT & ROW_NUMBER() OVER Ordering Problem

I'm trying to select DISTINCT rows from a view using ROW_NUMBER() OVER for paging. When I switched the ORDER BY field from a SMALLDATETIME to INT I started getting weird results:
SELECT RowId, Title, HitCount
FROM (
    SELECT DISTINCT Title, HitCount,
        ROW_NUMBER() OVER (ORDER BY HitCount DESC) AS RowId
    FROM ou_v_Articles T
) AS Temp
WHERE RowId BETWEEN 1 AND 5
This query returns:
RowId | Title | HitCount
=======================
4 --- 9
1 --- 43
3 --- 11
2 --- 13
5 --- 0
The results are obviously not in the correct order. I'm not sure what the problem is here, but when I removed DISTINCT it orders them correctly.
Thanks.
Applying DISTINCT to a column list containing ROW_NUMBER() will always result in every row being distinct, as there is one ROW_NUMBER per row.
Is the RowId value you're getting correct? Perhaps you just need an ORDER BY RowId clause on the outer query?
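A minimal sketch of that approach, assuming duplicates should be collapsed before numbering (DISTINCT runs in an inner derived table, and ROW_NUMBER is applied to the de-duplicated rows):
SELECT RowId, Title, HitCount
FROM (
    -- number the rows only after DISTINCT has removed duplicates
    SELECT Title, HitCount,
        ROW_NUMBER() OVER (ORDER BY HitCount DESC) AS RowId
    FROM (SELECT DISTINCT Title, HitCount FROM ou_v_Articles) AS D
) AS Temp
WHERE RowId BETWEEN 1 AND 5
ORDER BY RowId;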
Have you tried just using an order by on the outer select and removing the OVER clause?

SQL Select Statement For Calculating A Running Average Column

I am trying to add a running average column to a SELECT statement, computed from a column's values in the n previous rows of the same result set.
Let me explain
Id  Number  Average
1   1       NULL
2   3       NULL
3   2       NULL
4   4       2    <- average of (1, 3, 2), the Numbers from the previous 3 rows
5   6       3    <- average of (3, 2, 4), the Numbers from the previous 3 rows
.   .       .
.   .       .
The first 3 rows of the Average column are null because there are no previous rows. The row 4 in the Average column shows the average of the Number column from the previous 3 rows.
I need some help trying to construct a SQL Select statement that will do this.
This should do it:
--Test Data
CREATE TABLE RowsToAverage
(
ID int NOT NULL,
Number int NOT NULL
)
INSERT RowsToAverage(ID, Number)
SELECT 1, 1
UNION ALL
SELECT 2, 3
UNION ALL
SELECT 3, 2
UNION ALL
SELECT 4, 4
UNION ALL
SELECT 5, 6
UNION ALL
SELECT 6, 8
UNION ALL
SELECT 7, 10
--The query
;WITH NumberedRows
AS
(
SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
FROM RowsToAverage rta
)
SELECT nr.ID, nr.Number,
CASE
WHEN nr.RowNumber <=3 THEN NULL
ELSE ( SELECT avg(Number)
FROM NumberedRows
WHERE RowNumber < nr.RowNumber
AND RowNumber >= nr.RowNumber - 3
)
END AS MovingAverage
FROM NumberedRows nr
Assuming that the Id column is sequential, here's a simplified query for a table named "MyTable":
SELECT
b.Id,
b.Number,
(
SELECT
AVG(a.Number)
FROM
MyTable a
WHERE
a.id >= (b.Id - 3)
AND a.id < b.Id
AND b.Id > 3
) as Average
FROM
MyTable b;
Edit: I missed the point that it should average the three previous records...
For a general running average, I think something like this would work:
SELECT
id, number,
SUM(number) OVER (ORDER BY ID) /
ROW_NUMBER() OVER (ORDER BY ID) AS [RunningAverage]
FROM myTable
ORDER BY ID
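As a side note, on SQL Server 2012 and later the "previous three rows" average can be expressed directly with a window frame; a sketch against the RowsToAverage test table above, assuming the newer version is available:
SELECT ID, Number,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY ID) > 3
         -- average only the three rows immediately before the current one
         THEN AVG(Number) OVER (ORDER BY ID
                  ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING)
    END AS MovingAverage
FROM RowsToAverage
ORDER BY ID;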
A simple self-join would seem to perform much better than a row-referencing subquery.
Generate 10k rows of test data:
drop table test10k
create table test10k (Id int, Number int, constraint test10k_cpk primary key clustered (id))
;WITH digits AS (
SELECT 0 as Number
UNION SELECT 1
UNION SELECT 2
UNION SELECT 3
UNION SELECT 4
UNION SELECT 5
UNION SELECT 6
UNION SELECT 7
UNION SELECT 8
UNION SELECT 9
)
,numbers as (
SELECT
(thousands.Number * 1000)
+ (hundreds.Number * 100)
+ (tens.Number * 10)
+ ones.Number AS Number
FROM digits AS ones
CROSS JOIN digits AS tens
CROSS JOIN digits AS hundreds
CROSS JOIN digits AS thousands
)
insert test10k (Id, Number)
select Number, Number
from numbers
I would pull the special case of the first 3 rows out of the main query; you can UNION ALL those back in if you really want them in the row set. Self-join query:
;WITH NumberedRows
AS
(
SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
FROM test10k rta
)
SELECT nr.ID, nr.Number,
avg(trailing.Number) as MovingAverage
FROM NumberedRows nr
join NumberedRows as trailing on trailing.RowNumber between nr.RowNumber-3 and nr.RowNumber-1
where nr.RowNumber > 3
group by nr.id, nr.Number
On my machine this takes about 10 seconds; the subquery approach that Aaron Alton demonstrated takes about 45 seconds (after I changed it to reflect my test source table):
;WITH NumberedRows
AS
(
SELECT rta.*, row_number() OVER (ORDER BY rta.ID ASC) AS RowNumber
FROM test10k rta
)
SELECT nr.ID, nr.Number,
CASE
WHEN nr.RowNumber <=3 THEN NULL
ELSE ( SELECT avg(Number)
FROM NumberedRows
WHERE RowNumber < nr.RowNumber
AND RowNumber >= nr.RowNumber - 3
)
END AS MovingAverage
FROM NumberedRows nr
If you do a SET STATISTICS PROFILE ON, you can see the self join has 10k executes on the table spool. The subquery has 10k executes on the filter, aggregate, and other steps.
Check out some solutions here. I'm sure that you could adapt one of them easily enough.
If you want this to be truly performant, and aren't afraid to dig into a seldom-used area of SQL Server, you should look into writing a custom aggregate function. SQL Server 2005 and 2008 brought CLR integration to the table, including the ability to write user-defined aggregate functions. A custom running-total aggregate would be by far the most efficient way to calculate a running average like this.
Alternatively you can denormalize and store precalculated running values. Described here:
http://sqlblog.com/blogs/alexander_kuznetsov/archive/2009/01/23/denormalizing-to-enforce-business-rules-running-totals.aspx
Performance of selects is as fast as it goes. Of course, modifications are slower.
