Randomize part of select from CTE output - sql-server

In this question @GordonLinoff provided a solution (recursive common table expression) to my initial question. This is a follow-up question.
Initial question:
How can I loop through registrations until a certain total (sum) of AmountPersons is reached, and if the next row's AmountPersons is too high to be invited, check the AmountPersons of the following row to see if it would fit?
Please check the initial question via the link above to get the full picture.
New situation:
First we have 20 available seats and we run through the data rows to fill these seats (initial question).
Then I sorted on Count_Invited and updated the ORDER BY in the row_number() function, so that people who were invited the least get priority.
Then I also added the Count_Registered column, because people who registered most, but got invited least, should also get priority.
New question:
How can I scramble the last two people who are invited in the result below if a third, fourth, fifth... user also has the same values (Count_Invited, Count_Registered, and AmountPersons are all 1)?
The top of the data is ordered correctly; only the last few rows need the invitee randomized.
I know of this ORDER BY NEWID() functionality to randomize rows, but it can't be applied on all rows in my case. I don't know how to approach this... More info below.
The new T-SQL code:
WITH tn AS (
SELECT g.[Id],
g.[FirstName],
g.[LastName],
g.[E-mail],
g.[Count_Invited],
g.[Count_Registered],
r.[DateReservation],
r.[AmountPersons],
row_number() over(order by g.[Count_Invited], g.[Count_Registered] DESC) as seqnum
FROM USERTABLE g
INNER JOIN RESERVATION r ON r.[UserId] = g.[Id]
WHERE r.[PartyId] = 21
),
cte AS (
SELECT [Id], [FirstName], [LastName], [E-mail], [Count_Invited], [Count_Registered], [DateReservation],
[AmountPersons], [AmountPersons] as total, 1 as is_included, seqnum
FROM tn
WHERE seqnum = 1
UNION ALL
SELECT tn.[Id], tn.[FirstName], tn.[LastName], tn.[E-mail], tn.[Count_Invited], tn.[Count_Registered], tn.[DateReservation], tn.[AmountPersons],
(case when tn.[AmountPersons] +cte.total <= 20
then tn.[AmountPersons] +cte.total
else cte.total
end),
(case when tn.[AmountPersons] +cte.total <= 20
then 1
else 0
end) as is_included,
tn.seqnum
FROM cte
JOIN tn ON tn.seqnum = cte.seqnum + 1
WHERE cte.total < 20
)
SELECT cte.Id AS userId,
cte.FirstName,
cte.LastName,
cte.[E-mail],
cte.Count_Invited,
cte.Count_Registered,
cte.AmountPersons,
cte.DateReservation
FROM cte
WHERE is_included = 1
This is the result I'm getting every time I execute the above code: the same rows in the same order on every run.
I hope this makes sense to someone. Thank you.
Output from suggested answer by @George Menoutis:
Edit: Extra clarification steps. This is what should happen:
-- declare amountSeats = 25
-- select 1st value of Count_Invited
-- if that value is 0
-- do sum of AmountPersons (multiple rows) where Count_Invited is 0
-- if that sum is lower than amountSeats, let's say it's 10 now
-- insert all rows with value 0 in temp table (not sure if this is the way to go...)
-- select 2nd value (not second row) of Count_Invited --> so where Count_Invited is not 0
-- if that value is 1
-- do sum of AmountPersons (multiple rows) where Count_Invited is 1
-- add that to the sum for Count_Invited = 0
-- if that combined sum is still lower than amountSeats, let's say it's 15 now
-- insert (add) all rows with Count_Invited 1 in temp table
-- select 3rd value (not 3rd row) of Count_Invited --> so where Count_Invited NOT IN (0, 1)
-- if that value is 5
-- do sum of AmountPersons (multiple rows) where Count_Invited is 5
-- add that to the sums for Count_Invited = 0 and Count_Invited = 1
-- let's now say the combined sum of AmountPersons is 40
-- this means that not everyone with Count_Invited = 5 can be invited as there are only 10 open seats
-- a random selection needs to be made of these rows
-- select random rows where Count_Invited is 5 until the sum of these rows is 10
-- if 10 can't be matched, get as close as possible by looping through the leftover rows, but don't exceed 10

I actually think your newid() idea is best. It's just that the correct place to put it is in the definition of seqnum:
row_number() over(order by g.[Count_Invited], g.[Count_Registered] DESC, newid() asc) as seqnum
Addendum: After OP's comment, I made a new question here. So it seems this will work out, but you have to materialize tn as a temp table first; otherwise newid() is re-evaluated on every reference by the recursive cte that follows.
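A minimal sketch of that materialization, assuming the same table and column names as in the query above: tn is written to a temp table so each row's random tiebreaker is computed exactly once, and the recursive part then reads the temp table.
-- Materialize tn first so newid() is evaluated once per row
SELECT g.[Id], g.[FirstName], g.[LastName], g.[E-mail],
       g.[Count_Invited], g.[Count_Registered],
       r.[DateReservation], r.[AmountPersons],
       row_number() over(order by g.[Count_Invited],
                                  g.[Count_Registered] DESC,
                                  newid()) as seqnum
INTO #tn
FROM USERTABLE g
INNER JOIN RESERVATION r ON r.[UserId] = g.[Id]
WHERE r.[PartyId] = 21;
-- The recursive cte is then written against #tn instead of tn,
-- so the seqnum values (and the random tiebreak) stay stable.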


Ultra Fuzzy gaps and islands grouping problem

I have a bunch of test data. Each test was done several dozen times and the average and margin of error for each test calculated in a CTE. In the next step I want to dense_rank each sub-group of tests. Here's an example of a sub-group of data and the rank value I'm looking for:
AvgScore StdErr DesiredRank
65550 2109 1
67188 2050 1
67407 2146 1
67414 1973 1
67486 1889 2
67581 2320 2
67858 1993 2
68509 2029 2
68645 2039 2
68868 2051 2
68902 1943 2
69305 1564 3
69430 2037 3
69509 1594 3
387223 12521 4
389709 12975 4
392200 11344 4
398916 11755 4
399018 11480 5
401144 11021 5
401640 10973 5
403442 10688 5
Notice the margin of error for each score makes many scores ostensibly equivalent. Yes, this causes some rows to technically belong to more than one group but making it part of the nearest group gives the most accurate results.
I looked at Grouping data into fuzzy gaps and islands, but this version seems significantly more complex, since a switch from one group to another not only requires the two rows to be within each other's margin of error; a switch may also occur between ostensibly equivalent rows.
Here is the most complex case appearing in the example: row 1 has rows 2-6 within its range, but row 5 doesn't have row 1 within its range, so a new rank must be started at row 5 even though row 6 is still within row 1's range.
There are only a couple hundred groups in the result set so performance shouldn't be an issue. I'm just struggling with finding logic that can not only look in both directions across the ordered range but recognize that some intermediate row has forced the starting of a new group. Obviously this is simple using a cursor but I have additional processing to do after the ranking and so I'm looking for a SET based solution if any is possible.
I'm on 2017 but if there is a set based non-recursive answer that requires 2019 I'm OK with that.
I don't really like it when the depth of recursion depends on the number of rows in the data as opposed to an actual depth to the data. This solution works OK for me because I have so few rows to rank. All the same, for future readers, if someone has a non-recursive solution I'm happy to mark it as an answer rather than my own.
To demonstrate this IS set based, I've added a GROUP BY column. Recursion depth depends on the number of items to be ranked, not the number of groups; all groups are processed simultaneously. This code was tested on my production dataset and compared to answers generated by a sequential loop through the data, so I know it works on bigger, more complex data sets.
WITH T AS (
SELECT *
FROM(VALUES ('Type1', 65550 ,2109 ,1),('Type2', 65550 ,2109 ,1),
('Type1', 67188 ,2050 ,1),('Type2', 67188 ,2050 ,1),
('Type1', 67407 ,2146 ,1),('Type2', 67407 ,2146 ,1),
('Type1', 67414 ,1973 ,1),('Type2', 67414 ,1973 ,1),
('Type1', 67486 ,1889 ,2),('Type2', 67486 ,1889 ,2),
('Type1', 67581 ,2320 ,2),('Type2', 67581 ,2320 ,2),
('Type1', 67858 ,1993 ,2),('Type2', 67858 ,1993 ,2),
('Type1', 68509 ,2029 ,2),('Type2', 68509 ,2029 ,2),
('Type1', 68645 ,2039 ,2),('Type2', 68645 ,2039 ,2),
('Type1', 68868 ,2051 ,2),('Type2', 68868 ,2051 ,2),
('Type1', 68902 ,1943 ,2),('Type2', 68902 ,1943 ,2),
('Type1', 69305 ,1564 ,3),('Type2', 69305 ,1564 ,3),
('Type1', 69430 ,2037 ,3),('Type2', 69430 ,2037 ,3),
('Type1', 69509 ,1594 ,3),('Type2', 69509 ,1594 ,3)) X(TestType,AvgScore,StdErr,DesiredRank)
), X AS (
SELECT *,ROW_NUMBER() OVER(PARTITION BY TestType ORDER BY AvgScore) GRow,1 Rnk,AvgScore RAvg, AvgScore+StdErr RMax
FROM T
), Y AS (
SELECT TestType,AvgScore,StdErr,DesiredRank,GRow,Rnk,RAvg,RMax,0 NewRank,0 prAvg,0 prMax
FROM X
WHERE GRow = 1
UNION ALL
SELECT Z.TestType,Z.AvgScore,Z.StdErr,Z.DesiredRank,Z.GRow
,CASE WHEN W.NewRank = 1 THEN Y.Rnk+1 ELSE Y.Rnk END Rnk
,CASE WHEN W.NewRank = 1 THEN Z.RAvg ELSE Y.RAvg END RAvg
,CASE WHEN W.NewRank = 1 THEN Z.RMax ELSE Y.RMax END RMax
,W.NewRank,Y.RAvg prAvg,Y.RMax prMax
FROM Y
CROSS APPLY (SELECT * FROM X WHERE X.TestType=Y.TestType and X.GRow = Y.GRow+1) Z
CROSS APPLY (VALUES (CASE WHEN Z.AvgScore <= Y.RMax and Z.AvgScore - Z.StdErr <= Y.RAvg THEN 0 ELSE 1 END)) W(NewRank)
)
SELECT * FROM Y
ORDER BY TestType,AvgScore
OPTION (MAXRECURSION 0); -- recursion advances one row per level, so lift the default 100-level cap for larger tables
It is really a tough one: at first I thought I could just recursively increase the Rank when a certain overlap is missing, examining the highest Rank at each step so that lower AvgScores get fewer Rank increments. But I recognised that a recursive CTE's recursive element cannot have
- aggregation + GROUP BY
- multiple references to the recursive CTE
- a nested CTE defined
so I gave up on that direction. It seems the data should be "prepared" in a way that lets it be fed to a simple recursion (I cannot think of any solution other than recursion).
So my solution is: find the lowest AvgScore that is the first one out of range of the current row, mark it as the first element of a new Rank, "jump" to that element, and repeat; at the end we have the set of all rows at which a new Rank should be assigned ("first" meaning by AvgScore order). After that, put all rows together and rank them.
So if your set is called #UltraFuzzy you can send it through a couple of CTE's:
;WITH UltraFuzzyCTE AS (
SELECT AvgScore, StdErr, AvgScore - StdErr as RangeMIN, AvgScore + StdErr as RangeMAX
FROM #UltraFuzzy
)
-- SELECT * FROM UltraFuzzyCTE ORDER BY AvgScore
,FirstOutOfRangeCTE AS (
SELECT
Original.*
,MIN (Helper.AvgScore) as FirstOutOfRange
FROM UltraFuzzyCTE as Original
LEFT OUTER JOIN UltraFuzzyCTE as Helper
ON Original.RangeMAX < Helper.AvgScore OR Original.AvgScore < Helper.RangeMIN
GROUP BY Original.AvgScore, Original.StdErr, Original.RangeMIN, Original.RangeMAX
)
-- SELECT * FROM FirstOutOfRangeCTE ORDER BY AvgScore
,NewRankFirstMemberCTE AS (
SELECT * FROM FirstOutOfRangeCTE WHERE AvgScore = (SELECT MIN (AvgScore) FROM FirstOutOfRangeCTE)
UNION ALL
SELECT f.*
FROM NewRankFirstMemberCTE as n
INNER JOIN FirstOutOfRangeCTE as f ON n.FirstOutOfRange = f.AvgScore
)
-- SELECT * FROM NewRankFirstMemberCTE ORDER BY AvgScore
,RankCTE AS (
SELECT *, 1 as NewRankFirstMember FROM NewRankFirstMemberCTE
UNION ALL
SELECT *, 0 as NewRankFirstMember FROM FirstOutOfRangeCTE WHERE AvgScore NOT IN (SELECT AvgScore FROM NewRankFirstMemberCTE)
)
-- SELECT * FROM RankCTE ORDER BY AvgScore
SELECT *, SUM (NewRankFirstMember) OVER (ORDER BY AvgScore) as Rank
FROM RankCTE
ORDER BY AvgScore
Definitely it can be simplified; for debugging I used SELECT *, but unnecessary fields could be thrown away and fewer CTEs used. The commented-out SELECTs are for step-by-step analysis.

Get a count based on the row order

I have a table with this structure
Create Table Example (
[order] INT,
[type] INT
)
With this data:
order|type
1 7
2 11
3 11
4 18
5 5
6 19
7 5
8 5
9 3
10 11
11 11
12 3
I need to get the count of each type based on the order, something like:
type|count
7 1
11 2
18 1
5 1
19 1
5 2
3 1
11 2
3 1
Context
Let's say that this table is about houses, so I have a list of houses in an order. So I have:
Order 1: A red house
2: A white house
3: A white house
4: A red house
5: A blue house
6: A blue house
7: A white house
So I need to show that info condensed. I need to say:
I have 1 red house
Then I have 2 white houses
Then I have 1 red house
Then I have 2 blue houses
Then I have 1 white house
So the count is based on the order. The DENSE_RANK function would help me if I were able to reset the RANK when the partition changes.
So I have an answer, but I have to warn you it's probably going to get some raised eyebrows because of how it's done. It uses something known as a "Quirky Update". If you plan to implement this, please for the love of god read through the linked article and understand that this is an "undocumented hack" which needs to be implemented precisely to avoid unintended consequences.
If you have a tiny bit of data, I'd just do it row by agonizing row for simplicity and clarity. However if you have a lot of data and still need high performance, this might do.
Requirements
Table must have a clustered index in the order you want to progress in
Table must have no other indexes (these might cause SQL to read the data from another index which is not in the correct order, causing the quantum superposition of row order to come collapsing down).
Table must be completely locked down during the operation (tablockx)
Update must progress in serial fashion (maxdop 1)
What it does
You know how people tell you there is no implicit order to the data in a table? That's still true 99% of the time. Except we know that ultimately it HAS to be stored on disk in SOME order. And it's that order that we're exploiting here. By forcing a clustered index update and the fact that you can assign variables in the same update statement that columns are updated, you can effectively scroll through the data REALLY fast.
Let's set up the data:
if object_id('tempdb.dbo.#t') is not null drop table #t
create table #t
(
_order int primary key clustered,
_type int,
_grp int
)
insert into #t (_order, _type)
select 1,7
union all select 2,11
union all select 3,11
union all select 4,18
union all select 5,5
union all select 6,19
union all select 7,5
union all select 8,5
union all select 9,3
union all select 10,11
union all select 11,11
union all select 12,3
Here's the update statement. I'll walk through each of the components below
declare @Order int, @Type int, @Grp int

update #t with (tablockx)
set @Order = _order,
    @Grp = case when _order = 1 then 1
                when _type != @Type then @Grp + 1
                else @Grp
           end,
    @Type = _type,
    _grp = @Grp
option (maxdop 1)
Update is performed with (tablockx). If you're working with a temp table, you know there's no contention on the table, but still it's a good habit to get into (if using this approach can even be considered a good habit to get into at all).
Set @Order = _order. This looks like a pointless statement, and it kind of is. However since _order is the primary key of the table, assigning that to a variable is what forces SQL to perform a clustered index update, which is crucial to this working
Populate an integer to represent the sequential groups you want. This is where the magic happens, and you have to think about it in terms of it scrolling through the table. When _order is 1 (the first row), just set the @Grp variable to 1. If, on any given row, the column value of _type differs from the variable value of @Type, we increment the grouping variable. If the values are the same, we just stick with the @Grp we have from the previous row.
Update the @Type variable with the column _type's value. Note this HAS to come after the assignment of @Grp for it to have the correct value.
Finally, set _grp = @Grp. This is where the actual column value is updated with the results of step 3.
All this must be done with option (maxdop 1). This means the Maximum Degree of Parallelism is set to 1. In other words, SQL cannot do any task parallelization which might lead to the ordering being off.
Now it's just a matter of grouping by the _grp field. You'll have a unique _grp value for each consecutive batch of _type.
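A minimal sketch of that final step, assuming the #t table populated by the update above (each _grp holds exactly one _type):
select _type   as [type],
       count(*) as [count]
from #t
group by _grp, _type
order by min(_order);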
Conclusion
If this seems bananas and hacky, it is. As with all things, you need to take this with a grain of salt, and I'd recommend really playing around with the concept to fully understand it if you plan to implement it because I guarantee nobody else is going to know how to troubleshoot it if you get a call in the middle of the night that it's breaking.
This solution uses a recursive CTE and relies on a gapless order value. If you don't have one, you can create it on the fly with ROW_NUMBER():
DECLARE #mockup TABLE([order] INT,[type] INT);
INSERT INTO #mockup VALUES
(1,7)
,(2,11)
,(3,11)
,(4,18)
,(5,5)
,(6,19)
,(7,5)
,(8,5)
,(9,3)
,(10,11)
,(11,11)
,(12,3);
WITH recCTE AS
(
SELECT m.[order]
,m.[type]
,1 AS IncCounter
,1 AS [Rank]
FROM #mockup AS m
WHERE m.[order]=1
UNION ALL
SELECT m.[order]
,m.[type]
,CASE WHEN m.[type]=r.[type] THEN r.IncCounter+1 ELSE 1 END
,CASE WHEN m.[type]<>r.[type] THEN r.[Rank]+1 ELSE r.[Rank] END
FROM #mockup AS m
INNER JOIN recCTE AS r ON m.[order]=r.[order]+1
)
SELECT recCTE.[type]
      ,MAX(recCTE.[IncCounter]) AS [count]
      ,recCTE.[Rank]
FROM recCTE
GROUP BY recCTE.[type], recCTE.[Rank]
OPTION (MAXRECURSION 0); -- the recursion advances one row per level, so lift the default 100-level cap for larger tables
The recursion travels down the rows, increasing the counter if the type is unchanged and increasing the rank if the type changes.
The rest is a simple GROUP BY.
I thought I'd post another approach I worked out, I think more along the lines of the dense_rank() work others were thinking about. The only thing this assumes is that _order is a sequential integer (i.e. no gaps).
Same data setup as before:
if object_id('tempdb.dbo.#t') is not null drop table #t
create table #t
(
_order int primary key clustered,
_type int,
_grp int
)
insert into #t (_order, _type)
select 1,7
union all select 2,11
union all select 3,11
union all select 4,18
union all select 5,5
union all select 6,19
union all select 7,5
union all select 8,5
union all select 9,3
union all select 10,11
union all select 11,11
union all select 12,3
What this approach does is row_number each _type so that regardless of where a _type exists, and how many times, the types will have a unique row_number in the order of the _order field. By subtracting that type-specific row number from the global row number (i.e. _order), you'll end up with groups. Here's the code for this one, then I'll walk through this as well.
;with tr as
(
select
-- Create an incrementing integer row_number over each _type (regardless of its position in the sequence)
_type_rid = row_number() over (partition by _type order by _order),
-- This shows that on rows 6-8 (the transition between type 19 and 5), naively they're all assigned the same group
naive_type_rid = _order - row_number() over (partition by _type order by _order),
-- By adding a value to the type_rid which is a function of _type, those two values are distinct.
-- Originally I just added the value, but I think squaring it ensures that there can't ever be another gap of 1
true_type_rid = (_order - row_number() over (partition by _type order by _order)) + power(_type, 2),
_type,
_order
from #t
-- order by _order -- uncomment this if you want to run the inner select separately
)
select
_grp = dense_rank() over (order by max(_order)),
_type = max(_type)
from tr
group by true_type_rid
order by max(_order)
What's Going On
First things first; I didn't have to create a separate column in the tr cte to return _type_rid. I did that mostly for troubleshooting and clarity. Secondly, I also didn't really have to do a second dense_rank on the final selection for the column _grp. I just did that so it matched exactly the results from my other approach.
Within each type, _type_rid is unique and increments by 1. _order also increments by 1. So as long as a given type is chugging along, gapped by only 1, _order - _type_rid will be the same value. Let's look at a couple of examples (this is the result of the tr cte, ordered by _order):
_type_rid naive_type_rid true_type_rid _type _order
-------------------- -------------------- -------------------- ----------- -----------
1 8 17 3 9
2 10 19 3 12
1 4 29 5 5
2 5 30 5 7
3 5 30 5 8
1 0 49 7 1
1 1 122 11 2
2 1 122 11 3
3 7 128 11 10
4 7 128 11 11
1 3 327 18 4
1 5 366 19 6
First row, _order - _type_rid = 1 - 1 = 0. This assigns this row (type 7) to group 0
Second row, 2 - 1 = 1. This assigns type 11 to group 1
Third row, 3 - 2 = 1. This assigns the second sequential type 11 to group 1 also
Fourth row, 4 - 1 = 3. This assigns type 18 to group 3
... and so forth.
The groups aren't sequential, but they ARE in the same order as _order, which is the important part. You'll also notice I added a value derived from _type (its square) into true_type_rid. That's because when we hit some of the later rows, groups switched but the difference still incremented by 1. By adding power(_type, 2), we can differentiate those off-by-one values and still keep the right order.
The final outer select from tr orders by max(_order) (both in my unnecessary dense_rank() _grp modification, and in the general result order).
Conclusion
This is still a little wonky, but definitely well within the bounds of "supported functionality". Given that I ran into one gotcha in there (the off-by-one thing), there might be others I haven't considered, so again, take that with a grain of salt, and do some testing.

How Can I Detect and Bound Changes Between Row Values in a SQL Table?

I have a table which records values over time, similar to the following:
RecordId Time Name
========================
1 10 Running
2 18 Running
3 21 Running
4 29 Walking
5 33 Walking
6 57 Running
7 66 Running
After querying this table, I need a result similar to the following:
FromTime ToTime Name
=========================
10 29 Running
29 57 Walking
57 NULL Running
I've toyed around with some of the aggregate functions (e.g. MIN, MAX, etc.), PARTITION and CTEs, but I can't seem to hit upon the right solution. I'm hoping a SQL guru can give me a hand, or at least point me in the right direction. Is there a fairly straightforward way to query this (preferably without a cursor)?
Finding "ToTime" By Aggregates Instead of a Join
I would like to share a really wild query that only takes 1 scan of the table with 1 logical read. By comparison, the best other answer on the page, Simon Kingston's query, takes 2 scans.
On a very large set of data (17,408 input rows, producing 8,193 result rows) it takes CPU 574 and duration 2645, while Simon Kingston's query takes CPU 63,820 and duration 37,108.
It's possible that with indexes the other queries on the page could perform many times better, but it is interesting to me to achieve 111x CPU improvement and 14x speed improvement just by rewriting the query.
(Please note: I mean no disrespect at all to Simon Kingston or anyone else; I am simply excited about my idea for this query panning out so well. His query is better than mine: its performance is plenty good and it is actually understandable and maintainable, unlike mine.)
Here is the impossible query. It is hard to understand. It was hard to write. But it is awesome. :)
WITH Ranks AS (
SELECT
T = Dense_Rank() OVER (ORDER BY Time, Num),
N = Dense_Rank() OVER (PARTITION BY Name ORDER BY Time, Num),
*
FROM
#Data D
CROSS JOIN (
VALUES (1), (2)
) X (Num)
), Items AS (
SELECT
FromTime = Min(Time),
ToTime = Max(Time),
Name = IsNull(Min(CASE WHEN Num = 2 THEN Name END), Min(Name)),
I = IsNull(Min(CASE WHEN Num = 2 THEN T - N END), Min(T - N)),
MinNum = Min(Num)
FROM
Ranks
GROUP BY
T / 2
)
SELECT
FromTime = Min(FromTime),
ToTime = CASE WHEN MinNum = 2 THEN NULL ELSE Max(ToTime) END,
Name
FROM Items
GROUP BY
I, Name, MinNum
ORDER BY
FromTime
Note: This requires SQL 2008 or up. To make it work in SQL 2005, change the VALUES clause to SELECT 1 UNION ALL SELECT 2.
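That is, the row-duplicating derived table becomes:
CROSS JOIN (
    SELECT 1 UNION ALL SELECT 2
) X (Num)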
Updated Query
After thinking about this a bit, I realized that I was accomplishing two separate logical tasks at the same time, and this made the query unnecessarily complicated: 1) prune out intermediate rows that have no bearing on the final solution (rows that do not begin a new task) and 2) pull the "ToTime" value from the next row. By performing #1 before #2, the query is simpler and performs with approximately half the CPU!
So here is the simplified query that first, trims out the rows we don't care about, then gets the ToTime value using aggregates rather than a JOIN. Yes, it does have 3 windowing functions instead of 2, but ultimately because of the fewer rows (after pruning those we don't care about) it has less work to do:
WITH Ranks AS (
SELECT
Grp =
Row_Number() OVER (ORDER BY Time)
- Row_Number() OVER (PARTITION BY Name ORDER BY Time),
[Time], Name
FROM #Data D
), Ranges AS (
SELECT
Result = Row_Number() OVER (ORDER BY Min(R.[Time]), X.Num) / 2,
[Time] = Min(R.[Time]),
R.Name, X.Num
FROM
Ranks R
CROSS JOIN (VALUES (1), (2)) X (Num)
GROUP BY
R.Name, R.Grp, X.Num
)
SELECT
FromTime = Min([Time]),
ToTime = CASE WHEN Count(*) = 1 THEN NULL ELSE Max([Time]) END,
Name = IsNull(Min(CASE WHEN Num = 2 THEN Name ELSE NULL END), Min(Name))
FROM Ranges R
WHERE Result > 0
GROUP BY Result
ORDER BY FromTime;
This updated query has all the same issues I presented in my explanation; however, they are easier to solve because I am not dealing with the extra unneeded rows. I also saw that the Row_Number() / 2 value of 0 had to be excluded, and I am not sure why I didn't exclude it from the prior query, but in any case this works perfectly and is amazingly fast!
Outer Apply Tidies Things Up
Last, here is a version basically identical to Simon Kingston's query that I think is an easier to understand syntax.
SELECT
FromTime = Min(D.Time),
X.ToTime,
D.Name
FROM
#Data D
OUTER APPLY (
SELECT TOP 1 ToTime = D2.[Time]
FROM #Data D2
WHERE
D.[Time] < D2.[Time]
AND D.[Name] <> D2.[Name]
ORDER BY D2.[Time]
) X
GROUP BY
X.ToTime,
D.Name
ORDER BY
FromTime;
Here's the setup script if you want to do performance comparison on a larger data set:
CREATE TABLE #Data (
RecordId int,
[Time] int,
Name varchar(10)
);
INSERT #Data VALUES
(1, 10, 'Running'),
(2, 18, 'Running'),
(3, 21, 'Running'),
(4, 29, 'Walking'),
(5, 33, 'Walking'),
(6, 57, 'Running'),
(7, 66, 'Running'),
(8, 77, 'Running'),
(9, 81, 'Walking'),
(10, 89, 'Running'),
(11, 93, 'Walking'),
(12, 99, 'Running'),
(13, 107, 'Running'),
(14, 113, 'Walking'),
(15, 124, 'Walking'),
(16, 155, 'Walking'),
(17, 178, 'Running');
GO
insert #data select recordid + (select max(recordid) from #data), time + (select max(time) +25 from #data), name from #data
GO 10
Explanation
Here is the basic idea behind my query.
The times that represent a switch have to appear in two adjacent rows, one to end the prior activity, and one to begin the next activity. The natural solution to this is a join so that an output row can pull from its own row (for the start time) and the next changed row (for the end time).
However, my query accomplishes the need to make end times appear in two different rows by repeating the row twice, with CROSS JOIN (VALUES (1), (2)). We now have all our rows duplicated. The idea is that instead of using a JOIN to do calculation across columns, we'll use some form of aggregation to collapse each desired pair of rows into one.
The next task is to make each duplicate row split properly so that one instance goes with the prior pair and one with the next pair. This is accomplished with the T column, a ROW_NUMBER() ordered by Time, and then divided by 2 (though I changed it to a DENSE_RANK() for symmetry, as in this case it returns the same value as ROW_NUMBER). For efficiency I performed the division in the next step so that the row number could be reused in another calculation (keep reading). Since row number starts at 1, and dividing by 2 implicitly converts to int, this has the effect of producing the sequence 0 1 1 2 2 3 3 4 4 ... which has the desired result: by grouping on this calculated value (and because we also ordered by Num in the row number), every set after the first is composed of a Num = 2 from the "prior" row and a Num = 1 from the "next" row.
The next difficult task is figuring out a way to eliminate the rows we don't care about and somehow collapse the start time of a block into the same row as the end time of a block. What we want is a way to get each discrete set of Running or Walking to be given its own number so we can group by it. DENSE_RANK() is a natural solution, but a problem is that it pays attention to each value in the ORDER BY clause--we don't have syntax to do DENSE_RANK() OVER (PREORDER BY Time ORDER BY Name) so that the Time does not cause the RANK calculation to change except on each change in Name. After some thought I realized I could crib a bit from the logic behind Itzik Ben-Gan's grouped islands solution, and I figured out that the rank of the rows ordered by Time, subtracted from the rank of the rows partitioned by Name and ordered by Time, would yield a value that was the same for each row in the same group but different from other groups. The generic grouped islands technique is to create two calculated values that both ascend in lockstep with the rows such as 4 5 6 and 1 2 3, that when subtracted will yield the same value (in this example case 3 3 3 as the result of 4 - 1, 5 - 2, and 6 - 3). Note: I initially started with ROW_NUMBER() for my N calculation but it wasn't working. The correct answer was DENSE_RANK() though I am sorry to say I don't remember why I concluded this at the time, and I would have to dive in again to figure it out. But anyway, that is what T-N calculates: a number that can be grouped on to isolate each "island" of one status (either Running or Walking).
But this was not the end because there are some wrinkles. First of all, the "next" row in each group contains the incorrect values for Name, N, and T. We get around this by selecting, from each group, the value from the Num = 2 row when it exists (but if it doesn't, then we use the remaining value). This yields the expressions like CASE WHEN NUM = 2 THEN x END: this will properly weed out the incorrect "next" row values.
After some experimentation, I realized that it was not enough to group by T - N by itself, because both the Walking groups and the Running groups can have the same calculated value (in the case of my sample data provided up to 17, there are two T - N values of 6). But simply grouping by Name as well solves this problem. No group of either "Running" or "Walking" will have the same number of intervening values from the opposite type. That is, since the first group starts with "Running", and there are two "Walking" rows intervening before the next "Running" group, then the value for N will be 2 less than the value for T in that next "Running" group. I just realized that one way to think about this is that the T - N calculation counts the number of rows before the current row that do NOT belong to the same value "Running" or "Walking". Some thought will show that this is true: if we move on to the third "Running" group, it is only the third group by virtue of having a "Walking" group separating them, so it has a different number of intervening rows coming in before it, and due to it starting at a higher position, it is high enough so that the values cannot be duplicated.
Finally, since our final group consists of only one row (there is no end time and we need to display a NULL instead) I had to throw in a calculation that could be used to determine whether we had an end time or not. This is accomplished with the Min(Num) expression and then finally detecting that when the Min(Num) was 2 (meaning we did not have a "next" row) then display a NULL instead of the Max(ToTime) value.
I hope this explanation is of some use to people. I don't know if my "row-multiplying" technique will be generally useful and applicable to most SQL query writers in production environments, because of the difficulty of understanding it and the difficulty of maintenance it will most certainly present to the next person visiting the code (the reaction is probably "What on earth is it doing!?!" followed by a quick "Time to rewrite!").
If you have made it this far then I thank you for your time and for indulging me in my little excursion into incredibly-fun-sql-puzzle-land.
See it For Yourself
A.k.a. simulating a "PREORDER BY":
One last note. To see how T - N does the job--and noting that using this part of my method may not be generally applicable to the SQL community--run the following query against the first 17 rows of the sample data:
WITH Ranks AS (
SELECT
T = Dense_Rank() OVER (ORDER BY Time),
N = Dense_Rank() OVER (PARTITION BY Name ORDER BY Time),
*
FROM
#Data D
)
SELECT
*,
T - N
FROM Ranks
ORDER BY
[Time];
This yields:
RecordId Time Name T N T - N
----------- ---- ---------- ---- ---- -----
1 10 Running 1 1 0
2 18 Running 2 2 0
3 21 Running 3 3 0
4 29 Walking 4 1 3
5 33 Walking 5 2 3
6 57 Running 6 4 2
7 66 Running 7 5 2
8 77 Running 8 6 2
9 81 Walking 9 3 6
10 89 Running 10 7 3
11 93 Walking 11 4 7
12 99 Running 12 8 4
13 107 Running 13 9 4
14 113 Walking 14 5 9
15 124 Walking 15 6 9
16 155 Walking 16 7 9
17 178 Running 17 10 7
The important part being that each group of "Walking" or "Running" has the same value for T - N that is distinct from any other group with the same name.
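To watch the islands collapse, you can group on T - N directly; a small sketch over the same #Data (note it yields each island's own last Time, not the NULL-capped ToTime of the full solution):
WITH Ranks AS (
    SELECT
        T = Dense_Rank() OVER (ORDER BY Time),
        N = Dense_Rank() OVER (PARTITION BY Name ORDER BY Time),
        *
    FROM #Data D
)
SELECT
    FromTime = Min([Time]),
    LastTime = Max([Time]), -- last row of the island itself
    Name
FROM Ranks
GROUP BY T - N, Name
ORDER BY FromTime;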
Performance
I don't want to belabor the point about my query being faster than other people's. However, given how striking the difference is (when there are no indexes) I wanted to show the numbers in a table format. This is a good technique when high performance of this kind of row-to-row correlation is needed.
Before each query ran, I used DBCC FREEPROCCACHE; DBCC DROPCLEANBUFFERS;. I set MAXDOP to 1 for each query to remove the time-collapsing effects of parallelism. I selected each result set into variables instead of returning them to the client so as to measure only performance and not client data transmission. All queries were given the same ORDER BY clauses. All tests used 17,408 input rows yielding 8,193 result rows.
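A sketch of that measurement setup (my reconstruction, not the author's exact script), shown wrapping the OUTER APPLY version against the #Data table above:
DBCC FREEPROCCACHE;      -- clear cached plans
DBCC DROPCLEANBUFFERS;   -- clear the buffer pool so reads start cold
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
DECLARE @FromTime int, @ToTime int, @Name varchar(10);
-- Assign the result set into variables so client transmission is excluded
SELECT @FromTime = Min(D.Time),
       @ToTime   = X.ToTime,
       @Name     = D.Name
FROM #Data D
OUTER APPLY (
    SELECT TOP 1 ToTime = D2.[Time]
    FROM #Data D2
    WHERE D.[Time] < D2.[Time] AND D.[Name] <> D2.[Name]
    ORDER BY D2.[Time]
) X
GROUP BY X.ToTime, D.Name
ORDER BY Min(D.Time)
OPTION (MAXDOP 1);
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;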
No results are displayed for the following people/reasons:
RichardTheKiwi: could not test; query needs updating
ypercube: no SQL 2012 environment yet :)
Tim S: did not complete tests within 5 minutes
With no index:
CPU Duration Reads Writes
----------- ----------- ----------- -----------
ErikE 344 344 99 0
Simon Kingston 68672 69582 549203 49
With index CREATE UNIQUE CLUSTERED INDEX CI_#Data ON #Data (Time);:
CPU Duration Reads Writes
----------- ----------- ----------- -----------
ErikE 328 336 99 0
Simon Kingston 70391 71291 549203 49 * basically not worse
With index CREATE UNIQUE CLUSTERED INDEX CI_#Data ON #Data (Time, Name);:
CPU Duration Reads Writes
----------- ----------- ----------- -----------
ErikE 375 414 359 0 * IO WINNER
Simon Kingston 172 189 38273 0 * CPU WINNER
So the moral of the story is:
Appropriate Indexes Are More Important Than Query Wizardry
With the appropriate index, Simon Kingston's version wins overall, especially when including query complexity/maintainability.
Heed this lesson well! 38k reads is not really that many, and Simon Kingston's version ran in half the time as mine. The speed increase of my query was entirely due to there being no index on the table, and the concomitant catastrophic cost this gave to any query needing a join (which mine didn't): a full table scan Hash Match killing its performance. With an index, his query was able to do a Nested Loop with a clustered index seek (a.k.a. a bookmark lookup) which made things really fast.
It is interesting that a clustered index on Time alone was not enough. Even though Times were unique, meaning only one Name occurred per time, it still needed Name to be part of the index in order to utilize it properly.
Adding the clustered index to the table when full of data took under 1 second! Don't neglect your indexes.
This will not work in SQL Server 2008; it needs SQL Server 2012 or later, which has the LAG() and LEAD() analytic functions. But I'll leave it here for anyone with newer versions:
SELECT Time AS FromTime
, LEAD(Time) OVER (ORDER BY Time) AS ToTime
, Name
FROM
( SELECT Time
, LAG(Name) OVER (ORDER BY Time) AS PreviousName
, Name
FROM Data
) AS tmp
WHERE PreviousName <> Name
OR PreviousName IS NULL ;
Tested in SQL-Fiddle
With an index on (Time, Name) it will need an index scan.
Edit:
If NULL is a valid value for Name that needs to be taken as a valid entry, use the following WHERE clause:
WHERE PreviousName <> Name
OR (PreviousName IS NULL AND Name IS NOT NULL)
OR (PreviousName IS NOT NULL AND Name IS NULL) ;
I think you're essentially interested in where the 'Name' changes from one record to the next (in order of 'Time'). If you can identify where this happens you can generate your desired output.
Since you mentioned CTEs I'm going to assume you're on SQL Server 2005+ and can therefore use the ROW_NUMBER() function. You can use ROW_NUMBER() as a handy way to identify consecutive pairs of records and then to find those where the 'Name' changes.
How about this:
WITH OrderedTable AS
(
SELECT
*,
ROW_NUMBER() OVER (ORDER BY Time) AS Ordinal
FROM
[YourTable]
),
NameChange AS
(
SELECT
after.Time AS Time,
after.Name AS Name,
ROW_NUMBER() OVER (ORDER BY after.Time) AS Ordinal
FROM
OrderedTable before
RIGHT JOIN OrderedTable after ON after.Ordinal = before.Ordinal + 1
WHERE
ISNULL(before.Name, '') <> after.Name
)
SELECT
before.Time AS FromTime,
after.Time AS ToTime,
before.Name
FROM
NameChange before
LEFT JOIN NameChange after ON after.Ordinal = before.Ordinal + 1
I assume that the RecordIDs are not always sequential, hence the CTE to create a non-breaking sequential number.
SQLFiddle
;with SequentiallyNumbered as (
select *, N = row_number() over (order by RecordId)
from Data)
, Tmp as (
select A.*, RN=row_number() over (order by A.Time)
from SequentiallyNumbered A
left join SequentiallyNumbered B on B.N = A.N-1 and A.name = B.name
where B.name is null)
select A.Time FromTime, B.Time ToTime, A.Name
from Tmp A
left join Tmp B on B.RN = A.RN + 1;
The dataset I used to test
create table Data (
RecordId int,
Time int,
Name varchar(10));
insert Data values
(1 ,10 ,'Running'),
(2 ,18 ,'Running'),
(3 ,21 ,'Running'),
(4 ,29 ,'Walking'),
(5 ,33 ,'Walking'),
(6 ,57 ,'Running'),
(7 ,66 ,'Running');
Here's a CTE solution that gets the results you're seeking:
;WITH TheRecords (FirstTime,SecondTime,[Name])
AS
(
SELECT [Time],
(
SELECT MIN([Time])
FROM ActivityTable at2
WHERE at2.[Time]>at.[Time]
AND at2.[Name]<>at.[Name]
),
[Name]
FROM ActivityTable at
)
SELECT MIN(FirstTime) AS FromTime,SecondTime AS ToTime,MIN([Name]) AS [Name]
FROM TheRecords
GROUP BY SecondTime
ORDER BY FromTime,ToTime

Get the missing value in a sequence of numbers

I made the following query for the SQL Server backend
SELECT TOP(1) (v.rownum + 99)
FROM
(
SELECT incrementNo-99 as id, ROW_NUMBER() OVER (ORDER BY incrementNo) as rownum
FROM proposals
WHERE [year] = '12'
) as v
WHERE v.rownum <> v.id
ORDER BY v.rownum
to find the first unused proposal number.
(It's not about the last record + 1.)
But I realized ROW_NUMBER is not supported in access.
I looked and I can't find something similar.
Does anyone know how to get the same result as a ROW_NUMBER in access?
Maybe there's a better way of doing this.
Actually, people insert their proposal number (incrementNo) with no constraint. The number looks like 13-152: the "xx-" part is the current year and the "-xxx" part is the proposal number. The last 3 digits are supposed to be incremental, but in some cases, maybe 10 times a year, they have to skip some numbers. That's why I can't use an auto-increment.
So I do this query so when they open the form, the default number is the first unused.
How it works:
Because the number starts at 100, I do -99 so it starts at 1.
Then I compare the row number with the id so it looks like this
ROW NUMBER | ID
1 1 (100)
2 2 (101)
3 3 (102)
4 5 (104)<--------- WRONG
5 6 (105)
So now I know that 4 was skipped, so I return (4 + 99) = 103.
If there's a better way, I don't mind changing, but I really like this query.
If there's really no other way and I can't simulate a row number in Access, I will use the pass-through query.
Thank you
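For reference, a correlated count can approximate ROW_NUMBER in Access SQL; a minimal sketch against the proposals table above (column names taken from the original query, untested in Access):
SELECT p.incrementNo,
       (SELECT COUNT(*)
        FROM proposals p2
        WHERE p2.[year] = p.[year]
          AND p2.incrementNo <= p.incrementNo) AS rownum
FROM proposals p
WHERE p.[year] = '12'
ORDER BY p.incrementNo;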
From your question it appears that you are looking for a gap in a sequence of numbers, so:
SELECT b.akey, (
SELECT Top 1 akey
FROM table1 a
WHERE a.akey > b.akey) AS [next]
FROM table1 AS b
WHERE (
SELECT Top 1 akey
FROM table1 a
WHERE a.akey > b.akey) <> [b].[akey]+1
ORDER BY b.akey
Where table1 is the table and akey is the sequenced number.
A variant of the same idea returns the missing value itself (next - 1):
SELECT T.Value, T.next - 1 FROM (
    SELECT b.Value, (
        SELECT TOP 1 Value
        FROM tblSequence a
        WHERE a.Value > b.Value) AS [next]
    FROM tblSequence b
) T WHERE T.next <> T.Value + 1

How do I get the "Next available number" from an SQL Server? (Not an Identity column)

Technologies: SQL Server 2008
So I've tried a few options that I've found on SO, but nothing really provided me with a definitive answer.
I have a table with two columns, (Transaction ID, GroupID) where neither has unique values. For example:
TransID | GroupID
-----------------
23 | 4001
99 | 4001
63 | 4001
123 | 4001
77 | 2113
2645 | 2113
123 | 2113
99 | 2113
Originally, the GroupID was just chosen at random by the user, but now we're automating it. Thing is, we're keeping the existing DB without any changes to the existing data (too much work for too little gain).
Is there a way to query "GroupID" on table "GroupTransactions" for the next available value of GroupID > 2000?
I think from the question you're after the next available number, although that may not be the same as max + 1, right? In that case:
Start with a list of integers and look for those that aren't present in the groupid column, for example:
;WITH CTE_Numbers AS (
    SELECT n = 2001
    UNION ALL
    SELECT n + 1 FROM CTE_Numbers WHERE n < 4000
)
SELECT TOP 1 n
FROM CTE_Numbers num
WHERE NOT EXISTS (SELECT 1 FROM MyTable tab WHERE num.n = tab.groupid)
ORDER BY n
OPTION (MAXRECURSION 2000) -- the default limit of 100 recursion levels is too low for this 2000-number range
Note: you need to tweak the 2001/4000 values in the CTE to allow for the range you want. I assumed the name of your table to be MyTable.
select max(groupid) + 1 from GroupTransactions
The following will find the next gap above 2000:
SELECT MIN(t.GroupID)+1 AS NextID
FROM GroupTransactions t (updlock)
WHERE NOT EXISTS
(SELECT NULL FROM GroupTransactions n WHERE n.GroupID=t.GroupID+1 AND n.GroupID>2000)
AND t.GroupID>2000
There are always many ways to do everything. I resolved this problem by doing like this:
declare @i int = null
declare @t table (i int)

insert into @t values (1)
insert into @t values (2)
--insert into @t values (3)
--insert into @t values (4)
insert into @t values (5)
--insert into @t values (6)

--get the first missing number
select @i = min(RowNumber)
from (
    select ROW_NUMBER() OVER(ORDER BY i) AS RowNumber, i
    from (
        --select distinct in case a number is in there multiple times
        select distinct i
        from @t
        --start after 0 in case there are negative or 0 numbers
        where i > 0
    ) as a
) as b
where RowNumber <> i

--if there are no missing numbers or no records, get the max record
if @i is null
begin
    select @i = isnull(max(i),0) + 1 from @t
end

select @i
In my situation I have a system to generate message numbers or file/case/reservation numbers sequentially from 1 every year. But in some situations a number does not get used (the user was testing/practicing, or whatever reason) and the number was deleted.
You can use a WHERE clause to filter by year if all entries are in the same table, and make it dynamic (my example is hardcoded); if you archive your yearly data it is not needed. The sub-query parts for mID and mID2 must be identical.
The "union select 0 as seq" for mID is there in case your table is empty; this is the base seed number. It can be anything, e.g. 3000000 or {prefix}0000. The field is an integer. If you omit "union select 0 as seq" it will not work on an empty table, and on a table missing ID 1 it will give you the next ID above the lowest present (if the first number is 4 the value returned will be 5).
This query is very quick - hint: the field must be indexed. It was tested on a table of 100,000+ rows; I found that a domain aggregate gets slower as the table grows.
If you remove the "top 1" you will get a list of 'next numbers', but not all the missing numbers in a sequence; i.e. if you have 1 2 4 7 the result will be 3 5 8.
declare @newID int

set @newID = (
    select top 1 mID.seq + 1 as seq
    from (select a.[msg_number] as seq from [tblMSG] a --where a.[msg_date] between '2023-01-01' and '2023-12-31'
          union select 0 as seq) as mID
    left outer join
         (select b.[msg_number] as seq from [tblMSG] b --where b.[msg_date] between '2023-01-01' and '2023-12-31'
         ) as mID2 on mID.seq + 1 = mID2.seq
    where mID2.seq is null
    order by mID.seq)
-- Next: a statement to insert a row with @newID immediately in tblMSG (in a transaction block).
-- Then the row can be updated by your app.
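A minimal sketch of that transaction block, assuming tblMSG has msg_number and msg_date columns as in the query above; SERIALIZABLE keeps two concurrent sessions from claiming the same number:
set transaction isolation level serializable;
begin transaction;
declare @nextNo int = (
    select top 1 mID.seq + 1
    from (select a.[msg_number] as seq from [tblMSG] a
          union select 0 as seq) as mID
    left outer join
         (select b.[msg_number] as seq from [tblMSG] b) as mID2
           on mID.seq + 1 = mID2.seq
    where mID2.seq is null
    order by mID.seq);
insert into [tblMSG] (msg_number, msg_date)
values (@nextNo, getdate());
commit transaction;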
