Can anyone help with this one, please? Our attendance system generates the following data:
| User | Department | Date     | Time     | Reader |
|------|------------|----------|----------|--------|
| A1   | IT         | 1/3/2014 | 11:12:00 | 1      |
| B1   | IT         | 1/3/2014 | 12:28:06 | 1      |
| B1   | IT         | 1/3/2014 | 12:28:07 | 1      |
| A1   | IT         | 1/3/2014 | 13:12:00 | 2      |
| B1   | IT         | 1/3/2014 | 13:28:06 | 2      |
| A1   | IT         | 2/3/2014 | 07:42:15 | 1      |
| A1   | IT         | 2/3/2014 | 16:16:15 | 2      |
where the Reader value means:
1 = Entry
2 = Exit
I'm looking for a SQL query, to run on MS SQL 2005, that summarizes attendance time for each employee on a monthly basis, for instance:
| User | Department | Month  | Time  |
|------|------------|--------|-------|
| A1   | IT         | 3/2014 | 10:34 |
| B1   | IT         | 3/2014 | 01:00 |
This is a fairly difficult problem to solve with SQL, because finding the transitions and ranges in the data is not trivial. I've broken the problem down into a series of steps, built as successive CTEs that build on one another and lead up to a final working solution.
First, I add a row index to the data to provide a simple PK for identifying a unique row:
with NumberedAtt as (
    select
        -- number each user's reads in chronological order; this index is the PK we work with
        row_number() over (partition by [user] order by date, time, reader) as ix,
        att.[user],
        att.[department],
        -- date and time are both datetime values, so adding them combines them
        att.[date] + att.[time] as dt,
        att.[reader]
    from att
)
Then I grab the first and last index value per user, which will be used as the outermost boundaries of each entry/exit range:
, MinMax as (
select [user], min(ix) ixMin, max(ix) ixMax
from NumberedAtt N group by [user]
)
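For reference, here is what these first two CTEs produce for the sample data (worked out by hand from the rows above; department omitted for brevity):
| IX | USER | DT                  | READER |
|----|------|---------------------|--------|
| 1  | A1   | 2014-03-01 11:12:00 | 1      |
| 2  | A1   | 2014-03-01 13:12:00 | 2      |
| 3  | A1   | 2014-03-02 07:42:15 | 1      |
| 4  | A1   | 2014-03-02 16:16:15 | 2      |
| 1  | B1   | 2014-03-01 12:28:06 | 1      |
| 2  | B1   | 2014-03-01 12:28:07 | 1      |
| 3  | B1   | 2014-03-01 13:28:06 | 2      |
And MinMax reduces that to:
| USER | IXMIN | IXMAX |
|------|-------|-------|
| A1   | 1     | 4     |
| B1   | 1     | 3     |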
Next I put these together to generate a list of all exit-to-entry transitions, which are the points where the value of Reader changes from 2 to 1. These are the specific points that exactly identify where one time range ends and the next begins (and this cleanly handles successive duplicate entry/exit reads). By combining this with the first entry and last exit index for each user, a list of all entry/exit transitions is generated:
, Transitions as (
    -- the first entry index for each user opens the first range
    select N.[User], 0 as exitIx, M.ixMin as entryIx
    from NumberedAtt N
    join MinMax M on N.[User] = M.[User]
    where N.ix = M.ixMin
    union
    -- the last exit index for each user closes the final range
    select N.[User], M.ixMax as exitIx, 0 as entryIx
    from NumberedAtt N
    join MinMax M on N.[User] = M.[User]
    where N.ix = M.ixMax
    union
    -- every adjacent pair of rows where Reader changes from 2 (exit) to 1 (entry)
    select A1.[User], A1.ix as exitIx, A2.ix as entryIx
    from NumberedAtt A1
    join NumberedAtt A2 on A1.ix + 1 = A2.ix and A1.[user] = A2.[user]
    where A1.[reader] = 2 and A2.[reader] = 1
)
Here is the output at this point:
| USER | EXITIX | ENTRYIX |
|------|--------|---------|
| A1 | 0 | 1 |
| A1 | 2 | 3 |
| A1 | 4 | 0 |
| B1 | 0 | 1 |
| B1 | 3 | 0 |
Notice that we've neatly captured all of the row indexes where a range of time begins or ends. However, they are offset by one - that is, the entry index in one row corresponds to the exit index in the next row. So we need one more transformation to bring the ranges back together, by adding a row index to this table and joining each row with the following row:
, NumberedTransitions as (
select
row_number() over (partition by [User] order by exitIx) tix,
T.*
from Transitions T
), EntryExit as (
select
aEntry.ix as ixEntry,
aExit.ix as ixExit,
aEntry.[user],
aEntry.[department],
aEntry.[dt] as entryDT,
aExit.[dt] as exitDT
from NumberedTransitions tEntry
join NumberedAtt aEntry on tEntry.entryIx = aEntry.ix and tEntry.[user] = aEntry.[user]
join NumberedTransitions tExit on tEntry.tix + 1 = tExit.tix and tEntry.[user] = tExit.[user]
join NumberedAtt aExit on tExit.exitIx = aExit.ix and tExit.[user] = aExit.[user]
)
After joining the successive ranges together, I also pull the original detail data back in, since I've been working only with the row index values so far. At the end of this subquery, we have identified all the entry/exit ranges per user and "swallowed up" any duplicate reads:
| IXENTRY | IXEXIT | USER | DEPARTMENT | ENTRYDT | EXITDT |
|---------|--------|------|------------|------------------------------|------------------------------|
| 1 | 2 | A1 | IT | March, 01 2014 11:12:00+0000 | March, 01 2014 13:12:00+0000 |
| 3 | 4 | A1 | IT | March, 02 2014 07:42:15+0000 | March, 02 2014 16:16:15+0000 |
| 1 | 3 | B1 | IT | March, 01 2014 12:28:06+0000 | March, 01 2014 13:28:06+0000 |
Now the only thing left to do is group the data together to report total hours per user, per month. It is a little tricky to calculate the total hours, but it can be done by taking the sum of minutes between the ranges and then converting the result back into an hh:mm value:
, Hours as (
    select
        [User],
        [Department],
        Year(EntryDT) Year,
        Month(EntryDT) Month,
        -- format the summed minutes as hh:mm; note that RIGHT(..., 2) truncates
        -- the hour part, so widen it if a monthly total could ever exceed 99 hours
        RIGHT('0' + CAST(SUM(DATEDIFF(Minute, EntryDT, ExitDT)) / 60 as varchar(10)), 2) + ':' +
        RIGHT('0' + CAST(SUM(DATEDIFF(Minute, EntryDT, ExitDT)) % 60 as varchar(2)), 2) as TotalHours
    from EntryExit EE
    group by [User], [Department], Year(EntryDT), Month(EntryDT)
)
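Since each step is a CTE in a single statement, appending a final select after the last CTE runs the whole chain (column names as defined in Hours above):
select [User], [Department], [Year], [Month], TotalHours
from Hours
order by [User], [Year], [Month]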
This gives a final result which is very close to the desired one:
| USER | DEPARTMENT | YEAR | MONTH | TOTALHOURS |
|------|------------|------|-------|------------|
| A1   | IT         | 2014 | 3     | 10:34      |
| B1   | IT         | 2014 | 3     | 01:00      |
A few tweaks could be made to the formatting as desired, but that should be easy to build on top of this framework.
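For example, to present the month in the M/YYYY form from the question, the final select could instead be (a sketch building on the Hours CTE above):
select [User],
       [Department],
       cast([Month] as varchar(2)) + '/' + cast([Year] as varchar(4)) as [Month],
       TotalHours as [Time]
from Hours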
Here is a working demo: http://www.sqlfiddle.com/#!3/f3f37/7
I have a table showing pallets and the amount of product ("units") on those pallets. Individual pallets can have multiple records due to multiple possible defect codes, which means that when I try to sum the total units on all pallets, the same pallet can get counted more than once, which is undesirable. I would like (but don't know how) to add a running tally column showing how many times a specific pallet ID has appeared, so that I can filter out any record where the count is greater than 1:
| Pallet_ID | Units | Defect_Code | COUNT |
+-----------+-------+-------------+-------+
| A1 | 100 | 03 | 1 |
| A1 | 100 | 05 | 2 |
| B1 | 95 | 03 | 1 |
| C1 | 300 | 05 | 1 |
| C1 | 300 | 06 | 2 |
| D1 | 210 | 03 | 1 |
| A1 | 100 | 10 | 3 |
| D1 | 210 | 03 | 2 |
In the above example, the correct sum total of units should be 705. A solution in SQL or in DAX would work (although I lean towards SQL). I have searched for a long time but could not find a solution that fits this particular scenario. Many thanks in advance for your time and consideration!
You may use the windowing function row_number() with the over clause where you partition by the pallet. Within each partition you can control which row is assigned the number 1 by using the order by inside the over clause.
select
    *
from (
    select
        Pallet_ID
        , Units
        , Defect_Code
        , row_number() over(partition by Pallet_ID order by Defect_Code) as count_of
    from yourtable
) d -- SQL Server requires an alias on a derived table
where d.count_of = 1
Note that I have arbitrarily used the column defect_code to order by, as I don't know what other columns may exist. If your table has a date/time value for when the row was created, you could use that instead, or perhaps the unique key of the table.
Side note: I would not recommend using a column alias of "count", as it's a SQL reserved word.
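Building on this, a sketch of the corrected total (same assumed yourtable as above; this returns 705 for the sample data):
select sum(Units) as total_units
from (
    select
        Units
        , row_number() over(partition by Pallet_ID order by Defect_Code) as count_of
    from yourtable
) d
where d.count_of = 1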
I have a bunch of value pairs (Before, After) by users in a table. In ideal scenarios these values should form an unbroken chain. e.g.
| UserId | Before | After |
|--------|--------|-------|
| 1 | 0 | 10 |
| 1 | 10 | 20 |
| 1 | 20 | 30 |
| 1 | 30 | 40 |
| 1 | 40 | 30 |
| 1 | 30 | 52 |
| 1 | 52 | 0 |
Unfortunately, these records originate in multiple different tables and are imported into my investigation table. The other values in the table do not lend themselves to ordering (e.g. CreatedDate) due to some quirks in the system saving them out of order.
I need to produce a list of users with gaps in their data. e.g.
| UserId | Before | After |
|--------|--------|-------|
| 1 | 0 | 10 |
| 1 | 10 | 20 |
| 1 | 20 | 30 |
// Row Deleted (30->40)
| 1 | 40 | 30 |
| 1 | 30 | 52 |
| 1 | 52 | 0 |
I've looked at the other daisy-chaining questions on SO (and online in general), but they all appear to address a problem space where one value in the pair is always lower than the other in a predictable fashion. In my case, there can be increases or decreases.
Is there a way to quickly calculate the longest chain that can be created? I do have a CreatedAt column that would provide some (very rough) relative ordering (when the dates are more than about 10 seconds apart, we could consider them orderable).
Are you not therefore simply after this to get the first row where the "chain" is broken?
SELECT UserID, Before, After
FROM dbo.YourTable YT
WHERE NOT EXISTS (SELECT 1
FROM dbo.YourTable NE
WHERE NE.After = YT.Before)
AND YT.Before != 0;
If you want the last row where the "chain" is broken, just swap the aliases on the columns in the WHERE clause of the NOT EXISTS, as sketched below.
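For clarity, a sketch of that swapped version:
SELECT UserID, Before, After
FROM dbo.YourTable YT
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.YourTable NE
                  WHERE NE.Before = YT.After)
  AND YT.After != 0;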
The following performs hierarchical recursion on your example data and calculates a "chain" count column called h_level:
;with recur_cte([UserId], [Before], [After], h_level) as (
select [UserId], [Before], [After], 0
from dbo.test_table
where [Before] is null
union all
select tt.[UserId], tt.[Before], tt.[After], rc.h_level+1
from dbo.test_table tt join recur_cte rc on tt.UserId=rc.UserId
and tt.[Before]=rc.[After]
where tt.[Before] < tt.[After])
select * from recur_cte;
Results:
| UserId | Before | After | h_level |
|--------|--------|-------|---------|
| 1      | NULL   | 10    | 0       |
| 1      | 10     | 20    | 1       |
| 1      | 20     | 30    | 2       |
| 1      | 30     | 40    | 3       |
| 1      | 30     | 52    | 3       |
Is this helpful? Could you further define which rows to exclude?
If you want users that have more than one chain:
select t.UserID
from <T> as t left outer join <T> as t2
on t2.UserID = t.UserID and t2.Before = t.After
where t2.UserID is null
group by t.UserID
having count(*) > 1;
I have a table consisting of ID, Year, Value
---------------------------------------
| ID | Year | Value |
---------------------------------------
| 1 | 2006 | 100 |
| 1 | 2007 | 200 |
| 1 | 2008 | 150 |
| 1 | 2009 | 250 |
| 2 | 2005 | 50 |
| 2 | 2006 | 75 |
| 2 | 2007 | 65 |
---------------------------------------
I then create a derived, aggregated table consisting of an ID, MinYear, and MaxYear
---------------------------------------
| ID | MinYear | MaxYear |
---------------------------------------
| 1 | 2006 | 2009 |
| 2 | 2005 | 2007 |
---------------------------------------
I then want to find the sum of Values between the MinYear and MaxYear for each ID in the aggregated table, but I am having trouble determining a proper query.
The final table should look something like this
----------------------------------------------------
| ID | MinYear | MaxYear | SumVal |
----------------------------------------------------
| 1 | 2006 | 2009 | 700 |
| 2 | 2005 | 2007 | 190 |
----------------------------------------------------
Right now I can perform all the joins to create the second table. But then I use a fast-forward cursor to iterate through each record of the second table, with the code inside the loop looking like the following:
DECLARE @curMin int
DECLARE @curMax int
DECLARE @curID int
FETCH NEXT FROM fastCursor INTO @curID, @curMin, @curMax
WHILE @@FETCH_STATUS = 0
BEGIN
    -- sum the values for the current ID between its min and max year
    SELECT SUM(Value) FROM ValTable
    WHERE Year >= @curMin AND Year <= @curMax AND ID = @curID
    GROUP BY ID
    FETCH NEXT FROM fastCursor INTO @curID, @curMin, @curMax
END
Having found the sum of values between the specified years, I can connect it back to the second table and wind up with the desired result (the third table).
However, the second table in reality has roughly 4 million rows, so this iteration is extremely time consuming (generating roughly 300 results a minute) and presumably not the best solution.
My question is, is there a way to generate the third table's results without having to use a cursor/for loop?
With a GROUP BY, the sum will only be for the ID in question -- and since the min year and max year are derived from that same ID, you don't need to query twice. The query below should give you exactly what you need. If you have a different requirement, let me know.
SELECT ID, MIN(YEAR) as MinYear, MAX(YEAR) as MaxYear, SUM(VALUE) as SUMVALUE
FROM tablenameyoudidnotsay
GROUP BY ID
You could use a query like the one below.
TableA is your first table, and TableB is the second one
SELECT *,
       (SELECT SUM(Value) FROM TableA WHERE TableA.ID = TableB.ID AND TableA.Year BETWEEN
        TableB.MinYear AND TableB.MaxYear) AS SumValue
FROM TableB
You can put your criteria into a join and obtain the result all as one set, which should be faster:
SELECT b.Id, b.MinYear, b.MaxYear, sum(a.Value)
FROM Table2 b
JOIN Table1 a ON a.Id=b.Id AND b.MinYear <= a.Year AND b.MaxYear >= a.Year
GROUP BY b.Id, b.MinYear, b.MaxYear
I have a requirement to assign sequential numbers to students. The problem is that the data must be partitioned by course first, and then the numbers assigned starting from, say, 1 up to, say, 1000.
Each course should have a gap of at least, say, 20 (this may differ) to accommodate a student in the same course who is left out now but appears later,
and so on.
I have tried partitioning and a recursive CTE but haven't succeeded in getting this kind of series for finally assigning the RollNumber.
Any help would be much appreciated.
Thank you.
You can do this in two steps with a subquery. First get your row_number(), partitioned by course and ordered by student id; then you can bump each partition by 20 by counting how many rownumber = 1 values have appeared so far (one per course) and multiplying by 20.
SELECT
s_no,
course,
rownumber + (SUM(CASE WHEN rownumber = 1 THEN 1 ELSE 0 END) OVER (ORDER BY course, s_no ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) * 20) - 20 AS rownumber
FROM
(
SELECT
s_no,
course,
ROW_NUMBER() OVER (PARTITION BY course ORDER BY s_no) rownumber
FROM test
) sub
ORDER BY course, s_no;
+------+--------+-----------+
| s_no | course | rownumber |
+------+--------+-----------+
| 1 | A | 1 |
| 2 | A | 2 |
| 3 | A | 3 |
| 1 | B | 21 |
| 2 | B | 22 |
| 3 | B | 23 |
| 1 | C | 41 |
| 2 | C | 42 |
| 3 | C | 43 |
+------+--------+-----------+
This isn't exactly your desired output, but I think it's the same as what you are after. You can monkey with the math in the main query and bump each partition's starting position to whatever you want. An equivalent formulation using dense_rank() is sketched below.
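For instance, numbering the courses with dense_rank() expresses the same bump without the running SUM (a sketch against the same assumed test table):
SELECT
    s_no,
    course,
    (DENSE_RANK() OVER (ORDER BY course) - 1) * 20
        + ROW_NUMBER() OVER (PARTITION BY course ORDER BY s_no) AS rollnumber
FROM test
ORDER BY course, s_no;
This produces the same 1, 2, 3, 21, 22, 23, 41, ... sequence shown above.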
I can use a traditional subquery approach to count the occurrences in the last ten minutes. For example, this:
drop table if exists [dbo].[readings]
go
create table [dbo].[readings](
[server] [int] NOT NULL,
[sampled] [datetime] NOT NULL
)
go
insert into readings
values
(1,'20170101 08:00'),
(1,'20170101 08:02'),
(1,'20170101 08:05'),
(1,'20170101 08:30'),
(1,'20170101 08:31'),
(1,'20170101 08:37'),
(1,'20170101 08:40'),
(1,'20170101 08:41'),
(1,'20170101 09:07'),
(1,'20170101 09:08'),
(1,'20170101 09:09'),
(1,'20170101 09:11')
go
-- Count in the last 10 minutes - example periods 08:31 to 08:40, 09:12 to 09:21
select server, sampled,
       (select count(*)
        from readings r2
        where r2.server = r1.server
          and r2.sampled <= r1.sampled
          and r2.sampled > dateadd(minute,-10,r1.sampled)) as countinlast10minutes
from readings r1
order by server, sampled
go
How can I use a window function to obtain the same result? I've tried this:
select server,sampled,
count(case when sampled <= r1.sampled and sampled > dateadd(minute,-10,r1.sampled) then 1 else null end) over (partition by server order by sampled rows between unbounded preceding and current row) as countinlast10minutes
-- count(case when currentrow.sampled <= r1.sampled and currentrow.sampled > dateadd(minute,-10,r1.sampled) then 1 else null end) over (partition by server order by sampled rows between unbounded preceding and current row) as countinlast10minutes
from readings r1
order by server,sampled
But the result is just the running count. Is there a system variable that refers to the current row? Something like currentrow.sampled?
This isn't a very pleasing answer, but one possibility is to first create a helper table with all the minutes:
CREATE TABLE #DateTimes(datetime datetime primary key);
WITH E1(N) AS
(
SELECT 1 FROM (VALUES(1),(1),(1),(1),(1),
(1),(1),(1),(1),(1)) V(N)
) -- 1*10^1 or 10 rows
, E2(N) AS (SELECT 1 FROM E1 a, E1 b) -- 1*10^2 or 100 rows
, E4(N) AS (SELECT 1 FROM E2 a, E2 b) -- 1*10^4 or 10,000 rows
, E8(N) AS (SELECT 1 FROM E4 a, E4 b) -- 1*10^8 or 100,000,000 rows
,R(StartRange, EndRange)
AS (SELECT MIN(sampled),
MAX(sampled)
FROM readings)
,N(N)
AS (SELECT ROW_NUMBER()
OVER (
ORDER BY (SELECT NULL)) AS N
FROM E8)
INSERT INTO #DateTimes
SELECT TOP (SELECT 1 + DATEDIFF(MINUTE, StartRange, EndRange) FROM R) DATEADD(MINUTE, N.N - 1, StartRange)
FROM N,
R;
And then, with that in place, you could use ROWS BETWEEN 9 PRECEDING AND CURRENT ROW:
WITH T1 AS
( SELECT Server,
MIN(sampled) AS StartRange,
MAX(sampled) AS EndRange
FROM readings
GROUP BY Server )
SELECT Server,
sampled,
Cnt
FROM T1
CROSS APPLY
( SELECT r.sampled,
COUNT(r.sampled) OVER (ORDER BY N.datetime ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) AS Cnt
FROM #DateTimes N
LEFT JOIN readings r
ON r.sampled = N.datetime
AND r.server = T1.server
WHERE N.datetime BETWEEN StartRange AND EndRange ) CA
WHERE CA.sampled IS NOT NULL
ORDER BY sampled
The above assumes that there is at most one sample per minute and that all the times are exact minutes. If this isn't true it would need another table expression pre-aggregating by datetimes rounded to the minute.
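A sketch of that pre-aggregation step (my assumption of what it would look like; the DATEADD/DATEDIFF pair rounds each sampled value down to its minute):
WITH PerMinute AS
(
    SELECT server,
           DATEADD(MINUTE, DATEDIFF(MINUTE, 0, sampled), 0) AS sampled_minute,
           COUNT(*) AS samples
    FROM readings
    GROUP BY server, DATEADD(MINUTE, DATEDIFF(MINUTE, 0, sampled), 0)
)
SELECT server, sampled_minute, samples
FROM PerMinute;
The windowed count would then become SUM(samples) OVER (... ROWS BETWEEN 9 PRECEDING AND CURRENT ROW) against the helper table instead of COUNT(r.sampled).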
As far as I know, there is not a simple exact replacement for your subquery using window functions.
Window functions operate on a set of rows and allow you to work with them based on partitions and order.
What you are trying to do isn't the type of partitioning that we can work with in window functions.
Generating the partitions we would need in order to use window functions here would just result in overly complicated code.
I would suggest cross apply() as an alternative to your subquery.
I am not sure if you meant to restrict your results to within 9 minutes, but with sampled > dateadd(...) that is what is happening in your original subquery.
Here is what a window function could look like based on partitioning your samples into 10 minute windows, along with a cross apply() version.
select
r.server
, r.sampled
, CrossApply = x.CountRecent
, OriginalSubquery = (
select count(*)
from readings s
where s.server=r.server
and s.sampled <= r.sampled
/* doesn't include 10 minutes ago */
and s.sampled > dateadd(minute,-10,r.sampled)
)
, Slices = count(*) over(
/* partition by server, 10 minute slices, not the same thing*/
partition by server, dateadd(minute,datediff(minute,0,sampled)/10*10,0)
order by sampled
)
from readings r
cross apply (
select CountRecent=count(*)
from readings i
where i.server=r.server
/* changed to >= */
and i.sampled >= dateadd(minute,-10,r.sampled)
and i.sampled <= r.sampled
) as x
order by server,sampled
results: http://rextester.com/BMMF46402
+--------+---------------------+------------+------------------+--------+
| server | sampled | CrossApply | OriginalSubquery | Slices |
+--------+---------------------+------------+------------------+--------+
| 1 | 01.01.2017 08:00:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 08:02:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 08:05:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 08:30:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 08:31:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 08:37:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 08:40:00 | 4 | 3 | 1 |
| 1 | 01.01.2017 08:41:00 | 4 | 3 | 2 |
| 1 | 01.01.2017 09:07:00 | 1 | 1 | 1 |
| 1 | 01.01.2017 09:08:00 | 2 | 2 | 2 |
| 1 | 01.01.2017 09:09:00 | 3 | 3 | 3 |
| 1 | 01.01.2017 09:11:00 | 4 | 4 | 1 |
+--------+---------------------+------------+------------------+--------+
Thanks, Martin and SqlZim, for your answers. I'm going to raise a Connect enhancement request for something like %%currentrow that can be used in window aggregates. I'm thinking this would lead to much simpler and more natural SQL:
select count(case when sampled <= %%currentrow.sampled and sampled > dateadd(minute,-10,%%currentrow.sampled) then 1 else null end) over (...whatever the window is...)
We can already use expressions like this:
select count(case when sampled <= getdate() and sampled > dateadd(minute,-10,getdate()) then 1 else null end) over (...whatever the window is...)
so I'm thinking it would be great if we could reference a column that's in the current row.