EDIT: I changed my example and made it simpler. The first quote shows what the source table looks like, the second quote shows what the result should look like.
Hello everyone,
I have multiple parking spots that only send state changes.
A spot sends a "1" when a car arrives, then sends nothing until the car leaves again; at that moment it sends a "0". I need to analyse the data over a long period, so it would be great to see the amount of time per status per hour, to avoid ending up with too many rows (compared to a per-minute breakdown).
The data looks like this (as requested, I reduced it to Parking-ID 10: just the last record from 19.12. and the records from 20.12.):
+------------+------------------+--------+-------------+
| Parking-ID | DateTime | Status | Comment |
+------------+------------------+--------+-------------+
| 10 | 20.12.2019 16:35 | 0 | Car left |
+------------+------------------+--------+-------------+
| 10 | 20.12.2019 08:22 | 1 | Car arrived |
+------------+------------------+--------+-------------+
| 10 | 19.12.2019 22:47 | 0 | Car left |
+------------+------------------+--------+-------------+
To make it a bit harder: in addition to the "free" and "taken" states there is also a "warm" state. For one hour after a car leaves, the spot should be marked as "warm", because some cars come and go within a few minutes and that time range should be reported as "warm".
To avoid too many rows (e.g. one per minute), I would like the summary per hour. For my analysis I need to see, per day, how many hours the spot was taken, how many hours it was warm, and how many hours it was free.
So the result should look something like this (for Parking-ID 10 and for 20.12.2019):
+------------+------------------+--------+----------+---------+
| Parking-ID | DateTime | Status | Duration | Comment |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 23:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 22:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 21:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 20:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 19:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 18:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 17:00 | 0 | 0.42 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 17:00 | 2 | 0.58 | Warm |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 16:00 | 2 | 0.42 | Warm |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 16:00 | 1 | 0.58 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 15:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 14:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 13:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 12:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 11:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 10:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 09:00 | 1 | 1.00 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 08:00 | 1 | 0.63 | Taken |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 08:00 | 0 | 0.37 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 07:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 06:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 05:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 04:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 03:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 02:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 01:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
| 10 | 20.12.2019 00:00 | 0 | 1.00 | Free |
+------------+------------------+--------+----------+---------+
Does someone have a good approach? I have already searched and experimented but couldn't find anything that works.
Thank you and best regards
First, your duration output still looks wrong if you cross-check it. For example, for 20.12.2019 08:00 the values should be 22.00 and 38.00. Please clarify this.
Secondly, it is not clear why 20.12.2019 17:00 has two rows. Please clarify that as well.
Create a calendar table in whatever way you prefer:
CREATE TABLE [dbo].[CalendarDate](
[Dates] [datetime2](0) NOT NULL
PRIMARY KEY CLUSTERED
(
[Dates] ASC
)
) ON [PRIMARY]
GO
insert into [CalendarDate] with(tablock)
select top (100000)
dateadd(day,ROW_NUMBER()over(order by (select null))
,'1950-01-01 00:00:00')
from sys.objects a, sys.objects b, sys.objects c
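As a quick optional sanity check (assuming the [CalendarDate] table above), you can confirm the generated range before moving on:
-- Optional check: about 100,000 consecutive days starting from 1950-01-02
SELECT MIN(Dates) AS FirstDate, MAX(Dates) AS LastDate, COUNT(*) AS TotalDays
FROM dbo.CalendarDate;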
Then create a numbers table as well:
-- Permanent table or #temp, your choice
create Table #Number(Hrs int)
insert into #Number (Hrs)
select top 24 ROW_NUMBER()over(order by number)-1
from master..spt_values
Your table and sample data. I have kept the parking status in a separate table, following normalization.
-- your real table
create table #Parking( ParkingID int, ParkingDateTime Datetime2(0),ParkingStatus tinyint )
insert into #Parking values(10,'2019-12-20 16:35',0),(10,'2019-12-20 08:22',1)
,(10,'2019-12-19 22:47',0)
-- It should be your real table
create table #ParkingStatus( ParkingStatus tinyint,StatusName varchar(50) )
insert into #ParkingStatus values(0,'Car left')
,(1,'Car arrived'),(2,'Free'),(3,'Taken')
,(4,'Warm')
The script:
declare @From Datetime2(0)='2019-12-20'
declare @To Datetime2(0)=dateadd(second,-1,dateadd(day,1,@From))
-- Put the required data in a #temp table, since it will be used many times
create table #ParkingTemp(ParkingID int,ParkingDateTime Datetime2(0)
,ParkingDate Date,ParkingStatus tinyint )
insert into #ParkingTemp (ParkingID,ParkingDateTime
,ParkingDate,ParkingStatus)
select P.ParkingID,ParkingDateTime
,p.ParkingDateTime
,ParkingStatus
from #Parking P
where ParkingDateTime>=@From
and ParkingDateTime<=@To
;With CTE as
(
select ParkingID,ParkingDateTime ,count(*)+1 SplitCount
,ParkingStatus as InitialStatus
from #ParkingTemp
group by ParkingID,ParkingDateTime,ParkingStatus
)
, DistinctIDCTE as
(
select distinct ParkingID
from #ParkingTemp
)
, CTE1 as
(
select Dates
,dateadd(hour,hrs,Dates)ReportDateTime
,ParkingID
from [CalendarDate],#Number N,DistinctIDCTE
where dates>=@From and Dates<=@To
),
CTE2 as
(
select c.ParkingID
,dateadd(minute,-datepart(minute,ParkingDateTime),ParkingDateTime) ParkingDate
,ParkingDateTime,hrs as rownum,InitialStatus
from CTE C
cross apply(select hrs from #Number N where c.SplitCount>n.Hrs)ca
)
,CTE3 as
(
select parkingid,ParkingDateTime as FromDatetime
,ToDatetime
from #ParkingTemp C
cross apply(select top 1 ParkingDateTime as ToDatetime
from #ParkingTemp C1 where c.ParkingID=c1.ParkingID
and c1.ParkingStatus=0 and
c1.ParkingDateTime>c.ParkingDateTime
order by c1.ParkingDateTime )c1
where ParkingStatus=1
)
,CTE4 as
(
select c.ParkingID,c.ReportDateTime
from CTE1 C
outer apply(select top 1 FromDatetime ,ToDatetime
from CTE3 c1 where c.ParkingID=c1.ParkingID
and (ReportDateTime>= FromDatetime and ReportDateTime<=ToDatetime))ca
)
--select * from CTE2
,CTE5 as
(
select c4.ParkingID,c4.ReportDateTime
,case when rownum=0 and InitialStatus=1 then 2
when rownum=1 and InitialStatus=1 then 3
when rownum=0 and InitialStatus=0 then 4
when rownum=1 and InitialStatus=0 then 3
else 2 end as ParkingStatusid
,case when rownum=0 then datediff(minute,ReportDateTime,ParkingDateTime)
when rownum=1 then 60- datepart(minute,ParkingDateTime)
else 1.00 end Duration
,ParkingDateTime
,rownum,InitialStatus
from CTE4 c4
left join CTE2 c2 on c4.ParkingID=c2.ParkingID and c2.ParkingDate =c4.ReportDateTime
)
select c5.ParkingID,c5.ReportDateTime,c5.ParkingStatusid
,Duration,PS.StatusName AS Comment
from CTE5 c5
inner join #ParkingStatus ps on c5.ParkingStatusid=ps.ParkingStatus
order by ReportDateTime desc
Clean up:
drop table #Parking,#ParkingStatus,#Number,#ParkingTemp
Alternative, improved version:
;WITH CTE
AS (SELECT ParkingID,
ParkingDateTime,
COUNT(*) + 1 SplitCount,
ParkingStatus AS InitialStatus
FROM #ParkingTemp
GROUP BY ParkingID,
ParkingDateTime,
ParkingStatus),
DistinctIDCTE
AS (SELECT DISTINCT
ParkingID
FROM #ParkingTemp),
CTE1
AS (SELECT Dates,
DATEADD(hour, hrs, Dates) ReportDateTime,
ParkingID
FROM [CalendarDate],
#Number N,
DistinctIDCTE
WHERE dates >= @From
AND Dates <= @To),
CTE2
AS (SELECT c.ParkingID,
DATEADD(minute, -DATEPART(minute, ParkingDateTime), ParkingDateTime) ParkingDate,
ParkingDateTime,
hrs AS rownum,
InitialStatus
FROM CTE C
CROSS APPLY
(
SELECT hrs
FROM #Number N
WHERE c.SplitCount > n.Hrs
) ca),
CTE5
AS (SELECT c4.ParkingID,
c4.ReportDateTime,
CASE
WHEN rownum = 0
AND InitialStatus = 1
THEN 2
WHEN rownum = 1
AND InitialStatus = 1
THEN 3
WHEN rownum = 0
AND InitialStatus = 0
THEN 4
WHEN rownum = 1
AND InitialStatus = 0
THEN 3
ELSE 2
END AS ParkingStatusid,
CASE
WHEN rownum = 0
THEN DATEDIFF(minute, ReportDateTime, ParkingDateTime)
WHEN rownum = 1
THEN 60 - DATEPART(minute, ParkingDateTime)
ELSE 1.00
END Duration,
ParkingDateTime,
rownum,
InitialStatus
FROM CTE1 c4
LEFT JOIN CTE2 c2 ON c4.ParkingID = c2.ParkingID
AND c2.ParkingDate = c4.ReportDateTime)
SELECT c5.ParkingID,
c5.ReportDateTime,
c5.ParkingStatusid,
Duration,
PS.StatusName AS Comment
FROM CTE5 c5
INNER JOIN #ParkingStatus ps ON c5.ParkingStatusid = ps.ParkingStatus
ORDER BY ReportDateTime DESC;
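For the per-day totals the question ultimately asks for (hours taken, warm and free per day), a GROUP BY over this hourly output is enough once the Duration units are settled (see my note above about 22.00 and 38.00). A minimal sketch, assuming the final SELECT has been written into a hypothetical #HourlyResult temp table with the same columns and with Duration expressed in fractional hours:
-- Hypothetical roll-up: hours per day, per parking spot and status
SELECT ParkingID,
CAST(ReportDateTime AS date) AS ReportDate,
Comment,
SUM(Duration) AS HoursInStatus
FROM #HourlyResult -- hypothetical table holding the final SELECT's output
GROUP BY ParkingID, CAST(ReportDateTime AS date), Comment
ORDER BY ParkingID, ReportDate, Comment;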
Note: please clear up my doubts above, and post different sample data as well, e.g. a case where there are more than two parking status changes for the same ParkingID within one hour.
Related
I need help with SQL Server: what would be the easiest way to fill in the missing Begin and End inventory values? The values shown are verified numbers for that week.
+------+--------+-------+----------+-----+
| Week | ItemNr | Begin | Increase | End |
+------+--------+-------+----------+-----+
| 1 | 1001 | 100 | -10 | 90 |
| 2 | 1001 | | 0 | |
| 3 | 1001 | 90 | 0 | 90 |
| 4 | 1001 | | 20 | |
| 5 | 1001 | | 100 | |
| 6 | 1001 | | -20 | |
| 7 | 1001 | | 0 | |
| 8 | 1001 | 200 | 10 | 210 |
| 9 | 1001 | | 0 | |
| 10 | 1001 | | -50 | -50 |
| 11 | 1001 | | 0 | |
+------+--------+-------+----------+-----+
If Begin is NULL, it should take the previous week's End.
End = Begin + Increase
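For example, week 2 has no Begin, so it takes week 1's End (100 - 10 = 90) and its own End is 90 + 0 = 90; week 4 then takes week 3's End (90), giving End = 90 + 20 = 110, which in turn becomes week 5's Begin.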
A couple of window functions get you the result. ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW is effectively the scope you get by default when you specify an ORDER BY in the OVER clause (strictly the default frame is RANGE rather than ROWS, but it behaves the same here). However, since the other window function uses a non-default frame, I have stated both explicitly so you can see the difference.
WITH VTE AS(
SELECT *
FROM (VALUES ( 1,1001,100,-10),
( 2,1001,NULL, 0),
( 3,1001, 90, 0),
( 4,1001,NULL, 20),
( 5,1001,NULL,100),
( 6,1001,NULL,-20),
( 7,1001,NULL, 0),
( 8,1001,200, 10),
( 9,1001,NULL, 0),
(10,1001,NULL,-50),
(11,1001,NULL, 0)) V(Week, ItemNr, [Begin],Increase))
SELECT Week,
ItemNr,
ISNULL([Begin],S.Starting + SUM(Increase) OVER (PARTITION BY ItemNr ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)) AS [Begin],
Increase,
S.Starting + SUM(Increase) OVER (PARTITION BY ItemNr ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS [End]
FROM VTE V
CROSS APPLY (SELECT TOP 1 [Begin] AS Starting
FROM VTE ca
WHERE ca.ItemNr = V.ItemNr
ORDER BY Week ASC) S;
Note: this appears to be some kind of stock system. It's worth noting that it doesn't take into account that the stock level could go wrong. For example, say an item is stolen; the values of [End] and of [Begin] (where Begin is NULL) would then be wrong. If this needs to be taken into consideration, we need to know that in the question.
Edit: solution to cater for "lost" stock. This takes the last "known" Begin value and aggregates from there. In the extended sample below, although 10 items of item 1002 were "sold" in week 1, week 2 shows a Begin value of 35; that means 5 items are missing (stolen?), and this difference needs to affect all stock levels going forward. Thus you get:
WITH VTE AS(
SELECT *
FROM (VALUES ( 1,1001,100,-10),
( 2,1001,NULL, 0),
( 3,1001, 90, 0),
( 4,1001,NULL, 20),
( 5,1001,NULL,100),
( 6,1001,NULL,-20),
( 7,1001,NULL, 0),
( 8,1001,200, 10),
( 9,1001,NULL, 0),
(10,1001,NULL,-50),
(11,1001,NULL, 0),
(1,1002,50,-10),
(2,1002,35,0),--Begin value lowered. Some items went "missing"
(3,1002,NULL,5),
(4,1002,40,10)) V(Week, ItemNr, [Begin],Increase))
SELECT Week,
ItemNr,
[Begin],
Increase,
LastKnown,
WeekKnown,
ISNULL([Begin],S.LastKnown + SUM(Increase) OVER (PARTITION BY ItemNr, WeekKnown ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)) AS ActualBegin,
ISNULL([Begin],S.LastKnown + SUM(Increase) OVER (PARTITION BY ItemNr, WeekKnown ORDER BY Week ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)) AS [End]
FROM VTE V
CROSS APPLY (SELECT TOP 1 [Begin] AS LastKnown, Week AS WeekKnown
FROM VTE ca
WHERE ca.ItemNr = V.ItemNr
AND ca.Week <= V.Week
AND ca.[Begin] IS NOT NULL
ORDER BY Week DESC) S
ORDER BY V.ItemNr, V.Week;
Here is another way to do it:
SELECT T1.Week,
T1.ItemNr,
CASE WHEN T1.[Begin] IS NULL THEN
(SELECT MAX([Begin]) + SUM(Increase) FROM #T WHERE Week < T1.Week AND ItemNr = T1.ItemNr)
ELSE
T1.[Begin]
END [Begin],
T1.Increase,
CASE WHEN T1.[Begin] IS NULL THEN
(SELECT MAX([Begin]) + SUM(Increase) FROM #T WHERE Week < T1.Week AND ItemNr = T1.ItemNr)
ELSE
T1.[Begin]
END + T1.Increase [End]
FROM #T T1;
Returns:
+------+--------+-------+----------+-----+
| Week | ItemNr | Begin | Increase | End |
+------+--------+-------+----------+-----+
| 1 | 1001 | 100 | -10 | 90 |
| 2 | 1001 | 90 | 0 | 90 |
| 3 | 1001 | 90 | 0 | 90 |
| 4 | 1001 | 90 | 20 | 110 |
| 5 | 1001 | 110 | 100 | 210 |
| 6 | 1001 | 210 | -20 | 190 |
| 7 | 1001 | 190 | 0 | 190 |
| 8 | 1003 | 200 | 10 | 210 |
| 9 | 1003 | 210 | 0 | 210 |
| 10 | 1003 | 210 | -50 | 160 |
| 11 | 1003 | 160 | 0 | 160 |
+------+--------+-------+----------+-----+
I need help with the following.
I have the basic query below:
select count(transactions)
from tx
where customer = 'AA'
This gives me a count of all transactions for the relevant client.
What I want is a query that gives me the same output, but broken down into the latest 12 weeks (Monday to Sunday being one full week). These values should be presented as 12 columns, with the header of each column being the last date of the week (i.e. Sunday's date).
Furthermore, the total transactions are split by status into failed and success. I would like the rows to be Success and Failed, so the final table would look like this:
25/03/2018 (week 1)| 01/04/2018| ........ |17/06/2018 << (week 12)
Success 100 | 200 | ........ | 150
Failed 3 | 4 | ........ | 6
Any ideas how this can be done?
Thank you in advance.
Returning pivoted data is usually a lot more hassle than it is worth and you should just leave this up to your presentation layer, which will handle your dynamic columns with much more grace. Regardless of the presentation layer you are using (SSRS, Excel, Power BI, etc), you will get the most flexibility by providing it a standard set of unpivoted data:
declare @t table (id int, TransactionDate date, Outcome varchar(8));
insert into @t values
(1,getdate()-1,'Success')
,(2,getdate()-2,'Success')
,(3,getdate()-2,'Success')
,(4,getdate()-3,'Success')
,(5,getdate()-6,'Failed')
,(6,getdate()-6,'Success')
,(7,getdate()-7,'Success')
,(8,getdate()-8,'Success')
,(9,getdate()-8,'Success')
,(10,getdate()-10,'Success')
,(11,getdate()-10,'Failed')
,(12,getdate()-11,'Success')
,(13,getdate()-13,'Success')
;
with w(ws) as(select dateadd(week, datediff(week,0,getdate())-w, 0) -- Monday at the start of the week, minus w.w weeks for all 12
from (values(0),(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11)) as w(w)
)
,d(ws,d) as(select w.ws
,dateadd(day,d.d,w.ws) as d -- Each day that makes up each week for equijoin to Transactions table
from w
cross join (values(0),(1),(2),(3),(4),(5),(6)) as d(d)
)
select d.ws as WeekStart
,t.Outcome
,count(t.TransactionDate) as Transactions
from d
left join @t as t
on d.d = t.TransactionDate
group by d.ws
,t.Outcome
order by d.ws
,t.Outcome;
Output:
+-------------------------+---------+--------------+
| WeekStart | Outcome | Transactions |
+-------------------------+---------+--------------+
| 2018-04-09 00:00:00.000 | NULL | 0 |
| 2018-04-16 00:00:00.000 | NULL | 0 |
| 2018-04-23 00:00:00.000 | NULL | 0 |
| 2018-04-30 00:00:00.000 | NULL | 0 |
| 2018-05-07 00:00:00.000 | NULL | 0 |
| 2018-05-14 00:00:00.000 | NULL | 0 |
| 2018-05-21 00:00:00.000 | NULL | 0 |
| 2018-05-28 00:00:00.000 | NULL | 0 |
| 2018-06-04 00:00:00.000 | NULL | 0 |
| 2018-06-11 00:00:00.000 | NULL | 0 |
| 2018-06-11 00:00:00.000 | Success | 2 |
| 2018-06-18 00:00:00.000 | NULL | 0 |
| 2018-06-18 00:00:00.000 | Failed | 2 |
| 2018-06-18 00:00:00.000 | Success | 5 |
| 2018-06-25 00:00:00.000 | NULL | 0 |
| 2018-06-25 00:00:00.000 | Success | 4 |
+-------------------------+---------+--------------+
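If you really do need the pivoted shape in T-SQL itself, conditional aggregation over the same data is one option. A minimal sketch, run in the same batch as the @t sample above; only three of the twelve week columns are shown, the column aliases are illustrative, and real Sunday-date column headers would require dynamic SQL:
declare @wk0 datetime = dateadd(week, datediff(week, 0, getdate()), 0); -- Monday starting the current week, same expression as the w CTE above
select t.Outcome
,count(case when t.TransactionDate >= dateadd(week, -2, @wk0) and t.TransactionDate < dateadd(week, -1, @wk0) then 1 end) as TwoWeeksAgo
,count(case when t.TransactionDate >= dateadd(week, -1, @wk0) and t.TransactionDate < @wk0 then 1 end) as LastWeek
,count(case when t.TransactionDate >= @wk0 and t.TransactionDate < dateadd(week, 1, @wk0) then 1 end) as ThisWeek
-- ...repeat the pattern (or build it dynamically) for all 12 weeks
from @t as t
group by t.Outcome;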
I have to generate a result set from a SQL query which should match the following; let me explain both the inputs and the outputs:
I have a table named Orders which has orders on some days at some hours. I have been asked to provide a result set listing all days between two dates (e.g. 2017-10-01 and 2017-10-07), with all 24 hours for each day; even if a day or hour had no orders, it should still appear, with a value of 0.
+------------+------+-------------+
| Day | Hour | TotalOrders |
+------------+------+-------------+
| 2017-10-01 | 0 | 0 |
+------------+------+-------------+
| 2017-10-01 | 1 | 3 |
+------------+------+-------------+
| 2017-10-01 | 2 | 4 |
+------------+------+-------------+
| 2017-10-01 | 3 | 0 |
+------------+------+-------------+
| 2017-10-01 | 4 | 7 |
+------------+------+-------------+
| 2017-10-01 | 5 | 0 |
+------------+------+-------------+
| 2017-10-01 | 6 | 0 |
+------------+------+-------------+
| 2017-10-01 | 7 | 9 |
+------------+------+-------------+
| 2017-10-01 | 8 | 0 |
+------------+------+-------------+
| 2017-10-01 | 9 | 0 |
+------------+------+-------------+
| 2017-10-01 | 10 | 0 |
+------------+------+-------------+
| 2017-10-01 | 11 | 0 |
+------------+------+-------------+
| 2017-10-01 | 12 | 0 |
+------------+------+-------------+
| 2017-10-01 | 13 | 0 |
+------------+------+-------------+
| 2017-10-01 | 14 | 0 |
+------------+------+-------------+
| 2017-10-01 | 15 | 0 |
+------------+------+-------------+
| 2017-10-01 | 16 | 0 |
+------------+------+-------------+
| 2017-10-01 | 17 | 0 |
+------------+------+-------------+
| 2017-10-01 | 18 | 0 |
+------------+------+-------------+
| 2017-10-01 | 19 | 0 |
+------------+------+-------------+
| 2017-10-01 | 20 | 0 |
+------------+------+-------------+
| 2017-10-01 | 21 | 0 |
+------------+------+-------------+
| 2017-10-01 | 22 | 0 |
+------------+------+-------------+
| 2017-10-01 | 23 | 0 |
+------------+------+-------------+
| 2017-10-02 | 0 | 0 |
+------------+------+-------------+
| 2017-10-02 | 1 | 0 |
+------------+------+-------------+
| 2017-10-02 | 2 | 0 |
+------------+------+-------------+
| 2017-10-02 | 3 | 0 |
+------------+------+-------------+
| 2017-10-02 | 4 | 0 |
+------------+------+-------------+
| 2017-10-02 | 5 | 0 |
+------------+------+-------------+
| 2017-10-02 | 6 | 0 |
+------------+------+-------------+
| 2017-10-02 | 7 | 0 |
+------------+------+-------------+
| and so on .................. |
+------------+------+-------------+
So the result set above should contain every day between the given two dates, and each day should have all 24 hours, regardless of whether that day or hour had any orders.
I did it using a nested CTE:
DECLARE @MinDate DATE = '20171001',
@MaxDate DATE = '20171006';
;WITH INNER_CTE as(
SELECT TOP (DATEDIFF(DAY, @MinDate, @MaxDate) + 1)
Date = DATEADD(DAY, ROW_NUMBER() OVER(ORDER BY a.object_id) - 1, @MinDate)
FROM sys.all_objects a
CROSS JOIN sys.all_objects b) ,
OUTER_CTE as (
select * from INNER_CTE
cross apply (
SELECT TOP (24) n = ROW_NUMBER() OVER (ORDER BY [object_id]) -1
FROM sys.all_objects ORDER BY n) t4
)
select t1.Date, t1.n [Hour], ISNULL(t2.TotalOrders,0) TotalOrders from
OUTER_CTE t1
LEFT JOIN orders t2 on t1.Date = t2.[Day] and t1.n = t2.[Hour]
Good reading about generating sequences using a query here: https://sqlperformance.com/2013/01/t-sql-queries/generate-a-set-1
I prefer to do this with a tally table instead of using loops; the performance is much better. I keep a tally table on my system as a view, like this:
create View [dbo].[cteTally] as
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
select N from cteTally
GO
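A quick way to confirm the view behaves as expected (it should return exactly 10,000 sequential numbers):
-- Sanity check: expect Total = 10000, FirstN = 1, LastN = 10000
SELECT COUNT(*) AS Total, MIN(N) AS FirstN, MAX(N) AS LastN
FROM dbo.cteTally;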
Now that we have our tally table, we can use some basic math to get the desired output. Something along these lines:
declare @Date1 datetime = '2017-10-01';
declare @Date2 datetime = '2017-10-07';
select Day = convert(date, DATEADD(hour, t.N, @Date1))
, Hour = t.N - 1
, TotalOrders = COUNT(o.OrderID)
from cteTally t
left join Orders o on o.OrderDate = DATEADD(hour, t.N, @Date1)
where t.N <= DATEDIFF(hour, @Date1, @Date2)
group by DATEDIFF(hour, @Date1, @Date2)
, t.N
The simplest way is to just use a temporary table or table variable to fill the desired result set, and then count the number of Orders for each row.
declare @Date1 date = '2017-10-01';
declare @Date2 date = '2017-10-07';
declare @Hour int;
declare @Period table (Day Date, Hour Time);
while @Date1 <= @Date2
begin
set @Hour = 0;
while @Hour < 24
begin
insert into @Period (Day, Hour) values (@Date1, TimeFromParts(@Hour,0,0,0,0));
set @Hour = @Hour + 1;
end
set @Date1 = DateAdd(Day, 1, @Date1);
end
select Day, Hour,
(select count(*)
from Orders
where Orders.Day = Period.Day and Orders.Hour = Period.Hour) as TotalOrders
from @Period as Period;
This query gives me Event values from 1 to 20 within an hour; how do I extend it to also require that a consecutive Event value is >= 200?
SELECT ID, count(Event) as numberoftimes
FROM table_name
WHERE Event >=1 and Event <=20
GROUP BY ID, DATEPART(HH, AtHour)
HAVING DATEPART(HH, AtHour) <= 1
ORDER BY ID desc
In this dummy 24h table:
+----+-------+--------+
| ID | Event | AtHour |
+----+-------+--------+
| 1 | 1 | 11:00 |
| 1 | 4 | 11:01 |
| 1 | 1 | 11:02 |
| 1 | 20 | 11:03 |
| 1 | 200 | 11:04 |
| 1 | 1 | 13:00 |
| 1 | 1 | 13:05 |
| 1 | 2 | 13:06 |
| 1 | 500 | 13:07 |
| 1 | 39 | 13:10 |
| 1 | 50 | 13:11 |
| 1 | 2 | 13:12 |
+----+-------+--------+
I would like to select IDs where an Event value between 1 and 20 is followed immediately by a value greater than or equal to 200, within an hour.
The expected result should be something like this:
+----+--------+
| ID | AtHour |
+----+--------+
| 1 | 11 |
| 1 | 13 |
| 2 | 11 |
| 2 | 14 |
| 3 | 09 |
| 3 | 12 |
+----+--------+
Or just how many times it has happened per unique ID, instead of per hour.
Please excuse me, I am still rusty with post formatting!
CREATE TABLE data (Id INT, Event INT, AtHour SMALLDATETIME);
INSERT data (Id, Event, AtHour) VALUES
(1,1,'2017-03-16 11:00:00'),
(1,4,'2017-03-16 11:01:00'),
(1,1,'2017-03-16 11:02:00'),
(1,20,'2017-03-16 11:03:00'),
(1,200,'2017-03-16 11:04:00'),
(1,1,'2017-03-16 13:00:00'),
(1,1,'2017-03-16 13:05:00'),
(1,2,'2017-03-16 13:06:00'),
(1,500,'2017-03-16 13:07:00'),
(1,39,'2017-03-16 13:10:00')
;
; WITH temp as (
SELECT rownum = ROW_NUMBER() OVER (PARTITION BY id ORDER BY AtHour)
, *
FROM data
)
SELECT a.id, DATEPART(HOUR, a.AtHour) as AtHour, COUNT(*) AS NumOfPairs
FROM temp a JOIN temp b ON a.id = b.id AND a.rownum = b.rownum-1
WHERE a.Event BETWEEN 1 and 20 AND b.Event >= 200
AND DATEDIFF(MINUTE, a.AtHour, b.AtHour) <= 60
GROUP BY a.id, DATEPART(HOUR, a.AtHour)
;
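If you only want how many times this happened per ID, rather than per hour, the same pairing works; just drop the hour from the grouping. A minimal sketch against the same data table as above:
; WITH temp as (
SELECT rownum = ROW_NUMBER() OVER (PARTITION BY id ORDER BY AtHour)
, *
FROM data
)
SELECT a.id, COUNT(*) AS NumOfPairs
FROM temp a JOIN temp b ON a.id = b.id AND a.rownum = b.rownum - 1
WHERE a.Event BETWEEN 1 and 20 AND b.Event >= 200
AND DATEDIFF(MINUTE, a.AtHour, b.AtHour) <= 60
GROUP BY a.id
;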
I have a table MachineStatus which stores the status history of a machine.
The table looks like this:
| MachineStatusId | From | To | State | MachineId |
----------------------------------------------------------------------------------------------------------------------------------------
| B065FC43-DBE7-E611-9BDB-801F02F47041 | 2017-01-30 07:00:00 | 2017-01-30 08:00:00 | 1 | 92649C7B-E962-4EB1-B631-00086EECA98A |
| B165FC43-DBE7-E611-9BDB-801F02F47041 | 2017-01-30 08:00:00 | 2017-01-30 09:00:00 | 200 | 92649C7B-E962-4EB1-B631-00086EECA98A |
| B265FC43-DBE7-E611-9BDB-801F02F47041 | 2017-01-30 07:00:00 | 2017-01-30 08:00:00 | 1 | A2649C7B-E962-4EB1-B631-00086EECA98A |
| B365FC43-DBE7-E611-9BDB-801F02F47041 | 2017-01-30 08:00:00 | 2017-01-30 09:00:00 | 500 | A2649C7B-E962-4EB1-B631-00086EECA98A |
For each machine and each status change, it stores a record with the [From] and [To] times during which a certain [State] was valid.
I would like to calculate the time each machine spent in each state.
The result should look like this:
| MachineId | Alias | State1 | State200 | State500 |
-------------------------------------------------------------------------------------------------
| 92649C7B-E962-4EB1-B631-00086EECA98A | Somename | 60 | 60 | 0 |
| A2649C7B-E962-4EB1-B631-00086EECA98A | Some other name | 60 | 0 | 60 |
Each state should be represented as a column.
Here is what I have tried so far:
SELECT
MAX(mState.MachineId),
MAX(m.Alias),
SUM(CASE mState.State WHEN 1 THEN mState.Diff ELSE 0 END) AS CritTime,
SUM(CASE mState.State WHEN 200 THEN mState.Diff ELSE 0 END) AS OpTime,
SUM(CASE mState.State WHEN 500 THEN mState.Diff ELSE 0 END) AS OtherTime
FROM
(
SELECT
DATEDIFF(MINUTE, ms.[From], ISNULL(ms.[To], GETDATE())) AS Diff,
ms.State AS State,
MachineId
FROM
MachineStatus ms
WHERE
ms.[From] >= @rangeFrom AND
(ms.[To] <= @rangeEnd OR ms.[To] IS NULL)
) as mState
INNER JOIN Machines m ON m.MachineId = mState.MachineId
GROUP BY
mState.MachineId,
m.Alias,
mState.State
Calculating the time and grouping the result by machine works, but I cannot figure out how to reduce the result set to one row per machine, with one column per state.
I started from your subquery, without applying any SUM to your calculated data:
SELECT m.MachineId,
m.Alias,
Minutes,
s.State
FROM machines m
INNER JOIN states s ON m.MachineId = s.MachineId
Then you can use PIVOT on [State] and calculate the SUM for every state, in this form:
WITH Calc AS
(
SELECT m.MachineId,
m.Alias,
Minutes,
s.State
FROM machines m
INNER JOIN states s ON m.MachineId = s.MachineId
)
SELECT MachineId, Alias, [State1], [State2], [State500]
FROM
(SELECT MachineId, Alias, State, Minutes FROM Calc) AS SourceTable
PIVOT
(
SUM(Minutes) FOR State IN ([State1],[State2],[State500])
) AS PivotTable;
This is the result:
+--------------------------------------+---------+--------+--------+----------+
| MachineId | Alias | State1 | State2 | State500 |
+--------------------------------------+---------+--------+--------+----------+
| 92649C7B-E962-4EB1-B631-00086EECA98A | Alias 1 | 100 | 100 | 100 |
+--------------------------------------+---------+--------+--------+----------+
| A2649C7B-E962-4EB1-B631-00086EECA98A | Alias 2 | 10 | 20 | 70 |
+--------------------------------------+---------+--------+--------+----------+
Notice that you must know in advance which states your data can return, since PIVOT requires a fixed column list.
You can check it here: http://rextester.com/DHDX77489
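If the set of states is not known up front, the column list has to be built at runtime with dynamic SQL. A minimal sketch, assuming the same machines/states demo tables and that states.State already holds the values used as column names in the static query above (State1, State2, State500, ...):
DECLARE @cols nvarchar(max), @sql nvarchar(max);
-- Build the pivot column list from the states that actually exist
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(State)
FROM states
FOR XML PATH('')), 1, 1, '');
SET @sql = N'
SELECT MachineId, Alias, ' + @cols + N'
FROM (SELECT m.MachineId, m.Alias, s.State, s.Minutes
FROM machines m
INNER JOIN states s ON m.MachineId = s.MachineId) AS SourceTable
PIVOT (SUM(Minutes) FOR State IN (' + @cols + N')) AS PivotTable;';
EXEC sys.sp_executesql @sql;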