SELECT RIGHT(timestamp, LEN(timestamp) - 12) AS DailyTime,
       LEFT(roundtrip, LEN(roundtrip) - 2) AS HalfHourDuration,
       site_code
FROM tblServer_Status
WHERE timestamp >= dateadd(day, datediff(day, '19000101', CURRENT_TIMESTAMP), '19000101')
  AND timestamp < dateadd(day, datediff(day, '19000101', CURRENT_TIMESTAMP) + 1, '19000101')
  AND server = 'ServerName'
  AND site_code = 'A'
GROUP BY timestamp, roundtrip, site_code
HAVING COUNT(site_code) > 0
ORDER BY timestamp
I have this code, which gives me this kind of output:
| DailyTime | HalfHourDuration | Site_Code|
12:00AM 122 A
12:00AM 143 A
12:00AM 242 A
12:30AM 112 A
12:30AM 222 A
12:30AM 462 A
01:00AM 322 A
01:00AM 642 A
01:00AM 322 A
01:30AM 146 A
01:30AM 167 A
01:30AM 116 A
02:00AM 163 A
02:00AM 145 A
02:00AM 121 A
02:30AM 149 A
02:30AM 135 A
02:30AM 111 A
...................................
But I need to get only the latest duration per time slot, like this:
| DailyTime | HalfHourDuration | Site_Code|
12:00AM 242 A
12:30AM 462 A
01:00AM 322 A
01:30AM 116 A
02:00AM 121 A
02:30AM 111 A
Something like that. Can anyone help me adjust my query?
Thanks.
You can do this using row_number():
with t as (
      <your query here without order by>
     )
select t.*
from (select t.*,
             row_number() over (partition by DailyTime
                                order by HalfHourDuration desc
                               ) as seqnum
      from t
     ) t
where seqnum = 1;
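For example, the question's query drops straight into that pattern. A sketch (the order by inside row_number() decides which row per half hour is kept: ordering by the duration keeps the largest value, so if the table has a column that reflects insert order, such as an IDENTITY column, order by that descending instead to get the truly latest row; the cast assumes the duration is numeric once the last two characters are stripped):
with t as (
      select RIGHT(timestamp, LEN(timestamp) - 12) as DailyTime,
             LEFT(roundtrip, LEN(roundtrip) - 2) as HalfHourDuration,
             site_code
      from tblServer_Status
      where server = 'ServerName'
        and site_code = 'A'
        -- plus the same current-day filter on timestamp as in the question
     )
select DailyTime, HalfHourDuration, site_code
from (select t.*,
             row_number() over (partition by DailyTime
                                order by cast(HalfHourDuration as int) desc
                               ) as seqnum
      from t
     ) t
where seqnum = 1
order by DailyTime;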
Add the MAX function to your HalfHourDuration column.
This will list out only the maximum roundtrip value per timestamp group, as sketched below.
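A minimal sketch of that approach, reusing the expressions from the question (the current-day timestamp filter is omitted for brevity, and the duration is cast to int on the assumption that it is numeric once the trailing two characters are stripped, so MAX compares numbers rather than text):
select RIGHT(timestamp, LEN(timestamp) - 12) as DailyTime,
       MAX(CAST(LEFT(roundtrip, LEN(roundtrip) - 2) as int)) as HalfHourDuration,
       site_code
from tblServer_Status
where server = 'ServerName'
  and site_code = 'A'
group by RIGHT(timestamp, LEN(timestamp) - 12), site_code
order by DailyTime;
Note that this returns the largest duration per half hour, which matches some but not all of the "latest" rows in the expected output (01:00AM, for example, expects 322 while the maximum is 642).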
I have a dataset of price data and would like to calculate the ongoing ATR (Average True Range) for all rows > 21. Row 21 is the AVG([TR]) of rows 2-21 and is equal to 353.7.
The calculation for the rest of the [ATR_20] column needs to be continuous:
ATR_20 (after row 21) = (([Previous ATR_20]*19)+[TR])/20
My dataset:
Date Open High Low Close TotalVolume Prev_Close TR_A TR_B TR_C TR ATR
2017-02-01 5961 5961 5425 5498 22689 NULL 536 NULL NULL NULL NULL
2017-02-02 5697 5868 5615 5734 22210 5498 253 370 117 370 NULL
2017-02-03 5742 5811 5560 5725 15852 5734 251 77 174 251 NULL
2017-02-06 5675 5679 5545 5554 9777 5725 134 46 180 180 NULL
2017-02-07 5597 5613 5426 5481 12692 5554 187 59 128 187 NULL
2017-02-08 5459 5630 5450 5625 9134 5481 180 149 31 180 NULL
2017-02-09 5615 5738 5532 5668 10630 5625 206 113 93 206 NULL
2017-02-10 5651 5661 5488 5602 9709 5668 173 7 180 180 NULL
2017-02-13 5700 6195 5639 6161 26031 5602 556 593 37 593 NULL
2017-02-14 6197 6594 6073 6571 35969 6161 521 433 88 521 NULL
2017-02-15 6510 6650 6275 6492 22046 6571 375 79 296 375 NULL
2017-02-16 6505 6680 6325 6419 12515 6492 355 188 167 355 NULL
2017-02-17 6434 6670 6429 6658 14947 6419 241 251 10 251 NULL
2017-02-21 6800 6957 6603 6654 23838 6658 354 299 55 354 NULL
2017-02-22 6704 6738 6145 6222 25004 6654 593 84 509 593 NULL
2017-02-23 6398 6437 5901 6343 46677 6222 536 215 321 536 NULL
2017-02-24 5280 5589 5260 5404 51757 6343 329 754 1083 1083 NULL
2017-02-27 5437 5461 5260 5300 19831 5404 201 57 144 201 NULL
2017-02-28 5258 5410 5167 5195 15900 5300 243 110 133 243 NULL
2017-03-01 5251 5299 5052 5215 16958 5195 247 104 143 247 NULL
2017-03-02 5160 5231 5063 5130 17805 5215 168 16 152 168 353.7
2017-03-03 5141 5363 5088 5320 14516 5130 275 233 42 275 NULL
I got to this point with the following:
WITH cte_ACIA ([RowNumber], [Date], [Open], [High], [Low], [Close],
[Prev_Close], [TotalVolume], [TR_A], [TR_B], [TR_C])
AS
(SELECT
ROW_NUMBER() OVER (ORDER BY [Date] ASC) RowNumber,
[Date],
[Open],
[High],
[Low],
[Close],
LAG([Close]) OVER(ORDER BY [Date]) AS Prev_Close,
[TotalVolume],
ROUND([High]-[Low], 5) AS TR_A,
ABS(ROUND([High]-LAG([Close]) OVER(ORDER BY [Date]), 5)) AS TR_B,
ABS(ROUND([Low]-LAG([Close]) OVER(ORDER BY [Date]), 5)) AS TR_C
FROM NASDAQ.ACIA_TEMP)
SELECT [RowNumber], [Date], [Open], [High], [Low], [Close], [Prev_Close],
[TotalVolume], [TR_A], [TR_B], [TR_C], [TR],
CASE
WHEN RowNumber = 21 THEN AVG([TR]) OVER (ORDER BY [Date] ASC ROWS 19 PRECEDING)
END AS ATR_20
FROM
(
SELECT [RowNumber],[Date],[Open],[High],[Low],[Close],
IIF(RowNumber = 1, NULL, Prev_Close) Prev_Close,
[TotalVolume],
[TR_A],
IIF(RowNumber > 1, [TR_B], NULL) TR_B,
IIF(RowNumber > 1, [TR_C], NULL) TR_C,
CASE
WHEN TR_A > TR_B AND TR_A > TR_C THEN TR_A
WHEN TR_B > TR_A AND TR_B > TR_C THEN TR_B
ELSE TR_C
END AS TR
FROM cte_ACIA) sub
Please let me know if you have questions or I need to clarify anything.
I suppose you are just looking for a hint; otherwise you would have posted your table definition. We can't construct a query for you since we don't have the basic pieces. However, here's the hint: use an aggregating window function with an OVER clause specifying ROWS PRECEDING, as sketched below.
See SELECT - OVER Clause
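For illustration only, here is the shape of such a windowed aggregate (this is a plain 20-row trailing average, not the recursive (([Previous ATR_20]*19)+[TR])/20 smoothing; daily_tr is a hypothetical stand-in for the derived table in the question that already yields one TR value per date):
select [Date],
       [TR],
       avg([TR]) over (order by [Date] asc
                       rows between 19 preceding and current row) as trailing_avg_20
from daily_tr;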
How can I get the difference between each OrderStop and the OrderStart of the next row?
For example, row 1 has an OrderStop date of 1/31/2007, and I want to count the days between that and the next start date, 1/26/2007. I know there are overlaps.
ID OrderStart OrderStop
132 4/14/2006 1/31/2007
132 1/26/2007 3/14/2007
132 2/1/2007 3/2/2007
132 3/2/2007 3/14/2007
132 3/14/2007 1/8/2010
132 11/26/2008 1/20/2011
132 1/8/2010 7/14/2010
132 7/14/2010 8/15/2012
132 8/15/2012 1/17/2013
132 1/17/2013 3/22/2013
132 3/21/2013 5/2/2013
132 5/2/2013 8/2/2013
132 5/22/2013 8/2/2013
132 7/29/2013 3/6/2014
132 3/5/2014 7/16/2014
132 7/16/2014 6/19/2015
132 8/21/2014 6/19/2015
132 6/19/2015 4/1/2016
132 6/25/2015 9/9/2015
132 4/1/2016 5/3/2016
132 5/3/2016 7/27/2016
132 8/15/2016 11/2/2016
I am trying to accomplish the result below. How can I write a SQL statement that does this?
132 4/14/2006 1/31/2007
132 1/26/2007 4/1/2016
132 4/1/2016 7/27/2016
132 8/15/2016 11/2/2016
Consumable sample data makes things much easier for us, as does knowing the version of SQL Server. Below is a solution that uses 2012's LEAD functionality. The second query is for pre-2012 systems; it accomplishes the same thing but requires a self-join and will not be as efficient.
declare @orders table (id int, orderStart date, orderStop date);
insert @orders
values
(132,'4/14/2006 ','1/31/2007'),
(132,'1/26/2007 ','3/14/2007'),
(132,'2/1/2007 ','3/2/2007 '),
(132,'3/2/2007 ','3/14/2007'),
(132,'3/14/2007 ','1/8/2010 '),
(132,'11/26/2008','1/20/2011'),
(132,'1/8/2010 ','7/14/2010'),
(132,'7/14/2010 ','8/15/2012'),
(132,'8/15/2012 ','1/17/2013'),
(132,'1/17/2013 ','3/22/2013'),
(132,'3/21/2013 ','5/2/2013 '),
(132,'5/2/2013 ','8/2/2013 '),
(132,'5/22/2013 ','8/2/2013 '),
(132,'7/29/2013 ','3/6/2014 '),
(132,'3/5/2014 ','7/16/2014'),
(132,'7/16/2014 ','6/19/2015'),
(132,'8/21/2014 ','6/19/2015'),
(132,'6/19/2015 ','4/1/2016 '),
(132,'6/25/2015 ','9/9/2015 '),
(132,'4/1/2016 ','5/3/2016 '),
(132,'5/3/2016 ','7/27/2016'),
(132,'8/15/2016 ','11/2/2016');
select *,
nextStart = lead(orderStart,1) over (order by orderStart),
daysBetween = abs(datediff(day,lead(orderStart,1) over (order by orderStart), orderStop))
from @orders
order by orderStart;
with preSort as
(
select *, rn = row_number() over (order by orderstart)
from @orders
)
select p2.id, p2.orderStart, p2.orderStop , nextStart = p1.orderStart,
daysBetween = abs(datediff(day, p2.orderStop, p1.orderStart))
from preSort p1 join preSort p2 on p1.rn = p2.rn+1
order by p1.orderStart;
Both return:
id orderStart orderStop nextStart daysBetween
----------- ---------- ---------- ---------- -----------
132 2006-04-14 2007-01-31 2007-01-26 5
132 2007-01-26 2007-03-14 2007-02-01 41
132 2007-02-01 2007-03-02 2007-03-02 0
132 2007-03-02 2007-03-14 2007-03-14 0
132 2007-03-14 2010-01-08 2008-11-26 408
132 2008-11-26 2011-01-20 2010-01-08 377
132 2010-01-08 2010-07-14 2010-07-14 0
132 2010-07-14 2012-08-15 2012-08-15 0
132 2012-08-15 2013-01-17 2013-01-17 0
132 2013-01-17 2013-03-22 2013-03-21 1
132 2013-03-21 2013-05-02 2013-05-02 0
132 2013-05-02 2013-08-02 2013-05-22 72
132 2013-05-22 2013-08-02 2013-07-29 4
132 2013-07-29 2014-03-06 2014-03-05 1
132 2014-03-05 2014-07-16 2014-07-16 0
132 2014-07-16 2015-06-19 2014-08-21 302
132 2014-08-21 2015-06-19 2015-06-19 0
132 2015-06-19 2016-04-01 2015-06-25 281
132 2015-06-25 2015-09-09 2016-04-01 205
132 2016-04-01 2016-05-03 2016-05-03 0
132 2016-05-03 2016-07-27 2016-08-15 19
132 2016-08-15 2016-11-02 NULL NULL
I have a list of customers that can each have a single service, or multiple services, listed. The table that tracks the changes over time has a status indicator of 'Added' or 'Removed'.
What I need: to determine which service(s), if any, are currently active.
Here is a sample set of data:
CUST_ID SRV_ID STATUS ACTION_DATE
12345 102 Added 1/31/17 10:15
12345 189 Added 4/18/17 15:37
12345 189 Removed 4/21/17 14:08
12345 194 Added 5/2/17 14:43
12345 194 Removed 5/5/17 10:02
12345 194 Added 5/5/17 13:06
12345 69 Added 4/19/17 9:36
12345 69 Removed 5/2/17 14:43
12345 73 Added 4/20/17 10:21
12345 73 Removed 4/25/17 11:20
12345 95 Added 5/4/17 9:48
12345 95 Removed 5/4/17 10:05
Records to be returned: 102 on 1/31/17 10:15 and 194 on 5/5/17 13:06
You can find the latest row for each cust_id and srv_id using top 1 with ties and the window function row_number(), and then filter for those with status 'Added':
select *
from (
    select top 1 with ties *
    from your_table
    order by row_number() over (
                 partition by cust_id, srv_id
                 order by action_date desc
             )
) t
where status = 'Added'
Produces:
CUST_ID SRV_ID STATUS ACTION_DATE
12345 102 Added 2017/01/31 10:15
12345 194 Added 2017/05/05 13:06
Like this:
SELECT SRV_ID
FROM YourTable
GROUP BY SRV_ID
-- the IS NULL branch keeps services that were only ever Added (no Removed row), e.g. 102
HAVING MAX(CASE WHEN STATUS='Added' THEN ACTION_DATE END)
       > MAX(CASE WHEN STATUS='Removed' THEN ACTION_DATE END)
    OR MAX(CASE WHEN STATUS='Removed' THEN ACTION_DATE END) IS NULL
I have a table which has sales at the day level:
sales_day
loc_id day_id sales
124 2013-01-01 100
124 2013-01-02 120
124 2013-01-03 140
124 2013-01-04 160
124 2013-01-05 180
124 2013-01-06 200
124 2013-01-07 220
There is a weekly table which is the aggregate of all the days:
loc_id week_id sales
123 201401 1120
Now I need all of the above in a table, as below:
loc_id day_id sales week_sales
124 2013-01-01 100 1120
124 2013-01-02 120 1120
124 2013-01-03 140 1120
124 2013-01-04 160 1120
124 2013-01-05 180 1120
124 2013-01-06 200 1120
124 2013-01-07 220 1120
There are many locations and many weeks and days.
How can I get the data exactly, without a cross join?
Have you tried this:
select loc_id, day_id, sales, week_sales
from table
cross join (
    select sum(sales) as week_sales from table
) t
A window analytic function should help you here...
select loc_id,
day_id,
sales,
sum(sales) over(partition by loc_id,date_part('week', day_id)) as week_total_sales
from <table name>
It will sum the sales by location id and the week of the year to give you the total you are looking for.
In your example, 2013-01-07 was included with the other dates, but it isn't actually part of the same calendar week.
It wasn't clear which DBMS you were referring to. The above is for Netezza. For SQL Server and similar, try changing date_part('week', day_id) to datepart(ww, day_id), as in the sketch below.
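A minimal SQL Server sketch of that variant, assuming the daily table is named sales_day as in the question (the partition also includes the year so week numbers from different years are not mixed together):
select loc_id,
       day_id,
       sales,
       sum(sales) over (partition by loc_id, year(day_id), datepart(ww, day_id)) as week_total_sales
from sales_day;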
I would like to know if there is a way to do the following within a SQL Server script.
Let's say I have the following table.
(To keep things simple, the IDs in this example are INT; in my real scenario they are UNIQUEIDENTIFIER.)
ParentId ChildId
-----------------------
101 201
101 202
101 203
102 204
102 205
103 206
103 207
103 208
103 209
I would like to query the table to get the following result.
So far I was able to get the ChildIndex column using the ROW_NUMBER() function. I am now struggling with the ParentIndex column...
ParentId ChildId ChildIndex ParentIndex
---------------------------------------------------
101 201 1 1
101 202 2 1
101 203 3 1
102 204 1 2
102 205 2 2
103 206 1 3
103 207 2 3
103 208 3 3
103 209 4 3
Here is my query so far:
SELECT ParentId,
       ChildId,
       ROW_NUMBER() OVER (PARTITION BY ParentId ORDER BY ChildId) AS ChildIndex
FROM MyTable
DENSE_RANK() is all you need.
DENSE_RANK() OVER (ORDER BY ParentId) AS ParentIndex
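For example, combined with the ROW_NUMBER() expression from the question (a sketch against the MyTable name used above):
SELECT ParentId,
       ChildId,
       ROW_NUMBER() OVER (PARTITION BY ParentId ORDER BY ChildId) AS ChildIndex,
       DENSE_RANK() OVER (ORDER BY ParentId) AS ParentIndex
FROM MyTable
ORDER BY ParentId, ChildId;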