I have this table in DB2:
DATE
----------
09/11/2021
06/10/2021
28/11/2021
17/11/2021
11/10/2021
24/11/2021
07/11/2021
30/11/2021
I want to count how many times each date appears in the table, grouped by year and month, and display it like this:
| YEAR | OCTOBER | NOVEMBER |
|------|---------|----------|
| 2021 |       2 |        6 |
As the months are a known, fixed set, you can use a sum of a CASE expression:
select year(datecol) as year
,sum(case when month(datecol) = 1 then 1 else 0 end) as jan
,sum(case when month(datecol) = 2 then 1 else 0 end) as feb
,sum(case when month(datecol) = 3 then 1 else 0 end) as mar
,sum(case when month(datecol) = 4 then 1 else 0 end) as apr
,sum(case when month(datecol) = 5 then 1 else 0 end) as may
,sum(case when month(datecol) = 6 then 1 else 0 end) as jun
,sum(case when month(datecol) = 7 then 1 else 0 end) as jul
,sum(case when month(datecol) = 8 then 1 else 0 end) as aug
,sum(case when month(datecol) = 9 then 1 else 0 end) as sep
,sum(case when month(datecol) = 10 then 1 else 0 end) as oct
,sum(case when month(datecol) = 11 then 1 else 0 end) as nov
,sum(case when month(datecol) = 12 then 1 else 0 end) as dec
from datetest
group by year(datecol)
order by 1;
That will give you output similar to this:
YEAR JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC
----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- ----------- -----------
2018 0 0 0 0 0 0 0 0 0 0 3 0
2019 0 0 0 0 0 0 0 0 0 1 2 0
2020 0 0 0 0 0 0 0 0 0 1 1 0
2021 0 0 0 0 0 0 0 0 0 2 6 0
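To sanity-check the conditional-aggregation idea outside Db2, the same SUM(CASE ...) pattern runs unchanged on SQLite. This is a sketch, not the answer's exact query: the table and column names (datetest, datecol) mirror the answer, the sample dates are the question's dates converted to ISO format, and strftime stands in for Db2's YEAR()/MONTH() functions.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE datetest (datecol TEXT)")
# The question's sample dates (DD/MM/YYYY), stored as ISO strings
con.executemany(
    "INSERT INTO datetest VALUES (?)",
    [("2021-11-09",), ("2021-10-06",), ("2021-11-28",), ("2021-11-17",),
     ("2021-10-11",), ("2021-11-24",), ("2021-11-07",), ("2021-11-30",)],
)
# strftime replaces Db2's YEAR()/MONTH(); the SUM(CASE ...) shape is identical.
rows = con.execute("""
    SELECT strftime('%Y', datecol) AS yr,
           SUM(CASE WHEN strftime('%m', datecol) = '10' THEN 1 ELSE 0 END) AS oct,
           SUM(CASE WHEN strftime('%m', datecol) = '11' THEN 1 ELSE 0 END) AS nov
    FROM datetest
    GROUP BY yr
    ORDER BY yr
""").fetchall()
print(rows)  # [('2021', 2, 6)]
```

Each CASE contributes 1 only when the row's month matches, so the SUM is a per-month count, exactly as in the Db2 query above.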
You may use the generic routine described in the answer to "dynamic pivot SQL Query in Db2".
Use the following call to get the desired result set for your case:
CALL PIVOT
(
'SELECT YEAR (DATE) AS YEAR, TO_CHAR (DATE, ''MONTH'') AS MONTH FROM DATES'
, 'YEAR'
, 'MONTH'
, 'MONTH'
, 'count'
, 'SESSION.PIVOT'
, '-'
, ?, ?, ?
);
The result is:
YEAR  NOVEMBER  OCTOBER
2021         6        2
I have a simple table containing case number (ID), opening_date and end_date, where end_date can be NULL (unfinished cases). It looks like this:
ID opening_date end_date
1 2021-01-04 2021-01-14
2 2021-01-04 2021-01-26
3 2021-01-14 2021-02-15
4 2021-02-01 NULL
5 2021-02-04 2021-02-26
6 2021-02-10 2021-02-15
I'm trying to write a SELECT query which shows, by month, week, or day (whichever), how many cases were opened (opening_date) and how many were closed (end_date) in each period. The problem is that I cannot simply filter on the opening or end date, because not every date in the opening_date column appears in end_date, and vice versa. I think I need a separate date range, generated as an external table in the first column or something like that, so that when neither an opening nor an end date occurs in a given day/week/month, a row of zeros is still shown, as in the first dates below. Example result by day:
date openings endings
2021-01-01 0 0
2021-01-02 0 0
2021-01-03 0 0
2021-01-04 2 0
2021-01-05 0 0
2021-01-06 0 0
2021-01-07 0 0
2021-01-08 0 0
2021-01-09 0 0
2021-01-10 0 0
2021-01-11 0 0
2021-01-12 0 0
2021-01-13 0 0
2021-01-14 1 1
2021-01-15 0 0
2021-01-16 0 0
2021-01-17 0 0
2021-01-18 0 0
2021-01-19 0 0
2021-01-20 0 0
2021-01-21 0 0
2021-01-22 0 0
2021-01-23 0 0
2021-01-24 0 0
2021-01-25 0 0
2021-01-26 0 1
2021-01-27 0 0
2021-01-28 0 0
2021-01-29 0 0
2021-01-30 0 0
2021-01-31 0 0
2021-02-01 1 0
2021-02-02 0 0
2021-02-03 0 0
2021-02-04 1 0
2021-02-05 0 0
2021-02-06 0 0
2021-02-07 0 0
2021-02-08 0 0
2021-02-09 0 0
2021-02-10 1 0
2021-02-11 0 0
2021-02-12 0 0
2021-02-13 0 0
2021-02-14 0 0
2021-02-15 0 2
2021-02-16 0 0
2021-02-17 0 0
2021-02-18 0 0
2021-02-19 0 0
2021-02-20 0 0
2021-02-21 0 0
2021-02-22 0 0
2021-02-23 0 0
2021-02-24 0 0
2021-02-25 0 0
2021-02-26 0 1
2021-02-27 0 0
2021-02-28 0 0
By months:
Month openings endings
2021-01 3 2
2021-02 3 3
Please help me. Thanks in advance.
You need a calendar table for this: you start with the calendar and LEFT JOIN everything else onto it. To get the counts for each day, unpivot the two date columns, group, and count the daily totals. The calendar can be a real table, or you can generate it on the fly, like this:
WITH
L0 AS ( SELECT c = 1
FROM (VALUES(1),(1),(1),(1),(1),(1),(1),(1),
(1),(1),(1),(1),(1),(1),(1),(1)) AS D(c) ),
L1 AS ( SELECT c = 1 FROM L0 A, L0 B, L0 C ),
Nums AS ( SELECT rownum = ROW_NUMBER() OVER(ORDER BY (SELECT 1))
FROM L1 ),
Dates AS ( SELECT [date] = DATEADD(day, rownum, '20180101')
FROM Nums )
SELECT
d.[date],
openings = ISNULL(t.openings, 0),
endings = ISNULL(t.endings, 0)
FROM Dates d
LEFT JOIN (
SELECT v.AllDates,
openings = COUNT(IsOpen),
endings = COUNT(IsEnd)
FROM YourTable t
CROSS APPLY (VALUES
(opening_date, 1, NULL),
(end_date, NULL, 1)
) v(AllDates, IsOpen, IsEnd)
GROUP BY v.AllDates
) t ON t.AllDates = d.[date];
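The same calendar-plus-LEFT-JOIN approach can be sketched in SQLite so it is runnable anywhere: a recursive CTE stands in for the tally-table date generator above, and a small UNION ALL does the unpivot instead of CROSS APPLY. The table name `cases` and the fixed date range are assumptions for the demo; the sample rows are the question's data.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cases (id INT, opening_date TEXT, end_date TEXT)")
con.executemany("INSERT INTO cases VALUES (?,?,?)", [
    (1, "2021-01-04", "2021-01-14"), (2, "2021-01-04", "2021-01-26"),
    (3, "2021-01-14", "2021-02-15"), (4, "2021-02-01", None),
    (5, "2021-02-04", "2021-02-26"), (6, "2021-02-10", "2021-02-15"),
])
rows = con.execute("""
    WITH RECURSIVE cal(d) AS (           -- on-the-fly calendar table
        SELECT '2021-01-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM cal WHERE d < '2021-02-28'
    ),
    ev AS (                              -- unpivot: one row per event date
        SELECT opening_date AS d, 1 AS opened, 0 AS closed FROM cases
        UNION ALL
        SELECT end_date, 0, 1 FROM cases WHERE end_date IS NOT NULL
    )
    SELECT cal.d,
           COALESCE(SUM(ev.opened), 0) AS openings,
           COALESCE(SUM(ev.closed), 0) AS endings
    FROM cal LEFT JOIN ev ON ev.d = cal.d
    GROUP BY cal.d
    ORDER BY cal.d
""").fetchall()
print(rows[3])   # ('2021-01-04', 2, 0)
print(rows[13])  # ('2021-01-14', 1, 1)
```

Because the calendar drives the LEFT JOIN, days with no events still appear, with COALESCE turning the NULL sums into the zero rows the question asks for.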
I have a survey table with many columns, but I am focusing on these two: survey_date and overall_rating. I am not sure if this can be done in a single query. I am using SQL Server 2012. This is my sample data:
select survey_date, overall_rating from survey
survey_date overall_rating
2017-01-06 15:09:51.940 6
2017-02-06 14:18:18.620 4
2017-05-07 16:03:12.037 7
2017-05-23 10:41:30.357 7
2017-05-23 10:41:30.357 5
2017-05-24 12:05:25.217 8
2017-06-01 09:03:47.727 7
2017-06-05 09:01:07.283 9
2017-06-05 09:28:12.597 6
2017-06-15 09:47:29.407 7
2017-07-06 12:10:50.003 2
2017-07-06 13:45:52.997 7
2017-08-06 14:00:35.403 5
2017-08-09 12:21:17.367 8
I need to count the occurrences of each rating 1-10 for each month, and sum them up. For example, for June '15: rating 10 has 1, rating 9 has 10, ...
This is the result table:
Month 10 9 8 7 6 5 4 3 2 1 Avg Score Total Total >=6 CSI
June'15 1 10 20 3 0 0 0 0 0 0 8 34 34 100%
July'15 1 16 14 0 0 0 0 0 1 0 9 32 31 99%
August'15 7 6 6 0 0 0 0 0 0 0 9 19 19 100%
September'15 0 2 2 0 0 0 0 0 0 0 9 4 4 100%
November'15 0 1 2 0 0 0 0 0 0 0 8 3 3 100%
December'15 0 7 3 4 2 0 0 0 0 0 8 16 16 100%
I have tried this query, but it is partly wrong, as the month is duplicated for each rating:
select si.yr, si.mn,
case when si.overall_rating = 10 then count(si.overall_rating) else 0 end as '10',
case when si.overall_rating = 9 then count(si.overall_rating) else 0 end as '9',
case when si.overall_rating = 8 then count(si.overall_rating) else 0 end as '8',
case when si.overall_rating = 7 then count(si.overall_rating) else 0 end as '7',
case when si.overall_rating = 6 then count(si.overall_rating) else 0 end as '6',
case when si.overall_rating = 5 then count(si.overall_rating) else 0 end as '5',
case when si.overall_rating = 4 then count(si.overall_rating) else 0 end as '4',
case when si.overall_rating = 3 then count(si.overall_rating) else 0 end as '3',
case when si.overall_rating = 2 then count(si.overall_rating) else 0 end as '2',
case when si.overall_rating = 1 then count(si.overall_rating) else 0 end as '1',
sum(si.overall_rating) as month_count
from
(select YEAR(s.survey_date) yr, MONTH(s.survey_date) mn, s.overall_rating
from survey s where s.status='Submitted' and s.survey_date >= '2017-01-01' AND s.survey_date <= '2017-12-31'
group by YEAR(s.survey_date), MONTH(s.survey_date), s.overall_rating) si group by si.yr, si.mn, si.overall_rating;
Results:
yr mm 10 9 8 7 6 5 4 3 2 1 total
2017 1 0 0 0 0 1 0 0 0 0 0 6
2017 2 0 0 0 0 0 0 1 0 0 0 4
2017 5 0 0 0 0 0 1 0 0 0 0 5
2017 5 0 0 0 1 0 0 0 0 0 0 7
2017 5 0 0 1 0 0 0 0 0 0 0 8
2017 6 0 0 0 0 1 0 0 0 0 0 6
2017 6 0 0 0 1 0 0 0 0 0 0 7
2017 6 0 1 0 0 0 0 0 0 0 0 9
2017 7 0 0 0 0 0 0 0 0 1 0 2
2017 7 0 0 0 1 0 0 0 0 0 0 7
2017 8 0 0 0 0 0 1 0 0 0 0 5
2017 8 0 0 1 0 0 0 0 0 0 0 8
As you can see, months 5 and 6 are repeated for different ratings. Can this be done in a single query? Thanks.
I think I understand what you are trying to achieve here and you'll be pleased to know you are not far off. You have the right idea in using a conditional aggregate, but you need to wrap your conditional case expression in the aggregate, not the other way around. To do a conditional count, you can simply return 1 for a condition match and 0 for a no match and then sum up the result.
Doing this allows your group by to remain nice and simple:
declare #t table(survey_date datetime,overall_rating int);
insert into #t values ('2017-01-06 15:09:51.940',6),('2017-02-06 14:18:18.620',4),('2017-05-07 16:03:12.037',7),('2017-05-23 10:41:30.357',7),('2017-05-23 10:41:30.357',5),('2017-05-24 12:05:25.217',8),('2017-06-01 09:03:47.727',7),('2017-06-05 09:01:07.283',9),('2017-06-05 09:28:12.597',6),('2017-06-15 09:47:29.407',7),('2017-07-06 12:10:50.003',2),('2017-07-06 13:45:52.997',7),('2017-08-06 14:00:35.403',5),('2017-08-09 12:21:17.367',8);
select dateadd(m,datediff(m,0,survey_date),0) as [Month]
,sum(case when overall_rating = 10 then 1 else 0 end) as [10]
,sum(case when overall_rating = 9 then 1 else 0 end) as [9]
,sum(case when overall_rating = 8 then 1 else 0 end) as [8]
,sum(case when overall_rating = 7 then 1 else 0 end) as [7]
,sum(case when overall_rating = 6 then 1 else 0 end) as [6]
,sum(case when overall_rating = 5 then 1 else 0 end) as [5]
,sum(case when overall_rating = 4 then 1 else 0 end) as [4]
,sum(case when overall_rating = 3 then 1 else 0 end) as [3]
,sum(case when overall_rating = 2 then 1 else 0 end) as [2]
,sum(case when overall_rating = 1 then 1 else 0 end) as [1]
,count(overall_rating) as ScoresReturned
,sum(overall_rating) as TotalScore
,avg(cast(overall_rating as decimal(10,0))) as Average
from #t
group by dateadd(m,datediff(m,0,survey_date),0)
order by [Month];
Output:
+-------------------------+----+---+---+---+---+---+---+---+---+---+----------------+------------+----------+
| Month | 10 | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | ScoresReturned | TotalScore | Average |
+-------------------------+----+---+---+---+---+---+---+---+---+---+----------------+------------+----------+
| 2017-01-01 00:00:00.000 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 6 | 6.000000 |
| 2017-02-01 00:00:00.000 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 4 | 4.000000 |
| 2017-05-01 00:00:00.000 | 0 | 0 | 1 | 2 | 0 | 1 | 0 | 0 | 0 | 0 | 4 | 27 | 6.750000 |
| 2017-06-01 00:00:00.000 | 0 | 1 | 0 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 4 | 29 | 7.250000 |
| 2017-07-01 00:00:00.000 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 2 | 9 | 4.500000 |
| 2017-08-01 00:00:00.000 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 2 | 13 | 6.500000 |
+-------------------------+----+---+---+---+---+---+---+---+---+---+----------------+------------+----------+
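One piece of the query above worth unpacking is the month-truncation idiom: `DATEADD(m, DATEDIFF(m, 0, survey_date), 0)` floors a datetime to the first of its month by counting whole months since SQL Server's date zero (1900-01-01) and adding them back onto date zero. The same arithmetic, spelled out in Python as a sketch (the function name `month_floor` is mine, not part of the answer):

```python
from datetime import datetime

def month_floor(dt: datetime) -> datetime:
    # Whole months elapsed since 1900-01-01, SQL Server's "date 0"
    # (this is what DATEDIFF(m, 0, dt) computes)
    months = (dt.year - 1900) * 12 + (dt.month - 1)
    # Add them back onto date zero -> first instant of dt's month
    # (this is what DATEADD(m, months, 0) computes)
    return datetime(1900 + months // 12, months % 12 + 1, 1)

print(month_floor(datetime(2017, 6, 15, 9, 47, 29)))  # 2017-06-01 00:00:00
```

Because the time-of-day and day-of-month are discarded by the month count, every survey in the same calendar month maps to the same value, which is why it works as a GROUP BY key.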
select si.yr, si.mn,
case when si.overall_rating = 10 then count(si.overall_rating) else 0 end as '10',
case when si.overall_rating = 9 then count(si.overall_rating) else 0 end as '9',
case when si.overall_rating = 8 then count(si.overall_rating) else 0 end as '8',
case when si.overall_rating = 7 then count(si.overall_rating) else 0 end as '7',
case when si.overall_rating = 6 then count(si.overall_rating) else 0 end as '6',
case when si.overall_rating = 5 then count(si.overall_rating) else 0 end as '5',
case when si.overall_rating = 4 then count(si.overall_rating) else 0 end as '4',
case when si.overall_rating = 3 then count(si.overall_rating) else 0 end as '3',
case when si.overall_rating = 2 then count(si.overall_rating) else 0 end as '2',
case when si.overall_rating = 1 then count(si.overall_rating) else 0 end as '1',
sum(si.overall_rating) as month_count
from
(select YEAR(s.survey_date) yr, MONTH(s.survey_date) mn, s.overall_rating
from survey s where s.status='Submitted' and s.survey_date >= '2017-01-01' AND s.survey_date <= '2017-12-31'
group by YEAR(s.survey_date), MONTH(s.survey_date), s.overall_rating) si group by si.yr, si.mn;
You must remove si.overall_rating from the outer GROUP BY.
Oh! I posted late; anyway, you can try this alternative.
DATA:
IF ( OBJECT_ID('tempdb..#temptable') IS NOT NULL )
BEGIN
DROP TABLE #temptable
END
CREATE TABLE #temptable
(
survey_date DATETIME ,
overall_rating NUMERIC(22,2)
)
INSERT INTO #temptable
( survey_date, overall_rating )
VALUES ( '2017-01-06 15:09:51.940', 6 ),
( '2017-02-06 14:18:18.620', 4 ),
( '2017-05-07 16:03:12.037', 7 ),
( '2017-05-23 10:41:30.357', 7 ),
( '2017-05-23 10:41:30.357', 5 ),
( '2017-05-24 12:05:25.217', 8 ),
( '2017-06-01 09:03:47.727', 7 ),
( '2017-06-05 09:01:07.283', 9 ),
( '2017-06-05 09:28:12.597', 6 ),
( '2017-06-15 09:47:29.407', 7 ),
( '2017-07-06 12:10:50.003', 2 ),
( '2017-07-06 13:45:52.997', 7 ),
( '2017-08-06 14:00:35.403', 5 ),
( '2017-08-09 12:21:17.367', 8 )
QUERY:
;
WITH CTE
AS ( SELECT DATENAME(month, survey_date) + ' '''
+ RIGHT(CAST(YEAR(survey_date) AS NVARCHAR(4)),
2) AS [Month] ,
ISNULL([1], 0) [1] ,
ISNULL([2], 0) [2] ,
ISNULL([3], 0) [3] ,
ISNULL([4], 0) [4] ,
ISNULL([5], 0) [5] ,
ISNULL([6], 0) [6] ,
ISNULL([7], 0) [7] ,
ISNULL([8], 0) [8] ,
ISNULL([9], 0) [9] ,
ISNULL([10], 0) [10],
Total,
Average
FROM ( SELECT survey_date ,
COUNT(overall_rating) overall_rating,
CAST(SUM(overall_rating) AS INT) Total,
AVG(overall_rating) Average
FROM ( SELECT DATEADD(MONTH,
DATEDIFF(MONTH,
0, survey_date),
0) survey_date ,
overall_rating
FROM #temptable
) T
GROUP BY t.survey_date
) PVT PIVOT ( SUM(overall_rating) FOR overall_rating IN ( [1],
[2], [3], [4],
[5], [6], [7],
[8], [9], [10] ) ) P
)
SELECT [Month] ,
ISNULL([1], 0) [1] ,
ISNULL([2], 0) [2] ,
ISNULL([3], 0) [3] ,
ISNULL([4], 0) [4] ,
ISNULL([5], 0) [5] ,
ISNULL([6], 0) [6] ,
ISNULL([7], 0) [7] ,
ISNULL([8], 0) [8] ,
ISNULL([9], 0) [9] ,
ISNULL([10], 0) [10],
Total,
Average
FROM CTE
RESULT:
Month 1 2 3 4 5 6 7 8 9 10 Total Average
----------------------- ------ ---- ----- ----- ----- ----- ---- ---- ----- ----- ----------- -------------
January '17 1 0 0 0 0 0 0 0 0 0 6 6.000000
February '17 1 0 0 0 0 0 0 0 0 0 4 4.000000
May '17 0 0 0 4 0 0 0 0 0 0 27 6.750000
June '17 0 0 0 4 0 0 0 0 0 0 29 7.250000
July '17 0 2 0 0 0 0 0 0 0 0 9 4.500000
August '17 0 2 0 0 0 0 0 0 0 0 13 6.500000
(6 row(s) affected)
I have this query that selects the tables named like CODE_. I want to join these results to each CODE table (CODE_COUNTRY, CODE_COUNTY, and so on) to list the values of SHORT_DESC; they will be values like United States, Mexico for country; Brown, Green for county; Male, Female for gender*. The value in SHORT_DESC matches the value in the Transformations table's PC_Column*, so I need to join to the Transformations table to find the columns that match, and use left and right joins to show columns that don't match in either database. How do I find the system column to join to the Transformations table?
SELECT tb.[schema_id] AS 'Schema'
,tb.[OBJECT_ID] AS 'TableObj'
,tb.[NAME] AS 'TableName'
,C.NAME as 'Column'
,T.name AS 'Type'
,C.max_length
,C.is_nullable
FROM
SYS.COLUMNS C
INNER JOIN
SYS.TABLES tb ON tb.[object_id] = C.[object_id]
INNER JOIN
SYS.TYPES T ON C.system_type_id = T.user_type_id
--INNER JOIN
-- Bridge_test.dbo.Transformations TF ON TF.PC_Column = C.NAME
WHERE
tb.[is_ms_shipped] = 0
AND tb.[NAME] LIKE '%code_%'
AND C.name = 'SHORT_DESC'
--C.NAME LIKE '%country%'
ORDER BY
tb.[Name]
*Note: these are the columns to add to this query
query result
Schema TableObj TableName Column Type max_length is_nullable *SHORT_DESC Value (from CODE_ table), *PC_Column (from Transformations table)
1 1826105546 CODE_COUNTRY SHORT_DESC nvarchar 20 0 United States, USA
1 2018106230 CODE_COUNTY SHORT_DESC nvarchar 20 0 Mexico, Mexico
For example, the value in the SHORT_DESC should match the PC_Column
CODE_VALUE_KEY CODE_VALUE SHORT_DESC MEDIUM_DESC LONG_DESC STATUS
1001 1001 Autauga Autauga Autauga A
1003 1003 Baldwin Baldwin Baldwin A
1005 1005 Barbour Barbour Barbour A
1007 1007 Bibb Bibb Bibb A
1009 1009 Blount Blount Blount A
Transformations Table
GM_Column Value Note1 F4 PC_Table_Column PC_Table PC_Column Value1 Note2 F10
Ugender M NULL = demographics_gender demographics gender Male NULL NULL
Ugender F NULL = demographics_gender demographics gender Female NULL NULL
Ugender U NULL = demographics_gender demographics gender Unknown NULL NULL
Umarstat D NULL = demographics_marital_status demographics marital_status Divorced NULL NULL
Umarstat M NULL = demographics_marital_status demographics marital_status Married NULL NULL
Umarstat O NULL = demographics_marital_status demographics marital_status Other NULL NULL
Umarstat S NULL = demographics_marital_status demographics marital_status Single NULL NULL
System tables -- I queried the system tables looking for a common column with the user data, and I don't see a common column to join to the user-data tables
SELECT * FROM sys.tables WHERE name LIKE '%code%'
/*
name object_id principal_id schema_id parent_object_id type type_desc create_date modify_date is_ms_shipped is_published is_schema_published lob_data_space_id filestream_data_space_id max_column_id_used lock_on_bulk_load uses_ansi_nulls is_replicated has_replication_filter is_merge_published is_sync_tran_subscribed has_unchecked_assembly_data text_in_row_limit large_value_types_out_of_row is_tracked_by_cdc lock_escalation lock_escalation_desc
CODE_SOURCETYPE 5195785 NULL 1 0 U USER_TABLE 45:44.0 16:28.9 0 0 0 0 NULL 18 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_CONTROLTYPE 8671795 NULL 1 0 U USER_TABLE 45:37.8 16:28.9 0 0 0 0 NULL 17 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_SPONSORSTATUS 21195842 NULL 1 0 U USER_TABLE 45:44.1 16:29.0 0 0 0 0 NULL 19 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_COUNTRY 24671852 NULL 1 0 U USER_TABLE 45:37.8 16:29.0 0 0 0 0 NULL 24 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_SPONSORTYPE 37195899 NULL 1 0 U USER_TABLE 45:44.1 16:29.2 0 0 0 0 NULL 17 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_COUNTY 40671909 NULL 1 0 U USER_TABLE 45:37.9 16:29.2 0 0 0 0 NULL 18 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_STATE 53195956 NULL 1 0 U USER_TABLE 45:44.2 16:29.3 0 0 0 0 NULL 19 0 1 0 0 0 0 0 0 0 0 0 TABLE
CODE_CREDCARDTYPE 56671966 NULL 1 0 U USER_TABLE 45:38.0 16:29.3 0 0 0 0 NULL 17 0 1 0 0 0 0 0 0 0 0 0 TABLE
*/
SELECT * FROM sys.objects WHERE type = 'U' --user tables
/*
has table name
name object_id principal_id schema_id parent_object_id type type_desc create_date modify_date is_ms_shipped is_published is_schema_published
ADDRESSSCHEDULE 7671075 NULL 1 0 U USER_TABLE 03:10.6 03:14.2 0 0 0
ADVANCENAME 23671132 NULL 1 0 U USER_TABLE 03:10.7 03:10.7 0 0 0
COMBINEMAILING 55671246 NULL 1 0 U USER_TABLE 03:11.1 03:11.1 0 0 0
DEMOGRAPHICS 87671360 NULL 1 0 U USER_TABLE 03:11.4 03:11.4 0 0 0
*/
SELECT * FROM sys.columns
/*
has column name
object_id name column_id system_type_id user_type_id max_length precision scale collation_name is_nullable is_ansi_padded is_rowguidcol is_identity is_computed is_filestream is_replicated is_non_sql_subscribed is_merge_published is_dts_replicated is_xml_document xml_collection_id default_object_id rule_object_id is_sparse is_column_set
119671474 HONORS 10 231 231 12 0 0 SQL_Latin1_General_CP1_CI_AS 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
119671474 TRANSCRIPT_DATE 11 61 61 8 23 3 NULL 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
119671474 CLASS_RANK 12 56 56 4 10 0 NULL 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
119671474 CLASS_SIZE 13 56 56 4 10 0 NULL 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
*/
SELECT * FROM sys.tables
/*
has table name
name object_id principal_id schema_id parent_object_id type type_desc create_date modify_date is_ms_shipped is_published is_schema_published lob_data_space_id filestream_data_space_id max_column_id_used lock_on_bulk_load uses_ansi_nulls is_replicated has_replication_filter is_merge_published is_sync_tran_subscribed has_unchecked_assembly_data text_in_row_limit large_value_types_out_of_row is_tracked_by_cdc lock_escalation lock_escalation_desc
ADDRESSSCHEDULE 7671075 NULL 1 0 U USER_TABLE 03:10.6 03:14.2 0 0 0 0 NULL 39 0 1 0 0 0 0 0 0 0 0 0 TABLE
ADVANCENAME 23671132 NULL 1 0 U USER_TABLE 03:10.7 03:10.7 0 0 0 0 NULL 18 0 1 0 0 0 0 0 0 0 0 0 TABLE
COMBINEMAILING 55671246 NULL 1 0 U USER_TABLE 03:11.1 03:11.1 0 0 0 0 NULL 16 0 1 0 0 0 0 0 0 0 0 0 TABLE
DEMOGRAPHICS 87671360 NULL 1 0 U USER_TABLE 03:11.4 03:11.4 0 0 0 0 NULL 31 0 1 0 0 0 0 0 0 0 0 0 TABLE
*/
SELECT * FROM sys.tables WHERE name LIKE '%code%'
SELECT * FROM systypes
/*
has column type
name xtype status xusertype length xprec xscale tdefault domain uid reserved collationid usertype variable allownulls type printfmt prec scale collation
text 35 0 35 16 0 0 0 0 4 0 872468488 19 0 1 35 NULL NULL NULL SQL_Latin1_General_CP1_CI_AS
uniqueidentifier 36 0 36 16 0 0 0 0 4 0 NULL 0 0 1 37 NULL 16 NULL NULL
date 40 0 40 3 10 0 0 0 4 0 NULL 0 0 1 0 NULL 10 0 NULL
time 41 0 41 5 16 7 0 0 4 0 NULL 0 0 1 0 NULL 16 7 NULL
datetime2 42 0 42 8 27 7 0 0 4 0 NULL 0 0 1 0 NULL 27 7 NULL
datetimeoffset 43 0 43 10 34 7 0 0 4 0 NULL 0 0 1 0 NULL 34 7 NULL
...
34 rows
*/
I'm using SQL Server 2008 R2
I posted on a sheet for formatting
https://docs.google.com/spreadsheets/d/1k3rubaSm0M4jXf5VKgk3DuS8QkuPwgzPZYJnsMJdQcI/edit?usp=sharing
I've got a table which shows: Power, Time, Diff-Days, Diff-Hours, Diff-Minutes where the diff columns use datediff and lag to calculate the difference in times between the rows.
(PowerkW) (Time) (Diff-Days) (Diff-Hours) (Diff-Minutes)
31011.39 2014-01-01 00:30:00 NULL NULL NULL
31838.74 2014-01-01 00:40:00 0 0 -10
32356.35 2014-01-01 00:50:00 0 0 -10
32358.82 2014-01-01 01:00:00 0 -1 -10
32414.15 2014-01-01 01:10:00 0 0 -10
32413.81 2014-01-01 01:20:00 0 0 -10
32412.35 2014-01-01 01:30:00 0 0 -10
32416.23 2014-01-01 01:40:00 0 0 -10
32014.94 2014-01-01 01:50:00 0 0 -10
31184.45 2014-01-01 03:40:00 0 -2 -110
32403.38 2014-01-01 03:50:00 0 0 -10
32415.07 2014-01-01 04:00:00 0 -1 -10
32388.04 2014-01-01 04:10:00 0 0 -10
32320.70 2014-01-01 04:20:00 0 0 -10
32297.44 2014-01-01 04:30:00 0 0 -10
What I want is a sixth column which groups these rows into events that happen consecutively. For example, nine rows occurring ten minutes apart, one after another, would have 1 in the sixth column; then there could be a two-hour gap, followed by six consecutive rows that would have 2 in the sixth column. Is this possible?
i.e.
(PowerkW) (Time) (Diff-Days) (Diff-Hours) (Diff-Minutes) (Group)
31011.39 2014-01-01 00:30:00 NULL NULL NULL 1
31838.74 2014-01-01 00:40:00 0 0 -10 1
32356.35 2014-01-01 00:50:00 0 0 -10 1
32358.82 2014-01-01 01:00:00 0 -1 -10 1
32414.15 2014-01-01 01:10:00 0 0 -10 1
32413.81 2014-01-01 01:20:00 0 0 -10 1
32412.35 2014-01-01 01:30:00 0 0 -10 1
32416.23 2014-01-01 01:40:00 0 0 -10 1
32014.94 2014-01-01 01:50:00 0 0 -10 1
31184.45 2014-01-01 03:40:00 0 -2 -110 2
32403.38 2014-01-01 03:50:00 0 0 -10 2
32415.07 2014-01-01 04:00:00 0 -1 -10 2
32388.04 2014-01-01 04:10:00 0 0 -10 2
32320.70 2014-01-01 04:20:00 0 0 -10 2
32297.44 2014-01-01 04:30:00 0 0 -10 2
If your definition of consecutive is based on the "diff minutes" being greater than some value (or less than, given that these are negative), then you can use a cumulative sum:
with q as (<your query here>)
select q.*,
sum(case when diff_minutes < -50 then 1 else 0 end) over (order by time) as grp
from q;
If you really want to start at "1", then just add one to the value.
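The cumulative-sum trick can be shown in miniature: flag a row whenever the gap to the previous row exceeds the threshold, then keep a running total of the flags. This is plain Python rather than SQL, but the logic mirrors `SUM(CASE WHEN diff_minutes < -50 THEN 1 ELSE 0 END) OVER (ORDER BY time)`; the -50 threshold is the answer's example value, and the group counter starts at 1 directly rather than adding one afterwards.

```python
# Diff-Minutes values from the question's table, in time order
# (None for the first row, which has no predecessor)
diffs = [None, -10, -10, -10, -10, -10, -10, -10, -10,
         -110, -10, -10, -10, -10, -10]

grp, groups = 1, []
for d in diffs:
    if d is not None and d < -50:  # gap of more than 50 minutes -> new event
        grp += 1
    groups.append(grp)

print(groups)  # [1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
```

The running total only increases at rows that start a new event, so every row in between inherits the current event number, reproducing the Group column in the question.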