Here is my problem: I have a tickets table that stores tickets read, and users work 8-hour shifts. I need to group the tickets read into 8 hourly groups.
Basically, if HourStart is 15:20, I need something like this:
Group  Hour      Quantity
1      15:20:00  20
2      16:20:00  20
3      17:20:00  40
4      18:20:00  0
5      19:20:00  0
6      20:20:00  0
7      21:20:00  0
8      22:20:00  0
Because I always need 8 rows, I thought creating a temporary table would be best: I can join against it so the rows still show (with zero quantities) even when no tickets were read in those hours.
The problem is that this is a bit slow in terms of performance and a bit dirty, and I'm wondering whether there is a better way to group data into generated rows without having to create a temporary table.
CREATE TABLE Production.hProductionRecords
(
    ID INT IDENTITY PRIMARY KEY,
    HourStart TIME
)

CREATE TABLE Production.hTickets
(
    ID INT IDENTITY PRIMARY KEY,
    DateRead DATETIME,
    ProductionRecordId INT,
    Quantity INT
)

CREATE TABLE #TickersPerHour
(
    [Group] INT,  -- GROUP and HOUR are reserved/special words, hence the brackets
    [Hour] TIME
)

DECLARE @HourStart TIME = (SELECT HourStart
                           FROM Production.hProductionRecords
                           WHERE Id = 1)

INSERT INTO #TickersPerHour ([Group], [Hour])
VALUES (1, @HourStart),
       (2, DATEADD(hh, 1, @HourStart)),
       (3, DATEADD(hh, 2, @HourStart)),
       (4, DATEADD(hh, 3, @HourStart)),
       (5, DATEADD(hh, 4, @HourStart)),
       (6, DATEADD(hh, 5, @HourStart)),
       (7, DATEADD(hh, 6, @HourStart)),
       (8, DATEADD(hh, 7, @HourStart))
SELECT
    TEMP.[Group],
    TEMP.[Hour],
    ISNULL(SUM(E.Quantity), 0) AS Quantity
FROM
    Production.hProductionRecords P
LEFT JOIN
    Production.hTickets E ON E.ProductionRecordId = P.Id
RIGHT JOIN
    #TickersPerHour TEMP
    ON TEMP.[Hour] = CASE
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 1, P.HourStart)
                         THEN P.HourStart
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 2, P.HourStart)
                         THEN DATEADD(hour, 1, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 3, P.HourStart)
                         THEN DATEADD(hour, 2, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 4, P.HourStart)
                         THEN DATEADD(hour, 3, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 5, P.HourStart)
                         THEN DATEADD(hour, 4, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 6, P.HourStart)
                         THEN DATEADD(hour, 5, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 7, P.HourStart)
                         THEN DATEADD(hour, 6, P.HourStart)
                         WHEN CAST(E.DateRead AS TIME) >= P.HourStart
                          AND CAST(E.DateRead AS TIME) < DATEADD(hour, 8, P.HourStart)
                         THEN DATEADD(hour, 7, P.HourStart)
                     END
GROUP BY
    TEMP.[Group], TEMP.[Hour]
ORDER BY
    TEMP.[Group]

DROP TABLE #TickersPerHour
You could try aggregating the tickets without joining them to the temp-table ranges, because the bucketing is quite static: tickets whose minute part is before the minute of HourStart "belong" to the previous hour. The aggregation returns 8 groups, and those 8 groups can then be joined to the ranges (either a temp table or a derived table).
/*
--drop table Production.hTickets
--drop table Production.hProductionRecords
--drop schema Production
go
create schema Production
go
CREATE TABLE Production.hProductionRecords
(
ID INT IDENTITY PRIMARY KEY,
HourStart TIME
)
CREATE TABLE Production.hTickets
(
ID INT IDENTITY PRIMARY KEY,
DateRead DATETIME,
ProductionRecordId INT,
Quantity INT
)
go
insert into Production.hProductionRecords values('15:20')
insert into Production.hTickets(DateRead, ProductionRecordId, Quantity)
select dateadd(minute, abs(checksum(newid()))% 600, '15:20'), 1, abs(checksum(newid()))% 75
from sys.columns as a
cross join sys.columns as b
*/
DECLARE @HourStart TIME = (SELECT HourStart
                           FROM Production.hProductionRecords
                           WHERE Id = 1)
declare @MinuteBoundary int = datepart(minute, @HourStart);

select *
from
(
    values (1, @HourStart),
           (2, DATEADD(hh, 1, @HourStart)),
           (3, DATEADD(hh, 2, @HourStart)),
           (4, DATEADD(hh, 3, @HourStart)),
           (5, DATEADD(hh, 4, @HourStart)),
           (6, DATEADD(hh, 5, @HourStart)),
           (7, DATEADD(hh, 6, @HourStart)),
           (8, DATEADD(hh, 7, @HourStart))
) AS h(id, HourStart)
full outer join
(
    select AdjustedHour, sum(Quantity) AS SumQuantity
    from
    (
        select
            -- tickets read before the minute boundary belong to the previous hour
            case
                when datepart(minute, DateRead) < @MinuteBoundary then datepart(hour, dateadd(hour, -1, DateRead))
                else datepart(hour, DateRead)
            end AS AdjustedHour,
            Quantity
        from Production.hTickets
        where 1=1
        --and --filter, for dates, hours outside the 8 hours period and whatnot...
    ) as src
    group by AdjustedHour
) AS grp ON datepart(hour, h.HourStart) = grp.AdjustedHour;
I'm looking if there is a better way to group data by some generated
rows without having to create a temporary table
Instead of a temporary table you can build a lazy sequence; for that you need rangeAB (code at the end of this post). Lazy sequences (AKA tally tables or virtual auxiliary tables of numbers) are nasty fast. Note this example:
DECLARE @HourStart TIME = '15:20:00';

SELECT
    r.RN,
    r.OP,
    TimeAsc  = DATEADD(HOUR, r.RN, @HourStart),
    TimeDesc = DATEADD(HOUR, r.OP, @HourStart)
FROM dbo.rangeAB(0,7,1,0) AS r
ORDER BY r.RN;
Results:
RN   OP   TimeAsc           TimeDesc
---- ---- ----------------  ----------------
0    7    15:20:00.0000000  22:20:00.0000000
1    6    16:20:00.0000000  21:20:00.0000000
2    5    17:20:00.0000000  20:20:00.0000000
3    4    18:20:00.0000000  19:20:00.0000000
4    3    19:20:00.0000000  18:20:00.0000000
5    2    20:20:00.0000000  17:20:00.0000000
6    1    21:20:00.0000000  16:20:00.0000000
7    0    22:20:00.0000000  15:20:00.0000000
Note that I am able to generate these times in ascending and/or descending order without a sort in the execution plan. This is because rangeAB leverages what I call a virtual index: you can ORDER BY, GROUP BY, and even join on the RN column without sorting. Note the execution plan - no Sort operator, and that's huge!
Now to use rangeAB to solve your problem:
-- Create Some Sample Data
DECLARE @things TABLE (Something CHAR(1), SomeTime TIME);

WITH Something(x) AS (SELECT 1)
INSERT @things
SELECT x, xx
FROM Something
CROSS APPLY (VALUES('15:30:00'),('20:19:00'),('16:30:00'),('16:33:00'),
                   ('17:10:00'),('18:13:00'),('19:01:00'),('21:35:00'),
                   ('15:13:00'),('21:55:00'),('19:22:00'),('16:39:00')) AS f(xx);

-- Solution:
DECLARE @HourStart TIME = '15:20:00'; -- you get this via a subquery

SELECT
    GroupNumber = r.RN + 1,
    HourStart   = MAX(f.HStart),
    Quantity    = COUNT(t.Something)
FROM dbo.rangeAB(0,7,1,0) AS r
CROSS APPLY (VALUES(DATEADD(HOUR, r.RN, @HourStart)))  AS f(HStart)
CROSS APPLY (VALUES(DATEADD(SECOND, 3599, f.HStart))) AS f2(HEnd)
LEFT JOIN @things AS t
  ON t.SomeTime BETWEEN f.HStart AND f2.HEnd
GROUP BY r.RN;
Results:
GroupNumber HourStart Quantity
------------- ----------------- -----------
1 15:20:00.0000000 1
2 16:20:00.0000000 4
3 17:20:00.0000000 1
4 18:20:00.0000000 1
5 19:20:00.0000000 2
6 20:20:00.0000000 0
7 21:20:00.0000000 2
8 22:20:00.0000000 0
Execution Plan:
Let me know if you have questions. dbo.rangeAB below.
CREATE FUNCTION dbo.rangeAB
(
    @low  BIGINT,
    @high BIGINT,
    @gap  BIGINT,
    @row1 BIT
)
/****************************************************************************************
[Purpose]:
 Creates up to 531,441,000,000 sequential integers beginning with @low and ending
 with @high. Used to replace iterative methods such as loops, cursors and recursive CTEs
 to solve SQL problems. Based on Itzik Ben-Gan's getnums function with some tweaks,
 enhancements and added functionality. The logic for getting rn to begin at 0 or 1
 comes from Jeff Moden's fnTally function.
 The name "range" was chosen because it's similar to Clojure's range function; "rangeAB"
 was used because "range" is a reserved SQL keyword.
[Author]: Alan Burstein
[Compatibility]:
SQL Server 2008+ and Azure SQL Database
[Syntax]:
 SELECT r.RN, r.OP, r.N1, r.N2
 FROM dbo.rangeAB(@low,@high,@gap,@row1) AS r;
[Parameters]:
 @low  = a bigint that represents the lowest value for n1.
 @high = a bigint that represents the highest value for n1.
 @gap  = a bigint that represents how much n1 and n2 will increase each row; @gap also
         represents the difference between n1 and n2.
 @row1 = a bit that represents the first value of rn. When @row1 = 0 then rn begins
         at 0; when @row1 = 1 then rn begins at 1.
[Returns]:
 Inline table valued function returns:
 rn = bigint; a row number that works just like T-SQL ROW_NUMBER() except that it can
      start at 0 or 1, which is dictated by @row1.
 op = bigint; returns the "opposite number" that relates to rn. When rn begins with 0 and
      ends with 10 then 10 is the opposite of 0, 9 the opposite of 1, etc. When rn begins
      with 1 and ends with 5 then 1 is the opposite of 5, 2 the opposite of 4, etc.
 n1 = bigint; a sequential number starting at the value of @low and incrementing by the
      value of @gap until it is less than or equal to the value of @high.
 n2 = bigint; a sequential number starting at the value of @low+@gap and incrementing
      by the value of @gap.
[Dependencies]:
N/A
[Developer Notes]:
 1. The lowest and highest possible numbers returned are whatever is allowable by a
    bigint. The function, however, returns no more than 531,441,000,000 rows (8100^3).
 2. @gap does not affect rn; rn will begin at @row1 and increase by 1 until the last row
    unless it's used in a query where a filter is applied to rn.
 3. @gap must be greater than 0 or the function will not return any rows.
 4. Keep in mind that when @row1 is 0 then the highest row number will be the number of
    rows returned minus 1.
 5. If all you need is a sequential set beginning at 0 or 1 then, for best performance,
    use the RN column. Use N1 and/or N2 when you need to begin your sequence at any
    number other than 0 or 1, or if you need a gap between your sequence of numbers.
 6. Although @gap is a bigint it must be a positive integer or the function will
    not return any rows.
 7. The function will not return any rows when one of the following conditions is true:
      * any of the input parameters are NULL
      * @high is less than @low
      * @gap is not greater than 0
    To force the function to return all NULLs instead of not returning anything you can
    add the following code to the end of the query:
      UNION ALL
      SELECT NULL, NULL, NULL, NULL
      WHERE NOT (@high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0)
    This code was excluded as it adds a ~5% performance penalty.
 8. There is no performance penalty for sorting by rn ASC; there is, however, a large
    performance penalty for sorting rn in descending order. If you need a descending
    sort, use op in place of rn and then still sort by rn ASC.
Best Practices:
--===== 1. Using RN (rownumber)
-- (1.1) The best way to get the numbers 1,2,3...@high (e.g. 1 to 5):
SELECT RN FROM dbo.rangeAB(1,5,1,1);
-- (1.2) The best way to get the numbers 0,1,2...@high-1 (e.g. 0 to 5):
SELECT RN FROM dbo.rangeAB(0,5,1,0);
--===== 2. Using OP for descending sorts without a performance penalty
-- (2.1) The best way to get the numbers 5,4,3...1 (e.g. 5 to 1):
SELECT op FROM dbo.rangeAB(1,5,1,1) ORDER BY rn ASC;
-- (2.2) The best way to get the numbers 5,4,3...0 (e.g. 5 to 0):
SELECT op FROM dbo.rangeAB(1,6,1,0) ORDER BY rn ASC;
--===== 3. Using N1
-- (3.1) To begin with numbers other than 0 or 1 use N1 (e.g. -3 to 3):
SELECT N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.2) ROW_NUMBER() is built in. If you want a ROW_NUMBER() include RN:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.3) If you wanted a ROW_NUMBER() that started at 0 you would do this:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,0);
--===== 4. Using N2 and @gap
-- (4.1) To get 0,10,20,30...100, set @low to 0, @high to 100 and @gap to 10:
SELECT N1 FROM dbo.rangeAB(0,100,10,1);
-- (4.2) Note that N2=N1+@gap; this allows you to create a sequence of ranges.
--       For example, to get (0,10),(10,20),(20,30)...(90,100):
SELECT N1, N2 FROM dbo.rangeAB(0,90,10,1);
-- (4.3) Remember that a rownumber is included and it can begin at 0 or 1:
SELECT RN, N1, N2 FROM dbo.rangeAB(0,90,10,1);
[Examples]:
--===== 1. Generating Sample data (using rangeAB to create "dummy rows")
-- The query below will generate 10,000 ids and random numbers between 50,000 and 500,000
SELECT
  someId     = r.rn,
  someNumber = ABS(CHECKSUM(NEWID())%450000)+50001
FROM dbo.rangeAB(1,10000,1,1) r;
--===== 2. Create a series of dates; rn is 0 to include the first date in the series
DECLARE @startdate DATE = '20180101', @enddate DATE = '20180131';
SELECT r.rn, calDate = DATEADD(dd, r.rn, @startdate)
FROM dbo.rangeAB(1, DATEDIFF(dd,@startdate,@enddate),1,0) r;
GO
--===== 3. Splitting (tokenizing) a string with fixed sized items
-- given a delimited string of identifiers that are always 7 characters long
DECLARE @string VARCHAR(1000) = 'A601225,B435223,G008081,R678567';
SELECT
  itemNumber = r.rn, -- item's ordinal position
  itemIndex  = r.n1, -- item's position in the string (its CHARINDEX value)
  item       = SUBSTRING(@string, r.n1, 7) -- item (token)
FROM dbo.rangeAB(1, LEN(@string), 8,1) r;
GO
--===== 4. Splitting (tokenizing) a string with random delimiters
DECLARE @string VARCHAR(1000) = 'ABC123,999F,XX,9994443335';
SELECT
  itemNumber = ROW_NUMBER() OVER (ORDER BY r.rn), -- item's ordinal position
  itemIndex  = r.n1+1, -- item's position in the string (its CHARINDEX value)
  item       = SUBSTRING
  (
    @string,
    r.n1+1,
    ISNULL(NULLIF(CHARINDEX(',',@string,r.n1+1),0)-r.n1-1, 8000)
  ) -- item (token)
FROM dbo.rangeAB(0,DATALENGTH(@string),1,1) r
WHERE SUBSTRING(@string,r.n1,1) = ',' OR r.n1 = 0;
-- logic borrowed from: http://www.sqlservercentral.com/articles/Tally+Table/72993/
--===== 5. Grouping by weekly intervals
-- 5.1. how to create a series of start/end dates between @startDate & @endDate
DECLARE @startDate DATE = '1/1/2015', @endDate DATE = '2/1/2015';
SELECT
  WeekNbr   = r.RN,
  WeekStart = DATEADD(DAY,r.N1,@startDate),
  WeekEnd   = DATEADD(DAY,r.N2-1,@startDate)
FROM dbo.rangeAB(0,DATEDIFF(DAY,@startDate,@endDate),7,1) r;
GO
-- 5.2. LEFT JOIN to the weekly interval table
BEGIN
  DECLARE @startDate DATETIME = '1/1/2015', @endDate DATETIME = '2/1/2015';
  -- sample data
  DECLARE @loans TABLE (loID INT, lockDate DATE);
  INSERT @loans SELECT r.rn, DATEADD(dd, ABS(CHECKSUM(NEWID())%32), @startDate)
  FROM dbo.rangeAB(1,50,1,1) r;
  -- solution
  SELECT
    WeekNbr   = r.RN,
    WeekStart = dt.WeekStart,
    WeekEnd   = dt.WeekEnd,
    total     = COUNT(l.lockDate)
  FROM dbo.rangeAB(0,DATEDIFF(DAY,@startDate,@endDate),7,1) r
  CROSS APPLY (VALUES (
    CAST(DATEADD(DAY,r.N1,@startDate) AS DATE),
    CAST(DATEADD(DAY,r.N2-1,@startDate) AS DATE))) dt(WeekStart,WeekEnd)
  LEFT JOIN @loans l ON l.lockDate BETWEEN dt.WeekStart AND dt.WeekEnd
  GROUP BY r.RN, dt.WeekStart, dt.WeekEnd;
END;
--===== 6. Identify the first and last vowel in a string along with their positions
DECLARE @string VARCHAR(200) = 'This string has vowels';
SELECT TOP(1) position = r.rn, letter = SUBSTRING(@string,r.rn,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.rn,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
-- To find the last vowel while avoiding a sort in the execution plan we use op instead of rn
SELECT TOP(1) position = r.op, letter = SUBSTRING(@string,r.op,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.op,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
---------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20140518 - Initial Development - Alan Burstein
Rev 01 - 20151029 - Added 65 rows to make L1=465; 465^3=100.5M. Updated comment section
- Alan Burstein
Rev 02 - 20180613 - Complete re-design including opposite number column (op)
Rev 03 - 20180920 - Added additional CROSS JOIN to L2 for 530B rows max - Alan Burstein
****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH L1(N) AS
(
SELECT 1
FROM (VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0)) T(N) -- 90 values
),
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b CROSS JOIN L1 c),
iTally AS (SELECT rn = ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM L2 a CROSS JOIN L2 b)
SELECT
r.RN,
r.OP,
r.N1,
r.N2
FROM
(
  SELECT
    RN = 0,
    OP = (@high-@low)/@gap,
    N1 = @low,
    N2 = @gap+@low
  WHERE @row1 = 0
  UNION ALL -- ISNULL required in the TOP statement below for error handling purposes
  SELECT TOP (ABS((ISNULL(@high,0)-ISNULL(@low,0))/ISNULL(@gap,0)+ISNULL(@row1,1)))
    RN = i.rn,
    OP = (@high-@low)/@gap+(2*@row1)-i.rn,
    N1 = (i.rn-@row1)*@gap+@low,
    N2 = (i.rn-(@row1-1))*@gap+@low
  FROM iTally AS i
  ORDER BY i.rn
) AS r
WHERE @high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0;
GO
I have a legacy table which I can't change.
The values in it can be modified from a legacy application (the application also can't be changed).
Due to a lot of access to the table from a new application (a new requirement), I'd like to create a temporary table, which would hopefully speed up the queries.
The actual requirement is to calculate the number of business days from X to Y. For example: give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday.)
The temporary table would be created from a .NET program each time the user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I only tested it with a small dataset, and it still takes probably close to half a second, which isn't great for a UI - even though it's just the overhead of the first query.
The legacy table looks like this:
CREATE TABLE [business_days](
[country_code] [char](3) ,
[state_code] [varchar](4) ,
[calendar_year] [int] ,
[calendar_month] [varchar](31) ,
[calendar_month2] [varchar](31) ,
[calendar_month3] [varchar](31) ,
[calendar_month4] [varchar](31) ,
[calendar_month5] [varchar](31) ,
[calendar_month6] [varchar](31) ,
[calendar_month7] [varchar](31) ,
[calendar_month8] [varchar](31) ,
[calendar_month9] [varchar](31) ,
[calendar_month10] [varchar](31) ,
[calendar_month11] [varchar](31) ,
[calendar_month12] [varchar](31) ,
-- misc.
)
Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with an X. Each half day is marked with an H. For example, if a month starts on a Thursday, then it will look like this (Thursday + Friday are workdays, Saturday + Sunday are marked with X):
'  XX     XX ..'
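For illustration, a single day's status can be read out of one of these month strings with SUBSTRING (the string and day number below are hypothetical, for a month starting on a Thursday):

```sql
DECLARE @calendar_month VARCHAR(31) = '  XX     XX     XX     XX     X';

-- Day 3 is the first Saturday of this hypothetical month, so it holds 'X'
SELECT SUBSTRING(@calendar_month, 3, 1) AS day_status; -- 'X'
```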
I'd like the new table to look like so:
create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)
And I'd like to only have rows for the days which are off (marked with X or H in the legacy table).
What I ended up doing so far is this:
Create a temporary intermediate table that looks like this:
create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int)
To populate it, I have a UNION which basically unions calendar_month, calendar_month2, calendar_month3, etc.
Then I have a loop which goes through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2.
To process a row there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
[edit added code]
Declare @country_code char(3)
Declare @state_code varchar(4)
Declare @calendar_year int
Declare @calendar_month varchar(31)
Declare @month_code int
Declare @calendar_date datetime
Declare @day_code int

WHILE EXISTS(SELECT * FROM #Temp_2) -- where processed = 0
BEGIN
    Select Top 1
        @country_code   = t2.country_code,
        @state_code     = t2.state_code,
        @calendar_year  = t2.calendar_year,
        @calendar_month = t2.calendar_month,
        @month_code     = t2.month_code
    From #Temp_2 t2 -- where processed = 0

    set @day_code = 1
    while @day_code <= 31
    begin
        if substring(@calendar_month, @day_code, 1) = 'X'
        begin
            set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
            insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8)
        end
        if substring(@calendar_month, @day_code, 1) = 'H'
        begin
            set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
            insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4)
        end
        set @day_code = @day_code + 1
    end

    delete from #Temp_2
    where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
    --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
END
I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a suggestion for a much better approach.
After creating the temp table, I'm planning to do this (the dates would be coming from a table):
select cast(convert(datetime, ('01/31/2012'), 101) - convert(datetime, ('01/17/2012'), 101) as int) - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8)
Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead use a simpler query to get the result.
(I'm currently trying this on MSSQL, but I would need to do the same for Sybase ASE and Oracle.)
This should fulfill the requirement, "...calculate number of business days from X to Y."
It counts each space as a business day and anything other than an X or a space as a half day (should just be H, according to the OP).
I pulled this off in SQL Server 2008 R2:
-- Calculate number of business days from X to Y
declare @start date = '20120101' -- X
declare @end date = '20120101' -- Y
-- Outer query sums the length of the full_year text minus non-work days
-- Spaces are doubled to help account for half-days...then divide by two
select sum(datalength(replace(replace(substring(full_year, first_day, last_day - first_day + 1), ' ', '  '), 'X', '')) / 2.0) as number_of_business_days
from (
select
-- Get substring start value for each year
case
when calendar_year = datepart(yyyy, @start) then datepart(dayofyear, @start)
else 1
end as first_day
-- Get substring end value for each year
, case
when calendar_year = datepart(yyyy, @end) then datepart(dayofyear, @end)
when calendar_year > datepart(yyyy, @end) then 0
when calendar_year < datepart(yyyy, @start) then 0
else datalength(full_year)
end as last_day
, full_year
from (
select calendar_year
-- Get text representation of full year
, calendar_month
+ calendar_month2
+ calendar_month3
+ calendar_month4
+ calendar_month5
+ calendar_month6
+ calendar_month7
+ calendar_month8
+ calendar_month9
+ calendar_month10
+ calendar_month11
+ calendar_month12 as full_year
from business_days
-- where country_code = 'USA' etc.
) as get_year
) as get_days
A where clause can go on the inner-most query.
It is not an un-pivot of the legacy format, which the OP spends much time on and which would probably take more (and possibly unnecessary) computing cycles. I'm assuming such a thing was "nice to have" rather than part of the requirements. Jeff Moden has great articles on how a tally table can help in that case (for SQL Server, anyway).
It might be necessary to watch for trailing spaces, depending upon how a particular DBMS is configured (notice that I'm using datalength and not len).
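As a quick illustration of why that distinction matters here (T-SQL semantics: LEN ignores trailing spaces, while DATALENGTH counts every byte of a varchar value; the sample string is hypothetical):

```sql
DECLARE @month varchar(31) = 'XX   '; -- month fragment ending in work days (spaces)

SELECT LEN(@month)        AS len_result,        -- 2: trailing spaces ignored
       DATALENGTH(@month) AS datalength_result; -- 5: every byte counted
```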
UPDATE: Added the OP's requested temp table:
select country_code
, state_code
, dateadd(d, t.N - 1, cast(cast(a.calendar_year as varchar(8)) + '0101' as date)) as calendar_date
, case substring(full_year, t.N, 1) when 'X' then 0 when 'H' then 4 else 8 end as business_hours
from (
select country_code
, state_code
, calendar_year
, calendar_month
+ calendar_month2
+ calendar_month3
+ calendar_month4
+ calendar_month5
+ calendar_month6
+ calendar_month7
+ calendar_month8
+ calendar_month9
+ calendar_month10
+ calendar_month11
+ calendar_month12
as full_year
from business_days
) as a, (
select a.N + b.N * 10 + c.N * 100 + 1 as N
from (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) a
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) b
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) c
) as t -- cross join with Tally table built on the fly
where t.N <= datalength(a.full_year)
Given that your temp table is slow to create, are you able to pre-calculate it?
If you're able to put a trigger on the existing table, perhaps you could fire a proc which drops and recreates the pre-calculated table. Or have an agent job which checks whether the existing table has been updated (raise a flag somewhere) and then recomputes it.
The existing table's structure is so woeful that I wouldn't be surprised if it will always be expensive to normalize. Pre-calculating is an easy and simple way around that problem.
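A minimal sketch of that trigger idea, assuming a permanent cache table and a refresh proc (the names dbo.business_days_flat and dbo.RefreshBusinessDaysFlat are hypothetical; a #temp table wouldn't survive outside the session, so a real table serves as the cache):

```sql
-- Hypothetical cache table holding the normalized days off
CREATE TABLE dbo.business_days_flat
    (country varchar(3), state varchar(4), date datetime, hours int);
GO
-- Rebuild the cache from the legacy table
CREATE PROCEDURE dbo.RefreshBusinessDaysFlat AS
BEGIN
    SET NOCOUNT ON;
    TRUNCATE TABLE dbo.business_days_flat;
    INSERT INTO dbo.business_days_flat (country, state, date, hours)
    SELECT country, state, date, hours
    FROM dbo.vw_business_days_unpivot; -- hypothetical view wrapping the un-pivot query above
END;
GO
-- Any change to the legacy table triggers a recompute
CREATE TRIGGER trg_business_days_refresh
ON dbo.business_days
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.RefreshBusinessDaysFlat;
END;
```

Whether the recompute happens synchronously in the trigger or is deferred to an agent job is a trade-off between write latency on the legacy application and freshness of the cache.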