TSQL Divide by Zero Error - sql-server

I have a query that does some math, and there are times when the number it divides by could be zero, which causes an error.
I found something on this website about how to fix it, but now the number doesn't change at all.
Example Data:
los.shortTermLosses = 1
A.shortTerm = 15
Giving the equation of 1/15*12 = 0.8
COALESCE(los.shortTermLosses / NULLIF(A.shortTerm,0),0.00)* 12 AS shortTermAttrition
It must be something I did to prevent the divide-by-zero error, but I'm not sure how to get it to work correctly. The current result is always 0.00.
Update
For those who want to see the whole query..
SELECT A.QID,
(SELECT TOP 1 E.[FirstName],
E.[LastName],
E.[NTID],
E.[TitleDesc],
A.[countOfDirects],
A.[longTerm],
A.[shortTerm],
COALESCE(los.totalLosses,0) totalLosses,
COALESCE(los.longTermLosses, 0) longTermLosses,
COALESCE(los.shortTermLosses,0) shortTermLosses,
COALESCE(los.shortTermLosses / NULLIF(A.shortTerm,0),0.00)* 12 AS shortTermAttrition,
COALESCE(los.longTermLosses / NULLIF(A.longTerm,0),0.00)* 12 AS longTermAttrition,
COALESCE(los.totalLosses / NULLIF(A.countOfDirects,0),0.00)* 12 AS totalAttrition
FROM employeeTable_historical AS E
OUTER APPLY (SELECT COUNT(b.leaver) as [totalLosses],
sum(case when b.term = 'LTA' then 1 else 0 end) as [longTermLosses],
sum(case when b.term = 'STA' then 1 else 0 end) as [shortTermLosses]
FROM dbo.attritionData AS B
WHERE E.QID = B.supervisor
AND MONTH(B.leaveDate) = @month
AND YEAR(B.leaveDate) = @year
GROUP BY b.supervisor
)los
WHERE E.qid = A.[QID]
AND CONVERT (DATE, dateadd(mm, (@year - 1900) * 12 + @month - 1 , @day - 1)) >= CONVERT (DATE, E.[Meta_LogDate])
ORDER BY meta_logDate DESC
FOR XML PATH (''), TYPE, ELEMENTS)
FROM (SELECT QID,
[timestamp],
[countOfDirects],
[longTerm],
[shortTerm]
FROM (SELECT QID,
[timestamp],
[countOfDirects],
[shortTerm],
[longTerm],
ROW_NUMBER() OVER (PARTITION BY QID ORDER BY [Timestamp]) AS Row
FROM [red].[dbo].[attritionCounts]
WHERE [mgrQID] = @director
AND YEAR(CAST ([timestamp] AS DATE)) = @year
AND MONTH(CAST ([timestamp] AS DATE)) = @month) AS Tmp1
WHERE Row = 1) AS A
FOR XML PATH ('DirectReport'), TYPE, ELEMENTS, ROOT ('Root');

Your error does not come from the NULLIF; you are just dividing integers, e.g. 1 / 15 = 0. Just change your term to:
COALESCE(CAST(los.shortTermLosses as float) / NULLIF(A.shortTerm,0),0.00)* 12 AS shortTermAttrition
The closest transformation for your XML export might be casting the result as money:
Declare @shortTermLosses int = 1
Declare @shortTerm int = 15
select
CAST(
COALESCE(CAST(@shortTermLosses AS float) / Cast(NULLIF(@shortTerm,0) AS float),0.00)* 12
as Money)
AS shortTermAttrition
FOR XML PATH (''), TYPE, ELEMENTS
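To see the integer-division behavior in isolation, here is a minimal standalone sketch (the literals 1 and 15 stand in for the example columns):

```sql
-- Integer / integer truncates toward zero, so the COALESCE never sees a fraction;
-- casting one operand to float makes the whole expression use floating-point math
SELECT 1 / 15 AS int_div,                                  -- 0
       CAST(1 AS float) / NULLIF(15, 0) * 12 AS float_div  -- ~0.8
```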


SQL Server : group data by 8 generated hours

Here is my problem, I have a tickets table which stores tickets read and users work 8 hours shift. I need to group tickets read in 8 groups.
Basically I need something like
if HourStart is 15:20
Group Hour Quantity
1 15:20:00 20
2 16:20:00 20
3 17:20:00 40
4 18:20:00 0
5 19:20:00 0
6 20:20:00 0
7 21:20:00 0
8 22:20:00 0
Because I need 8 rows all the time, I thought creating a temporary table would be best, so I could join against it and rows would still show (with null data) even when no records were entered in those hours.
The problem is that this is a bit slow in terms of performance and a bit dirty, and I'm looking for a better way to group data by some generated rows without having to create a temporary table.
CREATE TABLE Production.hProductionRecords
(
ID INT IDENTITY PRIMARY KEY,
HourStart TIME
)
CREATE TABLE Production.hTickets
(
ID INT IDENTITY PRIMARY KEY,
DateRead DATETIME,
ProductionRecordId INT
)
CREATE TABLE #TickersPerHour
(
[Group] INT,
[Hour] TIME
)
DECLARE @HourStart TIME = (SELECT HourStart
FROM Production.hProductionRecords
WHERE Id = 1)
INSERT INTO #TickersPerHour ([Group], [Hour])
VALUES (1, @HourStart),
(2, DATEADD(hh, 1, @HourStart)),
(3, DATEADD(hh, 2, @HourStart)),
(4, DATEADD(hh, 3, @HourStart)),
(5, DATEADD(hh, 4, @HourStart)),
(6, DATEADD(hh, 5, @HourStart)),
(7, DATEADD(hh, 6, @HourStart)),
(8, DATEADD(hh, 7, @HourStart))
SELECT
TEMP.[Group],
TEMP.[Hour],
ISNULL(SUM(E.Quantity),0) Quantity
FROM
Production.hProductionRecords P
LEFT JOIN
Production.hTickets E ON E.ProductionRecordId = P.Id
RIGHT JOIN
#TickersPerHour TEMP
ON TEMP.[Hour] = CASE
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 1, P.HourStart)
THEN DATEADD(hour, 1, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 2, P.HourStart)
THEN DATEADD(hour, 2, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 3, P.HourStart)
THEN DATEADD(hour, 3, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 4, P.HourStart)
THEN DATEADD(hour, 4, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 5, P.HourStart)
THEN DATEADD(hour,5, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 6, P.HourStart)
THEN DATEADD(hour, 6, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 7, P.HourStart)
THEN DATEADD(hour,7, P.HourStart)
WHEN CAST(E.DateRead AS TIME) >= P.HourStart
AND CAST(E.DateRead AS TIME) < DATEADD(hour, 8, P.HourStart)
THEN DATEADD(hour, 8, P.HourStart)
END
GROUP BY
TEMP.[Group], TEMP.[Hour]
ORDER BY
[Group]
DROP TABLE #TickersPerHour
You could try aggregating the tickets without joining them with the temp table ranges, because the aggregation is quite static: Tickets whose minute part is before the minute of HourStart "belong" to the previous hour. The aggregation will return 8 groups and those 8 groups can be joined with the ranges (either temp table, or derived).
/*
--drop table Production.hTickets
--drop table Production.hProductionRecords
--drop schema Production
go
create schema Production
go
CREATE TABLE Production.hProductionRecords
(
ID INT IDENTITY PRIMARY KEY,
HourStart TIME
)
CREATE TABLE Production.hTickets
(
ID INT IDENTITY PRIMARY KEY,
DateRead DATETIME,
ProductionRecordId INT,
Quantity INT
)
go
insert into Production.hProductionRecords values('15:20')
insert into Production.hTickets(DateRead, ProductionRecordId, Quantity)
select dateadd(minute, abs(checksum(newid()))% 600, '15:20'), 1, abs(checksum(newid()))% 75
from sys.columns as a
cross join sys.columns as b
*/
DECLARE @HourStart TIME = (SELECT HourStart
FROM Production.hProductionRecords
WHERE Id = 1)
declare @MinuteBoundary int = datepart(minute, @HourStart);
select *
from
(
values (1, @HourStart),
(2, DATEADD(hh, 1, @HourStart)),
(3, DATEADD(hh, 2, @HourStart)),
(4, DATEADD(hh, 3, @HourStart)),
(5, DATEADD(hh, 4, @HourStart)),
(6, DATEADD(hh, 5, @HourStart)),
(7, DATEADD(hh, 6, @HourStart)),
(8, DATEADD(hh, 7, @HourStart))
) AS h(id, HourStart)
full outer join
(
select AdjustedHour, sum(Quantity) AS SumQuantity
from
(
select
--tickets read before the minute boundary, belong to the previous hour
case
when datepart(minute, DateRead) < @MinuteBoundary then datepart(hour, dateadd(hour, -1, DateRead))
else datepart(hour, DateRead)
end AS AdjustedHour,
Quantity
from Production.hTickets
where 1=1
--and --filter, for dates, hours outside the 8 hours period and whatnot...
) as src
group by AdjustedHour
) AS grp ON datepart(hour, h.HourStart) = grp.AdjustedHour;
i'm looking if there is a better way to group data by some generated
rows without having to create a temporary table
Instead of a temporary table you can build a lazy sequence, for that you need rangeAB (code at the end of this post). Lazy Sequences (AKA Tally Table or Virtual Auxiliary Table of Numbers) are nasty fast. Note this example:
DECLARE @HourStart TIME = '15:20:00';
SELECT
r.RN,
r.OP,
TimeAsc = DATEADD(HOUR,r.RN,@HourStart),
TimeDesc = DATEADD(HOUR,r.OP,@HourStart)
FROM dbo.rangeAB(0,7,1,0) AS r
ORDER BY r.RN;
Results:
RN OP TimeAsc TimeDesc
---- ---- ---------------- ----------------
0 7 15:20:00.0000000 22:20:00.0000000
1 6 16:20:00.0000000 21:20:00.0000000
2 5 17:20:00.0000000 20:20:00.0000000
3 4 18:20:00.0000000 19:20:00.0000000
4 3 19:20:00.0000000 18:20:00.0000000
5 2 20:20:00.0000000 17:20:00.0000000
6 1 21:20:00.0000000 16:20:00.0000000
7 0 22:20:00.0000000 15:20:00.0000000
Note that I am able to generate these dates in ASCending and/or DESCending order without a sort in the execution plan. This is because rangeAB leverages what I call a Virtual Index. You can order by, group by, etc, even join on the RN column without sorting. Note the execution plan - No Sort, that's huge!
Now to use rangeAB to solve your problem:
-- Create Some Sample Data
DECLARE @things TABLE (Something CHAR(1), SomeTime TIME);
WITH Something(x) AS (SELECT 1)
INSERT @things
SELECT x, xx
FROM Something
CROSS APPLY (VALUES('15:30:00'),('20:19:00'),('16:30:00'),('16:33:00'),
('17:10:00'),('18:13:00'),('19:01:00'),('21:35:00'),
('15:13:00'),('21:55:00'),('19:22:00'),('16:39:00')) AS f(xx);
-- Solution:
DECLARE @HourStart TIME = '15:20:00'; -- you get via a subquery
SELECT
GroupNumber = r.RN+1,
HourStart = MAX(f.HStart),
Quantity = COUNT(t.SomeThing)
FROM dbo.rangeAB(0,7,1,0) AS r
CROSS APPLY (VALUES(DATEADD(HOUR,r.RN,@HourStart))) AS f(HStart)
CROSS APPLY (VALUES(DATEADD(SECOND,3599,f.HStart))) AS f2(HEnd)
LEFT JOIN @things AS t
ON t.SomeTime BETWEEN f.HStart AND f2.HEnd
GROUP BY r.RN;
Results:
GroupNumber HourStart Quantity
------------- ----------------- -----------
1 15:20:00.0000000 1
2 16:20:00.0000000 4
3 17:20:00.0000000 1
4 18:20:00.0000000 1
5 19:20:00.0000000 2
6 20:20:00.0000000 0
7 21:20:00.0000000 2
8 22:20:00.0000000 0
Execution Plan:
Let me know if you have questions. dbo.rangeAB below.
CREATE FUNCTION dbo.rangeAB
(
@low bigint,
@high bigint,
@gap bigint,
@row1 bit
)
/****************************************************************************************
[Purpose]:
Creates up to 531,441,000,000 sequential integers beginning with @low and ending
with @high. Used to replace iterative methods such as loops, cursors and recursive CTEs
to solve SQL problems. Based on Itzik Ben-Gan's getnums function with some tweaks,
enhancements and added functionality. The logic for getting rn to begin at 0 or 1
comes from Jeff Moden's fnTally function.
The name "range" was chosen because it's similar to Clojure's range function; "rangeAB"
was used because "range" is a reserved SQL keyword.
[Author]: Alan Burstein
[Compatibility]:
SQL Server 2008+ and Azure SQL Database
[Syntax]:
SELECT r.RN, r.OP, r.N1, r.N2
FROM dbo.rangeAB(@low,@high,@gap,@row1) AS r;
[Parameters]:
@low = a bigint that represents the lowest value for n1.
@high = a bigint that represents the highest value for n1.
@gap = a bigint that represents how much n1 and n2 will increase each row; @gap also
represents the difference between n1 and n2.
@row1 = a bit that represents the first value of rn. When @row1 = 0 then rn begins
at 0; when @row1 = 1 then rn will begin at 1.
[Returns]:
Inline Table Valued Function returns:
rn = bigint; a row number that works just like T-SQL ROW_NUMBER() except that it can
start at 0 or 1, which is dictated by @row1.
op = bigint; returns the "opposite number" that relates to rn. When rn begins with 0 and
ends with 10 then 10 is the opposite of 0, 9 the opposite of 1, etc. When rn begins
with 1 and ends with 5 then 1 is the opposite of 5, 2 the opposite of 4, etc...
n1 = bigint; a sequential number starting at the value of @low and incrementing by the
value of @gap until it is less than or equal to the value of @high.
n2 = bigint; a sequential number starting at the value of @low+@gap and incrementing
by the value of @gap.
[Dependencies]:
N/A
[Developer Notes]:
1. The lowest and highest possible numbers returned are whatever is allowable by a
bigint. The function, however, returns no more than 531,441,000,000 rows (8100^3).
2. @gap does not affect rn; rn will begin at @row1 and increase by 1 until the last row
unless it's used in a query where a filter is applied to rn.
3. @gap must be greater than 0 or the function will not return any rows.
4. Keep in mind that when @row1 is 0 then the highest row-number will be the number of
rows returned minus 1.
5. If all you need is a sequential set beginning at 0 or 1 then, for best performance,
use the RN column. Use N1 and/or N2 when you need to begin your sequence at any
number other than 0 or 1 or if you need a gap between your sequence of numbers.
6. Although @gap is a bigint it must be a positive integer or the function will
not return any rows.
7. The function will not return any rows when one of the following conditions is true:
* any of the input parameters are NULL
* @high is less than @low
* @gap is not greater than 0
To force the function to return all NULLs instead of not returning anything you can
add the following code to the end of the query:
UNION ALL
SELECT NULL, NULL, NULL, NULL
WHERE NOT (@high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0)
This code was excluded as it adds a ~5% performance penalty.
8. There is no performance penalty for sorting by rn ASC; however, there is a large
performance penalty for sorting in descending order when @row1 = 1. If you need a
descending sort, use op in place of rn and then sort by rn ASC.
[Best Practices]:
--===== 1. Using RN (rownumber)
-- (1.1) The best way to get the numbers 1,2,3...@high (e.g. 1 to 5):
SELECT RN FROM dbo.rangeAB(1,5,1,1);
-- (1.2) The best way to get the numbers 0,1,2...@high-1 (e.g. 0 to 5):
SELECT RN FROM dbo.rangeAB(0,5,1,0);
--===== 2. Using OP for descending sorts without a performance penalty
-- (2.1) The best way to get the numbers 5,4,3...1 (e.g. 5 to 1):
SELECT op FROM dbo.rangeAB(1,5,1,1) ORDER BY rn ASC;
-- (2.2) The best way to get the numbers 5,4,3...0 (e.g. 5 to 0):
SELECT op FROM dbo.rangeAB(1,6,1,0) ORDER BY rn ASC;
--===== 3. Using N1
-- (3.1) To begin with numbers other than 0 or 1 use N1 (e.g. -3 to 3):
SELECT N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.2) ROW_NUMBER() is built in. If you want a ROW_NUMBER() include RN:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,1);
-- (3.3) If you wanted a ROW_NUMBER() that started at 0 you would do this:
SELECT RN, N1 FROM dbo.rangeAB(-3,3,1,0);
--===== 4. Using N2 and @gap
-- (4.1) To get 0,10,20,30...100, set @low to 0, @high to 100 and @gap to 10:
SELECT N1 FROM dbo.rangeAB(0,100,10,1);
-- (4.2) Note that N2=N1+@gap; this allows you to create a sequence of ranges.
-- For example, to get (0,10),(10,20),(20,30)...(90,100):
SELECT N1, N2 FROM dbo.rangeAB(0,90,10,1);
-- (4.3) Remember that a rownumber is included and it can begin at 0 or 1:
SELECT RN, N1, N2 FROM dbo.rangeAB(0,90,10,1);
[Examples]:
--===== 1. Generating Sample data (using rangeAB to create "dummy rows")
-- The query below will generate 10,000 ids and random numbers between 50,000 and 500,000
SELECT
someId = r.rn,
someNumer = ABS(CHECKSUM(NEWID())%450000)+50001
FROM rangeAB(1,10000,1,1) r;
--===== 2. Create a series of dates; rn is 0 to include the first date in the series
DECLARE @startdate DATE = '20180101', @enddate DATE = '20180131';
SELECT r.rn, calDate = DATEADD(dd, r.rn, @startdate)
FROM dbo.rangeAB(1, DATEDIFF(dd,@startdate,@enddate),1,0) r;
GO
--===== 3. Splitting (tokenizing) a string with fixed sized items
-- given a delimited string of identifiers that are always 7 characters long
DECLARE @string VARCHAR(1000) = 'A601225,B435223,G008081,R678567';
SELECT
itemNumber = r.rn, -- item's ordinal position
itemIndex = r.n1, -- item's position in the string (its CHARINDEX value)
item = SUBSTRING(@string, r.n1, 7) -- item (token)
FROM dbo.rangeAB(1, LEN(@string), 8,1) r;
GO
--===== 4. Splitting (tokenizing) a string with random delimiters
DECLARE @string VARCHAR(1000) = 'ABC123,999F,XX,9994443335';
SELECT
itemNumber = ROW_NUMBER() OVER (ORDER BY r.rn), -- item's ordinal position
itemIndex = r.n1+1, -- item's position in the string (its CHARINDEX value)
item = SUBSTRING
(
@string,
r.n1+1,
ISNULL(NULLIF(CHARINDEX(',',@string,r.n1+1),0)-r.n1-1, 8000)
) -- item (token)
FROM dbo.rangeAB(0,DATALENGTH(@string),1,1) r
WHERE SUBSTRING(@string,r.n1,1) = ',' OR r.n1 = 0;
-- logic borrowed from: http://www.sqlservercentral.com/articles/Tally+Table/72993/
--===== 5. Grouping by a weekly intervals
-- 5.1. how to create a series of start/end dates between #startDate & #endDate
DECLARE @startDate DATE = '1/1/2015', @endDate DATE = '2/1/2015';
SELECT
WeekNbr = r.RN,
WeekStart = DATEADD(DAY,r.N1,@startDate),
WeekEnd = DATEADD(DAY,r.N2-1,@startDate)
FROM dbo.rangeAB(0,datediff(DAY,@startDate,@endDate),7,1) r;
GO
-- 5.2. LEFT JOIN to the weekly interval table
BEGIN
DECLARE @startDate datetime = '1/1/2015', @endDate datetime = '2/1/2015';
-- sample data
DECLARE @loans TABLE (loID INT, lockDate DATE);
INSERT @loans SELECT r.rn, DATEADD(dd, ABS(CHECKSUM(NEWID())%32), @startDate)
FROM dbo.rangeAB(1,50,1,1) r;
-- solution
SELECT
WeekNbr = r.RN,
WeekStart = dt.WeekStart,
WeekEnd = dt.WeekEnd,
total = COUNT(l.lockDate)
FROM dbo.rangeAB(0,datediff(DAY,@startDate,@endDate),7,1) r
CROSS APPLY (VALUES (
CAST(DATEADD(DAY,r.N1,@startDate) AS DATE),
CAST(DATEADD(DAY,r.N2-1,@startDate) AS DATE))) dt(WeekStart,WeekEnd)
LEFT JOIN @loans l ON l.lockDate BETWEEN dt.WeekStart AND dt.WeekEnd
GROUP BY r.RN, dt.WeekStart, dt.WeekEnd;
END;
--===== 6. Identify the first vowel and last vowel in a string along with their positions
DECLARE @string VARCHAR(200) = 'This string has vowels';
SELECT TOP(1) position = r.rn, letter = SUBSTRING(@string,r.rn,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.rn,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
-- To avoid a sort in the execution plan we'll use op instead of rn
SELECT TOP(1) position = r.op, letter = SUBSTRING(@string,r.op,1)
FROM dbo.rangeAB(1,LEN(@string),1,1) r
WHERE SUBSTRING(@string,r.op,1) LIKE '%[aeiou]%'
ORDER BY r.rn;
---------------------------------------------------------------------------------------
[Revision History]:
Rev 00 - 20140518 - Initial Development - Alan Burstein
Rev 01 - 20151029 - Added 65 rows to make L1=465; 465^3=100.5M. Updated comment section
- Alan Burstein
Rev 02 - 20180613 - Complete re-design including opposite number column (op)
Rev 03 - 20180920 - Added additional CROSS JOIN to L2 for 530B rows max - Alan Burstein
****************************************************************************************/
RETURNS TABLE WITH SCHEMABINDING AS RETURN
WITH L1(N) AS
(
SELECT 1
FROM (VALUES
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),(0),
(0),(0)) T(N) -- 90 values
),
L2(N) AS (SELECT 1 FROM L1 a CROSS JOIN L1 b CROSS JOIN L1 c),
iTally AS (SELECT rn = ROW_NUMBER() OVER (ORDER BY (SELECT 1)) FROM L2 a CROSS JOIN L2 b)
SELECT
r.RN,
r.OP,
r.N1,
r.N2
FROM
(
SELECT
RN = 0,
OP = (@high-@low)/@gap,
N1 = @low,
N2 = @gap+@low
WHERE @row1 = 0
UNION ALL -- ISNULL required in the TOP statement below for error handling purposes
SELECT TOP (ABS((ISNULL(@high,0)-ISNULL(@low,0))/ISNULL(@gap,0)+ISNULL(@row1,1)))
RN = i.rn,
OP = (@high-@low)/@gap+(2*@row1)-i.rn,
N1 = (i.rn-@row1)*@gap+@low,
N2 = (i.rn-(@row1-1))*@gap+@low
FROM iTally AS i
ORDER BY i.rn
) AS r
WHERE @high&@low&@gap&@row1 IS NOT NULL AND @high >= @low AND @gap > 0;
GO

Count the number of specific days of the month that have passed since a given date

I'm writing a function in SQL Server 2012 that needs to know how many of 3 specific days of the month have passed since a given date. I can do this with a WHILE loop, but it's slow, and I was looking for a better way.
Here is what I have so far:
Let's assume that GETDATE() = '11/14/2016' and @productDate = '10/1/2016'
--Get the number of "units" that have passed since the date on the label
DECLARE @unitCount INT = 0;
DECLARE @countingDate DATE
SET @countingDate = DATEADD(DAY,1,@productDate); --add 1 to prevent counting the date on the label as the first unit
WHILE (@countingDate < CAST(GETDATE() AS date))
BEGIN
SELECT @unitCount = @unitCount +
CASE
WHEN DAY(@countingDate) = 1 OR DAY(@countingDate) = 10 OR DAY(@countingDate) = 20 THEN 1
ELSE 0
END
SET @countingDate = DATEADD(DAY,1,@countingDate);
END
This will result in @unitCount = 4
GETDATE() of '11/20/2016' would result in @unitCount = 5
Without using a numbers table
create function dbo.fn_DateCounter
(
@datefrom date,
@dateto date
)
returns int
as
begin
return
-- number of complete months
3 *
(
(DATEPART(YYYY, @dateto) * 12 + DATEPART(MM, @dateto))
-(DATEPART(YYYY, @datefrom) * 12 + DATEPART(MM, @datefrom))
- 1
)
-- add on the extras from the first month
+ case when DATEPART(DD, @datefrom) < 10 then 2
when DATEPART(DD, @datefrom) < 20 then 1
else 0
end
-- add on the extras from the last month
+ case when DATEPART(DD, @dateto) > 20 then 3
when DATEPART(DD, @dateto) > 10 then 2
else 1
end
end
go
select dbo.fn_DateCounter('01-jan-2000','01-jan-2000') -- 0
select dbo.fn_DateCounter('01-jan-2000','10-jan-2000') -- 0
select dbo.fn_DateCounter('01-jan-2000','11-jan-2000') -- 1
select dbo.fn_DateCounter('01-jan-2000','20-jan-2000') -- 1
select dbo.fn_DateCounter('01-jan-2000','21-jan-2000') -- 2
select dbo.fn_DateCounter('11-jan-2000','21-jan-2000') -- 1
select dbo.fn_DateCounter('11-jan-2000','21-feb-2000') -- 4
select dbo.fn_DateCounter('01-jan-2000','01-jan-2001') -- 36
select dbo.fn_DateCounter('01-jan-2000','11-jan-2001') -- 37
You can use a combination of sum, case, and the master..spt_values table:
declare @productDate datetime = '11/01/2016',
@unitCount int
;with nums as ( -- use a CTE to build a number list
select top 1000 number from master..spt_values
)
select @unitCount = sum(
case when day(dateadd(day, n, @productDate)) in (1, 10, 20)
then 1 else 0 end
) -- add 1 for each 1,10,20 we find
from (
select n = row_number() over (order by nums.number)
from nums cross join nums as num -- 1000*1000 = 1 million rows
) n
where dateadd(day, n, @productDate) < getdate()
select @unitCount
This will grab each date between @productDate and getdate(). The CASE expression returns 1 for each 1st/10th/20th and 0 for every other date. Finally, we take the sum of the result.
For 11/1 - 11/11, it returns 1.
For 1/1 - 11/11, the result is 31.
EDIT: In the CTE (with nums as...), we select 1-1000, and then we do a cross join which gives us a million records to work with. The answer is still limited, but now you can go ~2700 years with this.

Unexpected NULL result with SELECT CASE statement

The following is my query:
DECLARE
@ThisYear INT = CAST(DATEPART(yy, GETDATE()) AS INT)
,@LastYear INT = CAST(DATEPART(yy, GETDATE()) - 1 AS INT)
SELECT
CASE WHEN CAST(DATEPART(yy, mq.[qDate]) AS INT) = @ThisYear THEN 5
WHEN CAST(DATEPART(yy, mq.[qDate]) - 1 AS INT) = @LastYear THEN 6 END AS 'TimePeriodID'
,Name = 'QueryTotals'
,SUM(CASE WHEN dQuery = 1 THEN 1 ELSE 0 END) AS 'DefaultQuery'
,SUM(CASE WHEN dQuery = 0 THEN 1 ELSE 0 END) AS 'Non-DefaultQuery'
,COUNT(dQuery) AS 'TotalQueries'
FROM
mQuery mq
INNER JOIN
mParameter mp
ON
mq.id = mp.id
INNER JOIN
Variable v
ON
mp.id = v.id
WHERE
v.id <= 10
GROUP BY
CASE WHEN CAST(DATEPART(yy, mq.[qDate]) AS INT) = @ThisYear THEN 5
WHEN CAST(DATEPART(yy, mq.[qDate]) - 1 AS INT) = @LastYear THEN 6 END
The following is a screenshot of my results:
Please disregard rows with TimePeriodID = 1, 2, 3, 4 as these were UNIONed to the query stated in question.
Why am I getting a NULL in TimePeriodID when it should be 6?
Please note that the resulting totals are correct and there are dates in the table with dates that satisfy the condition to be 6.
I am at a loss with this.
An extra eye would be fantastic.
Thanks.
This combination of statements:
DECLARE
@ThisYear INT = CAST(DATEPART(yy, GETDATE()) AS INT)
,@LastYear INT = CAST(DATEPART(yy, GETDATE()) - 1 AS INT)
SELECT
CASE WHEN CAST(DATEPART(yy, mq.[qDate]) AS INT) = @ThisYear THEN 5
WHEN CAST(DATEPART(yy, mq.[qDate]) - 1 AS INT) = @LastYear THEN 6 END AS 'TimePeriodID'
makes it impossible for the 2nd CASE branch to ever be true. If the first cast != @ThisYear, then the second cast (which is the first minus 1) can't equal @LastYear, because @LastYear = @ThisYear - 1.
So that's the reason. The solution is unclear because you didn't show what you're trying to do.
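If the intent was simply "this year maps to 5, last year maps to 6", a sketch of one likely fix (an assumption about the desired behavior, which the question doesn't show) is to drop the `- 1` and compare qDate's own year to each variable, in both the SELECT and the GROUP BY:

```sql
CASE WHEN DATEPART(yy, mq.[qDate]) = @ThisYear THEN 5
     WHEN DATEPART(yy, mq.[qDate]) = @LastYear THEN 6
END AS TimePeriodID
```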

Count total time difference with single query

I have data from the database like this:
HDiffrence MDiffrence Interv
2 14 2 Hours 14 Minutes
0 4 0 Hours 4 Minutes
so I need to convert both H and M into a time format, and I do some logic in another query.
I do a check like this; here's the check for whether HH:MM is more than 15 minutes:
count((select(case when((select convert(time,(HDiffrence +':'+MDiffrence),114)) > ((select convert(time,('00' +':'+'15'),114))) )then 1 when ((select convert(time,(HDiffrence +':'+MDiffrence),114)) is null) then null else 0 end)))
and I put the check into :
select contractor , COUNT(pm.PantauID) as total ,
count((select(case when((select convert(time,(HDiffrence +':'+MDiffrence),114)) > ((select convert(time,('00' +':'+'15'),114))) )then 1 when ((select convert(time,(HDiffrence +':'+MDiffrence),114)) is null) then null else 0 end)))
from Pantau p
left join PantauMSG pm
on p.PantauID = pm.PantauID
where PantauType = 'PT2' and PantauStatus <> 'PS1' and
(CAST(SCH_DATE AS DATE) = (SELECT CONVERT (DATE, GETDATE(), 103) AS Expr1))
group by CONTRACTOR
but yes, I absolutely get an error:
Cannot perform an aggregate function on an expression containing an aggregate or a subquery.
So, based on my logic for checking the time: is there a simpler way to count the values that are more than 15 minutes?
If you're just working with integer values for Hours and Minutes, just use basic maths instead of converting it to a Time value:
declare @hours int = 1
declare @mins int = 35
declare @total_mins int = (@hours * 60) + @mins
select @total_mins - 15
Then use it something like this:
select contractor, COUNT(pm.PantauID) as total, sum(HDiffrence) as Hours,
sum(MDiffrence) as Minutes, sum(HDiffrence) * 60 + sum(MDiffrence) as TotalMinutes
from Pantau p
left join PantauMSG pm
on p.PantauID = pm.PantauID
where PantauType = 'PT2' and PantauStatus <> 'PS1' and
(CAST(SCH_DATE AS DATE) = (SELECT CONVERT (DATE, GETDATE(), 103) AS Expr1))
group by CONTRACTOR
Okay, based on Tanner's idea I changed my query and yes, it finally works:
select contractor , COUNT(pm.PantauID) as total ,
count(case when ((convert(int, HDiffrence)* 60 )+convert(int,MDiffrence)) > 15 then ((convert(int, HDiffrence)* 60 )+convert(int,MDiffrence)) else null end) as Morethan15,
count(case when ((convert(int, HDiffrence)* 60 )+convert(int,MDiffrence)) < 15 then ((convert(int, HDiffrence)* 60 )+convert(int,MDiffrence)) else null end) as lessthan15
from Pantau p
left join PantauMSG pm
on p.PantauID = pm.PantauID
where PantauType = 'PT2' and PantauStatus <> 'PS1' and
(CAST(SCH_DATE AS DATE) = (SELECT CONVERT (DATE, GETDATE(), 103) AS Expr1))
group by CONTRACTOR

Normalizing a table

I have a legacy table, which I can't change.
The values in it can be modified from legacy application (application also can't be changed).
Due to a lot of access to the table from new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries.
The actual requirement is to calculate the number of business days from X to Y. For example: give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday.)
The temporary table would be created from a .NET program, each time user enters the screen for this query (user may run query multiple times, with different values, table is created once), so I'd like it to be as fast as possible. Approach below runs in under a second, but I only tested it with a small dataset, and still it takes probably close to half a second, which isn't great for UI - even though it's just the overhead for first query.
The legacy table looks like this:
CREATE TABLE [business_days](
[country_code] [char](3) ,
[state_code] [varchar](4) ,
[calendar_year] [int] ,
[calendar_month] [varchar](31) ,
[calendar_month2] [varchar](31) ,
[calendar_month3] [varchar](31) ,
[calendar_month4] [varchar](31) ,
[calendar_month5] [varchar](31) ,
[calendar_month6] [varchar](31) ,
[calendar_month7] [varchar](31) ,
[calendar_month8] [varchar](31) ,
[calendar_month9] [varchar](31) ,
[calendar_month10] [varchar](31) ,
[calendar_month11] [varchar](31) ,
[calendar_month12] [varchar](31) ,
misc.
)
Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with an X. Each half day is marked with an H. For example, if a month starts on a Thursday, then it will look like this (Thursday + Friday workdays, Saturday + Sunday marked with X):
' XX XX ..'
I'd like the new table to look like so:
create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)
And I'd like to only have rows for days which are off (marked with X or H in the legacy table).
What I ended up doing, so far is this:
Create a temporary-intermediate table, that looks like this:
create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int)
To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc.
Then I have a loop which goes through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2.
To process a row, there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
[edit added code]
Declare @country_code char(3)
Declare @state_code varchar(4)
Declare @calendar_year int
Declare @calendar_month varchar(31)
Declare @month_code int
Declare @calendar_date datetime
Declare @day_code int
WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0)
BEGIN
Select Top 1 @country_code = t2.country_code, @state_code = t2.state_code, @calendar_year = t2.calendar_year, @calendar_month = t2.calendar_month, @month_code = t2.month_code From #Temp_2 t2 -- where processed = 0
set @day_code = 1
while @day_code <= 31
begin
if substring(@calendar_month, @day_code, 1) = 'X'
begin
set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 8)
end
if substring(@calendar_month, @day_code, 1) = 'H'
begin
set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
insert into #Temp (country, state, date, hours) values (@country_code, @state_code, @calendar_date, 4)
end
set @day_code = @day_code + 1
end
delete from #Temp_2 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
--update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
END
END
I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better approach suggestion.
After having the temp table, I'm planning to do (dates would be coming from a table):
select cast(convert(datetime, ('01/31/2012'), 101) -convert(datetime, ('01/17/2012'), 101) as int) - ((select sum(hours) from #Temp where date between convert(datetime, ('01/17/2012'), 101) and convert(datetime, ('01/31/2012'), 101)) / 8)
Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead use a simpler query to get the result.
(I'm currently trying this on MSSQL, but I would need to do same for Sybase ASE and Oracle)
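Written with variables, the same calculation would look like this (a sketch; `@from` and `@to` stand in for the table-driven dates):

```sql
declare @from date = '20120117', @to date = '20120131';

-- calendar days in the span, minus the non-working time (in days) logged in #Temp
select datediff(day, @from, @to)
     - (select sum(hours) from #Temp
        where date between @from and @to) / 8 as business_days;
```

`datediff(day, ...)` is equivalent to subtracting the two datetimes and casting to int, and reads a little more clearly.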
This should fulfill the requirement, "...calculate number of business days from X to Y."
It counts each space as a business day and anything other than an X or a space as a half day (should just be H, according to the OP).
I pulled this off in SQL Server 2008 R2:
-- Calculate number of business days from X to Y
declare @start date = '20120101' -- X
declare @end date = '20120101' -- Y
-- Outer query sums the length of the full_year text minus non-work days
-- Spaces are doubled to help account for half-days...then divide by two
select sum(datalength(replace(replace(substring(full_year, first_day, last_day - first_day + 1), ' ', '  '), 'X', '')) / 2.0) as number_of_business_days
from (
select
-- Get substring start value for each year
case
when calendar_year = datepart(yyyy, @start) then datepart(dayofyear, @start)
else 1
end as first_day
-- Get substring end value for each year
, case
when calendar_year = datepart(yyyy, @end) then datepart(dayofyear, @end)
when calendar_year > datepart(yyyy, @end) then 0
when calendar_year < datepart(yyyy, @start) then 0
else datalength(full_year)
end as last_day
, full_year
from (
select calendar_year
-- Get text representation of full year
, calendar_month
+ calendar_month2
+ calendar_month3
+ calendar_month4
+ calendar_month5
+ calendar_month6
+ calendar_month7
+ calendar_month8
+ calendar_month9
+ calendar_month10
+ calendar_month11
+ calendar_month12 as full_year
from business_days
-- where country_code = 'USA' etc.
) as get_year
) as get_days
A where clause can go on the inner-most query.
It is not an un-pivot of the legacy format, which the OP spends much time on and which will probably take more (and possibly unnecessary) computing cycles. I'm assuming such a thing was "nice to see" rather than part of the requirements. Jeff Moden has great articles on how a tally table could help in that case (for SQL Server, anyway).
It might be necessary to watch trailing spaces depending upon how a particular DBMS is set (notice that I'm using datalength and not len).
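To see why the doubling works, here is a worked example on a made-up fragment ('XX HXX': the one space is a full business day, the 'H' a half day):

```sql
-- 'XX HXX' -> double the spaces -> 'XX  HXX' -> strip the X's -> '  H'
-- datalength = 3, and 3 / 2.0 = 1.5 business days (1 full day + 1 half day)
select datalength(replace(replace('XX HXX', ' ', '  '), 'X', '')) / 2.0 as business_days

-- and why datalength rather than len: len ignores trailing spaces
select len('XH ') as with_len, datalength('XH ') as with_datalength  -- 2 vs 3
```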
UPDATE: Added the OP's requested temp table:
select country_code
, state_code
, dateadd(d, t.N - 1, cast(cast(a.calendar_year as varchar(8)) as date)) as calendar_date
, case substring(full_year, t.N, 1) when 'X' then 0 when 'H' then 4 else 8 end as business_hours
from (
select country_code
, state_code
, calendar_year
, calendar_month
+ calendar_month2
+ calendar_month3
+ calendar_month4
+ calendar_month5
+ calendar_month6
+ calendar_month7
+ calendar_month8
+ calendar_month9
+ calendar_month10
+ calendar_month11
+ calendar_month12
as full_year
from business_days
) as a, (
select a.N + b.N * 10 + c.N * 100 + 1 as N
from (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) a
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) b
, (select 0 as N union all select 1 union all select 2 union all select 3 union all select 4 union all select 5 union all select 6 union all select 7 union all select 8 union all select 9) c
) as t -- cross join with Tally table built on the fly
where t.N <= datalength(a.full_year)
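To avoid re-deriving this on every call, the tally query above can feed a SELECT ... INTO once; range queries against the materialized table are then simple lookups. A sketch, with a hypothetical table name and sample filter values:

```sql
-- After adding "into #business_day_calendar" before the FROM of the query above,
-- a business-day count for a state and date range becomes:
select sum(business_hours) / 8.0 as business_days
from #business_day_calendar
where country_code = 'USA'        -- hypothetical filter values
  and state_code = 'CA'
  and calendar_date between '20120117' and '20120131'
```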
Given that your temp table is slow to create, are you able to pre-calculate it?
If you're able to put a trigger on the existing table, perhaps you could fire a proc which will drop and create the temp table. Or have an agent job which checks to see if the existing table has been updated (raise a flag somewhere) and then recomputes the temp table.
The existing table's structure is so woeful that I wouldn't be surprised if it will always be expensive to normalize it. Pre-calculating is an easy and simple way around that problem.
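A minimal sketch of the trigger idea; both object names here are hypothetical, and the proc would run the normalizing query (e.g. the tally-table version above) into a permanent table:

```sql
-- Fires whenever the legacy table changes and rebuilds the pre-calculated calendar.
-- dbo.rebuild_business_day_calendar is a hypothetical proc holding the
-- normalization logic.
create trigger trg_business_days_changed
on dbo.business_days
after insert, update, delete
as
begin
    set nocount on;
    exec dbo.rebuild_business_day_calendar;
end
```

Keep the proc cheap (or have the trigger merely raise a flag that an agent job polls), since it will run inside every write to the legacy table.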