SQL QUERY to display NULL values when using COUNT - sql-server

I need to produce the number of health club members that are enrolled through their employer (several different employers, not just one) for a monthly membership, what level of membership they have, plus their family members. The problem I am having is that we currently do not have any LEVEL D memberships, but we may in the future. I need the report to display '0' when there is no membership for a level. I tried
COUNT(DISTINCT CUSTOMER_ID) +
COUNT(CASE WHEN CUSTOMER_ID IS NULL THEN 1 END) AS NUMBER_OF_CUSTOMERS
and it did not work; any help is appreciated.
SELECT
MEMBERSHIP_TYPE,
COUNT(DISTINCT CUSTOMER_ID) AS NUMBER_OF_CUSTOMERS,
COUNT(CASE WHEN CUSTOMER_RELATION = 'FAMILYMEMBER' THEN 1 END) AS FAMILY_MEMBERS,
COUNT(DISTINCT CUSTOMER_ID) + COUNT(CASE WHEN CUSTOMER_RELATION = 'FAMILYMEMBER' THEN 1 END) AS TOTAL
-- FROM / GROUP BY clauses omitted in the original question
This is what I currently get
MEMBERSHIP_TYPE NUMBER_OF_CUSTOMERS FAMILY_MEMBERS TOTAL
-------------------------------------------------------------
LEVEL A 100 25 125
LEVEL B 630 340 970
LEVEL C 1201 630 1831
I need this
MEMBERSHIP_TYPE NUMBER_OF_CUSTOMERS FAMILY_MEMBERS TOTAL
-------------------------------------------------------------
LEVEL A 100 25 125
LEVEL B 630 340 970
LEVEL C 1201 630 1831
LEVEL D 0 0 0

You can hardcode the value into the select statement by adding a UNION ALL to your SQL statement:
...your current select statement
UNION ALL
SELECT 'LEVEL D', 0, 0, 0;
Once LEVEL D entries exist in the table, your original query will return the real counts for that level, at which point the hardcoded row should be removed or guarded so it is not duplicated (see the sketch below).
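For reference, here is a minimal end-to-end sketch of the combined query. The table name MEMBERSHIPS is hypothetical (the original question omits the FROM clause), and the NOT EXISTS guard is an optional addition so the hardcoded row disappears once real LEVEL D data exists:
SELECT
MEMBERSHIP_TYPE,
COUNT(DISTINCT CUSTOMER_ID) AS NUMBER_OF_CUSTOMERS,
COUNT(CASE WHEN CUSTOMER_RELATION = 'FAMILYMEMBER' THEN 1 END) AS FAMILY_MEMBERS,
COUNT(DISTINCT CUSTOMER_ID) + COUNT(CASE WHEN CUSTOMER_RELATION = 'FAMILYMEMBER' THEN 1 END) AS TOTAL
FROM MEMBERSHIPS  -- hypothetical table name
GROUP BY MEMBERSHIP_TYPE
UNION ALL
SELECT 'LEVEL D', 0, 0, 0
WHERE NOT EXISTS (SELECT 1 FROM MEMBERSHIPS WHERE MEMBERSHIP_TYPE = 'LEVEL D')
ORDER BY MEMBERSHIP_TYPE;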

Related

Choose rows equal to the max value from a query

I want to know who has the most friends in the app I own (transactions), where a friend is any other user they either paid or got paid by.
I can't get the query to show me only those who have the max number of friends (it can be one user or many, and it can change, so I can't use a fixed LIMIT).
;with relationships as
(
select
paid as 'auser',
Member_No as 'afriend'
from Payments$
union all
select
member_no as 'auser',
paid as 'afriend'
from Payments$
),
DistinctRelationships AS (
SELECT DISTINCT *
FROM relationships
)
select
afriend,
count(*) cnt
from DistinctRelationShips
GROUP BY
afriend
order by
count(*) desc
I just can't figure it out; I've tried COUNT, MAX(COUNT), WHERE = MAX, and nothing worked.
It's a two-column table, "Member_No" and "Paid": the member pays the money, and Paid is the one who got the money.
Member_No  Paid
14         18
17         1
12         20
12         11
20         8
6          3
2          4
9          20
8          10
5          20
14         16
5          2
12         1
14         10
It's from Excel, but I loaded it into SQL Server.
It's just a sample; there are 1,000 more rows.
It seems like you are massively over-complicating this. There is no need for self-joining.
Just unpivot each row so you have both sides of the relationship, then group by one side and count the distinct values of the other side:
SELECT
-- for just the first then SELECT TOP (1)
-- for all that tie for the top place use SELECT TOP (1) WITH TIES
v.Id,
Relationships = COUNT(DISTINCT v.Other),
TotalTransactions = COUNT(*)
FROM Payments$ p
CROSS APPLY (VALUES
(p.Member_No, p.Paid),
(p.Paid, p.Member_No)
) v(Id, Other)
GROUP BY
v.Id
ORDER BY
COUNT(DISTINCT v.Other) DESC;
db<>fiddle
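If you only want the member(s) with the most friends, including ties, the comments in the query above already point at it; a minimal variant:
SELECT TOP (1) WITH TIES
v.Id,
Relationships = COUNT(DISTINCT v.Other)
FROM Payments$ p
CROSS APPLY (VALUES
(p.Member_No, p.Paid),
(p.Paid, p.Member_No)
) v(Id, Other)
GROUP BY v.Id
ORDER BY COUNT(DISTINCT v.Other) DESC;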

create date range report based on history table

We have been keeping track of some changes in a History Table like this:
ChangeID EmployeeID PropertyName OldValue NewValue ModifiedDate
100 10 EmploymentStart Not Set 1 2013-01-01
101 10 SalaryValue Not Set 55000 2013-01-01
102 10 SalaryValue 55000 61500 2013-03-20
103 10 SalaryEffectiveDate 2013-01-01 2013-04-01 2013-03-20
104 11 EmploymentStart Not Set 1 2013-01-21
105 11 SalaryValue Not Set 43000 2013-01-21
106 10 SalaryValue 61500 72500 2013-09-20
107 10 SalaryEffectiveDate 2013-04-01 2013-10-01 2013-09-20
Basically, if an employee's salary changes, we log two rows in the history table: one row for the salary value itself and the other row for the salary effective date. These two rows have identical modification date/times, and it is safe to assume they always appear one after the other in the database. We can also assume that the salary value is always logged first (so it is the record immediately before the corresponding effective date).
Now we are looking into creating reports based on a given date range into a table like this:
Annual Salary Change Report (2013)
EmployeeID Date1 Date2 Salary
10 2013-01-01 2013-04-01 55000
10 2013-04-01 2013-10-01 61500
10 2013-10-01 2013-12-31 72500
11 2013-03-21 2013-12-31 43000
I have done something similar in the past by joining the table to itself, but in those cases the effective date and the new value were in the same row. Now I have to create each row of the output table by looking at several rows of the existing history table. Is there a straightforward way of doing this without using cursors?
Edit #1:
I'm reading up on this and apparently it's doable using PIVOT.
Thank you very much in advance.
You can use a self join to get the result you want. The trick is to create a CTE and add two extra rows for each EmployeeID, as follows (I call the history table ht):
with cte1 as
(
select EmployeeID, PropertyName, OldValue, NewValue, ModifiedDate
from ht
union all
select t1.EmployeeID,
(case when t1.PropertyName = 'EmploymentStart' then 'SalaryEffectiveDate' else t1.PropertyName end),
(case when t1.PropertyName = 'EmploymentStart' then t1.ModifiedDate else t1.NewValue end),
(case when t1.PropertyName = 'SalaryValue' then t1.NewValue
when t1.PropertyName = 'SalaryEffectiveDate' then '2013-12-31'
when t1.PropertyName = 'EmploymentStart' then '2013-12-31' end),
'2013-12-31'
from ht t1
where t1.ModifiedDate = (select max(t2.ModifiedDate) from ht t2 where t1.EmployeeID = t2.EmployeeID)
)
select t3.EmployeeID, t4.OldValue Date1, t4.NewValue Date2, t3.OldValue Salary
from cte1 t3
inner join cte1 t4 on t3.EmployeeID = t4.EmployeeID
and t3.ModifiedDate = t4.ModifiedDate
where t3.PropertyName = 'SalaryValue'
and t4.PropertyName = 'SalaryEffectiveDate'
order by t3.EmployeeID, Date1
I hope this helps.
It is a little overkill to use PIVOT since you only need two properties. Using GROUP BY can also achieve this:
;WITH cte_salary_history(EmployeeID,SalaryEffectiveDate,SalaryValue)
AS
(
SELECT EmployeeID,
MAX(CASE WHEN PropertyName='SalaryEffectiveDate' THEN NewValue ELSE NULL END) AS SalaryEffectiveDate,
MAX(CASE WHEN PropertyName='SalaryValue' THEN NewValue ELSE NULL END) AS SalaryValue
FROM yourtable
GROUP BY EmployeeID,ModifiedDate
)
SELECT EmployeeID,SalaryEffectiveDate,
LEAD(SalaryEffectiveDate,1,'9999-12-31') OVER(PARTITION BY EmployeeID ORDER BY SalaryEffectiveDate) AS SalaryEndDate,
SalaryValue
FROM cte_salary_history
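If you want the last interval capped at the end of the year (to match the desired 2013 output) rather than open-ended, one option, assuming the same CTE, is to pass the cut-off date as the LEAD default:
LEAD(SalaryEffectiveDate, 1, '2013-12-31') OVER(PARTITION BY EmployeeID ORDER BY SalaryEffectiveDate) AS SalaryEndDate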

T-SQL Count of Records in Status for Previous Months

I have a T-SQL Quotes table and need to be able to count how many quotes were in an open status during past months.
The dates I have to work with are an 'Add_Date' timestamp and an 'Update_Date' timestamp. Once a quote has a value of '1' in either the 'Win' or 'Loss' column it can no longer be updated, so the 'Update_Date' effectively becomes the closed-status timestamp.
Here's a few example records:
Quote_No Add_Date Update_Date Open_Quote Win Loss
001 01-01-2016 NULL 1 0 0
002 01-01-2016 3-1-2016 0 1 0
003 01-01-2016 4-1-2016 0 0 1
Here's a link to all the data here:
https://drive.google.com/open?id=0B4xdnV0LFZI1T3IxQ2ZKRDhNd1k
I asked this question previously this year and have been using the following code:
with n as (
select row_number() over (order by (select null)) - 1 as n
from master..spt_values
)
select format(dateadd(month, n.n, q.add_date), 'yyyy-MM') as yyyymm,
count(*) as Open_Quote_Count
from quotes q join
n
on (closed_status = 1 and dateadd(month, n.n, q.add_date) <= q.update_date) or
(closed_status = 0 and dateadd(month, n.n, q.add_date) <= getdate())
group by format(dateadd(month, n.n, q.add_date), 'yyyy-MM')
order by yyyymm;
The problem is this code is returning a cumulative value. So January was fine, but then Feb is really Jan + Feb, March is Jan + Feb + March, and so on. It took me a while to discover this; the numbers being returned are now way, way off and I'm trying to correct them.
From the full data set the results of this code are:
Year-Month Open_Quote_Count
2017-01 153
2017-02 265
2017-03 375
2017-04 446
2017-05 496
2017-06 560
2017-07 609
The desired result would be how many quotes were in an open status during that particular month, not the cumulative total:
Year-Month Open_Quote_Count
2017-01 153
2017-02 112
2017-03 110
2017-04 71
Thank you in advance for your help!
Unless I am missing something, LAG() would be a good fit here.
Example:
Declare #YourTable Table ([Year-Month] varchar(50),[Open_Quote_Count] int)
Insert Into #YourTable Values
('2017-01',153)
,('2017-02',265)
,('2017-03',375)
,('2017-04',446)
,('2017-05',496)
,('2017-06',560)
,('2017-07',609)
Select *
,NewValue = [Open_Quote_Count] - lag([Open_Quote_Count],1,0) over (Order by [Year-Month])
From #YourTable --<< Replace with your initial query
Returns
Year-Month Open_Quote_Count NewValue
2017-01 153 153
2017-02 265 112
2017-03 375 110
2017-04 446 71
2017-05 496 50
2017-06 560 64
2017-07 609 49

Join/UNION ALL to show result in different columns

I am fetching COUNT from 3 different tables based on some conditions, but I need to group the results by time interval (e.g. 1 hour, 30 minutes).
I need the following output:
Date Interval Success Un-Success Closed CLInotFound
2/20/2016 01:01 – 02:00 5 3 2 13
2/20/2016 02:01 – 03:00 14 9 23 5
2/20/2016 03:01 – 04:00 8 67 89 345
2/20/2016 04:01 – 05:00 2 23 92 12
2/20/2016 05:01 – 06:00 44 55 78 98
2/20/2016 06:01 – 07:00 12 87 56 445
I am able to calculate them separately, but when I try to combine them the result comes out different.
Query 1 For Success & Un-Success:
SELECT CONVERT(VARCHAR(5), A.InsertionDate ,108) AS 'Interval',
COUNT(CASE WHEN A.call_result = 0 then 1 ELSE NULL END) AS 'Success',
COUNT(CASE WHEN A.call_result = 1 then 1 ELSE NULL END) AS 'Un-Success'
from dbo.AutoRectifier A
WHERE CONVERT(DateTime,A.InsertionDate,101) BETWEEN '2016-02-19 02:10:35.000' AND '2016-02-19 07:15:35.000'
GROUP BY A.InsertionDate;
Query 2 For Closed:
SELECT CONVERT(VARCHAR(5), C.DateAdded ,108) AS 'Interval',
COUNT(*) AS 'Closed' FROM dbo.ChangeTicketState C
WHERE C.SourceFlag = 'S-CNR' AND C.RET LIKE '%CLOSE%'
AND C.DateAdded BETWEEN '2016-02-19 02:10:35.000' AND '2016-02-19 07:15:35.000'
GROUP BY C.DateAdded;
Query 3 For CLI Not Found:
SELECT CONVERT(VARCHAR(5), T.DateAdded ,108) AS 'Interval',
COUNT(*) 'CLI Not Found' FROM dbo.TICKET_INFO T
WHERE T.CONTACT_NUMBER = '' AND T.DateAdded BETWEEN '2016-02-19 02:10:35.000' AND '2016-02-19 07:15:35.000'
GROUP BY T.DateAdded;
You have several problems to solve in your question.
You have to produce a union result set from Query1, Query2 and Query3 so you can group it. You can use UNION ALL for this, but all 3 queries must have the same column list. So add
0 AS Closed, 0 AS CLInotFound
to the select list of Query1, add
0 AS Success, 0 AS [Un-Success], 0 AS CLInotFound
to the select list of Query2, and add
0 AS Success, 0 AS [Un-Success], 0 AS Closed
to Query3.
Then you can write
select * from Query1
union all
select * from Query2
union all
select * from Query3
Don't convert the date to varchar in Query1, Query2 and Query3. It is better to return a datetime from each query so it can be used for grouping after the union. So Query1 will look like
SELECT A.InsertionDate AS Date, ...
Query2 -
SELECT C.DateAdded AS Date, ...
etc.
Then you can group results on per-hour basis, for instance using GROUP BY SUBSTRING(CONVERT(VARCHAR(20), Date ,120), 1, 13)
So the result will look like
SELECT SUBSTRING(CONVERT(VARCHAR(20), Date, 120), 1, 13) AS Interval,
sum(Success) AS Success,
sum([Un-Success]) AS [Un-Success],
sum(Closed) AS Closed,
sum(CLInotFound) AS CLInotFound
from (
select * from Query1
union all
select * from Query2
union all
select * from Query3
) q
GROUP BY SUBSTRING(CONVERT(VARCHAR(20), Date ,120), 1, 13)
Its result has a slightly different format for the Date and Interval fields, but it shows the idea.
You can use GROUP BY DATEPART(yy, Date), DATEPART(mm, Date), DATEPART(dd, Date), DATEPART(hh, Date) instead of GROUP BY SUBSTRING(CONVERT(VARCHAR(20), Date ,120), 1, 13) and format it as you wish.
Also, the result set does not contain intervals that are not present in the original data.
You can fix that by adding a Query4 that contains all the required intervals with zeros in every field, as sketched below.
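As a rough sketch of such a Query4, assuming you want one row per hour of the reporting window (the hour-generation technique and the window start are illustrative):
SELECT DATEADD(HOUR, n.n, '2016-02-19 00:00:00') AS Date,
0 AS Success, 0 AS [Un-Success], 0 AS Closed, 0 AS CLInotFound
FROM (SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
FROM master..spt_values) n;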

sql cross join - what use has anyone found for it? [closed]

Today, for the first time in 10 years of development with SQL Server, I used a cross join in a production query. I needed to pad a result set for a report and found that a cross join between two tables with a creative where clause was a good solution. I was wondering: what uses has anyone else found for the cross join in production code?
Update: the code posted by Tony Andrews is very close to what I used the cross join for. Believe me, I understand the implications of using a cross join and would not do so lightly. I was excited to have finally used it (I'm such a nerd) - sort of like the time I first used a full outer join.
Thanks to everyone for the answers! Here's how I used the cross join:
SELECT CLASS, [Trans-Date] as Trans_Date,
SUM(CASE TRANS
WHEN 'SCR' THEN [Std-Labor-Value]
WHEN 'S+' THEN [Std-Labor-Value]
WHEN 'S-' THEN [Std-Labor-Value]
WHEN 'SAL' THEN [Std-Labor-Value]
WHEN 'OUT' THEN [Std-Labor-Value]
ELSE 0
END) AS [LABOR SCRAP],
SUM(CASE TRANS
WHEN 'SCR' THEN [Std-Material-Value]
WHEN 'S+' THEN [Std-Material-Value]
WHEN 'S-' THEN [Std-Material-Value]
WHEN 'SAL' THEN [Std-Material-Value]
ELSE 0
END) AS [MATERIAL SCRAP],
SUM(CASE TRANS WHEN 'RWK' THEN [Act-Labor-Value] ELSE 0 END) AS [LABOR REWORK],
SUM(CASE TRANS
WHEN 'PRD' THEN [Act-Labor-Value]
WHEN 'TRN' THEN [Act-Labor-Value]
WHEN 'RWK' THEN [Act-Labor-Value]
ELSE 0
END) AS [ACTUAL LABOR],
SUM(CASE TRANS
WHEN 'PRD' THEN [Std-Labor-Value]
WHEN 'TRN' THEN [Std-Labor-Value]
ELSE 0
END) AS [STANDARD LABOR],
SUM(CASE TRANS
WHEN 'PRD' THEN [Act-Labor-Value] - [Std-Labor-Value]
WHEN 'TRN' THEN [Act-Labor-Value] - [Std-Labor-Value]
--WHEN 'RWK' THEN [Act-Labor-Value]
ELSE 0 END) -- - SUM([Std-Labor-Value]) -- - SUM(CASE TRANS WHEN 'RWK' THEN [Act-Labor-Value] ELSE 0 END)
AS [LABOR VARIANCE]
FROM v_Labor_Dist_Detail
where [Trans-Date] between #startdate and #enddate
--and CLASS = (CASE #class WHEN '~ALL' THEN CLASS ELSE #class END)
GROUP BY [Trans-Date], CLASS
UNION --REL 2/6/09 Pad result set with any missing dates for each class.
select distinct [Description] as class, cast([Date] as datetime) as [Trans-Date], 0,0,0,0,0,0
FROM Calendar_To_Fiscal cross join PRMS.Product_Class
where cast([Date] as datetime) between #startdate and #enddate and
not exists (select class FROM v_Labor_Dist_Detail vl where [Trans-Date] between #startdate and #enddate
and vl.[Trans-Date] = cast(Calendar_To_Fiscal.[Date] as datetime)
and vl.class= PRMS.Product_Class.[Description]
GROUP BY [Trans-Date], CLASS)
order by [Trans-Date], CLASS
A typical legitimate use of a cross join would be a report that shows e.g. total sales by product and region. If no sales were made of product P in region R then we want to see a row with a zero, rather than just not showing a row.
select r.region_name, p.product_name, sum(s.sales_amount)
from regions r
cross join products p
left outer join sales s on s.region_id = r.region_id
and s.product_id = p.product_id
group by r.region_name, p.product_name
order by r.region_name, p.product_name;
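One caveat not mentioned above: SUM over a group with no matching sales rows returns NULL rather than 0, so if you literally want zeros you can wrap it, for example:
select r.region_name, p.product_name, coalesce(sum(s.sales_amount), 0) as total_sales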
One use I've come across a lot is splitting records out into several records, mainly for reporting purposes.
Imagine a string where each character represents some event during the corresponding hour.
ID | Hourly Event Data
1 | -----X-------X-------X--
2 | ---X-----X------X-------
3 | -----X---X--X-----------
4 | ----------------X--X-X--
5 | ---X--------X-------X---
6 | -------X-------X-----X--
Now you want a report which shows how many events happened in each hour. Cross join the table with a table of IDs 1 to 24, then work your magic...
SELECT
[hours].id,
SUM(CASE WHEN SUBSTRING([data].string, [hours].id, 1) = 'X' THEN 1 ELSE 0 END)
FROM
[data]
CROSS JOIN
[hours]
GROUP BY
[hours].id
=>
1, 0
2, 0
3, 0
4, 2
5, 0
6, 2
7, 0
8, 1
9, 0
10, 2
11, 0
12, 0
13, 2
14, 1
15, 0
16, 1
17, 2
18, 0
19, 0
20, 1
21, 1
22, 3
23, 0
24, 0
I have different reports that prefilter the recordset (by various lines of business within the firm), but there were calculations that required percentages of revenue firm-wide. The recordsource had to contain the firm total instead of relying on calculating the overall sum in the report itself.
Example: The recordset has balances for each client and the Line of Business the client's revenue comes from. The report may only show 'retail' clients. There is no way to get a sum of the balances for the entire firm, but the report shows the percentage of the firm's revenue.
Since there are different balance fields, I felt it was less complicated to join every detail row with the view that holds the various firm-wide balances (I can also reuse this view of firm totals) than to build multiple fields out of subqueries.
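As a rough illustration of that pattern (table and view names here are hypothetical, not from the original report): every detail row is cross joined with a one-row totals view, so the firm-wide figure is available for the percentage calculation on each row:
select c.ClientID,
c.Balance,
c.Balance * 100.0 / ft.FirmTotalBalance as PctOfFirmRevenue  -- firm-wide total available on every row
from ClientBalances c
cross join FirmTotals ft  -- one-row view holding the firm-wide totals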
Another one is an update statement where multiple records needed to be created (one record for each step in a preset workflow process).
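A simplified sketch of that idea, with hypothetical table and column names: a cross join lets a single statement create one record per item per workflow step:
insert into ItemWorkflowSteps (ItemID, StepNo)
select i.ItemID, ws.StepNo
from NewItems i
cross join WorkflowSteps ws;  -- one row for each item/step combination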
Here's one, where the CROSS JOIN substitutes for an INNER JOIN. This is useful and legitimate when there are no identical values between two tables on which to join. For example, suppose you have a table that contains version 1, version 2 and version 3 of some statement or company document, all saved in a SQL Server table so that you can recreate a document that is associated with an order, on the fly, long after the order, and long after your document was rewritten into a new version. But only one of the two tables you need to join (the Documents table) has a VersionID column. Here is a way to do this:
SELECT d.DocumentText, d.VersionID
FROM Documents d
CROSS JOIN Orders o
WHERE o.DateOrdered BETWEEN d.EffectiveStart AND d.EffectiveEnd
I've used a CROSS JOIN recently in a report that we use for sales forecasting; the report needs to break out the amount of sales each salesperson has made in each General Ledger account.
So in the report I do something to this effect:
SELECT gla.AccountN, s.SalespersonN
FROM
GLAccounts gla
CROSS JOIN Salesperson s
WHERE (gla.SalesAnalysis = 1 OR gla.AccountN = 47500)
This gives me every GL account for every sales person like:
SalesPsn AccountN
1000 40100
1000 40200
1000 40300
1000 48150
1000 49980
1000 49990
1005 40100
1005 40200
1005 40300
1054 48150
1054 49980
1054 49990
1078 40100
1078 40200
1078 40300
1078 48150
1078 49980
1078 49990
1081 40100
1081 40200
1081 40300
1081 48150
1081 49980
1081 49990
1188 40100
1188 40200
1188 40300
1188 48150
1188 49980
1188 49990
For charting (reports) where every grouping must have a record even if it is zero (e.g. RadCharts).
I had combinations of an insolvency field in my source data.
There are 5 distinct types, but the data had combinations of 2 of these. So I created a lookup table of the 5 distinct values, then used a cross join in an insert statement to fill out the rest, like so:
insert into LK_Insolvency (code, value)
select a.code + b.code, a.value + ' ' + b.value
from LK_Insolvency a
cross join LK_Insolvency b
where a.code <> b.code  -- this makes sure the cross product of a value with itself is not used, as that does not appear in the source data
I personally try to avoid Cartesian products in my queries. I suppose having a result set of every combination from your join could be useful, but usually if I end up with one, I know I have something wrong.
