How to replace many OUTER APPLY (SELECT * FROM ...) calls - sql-server

Applies to Microsoft SQL Server 2008 R2.
The problem: once a query contains a few dozen OUTER APPLYs (around 30), it becomes quite slow. Inside each OUTER APPLY I have something more complicated than a simple SELECT - a view.
Details
I'm implementing a kind of attribute system assigned to tables in the database. In general, a few tables each hold a reference to an attribute (key, value) table.
Pseudo structure looks like this:
DECLARE @Lot TABLE (
    LotId INT PRIMARY KEY IDENTITY,
    SomeText VARCHAR(8))
INSERT INTO @Lot
OUTPUT INSERTED.*
VALUES ('Hello'), ('World')

DECLARE @Attribute TABLE(
    AttributeId INT PRIMARY KEY IDENTITY,
    LotId INT,
    Val VARCHAR(8),
    Kind VARCHAR(8))
INSERT INTO @Attribute
OUTPUT INSERTED.* VALUES
    (1, 'Foo1', 'Kind1'), (1, 'Foo2', 'Kind2'),
    (2, 'Bar1', 'Kind1'), (2, 'Bar2', 'Kind2'), (2, 'Bar3', 'Kind3')
LotId       SomeText
----------- --------
1           Hello
2           World

AttributeId LotId       Val      Kind
----------- ----------- -------- --------
1           1           Foo1     Kind1
2           1           Foo2     Kind2
3           2           Bar1     Kind1
4           2           Bar2     Kind2
5           2           Bar3     Kind3
I can now run a query such as:
SELECT
[l].[LotId]
, [SomeText]
, [Oa1].[AttributeId]
, [Oa1].[LotId]
, 'Kind1Val' = [Oa1].[Val]
, [Oa1].[Kind]
, [Oa2].[AttributeId]
, [Oa2].[LotId]
, 'Kind2Val' = [Oa2].[Val]
, [Oa2].[Kind]
, [Oa3].[AttributeId]
, [Oa3].[LotId]
, 'Kind3Val' = [Oa3].[Val]
, [Oa3].[Kind]
FROM @Lot AS l
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind1') AS Oa1
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind2') AS Oa2
OUTER APPLY(SELECT * FROM @Attribute AS la WHERE la.[LotId] = l.[LotId] AND la.[Kind] = 'Kind3') AS Oa3
LotId SomeText AttributeId LotId Kind1Val Kind AttributeId LotId Kind2Val Kind AttributeId LotId Kind3Val Kind
----------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- -------- ----------- ----------- -------- --------
1 Hello 1 1 Foo1 Kind1 2 1 Foo2 Kind2 NULL NULL NULL NULL
2 World 3 2 Bar1 Kind1 4 2 Bar2 Kind2 5 2 Bar3 Kind3
This is the simple way to get a pivoted table of attribute values, including results for Lot rows that do not have an attribute of a given Kind (such as Kind3).
I know about Microsoft's PIVOT, but it is not simple and does not fit here.
So, what would be faster and still give the same results?

In order to get the result you can unpivot and then pivot the data.
There are two ways to perform this. First, you can use the UNPIVOT and PIVOT functions:
select *
from
(
  select LotId,
    SomeText,
    col+'_'+CAST(rn as varchar(10)) col,
    value
  from
  (
    select l.LotId,
      l.SomeText,
      cast(a.AttributeId as varchar(8)) attributeid,
      cast(a.LotId as varchar(8)) a_LotId,
      a.Val,
      a.Kind,
      ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
    from @Lot l
    left join @Attribute a
      on l.LotId = a.LotId
  ) src
  unpivot
  (
    value
    for col in (attributeid, a_LotId, val, kind)
  ) unpiv
) d
pivot
(
  max(value)
  for col in (attributeid_1, a_LotId_1, Val_1, Kind_1,
              attributeid_2, a_LotId_2, Val_2, Kind_2,
              attributeid_3, a_LotId_3, Val_3, Kind_3)
) piv
See SQL Fiddle with Demo.
Or starting in SQL Server 2008+, you can use CROSS APPLY with a VALUES clause to unpivot the data:
select *
from
(
  select LotId,
    SomeText,
    col+'_'+CAST(rn as varchar(10)) col,
    value
  from
  (
    select l.LotId,
      l.SomeText,
      cast(a.AttributeId as varchar(8)) attributeid,
      cast(a.LotId as varchar(8)) a_LotId,
      a.Val,
      a.Kind,
      ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
    from @Lot l
    left join @Attribute a
      on l.LotId = a.LotId
  ) src
  cross apply
  (
    values ('attributeid', attributeid), ('LotId', a_LotId), ('Value', Val), ('Kind', Kind)
  ) c (col, value)
) d
pivot
(
  max(value)
  for col in (attributeid_1, LotId_1, Value_1, Kind_1,
              attributeid_2, LotId_2, Value_2, Kind_2,
              attributeid_3, LotId_3, Value_3, Kind_3)
) piv
See SQL Fiddle with Demo.
The unpivot process takes the multiple columns for each LotId and SomeText and converts them into rows, giving the result:
| LOTID | SOMETEXT | COL | VALUE |
--------------------------------------------
| 1 | Hello | attributeid_1 | 1 |
| 1 | Hello | LotId_1 | 1 |
| 1 | Hello | Value_1 | Foo1 |
| 1 | Hello | Kind_1 | Kind1 |
| 1 | Hello | attributeid_2 | 2 |
I added a ROW_NUMBER() to the inner subquery; it is used to create the new column names to pivot. Once the names are created, the pivot can be applied to the new columns, giving the final result.
This could also be done using dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(col+'_'+rn)
                      from
                      (
                        select
                          cast(ROW_NUMBER() over(partition by l.lotid order by a.attributeid) as varchar(10)) rn
                        from Lot l
                        left join Attribute a
                          on l.LotId = a.LotId
                      ) t
                      cross apply (values ('attributeid', 1),
                                          ('LotId', 2),
                                          ('Value', 3),
                                          ('Kind', 4)) c (col, so)
                      group by col, rn, so
                      order by rn, so
                      FOR XML PATH(''), TYPE
                      ).value('.', 'NVARCHAR(MAX)')
                  ,1,1,'')

set @query = 'SELECT LotId,
                SomeText,' + @cols + '
              from
              (
                select LotId,
                  SomeText,
                  col+''_''+CAST(rn as varchar(10)) col,
                  value
                from
                (
                  select l.LotId,
                    l.SomeText,
                    cast(a.AttributeId as varchar(8)) attributeid,
                    cast(a.LotId as varchar(8)) a_LotId,
                    a.Val,
                    a.Kind,
                    ROW_NUMBER() over(partition by l.lotid order by a.attributeid) rn
                  from Lot l
                  left join Attribute a
                    on l.LotId = a.LotId
                ) src
                cross apply
                (
                  values (''attributeid'', attributeid),(''LotId'', a_LotId), (''Value'', Val), (''Kind'', Kind)
                ) c (col, value)
              ) x
              pivot
              (
                max(value)
                for col in (' + @cols + ')
              ) p '

execute(@query)
See SQL Fiddle with Demo
All three versions will give the same result:
| LOTID | SOMETEXT | ATTRIBUTEID_1 | LOTID_1 | VALUE_1 | KIND_1 | ATTRIBUTEID_2 | LOTID_2 | VALUE_2 | KIND_2 | ATTRIBUTEID_3 | LOTID_3 | VALUE_3 | KIND_3 |
-----------------------------------------------------------------------------------------------------------------------------------------------------------
| 1 | Hello | 1 | 1 | Foo1 | Kind1 | 2 | 1 | Foo2 | Kind2 | (null) | (null) | (null) | (null) |
| 2 | World | 3 | 2 | Bar1 | Kind1 | 4 | 2 | Bar2 | Kind2 | 5 | 2 | Bar3 | Kind3 |
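For comparison only (not part of the original answer), the same per-Kind columns can also be produced with conditional aggregation, which reads the attribute table once instead of once per Kind. A minimal sketch against the sample table variables above; it assumes at most one attribute row per LotId and Kind, and only shows the AttributeId and Val columns (extend the CASE list as needed):
SELECT  l.LotId,
        l.SomeText,
        MAX(CASE WHEN a.Kind = 'Kind1' THEN a.AttributeId END) AS Kind1AttributeId,
        MAX(CASE WHEN a.Kind = 'Kind1' THEN a.Val         END) AS Kind1Val,
        MAX(CASE WHEN a.Kind = 'Kind2' THEN a.AttributeId END) AS Kind2AttributeId,
        MAX(CASE WHEN a.Kind = 'Kind2' THEN a.Val         END) AS Kind2Val,
        MAX(CASE WHEN a.Kind = 'Kind3' THEN a.AttributeId END) AS Kind3AttributeId,
        MAX(CASE WHEN a.Kind = 'Kind3' THEN a.Val         END) AS Kind3Val
FROM    @Lot AS l
LEFT JOIN @Attribute AS a
       ON a.LotId = l.LotId
GROUP BY l.LotId, l.SomeText;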

Related

How to extract text from string between known characters, if characters appear more than once

I have text that includes, for example, the # (sharp) character. The string contains parameters. Parameters begin with # and end with #.
declare @TEXT varchar(200) = 'Dear #NAMEOFGUEST# , we glad to see you SOMEHOTEL tomorrow.'
declare @scanChar char(1) = '#'

select
    SUBSTRING(@TEXT, CHARINDEX(@scanChar, @TEXT) + 1, (((LEN(@TEXT)) - CHARINDEX(@scanChar, REVERSE(@TEXT))) - CHARINDEX(@scanChar, @TEXT)))
Return:
NAMEOFGUEST
That is the correct result.
It works when the string contains only one parameter, #NAMEOFGUEST#. But if we also wrap SOMEHOTEL in # characters, as #SOMEHOTEL#, the result is not what we want.
declare @TEXT varchar(200) = 'Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow.'
declare @scanChar char(1) = '#'
Returns:
NAMEOFGUEST# , we glad to see you #SOMEHOTEL
I want the same result as before, i.e. NAMEOFGUEST only.
Using CHARINDEX(@FindString, @PrintData, CHARINDEX(@FindString, @PrintData) + 1) you can find the second occurrence of the #; based on that, the remaining calculation can be done.
The following query will work.
DECLARE @PrintData AS VARCHAR (200) = 'Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow.';
DECLARE @FindString AS CHAR (1) = '#';
DECLARE @LenFindString AS INT = LEN(@FindString);

SELECT SUBSTRING(@PrintData,
                 CHARINDEX(@FindString, @PrintData) + @LenFindString,
                 CHARINDEX(@FindString, @PrintData, CHARINDEX(@FindString, @PrintData) + 1) - (CHARINDEX(@FindString, @PrintData) + @LenFindString)
       );
Demo on db<>fiddle
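Chaining the start positions the same way also reaches the second parameter. A small sketch building on the variables above (@pos1 through @pos4 are illustrative names, not from the original answer):
DECLARE @pos1 INT = CHARINDEX(@FindString, @PrintData);              --first '#'
DECLARE @pos2 INT = CHARINDEX(@FindString, @PrintData, @pos1 + 1);   --second '#'
DECLARE @pos3 INT = CHARINDEX(@FindString, @PrintData, @pos2 + 1);   --third '#'
DECLARE @pos4 INT = CHARINDEX(@FindString, @PrintData, @pos3 + 1);   --fourth '#'

--text between the third and fourth '#': SOMEHOTEL
SELECT SUBSTRING(@PrintData, @pos3 + 1, @pos4 - @pos3 - 1) AS SecondParameter;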
You can use a recursive approach like this.
A mockup table to simulate a set-oriented scenario:
declare @tbl TABLE(ID INT IDENTITY, SomeComment VARCHAR(100), SomeString varchar(200));
INSERT INTO @tbl VALUES('3 Terms','Dear #NAMEOFGUEST# , we glad to see you #SOMEHOTEL# tomorrow. And even #AThirdOne# is here.')
                      ,('1 Term','Dear #NAMEOFGUEST# , we glad to see you soon.')
                      ,('No Term','Dear Guest, nice to see you.')
                      ,('invalid 1','Dear Guest, nice to #see you.')
                      ,('invalid ?','Dear #Guest, nice# to see you.');
declare @scanChar char(1)='#';
--the query
WITH recCTE AS
(
    SELECT t.ID
          ,t.SomeComment
          ,t.SomeString AS TextToWork
          ,1 AS TermIndex
          ,D.*
    FROM @tbl t
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.SomeString)) A(StartingAt)
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.SomeString,A.StartingAt+1)) B(EndingAt)
    OUTER APPLY(SELECT CASE WHEN A.StartingAt>0 AND B.EndingAt>0 THEN SUBSTRING(t.SomeString,A.StartingAt+1,B.EndingAt-A.StartingAt-1) END) C(TermCandidate)
    OUTER APPLY(SELECT A.StartingAt,B.EndingAt,C.TermCandidate,SUBSTRING(t.SomeString,B.EndingAt+1,1000) AS RestString) D

    UNION ALL

    SELECT t.ID
          ,t.SomeComment
          ,t.RestString
          ,t.TermIndex+1
          ,D.*
    FROM recCTE t
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.RestString)) A(StartingAt)
    OUTER APPLY(SELECT CHARINDEX(@scanChar,t.RestString,A.StartingAt+1)) B(EndingAt)
    OUTER APPLY(SELECT CASE WHEN A.StartingAt>0 AND B.EndingAt>0 THEN SUBSTRING(t.RestString,A.StartingAt+1,B.EndingAt-A.StartingAt-1) END) C(TermCandidate)
    OUTER APPLY(SELECT A.StartingAt,B.EndingAt,C.TermCandidate,SUBSTRING(t.RestString,B.EndingAt+1,1000) AS RestString) D
    WHERE (LEN(t.RestString) - LEN(REPLACE(t.RestString,@scanChar,'')))%2=0 AND CHARINDEX(@scanChar,t.RestString)>0
)
SELECT ID
,SomeComment
,TermIndex
--this will exclude "Guest, nice" due to the blank
,CASE WHEN CHARINDEX(' ',TermCandidate)>0 THEN NULL ELSE TermCandidate END AS Term
FROM recCTE
ORDER BY ID,TermIndex;
The result
+----+-------------+-----------+-------------+
| ID | SomeComment | TermIndex | Term |
+----+-------------+-----------+-------------+
| 1 | 3 Terms | 1 | NAMEOFGUEST |
+----+-------------+-----------+-------------+
| 1 | 3 Terms | 2 | SOMEHOTEL |
+----+-------------+-----------+-------------+
| 1 | 3 Terms | 3 | AThirdOne |
+----+-------------+-----------+-------------+
| 2 | 1 Term | 1 | NAMEOFGUEST |
+----+-------------+-----------+-------------+
| 3 | No Term | 1 | NULL |
+----+-------------+-----------+-------------+
| 4 | invalid 1 | 1 | NULL |
+----+-------------+-----------+-------------+
| 5 | invalid ? | 1 | NULL |
+----+-------------+-----------+-------------+

Reverse order of an XML Column in SQL Server

In a SQL Server table, I have an XML column to which statuses are appended (the first is the oldest, the last is the current status).
I have to write a stored procedure that returns the statuses: newest first, oldest last.
This is what I wrote:
ALTER PROCEDURE [dbo].[GetDeliveryStatus]
    @invoiceID nvarchar(255)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @xml xml

    SET @xml = (SELECT statusXML
                FROM Purchase
                WHERE invoiceID = @invoiceID)

    SELECT
        t.n.value('text()[1]', 'nvarchar(50)') as DeliveryStatus
    FROM
        @xml.nodes('/statuses/status') as t(n)
    ORDER BY
        DeliveryStatus DESC
END
Example of value in the statusXML column:
<statuses>
<status>A</status>
<status>B</status>
<status>A</status>
<status>B</status>
<status>C</status>
</statuses>
I want the procedure to return:
C
B
A
B
A
With ORDER BY ... DESC it returns the values in reverse alphabetical order (C B B A A).
How should I correct my procedure?
Create a sequence for the nodes based on the existing order then reverse it.
WITH [x] AS (
    SELECT
        t.n.value('text()[1]', 'nvarchar(50)') as DeliveryStatus
        ,ROW_NUMBER() OVER (ORDER BY t.n.value('..', 'NVARCHAR(100)')) AS [Order]
    FROM
        @xml.nodes('/statuses/status') as t(n)
)
SELECT
    DeliveryStatus
FROM [x]
ORDER BY [x].[Order] DESC
... results ...
DeliveryStatus
C
B
A
B
A
There is no need to declare a variable first. You can (and should!) read the needed values from your table column directly. Best would be an inline table-valued function (rather than a stored procedure just to read something); a sketch of such a function follows the example result below. Benefits:
Better performance
inlineable
You can query many InvoiceIDs at once
set-based
Try this (I drop the mock table at the end - careful with real data!):
CREATE TABLE Purchase(ID INT IDENTITY,statusXML XML, InvocieID INT, OtherValues VARCHAR(100));
INSERT INTO Purchase VALUES('<statuses>
<status>A</status>
<status>B</status>
<status>A</status>
<status>B</status>
<status>C</status>
</statuses>',100,'Other values of your row');
GO
WITH NumberedStatus AS
(
SELECT ID
,InvocieID
, ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS Nr
,stat.value('.','nvarchar(max)') AS [Status]
,OtherValues
FROM Purchase
CROSS APPLY statusXML.nodes('/statuses/status') AS A(stat)
WHERE InvocieID=100
)
SELECT *
FROM NumberedStatus
ORDER BY Nr DESC
GO
--Clean-Up
--DROP TABLE Purchase;
The result (columns ID, InvocieID, Nr, Status, OtherValues):
+---+-----+---+---+--------------------------+
| 1 | 100 | 5 | C | Other values of your row |
+---+-----+---+---+--------------------------+
| 1 | 100 | 4 | B | Other values of your row |
+---+-----+---+---+--------------------------+
| 1 | 100 | 3 | A | Other values of your row |
+---+-----+---+---+--------------------------+
| 1 | 100 | 2 | B | Other values of your row |
+---+-----+---+---+--------------------------+
| 1 | 100 | 1 | A | Other values of your row |
+---+-----+---+---+--------------------------+
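For illustration only (not part of the original answer): a minimal sketch of the inline table-valued function recommended above, wrapping the same numbered-status query so it can be reused per invoice. The name dbo.GetDeliveryStatusReversed is hypothetical, and the column spelling InvocieID matches the mock table.
CREATE FUNCTION dbo.GetDeliveryStatusReversed (@InvocieID INT)
RETURNS TABLE
AS
RETURN
    SELECT  p.ID,
            p.InvocieID,
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Nr,
            stat.value('.', 'nvarchar(max)') AS [Status],
            p.OtherValues
    FROM    Purchase AS p
    CROSS APPLY p.statusXML.nodes('/statuses/status') AS A(stat)
    WHERE   p.InvocieID = @InvocieID;
GO

--usage: newest status first
SELECT * FROM dbo.GetDeliveryStatusReversed(100) ORDER BY Nr DESC;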

Stop sum when value reached

Assume Table1:
|PaymentID|CashAmount|
----------------------
| P1 | 3000|
| P2 | 5000|
| P3 | 8000|
| P4 | 700|
| P5 | 5500|
| P6 | 1900|
I want the sum of CashAmount to be 'at least' 9000, keeping the PaymentID order.
Expected Result:
|PaymentID|CashAmount|
----------------------
| P1 | 3000|
| P2 | 5000|
| P3 | 8000|
If I want the sum of CashAmount to be 'at least' 4000 instead, again keeping the PaymentID order:
Expected Result:
|PaymentID|CashAmount|
----------------------
| P1 | 3000|
| P2 | 5000|
I had a look at "limiting the rows to where the sum of a column equals a certain value" in MySQL, but the accepted answer does not work with MSSQL and is not exactly what I'm looking for. Most of the answers there that I've tested return only rows where the total amount is less than the value, not at least the specific value.
SQL Server 2005 and Later
SELECT *
FROM TableName t
CROSS APPLY (SELECT SUM(Amount)
FROM TableName
WHERE [Date] <= t.[DATE]) c(AmtSum)
WHERE AmtSum <= 13
SQL Server 2012 and Later
SELECT *
FROM (
SELECT *
,SUM(Amount) OVER (ORDER BY [Date], Amount) AmtSum
FROM TableName
)t
WHERE AmtSum <= 13
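The snippets above use placeholder names (TableName, Amount, Date, the literal 13) and keep rows only while the running total is still at or below the limit. A hedged adaptation to the question's table and its "at least" requirement might look like this (SQL Server 2012+; @MinAmount is an illustrative variable name):
DECLARE @MinAmount INT = 9000;

SELECT PaymentID, CashAmount
FROM (
    SELECT PaymentID,
           CashAmount,
           SUM(CashAmount) OVER (ORDER BY PaymentID
                                 ROWS UNBOUNDED PRECEDING) AS RunningTotal
    FROM Table1
) t
--keep every row whose preceding rows had not yet reached the target
WHERE RunningTotal - CashAmount < @MinAmount
ORDER BY PaymentID;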
According to your new input I changed my approach slightly. Hope this is what you need...
EDIT: Here's the version with SUM(x) OVER(...):
DECLARE @payment TABLE(PaymentID VARCHAR(10), CashAmount INT);
INSERT INTO @payment VALUES
 ('P1',3000)
,('P2',5000)
,('P3',8000)
,('P4',700)
,('P5',5500)
,('P6',1900);

DECLARE @myMinToReach INT=9000;

WITH SortedPayment AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY PaymentID) AS inx
          ,SUM(CashAmount) OVER(ORDER BY PaymentID) AS Summa
          ,PaymentID
          ,CashAmount
    FROM @payment
)
SELECT * FROM SortedPayment
WHERE inx<=(SELECT TOP 1 x.inx
            FROM SortedPayment AS x
            WHERE Summa>@myMinToReach
            ORDER BY Summa ASC);
And here is the older version for SQL Server < 2012:
DECLARE @payment TABLE(PaymentID VARCHAR(10), CashAmount INT);
INSERT INTO @payment VALUES
 ('P1',3000)
,('P2',5000)
,('P3',8000)
,('P4',700)
,('P5',5500)
,('P6',1900);

DECLARE @myMinToReach INT=4000;

WITH SortedPayment AS
(
    SELECT ROW_NUMBER() OVER(ORDER BY PaymentID) AS inx,*
    FROM @payment
)
,Accumulated AS
(
    SELECT tbl.*
    FROM
    (
        SELECT SortedPayment.*
              ,Accumulated.Summa
        FROM SortedPayment
        CROSS APPLY
        (
            SELECT SUM(ps2.CashAmount) AS Summa
            FROM SortedPayment AS ps2
            WHERE ps2.inx<=SortedPayment.inx
        ) AS Accumulated
    ) AS tbl
)
SELECT * FROM Accumulated
WHERE inx<=(SELECT TOP 1 x.inx
            FROM Accumulated AS x
            WHERE Summa>@myMinToReach
            ORDER BY Summa ASC);
--Loop-based alternative: walk the rows one by one until the target is reached.
--Assumes the table has an rc column to hold the row sequence.
DECLARE @s INT = 0, @i INT = 0;

;WITH numbered AS
(
    SELECT rc, ROW_NUMBER() OVER (ORDER BY [date]) AS new_rc FROM [table]
)
UPDATE numbered SET rc = new_rc;

WHILE @s <= 12 AND @i < 100000
BEGIN
    SET @i = @i + 1;
    SET @s = @s + (SELECT amount FROM [table] WHERE rc = @i);
END
-- @s is now at least 12

How to convert JSON Array of Arrays to columns and rows

I'm pulling data from an API that returns JSON in the format shown below, where essentially every "row" is an array of values. The API documentation defines the columns and their types in advance, so I know that col1 is, for example, a varchar and that col2 is an int.
CREATE TEMP TABLE dat (data json);
INSERT INTO dat
VALUES ('{"COLUMNS":["col1","col2"],"DATA":[["a","1"],["b","2"]]}');
I want to transform this within PostgreSQL 9.3 such that I end up with:
col1 | col2
------------
a | 1
b | 2
Using json_array_elements I can get to:
SELECT json_array_elements(data->'DATA')
FROM dat
json_array_elements
---------------------
["a","1"]
["b","2"]
but then I can't figure out how to convert the JSON array to a PostgreSQL array so that I can perform something like unnest(ARRAY['a','1']).
General case for unknown columns
To get a result like
col1 | col2
------------
a | 1
b | 2
will require a bunch of dynamic SQL, because you don't know the types of the columns in advance, nor the column names.
You can unpack the json with something like:
SELECT
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue,
rn,
idx,
colno
FROM (
SELECT
data -> 'COLUMNS' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
producing a result set like:
colname | colvalue | rn | idx | colno
---------+----------+----+-----+-------
col1 | a | 1 | 1 | 0
col2 | 1 | 1 | 1 | 1
col1 | b | 1 | 2 | 0
col2 | 2 | 1 | 2 | 1
(4 rows)
You can then use this as input to the crosstab function from the tablefunc module with something like:
SELECT * FROM crosstab('
SELECT
to_char(rn,''00000000'')||''_''||to_char(idx,''00000000'') AS rowid,
json_array_element_text(colnames, colno) AS colname,
json_array_element_text(colvalues, colno) AS colvalue
FROM (
SELECT
data -> ''COLUMNS'' AS colnames,
d AS colvalues,
rn,
row_number() OVER () AS idx
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> ''DATA'') d
) elements
cross join generate_series(0, json_array_length(colnames) - 1) colno;
') results(rowid text, col1 text, col2 text);
producing:
rowid | col1 | col2
---------------------+------+------
00000001_ 00000001 | a | 1
00000001_ 00000002 | b | 2
(2 rows)
The column names are not retained here.
If you were on 9.4 you could avoid the row_number() calls and use WITH ORDINALITY, making it much cleaner.
Simplified with fixed, known columns
Since you apparently know the number of columns and their types in advance the query can be considerably simplified.
SELECT
col1, col2
FROM (
SELECT
rn,
row_number() OVER () AS idx,
elem ->> 0 AS col1,
elem ->> 1 :: integer AS col2
FROM (
SELECT data, row_number() OVER () AS rn FROM dat
) numbered
cross join json_array_elements(numbered.data -> 'DATA') elem
ORDER BY 1, 2
) x;
result:
col1 | col2
------+------
a | 1
b | 2
(2 rows)
Using 9.4 WITH ORDINALITY
If you were using 9.4 you could keep it cleaner using WITH ORDINALITY:
SELECT
col1, col2
FROM (
SELECT
elem ->> 0 AS col1,
elem ->> 1 :: integer AS col2
FROM
dat
CROSS JOIN
json_array_elements(dat.data -> 'DATA') WITH ORDINALITY AS elements(elem, idx)
ORDER BY idx
) x;
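As an aside (not part of the answer above), the direct conversion the question asked about - turning each JSON "row" into a PostgreSQL text[] so it can be unnested - can be sketched on 9.3 with json_array_length() and json_array_element_text():
SELECT d AS json_row,
       ARRAY(SELECT json_array_element_text(d, i)
             FROM generate_series(0, json_array_length(d) - 1) AS i) AS pg_array
FROM dat,
     json_array_elements(dat.data -> 'DATA') AS d;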
This code worked fine for me; maybe it will be useful for someone.
select to_json(array_agg(t))
from (
select text, pronunciation,
(
select array_to_json(array_agg(row_to_json(d)))
from (
select part_of_speech, body
from definitions
where word_id=words.id
order by position asc
) d
) as definitions
from words
where text = 'autumn'
) t
Credits:
https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql

SQL EXCEPT Not Working

; WITH cte AS
(SELECT p.BudgetNumber, t.MilestoneNumber FROM
(SELECT DISTINCT BudgetNumber FROM tblMilestones) p
CROSS JOIN
(SELECT DISTINCT MilestoneNumber FROM tblMilestoneTemplate) t)
SELECT BudgetNumber, MilestoneNumber FROM cte
EXCEPT (SELECT BudgetNumber, MilestoneNumber FROM tblMilestones)
ORDER BY BudgetNumber, MilestoneNumber
The query above creates all possible BudgetNumber and MilestoneNumber combinations using a cross join, and then attempts to locate combinations that are not in the tblMilestones table (I didn't create this database, I know the table prefixes are weird and this db isn't normalized).
There are no NULL entries in any of these fields. When I run the query with the EXCEPT clause above, I get some of the missing combinations (but not all), and I also get some combinations that are not actually missing. When I change the EXCEPT to a LEFT JOIN, I get the same results. When I change the EXCEPT to a WHERE NOT EXISTS, I get no results at all. Can anyone please help?
SQLFiddle Output:
| BUDGETNUMBER | MILESTONENUMBER |
|--------------|-----------------|
| BA04001 | 0 |
| BA04001 | 99 |
| BA04005 | 0 |
| BA04005 | 99 |
| BA05001 | 0 |
| BA05001 | 99 |
| BA05002 | 0 |
| BA05002 | 99 |
Here is how to use NOT EXISTS correctly: you need to specify the correlation conditions in the WHERE clause of the subquery to get the correct result.
;
WITH cte
AS (
SELECT p.BudgetNumber
,t.MilestoneNumber
FROM (
SELECT DISTINCT BudgetNumber
FROM tblMilestones
) p
CROSS JOIN (
SELECT DISTINCT MilestoneNumber
FROM tblMilestoneTemplate
) t
)
SELECT BudgetNumber
,MilestoneNumber
FROM cte t
WHERE NOT EXISTS ( SELECT 1
FROM tblMilestones s
WHERE t.BudgetNumber = s.BudgetNumber
AND t.MilestoneNumber = s.MilestoneNumber )
ORDER BY BudgetNumber
,MilestoneNumber
Also look at the following two examples: a difference in declared precision can make values that print alike compare as unequal.
DECLARE @NoPrecision AS TABLE ( MyNumber DECIMAL )
INSERT INTO @NoPrecision ( MyNumber ) VALUES ( 12345.123456789 )
SELECT * FROM @NoPrecision
output: 12345
DECLARE @Precision AS TABLE ( MyNumber DECIMAL(10,4) )
INSERT INTO @Precision ( MyNumber ) VALUES ( 12345.123456789 )
SELECT * FROM @Precision
output: 12345.1235
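To tie this back to the EXCEPT question: if the two sides of the set operation store what looks like the same number at different precisions, the values compare as unequal and EXCEPT keeps the row. A minimal sketch (the table variables here are illustrative, not from the original post):
DECLARE @A TABLE (MyNumber DECIMAL);        --defaults to DECIMAL(18,0)
DECLARE @B TABLE (MyNumber DECIMAL(10,4));

INSERT INTO @A (MyNumber) VALUES (12345.123456789);  --stored as 12345
INSERT INTO @B (MyNumber) VALUES (12345.123456789);  --stored as 12345.1235

--the @A row survives: 12345 <> 12345.1235, so EXCEPT does not remove it
SELECT MyNumber FROM @A
EXCEPT
SELECT MyNumber FROM @B;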
