Accumulating (concatenating) values in a variable does not work - sql-server

I'm trying to accumulate values into a variable in SQL Server >= 2012.
It works in case 1 below, but in case 2 I get the answer ",CD" instead of the expected ",EF,AB,CD". Why?
In SSMS:
USE MyDB
GO
-- Create a simple table
CREATE TABLE Tbl1 (Code VARCHAR(2), So TINYINT NULL)
INSERT INTO Tbl1 VALUES('AB', 10)
INSERT INTO Tbl1 VALUES('CD', NULL)
INSERT INTO Tbl1 VALUES('EF', 5)
GO
-- Case 1
DECLARE @MyVar VARCHAR(255) = ''
SELECT @MyVar = @MyVar + ',' + Code FROM Tbl1 ORDER BY So
SELECT @MyVar
GO
-- Case 2
DECLARE @MyVar VARCHAR(255) = ''
SELECT @MyVar = @MyVar + ',' + Code FROM Tbl1 ORDER BY ISNULL(So, 255)
SELECT @MyVar
GO

The explanation is in the documentation:
Don't use a variable in a SELECT statement to concatenate values (that
is, to compute aggregate values). Unexpected query results may occur.
Because all expressions in the SELECT list (including assignments)
aren't necessarily run exactly once for each output row.
There are opinions (but not in the official docs) stating that without an ORDER BY clause (and/or a DISTINCT clause) the aggregation works as you expect.
If you are using SQL Server 2017+, you may use STRING_AGG() to build the expected output:
DECLARE @MyVar VARCHAR(255) = ''
SELECT @MyVar = STRING_AGG(Code, ',') WITHIN GROUP (ORDER BY ISNULL(So, 255))
FROM Tbl1
SELECT @MyVar
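For SQL Server 2012 through 2016, where STRING_AGG() is not available, a commonly used workaround is FOR XML PATH. A minimal sketch against the same Tbl1 table:
DECLARE @MyVar VARCHAR(255)
-- Build the ordered, comma-prefixed list as XML text, then extract it as plain VARCHAR
SELECT @MyVar = (SELECT ',' + Code
                 FROM Tbl1
                 ORDER BY ISNULL(So, 255)
                 FOR XML PATH(''), TYPE).value('.', 'VARCHAR(255)')
SELECT @MyVar
This returns ',EF,AB,CD', matching the expected output above.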

Related

Using a multi-valued parameter in SSRS with dynamic SQL

I am trying to use a multi-valued parameter in SSRS within a dynamic SQL query. In a static query I would use
SELECT myField
FROM myTable
WHERE myField IN (@myParameter)
Using answers to this question (TSQL Passing MultiValued Reporting Services Parameter into Dynamic SQL) I have tried
-- SSRS requires the output field names to be static
CREATE TABLE #temp
(
myField VARCHAR(100)
)
DECLARE @myQuery VARCHAR(5000) = 'SELECT myField
INTO #temp
FROM myTable
WHERE CHARINDEX('','' + myField + '','', '',''+''' + @myParameter + '''+'','') > 0'
EXEC (@myQuery)
This approach should work if the query understood @myParameter to be a string in a CSV format, but it doesn't seem to (as suggested by the link above). For example
SELECT @myParameter
won't work if there is more than one value selected.
I've also tried moving the parameter into a temporary table:
SELECT myField
INTO #tempParameter
FROM @myParameter
-- SSRS requires the output field names to be static
CREATE TABLE #temp
(
myField VARCHAR(100)
)
DECLARE @myQuery VARCHAR(5000) = 'SELECT myField
INTO #temp
FROM myTable
WHERE myField IN (SELECT myField FROM #tempParameter)'
EXEC (@myQuery)
I have SSRS 2012 and SQL Server 2012. NB: I need to use dynamic SQL for other reasons.
You don't need dynamic SQL for this. SSRS will (much to my dislike) inject multi-value parameters when using a hard-coded SQL statement in the report. Therefore you can just do something like the following:
SELECT *
FROM MyTable
WHERE MyColumn IN (@MyParameter)
AND OtherCol > 0;
Before running the query, SSRS will remove @MyParameter and inject a delimited list of parameters.
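For illustration, this is roughly what the server ends up executing when the user selects three values (the value names here are made up):
SELECT *
FROM MyTable
WHERE MyColumn IN (N'Value1', N'Value2', N'Value3')
AND OtherCol > 0;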
The best guess, if you need to use dynamic SQL, is to use a string splitter and a stored procedure (I use DelimitedSplit8K_LEAD here). SSRS will then pass the value of the parameter (@MultiParam) as a delimited string, and you can then split that in the dynamic statement:
CREATE PROC dbo.YourProc @MultiParam varchar(8000), @TableName sysname AS
BEGIN
DECLARE @SQL nvarchar(MAX);
SET @SQL = N'SELECT * FROM dbo.' + QUOTENAME(@TableName) + N' MT CROSS APPLY dbo.DelimitedSplit8K_LEAD (@MultiParam,'','') DS WHERE MT.MyColumn = DS.item;';
EXEC sp_executesql @SQL, N'@MultiParam varchar(8000)', @MultiParam;
END;
GO
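SSRS would then be pointed at the stored procedure and call it with the delimited parameter string, roughly like this (the values and table name are placeholders):
EXEC dbo.YourProc @MultiParam = 'Value1,Value2,Value3', @TableName = N'MyTable';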
As I mentioned in the comments, your parameter is coming from SSRS as a single comma-separated string, as such:
@myParameter = 'FirstValue, Second Value Selected, Third Val'
When you try to use the parameter in the IN clause, it is read as such:
select *
from mytable
where mycolumn in ('FirstValue, Second Value Selected, Third Val')
This compares against one single string, which is not what you want. The correct syntax would be like below, with quotes around each value.
select *
from mytable
where mycolumn in ('FirstValue', 'Second Value Selected', 'Third Val')
So, you need to find a way to quote each value, which is hard because you don't know how many values there will be. So, the best thing to do is split that parameter into a table, and JOIN to it. Since we use a table-valued function in this example, we use CROSS APPLY.
First, create the function that Jeff Moden made and so many people use. Or use STRING_SPLIT if you are on 2016 onward, or make your own. However, anything that uses a recursive CTE, WHILE loop, cursor, etc. will be far slower than the one below.
CREATE FUNCTION [dbo].[DelimitedSplit8K]
--===== Define I/O parameters
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
--WARNING!!! DO NOT USE MAX DATA-TYPES HERE! IT WILL KILL PERFORMANCE!
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
-- enough to cover VARCHAR(8000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
Then you simply call the function like so:
create table mytable (Names varchar(64))
insert into mytable values ('Bob'),('Mary'),('Tom'),('Frank')
--this is your parameter from SSRS
declare @var varchar(4000) = 'Bob,Mary,Janice,Scarlett'
select distinct mytable.*
from mytable
cross apply dbo.DelimitedSplit8K(@var,',') spt
where spt.Item = mytable.Names
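On SQL Server 2016 and later, the built-in STRING_SPLIT mentioned above can be used in the same pattern instead of the custom splitter (a sketch using the same sample table and parameter):
select distinct mytable.*
from mytable
cross apply string_split(@var, ',') spt
where spt.value = mytable.Names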
The split-string solution didn't work for me either, and I tried to figure out what type the multi-value parameter variable in SSRS actually is so I could somehow work with it. The multi-value parameter variable was @productIds and the type of the data field was UNIQUEIDENTIFIER. So I wrote the type to a log table:
INSERT INTO DebugLog
SELECT CAST(SQL_VARIANT_PROPERTY(@productIds,'BaseType') AS VARCHAR(MAX))
When I selected one value the type was NVARCHAR; however, when I selected two or more values I got an error:
System.Data.SqlClient.SqlException: The sql_variant_property function requires 2 argument(s).
So I stopped trying to figure out what a multi-value parameter variable actually is; I had already spent too much time.
Since I had the values in a table, I could select them from that table into a temp table and then join on that in the dynamically built query:
SELECT p.Id
INTO #tempProdIds
FROM Products p WHERE p.Id IN (@productIds)
SET @query +='
....
JOIN #tempProdIds tpi on tl.Product = tpi.Id
....
'
EXEC(@query)

Two tables as parameters of user defined function

I am trying to create a function in SQL Server which I can use to compare two tables, to check if they are identical. I do that with two EXCEPTs.
The tables are supposed to be exactly the same, with the same data formats and column names, as well as all values identical in both tables. This will be a manual check, so if differences exist, a thrown error is not a problem. The aim is just to see whether two approaches to creating the tables lead to the same tables.
I am really new to functions in SQL, so I am not sure how to solve the problem.
I want to pass both tables as parameters to the function, to get something like this:
CREATE FUNCTION DIFFERING_ROWS
(@TABLE1, @TABLE2)
RETURNS TABLE
AS
RETURN (
SELECT *, 'A_not_B' as [Difference] FROM @TABLE1
except
SELECT *, 'A_not_B' as [Difference] FROM @TABLE2
union all
SELECT *, 'B_not_A' as [Difference] FROM @TABLE2
except
SELECT *, 'B_not_A' as [Difference] FROM @TABLE1
)
END
How is this implemented correctly?
Can anybody help me?
You cannot do this in a function. The only way you can pass table names as parameters is to use Dynamic SQL, and Dynamic SQL is not allowed in functions. You CAN do it with a stored procedure.
You can create this stored procedure, which checks whether the tables have the same column names:
CREATE PROCEDURE checkEqualTables
@table1 varchar(100),
@table2 varchar(100)
AS
BEGIN
DECLARE @xCount int;
SELECT @xCount = COUNT(*) from (SELECT column_name FROM information_schema.COLUMNS WHERE table_name=@table1) base
where column_name not in (SELECT column_name FROM information_schema.COLUMNS WHERE table_name=@table2)
IF(@xCount <= 0)
print 'Tables are equal!';
ELSE
print 'Tables are not equal!'
END
OK, I took the information from the answers and comments, researched how to put this into a procedure, and this is what I built.
I think this does what I want:
CREATE PROCEDURE checkEqualTables
@table1 nvarchar(100),
@table2 nvarchar(100)
AS
BEGIN
DECLARE @SQL nvarchar(max);
SET @SQL = 'SELECT * FROM ' + @table1 + '
except
SELECT * FROM ' + @table2 + '
union all
SELECT * FROM ' + @table2 + '
except
SELECT * FROM ' + @table1
EXECUTE sp_executesql @SQL
END
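A hypothetical call (the table names are placeholders; because they are concatenated straight into the dynamic SQL, you may want to validate them or wrap each one with QUOTENAME()):
EXEC checkEqualTables @table1 = N'TableA', @table2 = N'TableB';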

SQL Server Conversion failed varchar to int

I have a table (no. 1) which has 10 columns. One of them, clm01, is an integer and does not allow NULL values.
There is a second table (no. 2) which has many columns. One of them, clm02, is of string type. An example of this column's data is '1,2,3'.
I'd like to make a query like:
select *
from table1 t1, table2 t2
where t1.clm01 not in (t2.clm2)
For example, in table1 I have 5 records with the values 1, 2, 3, 4, 5 in clm01, and in table2 I've got 1 record with clm02 = '1,2,3'.
So I would like the query to return only the records with the values 4 and 5 in clm01.
Instead I get:
Conversion failed when converting the varchar value '1,2,3' to data type int
Any ideas?
Use the STRING_SPLIT() function to split the comma-separated values if you are using SQL Server 2016 or later.
SELECT *
FROM table1 t1
WHERE t1.clm01 NOT IN (SELECT Value FROM table2 t2
CROSS APPLY STRING_SPLIT(t2.clm02,','))
If you are using a lower version of SQL Server, write a UDF to split the string and use the function in a CROSS APPLY clause.
CREATE FUNCTION [dbo].[SplitString]
(
@string NVARCHAR(MAX),
@delimiter CHAR(1)
)
RETURNS @output TABLE(Value NVARCHAR(MAX))
AS
BEGIN
DECLARE @start INT, @end INT
SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
WHILE @start < LEN(@string) + 1 BEGIN
IF @end = 0
SET @end = LEN(@string) + 1
INSERT INTO @output (Value)
VALUES(SUBSTRING(@string, @start, @end - @start))
SET @start = @end + 1
SET @end = CHARINDEX(@delimiter, @string, @start)
END
RETURN
END
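The UDF then slots into the same query shape as the STRING_SPLIT version above (a sketch):
SELECT *
FROM table1 t1
WHERE t1.clm01 NOT IN (SELECT s.Value FROM table2 t2
CROSS APPLY dbo.SplitString(t2.clm02, ',') s)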
I decided to give you a couple of options, but this really is a duplicate question I see pretty often.
There are two main ways of going about the problem.
1) Use LIKE to compare the strings, but you actually have to build the strings a little oddly to do it:
SELECT *
FROM
@Table1 t1
WHERE
NOT EXISTS (SELECT *
FROM @Table2 t2
WHERE ',' + t2.clm02 + ',' LIKE '%,' + CAST(t1.clm01 AS VARCHAR(15)) + ',%')
What this does is check whether ,1,2,3, is LIKE %,clm01value,%. You must add the delimiter to both strings for this to work properly, and you have to cast/convert clm01 to a character datatype. There are drawbacks to this solution, but if your data sets are straightforward it could work for you.
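For instance, with clm01 = 2 and clm02 = '1,2,3', the wrapped comparison evaluates like this (a standalone illustration):
SELECT CASE WHEN ',1,2,3,' LIKE '%,2,%' THEN 'match' ELSE 'no match' END AS Value2, -- match
       CASE WHEN ',1,2,3,' LIKE '%,4,%' THEN 'match' ELSE 'no match' END AS Value4  -- no match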
2) Split the comma-delimited string into rows and then use a LEFT JOIN, NOT EXISTS, or NOT IN. Here is a method to convert your CSV to XML and then split it:
;WITH cteClm02Split AS (
SELECT
clm02
FROM
(SELECT
CAST('<X>' + REPLACE(clm02,',','</X><X>') + '</X>' AS XML) as xclm02
FROM
@Table2) t
CROSS APPLY (SELECT t.n.value('.','INT') clm02
FROM
t.xclm02.nodes('X') as t(n)) ca
)
SELECT t1.*
FROM
@Table1 t1
LEFT JOIN cteClm02Split t2
ON t1.clm01 = t2.clm02
WHERE
t2.clm02 IS NULL
Or use NOT EXISTS with the same CTE:
SELECT t1.*
FROM
@Table1 t1
WHERE
NOT EXISTS (SELECT * FROM cteClm02Split t2 WHERE t1.clm01 = t2.clm02)
There are dozens of other ways to split delimited strings, and you can choose whatever way works for you.
Note: I am not showing IN/NOT IN as an answer because I don't recommend using it. If you do use it, make sure that you are never comparing against a NULL in the subquery. Here is another good post concerning performance: NOT IN vs NOT EXISTS
Here are the table variables that were used:
DECLARE @Table1 AS TABLE (clm01 INT)
DECLARE @Table2 AS TABLE (clm02 VARCHAR(15))
INSERT INTO @Table1 VALUES (1),(2),(3),(4),(5)
INSERT INTO @Table2 VALUES ('1,2,3')

Error converting to float -- How to screen out bad values

I am working in SQL Server 2008. I have a table with all columns set up as varchar(255), which is necessary since I do data validation on this table. I have the following query on this table:
SELECT
col_1
FROM
table_A
WHERE
col_1 NOT LIKE '%[^.0123456789]%'
AND CAST(col_1 AS float) <= 2.5
I'm getting an error stating that it can't convert one of my table values to data type float. The offending value is '3269e+'. I don't understand why this value causes an error. Wouldn't it have been excluded by the first condition in the WHERE clause? If I'm doing something wrong, how should I re-write this query?
Instead of trying to parse the string using a LIKE pattern, you can use ISNUMERIC. This does have some false positives, which people discuss in the comments of the MSDN page. In your example, it could be:
SELECT
col_1
FROM
table_A
WHERE
ISNUMERIC(col_1) = 1
AND CAST(col_1 AS float) <= 2.5
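Be aware of those false positives, though. A quick standalone check of commonly cited examples (ISNUMERIC accepts them, reportedly because they can convert to money, yet CAST to float still fails for the first two):
SELECT ISNUMERIC('$') AS DollarSign, -- 1
       ISNUMERIC(',') AS Comma,      -- 1
       ISNUMERIC('1e4') AS Exponent  -- 1, and this one does cast to float
-- CAST('$' AS float) or CAST(',' AS float) would still raise a conversion error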
You can do this with a CTE to first separate out the valid rows, then apply your math.
with ValidRows as
(
SELECT
col_1
FROM table_A
WHERE col_1 NOT LIKE '%[^.0123456789]%'
)
select *
from ValidRows
WHERE CAST(col_1 AS float) <= 2.5
No, it won't be filtered out by the first condition; the order of execution is decided by the query optimizer.
If you want to make it work, you could use a subquery like this:
SELECT *
FROM (
SELECT col_1
FROM table_A
WHERE col_1 NOT LIKE '%[^.0123456789]%' ) t
WHERE CAST(col_1 AS float) <= 2.5
Here is a working version with data similar to yours:
DECLARE @value AS VARCHAR(MAX) = '1'
DECLARE @value1 AS VARCHAR(MAX) = '2.1'
DECLARE @value2 AS VARCHAR(MAX) = '1.5'
DECLARE @value3 AS VARCHAR(MAX) = '32344'
DECLARE @value4 AS VARCHAR(MAX) = '23324e+'
DECLARE @value5 AS VARCHAR(MAX) = '23434334e+'
select * from(
select @value as v
UNION
select @value1
UNION
select @value2
UNION
select @value3
UNION
select @value4
UNION
select @value5
) t
where t.v NOT LIKE '%[^.0123456789]%'
AND CAST(t.v AS float) <= 2.5

SQL Server: UPDATE a table by using ORDER BY

I would like to know if there is a way to use an ORDER BY clause when updating a table. I am updating a table and setting a consecutive number, which is why the order of the update is important. Using the following SQL statement, I was able to solve it without using a cursor:
DECLARE @Number INT = 0
UPDATE Test
SET @Number = Number = @Number +1
Now what I'd like to do is add an ORDER BY clause, like so:
DECLARE @Number INT = 0
UPDATE Test
SET @Number = Number = @Number +1
ORDER BY Test.Id DESC
I've read How to update and order by using ms sql. The solutions to this question do not solve the ordering problem - they just filter the items on which the update is applied.
Take care,
Martin
No.
Not in a documented, 100% supported way. There is an approach sometimes used for calculating running totals, called the "quirky update", which suggests that it might update in the order of the clustered index if certain conditions are met, but as far as I know this relies completely on empirical observation rather than any guarantee.
But what version of SQL Server are you on? If SQL 2005+, you might be able to do something with ROW_NUMBER and a CTE (you can update the CTE):
With cte As
(
SELECT id,Number,
ROW_NUMBER() OVER (ORDER BY id DESC) AS RN
FROM Test
)
UPDATE cte SET Number=RN
You cannot use ORDER BY as part of the UPDATE statement (you can use it in sub-selects that are part of the update).
UPDATE Test
SET Number = rowNumber
FROM Test
INNER JOIN
(SELECT ID, row_number() OVER (ORDER BY ID DESC) as rowNumber
FROM Test) drRowNumbers ON drRowNumbers.ID = Test.ID
Edit
The following solution could have problems when clustered indexes are involved, as mentioned here. Thanks to Martin for pointing this out.
The answer is kept to educate those (like me) who don't know all the side effects or ins and outs of SQL Server.
Expanding on the answer given by Quassnoi in your link, the following works:
DECLARE @Test TABLE (Number INTEGER, AText VARCHAR(2), ID INTEGER)
DECLARE @Number INT
INSERT INTO @Test VALUES (1, 'A', 1)
INSERT INTO @Test VALUES (2, 'B', 2)
INSERT INTO @Test VALUES (1, 'E', 5)
INSERT INTO @Test VALUES (3, 'C', 3)
INSERT INTO @Test VALUES (2, 'D', 4)
SET @Number = 0
;WITH q AS (
SELECT TOP 1000000 *
FROM @Test
ORDER BY
ID
)
UPDATE q
SET @Number = Number = @Number + 1
The row_number() function would be the best approach to this problem.
UPDATE T
SET T.Number = R.rowNum
FROM Test T
JOIN (
SELECT T2.id,row_number() over (order by T2.Id desc) rowNum from Test T2
) R on T.id=R.id
Update based on Ordering by the order of values in a SQL IN() clause.
Solution:
DECLARE @counter int
SET @counter = 0
;WITH q AS
(
select * from Products WHERE ID in (SELECT TOP (10) ID FROM Products WHERE ID IN( 3,2,1)
ORDER BY ID DESC)
)
update q set Display= @counter, @counter = @counter + 1
This updates based on the descending order 3, 2, 1.
Hope this helps someone.
I had a similar problem and solved it using ROW_NUMBER() in combination with the OVER keyword. The task was to retrospectively populate a new TicketNo (integer) field in a simple table based on the original CreatedDate, and grouped by ModuleId - so that ticket numbers started at 1 within each Module group and incremented by date. The table already had a TicketID primary key (a GUID).
Here's the SQL:
UPDATE Tickets SET TicketNo=T2.RowNo
FROM Tickets
INNER JOIN
(select TicketID, TicketNo,
ROW_NUMBER() OVER (PARTITION BY ModuleId ORDER BY DateCreated) AS RowNo from Tickets)
AS T2 ON T2.TicketID = Tickets.TicketID
Worked a treat!
I ran into the same problem and was able to resolve it in a very powerful way that allows unlimited sorting possibilities.
I created a View saving 2 sort orders (*explanation on how to do so below).
After that I simply applied the update queries to the created View and it worked great.
Here are the 2 queries I used on the view:
1st Query:
Update MyView
Set SortID=0
2nd Query:
DECLARE @sortID int
SET @sortID = 0
UPDATE MyView
SET @sortID = sortID = @sortID + 1
*To be able to save the sorting in the View, I put TOP into the SELECT statement. This very useful workaround allows the View results to be returned sorted (as set when the View was created) whenever the View is opened. In my case it looked like:
(NOTE: Using this workaround will place a big load on the server if using a large table, and it is therefore recommended to include as few fields as possible in the view when working with large tables)
SELECT TOP (600000)
dbo.Items.ID, dbo.Items.Code, dbo.Items.SortID, dbo.Supplier.Date,
dbo.Supplier.Code AS Expr1
FROM dbo.Items INNER JOIN
dbo.Supplier ON dbo.Items.SupplierCode = dbo.Supplier.Code
ORDER BY dbo.Supplier.Date, dbo.Items.ID DESC
Running: SQL Server 2005 on a Windows Server 2003
SET @pos := 0;
UPDATE TABLE_NAME SET Roll_No = ( SELECT @pos := @pos + 1 ) ORDER BY First_Name ASC;
The above example query simply updates the student Roll_No column, ordered by the student First_Name column, from 1 to the number of records in the table. Note that this uses MySQL-style syntax (:= assignment and UPDATE ... ORDER BY), so it will not run as-is on SQL Server. I hope it's clear now.
IF OBJECT_ID('tempdb..#TAB') IS NOT NULL
BEGIN
DROP TABLE #TAB
END
CREATE TABLE #TAB(CH1 INT,CH2 INT,CH3 INT)
DECLARE @CH2 INT = NULL , @CH3 INT=NULL,@SPID INT=NULL,@SQL NVARCHAR(4000)='', @ParmDefinition NVARCHAR(50)= '',
@RET_MESSAGE AS VARCHAR(8000)='',@RET_ERROR INT=0
SET @ParmDefinition='@SPID INT,@CH2 INT OUTPUT,@CH3 INT OUTPUT'
SET @SQL='UPDATE T
SET CH1=@SPID,@CH2= T.CH2,@CH3= T.CH3
FROM #TAB T WITH(ROWLOCK)
INNER JOIN (
SELECT TOP(1) CH1,CH2,CH3
FROM
#TAB WITH(NOLOCK)
WHERE CH1 IS NULL
ORDER BY CH2 DESC) V ON T.CH2= V.CH2 AND T.CH3= V.CH3'
INSERT INTO #TAB
(CH2 ,CH3 )
SELECT 1,2 UNION ALL
SELECT 2,3 UNION ALL
SELECT 3,4
BEGIN TRY
WHILE EXISTS(SELECT TOP 1 1 FROM #TAB WHERE CH1 IS NULL)
BEGIN
EXECUTE @RET_ERROR = sp_executesql @SQL, @ParmDefinition,@SPID =@@SPID, @CH2=@CH2 OUTPUT,@CH3=@CH3 OUTPUT;
SELECT * FROM #TAB
SELECT @CH2,@CH3
END
END TRY
BEGIN CATCH
SET @RET_ERROR=ERROR_NUMBER()
SET @RET_MESSAGE = '@ERROR_NUMBER : ' + CAST(ERROR_NUMBER() AS VARCHAR(255)) + '@ERROR_SEVERITY :' + CAST( ERROR_SEVERITY() AS VARCHAR(255))
+ '@ERROR_STATE :' + CAST(ERROR_STATE() AS VARCHAR(255)) + '@ERROR_LINE :' + CAST( ERROR_LINE() AS VARCHAR(255))
+ '@ERROR_MESSAGE :' + ERROR_MESSAGE() ;
SELECT @RET_ERROR,@RET_MESSAGE;
END CATCH
