Can I use a @table variable in SQL Server Report Builder? - sql-server

Using SQL Server 2008 Reporting Services:
I'm trying to write a report that displays some correlated data, so I thought to use a @table variable, like so:
DECLARE @Results TABLE (Number int
    ,Name nvarchar(250)
    ,Total1 money
    ,Total2 money
)

insert into @Results (Number, Name, Total1)
select number, name, sum(total)
from table1
group by number, name

update @Results
set total2 = total
from (select number, sum(total) as total from table2) s
where s.number = Number

select * from @results
However, Report Builder keeps asking me to enter a value for the variable @Results. Is this at all possible?
EDIT: As suggested by KM, I've used a stored procedure to solve my immediate problem, but the original question still stands: can I use @table variables in Report Builder?

No. Report Builder will second-guess you and treat @Results as a parameter.

Put all of that in a stored procedure and have Report Builder call that procedure. If you have many rows to process, you might be better off (performance-wise) with a #temp table on which you create a clustered primary key on Number (or would it be Number+Name? I'm not sure from your example code); something like the sketch below.
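A minimal sketch of that approach, assuming the table1/table2 schema from the question (the procedure name is made up):
CREATE PROCEDURE dbo.GetResults  -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;

    -- #temp table with a clustered primary key, as suggested above
    CREATE TABLE #Results (
        Number int NOT NULL,
        Name   nvarchar(250) NOT NULL,
        Total1 money NULL,
        Total2 money NULL,
        PRIMARY KEY CLUSTERED (Number, Name)
    );

    INSERT INTO #Results (Number, Name, Total1)
    SELECT number, name, SUM(total)
    FROM table1
    GROUP BY number, name;

    UPDATE r
    SET Total2 = s.total
    FROM #Results r
    JOIN (SELECT number, SUM(total) AS total
          FROM table2
          GROUP BY number) s ON s.number = r.Number;

    SELECT Number, Name, Total1, Total2 FROM #Results;
END
The Report Builder dataset then simply calls EXEC dbo.GetResults, with no query parameters to prompt for.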
EDIT
You could try to do everything in one SELECT and send that to Report Builder; this should be the fastest (no temp tables):
select dt.number, dt.name, dt.total1, s.total2
from (select number, name, sum(total) AS total1
      from table1
      group by number, name
     ) dt
LEFT OUTER JOIN (select number, sum(total) AS total2
                 from table2
                 GROUP BY number --<< OP code didn't have this, but is it needed??
                ) s ON dt.number = s.number

I've seen this problem as well. It seems SSRS is a bit case-sensitive: if you make sure your table variable is declared and referenced everywhere with the same letter case, the prompt for a parameter value will go away. (Note that the question declares @Results but selects from @results.)

You can use table variables in an SSRS dataset query, as in my code below, where I add the "empty" records needed to keep the group footer in a fixed position (the sample uses the pubs database):
DECLARE @NumberOfLines INT
DECLARE @RowsToProcess INT
DECLARE @CurrentRow INT
DECLARE @CurRow INT
DECLARE @cntMax INT
DECLARE @NumberOfRecords INT
DECLARE @SelectedType char(12)
DECLARE @varTable TABLE ([#] int, type char(12), ord int)
DECLARE @table1 TABLE (type char(12), title varchar(80), ord int)
DECLARE @table2 TABLE (type char(12), title varchar(80), ord int)

INSERT INTO @varTable
SELECT count(type) as [#], type, count(type) FROM titles GROUP BY type ORDER BY type

SELECT @cntMax = max([#]) from @varTable

INSERT into @table1 (type, title, ord) SELECT type, N'', 1 FROM titles
INSERT into @table2 (type, title, ord) SELECT type, title, 1 FROM titles

SET @CurrentRow = 0
SET @SelectedType = N''
SET @NumberOfLines = @RowsPerPage -- @RowsPerPage is a report parameter supplied by SSRS

SELECT @RowsToProcess = COUNT(*) from @varTable

WHILE @CurrentRow < @RowsToProcess
BEGIN
    SET @CurrentRow = @CurrentRow + 1

    SELECT TOP 1 @NumberOfRecords = ord, @SelectedType = type
    FROM @varTable WHERE type > @SelectedType

    SET @CurRow = 0
    WHILE @CurRow < (@NumberOfLines - @NumberOfRecords % @NumberOfLines) % @NumberOfLines
    BEGIN
        SET @CurRow = @CurRow + 1
        INSERT into @table2 (type, title, ord)
        SELECT type, '', 2
        FROM @varTable WHERE type = @SelectedType
    END
END

SELECT type, title FROM @table2 ORDER BY type ASC, ord ASC, title ASC

Why can't you just UNION the two result sets?
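For what it's worth, a rough sketch of that UNION idea, keeping the table1/table2 names from the question (untested; it assumes total has the same type in both tables):
select number, max(name) as name, sum(total1) as total1, sum(total2) as total2
from (select number, name, total as total1, 0 as total2 from table1
      union all
      select number, NULL, 0, total from table2
     ) u
group by number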

How about using a table-valued function rather than a stored proc?
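A multi-statement table-valued function could wrap the same logic; here is a minimal sketch under the same assumed schema (the function name is invented):
CREATE FUNCTION dbo.fnResults()  -- hypothetical name
RETURNS @Results TABLE (Number int, Name nvarchar(250), Total1 money, Total2 money)
AS
BEGIN
    INSERT INTO @Results (Number, Name, Total1)
    SELECT number, name, SUM(total) FROM table1 GROUP BY number, name;

    UPDATE r
    SET Total2 = s.total
    FROM @Results r
    JOIN (SELECT number, SUM(total) AS total FROM table2 GROUP BY number) s
      ON s.number = r.Number;

    RETURN;
END
The report dataset would then be SELECT * FROM dbo.fnResults(), which SSRS accepts without prompting.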

It's possible: just declare your table with a double at-sign ('@@'). Report Builder only treats single-'@' names as parameters, so it won't prompt for '@@results'. Example:
DECLARE @@results TABLE (Number int
    ,Name nvarchar(250)
    ,Total1 money
    ,Total2 money
)

insert into @@results (Number, Name, Total1)
select number, name, sum(total)
from table1
group by number, name

update @@results
set total2 = total
from (select number, sum(total) as total from table2) s
where s.number = Number

select * from @@results

Related

Substring is slow with while loop in SQL Server

One of my table's columns stores ~650,000 characters (each value of the column contains an entire table). I know it's bad design; however, the client will not be able to change it.
I am tasked with converting the column into multiple columns.
I chose to use the dbo.DelimitedSplit8K function.
Unfortunately, it can only handle 8k characters at most.
So I decided to split the column into 81 8k batches using a while loop and store the results in a table variable (a temp or normal table made no improvement):
DECLARE @tab1 table (serialnumber int, etext nvarchar(1000))
declare @scriptquan int = (select MAX(len(errortext)/8000) from mytable)
DECLARE @Counter INT
DECLARE @A bigint = 1
DECLARE @B bigint = 8000
SET @Counter = 1
WHILE (@Counter <= @scriptquan + 1)
BEGIN
    insert into @tab1
    select ItemNumber, Item
    from dbo.mytable
    cross apply dbo.DelimitedSplit8K(substring(errortext, @A, @B), CHAR(13)+CHAR(10))
    SET @A = @A + 8000
    SET @B = @B + 8000
    SET @Counter = @Counter + 1
END
This is followed by the code below:
declare @tab2 table (Item nvarchar(max), itemnumber int, Colseq varchar(10)) -- declare table variable
;with cte as (
    select [etext], ItemNumber, Item from @tab1 -- insert table name
    cross apply dbo.DelimitedSplit8K(etext, ' ')) -- insert table column name that contains text
insert into @tab2
select Item, itemnumber, 'a' + cast(ItemNumber as varchar) colseq
from cte -- insert values to table variable

;WITH Tbl(item, colseq) AS (
    select item, colseq from @tab2
),
CteRn AS (
    SELECT item, colseq,
           Rn = ROW_NUMBER() OVER (PARTITION BY colseq ORDER BY colseq)
    FROM Tbl
)
SELECT a1 Time, a2 Number, a3 Type, a4 Remarks
FROM CteRn r
PIVOT (
    MAX(item)
    FOR colseq IN (a1, a2, a3, a4)
) p
where a3 = 'error'
gives the desired output. However, the loop alone takes 15 minutes to complete, and the overall query finishes in 27 minutes. Is there any way I can make it faster? The total row count in my table is 2, so I don't think an index can help.
The client uses Azure SQL Database, so I can't use PowerShell or Python to accomplish this either.
Please let me know if more information is needed. I have tried my best to mention everything I could.

How do I update multiple rows in a stored procedure

I get batches of inventory items to update, and I would like to eliminate calling the stored procedure multiple times and instead call it once with multiple values. I have done something similar in Oracle with the parameters-as-an-array trick, and I would like to do the same for SQL Server.
I have a comma-separated list of Skus.
I have a comma-separated list of Quantities.
I have a comma-separated list of StoreIds.
The standard update is:
Update Inventory
set quantity = @Quantity
where sku = @Sku and StoreId = @StoreId;
Table definition:
CREATE TABLE Inventory
(
    [Sku] NVARCHAR(50) NOT NULL,
    [Quantity] DECIMAL NULL DEFAULT 0.0,
    [StoreId] INT NOT NULL
)
My bad attempt at doing this:
ALTER PROCEDURE UpdateList
    (@Sku varchar(max),
     @Quantity varchar(max),
     @StoreId varchar(max))
AS
BEGIN
    DECLARE @n int = 0;
    DECLARE @skuTable TABLE = SELECT CONVERT(value) FROM STRING_SPLIT(@Sku, ',');
    DECLARE @quantityTable = SELECT CONVERT(value) FROM STRING_SPLIT(@Quantity, ',');
    DECLARE @StoreIdTable = SELECT CONVERT(value) FROM STRING_SPLIT(@StoreId, ',');
    WHILE @n < @skuTable.Count
    BEGIN
        UPDATE inventoryItem
        SET Quantity = @quantityTable
        WHERE Sku = @skuTable AND StoreId = @StoreIdTable;
        SELECT @n = @n + 1;
    END
END
I am open to using temp tables as parameters instead of comma-separated lists. This is being called from an Entity Framework 6 context object in the front-end system.
It's a bad practice to pass tabular values this way.
The best solution is to pass a "user-defined table type", if possible;
otherwise, it's better to pass a JSON/XML parameter,
and then you can update your table like this:
--[ Parameters ]--
DECLARE @json AS NVARCHAR(MAX) = '[{"Sku":"A","Quantity":1.4,"StoreId":1},{"Sku":"B","Quantity":2.5,"StoreId":2},{"Sku":"C","Quantity":3.6,"StoreId":3}]';

--[ Bulk Update ]--
UPDATE T
SET Quantity = I.Quantity
FROM inventoryItem AS T
JOIN OPENJSON(@json) WITH (Sku NVARCHAR(50), Quantity DECIMAL(5,1), StoreId INT) AS I
  ON I.Sku = T.Sku
 AND I.StoreId = T.StoreId
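The user-defined table type route mentioned above would look roughly like this (a sketch; the type name, procedure name, and decimal precision are my assumptions):
CREATE TYPE dbo.InventoryUpdate AS TABLE
(
    Sku      NVARCHAR(50) NOT NULL,
    Quantity DECIMAL(5,1) NOT NULL,
    StoreId  INT NOT NULL
);
GO
CREATE PROCEDURE dbo.UpdateInventoryBulk
    @rows dbo.InventoryUpdate READONLY  -- table-valued parameters must be READONLY
AS
BEGIN
    UPDATE T
    SET Quantity = I.Quantity
    FROM inventoryItem AS T
    JOIN @rows AS I
      ON I.Sku = T.Sku
     AND I.StoreId = T.StoreId;
END
From .NET (including an Entity Framework 6 context via a raw command) the parameter is passed as a SqlParameter with SqlDbType.Structured and TypeName = "dbo.InventoryUpdate".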
It's a bad practice to pass tabular values as varchar parameters,
but if you still want to go this way, here is working code:
--[ Parameters ]--
DECLARE @Sku VARCHAR(max) = 'A,B,C',
        @Quantity VARCHAR(max) = '1.4,2.5,3.6',
        @StoreId VARCHAR(max) = '1,2,3'

--[ Converting VARCHAR Parameters to Table #Inventory ]--
DROP TABLE IF EXISTS #Sku
SELECT IDENTITY(int, 1, 1) AS RowNum,
       T.value
INTO #Sku
FROM STRING_SPLIT(@Sku, ',') AS T

DROP TABLE IF EXISTS #Quantity
SELECT IDENTITY(int, 1, 1) AS RowNum,
       T.value
INTO #Quantity
FROM STRING_SPLIT(@Quantity, ',') AS T

DROP TABLE IF EXISTS #StoreId
SELECT IDENTITY(int, 1, 1) AS RowNum,
       T.value
INTO #StoreId
FROM STRING_SPLIT(@StoreId, ',') AS T

DROP TABLE IF EXISTS #Inventory
SELECT Sku.value AS Sku,
       Quantity.value AS Quantity,
       StoreId.value AS StoreId
INTO #Inventory
FROM #Sku AS Sku
JOIN #Quantity AS Quantity ON Quantity.RowNum = Sku.RowNum
JOIN #StoreId AS StoreId ON StoreId.RowNum = Sku.RowNum

--[ Bulk Update ]--
UPDATE T
SET Quantity = I.Quantity
FROM inventoryItem AS T
JOIN #Inventory AS I
  ON I.Sku = T.Sku
 AND I.StoreId = T.StoreId
The above answers are correct for updates and answered my question, but I want to add the insert here as well, since I am sure someone will be looking for both.
I think the JSON version is best for my case because I am using Entity Framework, and serializing an object to JSON is a trivial task. The basic process is to project a rowset from the JSON string, picking out the values via simple dot-notation paths. I would suggest making the object passed in as simple as possible, preferably with one level of properties.
create or alter procedure bulkInventoryInsert (@json AS NVARCHAR(MAX))
AS
BEGIN
    INSERT into inventory
    SELECT Sku, Quantity, StoreId
    FROM OPENJSON(@json)
    WITH (Sku varchar(200) '$.Sku',
          Quantity decimal(5,1) '$.Quantity',
          StoreId INT '$.StoreId');
END

DECLARE @json AS NVARCHAR(MAX) = '[{"Sku":"A","Quantity":1.4,"StoreId":2},{"Sku":"B","Quantity":2.5,"StoreId":3},{"Sku":"C","Quantity":3.6,"StoreId":2}]';
EXECUTE bulkInventoryInsert @json;
The key part to recognize is this section:
SELECT Sku, Quantity, StoreId
FROM OPENJSON(@json)
WITH (Sku varchar(200) '$.Sku',
      Quantity decimal(5,1) '$.Quantity',
      StoreId INT '$.StoreId');
This produces a rowset with columns that match the table it will be inserted into. The WITH clause specifies each column's name, type, and the JSON path to take the value from.
I hope this helps. Maybe when I get time I will write this up as a separate question and answer.

How to find the missing records (IDs) in an indexed [Order] table in SQL

I have a table [Order] whose records have sequential IDs (odd numbers only, i.e. 1, 3, 5, 7, ... 989, 991, 993, 995, 997, 999). A few records were accidentally deleted and should be inserted back. The first step is to find out which records are missing from the current table; there are hundreds of records in this table.
I don't know how to write the query; can anyone kindly help, please?
I could write a stored procedure or function, but it would be better if I can avoid them for environment reasons.
Below is pseudo-code for what I am thinking:
set @MaxValue = Max(numberfield)
set @TestValue = 1
open cursor on recordset ordered by numberfield
foreach numberfield
    while (numberfield != @TestValue) and (@TestValue < @MaxValue) then
        Insert @TestValue into #temp table
        set @TestValue = @TestValue + 2
    Next
Next
UPDATE:
Expected result: Order ID = 7 should be picked up as the only missing record.
Update 2:
If I use
WHERE
    o.id IS NULL;
it returns nothing.
Since I didn't get a response from you in the comments, I've altered the script for you to fill in accordingly:
declare @id int
declare @maxid int

set @id = 1
select @maxid = max([Your ID Column Name]) from [Your Table Name]

declare @IDseq table (id int)

while @id < @maxid --whatever your max is
begin
    insert into @IDseq values (@id)
    set @id = @id + 1
end

select s.id
from @IDseq s
left join [Your Table Name] t on s.id = t.[Your ID Column Name]
where t.[Your ID Column Name] is null
Where you see [Your ID Column Name], replace it with your ID column's name, and do the same for [Your Table Name].
I'm sure this will give you the results you seek.
We can try joining to a numbers table which contains all the odd numbers you might expect to appear in your own table.
DECLARE @start int = 1
DECLARE @end int = 1000

WITH cte AS (
    SELECT @start num
    UNION ALL
    SELECT num + 2 FROM cte WHERE num < @end
)
SELECT num
FROM cte t
LEFT JOIN [Order] o ON t.num = o.numberfield
WHERE o.numberfield IS NULL
OPTION (MAXRECURSION 0); -- needed: the default limit of 100 recursions is too low for the ~500 steps here

How do I loop through a table, search with that data, and then return the search criteria and result to a new table?

I have a set of records that need to be validated (searched for) in a SQL table. I will call these ValData and SearchTable respectively. A colleague created a SQL query into which a record from ValData can be copied and pasted as a string variable; it is then searched for in the SearchTable, and the best result from the SearchTable is returned. This works very well.
I want to automate this process. I loaded ValData into a SQL table like so:
RowID INT, FirstName, LastName, DOB, Date1, Date2, TextDescription.
I want to loop through this set of data by RowID and then create a result table that is ValData joined with the best match from the SearchTable. Again, I already have a query that does that portion. I just need the loop portion, and my SQL skills are virtually non-existent.
Pseudo-code would be:
DECLARE @SearchID INT = 1
DECLARE @MaxSearchID INT = 15000
DECLARE @FName VARCHAR(50) = ''
DECLARE @LName VARCHAR(50) = ''
etc...

WHILE @SearchID <= @MaxSearchID
BEGIN
    SET @FNAME = (SELECT [Fname] FROM ValData WHERE [RowID] = @SearchID)
    SET @LNAME = (SELECT [Lname] FROM ValData WHERE [RowID] = @SearchID)
    etc...

    Do colleague's query, and then insert(?) the search criteria joined with the result from the SearchTable into a temporary result table.
END
SELECT * FROM FinalResultTable;
My biggest lack of knowledge is in how to create a temporary result table that holds ValData's fields plus SearchTable's fields, and, during the loop iterations, how to add one row at a time to it: the ValData record joined with the result from the SearchTable.
If it helps, I want to join all fields from ValData with all fields from SearchTable.
Wouldn't this be far easier with a query like this?
SELECT FNAME,
       LNAME
FROM ValData
WHERE (FName = @Fname
    OR LName = @Lname)
  AND RowID <= @MaxSearchID
ORDER BY RowID ASC;
There is literally no reason to use a WHILE loop here other than to destroy the performance of the query.
With a bit more trial and error, I was able to work out what I was looking for (which, at its core, was creating a temp table and then inserting rows into it).
CREATE TABLE #RESULTTABLE(
    [feedname] VARCHAR(100),
    ...
    [SCORE] INT,
    [Max Score] INT,
    [% Score] FLOAT(4),
    [RowID] SMALLINT
)

SET @SearchID = 1
SET @MaxSearchID = (SELECT MAX([RowID]) FROM ValidationData)

WHILE @SearchID <= @MaxSearchID
BEGIN
    SET @FNAME = (SELECT [Fname] FROM ValidationData WHERE [RowID] = @SearchID)
    ...

    --BEST MATCH QUERY HERE
    --Select the "top" best match (order not guaranteed) into the RESULTTABLE.
    INSERT INTO #RESULTTABLE
    SELECT TOP 1 *, @SearchID AS RowID
    --INTO #RESULTTABLE
    FROM #TABLE3
    WHERE [% Score] IN (SELECT MAX([% Score]) FROM #TABLE3)

    --Drop temp tables that were created/used during the best match query.
    DROP TABLE #TABLE1
    DROP TABLE #TABLE2
    DROP TABLE #TABLE3

    SET @SearchID = @SearchID + 1
END;

--Join the data that was validated (searched) to the results that were found.
SELECT *
FROM ValidationData vd
LEFT JOIN #RESULTTABLE rt ON rt.[RowID] = vd.[RowID]
ORDER BY vd.[RowID]

DROP TABLE #RESULTTABLE
I know this could be improved by doing a join, probably with the "BEST MATCH QUERY" as an inner query; I am just not that skilled yet. This shortens a manual process which took hours upon hours to just an hour or so. A set-based sketch of that idea is shown below.
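For reference, the set-based shape would be a CROSS APPLY with the best match query correlated per row; everything here is illustrative (SearchTable, the matching predicate, and the score column are placeholders for the colleague's logic):
SELECT vd.*, bm.*
FROM ValidationData vd
CROSS APPLY (
    -- the colleague's "best match query" goes here, referencing vd's columns directly
    SELECT TOP 1 st.*
    FROM SearchTable st
    WHERE st.FirstName = vd.FirstName  -- hypothetical matching logic
    ORDER BY st.Score DESC             -- hypothetical scoring column
) bm
ORDER BY vd.RowID;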

Is it possible to iterate over a list of numbers in T-SQL?

I have a comma-separated list of personIDs: 1265, 8632.
What I want to do is something I'd like to indicate with a chunk of pseudo-code, which does not work, of course:
declare @max int = (select count(*) from dbo.tablePersons) - 1
declare @cnt int = 0
BEGIN
    SELECT personID FROM dbo.tablePersons
    WHERE (1265, 8632)[@cnt]
    IS NOT IN (SELECT personID FROM dbo.tablePersons)
    SET @cnt = @cnt + 1
END
I want to iterate over the list 1265, 8632 and check which of the IDs in the list are NOT present in SELECT personID FROM dbo.tablePersons. The goal is to find all the IDs in my list that are missing from dbo.tablePersons.
Given that this pseudo-code is not viable, is there some kind of workaround?
You can try this:
DECLARE @mockperson TABLE (ID INT, PersName VARCHAR(100));
INSERT INTO @mockperson VALUES
 (1, 'pers 1')
,(2, 'pers 2');

DECLARE @yourlist VARCHAR(100) = '1,999,2';

--The query splits the given list and checks for numbers NOT IN the person table
WITH Casted AS
(
    SELECT CAST('<x>' + REPLACE(@yourlist, ',', '</x><x>') + '</x>' AS XML) AS TheListAsXml
)
SELECT x.value('text()[1]', 'int')
FROM Casted
CROSS APPLY TheListAsXml.nodes('/x') AS A(x)
WHERE x.value('text()[1]', 'int') NOT IN (SELECT ID FROM @mockperson);
Starting with v2016:
With STRING_SPLIT() (v2016+), this is the same approach, just easier:
SELECT *
FROM STRING_SPLIT(@yourlist, ',') AS A
WHERE A.value NOT IN (SELECT ID FROM @mockperson);
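Note that A.value is a string, so the comparison against the int ID column relies on implicit conversion. A slightly more defensive variation (my addition, not part of the original answer) casts explicitly; entries that don't parse as int become NULL and simply drop out of the result:
SELECT *
FROM STRING_SPLIT(@yourlist, ',') AS A
WHERE TRY_CAST(A.value AS int) NOT IN (SELECT ID FROM @mockperson);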
