How do I update multiple rows in a stored procedure - sql-server

I get batches of inventory items to update, and I would like to eliminate calling the stored procedure multiple times and instead call it once with multiple values. I have done something similar in Oracle with the parameters-as-an-array trick, and I would like to do the same for SQL Server.
I have a comma-separated list of Skus.
I have a comma-separated list of Quantities.
I have a comma-separated list of StoreIds.
The standard single-row update is:
Update Inventory
set quantity = @Quantity
where sku = @Sku and StoreId = @StoreId;
Table definition
CREATE TABLE Inventory
(
[Sku] NVARCHAR(50) NOT NULL,
[Quantity] DECIMAL NULL DEFAULT 0.0,
[StoreId] INT NOT NULL
)
My bad attempt at doing this
ALTER PROCEDURE UpdateList
(@Sku varchar(max),
@Quantity varchar(max),
@StoreId varchar(max))
AS
BEGIN
DECLARE @n int = 0;
DECLARE @skuTable TABLE = SELECT CONVERT(value) FROM STRING_SPLIT(@Sku, ',');
DECLARE @quantityTable = SELECT CONVERT(value) FROM STRING_SPLIT(@Quantity, ',');
DECLARE @StoreIdTable = SELECT CONVERT(value) FROM STRING_SPLIT(@StoreId , ',');
WHILE @n < @skuTable.Count
BEGIN
UPDATE inventoryItem
SET Quantity = @quantityTable
WHERE Sku = @skuTable AND StoreId = @StoreIdTable;
SELECT @n = @n + 1;
END
END
I am open to using temp tables as parameters instead of comma separated. This is being called from an Entity Framework 6 context object from the front end system.

It's a bad practice to pass tabular values this way.
The best solution is to pass a user-defined table type, if possible;
otherwise, it's better to pass a JSON/XML parameter,
and then you can update your table like this:
--[ Parameters ]--
DECLARE @json AS NVARCHAR(MAX) = '[{"Sku":"A","Quantity":1.4,"StoreId":1},{"Sku":"B","Quantity":2.5,"StoreId":2},{"Sku":"C","Quantity":3.6,"StoreId":3}]';
--[ Bulk Update ]--
UPDATE T
SET Quantity = I.Quantity
FROM inventoryItem AS T
JOIN OPENJSON(@json) WITH (Sku NVARCHAR(50), Quantity DECIMAL(5,1), StoreId INT) AS I
ON I.Sku = T.Sku
AND I.StoreId = T.StoreId
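For reference, the user-defined table type route mentioned above could look roughly like this (a sketch only; the type and procedure names are illustrative, not from the question):
--[ A table type to carry the batch (illustrative names) ]--
CREATE TYPE dbo.InventoryUpdateType AS TABLE
(
    Sku      NVARCHAR(50) NOT NULL,
    Quantity DECIMAL(5,1) NOT NULL,
    StoreId  INT          NOT NULL,
    PRIMARY KEY (Sku, StoreId)
);
GO
--[ The procedure receives the whole batch as a READONLY parameter ]--
CREATE OR ALTER PROCEDURE dbo.UpdateInventoryList
    @Items dbo.InventoryUpdateType READONLY
AS
BEGIN
    UPDATE T
    SET Quantity = I.Quantity
    FROM Inventory AS T
    JOIN @Items AS I
      ON I.Sku = T.Sku
     AND I.StoreId = T.StoreId;
END
From ADO.NET or Entity Framework the batch can be passed as a DataTable parameter with SqlDbType.Structured and TypeName set to the table type's name.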

It's a bad practice to pass tabular values as comma-separated varchar parameters,
but if you still want to go this way, here is working code:
--[ Parameters ]--
DECLARE @Sku VARCHAR(max) = 'A,B,C',
@Quantity VARCHAR(max) = '1.4,2.5,3.6',
@StoreId VARCHAR(max) = '1,2,3'
--[ Converting VARCHAR Parameters to Table #Inventory ]--
DROP TABLE IF EXISTS #Sku
SELECT IDENTITY(int, 1,1) AS RowNum,
T.value
INTO #Sku
FROM STRING_SPLIT(#Sku, ',') AS T
DROP TABLE IF EXISTS #Quantity
SELECT IDENTITY(int, 1,1) AS RowNum,
T.value
INTO #Quantity
FROM STRING_SPLIT(#Quantity, ',') AS T
DROP TABLE IF EXISTS #StoreId
SELECT IDENTITY(int, 1,1) AS RowNum,
T.value
INTO #StoreId
FROM STRING_SPLIT(#StoreId, ',') AS T
DROP TABLE IF EXISTS #Inventory
SELECT Sku.value AS Sku,
Quantity.value AS Quantity,
StoreId.value AS StoreId
INTO #Inventory
FROM #Sku AS Sku
JOIN #Quantity AS Quantity ON Quantity.RowNum = Sku.RowNum
JOIN #StoreId AS StoreId ON StoreId.RowNum = Sku.RowNum
--[ Bulk Update ]--
UPDATE T
SET Quantity = I.Quantity
FROM inventoryItem AS T
JOIN #Inventory AS I
ON I.Sku = T.Sku
AND I.StoreId = T.StoreId

The above answers are correct for updates and answered my question, but I wanted to add the insert here as I am sure someone will be looking for both. Maybe I will come back and make a new question and answer it myself.
I think the JSON version is best for my case because I am using Entity Framework, and serializing an object to JSON is a trivial task. The basic process is to create an inline rowset from the JSON string, pulling the values out of the objects via simple dot-notation paths. I would suggest making the object passed in as simple as possible, preferably with one level of properties.
CREATE OR ALTER PROCEDURE bulkInventoryInsert (@json AS NVARCHAR(MAX))
AS
BEGIN
INSERT INTO inventory (Sku, Quantity, StoreId)
SELECT Sku, Quantity, StoreId FROM
OPENJSON(@json)
WITH(Sku varchar(200) '$.Sku',
Quantity decimal(5,1) '$.Quantity',
StoreId INT '$.StoreId');
END
GO
DECLARE @json AS NVARCHAR(MAX) = '[{"Sku":"A","Quantity":1.4,"StoreId":2},{"Sku":"B","Quantity":2.5,"StoreId":3},{"Sku":"C","Quantity":3.6,"StoreId":2}]';
EXECUTE bulkInventoryInsert @json;
The key part to recognize is this section here:
SELECT Sku, Quantity, StoreId FROM
OPENJSON(@json)
WITH(Sku varchar(200) '$.Sku',
Quantity decimal(5,1) '$.Quantity',
StoreId INT '$.StoreId');
This creates an inline rowset with columns that match the table it will be inserted into. The "WITH" portion specifies the column name, type, and where in the JSON string to get the value.
I hope this will help. Maybe when I get time I will do a question and answer for this.
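If you end up needing insert-or-update in a single call, the same OPENJSON rowset can also feed a MERGE. This is a sketch of my own (the procedure name is made up), not part of the answers above:
CREATE OR ALTER PROCEDURE bulkInventoryUpsert (@json AS NVARCHAR(MAX))
AS
BEGIN
    -- Update rows that already exist, insert the ones that don't
    MERGE Inventory AS T
    USING OPENJSON(@json)
          WITH (Sku NVARCHAR(50) '$.Sku',
                Quantity DECIMAL(5,1) '$.Quantity',
                StoreId INT '$.StoreId') AS S
       ON S.Sku = T.Sku AND S.StoreId = T.StoreId
    WHEN MATCHED THEN
        UPDATE SET Quantity = S.Quantity
    WHEN NOT MATCHED THEN
        INSERT (Sku, Quantity, StoreId)
        VALUES (S.Sku, S.Quantity, S.StoreId);
END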

Related

How do I loop through a table, search with that data, and then return search criteria and result to new table?

I have a set of records that need to be validated (searched for) in a SQL table. I will call these ValData and SearchTable respectively. A colleague created a SQL query in which a record from ValData can be copied and pasted into a string variable, and then it is searched for in the SearchTable. The best result from the SearchTable is returned. This works very well.
I want to automate this process. I loaded the ValData to SQL in a table like so:
RowID INT, FirstName, LastName, DOB, Date1, Date2, TextDescription.
I want to loop through this set of data, by RowID, and then create a result table that is the ValData joined with the best match from the SearchTable. Again, I already have a query that does that portion. I just need the loop portion, and my SQL skills are virtually non-existent.
Pseudocode would be:
DECLARE @SearchID INT = 1
DECLARE @MaxSearchID INT = 15000
DECLARE @FName VARCHAR(50) = ''
DECLARE @LName VARCHAR(50) = ''
etc...
WHILE @SearchID <= @MaxSearchID
BEGIN
SET @FNAME = (SELECT [Fname] FROM ValData WHERE [RowID] = @SearchID)
SET @LNAME = (SELECT [Lname] FROM ValData WHERE [RowID] = @SearchID)
etc...
Do colleague's query, and then insert(?) the search criteria joined with the result from the SearchTable into a temporary result table.
END
SELECT * FROM FinalResultTable;
My biggest lack of knowledge is in how to create a temporary result table that holds ValData's fields plus SearchTable's fields, and, during the loop iterations, how to add one row at a time to this temporary result table containing the ValData row joined with its result from the SearchTable.
If it helps, I'm using/wanting to join all fields from ValData and all fields from SearchTable.
Wouldn't this be far easier with a query like this..?
SELECT FNAME,
LNAME
FROM ValData
WHERE (FName = #Fname
OR LName = #Lname)
AND RowID <= #MaxSearchID
ORDER BY RowID ASC;
There is literally no reason to use a WHILE other than to destroy performance of the query.
With a bit more trial and error, I was able to answer what I was looking for (which, at its core, was creating a temp table and then inserting rows into it).
CREATE TABLE #RESULTTABLE(
[feedname] VARCHAR(100),
...
[SCORE] INT,
[Max Score] INT,
[% Score] FLOAT(4),
[RowID] SMALLINT
)
SET @SearchID = 1
SET @MaxSearchID = (SELECT MAX([RowID]) FROM ValidationData)
WHILE @SearchID <= @MaxSearchID
BEGIN
SET @FNAME = (SELECT [Fname] FROM ValidationData WHERE [RowID] = @SearchID)
...
--BEST MATCH QUERY HERE
--Select the "top" best match (order not guaranteed) into #RESULTTABLE.
INSERT INTO #RESULTTABLE
SELECT TOP 1 *, @SearchID AS RowID
--INTO #RESULTTABLE
FROM #TABLE3
WHERE [% Score] IN (SELECT MAX([% Score]) FROM #TABLE3)
--Drop temp tables that were created/used during best match query.
DROP TABLE #TABLE1
DROP TABLE #TABLE2
DROP TABLE #TABLE3
SET @SearchID = @SearchID + 1
END;
--Join the data that was validated (searched) to the results that were found.
SELECT *
FROM ValidationData vd
LEFT JOIN #RESULTTABLE rt ON rt.[RowID] = vd.[RowID]
ORDER BY vd.[RowID]
DROP TABLE #RESULTTABLE
I know this could be improved by doing a join, probably with the "BEST MATCH QUERY" as an inner query. I am just not that skilled yet. This takes a manual process which took hours upon hours and shortens it to just an hour or so.
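For anyone who wants to drop the WHILE entirely: the loop above can usually be collapsed into a single OUTER APPLY, with the colleague's best-match query as the applied subquery. The sketch below is only illustrative; the match predicate and the MatchScore column are assumptions, not real column names from the question:
SELECT vd.*, bm.*
FROM ValidationData AS vd
OUTER APPLY
(
    SELECT TOP (1) st.*
    FROM SearchTable AS st
    WHERE st.LastName = vd.LastName     -- whatever the real match predicate is
    ORDER BY st.MatchScore DESC         -- hypothetical score column; "best" match first
) AS bm
ORDER BY vd.RowID;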

update value into another table using trigger

Create a table data with columns StudentId (varchar) and Marks (double). Create a table data1 with columns StudentId (varchar), OldMarks (double), NewMarks, and Date.
Create a trigger on the data table: if Marks is changed, create an entry in the data1 table for that student with the old marks, the new marks, and the current date.
Here is the code I've tried:
CREATE TRIGGER marksss ON [dbo].[data] after UPDATE
AS declare @studentid int;
declare @marks int;
declare @xyz int;
declare @newmarks int;
declare @oldmarks int;
select @studentid=i.student_id from inserted i;
--to fetch inserted values
select @marks=i.marks from inserted i;
begin if update(marks) --set @oldmarks=@mark set @newmarks=@marks
insert into data1(student_id,new_marks,old_marks,date)
values (@studentid,@newmarks,@oldmarks,getdate());
end
go
The problem is that it does not record the old marks.
I've managed to get what you want. First of all, you want to use an INSTEAD OF trigger. Oracle has a BEFORE trigger, which is what you ideally need, but MSSQL doesn't have this feature, so we have to apply the passed-in update manually as well...
Here is the code with the table setup that I used, just changed to suit your needs.
CREATE TABLE A (ID INT IDENTITY PRIMARY KEY, SCORE INT)
CREATE TABLE B (ID INT FOREIGN KEY REFERENCES A(ID), SCORE INT, OLDSCORE INT, [date] DATETIME)
GO
CREATE TRIGGER marksss ON A INSTEAD OF UPDATE
AS
BEGIN
IF (SELECT A.SCORE FROM A INNER JOIN INSERTED I ON I.ID = A.ID) != (SELECT I.SCORE FROM INSERTED I)
BEGIN
INSERT INTO B(ID,SCORE,OLDSCORE,[date])
SELECT I.ID, I.SCORE, A.SCORE, GETDATE()
FROM INSERTED I
INNER JOIN A ON I.ID = A.ID
END
BEGIN
-- apply the intercepted update; join to INSERTED so only the targeted rows change
UPDATE A
SET SCORE = I.SCORE
FROM A
INNER JOIN INSERTED I ON I.ID = A.ID
END
END
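If an INSTEAD OF trigger feels heavy, the original data/data1 tables can also be handled with a plain AFTER UPDATE trigger that reads both inserted and deleted, which captures old and new marks and copes with multi-row updates. A sketch, using the column names from the question and assuming StudentId identifies a row:
CREATE TRIGGER trg_data_marks ON dbo.data
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(Marks)
        INSERT INTO data1 (StudentId, OldMarks, NewMarks, [Date])
        SELECT d.StudentId, d.Marks, i.Marks, GETDATE()
        FROM deleted AS d
        JOIN inserted AS i ON i.StudentId = d.StudentId    -- assumes StudentId is unique
        WHERE ISNULL(d.Marks, -1) <> ISNULL(i.Marks, -1);  -- only log real changes
END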

Insert random text into columns from a reference table variable

I have a table ABSENCE that has 40 employee ids and need to add two columns from a table variable, which acts as a reference table. For each emp id, I need to randomly assign the values from the table variable. Here's the code I tried without randomizing:
USE TSQL2012;
GO
DECLARE @MAX SMALLINT;
DECLARE @MIN SMALLINT;
DECLARE @RECODE SMALLINT;
DECLARE @RE CHAR(100);
DECLARE @rearray table (recode smallint,re char(100));
insert into @rearray values (100,'HIT BY BEER TRUCK')
,(200,'BAD HAIR DAY')
,(300,'ASPIRIN OVERDOSE')
,(400,'MAKEUP DISASTER')
,(500,'GOT LOCKED IN THE SALOON')
DECLARE @REFCURSOR AS CURSOR;
SET @REFCURSOR = CURSOR FOR
SELECT RECODE,RE FROM @REARRAY;
OPEN @REFCURSOR;
SET @MAX = (SELECT DISTINCT @@ROWCOUNT FROM ABSENCE);
SET @MIN = 0;
ALTER TABLE ABSENCE ADD CODE SMALLINT, REASONING CHAR(100);
WHILE (@MIN <= @MAX)
BEGIN
FETCH NEXT FROM @REFCURSOR INTO @RECODE,@RE;
INSERT INTO ABSENCE (CODE, REASONING) VALUES (@RECODE,@RE);
SET @MIN+=1;
END
CLOSE @REFCURSOR
DEALLOCATE @REFCURSOR
SELECT EMPID,CODE,REASONING FROM ABSENCE
Though I am inserting into only two columns, the INSERT creates new rows in which empid (already populated for the existing rows) is NULL, and as it cannot be NULL, the insertion fails.
Also, how do I randomize the values from the @rearray table variable when inserting them into the ABSENCE table?
Since this is a small dataset, one approach might be to use CROSS APPLY with SELECT TOP(1) ... FROM @rearray ORDER BY NEWID(). This essentially joins your ABSENCE table to your reference table in an UPDATE statement, selecting a random row for each join. In full, it would look like:
UPDATE a
SET CODE = x.recode, REASONING = x.re
FROM ABSENCE a
CROSS APPLY (SELECT TOP(1) * FROM @rearray ORDER BY NEWID()) x(recode, re)
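One caveat: because that subquery never references the outer row, the optimizer is free to evaluate it once and hand every employee the same pair. A commonly suggested tweak (still a sketch, and plan-dependent) is to reference an outer column inside the APPLY so it is re-evaluated per row:
UPDATE a
SET CODE = x.recode, REASONING = x.re
FROM ABSENCE AS a
CROSS APPLY (SELECT TOP (1) r.recode, r.re
             FROM @rearray AS r
             WHERE a.EMPID = a.EMPID   -- no-op predicate tying the subquery to the outer row
             ORDER BY NEWID()) AS x;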

coalesce two records into one

I have a table that stores two values, 'total' and 'owing', for each customer. Data is uploaded to the table using two files: one brings in 'total' and the other brings in 'owing'. This means I have two records for each customerID:
customerID    Total    Owing
1234          1000     NULL
1234          NULL     200
I want to write a stored procedure that merges the two records together:
customerID    Total    Owing
1234          1000     200
I have seen examples using COALESCE so put together something like this:
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
--Variable declarations
DECLARE @customer_id varchar(20)
DECLARE @total decimal(15,8)
DECLARE @owing decimal(15,8)
DECLARE @customer_name_date varchar(255)
DECLARE @organisation varchar(4)
DECLARE @country_code varchar(2)
DECLARE @created_date datetime
--Other Variables
DECLARE @totals_staging_id int
--Get the id of the first row in the staging table
SELECT @totals_staging_id = MIN(totals_staging_id)
from TOTALS_STAGING
--iterate through the staging table
WHILE @totals_staging_id is not null
BEGIN
update TOTALS_STAGING
SET
total = coalesce(@total, total),
owing = coalesce(@owing, owing)
where totals_staging_id = @totals_staging_id
END
END
Any Ideas?
SELECT t1.customerId, t1.total, t2.owing FROM test t1 JOIN test t2 ON ( t1.customerId = t2.customerId) WHERE t1.total IS NOT NULL AND t2.owing IS NOT NULL
Wondering why you aren't just using an UPDATE when the second file is loaded?
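For instance, if the second ('owing') file lands in a staging table first, applying it could be as simple as the sketch below (the staging table and column names are assumptions):
-- OwingStaging is a hypothetical staging table loaded from the second file
UPDATE t
SET Owing = s.Owing
FROM YourTable AS t
JOIN OwingStaging AS s
  ON s.customerID = t.customerID;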
Except for COUNT, aggregate functions ignore null values. Aggregate
functions are frequently used with the GROUP BY clause of the SELECT
statement. MSDN
So you don't need to worry about null values when summing. The following will merge your records together (Fiddle demo):
select customerId,
sum(Total) Total,
sum(Owing) Owing
from T
Group by customerId
Try this :
CREATE TABLE #Temp
(
CustomerId int,
Total int,
Owing int
)
insert into #Temp
values (1024,100,null),(1024,null,200),(1025,10,null)
Create Table #Final
(
CustomerId int,
Total int,
Owing int
)
insert into #Final
values (1025,100,50)
MERGE #Final AS F
USING
(SELECT customerid,sum(Total) Total,sum(owing) owing FROM #Temp
group by #Temp.customerid
) AS a
ON (F.customerid = a.customerid)
WHEN MATCHED THEN UPDATE SET F.Total = F.Total + isnull(a.Total,0)
,F.Owing = F.Owing + isnull(a.Owing,0)
WHEN NOT MATCHED THEN
INSERT (CustomerId,Total,Owing)
VALUES (a.customerid,a.Total,a.owing);
select * from #Final
drop table #Temp
drop table #Final
This should work:
SELECT CustomerID,
COALESCE(total1, total2) AS Total,
COALESCE(owing1, owing2) AS Owing
FROM
(SELECT row1.CustomerID AS CustomerID,
row1.Total AS total1,
row2.Total AS total2,
row1.Owing AS owing1,
row2.Owing AS owing2
FROM YourTable row1 INNER JOIN YourTable row2 ON row1.CustomerID = row2.CustomerID
WHERE row1.Total IS NULL AND row2.Total IS NOT NULL) temp
--Note: Alter the WHERE clause as necessary to ensure row1 and row2 are unique.
...but note that you'll need some mechanism to ensure row1 and row2 are unique. My WHERE clause is an example based on the data you provided. You'll need to tweak this to add something more specific to your business rules.

Can I use @table variable in SQL Server Report Builder?

Using SQL Server 2008 Reporting services:
I'm trying to write a report that displays some correlated data so I thought to use a @table variable like so
DECLARE @Results TABLE (Number int
,Name nvarchar(250)
,Total1 money
,Total2 money
)
insert into @Results(Number, Name, Total1)
select number, name, sum(total)
from table1
group by number, name
update @Results
set total2 = total
from
(select number, sum(total) from table2) s
where s.number = number
select * from @results
However, Report Builder keeps asking me to enter a value for the variable @Results. Is this at all possible?
EDIT: As suggested by KM, I've used a stored procedure to solve my immediate problem, but the original question still stands: can I use @table variables in Report Builder?
No.
Report Builder will second-guess you and treat @Results as a parameter.
Put all of that in a stored procedure and have Report Builder call that procedure. If you have many rows to process you might be better off (performance-wise) with a #temp table where you create a clustered primary key on Number (or would it be Number+Name? I'm not sure from your example code).
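As a rough sketch of that suggestion (the procedure and object names are mine, not from the question), the wrapper with a #temp table and a clustered primary key might look like:
CREATE PROCEDURE dbo.GetTotalsReport
AS
BEGIN
    SET NOCOUNT ON;
    -- #temp table with a clustered key as suggested above
    CREATE TABLE #Results
    (
        Number INT NOT NULL,
        Name nvarchar(250) NOT NULL,
        Total1 money NULL,
        Total2 money NULL,
        PRIMARY KEY CLUSTERED (Number, Name)
    );
    INSERT INTO #Results (Number, Name, Total1)
    SELECT number, name, SUM(total)
    FROM table1
    GROUP BY number, name;
    UPDATE r
    SET Total2 = s.total2
    FROM #Results AS r
    JOIN (SELECT number, SUM(total) AS total2
          FROM table2
          GROUP BY number) AS s
      ON s.number = r.Number;
    SELECT Number, Name, Total1, Total2 FROM #Results;
END
The report dataset then just calls EXEC dbo.GetTotalsReport, so Report Builder never sees the table variable at all.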
EDIT
You could try to do everything in one SELECT and send that to Report Builder; this should be the fastest (no temp tables):
select
dt.number, dt.name, dt.total1, s.total2
from (select
number, name, sum(total) AS total1
from table1
group by number, name
) dt
LEFT OUTER JOIN (select
number, sum(total) AS total2
from table2
GROUP BY number --<<OP code didn't have this, but is it needed??
) s ON dt.number=s.number
I've seen this problem as well. It seems SSRS is a bit case-sensitive. If you ensure that your table variable is declared and referenced everywhere with the same letter case, the prompt for a parameter goes away.
You can use table variables in an SSRS dataset query, as in my code below, where I add the needed "empty" records to keep the group footer in a fixed position (the sample uses the pubs database):
DECLARE @NumberOfLines INT
DECLARE @RowsToProcess INT
DECLARE @CurrentRow INT
DECLARE @CurRow INT
DECLARE @cntMax INT
DECLARE @NumberOfRecords INT
DECLARE @SelectedType char(12)
DECLARE @varTable TABLE ([#] int, type char(12), ord int)
DECLARE @table1 TABLE (type char(12), title varchar(80), ord int )
DECLARE @table2 TABLE (type char(12), title varchar(80), ord int )
INSERT INTO @varTable
SELECT count(type) as '#', type, count(type) FROM titles GROUP BY type ORDER BY type
SELECT @cntMax = max([#]) from @varTable
INSERT into @table1 (type, title, ord) SELECT type, N'', 1 FROM titles
INSERT into @table2 (type, title, ord) SELECT type, title, 1 FROM titles
SET @CurrentRow = 0
SET @SelectedType = N''
SET @NumberOfLines = @RowsPerPage -- @RowsPerPage is presumably supplied as a report parameter
SELECT @RowsToProcess = COUNT(*) from @varTable
WHILE @CurrentRow < @RowsToProcess
BEGIN
SET @CurrentRow = @CurrentRow + 1
SELECT TOP 1 @NumberOfRecords = ord, @SelectedType = type
FROM @varTable WHERE type > @SelectedType
SET @CurRow = 0
WHILE @CurRow < (@NumberOfLines - @NumberOfRecords % @NumberOfLines) % @NumberOfLines
BEGIN
SET @CurRow = @CurRow + 1
INSERT into @table2 (type, title, ord)
SELECT type, '' , 2
FROM @varTable WHERE type = @SelectedType
END
END
SELECT type, title FROM @table2 ORDER BY type ASC, ord ASC, title ASC
Why can't you just UNION the two resultsets?
How about using a table valued function rather than a stored proc?
It's possible; just declare your table with '@@'. Example:
DECLARE @@results TABLE (Number int
,Name nvarchar(250)
,Total1 money
,Total2 money
)
insert into @@results (Number, Name, Total1)
select number, name, sum(total)
from table1
group by number, name
update @@results
set total2 = total
from
(select number, sum(total) from table2) s
where s.number = number
select * from @@results
