Error in merge statement in T-SQL - sql-server

I have this stored procedure:
create procedure [dbo].[Sp_AddPermission]
    @id nvarchar(max)
as
declare @words varchar(max), @sql nvarchar(max)
set @words = @id
set @sql = 'merge admin AS target
using (values (''' + replace(replace(@words,';','),('''),'-',''',') + ')) AS source(uname, [add], [edit], [delete], [view], Block)
on target.uname = source.uname
when matched then update set [add] = source.[add], [edit] = source.[edit], [delete] = source.[delete], [view] = source.[view], [Block] = source.[Block];'
exec(@sql);
When executing it, this error is shown:
The MERGE statement attempted to UPDATE or DELETE the same row more
than once. This happens when a target row matches more than one source
row. A MERGE statement cannot UPDATE/DELETE the same row of the target
table multiple times. Refine the ON clause to ensure a target row
matches at most one source row, or use the GROUP BY clause to group
the source rows.
How to resolve this?
Regards
Baiju

The problem is obvious: you are generating a source table with multiple values for the same uname.
Why are you using a MERGE for this? I think a simple UPDATE would do, and I don't think UPDATE will return an error when the source contains multiple rows with the same key:
update t
    set [add] = s.[add],
        [edit] = s.[edit],
        [delete] = s.[delete],
        [view] = s.[view],
        [Block] = s.[Block]
from admin t join
     (values(. . . )) s(uname, [add], [edit], [delete], [view], Block)
     on t.uname = s.uname;
But, you could fix this if you like by choosing an arbitrary row for the update (which is what the above does):
update t
    set [add] = s.[add],
        [edit] = s.[edit],
        [delete] = s.[delete],
        [view] = s.[view],
        [Block] = s.[Block]
from admin t join
     (select s.*,
             row_number() over (partition by uname order by uname) as seqnum
      from (values(. . . )) s(uname, [add], [edit], [delete], [view], Block)
     ) s
     on t.uname = s.uname and s.seqnum = 1;
Of course, this approach can also be used with the merge.
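For completeness, a minimal sketch of how the same ROW_NUMBER() trick can be folded into the MERGE itself, so that each uname matches at most one source row. The column list mirrors the question; the sample VALUES rows are made up purely for illustration:
merge admin AS target
using (
    select uname, [add], [edit], [delete], [view], Block
    from (
        select v.*,
               row_number() over (partition by uname order by uname) as seqnum
        from (values ('user1', 1, 1, 0, 1, 0),
                     ('user1', 0, 0, 0, 1, 0),  -- duplicate uname: only one of these survives
                     ('user2', 1, 0, 0, 1, 1)
             ) v(uname, [add], [edit], [delete], [view], Block)
    ) d
    where d.seqnum = 1
) AS source
on target.uname = source.uname
when matched then
    update set [add] = source.[add], [edit] = source.[edit], [delete] = source.[delete],
               [view] = source.[view], [Block] = source.[Block];
If dropping duplicates silently is not acceptable, change the ORDER BY inside ROW_NUMBER() so that the preferred row wins deterministically.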

Related

Can I use a variable inside cursor declaration?

Is this code valid?
-- Requester (Zadavatel) login ID
DECLARE @ZadavatelLoginId nvarchar(max) =
    (SELECT TOP 1 LoginId
     FROM
         (SELECT Z.LoginId, z.Prijmeni, k.spojeni
          FROM TabCisZam Z
          LEFT JOIN TabKontakty K ON Z.ID = K.IDCisZam
          WHERE druh IN (6, 10)) t1
     LEFT JOIN
         (SELECT ko.Prijmeni, k.spojeni, ko.Cislo
          FROM TabCisKOs KO
          LEFT JOIN TabKontakty K ON K.IDCisKOs = KO.id
          WHERE druh IN (6, 10)) t2 ON t1.spojeni = t2.spojeni
                                   AND t1.Prijmeni = t2.Prijmeni
     WHERE
         t2.Cislo = (SELECT CisloKontOsoba
                     FROM TabKontaktJednani
                     WHERE id = @IdKJ))

-- If the resolution team is empty
IF NOT EXISTS (SELECT * FROM TabKJUcastZam WHERE IDKJ = @IdKJ)
BEGIN
    DECLARE ac_loginy CURSOR FAST_FORWARD LOCAL FOR
        -- Requester
        SELECT @ZadavatelLoginId
END
ELSE BEGIN
ELSE BEGIN
I am trying to pass the variable @ZadavatelLoginId into the cursor declaration, and SSMS keeps telling me there is a problem with the code even though it works.
Msg 116, Level 16, State 1, Procedure et_TabKontaktJednani_ANAFRA_Tis_Notifikace, Line 575 [Batch Start Line 7]
Only one expression can be specified in the select list when the subquery is not introduced with EXISTS
Can anyone help?
I do not see anything in your posted query that could trigger the specific message that you listed. You might get an error if the subquery (SELECT CisloKontOsoba FROM TabKontaktJednani WHERE id = @IdKJ) returned more than one value, but that error would be a very specific "Subquery returned more than 1 value...".
However, as written, your cursor query is a single select of a scalar, which would never yield anything other than a single row.
If you need to iterate over multiple user IDs, but wish to separate your selection query from your cursor definition, what you likely need is a table variable that can hold multiple user IDs instead of a scalar variable.
Something like:
DECLARE @ZadavatelLoginIds TABLE (LoginId nvarchar(max))

INSERT @ZadavatelLoginIds
SELECT t1.LoginId
FROM ...

DECLARE ac_loginy CURSOR FAST_FORWARD LOCAL FOR
    SELECT LoginId
    FROM @ZadavatelLoginIds

OPEN ac_loginy

DECLARE @LoginId nvarchar(max)
FETCH NEXT FROM ac_loginy INTO @LoginId
WHILE @@FETCH_STATUS = 0
BEGIN
    ... Send email to @LoginId ...
    FETCH NEXT FROM ac_loginy INTO @LoginId
END

CLOSE ac_loginy
DEALLOCATE ac_loginy
A #Temp table can also be used in place of the table variable with the same results, but the table variable is often more convenient to use.
As others have mentioned, I believe that your login selection query is overly complex. Although this was not the focus of your question, I still suggest that you attempt to simplify it.
An alternative might be something like:
SELECT Z.LoginId
FROM TabKontaktJednani KJ
JOIN TabCisKOs KO ON KO.Cislo = KJ.CisloKontOsoba
JOIN TabCisZam Z ON Z.Prijmeni = KO.Prijmeni
JOIN TabKontakty K ON K.IDCisZam = Z.ID
WHERE KJ.id = @IdKJ
AND K.druh IN (6,10)
The above is my attempt to rewrite your posted query after tracing the relationships. I did not see any LEFT JOINs whose outer behaviour was not already cancelled out by other conditions forcing them to act as inner joins, so the above uses inner joins throughout. I have assumed that the druh column is in the TabKontakty table; otherwise I see no need for that table. I do not guarantee that my re-interpretation is correct, though.
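If that simplified query returns what you need, it can also feed the cursor (or the table variable above) directly; a minimal sketch, assuming the same tables and the @IdKJ variable from your snippet:
DECLARE ac_loginy CURSOR FAST_FORWARD LOCAL FOR
    SELECT Z.LoginId
    FROM TabKontaktJednani KJ
    JOIN TabCisKOs KO ON KO.Cislo = KJ.CisloKontOsoba
    JOIN TabCisZam Z ON Z.Prijmeni = KO.Prijmeni
    JOIN TabKontakty K ON K.IDCisZam = Z.ID
    WHERE KJ.id = @IdKJ
      AND K.druh IN (6, 10)
The OPEN/FETCH loop is then the same as in the table variable example above.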
How about creating a #temp table for each subquery, since the problem seems to be coming from the subqueries?
CREATE TABLE #TEMP1
(
    LoginID nvarchar(max)
)

CREATE TABLE #TEMP2
(
    Prijmeni nvarchar(max),
    spojeni nvarchar(max),
    Cislo nvarchar(max)
)
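The answer above stops at the table definitions, so here is a minimal sketch of how the two subqueries from the question could populate them and then be joined. The column types are assumptions, and #TEMP1 needs the join columns as well, which is an addition to the definition shown above:
-- #TEMP1 as defined above only holds LoginID; the final join needs two more columns
ALTER TABLE #TEMP1 ADD Prijmeni nvarchar(max), spojeni nvarchar(max)

INSERT INTO #TEMP1 (LoginID, Prijmeni, spojeni)
SELECT Z.LoginId, Z.Prijmeni, K.spojeni
FROM TabCisZam Z
LEFT JOIN TabKontakty K ON Z.ID = K.IDCisZam
WHERE druh IN (6, 10)

INSERT INTO #TEMP2 (Prijmeni, spojeni, Cislo)
SELECT KO.Prijmeni, K.spojeni, KO.Cislo
FROM TabCisKOs KO
LEFT JOIN TabKontakty K ON K.IDCisKOs = KO.id
WHERE druh IN (6, 10)

DECLARE @ZadavatelLoginId nvarchar(max) =
    (SELECT TOP 1 t1.LoginID
     FROM #TEMP1 t1
     LEFT JOIN #TEMP2 t2 ON t1.spojeni = t2.spojeni AND t1.Prijmeni = t2.Prijmeni
     WHERE t2.Cislo = (SELECT CisloKontOsoba FROM TabKontaktJednani WHERE id = @IdKJ))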

select query on a table is hanging when a stored procedure to update the table is run

I have a simple stored procedure to update a table, shown below.
The procedure updates the table properly, but when I execute a SELECT query against the po_tran table, the query hangs.
Is there any mistake in the stored procedure?
alter procedure po_tran_upd @locid char(3)
as
SET NOCOUNT ON;
begin
    update t
    set t.lastndaysale = (select isnull(sum(qty) * -1, 0)
                          from exp_tran
                          where exp_tran.loc_id = h.loc_id and
                                item_code = t.item_code and
                                exp_tran.doc_date > dateadd(dd, -30, getdate()) and
                                exp_tran.doc_type in ('PI', 'IN', 'SR')),
        t.stk_qty = (select isnull(sum(qty), 0)
                     from exp_tran
                     where exp_tran.loc_id = h.loc_id and
                           item_code = t.item_code)
    from po_tran t, po_hd h
    where t.entry_no = h.entry_no and
          h.loc_id = @locid and
          h.entry_date > getdate() - 35
end
;
Try the following possible ways to optimize your procedure.
Read this article, where I have explained a similar example using a CURSOR; there I also update a field of the table using the CURSOR.
Important: remove the subqueries. I can see you have used subqueries to update the fields.
You can use a JOIN, or save the result of your query into variables and use those variables in the update.
E.g.
DECLARE @lastndaysale AS FLOAT
DECLARE @stk_qty AS INT
DECLARE @item_code varchar(50)   -- illustrative: these sums have to be computed per location/item

SELECT @lastndaysale = isnull(sum(qty) * -1, 0) FROM exp_tran WHERE exp_tran.loc_id = @locid
    AND item_code = @item_code AND exp_tran.doc_date > dateadd(dd, -30, getdate()) AND exp_tran.doc_type in ('PI', 'IN', 'SR')

SELECT @stk_qty = isnull(sum(qty), 0) FROM exp_tran WHERE exp_tran.loc_id = @locid AND item_code = @item_code

UPDATE t SET t.lastndaysale = @lastndaysale, t.stk_qty = @stk_qty
FROM po_tran t JOIN po_hd h ON t.entry_no = h.entry_no
WHERE t.item_code = @item_code AND h.loc_id = @locid AND h.entry_date > getdate() - 35
This is just a sample; you will need to adapt it to your actual requirements.
I added a possibly more performant update; however, I do not fully understand your question. If "any" query is running slow against po_tran, then I suggest you examine the indexing on that table and ensure it has a proper clustered index. If "this" query is running slow, then I suggest you look into "covering indexes". The two fields entry_no and item_code seem like good candidates to include in a covering index.
update t
set t.lastndaysale = e.lastndaysale * -1,
    t.stk_qty = e.stk_qty
from
    po_tran t
    INNER JOIN po_hd h ON h.entry_no = t.entry_no AND h.entry_date > getdate() - 35
    -- pre-aggregate exp_tran once per (loc_id, item_code), then join the totals in
    INNER JOIN (
        select loc_id, item_code,
               sum(CASE WHEN doc_date > dateadd(dd, -30, getdate())
                         AND doc_type in ('PI', 'IN', 'SR') THEN qty ELSE 0 END) as lastndaysale,
               sum(qty) as stk_qty
        from exp_tran
        group by loc_id, item_code
    ) e ON e.loc_id = h.loc_id AND e.item_code = t.item_code
where
    h.loc_id = @locid
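As a rough illustration of the covering-index suggestion above (the index names and exact column choices are assumptions based on the posted query, not tested recommendations):
-- supports the po_hd filter on location and date, covering the join column
CREATE NONCLUSTERED INDEX IX_po_hd_locid_entrydate
    ON po_hd (loc_id, entry_date) INCLUDE (entry_no);

-- supports the per-location/item lookups against exp_tran
CREATE NONCLUSTERED INDEX IX_exp_tran_locid_itemcode
    ON exp_tran (loc_id, item_code) INCLUDE (qty, doc_date, doc_type);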

MS-SQL getting case specific values

We have a table LogicalTableSharing with columns LogicalCompany, TableCode and PhysicalCompany.
For a specific requirement, we need to take PhysicalCompany of each TableCode into a variable.
We tried a case-based query as follows:
declare @tablecode varchar(50)
declare @inputcompany varchar(50)
declare @query nvarchar(2500)

set @inputcompany = '91'

set @query = '
select ''' + @inputcompany + ''' AS inputcompany,
CASE WHEN lts.TableCode = ''tsctm005'' THEN lts.PhysicalCompany ELSE NULL END as tsctm005_company,
CASE WHEN lts.TableCode = ''tccom000'' THEN lts.PhysicalCompany ELSE NULL END as tccom000_company
from LogicalTableSharing lts
where lts.LogicalCompany = ''' + @inputcompany + '''
'

EXEC sp_executesql @query
which obviously gives the result as separate rows, one per matching TableCode, with NULL in the other company column.
The desired output is a single row with both tsctm005_company and tccom000_company populated.
What is the right approach?
Try subqueries in a FROM-less SELECT. For performance you want an index on (logicalcompany, tablecode) (or the other way round depending on which is more selective).
SELECT @inputcompany AS inputcompany,
       (SELECT TOP 1 physicalcompany
        FROM LogicalTableSharing
        WHERE logicalcompany = @inputcompany
          AND tablecode = 'tsctm005'
        ORDER BY <some criteria>) AS tsctm005_company,
       (SELECT TOP 1 physicalcompany
        FROM LogicalTableSharing
        WHERE logicalcompany = @inputcompany
          AND tablecode = 'tccom000'
        ORDER BY <some criteria>) AS tccom000_company;
You should find <some criteria> to order by, in case there are multiple matching rows, to decide which one takes precedence. Otherwise you will get an arbitrary row, possibly a different one each time you run the query.
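A minimal sketch of the index mentioned above (the index name and the INCLUDE column are my assumptions):
CREATE NONCLUSTERED INDEX IX_LogicalTableSharing_LogicalCompany_TableCode
    ON LogicalTableSharing (LogicalCompany, TableCode)
    INCLUDE (PhysicalCompany);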

Looping table valued parameter values passed as parameter to stored procedure for checking a value

I have a stored procedure that receives a TVP as input. Now, I need to check the received data for a particular ID in a primary key column. If it exists, then I just need to update the table using the new values for the other columns (sent via the TVP). Otherwise, do an insert.
How to do it?
CREATE PROCEDURE ABC
    @tvp MyTable READONLY
AS
IF EXISTS (SELECT 1 FROM MYTAB WHERE ID = @tvp.ID)
    DO update
ELSE
    Create
Just wondering whether the IF EXISTS check I did is correct. I reckon it's wrong, as it will only check the first value and then update. What about the other values? How should I loop through them?
Looping/CURSOR is the weapon of last resort; always look for a solution that is set-based, not row-based.
You should use MERGE which is designed for this type of operation:
MERGE table_name AS TGT
USING (SELECT * FROM @tvp) AS SRC
    ON TGT.id = SRC.id
WHEN MATCHED THEN
    UPDATE SET col_name1 = SRC.col_name1
WHEN NOT MATCHED THEN
    INSERT (col_name1, col_name2, ...)
    VALUES (SRC.col_name1, SRC.col_name2, ...);
If you don't like MERGE use INSERT/UPDATE:
UPDATE tab
SET col = tv.col,
    ...
FROM table_name AS tab
JOIN @tvp AS tv
    ON tab.id = tv.id

INSERT INTO table_name (col1, col2, ...)
SELECT tv.col1, tv.col2, ...
FROM table_name AS tab
RIGHT JOIN @tvp AS tv
    ON tab.id = tv.id
WHERE tab.id IS NULL
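To make the shape of this concrete, here is a self-contained sketch; the MyTable type and MYTAB table names come from the question, but the single Name column is an assumption, so it only shows where the TVP plugs in, not your actual schema:
-- hypothetical table type and target table
CREATE TYPE MyTable AS TABLE (ID int PRIMARY KEY, Name nvarchar(100));
GO
CREATE TABLE MYTAB (ID int PRIMARY KEY, Name nvarchar(100));
GO
CREATE PROCEDURE ABC
    @tvp MyTable READONLY
AS
BEGIN
    SET NOCOUNT ON;

    MERGE MYTAB AS TGT
    USING @tvp AS SRC
        ON TGT.ID = SRC.ID
    WHEN MATCHED THEN
        UPDATE SET Name = SRC.Name
    WHEN NOT MATCHED THEN
        INSERT (ID, Name)
        VALUES (SRC.ID, SRC.Name);
END
GO

-- usage: every row in the TVP is either updated or inserted in one statement
DECLARE @rows MyTable;
INSERT INTO @rows (ID, Name) VALUES (1, N'first'), (2, N'second');
EXEC ABC @tvp = @rows;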

tsql bulk update

MyTableA has several million records. On regular occasions every row in MyTableA needs to be updated with values from TheirTableA.
Unfortunately I have no control over TheirTableA and there is no field to indicate if anything in TheirTableA has changed so I either just update everything or I update based on comparing every field which could be different (not really feasible as this is a long and wide table).
Unfortunately the transaction log balloons when doing a straight update, so I wanted to chunk it using UPDATE TOP. However, as I understand it, I need some field to determine whether the records in MyTableA have been updated yet, otherwise I'll end up in an infinite loop:
declare @again as bit;
set @again = 1;

while @again = 1
begin
    update top (10000) my
    set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    from MyTableA my
    join TheirTableA their on my.Id = their.Id

    if @@ROWCOUNT > 0
        set @again = 1
    else
        set @again = 0
end
Is the only way this will work to add in a
where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3
clause? That seems like it will be horribly inefficient with many columns to compare.
I'm sure I'm missing an obvious alternative?
Assuming both tables have the same structure, you can get a resultset of the rows that differ using
SELECT * INTO #different_rows FROM MyTableA EXCEPT SELECT * FROM TheirTableA
and then update from that using whatever key fields are available.
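A minimal sketch of the whole round trip, assuming Id is the key and A1..A3 are the columns from the question:
-- rows in MyTableA that no longer have an identical row in TheirTableA
SELECT * INTO #different_rows
FROM MyTableA
EXCEPT
SELECT * FROM TheirTableA;

-- update only those rows, keyed on Id
UPDATE my
SET my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
FROM MyTableA my
JOIN #different_rows d ON d.Id = my.Id
JOIN TheirTableA their ON their.Id = my.Id;

DROP TABLE #different_rows;
Only the rows that actually changed get updated, and this can still be combined with the TOP (10000) batching from the question if the changed set is large.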
Well, the first and simplest solution would obviously be to change the schema to include a last-updated timestamp, if you can, and then only update the rows with a timestamp newer than your last change.
But if that is not possible, another way to go could be to use the HASHBYTES function, perhaps by concatenating the fields into an XML value that you then compare. The caveat here is an 8 KB limit (https://connect.microsoft.com/SQLServer/feedback/details/273429/hashbytes-function-should-support-large-data-types). EDIT: Once again, I have stolen code, this time from:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2009/10/21/detecting-changed-rows-in-a-trigger-using-hashbytes-and-without-eventdata-and-or-s.aspx
His example is:
select batch_id
from (
select distinct batch_id, hash_combined = hashbytes( 'sha1', combined )
from ( select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from deleted c -- need old values
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
union all
select batch_id,
combined =( select batch_id, batch_name, some_parm, some_parm2
from some_base_table c -- need current values (could use inserted here)
where c.batch_id = d.batch_id
for xml path( '' ) )
from deleted d
) as r
) as c
group by batch_id
having count(*) > 1
A last resort (and my original suggestion) is to try BINARY_CHECKSUM. As noted in the comments, this does carry the risk of a rather high collision rate.
http://msdn.microsoft.com/en-us/library/ms173784.aspx
I have stolen the following example from lessthandot.com - link to the full SQL (and other cool functions) is below.
--Data Mismatch
SELECT 'Data Mismatch', t1.au_id
FROM( SELECT BINARY_CHECKSUM(*) AS CheckSum1 ,au_id FROM pubs..authors) t1
JOIN(SELECT BINARY_CHECKSUM(*) AS CheckSum2,au_id FROM tempdb..authors2) t2 ON t1.au_id =t2.au_id
WHERE CheckSum1 <> CheckSum2
Example taken from http://wiki.lessthandot.com/index.php/Ten_SQL_Server_Functions_That_You_Have_Ignored_Until_Now
I don't know if this is better than adding where my.A1 <> their.A1 and my.A2 <> their.A2 and my.A3 <> their.A3, but I would definitely give it a try (assuming SQL Server 2005+):
declare @again as bit;
set @again = 1;

declare @idlist table (Id int);

while @again = 1
begin
    update top (10000) my
    set my.A1 = their.A1, my.A2 = their.A2, my.A3 = their.A3
    output inserted.Id into @idlist (Id)
    from MyTableA my
    join TheirTableA their on my.Id = their.Id
    left join @idlist i on my.Id = i.Id
    where i.Id is null
    /* alternatively (instead of left join + where):
    where not exists (select * from @idlist where Id = my.Id) */

    if @@ROWCOUNT > 0
        set @again = 1
    else
        set @again = 0
end
That is, declare a table variable for collecting the IDs of the rows being updated and use that table for looking up (and omitting) IDs that have already been updated.
A slight variation on the method would be to use a local temporary table instead of a table variable. That way you would be able to create an index on the ID lookup table, which might result in better performance.
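A minimal sketch of that variation; the loop body stays the same apart from the OUTPUT target and the anti-join:
-- same ID-tracking pattern, but with an indexable #temp table
CREATE TABLE #idlist (Id int NOT NULL PRIMARY KEY CLUSTERED);

-- inside the loop:
--     output inserted.Id into #idlist (Id)
--     ...
--     left join #idlist i on my.Id = i.Id
--     where i.Id is null

DROP TABLE #idlist;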
If a schema change is not possible, how about using a trigger to save off the IDs that have changed, and only importing/exporting those rows?
Or use the trigger to export them immediately.
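A minimal sketch of such a change-capture trigger, assuming you are actually allowed to create one on TheirTableA and using a hypothetical ChangedIds tracking table:
CREATE TABLE ChangedIds (Id int NOT NULL PRIMARY KEY);
GO
CREATE TRIGGER trg_TheirTableA_TrackChanges
ON TheirTableA
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- remember which rows changed, skipping IDs already recorded
    INSERT INTO ChangedIds (Id)
    SELECT i.Id
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM ChangedIds c WHERE c.Id = i.Id);
END
GO
The periodic sync then only needs to update the MyTableA rows whose IDs appear in ChangedIds, and clear the tracking table afterwards.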
