I'm new to SQL Server and stored procedures and could do with a couple of pointers regarding transaction handling on a bug I've inherited.
I have two stored procedures: the first inserts a record passed into it, then calls a second one whose first step is to read back what was just inserted.
But sometimes it completes successfully without processing the data. My suspicion is that the selects happen before the insert has 'hit' the table, so they retrieve no records, and the stored procedure doesn't handle that.
I don't have time to re-engineer just yet, but the transaction handling looks suspect. Below is a rough outline of what the stored procedures do.
create procedure sp1
    (@id int, @pbody varchar(max))  -- parameter types assumed; the original outline omits them
as
begin
    begin try
        set nocount on;

        insert into tbl1 (id, tbody)
        values (@id, @pbody)

        exec sp2 @id
    end try
    begin catch
        execute sperror
    end catch
end
go
create procedure sp2 (@id int)  -- parameter type assumed
as
begin
    begin try
        set nocount on;

        declare @vbody varchar(max)

        select @vbody = tbody -- I don't believe this step always retrieves the row inserted by sp1
        from tbl1 with (nolock)
        where id = @id

        create table #tmp1 (id int, msg varchar(max))  -- column types assumed

        insert into #tmp1
        select id, msg
        from openjson........

        while exists(select top 1 * from #tmp1) -- this looks similar to above, not sure the insert has finished before the read
        begin
            ** do some stuff **
        end
    end try
    begin catch
        execute sperror
    end catch
end
go
sp2 is using the WITH (NOLOCK) query hint, which can have unintended side-effects. Missing rows is just one of them.
Using NOLOCK? Here's How You'll Get the Wrong Query Results. - Brent Ozar Unlimited®
I'd strongly recommend removing that hint unless you really understand what it does and have a very good reason for using it.
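If removing the hint is an option, here is a minimal sketch of what sp2's read could look like without it, plus a defensive check so the procedure fails loudly instead of completing "successfully" with no data (table and column names taken from the outline above):

declare @vbody varchar(max)

select @vbody = tbody
from tbl1           -- no NOLOCK hint: read under normal locking
where id = @id

if @vbody is null
begin
    -- the row sp1 just inserted should be visible on the same connection;
    -- if it isn't, stop here rather than silently skipping the processing
    raiserror('No row found in tbl1 for the supplied id', 16, 1)
    return
end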
Why is there no RECOMPILE option for triggers?
Suddenly the performance of one of our procedures (multiple SELECTs, multiple tables, an insert into a table) went from returning data in around 1 second to 10-30 seconds.
After adding various debugging and logging we noticed that the performance would go from the slow 10-30 seconds back to sub-second speeds (after running ALTER TRIGGER on one of the tables).
Just to clarify, the sequence of events:
Slow performance of the insert
ALTER TRIGGER on the table
Fast performance of the insert
I think the slow performance is associated with a bad cached plan. Before the INSERT command in the procedure I print the datetime, and at the beginning of the trigger I added another PRINT of the datetime. When the procedure is called before the ALTER TRIGGER, the time difference between the first print and the second print is about 20 seconds, but after the ALTER TRIGGER it is back to sub-second speeds. It should be noted that the commands in the trigger are not complicated.
So I need to add something like a procedure's RECOMPILE option to the trigger.
Here is a sample of the trigger script:
create trigger t_test on tbl after insert
as
begin
    begin try
        declare @yearid int,
                @id int

        select @id = id, @yearid = yearid
        from inserted

        if exists(select * from FinancialYear where id = @yearid and flag = 0)
        begin
            raiserror('year not correct', 16, 1)
        end

        declare @PublicNo bigint = (select isnull(max(PublicNo), 0) + 1 from tbl)

        update tbl
        set PublicNo = @PublicNo
        where id = @id

        insert into tbl2
        values (...)
    end try
    begin catch
        print error_message()
    end catch
end
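Triggers don't accept a WITH RECOMPILE clause the way procedures do, but two standard alternatives do exist: sp_recompile against the table (which marks the triggers and plans that reference it for recompilation on next use), or a statement-level OPTION (RECOMPILE) hint on the statement inside the trigger that is getting the bad plan. A minimal sketch, using the names from the question (which statement is the culprit is an assumption):

-- Option 1: force everything referencing tbl (including t_test) to recompile on its next run
exec sp_recompile N'dbo.tbl';

-- Option 2: hint just the suspect statement inside the trigger body
update tbl
set PublicNo = @PublicNo
where id = @id
option (recompile);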
Would there be any benefit in not using SCOPE_IDENTITY() and switching to @@IDENTITY? The area I'm talking about is part of an install script that sets up a database for our customers. It inserts a record into one table, takes the identity key from that table, and inserts it as a foreign key into another table. We do this twice.
We seem to have a rare condition in which, the second time this happens, we insert the id from the first insert into the second table for both passes, causing issues with the data. There is a chance that something else altogether is causing this, but my lead seems to have zeroed in on SCOPE_IDENTITY() as possibly being the culprit.
Declare @TheId int

Insert into dbo.TableName (Name) Values ('xxxx')
Select @TheId = SCOPE_IDENTITY()

-- some code here that uses @TheId
-- ...

Insert into dbo.TableName (Name) Values ('yyyy')
Select @TheId = SCOPE_IDENTITY()

-- some code here that uses @TheId
-- at this point, we may have the condition that SCOPE_IDENTITY() still has the value before that 2nd insert...
The only way SCOPE_IDENTITY() could have the prior id value in this context is if the INSERT statement does not create any rows. In that situation, @@IDENTITY isn't going to fix anything. In fact, @@IDENTITY is less specific, and therefore could only hope to make things worse.
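To make "less specific" concrete: @@IDENTITY returns the last identity value generated anywhere in the session, including inside triggers, while SCOPE_IDENTITY() only reports the current scope. A small sketch, assuming a hypothetical AFTER INSERT audit trigger on dbo.TableName that writes to an audit table with its own identity column:

Insert into dbo.TableName (Name) Values ('xxxx');

Select SCOPE_IDENTITY();  -- identity generated by this insert, in this scope
Select @@IDENTITY;        -- identity generated by the audit insert inside the trigger:
                          -- the wrong value for the purpose described in the question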
What you can do is use a different variable for the second insert. Or, you could set @TheId back to NULL before the second insert runs. That way, you'll be able to tell if something went wrong. @@ROWCOUNT is also useful for this.
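A minimal sketch of the reset-and-check approach, using the same table and variable names as the snippet above:

Select @TheId = NULL   -- reset so a stale value can't be mistaken for a new one

Insert into dbo.TableName (Name) Values ('yyyy')

If @@ROWCOUNT = 1      -- @@ROWCOUNT must be read immediately after the INSERT
    Select @TheId = SCOPE_IDENTITY()

-- if @TheId is still NULL here, the second insert created no row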
I did see this in the comments:
"The second insert did not fail as the record was found in the database."
I put it to you perhaps the record was already in the database, before the code ran. Moreover, if there is a constraint on the table this could be the reason why the insert fails.
Within the scope of the proc or script, the @TheId created by the first insert is not the same object as the @TheId created by the second insert. While it's possible to reuse variables, it's not a good practice IMO when it comes to multiple DML statements within a code block. In this script I add TRY/CATCH and SET XACT_ABORT ON to ensure a complete rollback of all DML statements within the block.
Something like this
set nocount on;
set xact_abort on;

begin transaction
begin try
    Insert into dbo.TableName (Name) Values ('xxxx');
    if @@rowcount = 1
    begin
        Declare @Id1 int = SCOPE_IDENTITY();
        -- some code here that uses @Id1
        -- ...
    end
    else
        throw 50000, 'The first insert failed', 1;

    Insert into dbo.TableName (Name) Values ('yyyy');
    if @@rowcount = 1
    begin
        Declare @Id2 int = SCOPE_IDENTITY();
        -- some code here that uses @Id2
        -- ...
    end
    else
        throw 50000, 'The second insert failed', 1;

    commit transaction
end try
begin catch
    /* put error handling here */
    rollback transaction
end catch
Thanks everyone for the help. We will likely go with creating a new variable for the 2nd insert.
This is my code. If an error happens in any query, how can I automatically roll back everything that has already been stored?
insert into muser(UserKey, Email, UserPassword)
values(@Key, @Useremail, 'test')

set @UserId = SCOPE_IDENTITY()
set @Key = NEWID()

insert into mUserProfile(UserProfileKey, UserId, UserEmail)
values(@Key, @UserId, @Useremail)

exec SP_Store @Useremail, @ClientId
Your question is not very clear but I think you want something along these lines.
begin transaction
begin try
    insert into muser(UserKey, Email, UserPassword)
    values(@Key, @Useremail, 'test')

    set @UserId = SCOPE_IDENTITY()
    set @Key = NEWID()

    insert into mUserProfile(UserProfileKey, UserId, UserEmail)
    values(@Key, @UserId, @Useremail)

    exec SP_Store @Useremail, @ClientId --You should avoid the SP_ prefix.

    commit transaction
end try
begin catch
    --Report the error here, do NOT silently roll back your transaction
    rollback transaction
end catch
This should work, with one caveat: if you have a transaction inside SP_Store, this is not going to work correctly, because you can't nest transactions in SQL Server.
Also, you should really avoid the SP_ prefix, or even better avoid prefixes entirely. http://sqlperformance.com/2012/10/t-sql-queries/sp_prefix
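One way to make the catch block a little more robust (a sketch, not part of the original answer): capture the error, only roll back if a transaction is actually open, and re-raise so the caller sees the failure instead of a silent rollback.

begin catch
    declare @msg nvarchar(2048) = error_message();

    if @@trancount > 0
        rollback transaction;

    raiserror(@msg, 16, 1);  -- or simply THROW; on SQL Server 2012 and later
end catch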
I'm using the following query.
select * from OPENQUERY(EXITWEB, N'SET NOCOUNT ON;
declare @result table (id int);

insert into [system_files] ([is_public], [file_name], [file_size], [content_type], [disk_name], [updated_at], [created_at])
output inserted.id into @result(id)
values (N''1'', N''7349.jpg'', N''146921'', N''image/jpeg'', N''5799dcc8a1eb1413195192.jpg'', N''2016-07-28 10:22:00.000'', N''2016-07-28 10:22:00.000'')

declare @id int = (select top 1 id from @result)

select * from system_files where id = @id

insert into linkToExternal (id, id_ext) values(@id, 47)
--select @id
')
When I perform a select from within the query, it works just fine.
But when I check my database after the call has finished, the record is no longer there.
So I suspect a transaction is being rolled back. My question is: why? And what can I do to prevent the transaction from being rolled back, if that's the case?
Well, as always, after days of struggling and posting a question on Stack Overflow, I find the solution: http://www.sqlservercentral.com/Forums/Topic1128997-391-1.aspx#bm1288825
I was having the same problem as you and almost gave up on it, but have finally found an answer to the problem. Reading an article about sharing data between stored procedures, I discovered that OPENQUERY issues an implicit transaction and that it was rolling back my insert. So I had to add an explicit COMMIT to my stored procedures. In addition, I discovered that if I use it in a query that has a UNION, it has to be committed twice. Since I'm doing my insert inside a BEGIN TRY, I can always just commit twice and not worry about whether it is being used in a UNION. I'm returning different values if there is an error, but that was just a part of my debugging.
SELECT TOP 5 *
FROM mm
JOIN OPENQUERY([LOCALSERVER], 'EXEC cms60.dbo.sp_RecordReportLastRun ''LPS'', ''Test''') RptStats ON 1=1
ALTER PROCEDURE [dbo].[sp_RecordReportLastRun]
    -- Add the parameters for the stored procedure here
    @LibraryName varchar(50),
    @ReportName varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    BEGIN TRY
        INSERT INTO cms60.dbo.ReportStatistics (LibraryName, ReportName, RunDate) VALUES (@LibraryName, @ReportName, GETDATE())
        --
        COMMIT; --Needed because OPENQUERY starts an implicit transaction but doesn't commit it.
        COMMIT; --A second COMMIT is needed when used in a UNION; when not used in a UNION it throws an error, but doesn't cause a problem.
    END TRY
    BEGIN CATCH
        SELECT 2 Test
    END CATCH

    SELECT 1 Test
END
In my case, adding a ;COMMIT; after the inserts solved it, and made sure it got written into the database.
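Applied to the OPENQUERY snippet from the question, the workaround would look roughly like this (a sketch following the quoted post; whether one or two COMMITs are needed depends on how the query is used):

select * from OPENQUERY(EXITWEB, N'SET NOCOUNT ON;
-- ... same declare / insert / output statements as in the question ...

insert into linkToExternal (id, id_ext) values(@id, 47)

commit;          -- commit the implicit transaction OPENQUERY opened, so the inserts persist
select @id as id -- OPENQUERY still needs a result set to return
')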
On our SQL Server (Version 10.0.1600), I have a stored procedure that I wrote.
It is not throwing any errors, and it is returning the correct values after making the insert in the database.
However, the last command spSendEventNotificationEmail (which sends out email notifications) is not being run.
I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works.
Is there something wrong with how I call it in my stored procedure?
CREATE PROCEDURE [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50)) AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    DECLARE @id int
    SET @id = -1

    -- Insert statements for procedure here
    SELECT A.ID, PacketID, StatusID
    INTO #act FROM Action A JOIN Request R ON (R.ID = A.RequestID)
    WHERE (PacketID = @packetID) AND (StatusID = @statusID)

    IF ((SELECT COUNT(ID) FROM #act) = 0) BEGIN -- this statusID has not been entered. Continue
        SELECT ID, MTF
        INTO #req FROM Request
        WHERE PacketID = @packetID

        WHILE (0 < (SELECT COUNT(ID) FROM #req)) BEGIN
            SELECT TOP 1 @id = ID FROM #req

            INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
            VALUES (@id, @statusID, @empID, GETDATE())

            IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf)))) BEGIN
                UPDATE Request SET MTF = @mtf WHERE ID = @id
            END

            DELETE #req WHERE ID = @id
        END
        DROP TABLE #req

        SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action

        SELECT TOP 1 @statusID = ID FROM Status
        WHERE (@statusID < ID) AND (-1 < Sequence)

        EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
    END ELSE BEGIN
        SET @statusID = -1
    END

    DROP TABLE #act
END
Idea of how the data tables are connected:
From your comments I gather you do mainly C# development. A basic test is to make sure the sproc is called with the exact same arguments you expect:
PRINT '@packetID: ' + CAST(@packetID AS varchar(20))   -- CAST needed because the parameters are int
PRINT '@statusID: ' + CAST(@statusID AS varchar(20))

EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
This way you know 1. that the EXEC statement is reached, and 2. the exact values passed to it.
If this all works, then a very good candidate is that you have permission to run the sproc and your (C#?) code that calls it doesn't. I would expect an error to be thrown, though.
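If missing EXECUTE permission does turn out to be the problem, the usual fix is a grant to whatever principal the application connects as (the [AppUser] name below is just a placeholder):

-- Hypothetical: [AppUser] stands for the login/role the C# application uses.
GRANT EXECUTE ON dbo.spUpdateRequest TO [AppUser];
GRANT EXECUTE ON dbo.spSendEventNotificationEmail TO [AppUser];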
A quick test to see if the EXEC is executed fine is to do an insert in a dummy table after it.
Update 1
I suggested adding PRINT statements, but indeed, as you say, you cannot (easily) capture them from C#. What you could do instead is insert the two variables into a log table that you create for this purpose. That way you know the exact values that flow in from the C# execution.
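A minimal sketch of that logging idea (the DebugLog table name is made up; any scratch table would do):

-- One-time setup: a throwaway debug table.
CREATE TABLE dbo.DebugLog (LoggedAt datetime DEFAULT GETDATE(), packetID int, statusID int);

-- Inside spUpdateRequest, just before the EXEC:
INSERT INTO dbo.DebugLog (packetID, statusID) VALUES (@packetID, @statusID);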
As to why it now works once you add permissions, I can't give you a ready answer; SQL Server security is not fully transparent to me either. But it's good to research a bit further yourself. Do you have to add both guest and public?
It would also help to see what's going on inside spSendEventNotificationEmail. Chances are good that sproc is using a resource it didn't have permission on before. This could be an object like a table or maybe another sproc. Security is heavily dependent on context/settings and not an easy problem to tackle on a Q&A site like SO.