SQL Server SCOPE_IDENTITY() vs @@IDENTITY

Would there be any benefit to not using SCOPE_IDENTITY() and switching to @@IDENTITY? The area I'm talking about is part of an install script that sets up a database for our customers. It inserts a record into one table, then uses the identity key from that table as a foreign key when inserting into another. We do this twice.
We seem to have a rare condition in which, the second time this happens, we insert the id from the first insert into the second table for both passes, causing issues with the data. There is a chance that something else altogether is causing this, but my lead seems to have zeroed in on SCOPE_IDENTITY() as the possible culprit.
Declare @TheId int
Insert into dbo.TableName (Name) Values ('xxxx')
Select @TheId = SCOPE_IDENTITY()
-- some code here that uses @TheId
-- ...
Insert into dbo.TableName (Name) Values ('yyyy')
Select @TheId = SCOPE_IDENTITY()
-- some code here that uses @TheId
-- at this point, we may have the condition that SCOPE_IDENTITY() still has the value from before that 2nd insert...

The only way SCOPE_IDENTITY() could still hold the prior id value in this context is if the INSERT statement does not create any rows. In that situation, @@IDENTITY isn't going to fix anything. In fact, @@IDENTITY is less specific: it returns the last identity value generated in the session in any scope, including inside triggers, so it could only make things worse.
What you can do is use a different variable for the second insert. Or, you could set @TheId back to NULL before the second insert runs; that way you'll be able to tell if something went wrong. @@ROWCOUNT is also useful for this, as sketched below.
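To illustrate why @@IDENTITY is riskier, here is a minimal sketch. The audit table and trigger are hypothetical, not from the original post; the point is that any trigger writing to its own identity table changes what @@IDENTITY returns, while SCOPE_IDENTITY() is unaffected:
-- Hypothetical audit table and trigger, for illustration only:
Create table dbo.AuditLog (AuditId int identity(1,1), Note varchar(50));
go
Create trigger trg_TableName_Audit on dbo.TableName after insert as
    Insert into dbo.AuditLog (Note) Values ('row added');
go
Insert into dbo.TableName (Name) Values ('zzzz');
Select SCOPE_IDENTITY();  -- id generated in *this* scope: the dbo.TableName row
Select @@IDENTITY;        -- last id in the session, *any* scope: the dbo.AuditLog row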
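A minimal sketch of that defensive version of the script above (the error messages are illustrative):
Declare @TheId int, @Rows int;

Insert into dbo.TableName (Name) Values ('xxxx');
Select @Rows = @@ROWCOUNT, @TheId = SCOPE_IDENTITY();
If @Rows <> 1 or @TheId is null
    raiserror('First insert did not create a row', 16, 1);
-- some code here that uses @TheId

Set @TheId = null;  -- reset, so a silent failure below can't reuse the first id

Insert into dbo.TableName (Name) Values ('yyyy');
Select @Rows = @@ROWCOUNT, @TheId = SCOPE_IDENTITY();
If @Rows <> 1 or @TheId is null
    raiserror('Second insert did not create a row', 16, 1);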
I did see this in the comments:
"The second insert did not fail as the record was found in the database."
I put it to you that perhaps the record was already in the database before the code ran. Moreover, if there is a constraint on the table, that could be why the insert fails.

Within the scope of the proc or script, the id produced by the first insert is not the same value as the id produced by the second insert, so it's clearer to store them in separate variables. While it's possible to reuse variables, it's not good practice, imo, when it comes to multiple DML statements within a code block. In this script I add TRY/CATCH and SET XACT_ABORT ON to ensure a complete rollback of all DML statements within the block.
Something like this:
set nocount on;
set xact_abort on;

begin transaction
begin try
    Insert into dbo.TableName (Name) Values ('xxxx');

    if @@ROWCOUNT = 1
    begin
        Declare @Id1 int = SCOPE_IDENTITY();
        -- some code here that uses @Id1
        -- ...
    end
    else
        throw 50000, 'The first insert failed', 1;

    Insert into dbo.TableName (Name) Values ('yyyy');

    if @@ROWCOUNT = 1
    begin
        Declare @Id2 int = SCOPE_IDENTITY();
        -- some code here that uses @Id2
        -- ...
    end
    else
        throw 50000, 'The second insert failed', 1;

    commit transaction
end try
begin catch
    /* put error handling here */
    if @@TRANCOUNT > 0
        rollback transaction;
end catch

Thanks everyone for the help. We will likely go with creating a new variable for the 2nd insert.

Related

SQL Server stored procedures reading data before insert completed

I'm new to SQL Server and stored procedures and could do with a couple of pointers regarding transaction handling on a bug I've inherited.
I have two stored procedures, one inserts a record passed into it, then it calls another one where the first thing it does is read what was inserted.
But sometimes it completes successfully without processing the data. My suspicion is that the selects are happening before the insert has 'hit' the table and so retrieve no records, a case the stored procedure doesn't handle.
I don't have time to re-engineer just yet, but the transaction handling looks suspect. Below is a rough outline of what the stored procedures do.
procedure sp1
    (@id, @pbody)
as
begin
    begin try
        set nocount on;
        begin
            insert into tbl1 (id, tbody)
            values (@id, @pbody)
            exec sp2 @id
        end
    end try
    begin catch
        execute sperror
    end catch
end
go
procedure sp2 (@id)
as
begin
    begin try
        set nocount on;
        declare @vbody varchar(max)
        select @vbody = tbody -- I don't believe this step always retrieves the row inserted by sp1
        from tbl1 with (nolock)
        where id = @id
        create table #tmp1 (id, msg)
        insert into #tmp1
        select id, msg
        from openjson........
        while exists(select top 1 * from #tmp1) -- this looks similar to above, not sure the insert has finished before the read
        begin
            ** do some stuff **
        end
    end try
    begin catch
        execute sperror
    end catch
end
go
sp2 is using the WITH (NOLOCK) query hint, which can have unintended side-effects. Missing rows is just one of them.
Using NOLOCK? Here's How You'll Get the Wrong Query Results. - Brent Ozar Unlimited®
I'd strongly recommend removing that hint unless you really understand what it does and have a very good reason for using it.
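For what it's worth, since sp2 runs in the same session as sp1's insert, a default read (no hint) will see that row even if the surrounding work hasn't committed yet; a session always sees its own changes. A minimal sketch of the read in sp2 with the hint removed, using the same tables as the outline above:
-- Default read committed isolation instead of dirty reads:
select @vbody = tbody
from tbl1            -- no NOLOCK hint
where id = @id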

Trigger to handle multiple inserts

I've got a trigger:
CREATE TRIGGER tgr_incheck_vlucht
ON PassagierVoorVlucht
AFTER INSERT, UPDATE
AS
BEGIN
    IF @@ROWCOUNT = 0 BEGIN RETURN END
    SET NOCOUNT ON
    BEGIN TRY
        IF EXISTS
            (SELECT *
             FROM inserted I
             WHERE EXISTS (SELECT *
                           FROM PassagierVoorVlucht P
                           INNER JOIN Vlucht V ON P.vluchtnummer = V.vluchtnummer
                           WHERE I.inchecktijdstip >= vertrektijdstip))
        BEGIN
            -- "Check-in time must be before the arrival time"
            RAISERROR('Inchecktijdstip moet voor de aankomsttijd liggen', 16, 1)
        END
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 BEGIN ROLLBACK TRANSACTION END
        DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE(),
                @errorSeverity INT = ERROR_SEVERITY(),
                @errorState INT = ERROR_STATE()
        RAISERROR(@ErrorMessage, @errorSeverity, @errorState)
    END CATCH
END
Now I have written some test statements:
INSERT INTO PassagierVoorVlucht
VALUES(850, 5316, 1, '2002-01-01 13:37:00.000', 21),
(1002, 5316, 1, '2004-01-01 13:37:00.000', 21),
(1601, 5316, 1, '2004-05-01 13:37:00.000', 21),
(1602, 5316, 1, '2004-05-01 13:37:00.000', 21)
The trigger works for only ONE inserted row at a time, not for the whole block. How can I write the trigger so that it can handle multiple inserts?
The answer is you cannot... Fundamentally, triggers are fired for each insertion, not a batch of code. This is why it is called a trigger, or a response.
Furthermore, if your table had an insert identity column, triggers will not prevent gaps in the rows, since they function just like sequences: activated at the row level and after the statement completes (unless INSTEAD OF).
Perhaps helpful in explaining triggers is the following from MSDN:
CREATE TRIGGER (Transact-SQL)
Although a TRUNCATE TABLE statement is in effect a DELETE statement, it does not activate a trigger because the operation does not log individual row deletions. However, only those users with permissions to execute a TRUNCATE TABLE statement need be concerned about inadvertently circumventing a DELETE trigger this way.
Notice the part about individual row deletions as well as the point about scope. Perhaps understanding more what your requirements are might make the solution clear.
Now, you may use an INSTEAD OF trigger to fire before the attempted code executes, but that is likely not going to be a solution unless you had some staging table setup or similar for it to function. INSTEAD OF replaces the entire DML or DDL statement.
If you wish to protect the integrity of your data, consider the use of stored or pre-planned procedures.
If that is unacceptable or impractical, other constraints like referential keys, or even staging tables (depending on the need for the timeliness of data) which can later be imported, could be options.
To do this you need to use a temporary table inside your trigger, store all the inserted data in it, and then loop over that table with a while loop.
Here is an example where we add all the inserted users into an audit table using an insert trigger:
alter trigger tr_userData_forInsert
on userData
for insert
AS
BEGIN
    declare @id int, @name varchar(20)

    select *
    into #inserted_data
    from inserted

    while ( exists(select id from #inserted_data) )
    begin
        select top 1 @id = id, @name = full_name
        from #inserted_data

        insert into loggs
        values ( concat('a user with ', @id, ' and name of ', @name,
                        ' is added at ', getdate()) )

        delete from #inserted_data
        where id = @id
    end
end
I have already used this in more than one situation and it works perfectly.
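For what it's worth, the same audit can be written without the temp table and loop, since the inserted pseudo-table already holds every row of the statement. A set-based sketch, assuming the same userData and loggs tables as above:
alter trigger tr_userData_forInsert
on userData
for insert
AS
BEGIN
    set nocount on;
    -- one INSERT handles however many rows the triggering statement inserted
    insert into loggs
    select concat('a user with ', id, ' and name of ', full_name,
                  ' is added at ', getdate())
    from inserted;
END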

OPENQUERY appears to be rolled back when done

I'm using the following query.
select * from OPENQUERY(EXITWEB, N'SET NOCOUNT ON;
declare @result table (id int);
insert into [system_files] ([is_public], [file_name], [file_size], [content_type], [disk_name], [updated_at], [created_at])
output inserted.id into @result(id)
values (N''1'',N''7349.jpg'',N''146921'',N''image/jpeg'',N''5799dcc8a1eb1413195192.jpg'',N''2016-07-28 10:22:00.000'',N''2016-07-28 10:22:00.000'')
declare @id int = (select top 1 id from @result)
select * from system_files where id = @id
insert into linkToExternal (id, id_ext) values(@id, 47)
--select @id
')
When I perform a select from within the query, it works just fine.
But when I check my database after the call has finished, the record is no longer there.
So I suspect a transaction is being rolled back. My question is: why? What can I do to prevent the transaction from being rolled back, if that's the case?
Well, as always, after days of struggling and after posting a question on Stack Overflow, I found the solution: http://www.sqlservercentral.com/Forums/Topic1128997-391-1.aspx#bm1288825
I was having the same problem as you and almost gave up on it, but I finally found an answer. Reading an article about sharing data between stored procedures, I discovered that OPENQUERY issues an implicit transaction and that it was rolling back my insert. So I had to add an explicit COMMIT to my stored procedures. Additionally, I discovered that if I use it in a query that has a UNION, it has to be committed twice. Since I'm doing my insert inside a BEGIN TRY, I can always just commit twice and not worry about whether it is being used in a UNION. I'm returning different values if there is an error, but that was just a part of my debugging.
SELECT TOP 5 *
FROM mm
JOIN OPENQUERY([LOCALSERVER], 'EXEC cms60.dbo.sp_RecordReportLastRun ''LPS'', ''Test''') RptStats ON 1=1

ALTER PROCEDURE [dbo].[sp_RecordReportLastRun]
    -- Add the parameters for the stored procedure here
    @LibraryName varchar(50),
    @ReportName varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    BEGIN TRY
        INSERT INTO cms60.dbo.ReportStatistics (LibraryName, ReportName, RunDate)
        VALUES (@LibraryName, @ReportName, GETDATE())
        --
        COMMIT; -- Needed because OPENQUERY starts an implicit transaction but doesn't commit it.
        COMMIT; -- Need a second COMMIT when used in a UNION; although it throws an error when not used in a UNION, it doesn't cause a problem.
    END TRY
    BEGIN CATCH
        SELECT 2 Test
    END CATCH
    SELECT 1 Test
END
In my case, adding a ;COMMIT; after the inserts solved it and made sure the rows got written to the database.
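Applied to the original query, that just means ending the remote batch with an explicit commit; a sketch (the inserts are the same as in the query above, abbreviated here with a comment):
select * from OPENQUERY(EXITWEB, N'SET NOCOUNT ON;
declare @result table (id int);
-- ... same system_files insert, @id assignment, and select as above ...
insert into linkToExternal (id, id_ext) values(@id, 47);
commit;  -- explicitly commit the implicit transaction OPENQUERY opened
')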

SQL Server error handling in cursor inside trigger

I'm new to SQL Server error handling, and my English isn't too clear, so I apologize in advance for any misunderstandings.
The problem: I insert multiple records into a table. The table has an AFTER INSERT trigger, which processes the records one by one in a FETCH WHILE cycle with a cursor. If an error happens, everything is rolled back, so one wrong field in the inserted records loses all of them. The insert rolls back as well, so I can't even find the wrong record. I need to handle the errors inside the cursor, so that only the wrong record is rolled back.
I made a test database with 3 tables:
tA
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tB
ID int (PK, identity)
Timestamp datetime (default: getdate())
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tC
ID int PK
Timestamp datetime
VarTinyint1 tinyint
VarTinyint2 tinyint
String varchar(10)
tA contains 3 records, 1 of them wrong. I insert this content into tB.
tB has the trigger, which inserts the records into tC one by one.
tC has only tinyint columns, so inserting values greater than 255 will fail. This is where the error occurs in the test!
My trigger is:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
    IF @@ROWCOUNT = 0
        RETURN;
    SET NOCOUNT ON;

    DECLARE
        @ID AS int,
        @Timestamp AS datetime,
        @VarSmallint AS smallint,
        @VarTinyint AS tinyint,
        @String AS varchar(20)

    DECLARE curNyers CURSOR DYNAMIC
    FOR
        SELECT [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
        FROM INSERTED
        ORDER BY [ID]

    OPEN curNyers
    FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            BEGIN TRAN
            INSERT INTO [dbo].[tC] ([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
            VALUES (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String)
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            ROLLBACK TRAN
            INSERT INTO [dbo].[tErrorLog] ([ErrorTime], [UserName], [ErrorNumber],
                                           [ErrorSeverity], [ErrorState],
                                           [ErrorProcedure], [ErrorLine],
                                           [ErrorMessage], [RecordID])
            VALUES (SYSDATETIME(), SUSER_NAME(), ERROR_NUMBER(),
                    ERROR_SEVERITY(), ERROR_STATE(),
                    ERROR_PROCEDURE(), ERROR_LINE(),
                    ERROR_MESSAGE(), @ID)
        END CATCH
        FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
    END
    CLOSE curNyers
    DEALLOCATE curNyers
END
If I insert 2 good records with 1 wrong one, everything is rolled back and I get an error:
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
Please help me! How can I modify this trigger to work properly?
If I insert the wrong record, I need:
All the inserted records in tB
All the good records in tC
Error logged in tErrorLog
Thanks!
You have TWO major disasters in your trigger:
do not use a cursor inside a trigger - that's just horrible! Triggers fire whenever a given operation happens - you have little control over when and how many times they fire. Therefore, in order not to compromise your system performance too much, triggers should be very small, fast, nimble - do not do any heavy lifting and extensive processing in a trigger. A cursor is anything but nimble and fast - it's a resource-hog, processor-hog, memory-leaking monster - AVOID those whenever you can, and most definitely inside a trigger! (and 99% of the time you don't need them anyway)
You can rewrite your whole logic into this one single, fast, set-based statement:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
    INSERT INTO [dbo].[tC] ([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
    SELECT [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
    FROM INSERTED
END
Never call COMMIT TRAN inside a trigger. The trigger executes inside the context and transaction of the statement that caused it to fire - if everything is OK, just let the trigger finish and the transaction will be committed just fine. If you need to abort, call ROLLBACK. But never ever call COMMIT TRAN in the middle of a trigger. Just don't.
I deleted the TRIGGER and copy-pasted the code from it into a STORED PROCEDURE.
Then I added a Status column to tB, with a default of 0.
1 is "Record processed OK", 2 is "Record processing fault".
I fill the cursor with WHERE Status = 0.
In the TRY section I update the status to 1; in the CATCH section I update it to 2.
I have no jobs, so I run the SP from the Windows scheduler with a batch file using the SQLCMD command.
Now the processing works well; moreover, it worked on the first try. Thanks for the help!
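A sketch of roughly how that reworked procedure might look, reusing the cursor body from the trigger above (the procedure name is made up; tB, tC, tErrorLog, and the Status column come from the test setup described earlier):
create procedure dbo.usp_ProcessNewRows
as
begin
    set nocount on;
    declare @ID int, @Timestamp datetime, @VarSmallint smallint,
            @VarTinyint tinyint, @String varchar(20);

    declare curNyers cursor local fast_forward for
        select [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
        from dbo.tB
        where Status = 0          -- only unprocessed rows
        order by [ID];

    open curNyers;
    fetch next from curNyers into @ID, @Timestamp, @VarSmallint, @VarTinyint, @String;
    while @@FETCH_STATUS = 0
    begin
        begin try
            insert into dbo.tC ([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
            values (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String);
            update dbo.tB set Status = 1 where ID = @ID;  -- record processed OK
        end try
        begin catch
            update dbo.tB set Status = 2 where ID = @ID;  -- record processing fault
            insert into dbo.tErrorLog ([ErrorTime], [UserName], [ErrorNumber],
                                       [ErrorSeverity], [ErrorState], [ErrorProcedure],
                                       [ErrorLine], [ErrorMessage], [RecordID])
            values (sysdatetime(), suser_name(), error_number(),
                    error_severity(), error_state(), error_procedure(),
                    error_line(), error_message(), @ID);
        end catch
        fetch next from curNyers into @ID, @Timestamp, @VarSmallint, @VarTinyint, @String;
    end
    close curNyers;
    deallocate curNyers;
end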

How do you write a recursive stored procedure

I simply want a stored procedure that calculates a unique id (separate from the identity column) and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and am not sure how I should get the SP to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this SP.
Edit
What I have now come up with is the following. (Note: I already have an identity column; I need a secondary id column.)
ALTER PROCEDURE [dbo].[DataInstance_Insert]
    @DataContainerId int out,
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataInstanceId int out
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    WHILE (@DataContainerId is null)
        EXEC DataContainer_Insert @ModelEntityId, @ParentDataContainerId, @DataContainerId output

    INSERT INTO DataInstance (DataContainerId, ModelEntityId)
    VALUES (@DataContainerId, @ModelEntityId)

    SELECT @DataInstanceId = scope_identity()
END

ALTER PROCEDURE [dbo].[DataContainer_Insert]
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataContainerId int out
AS
BEGIN
    BEGIN TRY
        SET NOCOUNT ON;
        DECLARE @ReferenceId int
        SELECT @ReferenceId = isnull(Max(ReferenceId) + 1, 1) from DataContainer Where ModelEntityId = @ModelEntityId
        INSERT INTO DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
        VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId)
        SELECT @DataContainerId = scope_identity()
    END TRY
    BEGIN CATCH
    END CATCH
END
In CATCH blocks you must check the XACT_STATE() value. You may be in a doomed transaction (-1), in which case you are forced to roll back. Or your transaction may have already rolled back, and you should not continue to work under the assumption of an existing transaction. For a template procedure that handles T-SQL exceptions, TRY/CATCH blocks, and transactions correctly, see Exception handling and nested transactions.
Never, in any language, do recursive calls in exception blocks. You don't check why you hit an exception, so you don't know whether it's OK to try again. What if the exception is error 652, a read-only filegroup? Or your database is at max size? You'll recurse until you hit a stack overflow...
Code that reads a value, makes a decision based on that value, and then writes something is always going to fail under concurrency unless properly protected. You need to wrap the SELECT and INSERT in a transaction, and your SELECT must run under the SERIALIZABLE isolation level.
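A minimal sketch of a CATCH block that checks XACT_STATE() before deciding what to do (the variable name @ErrMsg is illustrative, not from the linked template):
begin catch
    if xact_state() = -1
        rollback transaction;   -- doomed transaction: rollback is mandatory
    else if xact_state() = 1
        rollback transaction;   -- transaction still alive; we choose to abort our work
    -- if xact_state() = 0, there is no transaction left to roll back

    declare @ErrMsg nvarchar(4000) = error_message();
    raiserror(@ErrMsg, 16, 1);
end catch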
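A sketch of the read-then-write from DataContainer_Insert protected that way; UPDLOCK plus HOLDLOCK on the read is one common way to get serializable-range behavior on just this statement, rather than changing the isolation level for the whole session:
BEGIN TRANSACTION;

DECLARE @ReferenceId int;

-- UPDLOCK + HOLDLOCK hold key-range locks until commit, so two concurrent
-- callers cannot both read the same MAX and insert duplicate ReferenceIds
SELECT @ReferenceId = isnull(MAX(ReferenceId) + 1, 1)
FROM DataContainer WITH (UPDLOCK, HOLDLOCK)
WHERE ModelEntityId = @ModelEntityId;

INSERT INTO DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId);

SELECT @DataContainerId = scope_identity();

COMMIT TRANSACTION;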
And finally, ignoring the blatantly wrong code in your post, here is how you call a stored procedure passing in OUTPUT arguments:
exec DataContainer_Insert @SomeData, @DataContainerId OUTPUT;
Better yet, why not make UserID an identity column instead of trying to re-implement an identity column manually?
BTW: I think you meant
VALUES (@DataContainerId + 1, SomeData)
Why not use the NEWID() T-SQL function? (assuming SQL Server 2005/2008)
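If a GUID is acceptable as the secondary id, that removes the read-then-write race entirely. A sketch (the column name SecondaryId is made up for illustration):
-- A uniqueidentifier column with a NEWID() default needs no MAX+1 logic and no retry loop:
alter table DataContainer
    add SecondaryId uniqueidentifier not null
        constraint DF_DataContainer_SecondaryId default newid();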
That SP will never do a successful insert: you have an identity property on the DataContainer table but you are inserting the ID. In that case you would need to SET IDENTITY_INSERT ON, but then SCOPE_IDENTITY() won't work.
A PK violation also might not be trapped, so you might also need to check XACT_STATE().
Why are you messing around with MAX? Use SCOPE_IDENTITY() and be done with it.
