I simply want a stored procedure that calculates a unique id (separate from the identity column) and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and I am not sure how I should get the SP to call itself and set the appropriate output parameter. I would also appreciate some pointers on how to test this SP.
Edit
What I have now come up with is the following (note that I already have an identity column; I need a secondary id column):
ALTER PROCEDURE [dbo].[DataInstance_Insert]
    @DataContainerId int out,
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataInstanceId int out
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    WHILE (@DataContainerId is null)
        EXEC DataContainer_Insert @ModelEntityId, @ParentDataContainerId, @DataContainerId output
    INSERT INTO DataInstance (DataContainerId, ModelEntityId)
    VALUES (@DataContainerId, @ModelEntityId)
    SELECT @DataInstanceId = scope_identity()
END
ALTER PROCEDURE [dbo].[DataContainer_Insert]
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataContainerId int out
AS
BEGIN
    BEGIN TRY
        SET NOCOUNT ON;
        DECLARE @ReferenceId int
        SELECT @ReferenceId = isnull(Max(ReferenceId)+1, 1) from DataContainer Where ModelEntityId = @ModelEntityId
        INSERT INTO DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
        VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId)
        SELECT @DataContainerId = scope_identity()
    END TRY
    BEGIN CATCH
    END CATCH
END
In CATCH blocks you must check the XACT_STATE value. You may be in a doomed transaction (-1), in which case you are forced to roll back. Or your transaction may have already rolled back, and you should not continue to work under the assumption of an existing transaction. For a template procedure that handles T-SQL exceptions, TRY/CATCH blocks and transactions correctly, see Exception handling and nested transactions.
Never, in any language, do recursive calls in exception blocks. You don't check why you hit an exception, so you don't know whether it is OK to try again. What if the exception is 652, read-only filegroup? Or your database is at max size? You'll recurse until you hit a stack overflow...
Code that reads a value, makes a decision based on that value, then writes something will always fail under concurrency unless properly protected. You need to wrap the SELECT and INSERT in a transaction, and your SELECT must run under the SERIALIZABLE isolation level.
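For illustration, a sketch of what DataContainer_Insert could look like with those fixes applied (not a drop-in replacement; the UPDLOCK hint is my addition, to stop two concurrent callers reading the same MAX before either one writes):
ALTER PROCEDURE [dbo].[DataContainer_Insert]
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataContainerId int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- the read and the write are now one atomic unit
        DECLARE @ReferenceId int;
        SELECT @ReferenceId = ISNULL(MAX(ReferenceId) + 1, 1)
        FROM dbo.DataContainer WITH (UPDLOCK)
        WHERE ModelEntityId = @ModelEntityId;
        INSERT INTO dbo.DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
        VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId);
        SELECT @DataContainerId = SCOPE_IDENTITY();
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- roll back only if a transaction is still open (XACT_STATE is -1 or 1)
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        -- re-raise so the caller knows the insert failed
        DECLARE @msg nvarchar(2048);
        SELECT @msg = ERROR_MESSAGE();
        RAISERROR (@msg, 16, 1);
    END CATCH
END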
And finally, ignoring the blatantly wrong code in your post, here is how you call a stored procedure passing in OUTPUT arguments:
exec DataContainer_Insert @SomeData, @DataContainerId OUTPUT;
Better yet, why not make UserID an identity column instead of trying to re-implement an identity column manually?
BTW: I think you meant
VALUES (@DataContainerId + 1, SomeData)
Why not use the NEWID() T-SQL function? (Assuming SQL Server 2005/2008.)
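For example (a sketch; dbo.Demo is a hypothetical table):
-- generate a GUID on the fly
SELECT NEWID();

-- or let the column default to one
CREATE TABLE dbo.Demo
(
    DemoId uniqueidentifier NOT NULL DEFAULT NEWID(),
    SomeData varchar(50)
);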
That SP will never do a successful insert: you have an identity property on the DataContainer table, but you are inserting the ID. In that case you would need to SET IDENTITY_INSERT ON, but then SCOPE_IDENTITY() won't work.
A PK violation also might not be trapped, so you might also need to check XACT_STATE().
Why are you messing around with MAX? Use SCOPE_IDENTITY() and be done with it.
Would there be any benefit to dropping SCOPE_IDENTITY() and switching to @@IDENTITY? The area I'm talking about is part of an install script that sets up a database for our customers. It inserts a record in one table, takes the identity key from that table, and inserts it as a foreign key into another table. We do this twice.
We seem to have a rare condition in which, the second time this happens, the id from the first insert ends up in the second table for both passes, causing issues with the data. There is a chance that something else altogether is causing this, but my lead seemed to zero in on SCOPE_IDENTITY() as possibly being the culprit.
Declare @TheId int
Insert into dbo.TableName (Name) Values ('xxxx')
Select @TheId = SCOPE_IDENTITY()
-- some code here that uses @TheId
-- ...
Insert into dbo.TableName (Name) Values ('yyyy')
Select @TheId = SCOPE_IDENTITY()
-- some code here that uses @TheId
-- at this point, we may have the condition that SCOPE_IDENTITY() still has the value before that 2nd insert...
The only way SCOPE_IDENTITY() could still hold the prior id value in this context is if the INSERT statement does not create any rows. In that situation, @@IDENTITY isn't going to fix anything. In fact, @@IDENTITY is less specific, and therefore could only make things worse.
What you can do is use a different variable for the second insert. Or you could set @TheId back to NULL before the second insert runs; that way you'll be able to tell if something went wrong. @@ROWCOUNT is also useful for this.
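A sketch of that reset-to-NULL variant, using the names from the question:
Set @TheId = NULL  -- clear the value left over from the first insert
Insert into dbo.TableName (Name) Values ('yyyy')
If @@ROWCOUNT = 1
    Select @TheId = SCOPE_IDENTITY()
-- if @TheId is still NULL here, the second insert did not create a row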
I did see this in the comments:
"The second insert did not fail as the record was found in the database."
I put it to you that perhaps the record was already in the database before the code ran. Moreover, if there is a constraint on the table, that could be the reason the insert fails.
Within the scope of the proc or script, the @TheId created by the first insert is not the same object as the @TheId created by the second insert. While it's possible to reuse variables, it's not good practice, in my opinion, when it comes to multiple DML statements within a code block. In this script I add TRY/CATCH and SET XACT_ABORT ON to ensure a complete rollback of all DML statements within the block.
Something like this
set nocount on;
set xact_abort on;
begin transaction
begin try
    Insert into dbo.TableName (Name) Values ('xxxx');
    if @@rowcount=1
    begin
        Declare @Id1 int = SCOPE_IDENTITY();
        -- some code here that uses @Id1
        -- ...
    end
    else
        throw 50000, 'The first insert failed', 1;
    Insert into dbo.TableName (Name) Values ('yyyy');
    if @@rowcount=1
    begin
        Declare @Id2 int = SCOPE_IDENTITY();
        -- some code here that uses @Id2
        -- ...
    end
    else
        throw 50000, 'The second insert failed', 1;
    commit transaction
end try
begin catch
    /* put error handling here */
    rollback transaction
end catch
Thanks everyone for the help. We will likely go with creating a new variable for the 2nd insert.
I need some code advice.
When printing offers, our ERP program generates an ID number in the table "Angebot" in the format AYYNNNNN. This mask is set in the administrative settings, but there is also an option to override this number and set a manual one, which causes lots of trouble, as people tend to break the ID counter.
I'd like to create a trigger that sends a message when the id number is not in the correct format, so I have to check for that specific column to be correct.
The IF statement would check the following:
if (Angebotsnr NOT LIKE 'A'+RIGHT(DATEPART(yy,getdate())+'_____') then RAISEERROR
There is already an existing trigger in the database that checks for something else, so I only need to add the second check to ensure that it is right, but where would I put the if statement and how do I check it?
This is the code of the existing trigger:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[ANGEBOT_ITrig] ON [dbo].[Angebot] FOR INSERT AS
SET NOCOUNT ON
/* * NO INSERT WHEN A MATCHING KEY IS MISSING IN 'ErlGrp' */
IF (SELECT COUNT(*) FROM inserted) !=
(SELECT COUNT(*) FROM ErlGrp, inserted WHERE (ErlGrp.ABTNR = inserted.ABTEILUNG))
BEGIN
RAISERROR ( 'Some error statement',0,0)
ROLLBACK TRANSACTION
END
The action would be the same, just with a different error message.
Can someone point me in the right direction?
Thanks.
I would handle this in the procedure that's doing the insert so it doesn't ever insert and fire off other triggers.
create proc myInsertProc (@ID char(8))
as
begin
    --copied from you, but it's missing part of the RIGHT function
    if (Angebotsnr NOT LIKE 'A'+RIGHT(DATEPART(yy,getdate())+'_____')
    begin
        raiserror('You provided a wrongly formatted ID',16,1) with log
        return
    end
    ...continue on with other code
end
This will raise the error and write it to the SQL Server error log. You can remove WITH LOG if you don't want that. The RETURN ends the batch. You can also wrap this in a TRY/CATCH on insert if you'd like.
I'd use this in the IF block personally.
if (Angebotsnr NOT LIKE 'A' + right(convert(varchar,DATEPART(year,getdate())),2) + '%' or len(Angebotsnr) != 8)
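If you do want the check inside the existing trigger instead, one possible shape is the sketch below; it validates every inserted row, following the pattern of the check that is already there:
-- inside [dbo].[ANGEBOT_ITrig], after the existing ErlGrp check
IF EXISTS (SELECT 1 FROM inserted
           WHERE Angebotsnr NOT LIKE 'A' + right(convert(varchar, DATEPART(year, getdate())), 2) + '%'
              OR len(Angebotsnr) != 8)
BEGIN
    RAISERROR ('Angebotsnr is not in the AYYNNNNN format', 16, 1)
    ROLLBACK TRANSACTION
END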
EDIT: This question is no longer valid, as the issue was something else. Please see my explanation below in my answer.
I'm not sure of the etiquette, so I'll leave this question in its current state.
I have a stored procedure that writes some data to a table.
I'm using Microsoft Practices Enterprise library for making my stored procedure call.
I invoke the stored procedure using a call to ExecuteNonQuery.
After ExecuteNonQuery returns, I invoke a 3rd party library. It calls back to me on a separate thread in about 100 ms.
I then invoke another stored procedure to pull the data I had just written.
In about 99% of cases the data is returned. Once in a while it returns no rows (i.e. it can't find the data). If I put a conditional breakpoint to detect this condition in the debugger and manually rerun the stored procedure, it always returns my data.
This makes me believe the writing stored procedure is working, just not committing when it's called.
I'm fairly novice when it comes to SQL, so it's entirely possible that I'm doing something wrong. I would have thought that the writing stored procedure would block until its contents were committed to the db.
Writing Stored Procedure
ALTER PROCEDURE [dbo].[spWrite]
    @guid varchar(50),
    @data varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    -- see if this guid has already been added to the table
    DECLARE @foundGuid varchar(50);
    SELECT @foundGuid = [guid] from [dbo].[Details] where [guid] = @guid;
    IF @foundGuid IS NULL
        -- first time we've seen this guid
        INSERT INTO [dbo].[Details] ( [guid], data ) VALUES (@guid, @data)
    ELSE
        -- updating or verifying order
        UPDATE [dbo].[Details] SET data = @data WHERE [guid] = @guid
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Reading Stored Procedure
ALTER PROCEDURE [dbo].[spRead]
    @guid varchar(50)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    SELECT * from [dbo].[Details] where [guid] = @guid;
END
To actually block other transactions and manually commit,
maybe adding
BEGIN TRANSACTION
--place your
--transactions you wish to do here
--if everything was okay
COMMIT TRANSACTION
--or
--ROLLBACK TRANSACTION if something went wrong
could help you?
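Applied to spWrite from the question, that could look like the sketch below; the UPDLOCK/HOLDLOCK hint on the existence check is my addition, to stop two concurrent callers for the same guid both taking the INSERT branch:
ALTER PROCEDURE [dbo].[spWrite]
    @guid varchar(50),
    @data varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    IF EXISTS (SELECT 1 FROM [dbo].[Details] WITH (UPDLOCK, HOLDLOCK) WHERE [guid] = @guid)
        UPDATE [dbo].[Details] SET data = @data WHERE [guid] = @guid;
    ELSE
        INSERT INTO [dbo].[Details] ([guid], data) VALUES (@guid, @data);
    COMMIT TRANSACTION;
END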
I’m not familiar with the data access tools you mention, but from your description I would guess that either the process does not wait for the stored procedure to complete execution before proceeding to the next steps, or ye olde “something else” is messing with the data in between your write and read calls.
One way to tell what’s going on is to use SQL Profiler. Fire it up, monitor all possible query execution events on the database (including stored procedure and stored procedures line start/stop events), watch the Text and Started/Ended columns, correlate this with the times you are seeing while tracing the application, and that should help you figure out what’s going on there. (SQL Profiler can be complex to use, but there are many sources on the web that explain it, and it is well worth learning how to use it.)
I'll leave my answer below as there are comments on it...
Ok, I feel shame; I had simplified my question too much. What was actually happening is two things:
1) the inserting procedure is actually running on a separate machine( distributed system).
2) the inserting procedure actually inserts data into two tables without a transaction.
This means the query can run at the same time and find the tables in a state where one has been written to and the second table hasn't yet had its write committed.
A simple transaction fixes this as the reading query can handle either case of no write or full write but couldn't handle the case of one table written to and the other having a pending commit.
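A minimal sketch of that fix inside the inserting procedure (TableA, TableB and the parameters are placeholders, not the real schema):
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    -- both writes commit atomically, so a concurrent reader sees either none or both
    INSERT INTO dbo.TableA ([guid], data)  VALUES (@guid, @data);
    INSERT INTO dbo.TableB ([guid], extra) VALUES (@guid, @extra);
COMMIT TRANSACTION;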
Well it turns out that when I created the stored procedure the MSSQLadmin tool added a line to it by default:
SET NOCOUNT ON;
If I turn that to:
SET NOCOUNT OFF;
then my procedure actually commits to the database properly. Strange that this default would actually end up causing problems.
An easy way using TRY/CATCH; like it if you find it useful.
BEGIN TRAN
BEGIN TRY
    INSERT INTO meals
    (
        ...
    )
    Values(...)
    COMMIT TRAN
END TRY
BEGIN CATCH
    ROLLBACK TRAN
    SET @resp = convert(varchar(10), ERROR_LINE()) + ': ' + ERROR_MESSAGE()
END CATCH
I have a table with unique constraint on it:
create table dbo.MyTab
(
MyTabID int primary key identity,
SomeValue nvarchar(50)
);
Create Unique Index IX_UQ_SomeValue
On dbo.MyTab(SomeValue);
Go
Which code is better to check for duplicates (success = 0 if duplicate found)?
Option 1
Declare @someValue nvarchar(50) = 'aaa'
Declare @success bit = 1;
Begin Try
    Insert Into MyTab(SomeValue) Values (@someValue);
End Try
Begin Catch
    -- let's assume that only constraint errors can happen
    Set @success = 0;
End Catch
Select @success
Option 2
Declare @someValue nvarchar(50) = 'aaa'
Declare @success bit = 1;
IF EXISTS (Select 1 From MyTab Where SomeValue = @someValue)
    Set @success = 0;
Else
    Insert Into MyTab(SomeValue) Values (@someValue);
Select @success
From my point of view, I believe that TRY/CATCH is for errors that were NOT expected (like deadlocks, or even constraint violations when duplicates are not expected). In this case it is possible that sometimes a user will try to submit a duplicate, so the error is expected.
I have found an article by Aaron Bertrand which states that checking for duplicates is not much slower even when most inserts succeed.
There is also plenty of advice around the net to use TRY/CATCH (to issue one statement instead of two). In my environment maybe only 1% of cases would fail, so that kind of makes sense too.
What is your opinion? What other reasons are there to use option 1 or option 2?
UPDATE: I'm not sure it is important in this case, but the table has an INSTEAD OF UPDATE trigger (for audit purposes, row deletion also happens through an UPDATE statement).
I've seen that article, but note that for low failure rates I'd prefer the "JFDI" pattern. I've used this on high-volume systems before (40k rows/second).
In Aaron's code, you can still get a duplicate when testing first under high load and lots of writes (explained here on dba.se). This is important: your duplicates still happen, just less often. You still need exception handling, and you need to know when to ignore the duplicate error (2627).
Edit: explained succinctly by Remus in another answer
However, I would use a separate TRY/CATCH to test only for the duplicate error:
BEGIN TRY
    -- stuff
    BEGIN TRY
        INSERT etc
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 2627
            RAISERROR etc
    END CATCH
    --more stuff
END TRY
BEGIN CATCH
    RAISERROR etc
END CATCH
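Applied to the MyTab example from the question, that might look like the sketch below (note the question creates a unique index rather than a unique constraint, so the duplicate error there is 2601; 2627 covers the constraint case):
Declare @someValue nvarchar(50) = 'aaa';
Declare @success bit = 1;
BEGIN TRY
    Insert Into MyTab(SomeValue) Values (@someValue);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2601, 2627)  -- duplicate key: expected, swallow it
        Set @success = 0;
    ELSE
    BEGIN
        -- anything else is unexpected: re-raise for the caller
        DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END
END CATCH
Select @success;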
To start with, the EXISTS(SELECT ...) approach is incorrect, as it fails under concurrency: multiple transactions could run the check concurrently and all conclude that they have to INSERT; one will be the lucky winner that inserts first, and all the rest will hit a constraint violation. In other words, you have a race condition between the check and the insert. So you will have to TRY/CATCH anyway, so better to just try/catch.
Error logging
Don't hold me to this, but there are likely logging implications when an exception is thrown. If you check before inserting, no such thing happens.
Knowing why and when it can break
A try/catch block should be used for parts that can break for non-deterministic reasons. I would say it's wiser in your case to check for existing records, because you know it can break and why exactly. So checking it yourself is, from a developer's point of view, the better way.
But your code may still break on insert, because between the check time and the insert time some other user may have inserted the value already... But that is (as said previously) a non-deterministic error. That's why you:
should be checking with exists
inserting within try/catch
Self explanatory code
Another positive is that it is plain to see from the code why it can break, while a try/catch block can hide that, and someone may remove it, wondering why it is there when the code is just inserting records...
Option 3
Begin Try
    SET XACT_ABORT ON
    Begin Tran
        IF NOT EXISTS (Select 1 From MyTab Where SomeValue = @someValue)
        Begin
            Insert Into MyTab(SomeValue) Values (@someValue);
        End
    Commit Tran
End Try
Begin Catch
    Rollback Tran
End Catch
Why not implement a INSTEAD OF INSERT trigger on the table? You can check if the row exists, do nothing if it does, and insert the row if it doesn't.
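A sketch of that idea against dbo.MyTab from the question (the unique index still backs it up if two sessions race between the check and the insert):
CREATE TRIGGER dbo.MyTab_InsteadOfInsert ON dbo.MyTab
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- insert only rows whose SomeValue is not already present; silently skip the rest
    INSERT INTO dbo.MyTab (SomeValue)
    SELECT i.SomeValue
    FROM inserted i
    WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTab t WHERE t.SomeValue = i.SomeValue);
END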
I have a system which requires I have IDs on my data before it goes to the database. I was using GUIDs, but found them to be too big to justify the convenience.
I'm now experimenting with implementing a sequence generator which basically reserves a range of unique ID values for a given context. The code is as follows:
ALTER PROCEDURE [dbo].[Sequence.ReserveSequence]
    @Name varchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- Ensure the parameters are valid
    IF (@Name IS NULL OR @Count IS NULL OR @Count < 0)
        RETURN -1;
    -- Reserve the sequence
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION
    -- Get the sequence ID, and the last reserved value of the sequence
    DECLARE @SequenceID int;
    DECLARE @LastValue bigint;
    SELECT TOP 1 @SequenceID = [ID], @LastValue = [LastValue]
    FROM [dbo].[Sequences]
    WHERE [Name] = @Name;
    -- Ensure the sequence exists
    IF (@SequenceID IS NULL)
    BEGIN
        -- Create the new sequence
        INSERT INTO [dbo].[Sequences] ([Name], [LastValue])
        VALUES (@Name, @Count);
        -- The first reserved value of a sequence is 1
        SET @FirstValue = 1;
    END
    ELSE
    BEGIN
        -- Update the sequence
        UPDATE [dbo].[Sequences]
        SET [LastValue] = @LastValue + @Count
        WHERE [ID] = @SequenceID;
        -- The sequence start value will be the last previously reserved value + 1
        SET @FirstValue = @LastValue + 1;
    END
    COMMIT TRANSACTION
END
The 'Sequences' table is just an ID, Name (unique), and the last allocated value of the sequence. Using this procedure I can request N values in a named sequence and use these as my identifiers.
This works great so far - it's extremely quick since I don't have to constantly ask for individual values, I can just use up a range of values and then request more.
The problem is that at extremely high frequency, calling the procedure concurrently can sometimes result in a deadlock. I have only seen this occur when stress testing, but I'm worried it'll crop up in production. Are there any notable flaws in this procedure, and can anyone recommend any way to improve it? It would be nice to do this without transactions, for example, but I do need it to be 'thread safe'.
MS themselves offer a solution and even they say it locks/deadlocks.
If you want to add some lock hints then you'd reduce concurrency for your high loads
Options:
You could develop against the "Denali" CTP which is the next release
Use IDENTITY and the OUTPUT clause like everyone else
Adopt/modify the solutions above
On DBA.SE there is "Emulate a TSQL sequence via a stored procedure": see dportas' answer which I think extends the MS solution.
I'd recommend sticking with the GUIDs if, as you say, this is mostly about composing data ready for a bulk insert (it's simpler than what I present below).
As an alternative, could you work with a restricted count? Say, 100 ID values at a time? In that case, you could have a table with an IDENTITY column, insert into that table, return the generated ID (say, 39), and then your code could assign all values between 3900 and 3999 (e.g. multiply up by your assumed granularity) without consulting the database server again.
Of course, this could be extended to allocating multiple IDs in a single call, provided you're okay with some IDs potentially going unused. E.g. you need 638 IDs, so you ask the database to assign you 7 new ID values (implying that you've allocated 700 values), use the 638 you want, and the remaining 62 never get assigned.
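A sketch of that scheme, with a hypothetical allocator table and a granularity of 100:
-- one row per reserved block of 100 IDs
CREATE TABLE dbo.IdBlocks
(
    BlockId int IDENTITY(1,1) PRIMARY KEY,
    RequestedAt datetime NOT NULL DEFAULT GETDATE()
);

DECLARE @block int;
INSERT INTO dbo.IdBlocks DEFAULT VALUES;
SET @block = SCOPE_IDENTITY();

-- the caller now owns IDs @block * 100 through @block * 100 + 99
SELECT @block * 100 AS FirstId, @block * 100 + 99 AS LastId;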
Can you get some kind of deadlock trace? For example, enable trace flag 1222 as shown here. Duplicate the deadlock. Then look in the SQL Server log for the deadlock trace.
Also, you might inspect what locks are taken out in your code by inserting a call to exec sp_lock or select * from sys.dm_tran_locks immediately before the COMMIT TRANSACTION.
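For example (trace flag 1222 applies instance-wide until it is turned off or the server restarts):
-- write deadlock graphs to the SQL Server error log
DBCC TRACEON (1222, -1);

-- inspect the locks held by the current session, just before COMMIT TRANSACTION
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;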
Most likely you are observing a conversion deadlock. To avoid them, you want to make sure that your table is clustered and has a PK, but this advice is specific to 2005 and 2008 R2, and they can change the implementation, rendering this advice useless. Google up "Some heap tables may be more prone to deadlocks than identical tables with clustered indexes".
Anyway, if you observe an error during stress testing, it is likely that sooner or later it will occur in production as well.
You may want to use sp_getapplock to serialize your requests. Google up "Application Locks (or Mutexes) in SQL Server 2005". Also I described a few useful ideas here: "Developing Modifications that Survive Concurrency".
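A sketch of the sp_getapplock approach around the body of the procedure (the resource name is arbitrary; the lock is released automatically when the transaction ends):
BEGIN TRANSACTION;
DECLARE @rc int;
EXEC @rc = sp_getapplock
    @Resource = 'ReserveSequence',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 10000;
IF @rc < 0
BEGIN
    -- could not acquire the lock within the timeout
    ROLLBACK TRANSACTION;
    RETURN -1;
END
-- ... read and update the [Sequences] row here ...
COMMIT TRANSACTION;  -- releases the application lock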
I thought I'd share my solution. It doesn't deadlock, nor does it produce duplicate values. An important difference between this and my original procedure is that it doesn't create the sequence if it doesn't already exist:
ALTER PROCEDURE [dbo].[ReserveSequence]
(
    @Name nvarchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON;
    IF (@Count <= 0)
    BEGIN
        SET @FirstValue = NULL;
        RETURN -1;
    END
    DECLARE @Result TABLE ([LastValue] bigint)
    -- Update the sequence last value, and capture the value it had before the update
    UPDATE [Sequences]
    SET [LastValue] = [LastValue] + @Count
    OUTPUT DELETED.LastValue INTO @Result
    WHERE [Name] = @Name;
    -- The first reserved value is the previous last value + 1
    SELECT TOP 1 @FirstValue = [LastValue] + 1 FROM @Result;
END