Using Lock in Stored Procedure SQL Server 2005 - sql-server

What I am looking for is a way to avoid concurrency issues in my stored procedure.
Here is my script; I am trying SET TRANSACTION ISOLATION LEVEL SERIALIZABLE:
ALTER PROC [dbo].[SP_GenerateNextReportID]
    @type nvarchar(255), @identity int OUTPUT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    DECLARE @id int;
    SET @id = ISNULL((SELECT LastUsedIdentity FROM ReportProcessGenerator WHERE Type = @type), 0);
    IF (@id = 0)
        INSERT ReportProcessGenerator VALUES (@type, @id + 1)
    ELSE
        UPDATE ReportProcessGenerator SET LastUsedIdentity = @id + 1 WHERE Type = @type;
    SET @identity = @id + 1;
END
I'm not sure whether this is the right way or not?

If you have a UNIQUE index or a PRIMARY KEY on ReportProcessGenerator.Type, then your stored procedures will not be able to modify the record for the same type concurrently.
Note that you should use an UPDLOCK hint on the initial SELECT (SQL Server's counterpart of SELECT FOR UPDATE) or the OUTPUT clause to avoid deadlocks, as @Martin points out. Under SERIALIZABLE, concurrent SELECT queries hold their shared locks until the end of the transaction, and the subsequent UPDATE queries are then unable to upgrade them, so the sessions deadlock.
However, why would you want to maintain separate per-type identities? Usually one identity that is unique across all types is as good as multiple identities that are unique within a type, and the former is much easier to maintain.
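If you do keep the per-type counter table, a minimal sketch of the procedure using an UPDLOCK/HOLDLOCK hint on the initial read might look like this. The table and column names come from the question; the explicit column list in the INSERT is an assumption.
ALTER PROC [dbo].[SP_GenerateNextReportID]
    @type nvarchar(255), @identity int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    DECLARE @id int;

    -- UPDLOCK + HOLDLOCK: take and hold an update lock on the row (or key range)
    -- up front, so concurrent callers queue here instead of deadlocking later.
    SET @id = ISNULL((SELECT LastUsedIdentity
                      FROM ReportProcessGenerator WITH (UPDLOCK, HOLDLOCK)
                      WHERE Type = @type), 0);

    IF (@id = 0)
        INSERT ReportProcessGenerator (Type, LastUsedIdentity) VALUES (@type, @id + 1);
    ELSE
        UPDATE ReportProcessGenerator SET LastUsedIdentity = @id + 1 WHERE Type = @type;

    SET @identity = @id + 1;

    COMMIT TRANSACTION;
END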

Related

Rollback Transaction when concurrency check fails

I have a stored procedure which does a lot of probing of the database to determine whether some records should be updated.
Each record (Order) has a TIMESTAMP column called [RowVersion].
I store the candidate record IDs and RowVersions in a table variable called @Ids:
DECLARE @Ids TABLE (id int, [RowVersion] Binary(8))
I get the count of candidates with the following:
DECLARE #FoundCount int
SELECT #FoundCount = COUNT(*) FROM #Ids
Since records may change between when I SELECT and when I eventually try to UPDATE, I need a way to check concurrency and ROLLBACK TRANSACTION if that check fails.
What I have so far:
BEGIN TRANSACTION
-- create new combinable order group
INSERT INTO CombinableOrders DEFAULT VALUES
-- update orders found into new group
UPDATE Orders
SET Orders.CombinableOrder_Id = SCOPE_IDENTITY()
FROM Orders AS Orders
INNER JOIN @Ids AS Ids
    ON Orders.Id = Ids.Id
    AND Orders.[RowVersion] = Ids.[RowVersion]
-- if the rows updated don't match the rows found, there must be a concurrency issue, roll back
IF (@@ROWCOUNT != @FoundCount)
BEGIN
    ROLLBACK TRANSACTION
    SET @Updated = -1
END
ELSE
    COMMIT
From the above, I'm filtering the UPDATE with the stored [RowVersion]; this should skip any records that have since been changed (hopefully).
However, I'm not quite sure whether I'm using transactions or optimistic concurrency (with regard to TIMESTAMP) correctly, or whether there are better ways to achieve my goals.
It's difficult to understand what logic you are trying to implement.
But, if you absolutely must perform several non-atomic actions in a procedure and make sure that the whole block of code is not executed again while it is running (for example, by another user), consider using sp_getapplock.
Places a lock on an application resource.
Your procedure may look similar to this:
CREATE PROCEDURE [dbo].[YourProcedure]
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    BEGIN TRANSACTION;
    BEGIN TRY
        DECLARE @VarLockResult int;
        EXEC @VarLockResult = sp_getapplock
            @Resource = 'UniqueStringFor_app_lock',
            @LockMode = 'Exclusive',
            @LockOwner = 'Transaction',
            @LockTimeout = 60000,
            @DbPrincipal = 'public';

        IF @VarLockResult >= 0
        BEGIN
            -- Acquired the lock
            -- perform your complex processing
            -- populate table with IDs
            -- update other tables using IDs
            -- ...
        END;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
    END CATCH;
END
When you SELECT the data, try using the HOLDLOCK and UPDLOCK hints while inside an explicit transaction. It will reduce the concurrency of OTHER transactions, but not yours.
http://msdn.microsoft.com/en-us/library/ms187373.aspx
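Applied to the question above, a sketch might look like this; the WHERE clause that selects the candidate orders is hypothetical, since the question does not show it.
BEGIN TRANSACTION

-- Take and hold update locks while selecting the candidates, so the rows cannot
-- be changed by other sessions between this SELECT and the later UPDATE.
INSERT INTO @Ids (id, [RowVersion])
SELECT Id, [RowVersion]
FROM Orders WITH (UPDLOCK, HOLDLOCK)
WHERE CombinableOrder_Id IS NULL   -- hypothetical candidate filter

-- ... the UPDATE / ROLLBACK / COMMIT logic from the question follows here ...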

Locally scoped begin-end declares (altering multiple triggers in a single transaction)

Goal
I need to alter a number of almost identical triggers on a number of tables (and a number of databases).
Therefore I want to make one big script and perform all the changes in one succeed-or-fail transaction.
My first attempt (that doesn't work)
---First alter trigger
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
BEGIN
    DECLARE @GarbleValue NVARCHAR(200)
    DECLARE @NewID NVARCHAR(20)

    SET @NewID = (SELECT TOP 1 usr_id FROM users ORDER BY usr_id DESC)
    SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
    UPDATE users SET usr_garble_value = @GarbleValue WHERE usr_id = @NewID
END
GO

--Subsequent alter trigger (there would be many more in the real-world usage)
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
BEGIN
    DECLARE @GarbleValue NVARCHAR(200)
    DECLARE @NewID NVARCHAR(20)

    SET @NewID = (SELECT TOP 1 seg_id FROM segment ORDER BY seg_id DESC)
    SET @GarbleValue = dbo.fn_GetRandomString(4) + @NewID + dbo.fn_GetRandomString(4)
    UPDATE segment SET seg_garble_value = @GarbleValue WHERE seg_id = @NewID
END
GO
Running each of the ALTER TRIGGER statements by itself works fine. But when both of them are run in the same transaction, the second ALTER fails because the variable names already exist.
How do I accomplish this? Is there any way to declare a variable locally within a begin-end scope, or do I need to rethink it completely?
(I'm aware that the "top 1" for fetching new records is probably not very clever, but that is another matter)
I think you've confused GO (the batch separator) with transactions. It shouldn't complain about the variable names being redeclared, provided the batch separators are still present:
BEGIN TRANSACTION
GO
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
---Etc
Go
ALTER TRIGGER [dbo].[trg_SegmentGarbleValue] ON [dbo].[segment]
FOR INSERT
AS
---Etc
Go
COMMIT
As to your note about TOP 1, it's worse than you think - a trigger runs once per statement, not once per row - it could be running in response to multiple rows having been inserted. And, happily, there is a pseudo-table available (called inserted) that contains exactly those rows which caused the trigger to fire - there's no need for you to go searching for those row(s) in the base table.
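A possible set-based rewrite of the first trigger using the inserted pseudo-table, as a sketch only; the table, column, and function names come from the question, and the CAST assumes usr_id is numeric.
ALTER TRIGGER [dbo].[trg_UserGarbleValue] ON [dbo].[users]
FOR INSERT
AS
BEGIN
    -- Update every row that was just inserted, however many there are,
    -- instead of hunting for the "latest" row in the base table.
    UPDATE u
    SET usr_garble_value = dbo.fn_GetRandomString(4)
                           + CAST(i.usr_id AS NVARCHAR(20))
                           + dbo.fn_GetRandomString(4)
    FROM users AS u
    INNER JOIN inserted AS i ON u.usr_id = i.usr_id;
END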

Setting Serializable isolation level in stored procedure

I have three tables and three stored procedures respectively to insert/update records in these tables. The first table has a primary key column, RecNo, which is auto-generated.
Around 1000 users are entering records into these tables simultaneously from different geographic locations. I am noticing that sometimes the inserts into the second or third table get missed, even though the inserts appeared to complete successfully and no warning was generated.
I want to know how the auto-generated primary key column handles concurrency issues. Do I need to set the isolation level to SERIALIZABLE at the top of each stored procedure?
I am using SQL Server 2008 R2 Express with default isolation level, i.e., READ COMMITTED.
One of my stored procedures looks like this:
ALTER PROCEDURE [dbo].[pheSch_CreateOrUpdateTubewellDetails]
    -- Add the parameters for the stored procedure here
    @TwTaskFlag nvarchar(6),
    @TwParameterID bigint,
    @SerialNumber bigint,
    @TotalNum int,
    @TwType nvarchar(50),
    @Depth nvarchar(60),
    @Diameter nvarchar(60),
    @WaterCapacity nvarchar(60),
    @PS nvarchar(15),
    @PSNum int,
    @PSType nvarchar(60),
    @Remarks nvarchar(80)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    BEGIN
        UPDATE tw_details
        SET
            TotalNum = @TotalNum,
            TwType = @TwType,
            Depth = @Depth,
            Diameter = @Diameter,
            WaterCapacity = @WaterCapacity,
            PS = @PS,
            PSNum = @PSNum,
            PSType = @PSType,
            Remarks = @Remarks
        WHERE twpid = @TwParameterID;
    END
END
You do not need to change the isolation level; the IDENTITY column is well suited for concurrent inserts.
If you have any triggers attached to the tables, then show their details.
But you say that INSERTs are being missed, and I do not see any INSERT statements here.
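Since the question does not show its INSERT statements, here is only a hedged sketch of the usual pattern for carrying the auto-generated key into the dependent inserts; every table and column name below is hypothetical.
DECLARE @NewRecNo bigint;

-- Insert into the first table, whose RecNo column is an IDENTITY.
INSERT INTO dbo.FirstTable (SomeColumn)
VALUES (N'example value');

-- SCOPE_IDENTITY() returns the identity value generated by this session and scope,
-- so concurrent inserts by other users cannot leak into it.
SET @NewRecNo = SCOPE_IDENTITY();

-- Use the captured key for the dependent rows.
INSERT INTO dbo.SecondTable (RecNo, OtherColumn) VALUES (@NewRecNo, N'second');
INSERT INTO dbo.ThirdTable  (RecNo, MoreColumn)  VALUES (@NewRecNo, N'third');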

SQL Server - Implementing sequences

I have a system which requires I have IDs on my data before it goes to the database. I was using GUIDs, but found them to be too big to justify the convenience.
I'm now experimenting with implementing a sequence generator which basically reserves a range of unique ID values for a given context. The code is as follows:
ALTER PROCEDURE [dbo].[Sequence.ReserveSequence]
    @Name varchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Ensure the parameters are valid
    IF (@Name IS NULL OR @Count IS NULL OR @Count < 0)
        RETURN -1;

    -- Reserve the sequence
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION

    -- Get the sequence ID, and the last reserved value of the sequence
    DECLARE @SequenceID int;
    DECLARE @LastValue bigint;

    SELECT TOP 1 @SequenceID = [ID], @LastValue = [LastValue]
    FROM [dbo].[Sequences]
    WHERE [Name] = @Name;

    -- Ensure the sequence exists
    IF (@SequenceID IS NULL)
    BEGIN
        -- Create the new sequence
        INSERT INTO [dbo].[Sequences] ([Name], [LastValue])
        VALUES (@Name, @Count);

        -- The first reserved value of a sequence is 1
        SET @FirstValue = 1;
    END
    ELSE
    BEGIN
        -- Update the sequence
        UPDATE [dbo].[Sequences]
        SET [LastValue] = @LastValue + @Count
        WHERE [ID] = @SequenceID;

        -- The sequence start value will be the last previously reserved value + 1
        SET @FirstValue = @LastValue + 1;
    END

    COMMIT TRANSACTION
END
The 'Sequences' table is just an ID, Name (unique), and the last allocated value of the sequence. Using this procedure I can request N values in a named sequence and use these as my identifiers.
This works great so far - it's extremely quick since I don't have to constantly ask for individual values, I can just use up a range of values and then request more.
The problem is that at extremely high frequency, calling the procedure concurrently can sometimes result in a deadlock. I have only found this to occur when stress testing, but I'm worried it'll crop up in production. Are there any notable flaws in this procedure, and can anyone recommend any way to improve on it? It would be nice to do this without transactions, for example, but I do need it to be 'thread safe'.
MS themselves offer a solution, and even they say it locks/deadlocks.
If you add lock hints to work around that, you will reduce concurrency under your high loads.
Options:
You could develop against the "Denali" CTP, which is the next release (it introduces native SEQUENCE objects)
Use IDENTITY and the OUTPUT clause like everyone else (see the sketch below)
Adopt/modify the solutions above
On DBA.SE there is "Emulate a TSQL sequence via a stored procedure": see dportas' answer which I think extends the MS solution.
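As a sketch of the IDENTITY-plus-OUTPUT option mentioned in the list above; the Widgets table and its columns are made up for illustration.
-- Hypothetical table with an IDENTITY key.
CREATE TABLE dbo.Widgets
(
    Id   bigint IDENTITY(1,1) PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);

DECLARE @NewIds TABLE (Id bigint);

-- OUTPUT captures every identity value generated by this multi-row insert,
-- with no race against other sessions' inserts.
INSERT INTO dbo.Widgets (Name)
OUTPUT INSERTED.Id INTO @NewIds
VALUES (N'first'), (N'second'), (N'third');

SELECT Id FROM @NewIds;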
I'd recommend sticking with the GUIDs if, as you say, this is mostly about composing data ready for a bulk insert (it's simpler than what I present below).
As an alternative, could you work with a restricted count? Say, 100 ID values at a time? In that case, you could have a table with an IDENTITY column, insert into that table, return the generated ID (say, 39), and then your code could assign all values between 3900 and 3999 (e.g. multiply up by your assumed granularity) without consulting the database server again.
Of course, this could be extended to allocating multiple ID blocks in a single call - provided that you're okay with some IDs potentially going unused. E.g. you need 638 IDs, so you ask the database for 7 new block values (which implies that you've allocated 700 values), use the 638 you want, and the remaining 62 never get assigned.
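A rough sketch of that block-allocation idea, with a hypothetical IdBlocks table and a block size of 100:
-- One IDENTITY row is inserted per reserved block of 100 IDs.
CREATE TABLE dbo.IdBlocks
(
    BlockId     int IDENTITY(1,1) PRIMARY KEY,
    AllocatedAt datetime NOT NULL DEFAULT GETDATE()
);
GO

DECLARE @BlockId int;

-- A single round trip reserves a whole block; the IDENTITY handles the concurrency.
INSERT INTO dbo.IdBlocks DEFAULT VALUES;
SET @BlockId = SCOPE_IDENTITY();

-- The caller can now hand out @BlockId * 100 .. @BlockId * 100 + 99
-- without contacting the server again.
SELECT @BlockId * 100 AS FirstValue, @BlockId * 100 + 99 AS LastValue;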
Can you get some kind of deadlock trace? For example, enable trace flag 1222 as shown here. Duplicate the deadlock. Then look in the SQL Server log for the deadlock trace.
Also, you might inspect what locks are taken out in your code by inserting a call to exec sp_lock or select * from sys.dm_tran_locks immediately before the COMMIT TRANSACTION.
Most likely you are observing a conversion deadlock. To avoid them, you want to make sure that your table is clustered and has a PK, but this advice is specific to 2005 and 2008 R2, and they can change the implementation, rendering this advice useless. Google up "Some heap tables may be more prone to deadlocks than identical tables with clustered indexes".
Anyway, if you observe an error during stress testing, it is likely that sooner or later it will occur in production as well.
You may want to use sp_getapplock to serialize your requests. Google up "Application Locks (or Mutexes) in SQL Server 2005". Also I described a few useful ideas here: "Developing Modifications that Survive Concurrency".
I thought I'd share my solution. It doesn't deadlock, nor does it produce duplicate values. An important difference between this and my original procedure is that it doesn't create the sequence if it doesn't already exist:
ALTER PROCEDURE [dbo].[ReserveSequence]
(
    @Name nvarchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON;

    IF (@Count <= 0)
    BEGIN
        SET @FirstValue = NULL;
        RETURN -1;
    END

    DECLARE @Result TABLE ([LastValue] bigint)

    -- Update the sequence last value, and get the previous one
    UPDATE [Sequences]
    SET [LastValue] = [LastValue] + @Count
    OUTPUT INSERTED.LastValue INTO @Result
    WHERE [Name] = @Name;

    -- Select the first value
    SELECT TOP 1 @FirstValue = [LastValue] + 1 FROM @Result;
END
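For illustration, a possible call; the sequence name 'ReportID' is just an example, and its row must already exist in [Sequences] since this version does not create it.
DECLARE @First bigint;

EXEC [dbo].[ReserveSequence]
    @Name = N'ReportID',
    @Count = 50,
    @FirstValue = @First OUTPUT;

-- Values @First through @First + 49 are now reserved for this caller.
SELECT @First AS FirstReservedValue;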

Stored Procedure Serialization Problem

Is the following stored procedure code robust for a multi-user application? It is working fine, but I was wondering whether there is a better way to do it and whether there are any performance issues.
The proc combines three SQL statements: 1. updating the hardware status in the Allocation table, 2. calculating the next appropriate primary key value for the new record to be inserted into the DEFECT_LOG table, and 3. inserting the values into the DEFECT_LOG table. I am also using a variable to return 1 if the transaction was successful.
ALTER PROCEDURE spCreateDefective
(
    @alloc_ID nvarchar(50),
    @cur_date datetime,
    @problem_desc nvarchar(MAX),
    @got_defect_date datetime,
    @trans_status tinyint OUTPUT --Used to check transaction status
)
AS
/* Transaction Maintainer Statements */
BEGIN TRAN transac1
SET XACT_ABORT ON
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

/* Declaration Area */
DECLARE @temp nvarchar(10)
DECLARE @def_ID nvarchar(20)

/* Updating Inventory Status to 'Defective' and Updating Release Date to current System date */
UPDATE INV_Allocation_DB
SET INV_STATUS = 'Defective', RELEASE_DATE = @cur_date
WHERE INV_ID = @alloc_ID

/* Calculates New Primary Key for the DEFECT_LOG Table */
-- Returns the largest number or 1 if no records are present in the table
SELECT @temp = COALESCE(CONVERT(int, SUBSTRING(MAX(DEFECT_ID), 5, 5)) + 1, 1) FROM DEFECT_LOG

SET @def_ID = 'DEF_' + RIGHT(REPLICATE('0', 5) + CONVERT(varchar(5), @temp), 5)

/* Insert statement for inserting data into DEFECT_LOG */
INSERT INTO DEFECT_LOG (DEFECT_ID, INV_ID, PROB_DESC, GOT_DEFECT_DATE)
VALUES (@def_ID, @alloc_ID, @problem_desc, @got_defect_date)

SET @trans_status = 1
COMMIT TRAN transac1

/* Returns 1 if transaction successful */
RETURN @trans_status
Using a SERIALIZABLE transaction level is not recommended unless you absolutely have to have one. It will increase the likelihood of blocking and decrease throughput.
You seem to be using it in order to guarantee a unique DEFECT_ID? Why not use an IDENTITY column for DEFECT_ID instead?
Personally, I would use an IDENTITY field as the true primary key, and then have an additional column with the alphanumeric identifier (possibly as a persisted computed column).
This should reduce the number of concurrency issues.
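A minimal sketch of that suggestion, reusing the 'DEF_' zero-padded format from the procedure above; the surrogate key column name DefectNo is made up.
CREATE TABLE dbo.DEFECT_LOG
(
    DefectNo        int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    -- The alphanumeric identifier is derived from the IDENTITY value,
    -- so no SERIALIZABLE transaction is needed to keep it unique.
    DEFECT_ID       AS ('DEF_' + RIGHT(REPLICATE('0', 5)
                        + CONVERT(varchar(5), DefectNo), 5)) PERSISTED,
    INV_ID          nvarchar(50) NOT NULL,
    PROB_DESC       nvarchar(MAX) NULL,
    GOT_DEFECT_DATE datetime NULL
);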
