I observed strange behavior inside a stored procedure with a SELECT on a table variable: on every subsequent iteration of the cursor, it returns the value that was fetched in the first iteration. Here is some sample code that demonstrates this.
DECLARE @id AS INT;
DECLARE @outid AS INT;

DECLARE sub_cursor CURSOR FAST_FORWARD
FOR SELECT [TestColumn]
    FROM testtable1;

OPEN sub_cursor;
FETCH NEXT FROM sub_cursor INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @Log TABLE (LogId BIGINT NOT NULL);

    PRINT 'id: ' + CONVERT (VARCHAR (10), @id);

    INSERT INTO Testtable2 (TestColumn)
    OUTPUT inserted.[TestColumn] INTO @Log
    VALUES (@id);

    IF @@ERROR = 0
    BEGIN
        SELECT TOP 1 @outid = LogId
        FROM @Log;

        PRINT 'Outid: ' + CONVERT (VARCHAR (10), @outid);

        INSERT INTO [dbo].[TestTable3] ([TestColumn])
        VALUES (@outid);
    END

    FETCH NEXT FROM sub_cursor INTO @id;
END

CLOSE sub_cursor;
DEALLOCATE sub_cursor;
However, while I was posting the code on SO and trying various combinations, I observed that removing TOP from the line below gives me the right values out of the table variable inside the cursor:

SELECT TOP 1 @outid = LogId FROM @Log;

which would make it like this:

SELECT @outid = LogId FROM @Log;

I am not sure what is happening here. I thought TOP 1 on the table variable should work, on the assumption that a new table is created on every iteration of the loop. Can someone shed light on table variable scoping and lifetime?
Update: I have a solution that circumvents the strange behavior.
As a workaround, I declare the table variable once at the top, before the loop, and delete all of its rows at the beginning of each iteration.
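In outline, the workaround looks like this (a sketch based on the code above):

DECLARE @Log TABLE (LogId BIGINT NOT NULL); -- declared once, before the loop

OPEN sub_cursor;
FETCH NEXT FROM sub_cursor INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM @Log; -- empty the table variable at the start of every iteration

    -- ... INSERT ... OUTPUT inserted.[TestColumn] INTO @Log ... as before ...

    FETCH NEXT FROM sub_cursor INTO @id;
END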
There are numerous things a bit off with this code.
First off, you roll back your embedded transaction on error, but I never see you commit it on success. As written, this will leak a transaction, which could cause major issues for you in the following code.
What might be confusing you about the @Log table situation is that SQL Server doesn't use the same variable scoping and lifetime rules as C++ or other conventional programming languages. Even though you declare your table variable inside the cursor block, you get only a single @Log table, which then lives for the remainder of the batch and gets multiple rows inserted into it.
As a result, your use of TOP 1 is not really meaningful, since there's no ORDER BY clause to impose any sort of deterministic ordering on the table. Without that, you get whatever order SQL Server sees fit to give you, which in this case appears to be the insertion order, giving you the first inserted element of that log table every time you run the SELECT.
If you truly want only the last ID value, you will need to provide a real ordering criterion for your @Log table: some form of autonumber or date field alongside the data column that can be used to impose the ordering you want.
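For example, a minimal sketch: the IDENTITY column Seq below is my addition, not part of the original code, and it makes "the most recently inserted row" well defined:

DECLARE @Log TABLE (
    Seq   BIGINT IDENTITY(1,1) NOT NULL, -- imposes a deterministic ordering
    LogId BIGINT NOT NULL
);

-- The most recently inserted LogId:
SELECT TOP 1 @outid = LogId
FROM @Log
ORDER BY Seq DESC;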
Related
I'm working on a huge SQL codebase and unfortunately it has a CURSOR which contains another two nested CURSORs (three cursors in total inside one stored procedure), which handle millions of rows to DELETE, UPDATE, and INSERT. This takes a whole lot of time because of row-by-row execution, and I wish to rewrite it using a SET-based approach.
Many articles recommend against CURSORs and suggest WHILE loops as the alternative, so I tried replacing the three CURSORs with three WHILE loops and nothing more. I get the same result, but there is no improvement in performance; it took the same time as the CURSORs.
Below is the basic structure of the code I'm working on (I will try to keep it as simple as possible), with comments explaining what each part is supposed to do.
DECLARE @projects TABLE (
    ProjectID INT,
    fieldA INT,
    fieldB INT,
    fieldC INT,
    fieldD INT)

INSERT INTO @projects
SELECT ProjectID, fieldA, fieldB, fieldC, fieldD
FROM ProjectTable

DECLARE @ProjectID INT, @DataID INT, @GroupID INT, @CollectionDate DATETIME,
        @PeriodID INT, @EndDate DATETIME, @UploadID INT /*declarations added; types assumed*/

DECLARE projects1 CURSOR LOCAL FOR /*First cursor - fetch each ProjectID from ProjectTable*/
SELECT ProjectID FROM @projects

OPEN projects1
FETCH NEXT FROM projects1 INTO @ProjectID
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        BEGIN TRAN

        DELETE td FROM T_PROJECTGROUPSDATA td
        WHERE td.ID = @ProjectID

        DECLARE datasets CURSOR FOR /*Second cursor - get the CollectionDate field from datasetsTable for every project fetched in the cursor above*/
        SELECT DataID, GroupID, CollectionDate
        FROM datasetsTable
        WHERE datasetsTable.ProjectID = @ProjectID /*let's say this fetches ten records for a single ProjectID*/

        OPEN datasets
        FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate
        WHILE @@FETCH_STATUS = 0
        BEGIN
            DECLARE period CURSOR FOR /*Third cursor - process the records from another table called period with the @CollectionDate fetched above*/
            SELECT ID, dbo.fn_GetEndOfPeriod(ID)
            FROM T_PERIODS
            WHERE DATEDIFF(dd, @CollectionDate, dbo.fn_GetEndOfPeriod(ID)) >= 0 /*let's say this fetches 20 records for a single @CollectionDate*/
            ORDER BY [YEAR], [Quarter]

            OPEN period
            FETCH NEXT FROM period INTO @PeriodID, @EndDate
            WHILE @@FETCH_STATUS = 0
            BEGIN
                IF EXISTS (some conditions No - 1)
                BEGIN
                    BREAK
                END

                IF EXISTS (some conditions No - 2)
                BEGIN
                    FETCH NEXT FROM period INTO @PeriodID, @EndDate
                    CONTINUE
                END

                /*get the appropriate ID from the T_UPLOADS table for the current ProjectID and PeriodID fetched*/
                SET @UploadID = (SELECT ID FROM T_UPLOADS u WHERE u.project_savix_ID = @ProjectID AND u.PERIOD_ID = @PeriodID AND u.STATUS = 3)

                /*update some fields in the T_UPLOADS table for the current ProjectID and PeriodID fetched*/
                UPDATE T_UPLOADS
                SET fieldA = mp.fieldA, fieldB = mp.fieldB
                FROM @projects mp
                WHERE T_UPLOADS.ID = @UploadID AND mp.ProjectID = @ProjectID

                /*insert some records into the T_PROJECTGROUPSDATA table for the current ProjectID and PeriodID fetched*/
                INSERT INTO T_PROJECTGROUPSDATA (fieldA, fieldB, fieldC, fieldD, uploadID)
                SELECT fieldA, fieldB, fieldC, fieldD, @UploadID
                FROM @projects
                WHERE DataID = @DataID

                FETCH NEXT FROM period INTO @PeriodID, @EndDate
            END
            CLOSE period
            DEALLOCATE period

            FETCH NEXT FROM datasets INTO @DataID, @GroupID, @CollectionDate
        END
        CLOSE datasets
        DEALLOCATE datasets

        COMMIT
    END TRY
    BEGIN CATCH
        /*Error handling*/
        IF @@TRANCOUNT > 0
            ROLLBACK
    END CATCH

    FETCH NEXT FROM projects1 INTO @ProjectID
END
CLOSE projects1
DEALLOCATE projects1

SELECT 1 AS success
I would appreciate suggestions on how to rewrite this code to follow a SET-based approach.
Until the table structure and sample data with the expected results are provided, here are a few quick things I see that can be improved (some of them already mentioned by others above):
A WHILE loop is also a cursor in disguise, so changing to a WHILE loop is not going to make things any faster.
Use a LOCAL FAST_FORWARD cursor unless you need to backtrack over records. This makes execution much faster.
Yes, I agree that a SET-based approach would be the fastest in most cases; however, if you must store an intermediate result set somewhere, I would suggest using a temp table instead of a table variable. A temp table is the 'lesser evil' of these two options. Here are a few reasons why you should try to avoid using a table variable:
Since SQL Server has no statistics on a table variable while building the execution plan, it always assumes that the table variable will return only one record, and it grants only the corresponding amount of memory for executing the query. In reality, the table variable might hold millions of records during execution. If that happens, SQL Server is forced to spill the data to disk during execution (and you will see lots of PAGEIOLATCH waits in sys.dm_os_wait_stats), making the queries much slower.
One way around this issue is to add the statement-level hint OPTION (RECOMPILE) to each query where a table variable is used. This forces SQL Server to construct the execution plan of those queries at runtime, which avoids the low memory grant. The downside is that SQL Server can no longer take advantage of an already cached execution plan for that stored procedure and requires a recompilation every time, which hurts performance to some extent. So unless you know that the data in the underlying tables changes frequently, or that the stored procedure itself is executed infrequently, this approach is not recommended by Microsoft MVPs.
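For instance, a query against the @projects table variable from the question could be hinted like this (a sketch, not the original code):

SELECT mp.ProjectID, mp.fieldA, mp.fieldB
FROM @projects mp
OPTION (RECOMPILE); -- plan is built at runtime, when the actual row count is known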
Blindly replacing a cursor with a WHILE loop is not a recommended option: it will not improve your performance and might even hurt it.
When you define a cursor with a plain DECLARE C CURSOR, you are in fact creating a SCROLL cursor, in which all fetch options (FIRST, LAST, PRIOR, NEXT, RELATIVE, ABSOLUTE) are available.
When you only need FETCH NEXT as the fetch option, you can declare the cursor as FAST_FORWARD.
Here is the quote about FAST_FORWARD cursor in Microsoft docs:
Specifies that the cursor can only move forward and be scrolled from the first to the last row. FETCH NEXT is the only supported fetch option. All insert, update, and delete statements made by the current user (or committed by other users) that affect rows in the result set are visible as the rows are fetched. Because the cursor cannot be scrolled backward, however, changes made to rows in the database after the row was fetched are not visible through the cursor. Forward-only cursors are dynamic by default, meaning that all changes are detected as the current row is processed. This provides faster cursor opening and enables the result set to display updates made to the underlying tables. While forward-only cursors do not support backward scrolling, applications can return to the beginning of the result set by closing and reopening the cursor.
So you can declare your cursors using DECLARE <CURSOR NAME> FAST_FORWARD FOR ... and you will get a noticeable improvement.
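Applied to the first cursor from the question, that would look like:

DECLARE projects1 CURSOR LOCAL FAST_FORWARD FOR
    SELECT ProjectID FROM @projects;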
I think all the cursor code above can be simplified to something like this:
DROP TABLE IF EXISTS #Source;
SELECT DISTINCT p.ProjectID,p.fieldA,p.fieldB,p.fieldC,p.fieldD,u.ID AS [UploadID]
INTO #Source
FROM ProjectTable p
INNER JOIN DatasetsTable d ON d.ProjectID = p.ProjectID
INNER JOIN T_PERIODS s ON DATEDIFF(DAY,d.CollectionDate,dbo.fn_GetEndOfPeriod(s.ID)) >= 0
INNER JOIN T_UPLOADS u ON u.project_savix_ID = p.ProjectID AND u.PERIOD_ID = s.ID AND u.STATUS = 3
WHERE NOT EXISTS (some conditions No - 1)
AND NOT EXISTS (some conditions No - 2)
;
UPDATE u SET u.fieldA = s.fieldA, u.fieldB = s.fieldB
FROM T_UPLOADS u
INNER JOIN #Source s ON s.UploadID = u.ID
;
INSERT INTO T_PROJECTGROUPSDATA (fieldA,fieldB,fieldC,fieldD,uploadID)
SELECT DISTINCT s.fieldA,s.fieldB,s.fieldC,s.fieldD,s.UploadID
FROM #Source s
;
DROP TABLE IF EXISTS #Source;
Also it would be nice to know the details of "some conditions No - 1" and "some conditions No - 2", as the query may differ depending on them.
Is there a way to know if the data in a SQL Server 2008 R2 table has changed since the last time you used it? I would like to know of any type of change -- whether a new record has been inserted or an existing one has been modified or deleted.
I am not interested in what the particular change might have been. I am only interested in a boolean value that indicates whether or not the table data has been changed.
Finally, I want a simple solution that does not involve writing a trigger for each CRUD operation and then having that trigger update some other log table.
I have a C# program that is meant to insert a large amount of initial data into some database tables. This is a one-off operation that is supposed to happen only once, or rarely ever again, in the life of the application. During development and testing, though, we use this program a lot.
Currently, with the roughly 10 tables it inserts data into, each having about 21,000 rows, the program takes about 45 seconds to run. This isn't really a huge problem, as it is a one-off operation that will in any case be done internally before shipping the product to the customer.
Still, I would like to minimize this time. So, I want to not insert data into a table if there has been no change in the table data since my program last used it.
My colleague told me that I could use the CHECKSUM_AGG function in T-SQL. My question(s) are:
1) If I compute the CHECKSUM_AGG(Cast(NumericPrimaryKeyIdColumn AS int)), then the checksum only changes if a new row has been added or an existing one deleted, right? If someone has only modified values of other columns of an existing row in the table, that will have no impact on the checksum aggregate of the ID column, right? Or will it?
2) Is there another way I can solve the problem of knowing whether table data has changed since the last time my program used it?
This is very close to what I already had in mind and what @user3003007 mentioned.
One way I am thinking of is to take a CHECKSUM(*) or CHECKSUM(Columns, I, Am, Interested, In) for each such table and then do an aggregate checksum on the checksum of each row, like so:
SELECT CHECKSUM_AGG(CAST(CHECKSUM(*) as int)) FROM TableName;
This is still not a fully reliable method, as CHECKSUM does not work on some data types. If I have a column of type text or ntext, CHECKSUM will fail.
Fortunately for me, I do not have such data types in the list of columns I am interested in, so this works for me.
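If you do need to include such a column, one workaround (my assumption; the column names are illustrative) is to CONVERT it to NVARCHAR(MAX) first, since CHECKSUM accepts that type:

SELECT CHECKSUM_AGG(CAST(CHECKSUM(OtherColumn, CONVERT(NVARCHAR(MAX), MyNtextColumn)) AS int))
FROM TableName;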
Have you investigated Change Data Capture?
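For reference, enabling it takes two calls: one at the database level and one per table. A minimal sketch (dbo.MyTable is a placeholder, and CDC is only available in editions that support it):

EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyTable',
    @role_name     = NULL;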
You can use a combination of hashing and CHECKSUM_AGG. The approach below works as long as the string values do not overflow the HASHBYTES function. It works by converting all of the columns to strings, concatenating those, hashing the concatenated string, turning the hash into an integer, placing all of those values into a temp table, and then running CHECKSUM_AGG on the temp table. It could easily be adapted to iterate across all real tables.
Edit: Combining MD5 and CHECKSUM_AGG looks like it works, at least for somewhat narrow tables:
declare @tablename sysname
set @tablename = 'MyTableName'

declare @sql varchar(max)
set @sql = 'select convert(int,HASHBYTES(''MD5'','''''

declare c cursor for
    select column_name
    from INFORMATION_SCHEMA.COLUMNS
    where table_name = @tablename
    order by ordinal_position -- keep the column order deterministic so the hash is stable

open c
declare @cname sysname
fetch next from c into @cname
while @@FETCH_STATUS = 0
begin
    -- quotename() guards against column names that need bracketing
    set @sql = @sql + '+ coalesce(convert(varchar,' + quotename(@cname) + '),'''')'
    fetch next from c into @cname
end
close c
deallocate c

set @sql = @sql + ')) as CheckSumVal
into ##myresults from ' + @tablename

print @sql
exec(@sql)

select CHECKSUM_AGG(CheckSumVal) from ##myresults
drop table ##myresults
How do you know that the change was made by you or that the change is relevant to your needs? If you're not going to do it properly (delete & re-insert or merge) then the whole thing sounds futile to me.
In any case, if you spend only an hour researching, implementing and testing your change, you'd have to run it 80 times (and sit and watch it) before you've broken even on your time. So why bother?
Add an extra column such as last_updated with a default of getdate().
Or add an extra column of type int and declare a [Flags] enum in your application so you can do bitwise operations on it.
Then you can apply CHECKSUM on that column; there is no datatype problem.
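A minimal sketch of that idea; the table and column names are placeholders:

ALTER TABLE dbo.MyTable ADD last_updated datetime NOT NULL
    CONSTRAINT DF_MyTable_last_updated DEFAULT (GETDATE());
ALTER TABLE dbo.MyTable ADD change_flags int NOT NULL
    CONSTRAINT DF_MyTable_change_flags DEFAULT (0);

-- CHECKSUM over an int column sidesteps the unsupported-datatype problem:
SELECT CHECKSUM_AGG(change_flags) FROM dbo.MyTable;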
An easy way to check this is to use the system DMVs to look at the index usage stats. The first index on the table (index_id 0 for a heap, 1 for a clustered index) represents the table data itself, so it can be used to check when the last update occurred:
SELECT DB_NAME([database_id]) AS [database_name],
       OBJECT_NAME([object_id], [database_id]) AS [table_name],
       [user_seeks],
       [user_scans],
       [user_lookups],
       [user_updates],
       [last_user_seek],
       [last_user_scan],
       [last_user_lookup],
       [last_user_update]
FROM sys.dm_db_index_usage_stats
WHERE [index_id] IN (0, 1)
From this, you can see the last time that the table was updated as well as how many updates there have been (I have left in the seeks and scans etc just in case you're interested).
It's worth taking note that this data does not persist after a reboot, but it's pretty simple to load it into a permanent table every now and then in order to make the data permanent.
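A sketch of persisting the stats; dbo.IndexUsageHistory is a table you would create yourself for this purpose:

INSERT INTO dbo.IndexUsageHistory ([database_id], [object_id], [user_updates], [last_user_update], [captured_at])
SELECT [database_id], [object_id], [user_updates], [last_user_update], GETDATE()
FROM sys.dm_db_index_usage_stats
WHERE [index_id] IN (0, 1);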
I'm getting started with my first use of a cursor in a stored procedure in SQL Server 2008. I've done some preliminary reading and I understand that they have significant performance limitations. In my current case I think they're necessary (I want to run multiple stored procedures for each stock symbol in a symbols table).
Edit:
The sprocs I'll be calling on each symbol will for the most part be insert operations to calculate symbol-dependent values, such as the 5-day moving average, average daily volume, and ATR (average true range). Most of these values will be calculated from data in a daily pricing and volume table. I'd like to streamline the retrieval of values that would otherwise be retrieved redundantly: for example, I'd like to get the daily pricing and volume data for each symbol into a table variable, and that temp table will then be passed to the stored procedure that calls each of the aggregate functions I just mentioned. Hope that makes sense.
So my initial "outer loop" cursor-based stored procedure is below. It times out after several minutes, without returning anything to the output window.
ALTER PROCEDURE dbo.sprocSymbolDependentAggsDriver2
AS
    DECLARE @symbol nchar(10)

    DECLARE symbolCursor CURSOR
    STATIC FOR
        SELECT Symbol FROM tblSymbolsMain ORDER BY Symbol

    OPEN symbolCursor
    FETCH NEXT FROM symbolCursor INTO @symbol
    WHILE @@FETCH_STATUS = 0
        SET @symbol = @symbol + ': Test.'
        FETCH NEXT FROM symbolCursor INTO @symbol
    CLOSE symbolCursor
    DEALLOCATE symbolCursor
When I run it without the @symbol local variable and eliminate the assignment to it in the while loop, it seems to run OK. Is there a clear violation of performance best practices in that assignment? Thanks.
"In my current case I think they're necessary (I want to run multiple
stored procedures for each stock symbol in a symbols table."
Cursors are rarely necessary.
From your example above, I think a simple WHILE loop will easily take the place of your cursor. Adapted from SQL Cursors - How to avoid them (one of my favorite SQL bookmarks)
-- Create a temporary table...
CREATE TABLE #Symbols (
    RowID int IDENTITY(1, 1),
    Symbol nvarchar(max)
)

DECLARE @NumberRecords int, @RowCount int
DECLARE @Symbol nvarchar(max)

-- Get the data that you want to loop over
INSERT INTO #Symbols (Symbol)
SELECT Symbol
FROM tblSymbolsMain
ORDER BY Symbol

-- Get the number of records you just grabbed
SET @NumberRecords = @@ROWCOUNT
SET @RowCount = 1

-- Just do a WHILE loop. No cursor necessary.
WHILE @RowCount <= @NumberRecords
BEGIN
    SELECT @Symbol = Symbol
    FROM #Symbols
    WHERE RowID = @RowCount

    EXEC <myProc1> @Symbol
    EXEC <myProc2> @Symbol
    EXEC <myProc3> @Symbol

    SET @RowCount = @RowCount + 1
END

DROP TABLE #Symbols
You don't really need all that explicit cursor jazz to build a string. Here is probably a more efficient way to do it:
DECLARE @symbol NVARCHAR(MAX) = N'';

SELECT @symbol += ': Test.'
FROM dbo.tblSymbolsMain
ORDER BY Symbol;
Though I suspect you actually wanted to see the names of the symbol, e.g.
DECLARE @symbol NVARCHAR(MAX) = N'';

SELECT @symbol += N':' + Symbol
FROM dbo.tblSymbolsMain
ORDER BY Symbol;
One caveat is that while you will typically observe the rows concatenated in the order you expect, the ordering is not guaranteed. So if you want to stick with the cursor, at least declare it as follows:
DECLARE symbolCursor CURSOR
LOCAL STATIC READ_ONLY FORWARD_ONLY
FOR
...
Also it seems to me like NCHAR(10) is not sufficient to hold the data you're trying to stuff into it, unless you only have one row (which is why I chose NVARCHAR(MAX) above).
And I agree with Abe... it is quite possible you don't need to fire a stored procedure for every row in the cursor, but to suggest ways around that (which will almost certainly be more efficient), we'd have to understand what those stored procedures actually do.
You need a BEGIN ... END here:
WHILE @@FETCH_STATUS = 0 BEGIN
    SET @symbol = @symbol + ': Test.'
    FETCH NEXT FROM symbolCursor INTO @symbol
END
Also try DECLARE symbolCursor CURSOR LOCAL READ_ONLY FORWARD_ONLY instead of STATIC to improve performance.
After reading all the suggestions, I ended up using an old trick and it worked miracles!
I had a cursor which was taking almost 3 minutes to run, while the enclosing query was instant. I have other databases with more complex cursors that take only a second or less, so I ruled out a global problem with cursors. My solution:
Detach the database in question, but ensure you tick Update Statistics.
Attach the database and check performance
This seems to optimize all the performance parameters without any detailed tuning effort. I am using SQL Express 2008 R2.
Would like to know your experience.
I have a SQL Server 2005 cursor operating over a table variable called @workingSet.
Sometimes rows can be related, and in this case I process the fetched row and its related rows at the same time. I then remove the related records from @workingSet, as I don't need to process them in the loop.
In a @workingSet with 7 rows, the first two are related, so when I process row 1 I also process row 2. I remove row 2 from the cursor source (@workingSet) and then fetch next. The problem is that the fetch returns the second row in @workingSet (the one I deleted on the previous iteration).
I was under the impression that this could be done, i.e. that you can delete an item from the source a cursor operates on and the cursor will honour the delete in subsequent fetches.
The answer appears to be that the table variable used as the source of the cursor needs to have a primary key. I've added one and all works correctly.
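In other words, something along these lines (the column list is illustrative):

DECLARE @workingSet TABLE (
    ID INT NOT NULL PRIMARY KEY  -- adding this key is what made the deletes visible
    -- ... remaining columns ...
);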
I'm not massively familiar with cursors, but from a quick test at this end: if you avoid declaring the cursor with the STATIC or KEYSET options, changes to the underlying table are reflected in the cursor.
SET NOCOUNT ON;

DECLARE @WorkingTable TABLE(C int)

INSERT INTO @WorkingTable VALUES (1),(2),(3)

DECLARE @C int

DECLARE wt_cursor CURSOR
DYNAMIC /*Or left blank, but not STATIC or KEYSET*/
FOR
SELECT C
FROM @WorkingTable

OPEN wt_cursor;

FETCH NEXT FROM wt_cursor
INTO @C

DELETE FROM @WorkingTable

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @C;
    FETCH NEXT FROM wt_cursor
    INTO @C;
END

CLOSE wt_cursor;
DEALLOCATE wt_cursor;
I have a system which requires I have IDs on my data before it goes to the database. I was using GUIDs, but found them to be too big to justify the convenience.
I'm now experimenting with a sequence generator that basically reserves a range of unique ID values for a given context. The code is as follows:
ALTER PROCEDURE [dbo].[Sequence.ReserveSequence]
    @Name varchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Ensure the parameters are valid
    IF (@Name IS NULL OR @Count IS NULL OR @Count < 0)
        RETURN -1;

    -- Reserve the sequence
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION

    -- Get the sequence ID, and the last reserved value of the sequence
    DECLARE @SequenceID int;
    DECLARE @LastValue bigint;

    SELECT TOP 1 @SequenceID = [ID], @LastValue = [LastValue]
    FROM [dbo].[Sequences]
    WHERE [Name] = @Name;

    -- Ensure the sequence exists
    IF (@SequenceID IS NULL)
    BEGIN
        -- Create the new sequence
        INSERT INTO [dbo].[Sequences] ([Name], [LastValue])
        VALUES (@Name, @Count);

        -- The first reserved value of a sequence is 1
        SET @FirstValue = 1;
    END
    ELSE
    BEGIN
        -- Update the sequence
        UPDATE [dbo].[Sequences]
        SET [LastValue] = @LastValue + @Count
        WHERE [ID] = @SequenceID;

        -- The sequence start value will be the last previously reserved value + 1
        SET @FirstValue = @LastValue + 1;
    END

    COMMIT TRANSACTION
END
The 'Sequences' table is just an ID, Name (unique), and the last allocated value of the sequence. Using this procedure I can request N values in a named sequence and use these as my identifiers.
This works great so far - it's extremely quick since I don't have to constantly ask for individual values, I can just use up a range of values and then request more.
The problem is that at extremely high frequency, calling the procedure concurrently can sometimes result in a deadlock. I have only seen this occur under stress testing, but I'm worried it will crop up in production. Are there any notable flaws in this procedure, and can anyone recommend a way to improve it? It would be nice to do this without transactions, for example, but I do need it to be 'thread safe'.
MS themselves offer a solution and even they say it locks/deadlocks.
If you want to add some lock hints then you'd reduce concurrency for your high loads
Options:
You could develop against the "Denali" CTP which is the next release
Use IDENTITY and the OUTPUT clause like everyone else
Adopt/modify the solutions above
On DBA.SE there is "Emulate a TSQL sequence via a stored procedure": see dportas' answer which I think extends the MS solution.
I'd recommend sticking with the GUIDs if, as you say, this is mostly about composing data ready for a bulk insert (it's simpler than what I present below).
As an alternative, could you work with a restricted count? Say, 100 ID values at a time? In that case, you could have a table with an IDENTITY column, insert into that table, get back the generated ID (say, 39), and then your code could assign all values between 3900 and 3999 (i.e. multiply up by your assumed granularity) without consulting the database server again.
Of course, this could be extended to allocating multiple blocks in a single call, provided that you're okay with some IDs potentially going unused. E.g. if you need 638 IDs, you ask the database to assign you 7 new block values (which implies that you've allocated 700 values), use the 638 you want, and the remaining 62 never get assigned.
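A sketch of that scheme with a granularity of 100 (the table and column names are mine):

CREATE TABLE dbo.IdBlocks (
    BlockID  int IDENTITY(1,1) PRIMARY KEY,
    Reserved datetime NOT NULL DEFAULT (GETDATE())
);

DECLARE @block int;
INSERT INTO dbo.IdBlocks DEFAULT VALUES;  -- one row per reserved block
SET @block = SCOPE_IDENTITY();

-- e.g. @block = 39 reserves IDs 3900 through 3999 for this caller
SELECT @block * 100 AS FirstId, @block * 100 + 99 AS LastId;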
Can you get some kind of deadlock trace? For example, enable trace flag 1222 as shown here. Duplicate the deadlock. Then look in the SQL Server log for the deadlock trace.
Also, you might inspect what locks are taken out in your code by inserting a call to exec sp_lock or select * from sys.dm_tran_locks immediately before the COMMIT TRANSACTION.
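For example:

DBCC TRACEON (1222, -1);          -- write deadlock details to the SQL Server error log

-- inside the procedure, just before COMMIT TRANSACTION:
SELECT * FROM sys.dm_tran_locks;  -- or: EXEC sp_lock;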
Most likely you are observing a conversion deadlock. To avoid them, you want to make sure that your table is clustered and has a primary key; but this advice is specific to 2005 and 2008 R2, and the implementation can change, rendering it useless. Google "Some heap tables may be more prone to deadlocks than identical tables with clustered indexes".
Anyway, if you observe an error during stress testing, it is likely that sooner or later it will occur in production as well.
You may want to use sp_getapplock to serialize your requests. Google up "Application Locks (or Mutexes) in SQL Server 2005". Also I described a few useful ideas here: "Developing Modifications that Survive Concurrency".
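A sketch of serializing the procedure body with sp_getapplock (the resource name is arbitrary):

BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock
    @Resource  = 'ReserveSequence',
    @LockMode  = 'Exclusive',
    @LockOwner = 'Transaction';   -- released automatically at COMMIT/ROLLBACK

IF @rc >= 0
BEGIN
    -- read and update the [Sequences] row here; callers are now serialized
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;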
I thought I'd share my solution. It doesn't deadlock, nor does it produce duplicate values. An important difference between this and my original procedure is that it doesn't create the sequence if it doesn't already exist:
ALTER PROCEDURE [dbo].[ReserveSequence]
(
    @Name nvarchar(100),
    @Count int,
    @FirstValue bigint OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON;

    IF (@Count <= 0)
    BEGIN
        SET @FirstValue = NULL;
        RETURN -1;
    END

    DECLARE @Result TABLE ([LastValue] bigint)

    -- Update the sequence last value, and get the previous one
    UPDATE [Sequences]
    SET [LastValue] = [LastValue] + @Count
    OUTPUT DELETED.LastValue INTO @Result
    WHERE [Name] = @Name;

    -- Select the first value
    SELECT TOP 1 @FirstValue = [LastValue] + 1 FROM @Result;
END
END