Deletion of rows in a table causes LOCKS - SQL Server

I am running the following command to delete rows in batches out of a large table (150 million rows):
DECLARE @RowCount int
WHILE 1=1
BEGIN
    DELETE TOP (10000) t1
    FROM table t1
    INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
    WHERE t1.YearProcessed <= 2007
    SET @RowCount = @@ROWCOUNT
    IF (@RowCount < 10000) BREAK
END
This table is HIGHLY used. The loop does delete records, but it also takes locks on some of them, which throws errors to users (not acceptable in the environment we're in).
How can I delete older records without causing locks? Should I reduce the batch size from 10000 records to 1000? How would this affect log size (we have very little hard drive space left for large log growth)?
Any suggestions?

I have seen similar sporadic problems in the past where, even in small batches of 5000 records, locking would still happen. In our case, each delete/update was contained in its own Begin Tran...Commit loop. To correct the problem, the logic of
WaitFor DELAY '00:00:00:01'
was placed at the top of each pass through the loop, and that corrected the problem.
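
For illustration, a minimal sketch of that shape, combining the question's batch delete with the delay (dbo.BigTable is a hypothetical stand-in for the question's table):

DECLARE @RowCount int
WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:00:01'  -- brief pause so competing sessions can acquire their locks
    BEGIN TRANSACTION
    DELETE TOP (5000) t1
    FROM dbo.BigTable t1         -- hypothetical table name
    WHERE t1.YearProcessed <= 2007
    SET @RowCount = @@ROWCOUNT   -- capture before COMMIT, which resets @@ROWCOUNT
    COMMIT TRANSACTION
    IF (@RowCount < 5000) BREAK
END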

First of all, it looks like your DELETE is performing a Clustered Index Scan. I recommend creating the following index:
CREATE INDEX [IX_IndexName] ON t1 (YearProcessed, PrimaryKey)
Second, is there any need to join the t2 table?
Then use the following query to delete the rows, assuming that your PrimaryKey column is of type INT:
DECLARE @ids TABLE (PrimaryKey INT)
WHILE 1=1
BEGIN
    INSERT @ids
    SELECT DISTINCT TOP (10000) t1.PrimaryKey
    FROM table t1
    INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
    WHERE t1.YearProcessed <= 2007

    IF @@ROWCOUNT = 0 BREAK

    DELETE FROM table
    WHERE PrimaryKey IN (SELECT PrimaryKey FROM @ids)

    DELETE FROM @ids
END
And do not forget to remove the t2 table from the join if it is not needed.
If this still causes locks, lower the number of rows deleted in each round.

I think you're on the right track.
Look at these two articles, too:
http://support.microsoft.com/kb/323630
http://www.bennadel.com/blog/477-SQL-Server-NOLOCK-ROWLOCK-Directives-To-Improve-Performance.htm
and:
http://www.dbforums.com/microsoft-sql-server/985516-deleting-without-locking.html
Before you run the delete, check the estimated query plan to see if it is doing an index seek for the delete, or still doing a full table scan/access.

In addition to the other suggestions (that aim at reducing the work done during deletion) you can also configure SQL Server to not block other readers while doing deletes on a table.
This can be done by using "snapshot isolation" which was introduced with SQL Server 2005:
http://msdn.microsoft.com/en-us/library/ms345124%28v=sql.90%29.aspx
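For reference, enabling it looks roughly like this (the database name is hypothetical; READ_COMMITTED_SNAPSHOT needs the database free of other active connections to switch on):

-- Lets sessions opt in with SET TRANSACTION ISOLATION LEVEL SNAPSHOT
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Changes the default READ COMMITTED behavior to row versioning,
-- so existing readers stop blocking on writers without code changes
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;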

If you have anything with cascading deletes, make sure they are indexed.
Highlighting the DELETE query and clicking "Display Estimated Execution Plan" will show suggested indexes, which in my case included some cascading deletes.
Adding indexes for those made the delete a lot faster, but I still wouldn't try to delete all rows at once.
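
As a hedged illustration of that point (table and column names invented): if dbo.OrderLines has a foreign key to dbo.Orders with ON DELETE CASCADE, an index on the foreign-key column lets the cascade seek instead of scanning the child table on every delete:

-- Index the FK column that the cascading delete has to probe
CREATE NONCLUSTERED INDEX IX_OrderLines_OrderID
    ON dbo.OrderLines (OrderID);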

The best way that I have found is from ASP.NET's DeleteExpiredSessions: you do a READUNCOMMITTED select, put the records in a temp table, and then delete the records using a CURSOR.
ALTER PROCEDURE [dbo].[DeleteExpiredSessions]
AS
SET NOCOUNT ON
SET DEADLOCK_PRIORITY LOW

DECLARE @now datetime
SET @now = GETUTCDATE()

CREATE TABLE #tblExpiredSessions
(
    SessionID nvarchar(88) NOT NULL PRIMARY KEY
)

INSERT #tblExpiredSessions (SessionID)
SELECT SessionID
FROM [ASPState].dbo.ASPStateTempSessions WITH (READUNCOMMITTED)
WHERE Expires < @now

IF @@ROWCOUNT <> 0
BEGIN
    DECLARE ExpiredSessionCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY
    FOR SELECT SessionID FROM #tblExpiredSessions

    DECLARE @SessionID nvarchar(88)

    OPEN ExpiredSessionCursor
    FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DELETE FROM [ASPState].dbo.ASPStateTempSessions
        WHERE SessionID = @SessionID AND Expires < @now
        FETCH NEXT FROM ExpiredSessionCursor INTO @SessionID
    END
    CLOSE ExpiredSessionCursor
    DEALLOCATE ExpiredSessionCursor
END

DROP TABLE #tblExpiredSessions

RETURN 0

Try this:
DECLARE @RowCount int
WHILE 1=1
BEGIN
    BEGIN TRANSACTION
    DELETE TOP (10000) t1
    FROM table t1
    INNER JOIN table2 t2 ON t2.PrimaryKey = t1.PrimaryKey
    WHERE t1.YearProcessed <= 2007
    SET @RowCount = @@ROWCOUNT  -- capture before COMMIT, which resets @@ROWCOUNT
    COMMIT TRANSACTION
    IF (@RowCount < 10000) BREAK
END
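
On the original question's worry about log growth: committing each batch separately only helps if the log space can actually be reused between batches, which depends on the recovery model. A sketch, with hypothetical names:

-- SIMPLE recovery model: a CHECKPOINT between batches lets committed
-- log space be reused instead of growing the file
CHECKPOINT;

-- FULL recovery model: only a transaction log backup frees space for reuse
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase.trn';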


Trigger AFTER INSERT, UPDATE, DELETE to call stored procedure with table name and primary key

For a sync process, my SQL Server database should record a list of items that have changed - table name and primary key.
The DB already has a table and stored procedure to do this:
EXEC @ErrCode = dbo.SyncQueueItem "tableName", 1234;
I'd like to add triggers to a table to call this stored procedure on INSERT, UPDATE, DELETE. How do I get the key? What's the simplest thing that could possibly work?
CREATE TABLE new_employees
(
id_num INT IDENTITY(1,1),
fname VARCHAR(20),
minit CHAR(1),
lname VARCHAR(30)
);
GO
IF OBJECT_ID ('dbo.sync_new_employees','TR') IS NOT NULL
DROP TRIGGER sync_new_employees;
GO
CREATE TRIGGER sync_new_employees
ON new_employees
AFTER INSERT, UPDATE, DELETE
AS
DECLARE @Key Int;
DECLARE @ErrCode Int;
-- How to get the key???
SELECT @Key = 12345;
EXEC @ErrCode = dbo.SyncQueueItem "new_employees", @Key;
GO
The way to access the records changed by the operation is by using the Inserted and Deleted pseudo-tables that are provided to you by SQL Server.
Inserted contains any inserted records, or any updated records with their new values.
Deleted contains any deleted records, or any updated records with their old values.
More Info
When writing a trigger, to be safe, one should always code for the case when multiple records are acted upon. Unfortunately if you need to call a SP that means a loop - which isn't ideal.
The following code shows how this could be done for your example, and includes a method of detecting whether the operation is an Insert/Update/Delete.
declare @Key int, @ErrCode int, @Action varchar(6);
declare @Keys table (id int, [Action] varchar(6));

insert into @Keys (id, [Action])
select coalesce(I.id_num, D.id_num)
    , case when I.id_num is not null and D.id_num is not null then 'Update'
           when I.id_num is not null then 'Insert'
           else 'Delete' end
from Inserted I
full join Deleted D on I.id_num = D.id_num;

while exists (select 1 from @Keys) begin
    select top 1 @Key = id, @Action = [Action] from @Keys;
    exec @ErrCode = dbo.SyncQueueItem 'new_employees', @Key;
    delete from @Keys where id = @Key;
end
Further: in addition to solving your specified problem, it's worth noting a couple of points regarding the bigger picture.
As @Damien_The_Unbeliever points out, there are built-in mechanisms to accomplish change tracking which will perform much better.
If you still wish to handle your own change tracking, it would perform better if you could arrange to handle the entire recordset in one go rather than carrying out a row-by-row operation. There are two ways to accomplish this: a) move your change-tracking code inside the trigger and don't use an SP, or b) use a "User Defined Table Type" to pass the record-set of changes to the SP, as sketched below.
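
To make option b) concrete, here is a minimal sketch of the table-type approach. Everything here is an assumption about how the queue might look (the original dbo.SyncQueueItem internals aren't shown), so the type, procedure, and dbo.SyncQueue table are all hypothetical:

CREATE TYPE dbo.KeyActionList AS TABLE
(
    id       int NOT NULL,
    [Action] varchar(6) NOT NULL
);
GO
CREATE PROCEDURE dbo.SyncQueueItems
    @TableName sysname,
    @Items     dbo.KeyActionList READONLY  -- table-valued parameters must be READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based insert into the assumed queue table; no per-row loop
    INSERT INTO dbo.SyncQueue (TableName, RecordKey, [Action])
    SELECT @TableName, id, [Action]
    FROM @Items;
END
GO

The trigger would then declare @Keys as dbo.KeyActionList instead of an ad hoc table variable, fill it with the same insert...select as above, and finish with a single EXEC dbo.SyncQueueItems 'new_employees', @Keys; - no WHILE loop at all.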
You should use the Magic Table to get the data.
Usually, inserted and deleted tables are called Magic Tables in the context of a trigger. There are Inserted and Deleted magic tables in SQL Server. These tables are automatically created and managed by SQL Server internally to hold recently inserted, deleted and updated values during DML operations (Insert, Update and Delete) on a database table.
Inserted magic table
The Inserted table holds the recently inserted values, in other words, new data values. Hence recently added records are inserted into the Inserted table.
Deleted magic table
The Deleted table holds the recently deleted or updated values, in other words, old data values. Hence the old updated and deleted records are inserted into the Deleted table.
You can use the inserted and deleted magic tables to get the value of id_num:
SELECT TOP 1 @Key = id_num FROM inserted
Note: this code sample will only work for a single record in the insert scenario. For bulk insert/update scenarios you need to fetch the records from the inserted and deleted tables into a temp table or table variable and loop through it to pass each key to your procedure, or you can pass a table variable to your procedure and handle the multiple records there.
A DML trigger should operate on set data; otherwise only one row will be processed. It can be something like this. And of course, use the magic tables inserted and deleted.
CREATE TRIGGER dbo.tr_employees
ON dbo.employees --the table from Northwind database
AFTER INSERT,DELETE,UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    declare @tbl table (id int identity(1,1), delId int, insId int)

    --Use "magic tables" inserted and deleted
    insert @tbl(delId, insId)
    select d.EmployeeID, i.EmployeeID
    from inserted i --empty when "delete"
    full join deleted d --empty when "insert"
        on i.EmployeeID=d.EmployeeID

    declare @id int, @key int, @action char(1)
    select top 1 @id=id, @key=isnull(delId, insId),
        @action=case
            when delId is null then 'I'
            when insId is null then 'D'
            else 'U' end --just in case you need the operation executed
    from @tbl

    --do something for each row
    while @id is not null --instead of cursor
    begin
        --do the main action
        --exec dbo.sync 'employees', @key, @action

        --remove processed row
        delete @tbl where id=@id

        --refill the variables; reset @id first so the loop ends when @tbl is empty
        set @id = null
        select top 1 @id=id, @key=isnull(delId, insId),
            @action=case
                when delId is null then 'I'
                when insId is null then 'D'
                else 'U' end
        from @tbl
    end
END
Not the best solution, but just a direct answer to the question:
SELECT @Key = COALESCE(d.id_num, i.id_num)
FROM inserted i
FULL JOIN deleted d ON d.id_num = i.id_num;
Also not the best way (if not the worst - do not try this at home), but at least it will help with multiple values:
DECLARE @Key INT;
DECLARE @ErrCode INT;
DECLARE triggerCursor CURSOR LOCAL FAST_FORWARD READ_ONLY
FOR SELECT COALESCE(i.id_num, d.id_num) AS [id_num]
    FROM inserted i
    FULL JOIN deleted d ON d.id_num = i.id_num
    WHERE (
        COALESCE(i.fname,'') <> COALESCE(d.fname,'')
        OR COALESCE(i.minit,'') <> COALESCE(d.minit,'')
        OR COALESCE(i.lname,'') <> COALESCE(d.lname,'')
    );
OPEN triggerCursor;
FETCH NEXT FROM triggerCursor INTO @Key;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC @ErrCode = dbo.SyncQueueItem 'new_employees', @Key;
    FETCH NEXT FROM triggerCursor INTO @Key;
END
CLOSE triggerCursor;
DEALLOCATE triggerCursor;
A better way to build a trigger-based "value-change-tracker":
INSERT INTO [YourTableHistoryName] (id_num, fname, minit, lname, WhenHappened)
SELECT COALESCE(i.id_num, d.id_num) AS [id_num]
    , i.fname, i.minit, i.lname, CURRENT_TIMESTAMP AS [WhenHappened]
FROM inserted i
FULL JOIN deleted d ON d.id_num = i.id_num
WHERE ( COALESCE(i.fname,'') <> COALESCE(d.fname,'')
    OR COALESCE(i.minit,'') <> COALESCE(d.minit,'')
    OR COALESCE(i.lname,'') <> COALESCE(d.lname,'')
    );
The best (in my opinion) way to track changes is to use Temporal tables (SQL Server 2016+)
inserted/deleted in triggers will generate as many rows as were touched, and calling a stored proc per key would require a cursor or a similar row-by-row approach.
You should check out timestamp/rowversion in SQL Server. You could add that column to all the tables in question (not null; it auto-increments and is unique within the database).
You could add a unique index on that column in every table where you added it.
@@DBTS is the current timestamp; you can store today's @@DBTS and tomorrow scan all tables from that value to the current @@DBTS. timestamp/rowversion is incremented on every update and insert, but it won't track deletes; for deletes you can have a delete-only trigger that inserts the keys into a different table.
Change data capture or change tracking could do this more easily, but on a server with heavy volumes or a large number of data loads and partition switches, scanning the transaction log becomes a bottleneck, and in some cases you will have to remove change data capture to keep the transaction log from growing indefinitely.
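
A sketch of the rowversion approach described above (the table, column, and storage of the high-water mark are all hypothetical):

-- Add the tracking column; SQL Server fills it automatically on every insert and update
ALTER TABLE dbo.SomeTable ADD RowVer rowversion NOT NULL;
CREATE UNIQUE INDEX IX_SomeTable_RowVer ON dbo.SomeTable (RowVer);
GO
-- Persist today's high-water mark somewhere durable...
DECLARE @LastSync binary(8) = @@DBTS;
-- ...and on the next sync pick up everything inserted or updated since then
SELECT *
FROM dbo.SomeTable
WHERE RowVer > @LastSync;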

Using an IF condition in an INSERT - SQL Server

I have the following statement in my code
INSERT INTO #TProductSales (ProductID, StockQTY, ETA1)
VALUES (@ProductID, @StockQTY, @ETA1)
I want to do something like:
IF @ProductID exists THEN
UPDATE #TProductSales
ELSE
INSERT INTO #TProductSales
Is there a way I can do this?
The pattern is (without error handling):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE #TProductSales SET StockQty = @StockQty, ETA1 = @ETA1
WHERE ProductID = @ProductID;
IF @@ROWCOUNT = 0
BEGIN
    INSERT #TProductSales(ProductID, StockQTY, ETA1)
    VALUES(@ProductID, @StockQTY, @ETA1);
END
COMMIT TRANSACTION;
You don't need to perform an additional read of the #temp table here. You're already doing that by trying the update. To protect from race conditions, you do the same as you'd protect any block of two or more statements that you want to isolate: you'd wrap it in a transaction with an appropriate isolation level (likely serializable here, though that all only makes sense when we're not talking about a #temp table, since that is by definition serialized).
You're not any further ahead by adding an IF EXISTS check (and you would need to add locking hints to make that safe / serializable anyway), but you could be further behind, depending on how many times you update existing rows vs. insert new. That could add up to a lot of extra I/O.
People will probably tell you to use MERGE (which is actually multiple operations behind the scenes and also needs to be protected with serializable); I urge you not to. I and others lay out why here:
Use Caution with SQL Server's MERGE Statement
So, you want to use MERGE, eh?
For a multi-row pattern (like a TVP), I would handle this quite the same way, but there isn't a practical way to avoid the second read like you can with the single-row case. And no, MERGE doesn't avoid it either.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE t SET t.col = tvp.col
FROM dbo.TargetTable AS t
INNER JOIN @tvp AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @tvp AS tvp
WHERE NOT EXISTS
(
SELECT 1 FROM dbo.TargetTable
WHERE ProductID = tvp.ProductID
);
COMMIT TRANSACTION;
Well, I guess there is a way to do it, but I haven't tested this thoroughly:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
DECLARE @exist TABLE(ProductID int PRIMARY KEY);
UPDATE t SET t.col = tvp.col
OUTPUT deleted.ProductID INTO @exist
FROM dbo.TargetTable AS t
INNER JOIN @tvp AS tvp
ON t.ProductID = tvp.ProductID;
INSERT dbo.TargetTable(ProductID, othercols)
SELECT ProductID, othercols
FROM @tvp AS t
WHERE NOT EXISTS
(
SELECT 1 FROM @exist
WHERE ProductID = t.ProductID
);
COMMIT TRANSACTION;
In either case, you perform the update first, otherwise you'll update all the rows you just inserted, which would be wasteful.
I personally like to make a table variable or temp table to store the values and then do my update/insert, but I'm normally doing mass inserts/updates. The nice thing about this pattern is that it works for multiple records without redundancy in the inserts/updates.
DECLARE @Tbl TABLE (
    StockQty INT,
    ETA1 DATETIME,
    ProductID INT
)

INSERT INTO @Tbl (StockQty, ETA1, ProductID)
SELECT @StockQty AS StockQty, @ETA1 AS ETA1, @ProductID AS ProductID

UPDATE tps
SET StockQty = tmp.StockQty
    , ETA1 = tmp.ETA1
FROM #TProductSales tps
INNER JOIN @Tbl tmp ON tmp.ProductID = tps.ProductID

INSERT INTO #TProductSales (StockQty, ETA1, ProductID)
SELECT
    tmp.StockQty, tmp.ETA1, tmp.ProductID
FROM @Tbl tmp
LEFT JOIN #TProductSales tps ON tps.ProductID = tmp.ProductID
WHERE tps.ProductID IS NULL
You could use something like:
IF EXISTS (SELECT NULL FROM #TProductSales WHERE ProductID = @ProductID)
    UPDATE #TProductSales SET StockQTY = @StockQTY, ETA1 = @ETA1 WHERE ProductID = @ProductID
ELSE
    INSERT INTO #TProductSales(ProductID, StockQTY, ETA1) VALUES(@ProductID, @StockQTY, @ETA1)

Truncate table doesn't release LCK_M_SCH_S

I have a stored procedure which truncates a not so large table (2M records but it will get bigger in the future) and then refills it. The sample version is like below:
ALTER PROCEDURE [SC].[A_SP]
AS
BEGIN
BEGIN TRANSACTION;
BEGIN TRY
TRUNCATE TABLE SC.A_TABLE
IF OBJECT_ID('tempdb..#Trans') IS NOT NULL DROP TABLE #Trans
SELECT
*
INTO
#Trans
FROM
(
SELECT
...
FROM
B_TABLE trans (NOLOCK)
INNER JOIN
... (NOLOCK) ON ...
LEFT OUTER JOIN
... (NOLOCK) ON ...
...
) AS x
INSERT INTO
SC.A_TABLE
(
...
)
SELECT
...
FROM
#Trans (NOLOCK)
DROP TABLE #Trans
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
THROW
END CATCH
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
END
This procedure takes a few hours to work. Sometimes I want to take a COUNT to see how much is finished by using:
SELECT COUNT(*) FROM A_TABLE (NOLOCK)
This doesn't return anything (even with NOLOCK) because there is an LCK_M_SCH_S lock on the table, caused by the TRUNCATE statement. I can't even do:
SELECT object_id('SC.A_TABLE')
The other interesting thing: I sometimes stop the execution of the procedure through SSMS, and even after that I can't take a COUNT or select the table's object_id. The execution seems suspended in sys.sysprocesses, and I have to close the query window to make it release the lock. I suspect it's because I use a transaction and leave it in a mid state by stopping the execution, but I'm not sure.
I know that truncating the table doesn't take so much time since the table doesn't have any foreign keys or indexes.
What can be the problem? I may use DELETE instead of this but I know that TRUNCATE will be much faster here.
EDIT: DELETE instead of TRUNCATE works without any problem btw but I only want to use it as a last resort.
If Truncate ain't your bag and you have way too many rows for a Delete to execute without bringing the TLog to a crashing standstill, there's always option UbW (for Ugly, but Workable): create a clone of the table, load the rows into that, then (and inside a transaction) switch everything around.
Option UbW2 builds on that concept: keep two tables built at all times, one empty and one full. Load into the empty table and then modify a view or synonym to point to that table; a sketch follows below.
Option LUbW (less ugly...) involves using partitions: load your data into a switch table, then move it in as a partition, using some flag as your partition function.
All of these require more work and code. We have similar situations and use option UbW2 for our data warehouse; it allows us to load millions of rows into 'active' tables every hour with no downtime and without risking consumers seeing inconsistent data.
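A minimal sketch of the UbW2 synonym swap, with invented names (dbo.SourceData stands in for the real multi-hour load):

-- One-time setup: readers always query the synonym, never the base tables
CREATE TABLE dbo.A_TABLE_1 (id int PRIMARY KEY, payload varchar(100));
CREATE TABLE dbo.A_TABLE_2 (id int PRIMARY KEY, payload varchar(100));
CREATE SYNONYM dbo.A_TABLE_CURRENT FOR dbo.A_TABLE_1;
GO
-- The hours-long load targets the idle copy, so no reader is ever blocked
TRUNCATE TABLE dbo.A_TABLE_2;
INSERT INTO dbo.A_TABLE_2 (id, payload)
SELECT id, payload FROM dbo.SourceData;
GO
-- The swap itself is metadata-only and effectively instant
BEGIN TRANSACTION;
DROP SYNONYM dbo.A_TABLE_CURRENT;
CREATE SYNONYM dbo.A_TABLE_CURRENT FOR dbo.A_TABLE_2;
COMMIT TRANSACTION;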
The best you're going to get is to shift some of the heavy lifting out of the transaction. That said, SQL Server is working 100% as designed.
ALTER PROCEDURE [SC].[A_SP]
AS
BEGIN
IF OBJECT_ID('tempdb..#Trans') IS NOT NULL DROP TABLE #Trans
SELECT
*
INTO
#Trans
FROM
(
SELECT
...
FROM
B_TABLE trans (NOLOCK)
INNER JOIN
... (NOLOCK) ON ...
LEFT OUTER JOIN
... (NOLOCK) ON ...
...
) AS x
BEGIN TRANSACTION;
BEGIN TRY
TRUNCATE TABLE SC.A_TABLE
INSERT INTO
SC.A_TABLE
(
...
)
SELECT
...
FROM
#Trans (NOLOCK)
DROP TABLE #Trans
    IF @@TRANCOUNT > 0 AND XACT_STATE() = 1
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW
END CATCH
END

Trigger for updating total records on both insert and delete

I'm writing a trigger to store the record count of one table as a column in another, to speed up some reporting queries on a large DB.
Here's what I've got so far. It works fine on deletes, but I also need it to work on inserts. Do I need to use a separate trigger? Also, is the use of the cursor necessary, or is there a more efficient way?
Thanks!
ALTER TRIGGER [dbo].[updateSourceTotals]
ON [dbo].imports
AFTER INSERT, DELETE
AS
BEGIN
SET NOCOUNT ON;
    DECLARE @sourceId int;
    DECLARE deleteCursor CURSOR FOR SELECT DISTINCT sourceId FROM deleted
    OPEN deleteCursor
    FETCH NEXT FROM deleteCursor INTO @sourceId
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE sources
        SET totalImports = (
            SELECT COUNT(*)
            FROM imports
            WHERE sourceId = @sourceId
        )
        WHERE id = @sourceId
        FETCH NEXT FROM deleteCursor INTO @sourceId
    END
    CLOSE deleteCursor
    DEALLOCATE deleteCursor
END
GO
If you are really set on the trigger approach (and I do NOT recommend it), then this is a much simpler and probably faster version of your current code:
ALTER TRIGGER [dbo].[updateSourceTotals]
ON [dbo].imports
AFTER INSERT, DELETE
AS
BEGIN
UPDATE s
SET totalImports = (
SELECT COUNT(*)
FROM imports i
WHERE i.sourceId = s.Id
)
FROM sources s
WHERE s.id IN(SELECT sourceId FROM deleted)
END
If you want to cover INSERTs also, this should do it:
ALTER TRIGGER [dbo].[updateSourceTotals]
ON [dbo].imports
AFTER INSERT, DELETE
AS
BEGIN
UPDATE s
SET totalImports = (
SELECT COUNT(*)
FROM imports i
WHERE i.sourceId = s.id
)
FROM sources s
WHERE s.id IN(
SELECT sourceId FROM deleted
UNION
SELECT sourceId FROM inserted
)
END
As an added bonus, it should work for UPDATEs as well.
Just to clarify: the problem with doing pre-aggregation in a trigger, even after you eliminate the cursor, is that instead of re-calculating the query on each request, you are re-calculating it on each modification.
Even in the abstract this is only a win if you make many such requests but do not modify the table very much. In the real context of an active DBMS server, you lose most of even this small advantage, because if you are making many such requests, they are probably being cached very effectively (in turn because reads are much more cache-friendly than writes).
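If the counts really are queried heavily, one alternative worth considering (not from the original answer, and it comes with its own restrictions) is letting SQL Server maintain the aggregate itself through an indexed view:

-- Indexed views require SCHEMABINDING and COUNT_BIG(*)
CREATE VIEW dbo.vSourceImportCounts
WITH SCHEMABINDING
AS
SELECT sourceId, COUNT_BIG(*) AS totalImports
FROM dbo.imports
GROUP BY sourceId;
GO
-- The unique clustered index materializes the view; SQL Server keeps it
-- up to date on every insert/update/delete against dbo.imports
CREATE UNIQUE CLUSTERED INDEX IX_vSourceImportCounts
    ON dbo.vSourceImportCounts (sourceId);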

SQL Server Express 2005 - updating 2 tables and atomicity?

First off, I want to start by saying I am not an SQL programmer (I'm a C++/Delphi guy), so some of my questions might be really obvious. So pardon my ignorance :o)
I've been charged with writing a script that will update certain tables in a database based on the contents of a CSV file. I seem to have it working, but I am worried about atomicity for one of the steps:
One of the tables contains only one field - an int which must be incremented each time, but from what I can see is not defined as an identity for some reason. I must create a new row in this table, and insert that row's value into another newly-created row in another table.
This is how I did it (as part of a larger script):
DECLARE @uniqueID INT,
        @counter INT,
        @maxCount INT

SELECT @maxCount = COUNT(*) FROM tempTable
SET @counter = 1

WHILE (@counter <= @maxCount)
BEGIN
    SELECT @uniqueID = MAX(id) FROM uniqueIDTable      <----Line 1
    INSERT INTO uniqueIDTable VALUES (@uniqueID + 1)   <----Line 2
    SELECT @uniqueID = @uniqueID + 1
    UPDATE TOP(1) tempTable
    SET userID = @uniqueID
    WHERE userID IS NULL
    SET @counter = @counter + 1
END
GO
First of all, am I correct in using a "WHILE" construct? I couldn't find a way to achieve this with a simple UPDATE statement.
Second of all, how can I be sure that no other operation will be carried out on the database between Lines 1 and 2 that would insert a value into the uniqueIDTable before I do? Is there a way to "synchronize" operations in SQL Server Express?
Also, keep in mind that I have no control over the database design.
Thanks a lot!
You can do the whole 9 yards in one single statement:
WITH cteUsers AS (
SELECT t.*
, ROW_NUMBER() OVER (ORDER BY userID) as rn
, COALESCE(m.id,0) as max_id
FROM tempTable t WITH(UPDLOCK)
JOIN (
SELECT MAX(id) as id
FROM uniqueIDTable WITH (UPDLOCK)
) as m ON 1=1
WHERE userID IS NULL)
UPDATE cteUsers
SET userID = rn + max_id
OUTPUT INSERTED.userID
INTO uniqueIDTable (id);
You get the MAX(id), lock the uniqueIDTable, compute sequential userIDs for users with NULL userID by using ROW_NUMBER(), update the tempTable and insert the new ids into uniqueIDTable. All in one operation.
For performance you need an index on uniqueIDTable(id) and an index on tempTable(userID).
SQL is all about set-oriented operations; WHILE loops are the code smell of SQL.
You need a transaction to ensure atomicity, and you need to move the select and insert into one statement (or do the select with an UPDLOCK) to prevent two sessions from running the select at the same time, getting the same value, and then trying to insert the same value into the table.
Basically
DECLARE @MaxValTable TABLE (MaxID int)
BEGIN TRANSACTION
BEGIN TRY
    INSERT INTO uniqueIDTable (id)
    OUTPUT inserted.id INTO @MaxValTable
    SELECT MAX(id) + 1 FROM uniqueIDTable WITH (UPDLOCK) -- UPDLOCK serializes concurrent readers of the max
    UPDATE TOP(1) tempTable
    SET userID = (SELECT MaxID FROM @MaxValTable)
    WHERE userID IS NULL
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION
    RAISERROR('Error occurred updating tempTable', 16, 1) -- more detail here is good
END CATCH
That said, using an identity would make things far simpler. This is a potential concurrency problem. Is there any way you can change the column to be an identity?
Edit: This ensures that only one connection at a time is able to insert into the uniqueIDTable. It's not going to scale well, though.
Edit: A table variable is better than an exclusive table lock. If need be, this approach can be used when inserting users as well.
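
If changing the column is on the table, here is a rough sketch of what the identity route could look like (the _new table name is invented, and the single-column assumption comes from the question):

-- Rebuild the one-column table with an identity
CREATE TABLE dbo.uniqueIDTable_new (id int IDENTITY(1,1) NOT NULL PRIMARY KEY);
-- (when migrating, reseed from the existing MAX(id) with DBCC CHECKIDENT)

-- Generating and capturing the next value becomes one atomic statement
DECLARE @NewID TABLE (id int);
INSERT INTO dbo.uniqueIDTable_new
OUTPUT inserted.id INTO @NewID
DEFAULT VALUES;

SELECT id FROM @NewID;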
