How to update large table with millions of rows in SQL Server?

I have an UPDATE statement that can update more than a million records. I want to update them in batches of 1000 or 10000. I tried using @@ROWCOUNT but I am unable to get the desired result.
For testing purposes, I selected a table with 14 records and set a row count of 5. This query is supposed to update the records in batches of 5, 5 and 4, but it only updates the first 5 records.
Query - 1:
SET ROWCOUNT 5

UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 5
    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123
    PRINT (@@ROWCOUNT)
END

SET ROWCOUNT 0
Query - 2:
SET ROWCOUNT 5

WHILE (@@ROWCOUNT > 0)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123
    PRINT (@@ROWCOUNT)

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
What am I missing here?

You should not be updating 10k rows in a set unless you are certain that the operation is getting Page Locks (due to multiple rows per page being part of the UPDATE operation). The issue is that Lock Escalation (from either Row or Page to Table locks) occurs at 5000 locks. So it is safest to keep it just below 5000, just in case the operation is using Row Locks.
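As a side note (not part of the original answer): if you cannot keep batches below the escalation threshold, SQL Server 2008 and later also let you change a table's escalation behavior. A minimal sketch, assuming a table named dbo.TableName:

-- Prevent escalation to a table lock for this table
-- (TABLE is the default; AUTO enables partition-level escalation)
ALTER TABLE dbo.TableName SET (LOCK_ESCALATION = DISABLE);

Even then, small batches remain the safer habit, since holding thousands of row locks consumes more lock memory.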
You should not be using SET ROWCOUNT to limit the number of rows that will be modified. There are two issues here:
It has been deprecated since SQL Server 2005 was released (11 years ago):
Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax
It can affect more than just the statement you are dealing with:
Setting the SET ROWCOUNT option causes most Transact-SQL statements to stop processing when they have been affected by the specified number of rows. This includes triggers. The ROWCOUNT option does not affect dynamic cursors, but it does limit the rowset of keyset and insensitive cursors. This option should be used with caution.
Instead, use the TOP () clause.
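For example, the same 5-row batch from the question, expressed both ways (a sketch using the question's table and columns):

-- Deprecated pattern: SET ROWCOUNT affects the whole session until reset
SET ROWCOUNT 5;
UPDATE TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123;
SET ROWCOUNT 0;

-- Preferred: TOP applies only to this one statement
UPDATE TOP (5) TableName
SET Value = 'abc1'
WHERE Parameter1 = 'abc' AND Parameter2 = 123;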
There is no purpose in having an explicit transaction here. It complicates the code and you have no handling for a ROLLBACK, which isn't even needed since each statement is its own transaction (i.e. auto-commit).
Assuming you find a reason to keep the explicit transaction, you still do not have a TRY / CATCH structure. Please see my answer on DBA.StackExchange for a TRY / CATCH template that handles transactions:
Are we required to handle Transaction in C# Code as well as in Store procedure
I suspect that the real WHERE clause is not being shown in the example code in the Question, so simply relying upon what has been shown, a better model (please see note below regarding performance) would be:
DECLARE @Rows INT,
        @BatchSize INT; -- keep below 5000 to be safe

SET @BatchSize = 2000;
SET @Rows = @BatchSize; -- initialize just to enter the loop

BEGIN TRY
    WHILE (@Rows = @BatchSize)
    BEGIN
        UPDATE TOP (@BatchSize) tab
        SET    tab.Value = 'abc1'
        FROM   TableName tab
        WHERE  tab.Parameter1 = 'abc'
        AND    tab.Parameter2 = 123
        AND    tab.Value <> 'abc1' COLLATE Latin1_General_100_BIN2;
        -- Use a binary Collation (ending in _BIN2, not _BIN) to make sure
        -- that you don't skip differences that compare the same due to
        -- insensitivity of case, accent, etc, or linguistic equivalence.

        SET @Rows = @@ROWCOUNT;
    END;
END TRY
BEGIN CATCH
    RAISERROR(stuff);
    RETURN;
END CATCH;
By testing @Rows against @BatchSize, you can avoid that final UPDATE query (in most cases) because the final set is typically some number of rows less than @BatchSize, in which case we know that there are no more to process (which is what you see in the output shown in your answer). Only in those cases where the final set of rows is equal to @BatchSize will this code run a final UPDATE affecting 0 rows.
I also added a condition to the WHERE clause to prevent rows that have already been updated from being updated again.
NOTE REGARDING PERFORMANCE
I emphasized "better" above (as in, "this is a better model") because this has several improvements over the O.P.'s original code, and works fine in many cases, but is not perfect for all cases. For tables of at least a certain size (which varies due to several factors so I can't be more specific), performance will degrade as there are fewer rows to fix if either:
there is no index to support the query, or
there is an index, but at least one column in the WHERE clause is a string data type that does not use a binary collation, hence a COLLATE clause is added to the query here to force the binary collation, and doing so invalidates the index (for this particular query).
This is the situation that @mikesigs encountered, thus requiring a different approach. The updated method copies the IDs for all rows to be updated into a temporary table, then uses that temp table to INNER JOIN to the table being updated on the clustered index key column(s). (It's important to capture and join on the clustered index columns, whether or not those are the primary key columns!)
Please see @mikesigs' answer below for details. The approach shown in that answer is a very effective pattern that I have used myself on many occasions. The only changes I would make (a sketch follows this list) are:
Explicitly create the #targetIds table rather than using SELECT INTO...
For the #targetIds table, declare a clustered primary key on the column(s).
For the #batchIds table, declare a clustered primary key on the column(s).
For inserting into #targetIds, use INSERT INTO #targetIds (column_name(s)) SELECT and remove the ORDER BY as it's unnecessary.
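A sketch of those changes applied to the setup from that answer (table and column names taken from @mikesigs' code; adjust the key column(s) to match your clustered index):

-- Explicitly create both temp tables with clustered primary keys
CREATE TABLE #targetIds (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER NOT NULL PRIMARY KEY CLUSTERED);

-- Fill #targetIds without SELECT INTO and without the ORDER BY
INSERT INTO #targetIds (Id)
SELECT Id
FROM TheTable
WHERE Foo IS NULL;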
So, if you don't have an index that can be used for this operation, and can't temporarily create one that will actually work (a filtered index might work, depending on your WHERE clause for the UPDATE query), then try the approach shown in @mikesigs' answer (and if you use that solution, please up-vote it).

WHILE EXISTS (SELECT * FROM TableName WHERE Value <> 'abc1' AND Parameter1 = 'abc' AND Parameter2 = 123)
BEGIN
    UPDATE TOP (1000) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1'
END

I encountered this thread yesterday and wrote a script based on the accepted answer. It turned out to perform very slowly, taking 12 hours to process 25M of 33M rows. I wound up cancelling it this morning and working with a DBA to improve it.
The DBA pointed out that the IS NULL check in my UPDATE query was causing a Clustered Index Scan on the PK, and it was the scan that was slowing the query down. Basically, the longer the query runs, the further it needs to look through the index for the right rows.
The approach he came up with was obvious in hindsight. Essentially, you load the IDs of the rows you want to update into a temp table, then join that onto the target table in the update statement. This uses an Index Seek instead of a Scan. And ho boy does it speed things up! It took 2 minutes to update the last 8M records.
Batching Using a Temp Table
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT,
        @Message nvarchar(max)

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

-- #targetIds table holds the IDs of ALL the rows you want to update
SELECT Id INTO #targetIds
FROM TheTable
WHERE Foo IS NULL
ORDER BY Id

-- Used for printing out the progress
SELECT @Total = @@ROWCOUNT

-- #batchIds table holds just the records updated in the current batch
CREATE TABLE #batchIds (Id UNIQUEIDENTIFIER);

-- Loop until #targetIds is empty
WHILE EXISTS (SELECT 1 FROM #targetIds)
BEGIN
    -- Remove a batch of rows from the top of #targetIds and put them into #batchIds
    DELETE TOP (@BatchSize)
    FROM #targetIds
    OUTPUT deleted.Id INTO #batchIds

    -- Update TheTable data
    UPDATE t
    SET Foo = 'bar'
    FROM TheTable t
    JOIN #batchIds tmp ON t.Id = tmp.Id
    WHERE t.Foo IS NULL

    -- Get the # of rows updated
    SET @Rows = @@ROWCOUNT

    -- Increment our @Completed counter, for progress display purposes
    SET @Completed = @Completed + @Rows

    -- Print progress using RAISERROR to avoid SQL buffering issue
    SELECT @Message = 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
    RAISERROR(@Message, 0, 1) WITH NOWAIT

    -- Quick operation to delete all the rows from our batch table
    TRUNCATE TABLE #batchIds;
END

-- Clean up
DROP TABLE IF EXISTS #batchIds;
DROP TABLE IF EXISTS #targetIds;
Batching the slow way, do not use!
For reference, here is the original slower performing query:
SET NOCOUNT ON

DECLARE @Rows INT,
        @BatchSize INT,
        @Completed INT,
        @Total INT

SET @BatchSize = 4000
SET @Rows = @BatchSize
SET @Completed = 0

SELECT @Total = COUNT(*) FROM TheTable WHERE Foo IS NULL

WHILE (@Rows = @BatchSize)
BEGIN
    UPDATE TOP (@BatchSize) t
    SET Foo = 'bar'
    FROM TheTable t
    WHERE t.Foo IS NULL

    SET @Rows = @@ROWCOUNT
    SET @Completed = @Completed + @Rows
    PRINT 'Completed ' + cast(@Completed as varchar(10)) + '/' + cast(@Total as varchar(10))
END

This is a more efficient version of the solution from @Kramb. The existence check is redundant, as the UPDATE's WHERE clause already handles this. Instead you just grab the row count and compare it to the batch size.
Also note that @Kramb's solution didn't filter out already updated rows from the next iteration, hence it would be an infinite loop.
It also uses the modern batch size syntax (TOP) instead of using ROWCOUNT.
DECLARE @batchSize INT, @rowsUpdated INT
SET @batchSize = 1000;
SET @rowsUpdated = @batchSize; -- Initialise for the while loop entry

WHILE (@batchSize = @rowsUpdated)
BEGIN
    UPDATE TOP (@batchSize) TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @rowsUpdated = @@ROWCOUNT;
END

I want to share my experience. A few days ago I had to update 21 million records in a table with 76 million records. My colleague suggested the following approach.
For example, we have the following table, 'Persons':
Id | FirstName | LastName | Email        | JobTitle
1  | John      | Doe      | abc1@abc.com | Software Developer
2  | John1     | Doe1     | abc2@abc.com | Software Developer
3  | John2     | Doe2     | abc3@abc.com | Web Designer
Task: Update persons to the new Job Title: 'Software Developer' -> 'Web Developer'.
1. Create a temporary table 'Persons_SoftwareDeveloper_To_WebDeveloper (Id INT Primary Key)'.
2. Select into the temporary table the persons you want to update with the new Job Title:
INSERT INTO Persons_SoftwareDeveloper_To_WebDeveloper
SELECT Id FROM Persons WITH(NOLOCK) -- avoid lock
WHERE JobTitle = 'Software Developer'
OPTION(MAXDOP 1) -- use only one core
Depending on the row count, this statement will take some time to fill your temporary table, but it avoids locks. In my situation it took about 5 minutes (21 million rows).
3. The main idea is to generate micro SQL statements to update the database. So, let's print them:
DECLARE @i INT, @pagesize INT, @totalPersons INT
SET @i = 0
SET @pagesize = 2000
SELECT @totalPersons = MAX(Id) FROM Persons

WHILE @i <= @totalPersons
BEGIN
    PRINT '
UPDATE persons
SET persons.JobTitle = ''Web Developer''
FROM Persons_SoftwareDeveloper_To_WebDeveloper tmp
JOIN Persons persons ON tmp.Id = persons.Id
WHERE persons.Id BETWEEN ' + cast(@i as varchar(20)) + ' AND ' + cast(@i + @pagesize as varchar(20)) + '
PRINT ''Page ' + cast((@i / @pagesize) as varchar(20)) + ' of ' + cast(@totalPersons / @pagesize as varchar(20)) + '''
GO
'
    SET @i = @i + @pagesize
END
After executing this script you will receive hundreds of batches which you can execute in one tab of MS SQL Management Studio.
4. Run the printed SQL statements and check for locks on the table. You can always stop the process and play with @pagesize to speed up or slow down the updates (don't forget to change @i after you pause the script).
5. Drop the temporary table Persons_SoftwareDeveloper_To_WebDeveloper.
Minor note: this migration could take some time, and new rows with invalid data could be inserted during the migration. So, first fix the places where your rows are added. In my situation I fixed the UI: 'Software Developer' -> 'Web Developer'.
More about this method on my blog https://yarkul.com/how-smoothly-insert-millions-of-rows-in-sql-server/

Your PRINT is messing things up, because it resets @@ROWCOUNT. Whenever you use @@ROWCOUNT, my advice is to always assign it immediately to a variable. So:
DECLARE @RC int;

WHILE @RC > 0 OR @RC IS NULL
BEGIN
    SET ROWCOUNT 5;

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123 AND Value <> 'abc1';

    SET @RC = @@ROWCOUNT;
    PRINT(@RC)
END;

SET ROWCOUNT 0;
And, another nice feature is that you don't need to repeat the update code.

First of all, thank you all for your input. I tweaked my Query 1 and got my desired result. Gordon Linoff is right, PRINT was messing up my query, so I modified it as follows:
Modified Query - 1:
SET ROWCOUNT 5

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION

    UPDATE TableName
    SET Value = 'abc1'
    WHERE Parameter1 = 'abc' AND Parameter2 = 123

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END

    COMMIT TRANSACTION
END

SET ROWCOUNT 0
Output:
(5 row(s) affected)
(5 row(s) affected)
(4 row(s) affected)
(0 row(s) affected)

Related

How to read updated data in a stored procedure called multiple times simultaneously

There are 2 tables:
Wallets;
Transactions.
There is a stored procedure that handles (I thought as an ACID operation):
updating a row in the Wallets table
inserting one row into the Transactions table
every time it is called.
The issue occurs when there are many calls to the SP at the same time: the value of PreviousBalance is not correct (sequentially wrong), because the SP reads the old value while another call is still running.
To understand better, look at the following screenshot.
There are 3 transactions with the same DT (IDs 1289, 1288, 1287); in all of them PreviousBalance is equal, but it is not correct, because the value for:
Trx ID 1288 should be 180,78 as Balance of previous row;
Trx ID 1289 should be 168,07 = 180,78 - 12,08
I think the issue is in the SET of the @OLDBalance variable; those 3 threads read the same value at the same time, so when the SP gets to the INSERT it loads the same value for PreviousBalance.
How can I read @OLDBalance correctly after the commit of one operation?
I tried several isolation levels in the SP; the result was the same, and sometimes it failed with a deadlock.
I have the following stored procedure:
Stored Procedure
ALTER PROCEDURE [dbo].[upsMovimenta_internal]
    @AccountID int,
    @Amount money,
    @TypeTransactionID int,
    @ProductID int,
    @notes nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @OLDBalance MONEY;
    DECLARE @PreviousBalance MONEY;
    DECLARE @CurrentBalance MONEY;
    DECLARE @Molt float;

    BEGIN TRANSACTION;

    IF NOT EXISTS (SELECT * FROM Accounts WHERE AccountID = @AccountID)
    BEGIN
        RAISERROR(N'Account not found ', 10, 1, N'number', 3);
        RETURN (1)
    END

    SELECT @Molt = Moltiplicatore
    FROM TypeTransactions
    WHERE TypeTransactionID = @TypeTransactionID;

    IF (@Molt is null)
    BEGIN
        RAISERROR(N'Error transaction', 10, 1, N'number', 3);
        RETURN (1)
    END

    SET @Amount = @Amount * @Molt;

    --SELECT * FROM Wallets
    SELECT TOP 1 @OLDBalance = TotalAmount
    FROM Wallets
    WHERE AccountID = @AccountID;

    SET @CurrentBalance = @OLDBalance + @Amount;

    IF (@ProductID = 1)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Cash += @Amount
        WHERE AccountID = @AccountID;
    END

    IF (@ProductID = 2)
    BEGIN
        UPDATE Wallets
        SET TotalAmount += @Amount,
            Fun += @Amount
        WHERE AccountID = @AccountID;
    END

    INSERT INTO Transactions
        (AccountID, ProductID, DT, TypeTransactionID, Amout, Balance, PreviousBalance, Notes)
    VALUES
        (@AccountID, @ProductID, GETDATE(), @TypeTransactionID, @Amount, @CurrentBalance, @OLDBalance, @notes);

    COMMIT TRANSACTION;
    RETURN (0)
END
Thank you so much guys
Generally, one way of managing locks on records is to apply a dummy update on the rows you want to work on, right after starting the transaction.
In this case SQL Server guarantees that those rows will be locked and no other transactions can access the rows. So you can change your design to something like this:
begin tran
update myTable
set Field1 = Field1
where someKeyField = 212
-- do the same for other tables that you want to protect against change
-- at this moment all working rows will be locked, other procedure calls will be on hold
-- do your main operations here
commit tran
The issue with this is that other proc calls will wait, and this may degrade performance or even time out if the traffic is high and the operation in this proc is lengthy.
If you are working on high transaction environment, you need to change your design.
Update: Design Suggestion
I don't get why you have PreviousBalance and Balance in your Transactions table (it is against the design rules, though you can override a rule in a special case).
Probably you have them to speed up your calculations or to make your queries simpler. But it is not good practice in an OLTP database.
The rules say you keep the Amount column and calculate PreviousBalance and Balance somewhere else.
You should drop PreviousBalance but keep the Balance column, and every time you insert a transaction, update (increase/decrease) the Balance column. Plus you need to initialize the Balance column at the first transaction.
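A minimal sketch of that idea, reusing the table and column names from the question (including the 'Amout' spelling); the OUTPUT clause captures the post-update balance atomically, so no separate read of the wallet is needed:

DECLARE @b TABLE (NewBalance money NOT NULL);

-- Update the wallet and capture the new balance in one atomic statement
UPDATE Wallets
SET TotalAmount += @Amount
OUTPUT inserted.TotalAmount INTO @b (NewBalance)
WHERE AccountID = @AccountID;

-- Record the transaction using the captured balance
INSERT INTO Transactions (AccountID, DT, Amout, Balance)
SELECT @AccountID, GETDATE(), @Amount, NewBalance
FROM @b;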
This is what I can think of. If I knew your whole system, I would be able to have better ideas though.

Is there a better way to DELETE 80 million+ rows from a table?

WHILE EXISTS (SELECT TOP 1 * FROM large_table)
BEGIN
    WITH LT AS
    (
        SELECT TOP 60000 *
        FROM large_table
    )
    DELETE FROM LT
END
This does the job of keeping my transaction logs from becoming too large, but I need to know if there is a way to make this process go faster. I've had my computer on for 5+ days now running this script, and I haven't gotten very far, very fast.
You can simply truncate the table:
TRUNCATE TABLE large_table
GO
You can also delete using a WHERE condition. The time taken by the delete depends on various factors. You can reduce the cost by eliminating the SELECT query in the condition of the WHILE loop:
DECLARE @rows INT = 1

WHILE (@rows > 0)
BEGIN
    DELETE TOP (1000)
    FROM large_table

    SET @rows = @@ROWCOUNT
END
Bulk deletion will create a lot of log records, and a rollback will happen if the log file fills up.
You can do the delete in batches and ensure every transaction is committed:
DECLARE @IDCollection TABLE (ID INT)
DECLARE @Batch INT = 1000;
DECLARE @ROWCOUNT INT;

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO @IDCollection
    SELECT TOP (@Batch) ID
    FROM table
    ORDER BY id

    DELETE
    FROM table
    WHERE id IN (
        SELECT *
        FROM @IDCollection
        )

    SET @ROWCOUNT = @@ROWCOUNT

    COMMIT TRANSACTION;

    IF (@ROWCOUNT = 0)
        BREAK
END

Truncate some rows in table SQL Server Management Studio [duplicate]

Let's say we have a table Sales with 30 columns and 500,000 rows. I would like to delete 400,000 rows in the table (those where toDelete='1').
But I have a few constraints :
the table is read / written "often" and I would not like a long "delete" to take a long time and lock the table for too long
I need to skip the transaction log (like with a TRUNCATE) but while doing a "DELETE ... WHERE..." (I need to put a condition), but haven't found any way to do this...
Any advice would be welcome to transform a
DELETE FROM Sales WHERE toDelete='1'
to something more partitioned & possibly transaction log free.
Calling DELETE FROM TableName will do the entire delete in one large transaction. This is expensive.
Here is another option which will delete rows in batches :
deleteMore:
DELETE TOP(10000) Sales WHERE toDelete='1'

IF @@ROWCOUNT != 0
    goto deleteMore
I'll leave my answer here, since I was able to test different approaches for mass delete and update (I had to update and then delete 125+ million rows; the server has 16GB of RAM, a Xeon E5-2680 @2.7GHz, SQL Server 2012).
TL;DR: always update/delete by primary key, never by any other condition. If you can't use the PK directly, create a temp table, fill it with PK values, and update/delete your table using that table. Use indexes for this.
I started with the solution from above (by @Kevin Aenmey), but this approach turned out to be inappropriate, since my database was live and handles a couple of hundred transactions per second, and there was some blocking involved (there was an index for all three fields from the condition; using WITH(ROWLOCK) didn't change anything).
So, I added a WAITFOR statement, which allowed database to process other transactions.
deleteMore:
WAITFOR DELAY '00:00:01' -- give other transactions a chance

DELETE TOP(1000) FROM MyTable WHERE Column1 = @Criteria1 AND Column2 = @Criteria2 AND Column3 = @Criteria3

IF @@ROWCOUNT != 0
    goto deleteMore
This approach was able to process ~1.6 million rows/hour for updating and ~0.2 million rows/hour for deleting.
Turning to temp tables changed things quite a lot.
deleteMore:
SELECT TOP 10000 Id /* Id is the PK */
INTO #Temp
FROM MyTable WHERE Column1 = @Criteria1 AND Column2 = @Criteria2 AND Column3 = @Criteria3

DELETE MT
FROM MyTable MT
JOIN #Temp T ON T.Id = MT.Id

/* you can use the IN operator, it doesn't change anything:
DELETE FROM MyTable WHERE Id IN (SELECT Id FROM #Temp)
*/

IF @@ROWCOUNT > 0 BEGIN
    DROP TABLE #Temp
    WAITFOR DELAY '00:00:01'
    goto deleteMore
END ELSE BEGIN
    DROP TABLE #Temp
    PRINT 'This is the end, my friend'
END
This solution processed ~25 million rows/hour for updating (15x faster) and ~2.2 million rows/hour for deleting (11x faster).
What you want is batch processing.
WHILE (SELECT COUNT(*) FROM Sales WHERE toDelete = 1) > 0
BEGIN
    DELETE FROM Sales WHERE SalesID IN
        (SELECT TOP 1000 SalesID FROM Sales WHERE toDelete = 1)
END
Of course you can experiment which is the best value to use for the batch, I've used from 500 - 50000 depending on the table. If you use cascade delete, you will probably need a smaller number as you have those child records to delete.
One way I have had to do this in the past is to have a stored procedure or script that deletes n records. Repeat until done.
DELETE TOP (1000) FROM Sales WHERE toDelete='1'
You should try to give it a ROWLOCK hint so it will not lock the entire table. However, if you delete a lot of rows lock escalation will occur.
Also, make sure you have a non-clustered filtered index (only for '1' values) on the toDelete column. If possible make it a bit column, not varchar (or whatever it is now).
DELETE FROM Sales WITH(ROWLOCK) WHERE toDelete='1'
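And a sketch of the filtered index suggested above (the index name is an assumption):

CREATE NONCLUSTERED INDEX IX_Sales_toDelete
ON Sales (toDelete)
WHERE toDelete = '1';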
Ultimately, you can try to iterate over the table and delete in chunks.
Updated
Since while loops and chunk deletes are the new pink here, I'll throw in my version too (combined with my previous answer):
SET ROWCOUNT 100
DELETE FROM Sales WITH(ROWLOCK) WHERE toDelete='1'

WHILE @@ROWCOUNT > 0
BEGIN
    SET ROWCOUNT 100
    DELETE FROM Sales WITH(ROWLOCK) WHERE toDelete='1'
END
My own take on this functionality would be as follows.
This way there is no repeated code and you can manage your chunk size.
DECLARE @DeleteChunk INT = 10000
DECLARE @rowcount INT = 1

WHILE @rowcount > 0
BEGIN
    DELETE TOP (@DeleteChunk) FROM Sales WITH(ROWLOCK) WHERE toDelete = '1'

    SELECT @rowcount = @@ROWCOUNT
END
I have used the below to delete around 50 million records -
DECLARE @BatchSize INT = 4999 -- keep this below 5000

BEGIN TRANSACTION

DeleteOperation:
DELETE TOP (@BatchSize)
FROM [database_name].[database_schema].[database_table]

IF @@ROWCOUNT > 0
    GOTO DeleteOperation

COMMIT TRANSACTION

Please note that keeping @BatchSize < 5000 is less expensive on resources.
In my opinion, the best way to delete a huge number of records is to delete them by primary key.
So you have to generate a T-SQL script that contains the whole list of rows to delete, and then execute this script.
For example, the code below is going to generate that file:
GO
SET NOCOUNT ON
SELECT 'DELETE FROM DATA_ACTION WHERE ID = ' + CAST(ID AS VARCHAR(50)) + ';' + CHAR(13) + CHAR(10) + 'GO'
FROM DATA_ACTION
WHERE YEAR(AtTime) = 2014
The output file is going to have records like:
DELETE FROM DATA_ACTION WHERE ID = 123;
GO
DELETE FROM DATA_ACTION WHERE ID = 124;
GO
DELETE FROM DATA_ACTION WHERE ID = 125;
GO
And now you have to use the SQLCMD utility in order to execute this script:
sqlcmd -S [Instance Name] -E -d [Database] -i [Script]
You can find this approach explained here: https://www.mssqltips.com/sqlservertip/3566/deleting-historical-data-from-a-large-highly-concurrent-sql-server-database-table/
Here's how I do it when I know approximately how many iterations:
delete from Activities with(rowlock) where Id in (select top 999 Id from Activities
(nolock) where description like 'financial data update date%' and len(description) = 87
and User_Id = 2);
waitfor delay '00:00:02'
GO 20
Edit: This worked better and faster for me than selecting top:
declare @counter int = 1
declare @msg varchar(max)
declare @batch int = 499

while (@counter <= 37600)
begin
    set @msg = ('Iteration count = ' + convert(varchar, @counter))
    raiserror(@msg, 0, 1) with nowait

    delete Activities with (rowlock) where Id in (select Id from Activities (nolock) where description like 'financial data update date%' and len(description) = 87 and User_Id = 2 order by Id asc offset 1 ROWS fetch next @batch rows only)

    set @counter = @counter + 1
    waitfor delay '00:00:02'
end
DECLARE @Counter INT
SET @Counter = 10 -- (you can always obtain the number of rows to be deleted and set the counter to that value)

WHILE @Counter > 0
BEGIN
    DELETE TOP (4000) FROM <Tablename> WHERE ID IN (SELECT ID FROM <SameTablename> WITH (NOLOCK) WHERE DateField < '2021-01-04') -- or opt for GETDATE() - 1

    SET @Counter = @Counter - 1 -- or SET @Counter = @Counter - 4000 if you know the number of rows to be deleted
END

Optimistic concurrency on multi-table complex entity

I have a complex entity (let's call it Thing) which is represented in SQL Server as many tables: one parent table dbo.Thing with several child tables dbo.ThingBodyPart, dbo.ThingThought, etc. We've implemented optimistic concurrency using a single rowversion column on dbo.Thing, using the UPDATE OUTPUT INTO technique. This has been working great, until we added a trigger to dbo.Thing. I'm looking for advice in choosing a different approach, because I'm fairly convinced that my current approach cannot be fixed.
Here is our current code:
CREATE PROCEDURE dbo.UpdateThing
    @id uniqueidentifier,
    -- ...
    -- ... other parameters describing what to update...
    -- ...
    @rowVersion binary(8) OUTPUT
AS
BEGIN TRANSACTION;
BEGIN TRY

    -- ...
    -- ... update lots of Thing's child rows...
    -- ...

    DECLARE @t TABLE (
        [RowVersion] binary(8) NOT NULL
    );

    UPDATE dbo.Thing
    SET ModifiedUtc = sysutcdatetime()
    OUTPUT INSERTED.[RowVersion] INTO @t
    WHERE
        Id = @id
        AND [RowVersion] = @rowVersion;

    IF @@ROWCOUNT = 0 RAISERROR('Thing has been updated by another user.', 16, 1);

    COMMIT;

    SELECT @rowVersion = [RowVersion] FROM @t;

END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    EXEC usp_Rethrow_Error;
END CATCH
This worked absolutely beautifully, until we added an INSTEAD OF UPDATE trigger to dbo.Thing. Now the stored procedure no longer returns the new @rowVersion value, but returns the old unmodified value. I'm at a loss. Are there other ways to approach optimistic concurrency that would be as effective and easy as the one above, but would also work with triggers?
To illustrate what exactly goes wrong with this code, consider this test code:
DECLARE
    @id uniqueidentifier = 'b0442c71-dbcb-4e0c-a178-1a01b9efaf0f',
    @oldRowVersion binary(8),
    @newRowVersion binary(8),
    @expected binary(8);

SELECT @oldRowVersion = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;

PRINT '@oldRowVersion = ' + convert(char(18), @oldRowVersion, 1);

DECLARE @t TABLE (
    [RowVersion] binary(8) NOT NULL
);

UPDATE dbo.Thing
SET ModifiedUtc = sysutcdatetime()
OUTPUT INSERTED.[RowVersion] INTO @t
WHERE
    Id = @id
    AND [RowVersion] = @oldRowVersion;

PRINT '@@ROWCOUNT = ' + convert(varchar(10), @@ROWCOUNT);

SELECT @newRowVersion = [RowVersion] FROM @t;
PRINT '@newRowVersion = ' + convert(char(18), @newRowVersion, 1);

SELECT @expected = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;
PRINT '@expected = ' + convert(char(18), @expected, 1);

IF @newRowVersion = @expected PRINT 'Pass!'
ELSE PRINT 'Fail. :('
When the trigger is not present, this code correctly outputs:
@oldRowVersion = 0x0000000000016CDC
(1 row(s) affected)
@@ROWCOUNT = 1
@newRowVersion = 0x000000000004E9D1
@expected = 0x000000000004E9D1
Pass!
When the trigger is present, we do not receive the expected value:
@oldRowVersion = 0x0000000000016CDC
(1 row(s) affected)
(1 row(s) affected)
@@ROWCOUNT = 1
@newRowVersion = 0x0000000000016CDC
@expected = 0x000000000004E9D1
Fail. :(
Any ideas for a different approach?
I was assuming that an UPDATE was an atomic operation, which it is, except when there are triggers, when apparently it's not. Am I wrong? This seems really bad, in my opinion, with potential concurrency bugs lurking behind every statement. If the trigger really is INSTEAD OF, shouldn't I get back the correct timestamp, as though the trigger's UPDATE was the one I actually executed? Is this a SQL Server bug?
One of my esteemed co-workers, Jonathan MacCollum, pointed me to this bit of documentation:
INSERTED
Is a column prefix that specifies the value added by the insert or update operation.
Columns prefixed with INSERTED reflect the value after the UPDATE, INSERT, or MERGE statement is completed but before triggers are executed.
From this, I presume that I need to modify my stored procedure, splitting the one UPDATE into an UPDATE followed by a SELECT [RowVersion] ....
UPDATE dbo.Thing
SET ModifiedUtc = sysutcdatetime()
WHERE
    Id = @id
    AND [RowVersion] = @rowVersion;

IF @@ROWCOUNT = 0 RAISERROR('Thing has been updated by another user.', 16, 1);

COMMIT;

SELECT @rowVersion = [RowVersion]
FROM dbo.Thing
WHERE Id = @id;
I think I can still rest assured that my stored procedure is not accidentally overwriting anybody else's changes, but I should no longer assume that the data that the caller of the stored procedure holds is still up-to-date. There's a chance that the new @rowVersion value returned by the stored procedure is actually the result of someone else's update, not mine. So actually, there's no point in returning the @rowVersion at all. After executing this stored procedure, the caller should re-fetch the Thing and all of its child records in order to be sure its picture of the data is consistent.
... which further leads me to conclude that rowversion columns are not the best choice for implementing optimistic locking, which sadly is their sole purpose. I would be much better off using a manually incremented int column, with a query like:
UPDATE dbo.Thing
SET Version = @version + 1
WHERE
    Id = @id
    AND Version = @version;
The Version column is checked and incremented in a single atomic operation, so there's no chance for other statements to slip in-between. I don't have to ask the database what the new value is, because I told it what the new value is. As long as the Version column contains the value I'm expecting (and assuming all other people updating this row are also playing by the rules - correctly incrementing Version), I can know that the Thing is still exactly as I left it. At least, I think...

'Subquery returned more than 1 value. This is not permitted' error in AFTER INSERT, UPDATE trigger

Once again... I have the trigger below, whose function is to keep/set the value in column esb to 0 for at most 1 row (in each row the value cycles through Q -> 0 -> R -> 1).
When I insert more than 1 row, the trigger fails with a "Subquery returned more than 1 value. This is not permitted when the subquery follows..." error on row 38, the "IF ((SELECT esb FROM INSERTED) in ('1','Q'))" statement.
I understand that 'SELECT esb FROM INSERTED' will return all rows of the insert, but I do not know how to process one row at a time. I also tried creating a temporary table and iterating through the result set, but subsequently found out that temporary tables based on the INSERTED table are not allowed.
Any suggestions are welcome (again).
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go

ALTER TRIGGER [TR_PHOTO_AIU]
ON [SOA].[dbo].[photos_TEST]
AFTER INSERT,UPDATE
AS
DECLARE @MAXCONC INT -- Maximum concurrent processes
DECLARE @CONC INT    -- Actual concurrent processes

SET @MAXCONC = 1 -- 1 concurrent process

-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON

-- If column esb is involved in the update, it does not necessarily mean
-- that the column itself is updated
IF ( UPDATE(ESB) )
BEGIN
    -- If column esb has been changed to 1 or Q
    IF ((SELECT esb FROM INSERTED) in ('1','Q'))
    BEGIN
        -- count the number of (imminent) active processes
        SET @CONC = (SELECT COUNT(*)
                     FROM SOA.dbo.photos_TEST pc
                     WHERE pc.esb in ('0','R'))

        -- if maximum has not been reached
        IF NOT ( @CONC >= @MAXCONC )
        BEGIN
            -- set additional rows' esb to '0' to match @MAXCONC
            UPDATE TOP(@MAXCONC-@CONC) p2
            SET p2.esb = '0'
            FROM ( SELECT TOP(@MAXCONC-@CONC) p1.esb
                   FROM SOA.dbo.photos_TEST p1
                   INNER JOIN INSERTED i ON i.Value = p1.Value
                       AND i.ArrivalDateTime > p1.ArrivalDateTime
                   WHERE p1.esb = 'Q'
                   ORDER BY p1.arrivaldatetime ASC
                 ) p2
        END
    END
END
Try to rewrite your IF as:
IF EXISTS(SELECT 1 FROM INSERTED WHERE esb IN ('1','Q'))
...
