I have what seems like a simple problem but can't find a solution. I have a long-running stored procedure that updates a table at the beginning and at the end of its work. The problem is, the table is locked during the whole process. Here's a simplified version:
ALTER PROCEDURE [dbo].[Proc_FullRefresh]
AS
BEGIN
UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running'
WAITFOR DELAY '00:00:30'
END
The problem is, I'm unable to select that row from the Settings table while the whole procedure is running. I even tried wrapping in transactions to see if that would help:
BEGIN TRAN
UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running'
COMMIT;
BEGIN TRAN
WAITFOR DELAY '00:00:30'
COMMIT
But that didn't work either. Is there any way to release the lock on the Settings table while the procedure is doing its other stuff?
You are running the stored procedure in a transaction, otherwise the UPDATE statement would complete immediately and be visible from other sessions. When you add additional BEGIN TRAN/COMMIT pairs, you're actually creating "nested transactions": the locks from the UPDATE are held until the commit of the outer (real) transaction.
So just don't run the procedure in a transaction.
Some clients and data access/ORM frameworks start a transaction automatically, but most require you to explicitly start a transaction.
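If you're not sure whether your client or framework is opening one, a quick diagnostic (just a sketch, not part of the original procedure) is to check @@TRANCOUNT at the top of the procedure:
ALTER PROCEDURE [dbo].[Proc_FullRefresh]
AS
BEGIN
    -- Diagnostic only: a non-zero @@TRANCOUNT here means the caller already opened
    -- a transaction, so the UPDATE's locks will be held until that transaction commits.
    IF @@TRANCOUNT > 0
        PRINT 'Called inside an open transaction; locks from the UPDATE will be held until the caller commits.';

    UPDATE Settings SET SettingValue = 'true' WHERE SettingName = 'Running';

    WAITFOR DELAY '00:00:30';
END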
Using SQL Server 2016, I wish to merge data from a SourceTable to a DestinationTable with a simple procedure containing a simple insert/update/delete on the same table.
The SourceTable is filled by several different applications, and they call the MergeOrders stored procedure to merge their uploaded rows from SourceTable into DestinationTable.
There can be several instances of MergeOrders stored procedure running in parallel.
I get a lot of locking, but that's normal; the issue is that sometimes I get "RowGroup deadlocks", which I cannot afford.
What is the best way to execute such a merge operation in this parallel environment?
I am thinking about TABLOCK or SERIALIZABLE hints, or maybe application locks to serialize the access, but I'm interested in whether there is a better way.
An app lock will serialize sessions attempting to run this procedure. It should look like this:
create or alter procedure ProcWithAppLock
with execute as owner
as
begin
set xact_abort on;
set nocount on;
begin transaction;
declare @lockName nvarchar(255) = object_name(@@procid) + '-applock';
exec sp_getapplock @lockName,'Exclusive','Transaction',null,'dbo';
--do stuff
waitfor delay '00:00:10';
select getdate() dt, object_name(@@procid);
exec sp_releaseapplock @lockName, 'Transaction', 'dbo';
commit transaction;
end
There are a couple of subtle things in this template. First off, it doesn't have a CATCH block; it relies on XACT_ABORT to release the app lock in case of an error. You also want to explicitly release the app lock at the end, in case this procedure is called in the context of a longer-running transaction. And finally, the principal for the lock is set to dbo so that no non-dbo user can acquire a conflicting lock. This also requires that the procedure be run WITH EXECUTE AS OWNER, as the application user would not normally be dbo.
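If you want to see the app lock while the procedure is running, you can query sys.dm_tran_locks from another session (this query is just an observation aid, not part of the template):
SELECT resource_type, resource_description, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'APPLICATION';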
I have a database table with thousands of entries, and multiple worker threads which each pick up one row at a time and do some work on it (roughly one second per row). While picking up a row, each thread updates a flag on the database row (like a timestamp) so that the other threads do not pick it up. But the problem is that I end up in a scenario where multiple threads pick up the same row.
My general question is: what design approach should I follow here to ensure that each thread picks up a unique row and does its task independently?
Note: multiple threads run in parallel to speed up processing of the database rows, so I would like the critical section or exclusive lock to be as small as possible.
Just to give some context, below is the stored proc which picks up the rows from the table after it has updated the flag on the row. Please note that the stored proc is not compilable as I have removed unnecessary portions from it. But generally that's the structure of it.
The problem happens when multiple threads execute the stored proc in parallel. The change made by the UPDATE statement (note that the update is done after taking a lock) in one thread is not visible to the other threads until the transaction is committed. And because there is a SELECT statement (which takes around 50 ms) between the UPDATE and the COMMIT TRANSACTION, in about 20% of cases the UPDATE statement in a thread picks up a row which has already been processed.
I hope I am clear enough here.
USE [mydatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetRequest]
AS
BEGIN
-- some variable declaration here
BEGIN TRANSACTION
-- check if there are blocking rows in the request table
-- FM: Remove records that don't qualify for operation.
-- delete operation on the table to remove rows we don't want to process
delete FROM request where somecondition = 1
-- Identify the requests to process
DECLARE @TmpTableVar table(TmpRequestId int NULL);
UPDATE TOP(1) request
WITH (ROWLOCK)
SET Lock = DateAdd(mi, 5, GETDATE())
OUTPUT INSERTED.ID INTO @TmpTableVar
FROM request tur
WHERE (Lock IS NULL OR GETDATE() > Lock) -- not locked or lock expired
AND GETDATE() > NextRetry -- next in the queue
IF(@@ROWCOUNT = 0)
BEGIN
ROLLBACK TRANSACTION
RETURN
END
select @RequestID = TmpRequestId from @TmpTableVar
-- Get details about the request that has been just updated
SELECT somerows
FROM request
WHERE somecondition = 1
COMMIT TRANSACTION
END
The analog of a critical section in SQL Server is sp_getapplock, which is simple to use. Alternatively you can SELECT the row to update with (UPDLOCK,READPAST,ROWLOCK) table hints. Both of these require a multi-statement transaction to control the duration of the exclusive locking.
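For reference, here is a minimal sketch of the UPDLOCK/READPAST variant applied to a queue table shaped roughly like the one in the question (the table and column names are assumptions based on the posted procedure, not the actual schema):
CREATE PROCEDURE dbo.GetRequest_ReadPast
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;

    DECLARE @RequestID int;

    -- UPDLOCK keeps other sessions from claiming this row until we commit;
    -- READPAST makes them skip it instead of blocking behind it.
    SELECT TOP (1) @RequestID = ID
    FROM dbo.Request WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE (Lock IS NULL OR GETDATE() > Lock)
      AND GETDATE() > NextRetry
    ORDER BY ID;

    IF @RequestID IS NULL
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END

    UPDATE dbo.Request
    SET Lock = DATEADD(mi, 5, GETDATE())
    WHERE ID = @RequestID;

    -- The ~50 ms detail SELECT can stay inside the transaction: other workers
    -- skip this row (READPAST) instead of re-reading it before the COMMIT.
    SELECT *
    FROM dbo.Request
    WHERE ID = @RequestID;

    COMMIT TRANSACTION;
END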
You need to set a transaction isolation level in SQL to isolate your row, but this can impact your performance.
Look at the sample:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
GO
SELECT ID, NAME, FLAG FROM SAMPLE_TABLE WHERE FLAG=0
GO
UPDATE SAMPLE_TABLE SET FLAG=1 WHERE ID=1
GO
COMMIT TRANSACTION
Finally, there is no single best isolation level to use. You need to analyze the positive and negative points of each isolation level and test your system's performance.
More information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql
http://www.besttechtools.com/articles/article/sql-server-isolation-levels-by-example
https://en.wikipedia.org/wiki/Isolation_(database_systems)
I have a chunk of SQL code that has the following format:
SET IMPLICIT_TRANSACTIONS ON
-- Insert or Update Statement #1
GO
-- Insert or Update Statement #2
GO
IF @@TRANCOUNT > 0 COMMIT TRAN
SET IMPLICIT_TRANSACTIONS OFF
My question: is statement 1 in the same transaction as statement 2, even though they are in different batches? I'd believe so based on my reading on Google, but I'd like some second opinions.
Thanks!
It depends.
If both statements are one of the following:
ALTER TABLE
FETCH
REVOKE
BEGIN TRANSACTION
GRANT
SELECT
CREATE
INSERT
TRUNCATE TABLE
DELETE
OPEN
UPDATE
DROP
then the answer is yes, because if the connection is already in an open transaction, the statements above do not start a new transaction.
If, however, statement 2 is BEGIN TRANSACTION, then it will cause two nested transactions to be open.
http://msdn.microsoft.com/en-us/library/ms187807(v=sql.100).aspx
And the GO command is just a batch separator; it doesn't start a new transaction.
A transaction can be wrapped around multiple batches.
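You can verify this yourself with @@TRANCOUNT (DemoTable below is just a placeholder name):
SET IMPLICIT_TRANSACTIONS ON
GO
INSERT INTO DemoTable (Col1) VALUES (1)   -- the first statement starts the implicit transaction
SELECT @@TRANCOUNT AS TranCountBatch1     -- returns 1
GO
UPDATE DemoTable SET Col1 = 2             -- same transaction, new batch
SELECT @@TRANCOUNT AS TranCountBatch2     -- still 1: GO did not end the transaction
GO
IF @@TRANCOUNT > 0 COMMIT TRAN
SET IMPLICIT_TRANSACTIONS OFF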
I have a stored procedure which is called inside a trigger on Insert/Update/Delete.
The problem is that there is a certain code block inside this SP which is not critical.
Hence I want to ignore any errors arising from this code block.
I inserted this code block inside a TRY CATCH block. But to my surprise I got the following error:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Then I tried using SAVE & ROLLBACK TRANSACTION along with TRY CATCH, that too failed with the following error:
The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.
My server version is: Microsoft SQL Server 2008 (SP2) - 10.0.4279.0 (X64)
Sample DDL:
IF OBJECT_ID('TestTrigger') IS NOT NULL
DROP TRIGGER TestTrigger
GO
IF OBJECT_ID('TestProcedure') IS NOT NULL
DROP PROCEDURE TestProcedure
GO
IF OBJECT_ID('TestTable') IS NOT NULL
DROP TABLE TestTable
GO
CREATE TABLE TestTable (Data VARCHAR(20))
GO
CREATE PROC TestProcedure
AS
BEGIN
SAVE TRANSACTION Fallback
BEGIN TRY
DECLARE @a INT = 1/0
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION Fallback
END CATCH
END
GO
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
EXEC TestProcedure
END
GO
Code to replicate the error:
BEGIN TRANSACTION
INSERT INTO TestTable VALUES('data')
IF @@ERROR > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
GO
I was going through the same torment, and I just solved it!!!
Just add this single line at the very first step of your TRIGGER and you're going to be fine:
SET XACT_ABORT OFF;
In my case, I'm handling the error by feeding a specific table with the batch that caused the error and the error variables from SQL.
The default value of XACT_ABORT inside a trigger is ON, so the entire transaction won't be committed even if you're handling the error inside a TRY...CATCH block (just as I'm doing). Setting it to OFF lets the transaction be committed even when an error occurs.
However, I didn't test it when the error is not handled...
For more info:
SET XACT_ABORT (Transact-SQL) | Microsoft Docs
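In the trigger from the question, that placement would look like this (only the SET line is new; the rest is the DDL from the question):
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET XACT_ABORT OFF;   -- must run before anything that can raise an error
    EXEC TestProcedure
END
GO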
I'd suggest re-architecting this so that you don't poison the original transaction - maybe have the trigger send a Service Broker message (or just insert the relevant data into some form of queue table), so that the "non-critical" part can take place in a completely independent transaction.
E.g. your trigger becomes:
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
INSERT INTO QueueTable (Col1, Col2)
SELECT COALESCE(i.Col1, d.Col1), COALESCE(i.Col2, d.Col2)
FROM inserted i
FULL OUTER JOIN deleted d ON i.KeyCol = d.KeyCol -- KeyCol is a placeholder for the table's primary key
END
GO
You shouldn't do anything inside a trigger that might fail, unless you do want to force the transaction that initiated the trigger action to also fail.
This is a very similar question to Why try catch does not suppress exception in trigger
Also see the answer here T-SQL try catch transaction in trigger
I don't think you can use savepoints inside a trigger. I mean, you can, but I googled about it and saw a few people saying they don't work. If you replace your SAVE TRANSACTION with a BEGIN TRANSACTION, it compiles. Of course, that is not necessary, because you have the outer transaction control and the inner rollback would roll back everything.
If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others?
In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)?
I have a stored procedure defined something like this in SQL Server 2000:
CREATE PROCEDURE toplevel_proc ..
AS
BEGIN
...
while @row_count <= @max_rows
begin
select @parameter ... where rownum = @row_count
exec nested_proc @parameter
select @row_count = @row_count + 1
end
END
First off, there is no such thing as a nested transaction in SQL Server.
However, you can use SAVEPOINTs, as per this example (too long to reproduce here, sorry) from fellow SO user Remus Rusanu.
Edit: AlexKuznetsov mentioned (he deleted his answer though) that this won't work if a transaction is doomed. This can happen with SET XACT_ABORT ON or some trigger errors.
From BOL:
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement.
I also found the following from another thread here:
Be aware that SQL Server transactions aren't really nested in the way you might think. Once an explicit transaction is started, a subsequent BEGIN TRAN increments @@TRANCOUNT while a COMMIT decrements the value. The entire outermost transaction is committed when a COMMIT results in a zero @@TRANCOUNT. But a ROLLBACK without a savepoint rolls back all work, including the outermost transaction.
If you need nested transaction behavior, you'll need to use SAVE TRANSACTION instead of BEGIN TRAN and use ROLLBACK TRAN [savepoint_name] instead of ROLLBACK TRAN.
So it would appear possible.
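For reference, here is a minimal sketch of that savepoint pattern applied to the nested procedure (the names are illustrative, and the TRY...CATCH/XACT_STATE() approach used by the linked template requires SQL Server 2005 or later, whereas the question mentions SQL Server 2000; the linked answer has the full, more careful version):
CREATE PROCEDURE nested_proc @parameter int
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @tranCount int;
    SET @tranCount = @@TRANCOUNT;

    IF @tranCount > 0
        SAVE TRANSACTION NestedProcSave;   -- caller has a transaction: mark a savepoint
    ELSE
        BEGIN TRANSACTION;                 -- standalone call: open a real transaction

    BEGIN TRY
        -- ... the real work of nested_proc goes here ...

        IF @tranCount = 0
            COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- XACT_STATE() = 1 means the transaction is still usable, so we can
        -- roll back only this procedure's work instead of everything.
        IF @tranCount > 0 AND XACT_STATE() = 1
            ROLLBACK TRANSACTION NestedProcSave;
        ELSE IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        -- log or re-throw the error here as appropriate
    END CATCH
END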