How to handle this data concurrency problem? (SQL Server)

I am building a financial application in which I expect a data concurrency issue.
Suppose an account ABC holds $500. A web user can transfer these funds to other accounts. This involves two steps: first checking the availability of funds, and second performing the transfer. I wrap both steps in a single transaction.
The problem: at some moment (say Time1) there are two or three separate requests (say transaction1, transaction2, transaction3) to transfer the same amount. The committed available amount is $500. If all transactions start at the same time, each will test whether $500 is available, each test will pass, and each will then transfer funds to another account.
I have read about transaction isolation levels, but I cannot decide which isolation level I should use; I am confused by them. Please help me.
Thanks

The aim is to prevent another process reading the balance while minimising blocking for other users. So use the "table as a queue" style of locking, like this:
SET XACT_ABORT, NOCOUNT ON;
BEGIN TRY
BEGIN TRANSACTION
SELECT @balance = Balance
FROM SomeTable WITH (ROWLOCK, HOLDLOCK, UPDLOCK)
WHERE Account = 'ABC'
--some checks
UPDATE ...
COMMIT TRANSACTION
END TRY
BEGIN CATCH
...
END CATCH
The alternative is to do it in one statement, which is more feasible when only one table is involved.
SET XACT_ABORT, NOCOUNT ON;
BEGIN TRY
--BEGIN TRANSACTION
UPDATE SomeTable WITH (ROWLOCK, HOLDLOCK, UPDLOCK)
SET Balance = Balance - @request
WHERE
Account = 'ABC' AND Balance >= @request;
IF @@ROWCOUNT <> 1
RAISERROR ('Not enough in account', 16, 1);
--COMMIT TRANSACTION
END TRY
BEGIN CATCH
...
END CATCH

To avoid withdrawing more than the available balance, you could do this:
update <table>
set amount = amount - @price
where amount >= @price
and account = @account
if @@rowcount = 1 print 'transaction went well' else print 'Insufficient funds'
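This pattern, checking the balance and decrementing it in the same atomic UPDATE, can be sketched as runnable code. The example below is a hedged illustration using Python's sqlite3 rather than SQL Server (the `Account` table and `withdraw` function are invented names); the point is only that the guard lives in the UPDATE's own WHERE clause, so two callers cannot both pass the check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Account (account TEXT PRIMARY KEY, amount INTEGER)")
conn.execute("INSERT INTO Account VALUES ('ABC', 500)")

def withdraw(conn, account, price):
    # The balance check and the decrement happen in one atomic statement,
    # so two withdrawals cannot both pass the check on the same funds.
    cur = conn.execute(
        "UPDATE Account SET amount = amount - ? "
        "WHERE amount >= ? AND account = ?",
        (price, price, account))
    return cur.rowcount == 1

first = withdraw(conn, "ABC", 400)   # succeeds, balance drops to 100
second = withdraw(conn, "ABC", 400)  # fails, only 100 left
```

The same shape works in T-SQL; the `@@rowcount` test above plays the role of `cur.rowcount` here.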

Related

"The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION" when running an UPDATE statement

I got an error when I ran an update statement.
For one record it worked fine, but for a chunk of records it gives me an error.
Also, why does it tell me that 64801 row(s) were affected, then 1 row(s), then 0? How should I interpret that?
This is the script:
update tblQuotes
set QuoteStatusID = 11, --Not Taken Up
QuoteStatusReasonID = 9 --"Not Competitive"
where CAST(EffectiveDate as DATE) < CAST('2013-11-27' as DATE)
and CompanyLocationGuid = '32828BB4-E1FA-489F-9764-75D8AF7A78F1' -- Plaza Insurance Company
and LineGUID = '623AA353-9DFE-4463-97D7-0FD398400B6D' --Commercial Auto
I added BEGIN TRANSACTION statement, but it still won't work.
BEGIN TRANSACTION
update tblQuotes
set QuoteStatusID = 11, --Not Taken Up
QuoteStatusReasonID = 9 --"Not Competitive"
where CAST(EffectiveDate as DATE) < CAST('2017-11-27' as DATE)
AND CompanyLocationGuid = '32828BB4-E1FA-489F-9764-75D8AF7A78F1' -- Plaza Insurance Company
and LineGUID = '623AA353-9DFE-4463-97D7-0FD398400B6D' --Commercial Auto
IF @@TRANCOUNT>0
COMMIT TRANSACTION
In my opinion this is a "flaw", if not a "bug", in SQL Server. When you COMMIT a transaction, @@TRANCOUNT is decremented by 1. But when you ROLLBACK any transaction, all transactions in the calling stack are rolled back. This means that any calling procedure that then tries to commit or roll back will hit this error, and you have lost the integrity of your calling stack.
I worked through this when building a mechanism to do unit testing on SQL Server. I get around it by always using named transactions, as shown in the example below. You can obviously also check XACT_STATE. The point is simply that, rather than blindly committing and rolling back anonymous transactions, you have better control if you manage transactions by name or transaction id.
For unit testing, I write a stored procedure as a test that calls the procedure under test. The unit test is in either serializable or snapshot mode and ONLY includes a rollback statement. I call the procedure under test, validate the results, build test output (pass/fail, parameters, etc.) as XML output, then everything gets rolled back. This gets around the need to build "mock data". I can use the data on any environment as the transaction is always rolled back.
--
-- get @procedure from OBJECT_NAME(@@PROCID)
-------------------------------------------------
DECLARE @procedure SYSNAME = N'a_procedure_name_is_a_synonym_so_can_be_longer_than_a_transaction_name'
, @transaction_id BIGINT;
DECLARE @transaction_name NVARCHAR(32) = RIGHT(@procedure + N'_tx', 32);
--
BEGIN TRANSACTION @transaction_name;
BEGIN
SELECT @transaction_id = [transaction_id]
FROM [sys].[dm_tran_active_transactions]
WHERE [name] = @transaction_name;
SELECT *
FROM [sys].[dm_tran_active_transactions]
WHERE [name] = @transaction_name;
-- Perform work here
END;
IF EXISTS
(SELECT *
FROM [sys].[dm_tran_active_transactions]
WHERE [name] = @transaction_name)
ROLLBACK TRANSACTION @transaction_name;
This error means that you have issued a COMMIT or COMMIT TRANSACTION without a matching BEGIN TRANSACTION, or that the number of commits is greater than the number of begun transactions. To avoid it, check for an open transaction on the current session before committing.
So a normal COMMIT TRANSACTION would be updated as below:
IF @@TRANCOUNT > 0
COMMIT TRANSACTION
There is a trigger that makes sure of the integrity of the QuoteStatusID. So in my WHERE clause I have to specify exactly which current status ID the policy must have in order to be updated.

Confused about which MS SQL Server LOCK would help in an INSERT scenario (concurrency)

Business scenario: this is a ticketing system with many users. When a ticket (stored in the 1st table below) comes into the application, any user can hit the ownership button to take ownership of it.
Only one user can take ownership of a given ticket. If two users hit the ownership button, the first one wins and the second gets another ticket, or a message that no ticket exists to take ownership of.
I am now facing a concurrency issue. I already have a lock implementation using another table (the 2nd table below).
I have two tables;
Table(Columns)
Ticket(TicketID-PK, OwnerUserID-FK)
TicketOwnerShipLock(TicketID-PK, OwnerUserID-FK, LockDate)
Note: here TicketID is set as the primary key.
Current lock implementation: whenever a user tries to own a ticket, the procedure puts an entry in the 2nd table with the TicketID, UserID and current date, then goes on to update OwnerUserID in the 1st table.
Before inserting that lock entry, the procedure checks whether any other user has already created a lock for the same ticket.
If there is already a lock, no lock is granted to the user. Otherwise the lock entry is inserted and the user can update the ticket ownership.
More info: there are many open tickets in the 1st table, and whenever a user tries to take ownership, we must find the next available ticket. So we need to find a ticket, do some calculation and set a status for it; there is one more column in the 1st table, StatusID, which gets set to Assigned.
Problem: somehow two users got ownership of the same ticket at exactly the same time; I even checked the milliseconds and they too were identical.
1. I would like to know if any MS SQL Server LOCK would help in this scenario.
2. Or do I need to block the table while inserting? (This 2nd table will not have much data, approx. fewer than 15 rows.)
Lock Creation Procedure Below:
ALTER PROCEDURE [dbo].[TakeOwnerShipGetLock]
@TicketId [uniqueidentifier],
@OwnerId [uniqueidentifier]
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION TakeOwnership
BEGIN TRY
DECLARE @Lock BIT
SET @Lock = 0
DECLARE @LockDate DATETIME
SELECT @LockDate = LockDate
FROM dbo.TakeOwnershipLock
WHERE TicketId = @TicketId
IF @LockDate IS NULL
AND NOT EXISTS ( SELECT 1
FROM dbo.TakeOwnershipLock as takeOwnership WITH (UPDLOCK)
INNER JOIN dbo.Ticket as Ticket WITH (NOLOCK)
ON Ticket.TicketID = takeOwnership.TicketId
WHERE takeOwnership.TicketId = @TicketId
AND Ticket.OwnerID is NULL )
BEGIN
INSERT INTO dbo.TakeOwnershipLock
( TicketId
,OwnerId
,LockDate
)
VALUES ( @TicketId
,@OwnerId
,GETDATE()
)
IF ( @@ROWCOUNT > 0 )
SET @Lock = 1
END
SELECT @Lock
COMMIT TRANSACTION TakeOwnership
END TRY
BEGIN CATCH
-- Test whether the transaction is active and committable.
IF XACT_STATE() = 1
BEGIN
COMMIT TRANSACTION TakeOwnership
SET @Lock = 1
SELECT @Lock
END
-- Test whether the transaction is uncommittable.
IF XACT_STATE() = -1
BEGIN
ROLLBACK TRANSACTION TakeOwnership
SET @Lock = 0
SELECT @Lock
END
END CATCH
END
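As a side note, a common alternative to a separate lock table is to claim the ticket atomically with a single conditional UPDATE: the WHERE clause only matches while the ticket is unowned, so of two concurrent callers exactly one succeeds. This is a hedged sketch, not the procedure from the question, shown here with Python's sqlite3 and invented names (`Ticket`, `take_ownership`); in SQL Server the same UPDATE would typically carry `WITH (ROWLOCK, UPDLOCK)` hints:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Ticket (TicketID INTEGER PRIMARY KEY, OwnerUserID TEXT)")
conn.execute("INSERT INTO Ticket VALUES (1, NULL)")

def take_ownership(conn, ticket_id, user_id):
    # The "OwnerUserID IS NULL" guard means the UPDATE only matches an
    # unowned ticket; the second caller finds no row to update.
    cur = conn.execute(
        "UPDATE Ticket SET OwnerUserID = ? "
        "WHERE TicketID = ? AND OwnerUserID IS NULL",
        (user_id, ticket_id))
    return cur.rowcount == 1

first = take_ownership(conn, 1, "alice")   # wins the ticket
second = take_ownership(conn, 1, "bob")    # loses: already owned
```

Because the check and the write are one statement, there is no window between "read the owner" and "set the owner" for a second user to slip through.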

How to prevent a multi-threaded application from reading the same SQL Server record twice

I am working on a system that uses multiple threads to read, process and then update database records. Threads run in parallel and try to pick records by calling a SQL Server stored procedure.
They call this stored procedure multiple times per second, looking for unprocessed records, and sometimes pick up the same record more than once.
I try to prevent this from happening this way:
UPDATE dbo.GameData
SET Exported = @Now,
ExportExpires = @Expire,
ExportSession = @ExportSession
OUTPUT Inserted.ID INTO #ExportedIDs
WHERE ID IN ( SELECT TOP(@ArraySize) GD.ID
FROM dbo.GameData GD
WHERE GD.Exported IS NULL
ORDER BY GD.ID ASC)
The idea is to mark a record as exported first, using an UPDATE with OUTPUT (remembering the record id), so that no other thread can pick it up again. Once a record is marked as exported, I can do some extra calculations and pass the data to the external system, hoping that no other thread picks the same record in the meantime, since the UPDATE is meant to secure the record first.
Unfortunately this doesn't seem to be working, and the application sometimes picks the same record twice anyway.
How do I prevent this?
Kind regards
Mariusz
I think you should be able to do this atomically using a common table expression. (I'm not 100% certain about this, and I haven't tested, so you'll need to verify that it works for you in your situation.)
;WITH cte AS
(
SELECT TOP(@ArrayCount)
ID, Exported, ExportExpires, ExportSession
FROM dbo.GameData WITH (READPAST)
WHERE Exported IS NULL
ORDER BY ID
)
UPDATE cte
SET Exported = @Now,
ExportExpires = @Expire,
ExportSession = @ExportSession
OUTPUT INSERTED.ID INTO #ExportedIDs
I have a similar set up and I use sp_getapplock. My application runs many threads and they call a stored procedure to get the ID of the element that has to be processed. sp_getapplock guarantees that the same ID would not be chosen by two different threads.
I have a MyTable with a list of IDs that my application checks in an infinite loop using many threads. For each ID there are two datetime columns: LastCheckStarted and LastCheckCompleted. They are used to determine which ID to pick. Stored procedure picks an ID that wasn't checked for the longest period. There is also a hard-coded period of 20 minutes - the same ID can't be checked more often than every 20 minutes.
CREATE PROCEDURE [dbo].[GetNextIDToCheck]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarID int = NULL;
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'SomeUniqueName_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- Find ID that wasn't checked for the longest period
SELECT TOP 1
@VarID = ID
FROM
dbo.MyTable
WHERE
LastCheckStarted <= LastCheckCompleted
-- this ID is not being checked right now
AND LastCheckCompleted < DATEADD(minute, -20, GETDATE())
-- last check was done more than 20 minutes ago
ORDER BY LastCheckCompleted;
-- Start checking
UPDATE dbo.MyTable
SET LastCheckStarted = GETDATE()
WHERE ID = @VarID;
-- There is no need to explicitly verify if we found anything.
-- If @VarID is null, no rows will be updated
END;
-- Return found ID, or no rows if nothing was found,
-- or failed to acquire the lock
SELECT
@VarID AS ID
WHERE
@VarID IS NOT NULL
;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
The second procedure is called by an application when it finishes checking the found ID.
CREATE PROCEDURE [dbo].[SetCheckComplete]
-- Add the parameters for the stored procedure here
@ParamID int
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'SomeUniqueName_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- Completed checking the given ID
UPDATE dbo.MyTable
SET LastCheckCompleted = GETDATE()
WHERE ID = @ParamID;
END;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
It does not work because multiple transactions might first execute the IN clause and find the same set of rows, then each perform the update, overwriting each other.
LukeH's answer is best; accept it.
You can also fix it by adding AND Exported IS NULL to the UPDATE's WHERE clause to rule out double updates.
Or make the transaction SERIALIZABLE. This will lead to some blocking and deadlocking, which can safely be handled by timeouts and retrying on deadlock. SERIALIZABLE is always safe for all workloads, but it may block/deadlock more often.
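The `AND Exported IS NULL` fix can be sketched as follows. This is a hedged illustration using Python's sqlite3 with invented names (`GameData`, `claim`): repeating the guard in the outer UPDATE means a row that was claimed between the inner SELECT and the outer UPDATE is simply skipped instead of being exported twice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE GameData (ID INTEGER PRIMARY KEY, Exported TEXT)")
conn.executemany("INSERT INTO GameData VALUES (?, NULL)", [(1,), (2,)])

def claim(conn, session):
    # The outer "Exported IS NULL" re-checks the guard at update time,
    # so a row already claimed by another session is never overwritten.
    cur = conn.execute(
        """UPDATE GameData SET Exported = ?
           WHERE Exported IS NULL
             AND ID IN (SELECT ID FROM GameData
                        WHERE Exported IS NULL
                        ORDER BY ID LIMIT 1)""",
        (session,))
    return cur.rowcount

n1 = claim(conn, "t1")  # claims ID 1
n2 = claim(conn, "t2")  # claims ID 2, never re-claims ID 1
```

In SQL Server the same guard goes into the UPDATE from the question; LukeH's CTE form additionally adds READPAST so waiting sessions skip locked rows.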

Rollback Transaction when concurrency check fails

I have a stored procedure which does a lot of probing of the database to determine whether some records should be updated.
Each record (Order) has a TIMESTAMP column called [RowVersion].
I store the candidate record ids and RowVersions in a table variable called @Ids:
DECLARE @Ids TABLE (id int, [RowVersion] Binary(8))
I get the count of candidates with the following:
DECLARE @FoundCount int
SELECT @FoundCount = COUNT(*) FROM @Ids
Since records may change between when I SELECT and when I eventually try to UPDATE, I need a way to check concurrency and ROLLBACK TRANSACTION if that check fails.
What I have so far:
BEGIN TRANSACTION
-- create new combinable order group
INSERT INTO CombinableOrders DEFAULT VALUES
-- update orders found into new group
UPDATE Orders
SET Orders.CombinableOrder_Id = SCOPE_IDENTITY()
FROM Orders AS Orders
INNER JOIN @Ids AS Ids
ON Orders.Id = Ids.Id
AND Orders.[RowVersion] = Ids.[RowVersion]
-- if the rows updated don't match the rows found, there must be a concurrency issue; roll back
IF (@@ROWCOUNT != @FoundCount)
BEGIN
ROLLBACK TRANSACTION
SET @Updated = -1
END
ELSE
COMMIT
From the above, I'm filtering the UPDATE with the stored [RowVersion]; this should skip any records that have since been changed (hopefully).
However, I'm not quite sure whether I'm using transactions and optimistic concurrency via TIMESTAMP correctly, or whether there are better ways to achieve my goals.
It's difficult to understand what logic you are trying to implement.
But, if you absolutely must perform several non-atomic actions in a procedure and make sure that the whole block of code is not executed again while it is running (for example, by another user), consider using sp_getapplock.
Places a lock on an application resource.
Your procedure may look similar to this:
CREATE PROCEDURE [dbo].[YourProcedure]
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'UniqueStringFor_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- perform your complex processing
-- populate table with IDs
-- update other tables using IDs
-- ...
END;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
When you SELECT the data, try using HOLDLOCK and UPDLOCK inside an explicit transaction. It will affect the concurrency of OTHER transactions, but not yours.
http://msdn.microsoft.com/en-us/library/ms187373.aspx

Does TABLOCKX have to be specified in each session?

I have two test transactions in two sessions. Assume the two transactions run simultaneously. What I am attempting is to let one transaction insert its invoice number correctly only after the other transaction is done, with no duplicates. I did it as below, but if I remove WITH (TABLOCKX) in session 2, it no longer works. I checked Books Online but found no answer. Can anybody help? SERIALIZABLE won't work, since the two SELECTs need to be exclusive to each other here. Thanks.
In session 1:
begin transaction
declare @i int
select @i=MAX(InvNumber) from Invoice
with(tablockx)
where LocName='A'
waitfor delay '00:00:10'
set @i=@i+1
insert into Invoice values('A',@i);
commit
In session 2:
begin transaction
declare @i int
select @i=MAX(InvNumber) from Invoice
with(tablockx)
where LocName='A'
set @i=@i+1
insert into Invoice values('A',@i);
commit
That will work, but it also completely blocks all other access to the table.
You can potentially lock at a lower granularity (than table) and mode (than exclusive) if you do WITH(UPDLOCK, HOLDLOCK).
HOLDLOCK gives serializable semantics so can just lock the range at the top of the index (if you have one on LocName,InvNumber).
UPDLOCK ensures two concurrent transactions can't both hold the same lock but, unlike exclusive, doesn't block other (normal) readers that aren't using the hint.
BEGIN TRANSACTION
DECLARE @i INT
SELECT @i = MAX(InvNumber)
FROM Invoice WITH(UPDLOCK, HOLDLOCK)
WHERE LocName = 'A'
WAITFOR DELAY '00:00:10'
SET @i = @i + 1
INSERT INTO Invoice
VALUES ('A',
@i);
COMMIT
Alternatively you could just use sp_getapplock to serialize access to that code.
