T-SQL stored procedure transaction concurrency - sql-server

I have a situation where I need to wrap an UPDATE statement in a stored procedure (sp_update_queue) inside a transaction. But I'm wondering what would happen if two threads use the same connection but execute different queries, and one rolls back a transaction it started.
For example, ThreadA calls sp_update_queue to update table QUEUED_TASKS, but before sp_update_queue commits or rolls back, ThreadB executes some other update or insert on a different table, say CUSTOMERS. Then, after ThreadB has finished, sp_update_queue happens to encounter an error and calls rollback.
Because they are both using the same connection, would the rollback also roll back the changes made by ThreadB, regardless of whether ThreadB made its changes within a transaction or not?

Whichever thread acquires the resource first will lock it (given a suitable isolation level), so the second thread will wait for the required resource.
Note: each thread will have its own session ID.
UPDATED
In your scenario, although both threads are using the same connection, they do not touch any common resources (ThreadA is dealing with table X and ThreadB with table Y), so a commit or rollback by either thread does not impact the other.
Read more about Isolation Level
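
As a minimal illustration of that blocking behaviour, assuming two separate sessions (the task_id and status columns are invented for the example):

-- Session 1 (e.g. session_id 51)
BEGIN TRANSACTION;
UPDATE dbo.QUEUED_TASKS SET [status] = 'RUNNING' WHERE task_id = 1; -- takes an exclusive (X) row lock

-- Session 2 (e.g. session_id 52) blocks here until session 1 commits or rolls back:
UPDATE dbo.QUEUED_TASKS SET [status] = 'FAILED' WHERE task_id = 1;

-- Session 1
ROLLBACK; -- releases the lock; only session 1's own work is undone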

Related

Does sp_getapplock cause SQL Server performance problems?

I have a stored procedure which cannot be executed concurrently. Multiple processes call this stored procedure, but it is of vital importance that the processes access the stored procedure sequentially.
The stored procedure basically scans a table for a primary key that meets various conditions, marks the record as in-use by the calling process, and then passes the primary key back to the calling process.
Anywhere from one to a dozen instances of the calling process could exist, depending upon how much work exists.
I decided to prevent concurrency by using sp_GetAppLock inside the stored procedure. I grab an exclusive transaction lock, with @Resource set to a string that is only used inside this stored procedure. The only thing that is ever blocked by this lock is the execution of this stored procedure.
The call inside the stored procedure looks like this:
EXEC sp_getapplock @Resource = 'My Unique String Here'
    ,@LockMode = 'Exclusive'    -- type of lock
    ,@LockOwner = 'Transaction' -- 'Transaction' or 'Session'
    ,@LockTimeout = 5000;       -- ms to wait before timing out
It works swimmingly. If a dozen instances of my process are running, only one of them executes the stored procedure at any one point in time, while the other 11 obediently queue up and wait their turn.
The only problem is our DBA. He is a very good DBA who constantly monitors the database for blocking and receives an alert when it exceeds a certain threshold. My use of sp_getapplock triggers a lot of alerts. My DBA claims that the blocking in-and-of-itself is a performance problem.
Is his claim accurate? My feeling is that this is "good" blocking, in that the only thing being blocked is execution of the stored procedure, and I want that to be blocked. But my DBA says that forcing SQL Server to enforce this blocking is a significant drain on resources.
Can I tell him to "put down the crack pipe," as we used to say? Or should I re-write my application to avoid the need for sp_getapplock?
The article I read which sold me on sp_getapplock is here: sp_getapplock
Unfortunately, I think your DBA has a point: blocking does drain resources, and this type of blocking puts extra load on the server.
Let me explain how:
The proc gets called; SQL Server assigns it a worker thread from the thread pool and it starts executing.
Calls 2, 3, 4, ... come in; SQL Server assigns worker threads to these calls as well. The threads start executing, but because of the exclusive lock you have obtained, they are all suspended, sitting on the waiter list until the resource becomes available.
Worker threads, which are limited in number on any SQL Server instance, are being tied up by your process.
Now SQL Server is accumulating waits because of something a developer decided to do.
As DBAs we want you to come to SQL Server, get what you need, and leave as soon as possible. If you intentionally stay there, holding on to resources and putting SQL Server under pressure, it will piss off the DBA.
I think you need to reconsider your application design and come up with an alternative solution.
Maybe a "Process Table" in the SQL Server, update it with some value when a process start and for each call check the process table first before you fire the next call for that proc. So the wait stuff happens in the application layer and only when the resources are available then go to DB.
"The stored procedure basically scans a table for a primary key that meets various conditions, marks the record as in-use by the calling process, and then passes the primary key back to the calling process."
Here is a different way to do it inside the SP:
BEGIN TRANSACTION;

SELECT x.PKCol
FROM dbo.[myTable] x WITH (XLOCK, ROWLOCK, READPAST) -- FASTFIRSTROW is deprecated; use OPTION (FAST 1) if needed
WHERE x.col1 = @col1...

IF @@ROWCOUNT > 0
BEGIN
    UPDATE dbo.[myTable]
    SET ...
    WHERE col1 = @col1;
END

COMMIT TRANSACTION;
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
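
If you want to avoid the separate SELECT and UPDATE altogether, the same claim-the-next-row pattern can often be collapsed into a single atomic statement with OUTPUT (a sketch; the in_use flag and PKCol column are assumed, not from the original post):

UPDATE TOP (1) q
SET q.in_use = 1
OUTPUT inserted.PKCol
FROM dbo.[myTable] q WITH (ROWLOCK, READPAST)
WHERE q.in_use = 0;

Because the read, the lock, and the flag update happen in one statement, there is no window in which another session can grab the same row.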

Transaction deadlocked while long select

I have a nightly job which executes a stored procedure that goes over a table and fetches records to be inserted into another table.
The procedure runs for about 4-5 minutes, during which it executes 6 SELECTs over a table with ~3M records.
While this procedure is running, exceptions are thrown from another stored procedure which is trying to update the same table:
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
I have read the question Why use a READ UNCOMMITTED isolation level?, but didn't come to a conclusion about what best fits my scenario, as one of the comments stated:
"The author seems to imply that read uncommitted / no lock will return
whatever data was last committed. My understanding is read uncommitted
will return whatever value was last set even from uncommitted
transactions. If so, the result would not be retrieving data "a few
seconds out of date". It would (or at least could if the transaction
that wrote the data you read gets rolled back) be retrieving data that
doesn't exist or was never committed"
Taking into consideration that I only care about the state of the rows at the moment the nightly job started (updates made in the meantime will be picked up by the next run):
What would be the most appropriate approach?
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
This normally happens when you read data with the intention of updating it later while taking only a shared lock. The subsequent UPDATE statement then can't acquire the necessary update locks, because they are blocked by the shared locks already acquired in the other session, and that causes the deadlock.
To resolve this you can select the records using UPDLOCK, like the following:
SELECT * FROM [Your_Table] WITH (UPDLOCK) WHERE A=B
This takes the necessary update lock on the records in advance. Other sessions can still read the rows (shared locks are compatible with update locks), but they cannot acquire their own update or exclusive locks on them, which prevents this kind of deadlock.
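Note that UPDLOCK only helps if the SELECT and the subsequent UPDATE run inside the same transaction; otherwise the update lock is released as soon as the SELECT completes. A sketch of the pattern, reusing the placeholder names above:

BEGIN TRANSACTION;

SELECT * FROM [Your_Table] WITH (UPDLOCK) WHERE A = B;

-- ...decide what to change based on what was read...

UPDATE [Your_Table] SET ... WHERE A = B;

COMMIT TRANSACTION;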
Another common cause is the cycle deadlock, which comes from the order of the statements in your transactions: each transaction ends up waiting on a lock held by the other. For this type of scenario you have to revisit your queries and fix the ordering so that all transactions touch objects in the same order.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
This one is clear: you need to work on query performance, and keep records locked for as short a time as possible.

SQL Server Deadlock

My scheduled job runs 6 times a day.
Sometimes it fails because of a deadlock. I tried to identify who is blocking my session.
I searched and discovered SQL Profiler, but it is not showing the exact result. How can I identify deadlocks historically, with T-SQL or any other way?
When the job fails, the error message shown is:
transaction (process id ) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. rerun the transaction.
This should help identify deadlock victims or causes of deadlocking: https://ask.sqlservercentral.com/questions/5982/how-can-i-identify-all-processes-involved-in-a-dea.html
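If nothing was tracing at the time, you can often still retrieve recent deadlock graphs from the default system_health Extended Events session with T-SQL along these lines (a sketch; the ring buffer keeps only a limited history):

SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = N'system_health' AND st.target_name = N'ring_buffer'
) AS t
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);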
If you want to reduce the risk of your process deadlocking here are some strategies...
Try to INSERT/UPDATE/DELETE tables in the same order. E.g. if one process is doing this:
BEGIN TRAN; UPDATE TableA; UPDATE TableB; COMMIT;
while another process is doing this:
BEGIN TRAN; UPDATE TableB; UPDATE TableA; COMMIT;
there is a risk that one process will deadlock the other. The longer the transactions take to complete, the higher the risk of deadlock. SQL Server will choose one of the processes as the "Deadlock Victim" (by default, the one that is cheapest to roll back) and abort it.
Try to minimise the code involved in a transaction, i.e. fewer lines of INSERT/UPDATE/DELETE code between your BEGIN TRANSACTION and COMMIT TRANSACTION statements.
Process smaller batches of data if possible. If you are processing a large number of rows, try to add batching, so the code locks smaller batches of data at any given time.
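
For example, a DELETE that would otherwise lock millions of rows in one transaction can be chunked (a sketch; the table name and filter are hypothetical):

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- each iteration is its own autocommit transaction,
    -- so locks are only held for the duration of one batch
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE processed = 1;

    SET @rows = @@ROWCOUNT; -- stop once a batch deletes nothing
END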

Postgres behaviour of concurrent DELETE RETURNING queries

If I have a database transaction which goes along the lines of:
DELETE FROM table WHERE id = ANY(ARRAY[id1, id2, id3,...]) RETURNING foo, bar;
if num_rows_returned != num_rows_in_array then
rollback and return
Do stuff with deleted data...
Commit
My understanding is that the DELETE query will lock those rows until the transaction is committed or rolled back. According to the Postgres 9.1 docs:
An exclusive row-level lock on a specific row is automatically
acquired when the row is updated or deleted. The lock is held until
the transaction commits or rolls back, just like table-level locks.
Row-level locks do not affect data querying; they block only writers
to the same row.
I am using the default read committed isolation level in postgres 9.1.13
I would take from this that I should be OK, but I want to ensure that this means the following things are true:
Only one transaction may delete and return a row from this table, unless a previous transaction was rolled back.
This means "Do stuff with deleted data" can only be done once per row.
If two transactions try to do the above at once with conflicting rows, one will always succeed (ignoring system failure), and one will always fail.
Concurrent transactions may succeed when there is no crossover of rows.
If a transaction is unable to delete and return all rows, it will rollback and thus not delete any rows. A transaction may try to delete two rows for example. One row is already deleted by another transaction, but the other is free to be returned. However since one row is already deleted, the other must not be deleted and processed. Only if all specified ids can be deleted and returned may anything take place.
Under the normal idea of concurrency, processes/transactions do not fail when they are locked out of data; they wait.
The DBMS implements execution in such a way that transactions advance while seeing effects from other transactions only as permitted by the isolation level. (Only in the case of a detected deadlock is a transaction aborted, and even then its execution begins again, and the kill is not evident to its next execution or to other transactions, except as permitted by the isolation level.) Under the SERIALIZABLE isolation level this means that the database changes as if all transactions happened without overlap in some order. Other levels allow a transaction to see certain effects of the overlapped execution of other transactions.
However, in the case of PostgreSQL under SERIALIZABLE, when a transaction tries to commit and the DBMS sees that committing it would give non-serialized behaviour, the transaction is aborted with a notification but not automatically restarted. (Note that this is not a failure caused by attempted access to a locked resource.)
(Prior to 9.1, PostgreSQL SERIALIZABLE did not give SQL-standard (serialized) behaviour: "To retain the legacy Serializable behavior, Repeatable Read should now be requested.")
The locking protocols are how actual execution gets interleaved to maximize throughput while keeping that true. All locking does is prevent overlapped accesses during actual execution from breaking the apparent serialized execution.
Explicit locking by transaction code also just causes waiting.
Your question does not reflect this. You seem to think that attempted access to a locked resource by the implementation aborts a transaction. That is not so.
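To make that concrete under READ COMMITTED, here is a sketch of two overlapping transactions (using a hypothetical items table in place of the question's table):

-- Session 1
BEGIN;
DELETE FROM items WHERE id = ANY(ARRAY[1, 2]) RETURNING foo, bar;  -- row locks taken on rows 1 and 2

-- Session 2: this statement blocks (it does not fail) until session 1 finishes
BEGIN;
DELETE FROM items WHERE id = ANY(ARRAY[2, 3]) RETURNING foo, bar;

-- If session 1 COMMITs: session 2 deletes and returns only row 3 (row 2 no
-- longer exists), so the application sees the row-count mismatch and rolls back.
-- If session 1 ROLLBACKs: session 2 deletes and returns rows 2 and 3.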

sp_getapplock to synchronize concurrent access to in-memory tables

I have about 20 stored procedures that consume each other, forming a tree-like dependency chain.
The stored procedures however use in-memory tables for caching and can be called concurrently from many different clients.
To protect against concurrent update / delete attempts against the in-memory tables, I am using sp_getapplock and SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;.
I am using a hash of the stored procedure parameters that is unique to each stored procedure, but multiple concurrent calls to the same stored procedure with the same parameters should generate the same hash. It's this equality of the hash for concurrent calls to the same stored proc with the same parameters that gives me a useful resource name to obtain our applock against.
Below is an example:
BEGIN TRANSACTION;

EXEC @LOCK_STATUS = sp_getapplock @Resource = [SOME_HASH_OF_PARAMETERS_TO_THE_SP], @LockMode = 'Exclusive';

...some stored proc code...

IF FAILURE
BEGIN
    ROLLBACK;
    THROW [SOME_ERROR_NUMBER];
END

...some stored proc code...

COMMIT TRANSACTION;
Despite wrapping everything in an applock which should block any concurrent updates or deletes, I still get error 41302:
The current transaction attempted to update a record that has been
updated since this transaction started. The transaction was aborted.
Uncommittable transaction is detected at the end of the batch. The
transaction is rolled back.
Am I using sp_getapplock incorrectly? It seems like the approach I am suggesting should work.
The second you begin your transaction involving a memory-optimized table you get your "snapshot", which is based on the time the transaction started, for optimistic concurrency resolution. Unfortunately your lock is taken after the snapshot, so it is still entirely possible to get optimistic concurrency failures.
Consider two transactions that need the same lock starting at once. They both begin their transactions "simultaneously", before either obtains the lock or modifies any rows, so their snapshots look exactly the same, because no data has been modified yet. Next, one transaction obtains the lock and proceeds to make its changes while the other is blocked. This transaction commits fine, because it was first, and so the snapshot it refers to still matches the data in memory. The other transaction now obtains the lock, but its snapshot is already invalid (it just doesn't know it yet). It proceeds to execute, realizes at the end that its snapshot is invalid, and throws you the error. In truth the transactions don't even need to start near-simultaneously; the second transaction just needs to start before the first commits.
You need to enforce the lock either at the application level, or by using sp_getapplock at the session level, so that the lock is acquired before the transaction (and its snapshot) begins.
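A sketch of the session-owned variant, so the lock is acquired before the transaction (and therefore before its snapshot) begins; @ParamHash stands in for your parameter hash:

DECLARE @LOCK_STATUS int;

-- take the app lock BEFORE the transaction starts
EXEC @LOCK_STATUS = sp_getapplock @Resource = @ParamHash,
                                  @LockMode = 'Exclusive',
                                  @LockOwner = 'Session',
                                  @LockTimeout = 5000;
IF @LOCK_STATUS >= 0
BEGIN
    BEGIN TRANSACTION;  -- the snapshot is now taken while we hold the lock
    -- ...some stored proc code...
    COMMIT TRANSACTION;

    -- session-owned locks are not released by COMMIT, so release explicitly
    EXEC sp_releaseapplock @Resource = @ParamHash, @LockOwner = 'Session';
END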
Hope this helped.
