My scheduled job runs 6 times a day.
Sometimes it fails because of a deadlock. I tried to identify who is blocking my session.
I searched and discovered SQL Server Profiler, but it doesn't show the exact result. How can I identify historical deadlocks with T-SQL, or any other way?
When the job fails, the error message is shown below:
Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
This should help identify deadlock victims or causes of deadlocking: https://ask.sqlservercentral.com/questions/5982/how-can-i-identify-all-processes-involved-in-a-dea.html
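As one common approach, you can pull historical deadlock graphs with T-SQL from the built-in system_health Extended Events session. This is a sketch assuming that default session is running (it is, on SQL Server 2008 and later):

-- Pull recent deadlock graphs captured by the system_health session's ring buffer
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS xml) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);

Note that the ring buffer only holds a limited window of recent events, so older deadlocks may already have been recycled out of it.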
If you want to reduce the risk of your process deadlocking, here are some strategies...
Try to INSERT/UPDATE/DELETE tables in the same order. E.g. if one process is doing this:
BEGIN TRAN; UPDATE TableA; UPDATE TableB; COMMIT;
while another process is doing this:
BEGIN TRAN; UPDATE TableB; UPDATE TableA; COMMIT;
there is a risk that one process will deadlock the other. The greater the time each transaction takes to complete, the higher the risk of a deadlock. SQL Server will simply choose one process as the "deadlock victim" (by default, the one that is cheapest to roll back) and kill it.
Try to minimise the code involved in a transaction, i.e. put fewer INSERT/UPDATE/DELETE statements between your BEGIN TRANSACTION and COMMIT TRANSACTION statements.
Process smaller batches of data if possible. If you are processing a large number of rows, add batching so the code locks smaller sets of rows at any given time, as sketched below.
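Here is a minimal sketch of the batching idea; the table dbo.StagingRows and its Processed flag are hypothetical stand-ins for your own schema:

DECLARE @BatchSize int = 5000, @Rows int = 1;

WHILE @Rows > 0
BEGIN
    BEGIN TRANSACTION;

    -- Each iteration locks at most @BatchSize rows instead of the whole table
    UPDATE TOP (@BatchSize) dbo.StagingRows
    SET Processed = 1
    WHERE Processed = 0;

    SET @Rows = @@ROWCOUNT; -- capture before COMMIT resets it

    COMMIT TRANSACTION;
END;

Each short transaction releases its locks quickly, which shrinks the window in which another process can end up in a lock cycle with yours.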
Related
I have a nightly job which executes a stored procedure that goes over a table and fetches records to be inserted into another table.
The procedure takes about 4-5 minutes, during which it executes 6 SELECTs over a table with ~3M records.
While this procedure is running, exceptions are thrown from another stored procedure which is trying to update the same table:
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
I have read the Why use a READ UNCOMMITTED isolation level?
question, but didn't come to a conclusion about what best fits my scenario, as one of the comments stated:
"The author seems to imply that read uncommitted / no lock will return
whatever data was last committed. My understanding is read uncommitted
will return whatever value was last set even from uncommitted
transactions. If so, the result would not be retrieving data "a few
seconds out of date". It would (or at least could if the transaction
that wrote the data you read gets rolled back) be retrieving data that
doesn't exist or was never committed"
Taking into consideration that I only care about the state of the rows at the moment the nightly job started (updates made in the meanwhile will be picked up by the next run):
What would be the most appropriate approach?
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
This normally happens when you read data with the intention of updating it later, taking only a shared lock. The subsequent UPDATE statement then can't acquire the necessary update locks, because they are blocked by the shared locks already acquired in the other session, causing the deadlock.
To resolve this you can select the records using UPDLOCK, like the following:
SELECT * FROM [Your_Table] WITH (UPDLOCK) WHERE A=B
This takes the necessary update lock on the rows in advance. Other sessions can still read the rows, but they cannot acquire their own update or exclusive locks on them, which prevents this read-then-update style of deadlock.
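As a fuller sketch of the pattern (the table and column names here are illustrative, not from the question):

BEGIN TRANSACTION;

-- Take update locks at read time so a concurrent session cannot
-- acquire its own update/exclusive lock on the same rows
SELECT TaskId, Payload
FROM dbo.TaskQueue WITH (UPDLOCK, ROWLOCK)
WHERE Status = 'Pending';

-- ...do work...

UPDATE dbo.TaskQueue
SET Status = 'Processed'
WHERE Status = 'Pending';

COMMIT TRANSACTION;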
Another common reason for a deadlock (a cycle deadlock) is the order of the statements in your queries, where in the end each transaction waits on a lock held by the other. For this type of scenario you have to revisit your queries and fix the ordering so that all transactions access the tables in the same order.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
This one is clear: you need to work on query performance, and keep records locked for as short a time as possible.
I have a situation where I need to wrap an UPDATE statement in a stored procedure (sp_update_queue) inside a transaction. But I'm wondering what would happen if you have two threads using the same connection but executing different queries, and one rolls back a transaction it started.
For example, ThreadA calls sp_update_queue to update table QUEUED_TASKS, but before sp_update_queue commits or rolls back, ThreadB executes some other UPDATE or INSERT SQL on a different table, say CUSTOMERS. Then, after ThreadB has finished, sp_update_queue happens to encounter an error and calls ROLLBACK.
Because they are both using the same connection, would the rollback also roll back the changes made by ThreadB, regardless of whether ThreadB made its changes within a transaction or not?
Whichever thread acquires a resource first will lock that resource (given a suitable isolation level), so the second thread will wait for the required resource.
Note: each thread will have its own session ID.
UPDATED
In your scenario, however, both of the threads are using the same connection but do not use any common resources (ThreadA is dealing with table X and ThreadB is dealing with table Y). So a commit or rollback by either thread (ThreadA or ThreadB) does not impact the other one.
Read more about Isolation Level
The documentation says serializable transactions execute one after another.
But in practice this doesn't seem to be true. Here are two almost identical transactions; the only difference is a 15-second delay.
#1:
set transaction isolation level serializable
go
begin transaction
if not exists (select * from articles where title like 'qwe')
begin
    waitfor delay '00:00:15'
    insert into articles (title) values ('qwe')
end
commit transaction
go
#2:
set transaction isolation level serializable
go
begin transaction
if not exists (select * from articles where title like 'qwe')
begin
    insert into articles (title) values ('asd')
end
commit transaction
go
The second transaction was started a couple of seconds after the first one.
The result is a deadlock. The first transaction dies with
Transaction (Process ID 58) was deadlocked on
lock resources with another process and has been chosen as the deadlock victim.
Rerun the transaction.
as the reason.
The conclusion: serializable transactions are not serial?
Serializable transactions don't necessarily execute serially.
The promise is just that transactions can only commit if the result would be as if they had executed serially (in any order).
The locking requirements to meet this guarantee can frequently lead to deadlock where one of the transactions needs to be rolled back. You would need to code your own retry logic to resubmit the failed query.
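A minimal retry sketch, reusing the INSERT from the question (1205 is the deadlock victim error number):

DECLARE @Retries int = 3;

WHILE @Retries > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
        BEGIN TRANSACTION;
        IF NOT EXISTS (SELECT * FROM articles WHERE title LIKE 'qwe')
            INSERT INTO articles (title) VALUES ('qwe');
        COMMIT TRANSACTION;
        BREAK; -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() <> 1205 THROW; -- only retry deadlocks
        SET @Retries -= 1;
    END CATCH
END;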
See The Serializable Isolation Level for more about the differences between the logical description and implementation.
What happens here:
Because transaction 1 runs at the serializable isolation level, it keeps the shared lock it obtained on table articles while it waits. This guarantees that the "not exists" condition remains true until the transaction terminates.
Transaction 2 acquires a shared lock as well, which allows it to perform the exists check. Then, for the INSERT statement, transaction 2 needs to convert its shared lock to an exclusive lock, but it has to wait because transaction 1 holds a shared lock.
When transaction 1 finishes waiting, it also requests a conversion to exclusive mode => a deadlock situation; one of the transactions has to be terminated.
I ran into a similar problem, and I found this:
From MSDN:
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet
committed by other transactions.
No other transactions can modify data that has been read by the
current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would
fall in the range of keys read by any statements in the current
transaction until the current transaction completes.
The second point does not say that both sessions can't take the shared lock, and that is what results in the deadlock. We solved it with a hint on the SELECT:
select * from articles WITH (UPDLOCK, ROWLOCK) where title like 'qwe'
I haven't tried whether it would work in this case, but I think you would have to lock at the table level, since the row is not yet created.
I have about 20 stored procedures that consume each other, forming a tree-like dependency chain.
The stored procedures however use in-memory tables for caching and can be called concurrently from many different clients.
To protect against concurrent update / delete attempts against the in-memory tables, I am using sp_getapplock and SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;.
I am using a hash of the stored procedure parameters that is unique to each stored procedure, but multiple concurrent calls to the same stored procedure with the same parameters should generate the same hash. It's this equality of the hash for concurrent calls to the same stored proc with the same parameters that gives me a useful resource name to obtain our applock against.
Below is an example:
BEGIN TRANSACTION

EXEC @LOCK_STATUS = sp_getapplock @Resource = [SOME_HASH_OF_PARAMETERS_TO_THE_SP], @LockMode = 'Exclusive';

...some stored proc code...

IF FAILURE
BEGIN
    ROLLBACK;
    THROW [SOME_ERROR_NUMBER];
END

...some stored proc code...

COMMIT TRANSACTION
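For reference, the resource name itself might be derived along these lines (a sketch; the procedure name and the parameters @CustomerId and @RunDate are purely illustrative):

DECLARE @Resource nvarchar(255) =
    CONVERT(nvarchar(255),
            HASHBYTES('SHA2_256',
                      CONCAT('usp_MyProc|', @CustomerId, '|', @RunDate)),
            2); -- style 2 = hex string without the 0x prefix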
Despite wrapping everything in an applock which should block any concurrent updates or deletes, I still get error 41302:
The current transaction attempted to update a record that has been
updated since this transaction started. The transaction was aborted.
Uncommittable transaction is detected at the end of the batch. The
transaction is rolled back.
Am I using sp_getapplock incorrectly? It seems like the approach I am suggesting should work.
The second you begin your transaction involving a memory-optimized table, you get your "snapshot", which is based on the time the transaction started, for optimistic concurrency resolution. Unfortunately, your lock is taken after the snapshot, so it's still entirely possible to have optimistic concurrency failures.
Consider the situation where two transactions that need the same lock begin at once. They both begin their transactions "simultaneously", before either obtains the lock or modifies any rows. Their snapshots look exactly the same because no data has been modified yet. Next, one transaction obtains the lock and proceeds to make its changes while the other is blocked. This transaction commits fine, because it was first and so the snapshot it refers to still matches the data in memory. The other transaction now obtains the lock, but its snapshot is invalid (it just doesn't know this yet). It proceeds to execute, and at the end realizes its snapshot is invalid, so it throws you an error. In truth, the transactions don't even need to start nearly simultaneously; the second transaction just needs to start before the first commits.
You need to either enforce the lock at the application level, or take the applock at the session level (sp_getapplock with @LockOwner = 'Session') before the transaction begins.
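A minimal sketch of the session-owned variant, reusing the placeholder resource name from the question:

-- Acquire the lock before the transaction starts, owned by the session,
-- so the snapshot is taken only once the lock is already held
DECLARE @LOCK_STATUS int;
EXEC @LOCK_STATUS = sp_getapplock
    @Resource = [SOME_HASH_OF_PARAMETERS_TO_THE_SP],
    @LockMode = 'Exclusive',
    @LockOwner = 'Session';

IF @LOCK_STATUS >= 0
BEGIN
    BEGIN TRANSACTION;
    -- ...some stored proc code...
    COMMIT TRANSACTION;

    -- Session-owned locks are not released by COMMIT/ROLLBACK,
    -- so release explicitly
    EXEC sp_releaseapplock
        @Resource = [SOME_HASH_OF_PARAMETERS_TO_THE_SP],
        @LockOwner = 'Session';
END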
Hope this helped.
I have a table that has millions of rows.
I accidentally wrote an UPDATE query over the table without a WHERE clause and clicked Execute.
It started executing. After two seconds I realized the query was wrong and clicked the Stop button in SQL Server Management Studio. The query execution was stopped; this all happened within 7 seconds.
Now I am curious to know whether any rows were affected, and if so, which ones.
How can I find out?
A single UPDATE statement will not update just some of the rows. It's all rows or none.
This is the atomicity in the ACID properties, which SQL Server respects well.
Atomicity requires that each transaction is "all or nothing": if one part of the transaction fails, the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors, and crashes.
The commit happens at the end of the statement, so when you cancel, there's no commit: the partial work is rolled back and no rows remain changed.
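You can see the same behaviour with an explicit transaction (a sketch; dbo.BigTable and its Flag column are hypothetical):

BEGIN TRANSACTION;

UPDATE dbo.BigTable SET Flag = 1; -- no WHERE clause, touches every row

ROLLBACK TRANSACTION; -- all changes undone, the table is exactly as before

Cancelling an ad-hoc statement in Management Studio triggers the same kind of rollback that ROLLBACK TRANSACTION performs here, which is why none of your rows were left modified.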