I have a nightly job which executes a stored procedure that goes over a table and fetches records to be inserted into another table.
The procedure takes about 4-5 minutes, during which it executes 6 SELECTs over a table with ~3M records.
While this procedure is running, exceptions are thrown from another stored procedure that is trying to update the same table:
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
I have read the Why use a READ UNCOMMITTED isolation level?
question, but couldn't come to a conclusion about what best fits my scenario, as one of the comments states:
"The author seems to imply that read uncommitted / no lock will return
whatever data was last committed. My understanding is read uncommitted
will return whatever value was last set even from uncommitted
transactions. If so, the result would not be retrieving data "a few
seconds out of date". It would (or at least could if the transaction
that wrote the data you read gets rolled back) be retrieving data that
doesn't exist or was never committed"
Taking into consideration that I only care about the state of the rows at the moment the nightly job started (updates made in the meanwhile will be picked up by the next run), what would be the most appropriate approach?
Transaction (Process ID 166) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
This normally happens when you read data with the intention of updating it later while taking only a shared lock: the subsequent UPDATE statement can't acquire the necessary update lock, because it is blocked by the shared lock already acquired in the other session, and this causes the deadlock.
To resolve this, you can select the records using the UPDLOCK hint, like the following:
SELECT * FROM [Your_Table] WITH (UPDLOCK) WHERE A=B
This takes the necessary update lock on the records in advance and stops other sessions from acquiring an update or exclusive lock on them (plain shared-lock reads are still allowed), which prevents this kind of deadlock.
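As a slightly fuller sketch, the read-then-update pattern with the hint applied would look like this (the table, columns, and values here are placeholders):
BEGIN TRANSACTION;

-- take the U lock up front instead of a plain shared lock
SELECT * FROM [Your_Table] WITH (UPDLOCK, ROWLOCK) WHERE A = B;

-- ... decide what to change ...

-- the conversion to an X lock no longer deadlocks against another
-- session doing the same read-then-update on these rows
UPDATE [Your_Table] SET C = 'new value' WHERE A = B;

COMMIT TRANSACTION;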
Another common reason for a deadlock (a cycle deadlock) is the order of the statements you put in your queries, where in the end each transaction waits on the other. For this type of scenario you have to revisit your queries and fix the ordering issue.
Execution Timeout Expired. The timeout period elapsed prior to
completion of the operation or the server is not responding. --->
System.ComponentModel.Win32Exception (0x80004005): The wait operation
timed out
This one is clear: you need to work on the query performance, and keep records locked for as short a time as possible.
Related
My scheduled job runs 6 times a day.
Sometimes it fails because of a deadlock. I tried to identify who is blocking my session.
I searched and discovered SQL Profiler, but it is not showing the exact result. How can I identify historical deadlocks with T-SQL or any other way?
When the job fails, the error message shown is:
Transaction (Process ID ) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
This should help identify deadlock victims or causes of deadlocking: https://ask.sqlservercentral.com/questions/5982/how-can-i-identify-all-processes-involved-in-a-dea.html
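If the server is SQL Server 2008 or later, recent deadlock graphs are usually already captured by the built-in system_health Extended Events session. A sketch of pulling them out of its ring buffer (this assumes the default system_health session is still running):
SELECT xed.query('.') AS DeadlockReport
FROM (
    SELECT CAST(st.target_data AS xml) AS target_data
    FROM sys.dm_xe_session_targets AS st
    JOIN sys.dm_xe_sessions AS s
        ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS t
CROSS APPLY t.target_data.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(xed);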
If you want to reduce the risk of your process deadlocking, here are some strategies...
Try to INSERT/UPDATE/DELETE tables in the same order. E.g. if one process is doing this:
BEGIN TRAN; UPDATE TableA; UPDATE TableB; COMMIT;
while another process is doing this:
BEGIN TRAN; UPDATE TableB; UPDATE TableA; COMMIT;
there is a risk that one process will deadlock the other. The greater the time to complete, the higher the risk of a deadlock. SQL Server will choose one process as the "deadlock victim" (typically the one that is cheapest to roll back) and kill it.
Try to minimise the code involved in a transaction, i.e. fewer INSERT/UPDATE/DELETE statements between your BEGIN TRANSACTION and COMMIT TRANSACTION statements.
Process smaller batches of data if possible. If you are processing a large number of rows, add batching so that the code locks smaller batches of data at any given time, as in the sketch below.
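For example, a minimal batching sketch (the table and column names here are hypothetical):
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    -- each iteration touches (and locks) at most @BatchSize rows
    UPDATE TOP (@BatchSize) dbo.QueueItems
    SET Processed = 1
    WHERE Processed = 0;

    -- stop once the final, partial batch has been handled
    IF @@ROWCOUNT < @BatchSize BREAK;
END;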
I have a situation where I need to wrap an update T-SQL in a stored procedure (sp_update_queue) inside a transaction. But I'm wondering what would happen if you have two threads using the same connection but executing different queries and one rolls back a transaction it started.
For example, ThreadA calls sp_update_queue to update table QUEUED_TASKS, but before sp_update_queue commits or rolls back its transaction, ThreadB executes some other UPDATE or INSERT statement on a different table, say CUSTOMERS. Then, after ThreadB has finished, sp_update_queue happens to encounter an error and calls rollback.
Because they are both using the same connection, would the rollback also roll back the changes made by ThreadB, regardless of whether ThreadB made its changes within a transaction or not?
Whichever thread acquires the resource first will lock that resource (given a suitable isolation level), so the second thread will wait for the required resource.
Note: each thread will have its own session ID.
UPDATED
In your scenario, although both threads are using the same connection, they do not use any common resources (ThreadA is dealing with table X and ThreadB is dealing with table Y). So a commit or rollback by either thread (A or B) does not impact the other one.
Read more about Isolation Level
Assume that SQL Server receives both a SELECT and an UPDATE statement against the same table at the same time, from different threads and connections.
Does either of them get prioritized?
I know that SELECT statements are delayed until the update completes if the table is already locked for the update (UPDATE statements lock the table by default, or am I incorrect?). If the lock is held for a long time because of the update, the SELECT statement fails with a timeout error.
So what happens when both are received at the same time?
A SELECT statement will place a shared lock (S) on any rows it's reading - depending on the isolation levels, that lock will be held for various amounts of time. In the default READ COMMITTED isolation level, the lock is held only while actually reading the row - once it's read, the lock is released right away.
The shared lock is compatible with other shared locks - so any number of SELECT statements can read the same rows simultaneously.
The UPDATE statement will place an update (U) lock on the row it wants to update, to read the existing values. Then, after that's done, before the actual updated values are written back, the lock is converted into an exclusive (X) lock for the time the data is written. Those locks are held until the transaction they're executing in is committed (or rolled back).
An update lock is not compatible with another update lock, nor with an exclusive lock. It is compatible with a shared lock however - so if the UPDATE statement is currently only reading the existing values, another transaction might read those same values using a SELECT statement with a shared lock.
An exclusive lock is incompatible with anything - you cannot even read the row anymore while an X lock is on it.
So if you have two statements that come in and try to access the same row, then:
if the SELECT comes first, it will place an S lock on the row, read it, and typically release that lock again
at the same time, the UPDATE statement can place a U lock on the row and read the existing values; the "promotion" of the lock to X will not be possible until the S lock has been released - if that doesn't happen, the UPDATE statement will wait and eventually time out if the S lock is never released
if the UPDATE comes first, it will place a U lock on the row to read the existing values
at the same time, another transaction could be placing an S lock on the row to read it
and again: the UPDATE statement can only progress to the X level to write back the new values once the S lock is gone - otherwise it will time out
if the UPDATE comes first and has already converted its U lock to an X lock to actually do the update
then at that point no other transaction can even read that row - it will have to wait (or time out, if it takes too long to be serviced)
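A quick two-session sketch of this interaction (dbo.Demo is a hypothetical table with an Id and a Val column; run the blocks in separate sessions):
-- session 1: update a row and leave the transaction open
BEGIN TRANSACTION;
UPDATE dbo.Demo SET Val = Val + 1 WHERE Id = 1;  -- U lock, then X lock, held until COMMIT

-- session 2: under the default READ COMMITTED level, this blocks
-- until session 1 commits or rolls back
SELECT Val FROM dbo.Demo WHERE Id = 1;

-- session 1: releasing the X lock unblocks session 2
COMMIT TRANSACTION;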
Read SQL Server Transaction Locking and Row Versioning Guide for a more in-depth overview of the topic and more details
If I have a database transaction which goes along the lines of:
DELETE FROM table WHERE id = ANY(ARRAY[id1, id2, id3,...]) RETURNING foo, bar;
if num_rows_returned != num_rows_in_array then
rollback and return
Do stuff with deleted data...
Commit
My understanding is that the DELETE query will lock those rows until the transaction is committed or rolled back. According to the Postgres 9.1 docs:
An exclusive row-level lock on a specific row is automatically
acquired when the row is updated or deleted. The lock is held until
the transaction commits or rolls back, just like table-level locks.
Row-level locks do not affect data querying; they block only writers
to the same row.
I am using the default read committed isolation level in postgres 9.1.13
I would take from this that I should be OK, but I want to ensure that this means the following things are true:
Only one transaction may delete and return a row from this table, unless a previous transaction was rolled back.
This means "Do stuff with deleted data" can only be done once per row.
If two transactions try to do the above at once with conflicting rows, one will always succeed (ignoring system failure), and one will always fail.
Concurrent transactions may succeed when there is no crossover of rows.
If a transaction is unable to delete and return all rows, it will roll back and thus not delete any rows. For example, a transaction may try to delete two rows: one has already been deleted by another transaction, but the other is free to be returned. Since one row is already deleted, though, the other must not be deleted and processed. Only if all specified ids can be deleted and returned may anything take place.
Under the normal model of concurrency, processes/transactions do not fail when they are locked out of data; they wait.
The DBMS implements execution in such a way that transactions advance but see effects from other transactions only according to the isolation level. (Only in the case of a detected deadlock is a transaction aborted, and even then its execution will begin again, and the killing is not evident to its next execution or to other transactions, except per the isolation level.) Under the SERIALIZABLE isolation level this means that the database will change as if all transactions had happened without overlap, in some order. Other levels allow a transaction to see certain effects of the overlapped execution of other transactions.
However, in the case of PostgreSQL under SERIALIZABLE, when a transaction tries to commit and the DBMS sees that committing it would give non-serialized behaviour, the transaction is aborted with a notification but not automatically restarted. (Note that this is not a failure from an attempted access to a locked resource.)
(Prior to 9.1, PostgreSQL SERIALIZABLE did not give SQL-standard (serialized) behaviour: "To retain the legacy Serializable behavior, Repeatable Read should now be requested.")
The locking protocols are how actual execution gets interleaved to maximize throughput while keeping that true. All locking does is prevent overlapped accesses during actual execution from breaking the apparent serialized execution.
Explicit locking by transaction code also just causes waiting.
Your question does not reflect this. You seem to think that an attempted access to a locked resource aborts a transaction. That is not so.
The documentation says serializable transactions execute one after another.
But in practice this seems not to be true. Here are two almost identical transactions; the only difference is a 15-second delay.
#1:
set transaction isolation level serializable
go
begin transaction
if not exists (select * from articles where title like 'qwe')
begin
waitfor delay '00:00:15'
insert into articles (title) values ('qwe')
end
commit transaction
go
#2:
set transaction isolation level serializable
go
begin transaction
if not exists (select * from articles where title like 'qwe')
begin
insert into articles (title) values ('asd')
end
commit transaction
go
The second transaction was started a couple of seconds after the first one.
The result is a deadlock. The first transaction dies with:
Transaction (Process ID 58) was deadlocked on
lock resources with another process and has been chosen as the deadlock victim.
Rerun the transaction.
The conclusion: serializable transactions are not serial?
Serializable transactions don't necessarily execute serially.
The promise is just that transactions can only commit if the result would be as if they had executed serially (in any order).
The locking requirements to meet this guarantee can frequently lead to deadlocks, where one of the transactions has to be rolled back. You would need to code your own retry logic to resubmit the failed query, as in the sketch below.
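A minimal retry sketch (assuming SQL Server 2012+ for THROW; the transaction body stands in for whatever work was chosen as the victim):
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the statements that may be chosen as a deadlock victim ...
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        SET @retries -= 1;
        -- 1205 = deadlock victim; rethrow anything else, or when out of retries
        IF ERROR_NUMBER() <> 1205 OR @retries = 0
            THROW;
    END CATCH
END;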
See The Serializable Isolation Level for more about the differences between the logical description and implementation.
What happens here:
Because transaction 1 runs at the serializable isolation level, it keeps the shared lock it obtains on table articles while it waits. This way, it is guaranteed that the not-exists condition remains true until the transaction terminates.
Transaction 2 acquires a shared lock as well, which allows it to do the exists check. Then, for the insert statement, transaction 2 needs to convert its shared lock to an exclusive lock, but it has to wait, because transaction 1 holds a shared lock.
When transaction 1 finishes waiting, it also requests a conversion to exclusive mode => a deadlock situation; one of the transactions has to be terminated.
I ran into a similar problem and found the following:
From MSDN:
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet
committed by other transactions.
No other transactions can modify data that has been read by the
current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would
fall in the range of keys read by any statements in the current
transaction until the current transaction completes.
The second point does not say that both sessions can't take the shared lock, and that is exactly what results in the deadlock. We solved it with a hint on the SELECT:
select * from articles WITH (UPDLOCK, ROWLOCK) where title like 'qwe'
I have not tried whether it would work in this case, but I think you would have to lock at the table level, since the row is not yet created.
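Applied to the repro above, the hint would look like this (a sketch only; as said, it is untested whether the row-level hint suffices here or a table-level lock is needed):
set transaction isolation level serializable
go
begin transaction
if not exists (select * from articles with (updlock, rowlock) where title like 'qwe')
begin
    insert into articles (title) values ('qwe')
end
commit transaction
go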