SQL Server, transaction deadlock: when are tables actually locked? - sql-server

This SQL (called from C#) occasionally results in a deadlock.
The server is not under much load, so the approach taken was to lock as much as possible.
-- Lock to prevent race conditions when multiple instances of an application call this SQL:
BEGIN TRANSACTION
-- Check that no one has inserted the rows in T1 before me, and that T2 is in a valid state (Test1 != null)
IF NOT EXISTS (SELECT TOP 1 1 FROM T1 WITH (HOLDLOCK, TABLOCKX) WHERE FKId IN {0}) AND
   NOT EXISTS (SELECT TOP 1 1 FROM T2 WITH (HOLDLOCK, TABLOCKX) WHERE DbId IN {0} AND Test1 IS NOT NULL)
BEGIN
    -- Great! I'm the first - go insert the row in T1 and update T2 accordingly. Finally write a log row to T3
    INSERT INTO T1 (FKId, Status)
    SELECT DbId, {1} FROM T2 WHERE DbId IN {0};

    UPDATE T2 SET LastChangedBy = {2}, LastChangedAt = GETDATE() WHERE DbId IN {0};

    INSERT INTO T3 (F1, FKId, F3)
    SELECT {2}, DbId, GETDATE() FROM T2 WHERE DbId IN {0};
END;
-- Select status on the rows so the program can evaluate what just happened
SELECT FKId, Status FROM T1 WHERE FKId IN {0};
COMMIT TRANSACTION
I believe the problem is that multiple tables need to be locked.
I'm a bit unsure when the tables are actually exclusively locked (X-locked): is a table locked the first time it is used, or are all tables locked at once at BEGIN TRANSACTION?

Using table locks can increase the likelihood of deadlocks. Not all deadlocks are caused by out-of-sequence operations; some can be caused, as you have found, by other activity that only tries to lock a single record in a table you are locking completely, so locking the entire table increases the probability of that conflict occurring. When using the SERIALIZABLE isolation level, range locks are placed on index rows, which can prevent inserts/deletes by other SQL operations in a way that causes a deadlock between two concurrent executions of the same procedure, even though they are coded to perform their operations in the same order.
In any event, to find out exactly what is causing the deadlock, set SQL Server trace flags 1204 and 1222. These cause detailed information about each deadlock, including which statements were involved, to be written to the SQL Server logs.
Here is a good article about how to do this.
(Don't forget to turn these flags off when you're done...)
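For reference, the flags can be switched on and off like this (the -1 enables them globally rather than for the current session only):
-- Enable detailed deadlock reporting in the SQL Server error log
DBCC TRACEON (1204, 1222, -1);
-- ... reproduce the deadlock, then inspect the log ...
-- Turn the flags off again when done
DBCC TRACEOFF (1204, 1222, -1);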

Locks are acquired when a statement first touches a table (for example, a SELECT with a lock hint), not at BEGIN TRANSACTION, and they are released on COMMIT or ROLLBACK.
You can get a deadlock if another procedure locks T3 first and T1 or T2 afterwards: the two transactions then wait for each other, each holding the resource the other needs.
You could also avoid the table locks and use the SERIALIZABLE isolation level.
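As a rough sketch of that idea, the existence check from the question could drop TABLOCKX and rely on SERIALIZABLE range locks instead; the UPDLOCK hint here is an addition of this sketch, so that two concurrent callers cannot both pass the check with shared locks and then deadlock:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION
-- UPDLOCK takes update locks on the checked range, so a second caller
-- blocks here instead of deadlocking later on the INSERT
IF NOT EXISTS (SELECT TOP 1 1 FROM T1 WITH (UPDLOCK) WHERE FKId IN {0})
BEGIN
    INSERT INTO T1 (FKId, Status)
    SELECT DbId, {1} FROM T2 WHERE DbId IN {0};
END;
COMMIT TRANSACTION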

The problem with locking is that you really need to look at all the places where you take locks at the same time; there's no way to isolate the problem and split it into many smaller ones to examine individually.
For instance, what if some other code locks the same tables, but without it being obvious, and in the wrong order? That will cause a deadlock.
You need to analyze the server state at the moment the deadlock is detected, to figure out what else is running at that moment. Only then can you try to fix it.
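One way to see who is blocking whom at that moment is to query the standard DMVs (a minimal sketch):
-- List the requests that are currently blocked, and by which session
SELECT session_id, blocking_session_id, wait_type, wait_resource, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;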

Related

Can Select block a table from Insert?

If I am performing SELECTs on a table, is it possible that a SELECT query can block INSERTs into the same table?
A SELECT will take a shared lock on the rows, but it shouldn't affect inserts, correct?
The query uses a LIKE clause - will this block an insert? Does it have the potential to?
SELECT * FROM USERS WHERE Description LIKE '%HELLO%'
Reference:
I read this response, SQL Server SELECT statements causing blocking, and I am confused about how this would block an insert.
A SELECT will take a shared lock on the rows, but it shouldn't affect
inserts, correct?
No, that's not exactly right.
When you run a SELECT, it can acquire shared locks on pages and even on the whole table.
You can test this yourself using the PAGLOCK or TABLOCK hints (of course, you should use REPEATABLE READ or SERIALIZABLE to see them, as under READ COMMITTED all shared locks are released as soon as they are no longer needed).
The same situation can be modelled this way:
if object_id('dbo.t') is not null drop table dbo.t;

select top 10000 cast(row_number() over(order by getdate()) as varchar(10)) as col
into dbo.t
from sys.columns c1
cross join sys.columns c2;

set transaction isolation level serializable;
begin tran
select *
from dbo.t
where col like '%0%';

select resource_type,
       request_mode,
       count(*) as cnt
from sys.dm_tran_locks
where request_session_id = @@spid
group by resource_type,
         request_mode;
Here you can see the result of lock escalation: my query wanted more than 5,000 locks per statement, so instead of those the server took only one lock, a shared lock on the whole table.
Now if you try to insert any row into this table from another session, your INSERT will be blocked.
This is because any insert first needs to acquire IX on the table and IX on a page, but IX on a table is incompatible with S on the same table, so the insert will be blocked.
This is how your SELECT can block your INSERT.
To see what exactly happens on your server, query sys.dm_tran_locks filtered by both of your session IDs.
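For example (the session IDs 52 and 53 are hypothetical placeholders for your reading and inserting sessions):
-- Compare the locks held or requested by the two sessions
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id IN (52, 53)
ORDER BY request_session_id, resource_type;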
General info - this is the area called SQL Server Concurrency, and in SQL Server you will find two models:
Pessimistic;
Optimistic.
Answering your question - yes, a read can block an insert, and this is called "Pessimistic Concurrency". However, this model comes with specific properties, and you have to be careful because:
Data being read is locked, so that no other user can modify the data;
Data being modified is locked, so that no other user can read or modify the data;
The number of locks acquired is high because every data access operation (read/write) acquires a lock;
Writers block readers and other writers. Readers block writers.
The point is that you should use pessimistic concurrency only if the locks are held for a short period of time and only if the cost of each lock is lower than the cost of rolling back the transaction in case of a conflict, as Neeraj said.
I would recommend reading more about the isolation levels that apply to both the pessimistic and optimistic models here.
EDIT - I found a very detailed explanation of isolation levels on Stack Overflow, here.
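For completeness, the optimistic (row-versioning) model is enabled per database; a minimal sketch, with MyDb as a hypothetical database name:
-- Make READ COMMITTED use row versions instead of shared locks
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
-- Additionally allow the SNAPSHOT isolation level
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;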

Which type of locking mode for INSERT, UPDATE or DELETE operations in Sql Server?

I know that NOLOCK is the default for SELECT operations. So even if I don't write the WITH (NOLOCK) hint on a SELECT query, the row won't be locked.
I couldn't find what happens if WITH (ROWLOCK) is not specified for an UPDATE or DELETE query. Is there a difference between the queries below?
UPDATE MYTABLE set COLUMNA = 'valueA';
and
UPDATE MYTABLE WITH (ROWLOCK) set COLUMNA = 'valueA';
If there is no hint, then the database engine chooses the lock mode as a function of the operation (select/modify), the isolation level, the granularity, and the possibility of escalating the granularity level. Specifying ROWLOCK, XLOCK does not give a 100% guarantee that the lock taken will actually be X on the rows. In general, this is a very large topic for such a broad question.
Read first about lock modes: https://technet.microsoft.com/en-us/library/ms175519(v=sql.105).aspx
In statement 1 (without ROWLOCK), the DBMS may decide to lock the entire table, or the page that the record being updated is in. That means that while the row is being updated, all or a number of the other rows in the table are locked and can be neither updated nor deleted.
Statement 2 (WITH (ROWLOCK)) suggests that the DBMS lock only the record being updated. But be aware that this is just a hint, and there is no guarantee that it will be honored by the DBMS.
So even if I don't write the WITH (NOLOCK) hint on a SELECT query, the row won't be locked.
SELECT queries always take a lock - a shared lock - and the duration of that lock depends on your isolation level.
Is there a difference between the queries below?
UPDATE MYTABLE set COLUMNA = 'valueA';
and
UPDATE MYTABLE WITH (ROWLOCK) set COLUMNA = 'valueA';
Suppose your first statement acquires more than 5,000 locks; the locks will be escalated to a table lock. With ROWLOCK, SQL Server won't lock the whole table.
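If the escalation itself is the problem, it can also be controlled at the table level (available since SQL Server 2008; the table name is the one from the question):
-- Keep row/page locks from escalating to a table lock on this table
ALTER TABLE dbo.MYTABLE SET (LOCK_ESCALATION = DISABLE);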

Does a long-running query prevent an insert to tables involved?

Say I have two tables:
Table 1
[Clustered Id] [Text Field]
Table 2
[Clustered Id] [Numeric Field]
Then I have a query:
select *
from [Table 1]
,[Table 2]
where [Table 1].[Clustered Id] = [Table 2].[Clustered Id]
and [Table 1].[Text Field] like '%some string%'
Say my insert inserts one row, and looks like this:
insert into [Table 2]
values (new clustered ID)
,-182
If this query takes a long time to run, would an insert into [Table 2] be possible during that time? If so, what are the nuances? If not, what could I do to avoid it?
Yes, a SELECT will take shared locks that can block an update.
You could use the WITH (NOLOCK) hint on the SELECT so that it does not take shared locks and does not block writers. But bad things can happen; a lot of people on this site will tell you never to do that.
If an update is only taking a row lock, then only that row is blocked.
On an UPDATE it really helps to add a <> predicate mirroring the SET, so rows that already hold the target value are not updated and therefore not locked:
update table1
set col1 = 12
where col3 = 56
and col1 <> 12 -- rows already at 12 are skipped, so no update lock is taken on them
An insert is different, as it would only block on a page lock or a table lock.
Please post your insert and how many rows you are inserting.
If you are taking a TABLOCK, then I think inserts would be blocked. Even with REPEATABLE READ, I don't think a SELECT would block an INSERT.
Unless you are using the SERIALIZABLE isolation level, you don't need to worry: your SELECTs won't block INSERTs.
A SELECT acquires shared locks. At a low level, SQL Server requires an exclusive lock on the row it is trying to insert, and we know an exclusive lock is not compatible with a shared lock. But then the question arises: how can an INSERT be blocked by a SELECT when the row being inserted doesn't exist yet?
The isolation level determines how long the SELECT's locks are held. At the normal isolation levels, a shared lock is released as soon as the row is read.
Only under SERIALIZABLE are range locks taken, and those locks are not released until the transaction completes.
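A minimal two-session sketch of that SERIALIZABLE case, reusing the dbo.t table from the earlier answer (the key range is arbitrary, and key-range locks assume an index on col; without one, the whole table is locked):
-- Session 1: range locks are taken on the keys matching the predicate
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN
SELECT * FROM dbo.t WHERE col BETWEEN '100' AND '200';

-- Session 2: this insert falls inside the locked range and blocks
-- until session 1 commits or rolls back
INSERT INTO dbo.t (col) VALUES ('150');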

How do I acquire write locks in SQL Server?

I need to run a query that selects ten records. Then, based on their values and some outside information, update said records.
Unfortunately I am running into deadlocks when I do this in a multi-threaded fashion. Both threads A and B run their selects at the same time, acquiring read locks on the ten records. So when one of them tries to do an update, the other transaction is aborted.
So what I need to be able to say is "select and write-lock these ten records".
(Yea, I know serial transactions should be avoided, but this is a special case for me.)
Try applying UPDLOCK:
BEGIN TRAN

-- UPDLOCK takes update locks instead of shared locks on the selected
-- rows, so two concurrent transactions cannot both read the rows and
-- then deadlock trying to update them; ROWLOCK asks for row granularity
SELECT * FROM table1
WITH (UPDLOCK, ROWLOCK)
WHERE col1 = 'value1'

UPDATE table1
SET col1 = 'value2'
WHERE col1 = 'value1'

COMMIT TRAN

Deadlock caused by SELECT JOIN statement with SQL Server

When executing a SELECT statement with a JOIN of two tables, SQL Server seems to lock both tables of the statement individually. For example, with a query like this:
SELECT ...
FROM
table1
LEFT JOIN table2
ON table1.id = table2.id
WHERE ...
I found out that the order of the locks depends on the WHERE condition. The query optimizer tries to produce an execution plan that reads only as many rows as necessary. So if the WHERE condition contains a column of table1, it will first get the result rows from table1 and then fetch the corresponding rows from table2. If the column is from table2, it will do it the other way around. More complex conditions, or the use of indexes, may affect the query optimizer's decision too.
When the data read by a statement is updated later in the transaction with UPDATE statements, it is not guaranteed that the order of the UPDATE statements matches the order that was used to read the data from the two tables. If another transaction tries to read data while the first is updating the tables, a deadlock can occur: when the SELECT statement executes between the UPDATE statements, neither the SELECT can get its lock on the first table nor can the UPDATE get its lock on the second table. For example:
T1: SELECT ... FROM ... JOIN ...
T1: UPDATE table1 SET ... WHERE id = ?
T2: SELECT ... FROM ... JOIN ... (locks table2, then blocked by lock on table1)
T1: UPDATE table2 SET ... WHERE id = ?
Both tables represent a type hierarchy and are always loaded together, so it makes sense to load an object using a SELECT with a JOIN. Loading the tables individually would not give the query optimizer a chance to find the best execution plan. But since an UPDATE statement can only update one table at a time, this can cause deadlocks when an object is loaded while it is being updated by another transaction. Updating an object often means UPDATEs on both tables, when the modified properties belong to different types in the hierarchy.
I have tried adding locking hints to the SELECT statement, but that does not solve the problem. It just moves the deadlock into the SELECT statements, when both statements try to lock the tables and one SELECT gets its locks in the opposite order of the other. Maybe it would be possible to always load data for updates with the same statement, forcing the locks to be taken in the same order. That would prevent a deadlock between two transactions that want to update the data, but it would not prevent a read-only transaction with different WHERE conditions from deadlocking.
The only workaround to this so far seems to be that reads must not take locks at all. With SQL Server 2005 this can be done using SNAPSHOT isolation. The only option for SQL Server 2000 would be the READ UNCOMMITTED isolation level.
I would like to know if there is another possibility to prevent SQL Server from causing these deadlocks.
This will never happen under snapshot isolation, where readers do not block writers. Other than that, there is no way to prevent such things. I wrote a lot of repro scripts here: Reproducing deadlocks involving only one table.
Edit:
I don't have access to SQL 2000, but I would try to serialize access to the object by using sp_getapplock, so that reads and modifications never run concurrently. If you cannot use sp_getapplock, roll your own mutex.
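A minimal sketch of the sp_getapplock approach (the resource name is an arbitrary string identifying the object being serialized):
BEGIN TRAN
-- Serialize all readers and writers of this object behind one named lock;
-- with @LockOwner = 'Transaction' the lock is released at COMMIT/ROLLBACK
EXEC sp_getapplock @Resource = 'MyObject_42',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction';
-- ... the SELECT ... JOIN ... and the UPDATEs go here ...
COMMIT TRAN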
Another way to fix this is to split the SELECT ... FROM ... JOIN into multiple SELECT statements. Set the isolation level to READ COMMITTED. Use a table variable to pipe data from the SELECT into the one it would have been joined with. Use DISTINCT to filter down the inserts into these table variables.
So say I have two tables A and B, I am inserting/updating into A and then B, and the query optimizer prefers to read B first and then A. I'll split the single SELECT into two SELECTs: first read B, then pass that data on to the next SELECT statement, which reads A.
Deadlock won't happen here because the read locks on table B are released as soon as the first statement is done.
PS: I've faced this issue, and this worked very well - much better than my FORCE ORDER answer. A sketch of the split is shown below.
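A minimal sketch of that split (tables A and B, the id column, and the filter are hypothetical):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
DECLARE @b TABLE (id INT PRIMARY KEY);

-- 1st statement: read table B on its own; under READ COMMITTED its
-- shared locks are released as soon as this statement completes
INSERT INTO @b (id)
SELECT DISTINCT id
FROM B
WHERE SomeFilterCol = 1; -- hypothetical filter

-- 2nd statement: join the captured ids against table A
SELECT a.*
FROM A AS a
JOIN @b AS b ON b.id = a.id;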
I was facing the same issue. Using the query hint FORCE ORDER will fix it. The downside is that you won't be able to leverage the best plan the query optimizer has for your query, but it will prevent the deadlock.
So (this is from user "Bill the Lizard"): if you have a query FROM table1 LEFT JOIN table2 and your WHERE clause only contains columns from table2, the execution plan will normally first select the rows from table2 and then look up the rows from table1. With a small result set from table2, only a few rows from table1 have to be fetched. With FORCE ORDER, all rows from table1 have to be fetched first, because table1 has no WHERE clause; then the rows from table2 are joined and the result is filtered using the WHERE clause, degrading performance.
But if you know this won't be the case, use it. You might want to optimize the query manually.
The syntax is
SELECT ...
FROM
table1
LEFT JOIN table2
ON table1.id = table2.id
WHERE ...
OPTION (FORCE ORDER)
