update with rowlock in MSSQL server - sql-server

I was trying to understand ROWLOCK in SQL Server by updating a record after locking it. Here is my observation, and I would like to confirm whether ROWLOCK really behaves like a table or page lock, or whether I simply have not tried it correctly. ROWLOCK should lock only the row, not the table or the page.
Here is what I tried:
I created a simple table, row_lock_temp_test, with two columns, ID and Name, and no PK or index. I then opened two different SQL Server clients with the same credentials and ran a set of queries in each, as follows.
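Something like this for the setup (the exact column types are not specified here, so these are illustrative):
-- illustrative setup: a heap with no PK and no indexes
CREATE TABLE row_lock_temp_test
(
    ID   int,
    Name varchar(50)
);
INSERT INTO row_lock_temp_test (ID, Name)
VALUES (1, 'AA'), (2, 'BB'), (3, 'CC');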
Client 1:
1: BEGIN TRANSACTION;
2: update row_lock_temp_test set name = 'CC' where id = 2
3: COMMIT
Client 2:
1: BEGIN TRANSACTION;
2: update row_lock_temp_test set name= 'CC' where id = 2
3: COMMIT
I executed queries 1 and 2 on C-1, went to C-2 and executed the same queries; both clients executed them, and then I committed both transactions. All good.
Then I added ROWLOCK to the update query:
C-1
1: BEGIN TRANSACTION;
2: update row_lock_temp_test WITH(rowlock) set name = 'CC' where id = 2
3: COMMIT
C-2
1: BEGIN TRANSACTION;
2: update row_lock_temp_test WITH(rowlock) set name = 'CC' where id = 2
3: COMMIT
Now, I executed queries 1 and 2 on C-1 and then went to C-2 and tried to execute the same two queries, but the query got stuck, as expected, because the row is locked by C-1, so it has to wait until the transaction on C-1 is committed. As soon as I committed the transaction on C-1, the query on C-2 executed, and then I committed the transaction on C-2 as well. All good.
Here I tried another scenario: executing the same set of queries, but with row id = 3.
C-2
1: BEGIN TRANSACTION;
2: update row_lock_temp_test WITH(rowlock) set name = 'CC' where id = 3
3: COMMIT
I executed the first two queries in C-1 and then went and executed the first two queries of C-2. The row id is different in the two clients, but the query in C-2 still got stuck. This means that while updating the row with id = 2 it locked the page or the table; I was expecting a row lock, but it seems to be a page or table lock.
I also tried XLOCK, HOLDLOCK, and UPDLOCK in different combinations, but it always locks the table. Is there any way to lock only the row?
SELECT and INSERT work as expected.
Thanks in advance.

Lock hints are only hints. You can't "force" SQL to take a particular kind of lock.
You can see the locks being taken with the following query:
select tl.request_session_id,
tl.resource_type,
tl.request_mode,
tl.resource_description,
tl.request_status
from sys.dm_tran_locks tl
join sys.partitions pt on pt.hobt_id = tl.resource_associated_entity_id
join sys.objects ob on ob.object_id = pt.object_id
where tl.resource_database_id = db_id()
order by tl.request_session_id
OK, let's run some code in an SSMS query window:
create table t(i int, j int);
insert t values (1, 1), (2, 2);
begin tran;
update t with(rowlock) set j = 2 where i = 1;
Open a second SSMS window, and run this:
begin tran;
update t with(rowlock) set j = 2 where i = 2;
The second execution will be blocked. Why?
Run the locking query in a third window, and note that there are two rows with a resource_type of RID, one with a status of "grant", the other with a status of "wait". We'll get to the RID bit in a second. Also, look at the resource_description column for those rows. It's the same value.
OK, so what's a resource_description? It depends on the resource_type, but for our RID it represents the file id, then the page id, then the row id (also known as the slot). But why are both executions taking a lock on row slot 0? Shouldn't they be trying to lock different rows? After all, we are updating different rows.
David Browne has given the answer: in order to find the correct row to update, SQL has to scan the entire table, because there is no index telling it which rows have i = 1. It will take an update lock on each row as it scans through. Why an update lock on each row? Well, it's not to "do" the update, so to speak; it will take an exclusive lock for that. Update locks are pretty much always taken to prevent deadlocks.
So, the first query has scanned through the rows, taking a U lock on each row. Of course, it found the row it wanted to update right away, in slot 0, and took an X lock. And it still has that X lock, because we haven't committed.
Then we started the second query, which also has to scan all of the rows to find the one it wants. It started off by trying to take the U lock on the first row, and was blocked. The X lock of our first query is blocking it.
So, you see, even with row locking, your second query is still blocked.
OK, let's rollback the queries, and see what happens if we have the first query update the second row, and the second query update the first row? Does that work? Nope! Because SQL still has no way of knowing how many rows match the predicate. So the first query takes its update lock on slot 0, sees that it doesn't have to update it, takes its update lock on slot 1, sees the correct value for i, takes its exclusive lock, and waits for us to commit.
Then query 2 comes along, takes the update lock on slot 0, sees the value it wants, takes its exclusive lock, updates the value, and then tries to take an update lock on slot 1, because that might also have the value it wants. And there it blocks, because query 1 still holds its exclusive lock on slot 1.
You'll also see "intent locks" on the next "level" up, i.e., the page. The operation is letting the rest of the engine know that it might want to escalate the lock to the page level at some point in the future. But that's not a factor here. Page locking is not causing the issue.
Solution in this case? Add an index on column i; in this case, that's probably the primary key. You can then do the updates in either order. Asking for row locking on its own makes no difference, because SQL doesn't know how many rows match the predicate. But even if you try to force a row lock, and even with a primary key or appropriate index, SQL can still choose to escalate the lock, because it can be far more efficient to lock a whole page, or a whole table, than to lock and unlock individual rows.
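A minimal sketch of that fix, assuming ID in the test table from the question is an int with no NULLs:
-- with a unique (here primary key) index on ID, each update can seek directly
-- to its one matching row instead of scanning and U-locking every row
ALTER TABLE row_lock_temp_test
    ALTER COLUMN ID int NOT NULL;
ALTER TABLE row_lock_temp_test
    ADD CONSTRAINT PK_row_lock_temp_test PRIMARY KEY CLUSTERED (ID);
After this, the two sessions updating id = 2 and id = 3 no longer block each other (barring lock escalation).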

Related

Insert and delete in a transaction always block the rows?

I have this table:
TableAB
{
IDA;
IDB;
}
And I want to ensure that I always have both pairs, (ID1, ID2) and (ID2, ID1). So I am trying to use these two scripts:
To insert:
begin tran
insert into tablaAB (IDTablaA, IDTablaB) VALUES(1,2);
insert into tablaAB (IDTablaA, IDTablaB) VALUES(2,1);
commit
To delete:
begin tran
delete tablaAB where IDTablaA = 1 and IDTablaB = 2
delete tablaAB where IDTablaA = 2 and IDTablaB = 1;
commit
I am using two instances of SQL Server Management Studio to run the two scripts, and in most cases it works: I get either both rows or neither of them. But sometimes I get only one of them.
The steps are:
run the query to delete (1,2).
run the query to add (1,2).
In most cases it is blocked until the transaction that deletes both rows finishes, but in some cases it can move on to the next line and insert the second row. If that happens, my data is no longer coherent.
But I don't know if it is because I made some mistake in the test, or whether in some rare cases the first query is not blocked as I expect.
Should the first insert really be blocked in all cases once the first delete has run?
The table is empty. So it seems that the row is blocked when I try to delete it, and the insert of that row is not allowed, but I don't know whether there really are some rare situations in which the row is not blocked.
Thanks.
But I don't know if it is because I made some mistake in the test, or whether in some rare cases the first query is not blocked as I expect.
Should the first insert really be blocked in all cases once the first delete has run?
It seems you are running under the READ COMMITTED isolation level. In this case, no lock is held by the DELETE session when no rows qualify, so the INSERT session can proceed to insert rows. This becomes a race condition where you may end up with zero, one, or two rows. Consider this sequence that results in one row:
--session 1:
begin tran;
delete TableAB where IDTablaA = 1 and IDTablaB = 2;
--no row deleted, no lock held
--session 2:
begin tran
insert into TableAB (IDTablaA, IDTablaB) VALUES(1,2);
--row inserted, lock held
insert into TableAB (IDTablaA, IDTablaB) VALUES(2,1);
--row inserted, lock held
commit;
-- inserts committed and locks released
--session 1:
delete TableAB where IDTablaA = 2 and IDTablaB = 1;
--row deleted, lock held
commit;
--deleted committed, lock released
If you instead use the SERIALIZABLE isolation level, the DELETE statement will hold a lock (a table lock in this case, due to the lack of indexes) and block the insert session. With an index on the columns used to locate the rows to be deleted, a less restrictive key-range lock would be held instead.
Note that SERIALIZABLE is more prone to deadlocks than less restrictive isolation levels.
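For example, the delete script could be run like this (a sketch using the table and values from the question):
-- under SERIALIZABLE the DELETE keeps its (range or table) locks until COMMIT,
-- so a concurrent insert of the same pair must wait for this transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
DELETE TableAB WHERE IDTablaA = 1 AND IDTablaB = 2;
DELETE TableAB WHERE IDTablaA = 2 AND IDTablaB = 1;
COMMIT;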

SQL Server Read Committed Snapshot

I am just wondering about snapshot behavior at the read committed isolation level. Let's assume that I have a table named "A". Here is the first transaction:
Select blabla
From A
Insert Into A blabla
and the second transaction does the same:
Select blabla
From A
Insert Into A blabla
and assume that the timeline below occurred:
Tran1: select
Tran1: insert (not yet committed)
Tran2: select (I don't know whether it is possible or not)
Tran2: insert
As far as I know, at the standard read committed isolation level, the tran2 select query would be blocked because the tran1 insert command has not yet been committed or rolled back. But while "is_read_committed_snapshot" is enabled, I expect that no lock will be acquired during the insert or update command.
So what will happen to tran2?
I expect that tran2's select query won't see the data inserted by tran1, because that would be a "dirty read". But it wouldn't get blocked either.
Since the tran1 insert query does not acquire any lock, wouldn't this situation be a concurrency problem for these two transactions?
I expect that no lock will be acquired during the insert or update command.
That is wrong. Even if you have enabled RCSI, writers still block writers, and X locks are still acquired.
What is different between RC and RCSI is reading behaviour.
When working under pessimistic RC, the SELECT from Tran2 will be blocked on the X lock held on A, while under RCSI Tran2's SELECT will not be blocked; it will be given the last committed version of A, i.e. the state of A before Tran1 modified it.
What happens then depends on your table organisation and on what you INSERT.
Some examples.
1) Table A is a heap, and you are doing a single insert in both transactions.
In this case your INSERT in Tran2 will succeed in any case, whether or not you try to insert the same value in both transactions, because what the server acquires here is IX on the table (compatible with the IX held by Tran1), IX on a page (also compatible with the IX held by Tran1, even if it is the same page), and X on a RID (while Tran1 has X on another RID), so there is no conflict.
2) Table A is a clustered table, and you are trying to insert the same new key in both transactions.
In this case Tran2's INSERT will be blocked because of the conflict between two X locks on the same key: the first is held by Tran1, the second is requested by Tran2 and has to wait.
3) Table A is a clustered table, and you are trying to insert different keys.
Tran2's insert will succeed because the X lock on the key requested by Tran2 will be granted, as Tran1 holds only IX on the table, IX on the page, and X on another key.
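To illustrate the reading behaviour, a rough sketch (the database name and the structure of A are assumptions, not from the question):
-- enable RCSI for the database (assumption: it is named TestDb);
-- this statement needs exclusive access to the database to take effect
ALTER DATABASE TestDb SET READ_COMMITTED_SNAPSHOT ON;
-- session 1 (Tran1): insert but do not commit yet; an X lock is still acquired
BEGIN TRAN;
INSERT INTO A VALUES ('blabla');
-- session 2 (Tran2): under RCSI this SELECT is not blocked by Tran1's X lock;
-- it reads the last committed version of A, so the uncommitted row is not visible
SELECT * FROM A;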
Let's say you're doing it this way:
SELECT id FROM customers
BEGIN TRAN new_tran
UPDATE customers
SET ID = '1'
WHERE ID = '01'
If your query is something like this:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
BEGIN TRAN
SELECT *
FROM customers
WHERE id = '01'
Result: even though session 1 has changed the value, session 2 will still see the old record (2, TWO).
Now, let's commit the transaction in session 1. Once that commit happens, session 2 will see the new, updated value:
COMMIT
SELECT *
FROM DemoTable
WHERE i = 2
You can read more about it on Pinal Dave's blog: blog.sqlauthority.com/2015/07/03/sql-server-difference-between-read-committed-snapshot-and-snapshot-isolation-level/

Deadlock while updating two different rows on a single table

I've been reading a lot about deadlocks, and just when I thought I knew the subject well, along comes this problem.
There are two similar transactions going on at the same time. They look like this:
BEGIN TRAN -- read_committed_snapshot is ON
-- the application sends an insert query
INSERT INTO t1 VALUES('Name',15)
-- later the application sends an update query for the newly inserted row
UPDATE t1 SET name='NewName', number=16 WHERE id = 10 -- this ID is the id of the inserted row
COMMIT
The given code is not the exact code I use in my app, but the idea is the same; the real table just has more columns.
Table t1 has a primary key on ID and some nonclustered indexes.
After running two of these transactions simultaneously, they deadlock. The profiler says that the deadlocked query was UPDATE t1 SET name='NewName', number=16 WHERE id = :id for each of the conflicting processes.
Sorry, I do not have the XML of the deadlock, but the profiler showed that both processes held X locks and both tried to acquire U locks:
process 1
owner - X
waiter - U
process 2
owner - X
waiter - U
For both processes, table t1 was shown as the object and the PK_id index as the index name.
So what is actually happening here? Each transaction updates a different row in the same table, so why did it deadlock?
Many examples on the net say "hey, it's because of how it scans indexes: it scans the PK index for one transaction and some other nonclustered index for the other", but their profiler deadlock graphs show different values under the index name. That is not my case; the index name is the same for both.
Any ideas how to resolve this? It's driving me crazy. I thought enabling read_committed_snapshot would solve it, but I was wrong.
Most likely one or both of your updates is using a table/clustered index scan to find the rows to update; that often causes deadlocks. Check the execution plan.
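One way to capture the plan for the statement in question (or simply enable "Include Actual Execution Plan" in SSMS):
-- returns the actual execution plan XML along with the results;
-- look for a Clustered Index Scan / Table Scan instead of a Seek on PK_id
SET STATISTICS XML ON;
UPDATE t1 SET name = 'NewName', number = 16 WHERE id = 10;
SET STATISTICS XML OFF;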

Return unlocked rows in a "select top n" query

I need an MS SQL database table with 8 identical processes accessing it in parallel, each doing a SELECT TOP n, processing those n rows, and updating a column of those rows. The problem is that I need each row to be selected and processed exactly once. This means that if one process gets to the database and selects the top n rows, when the second process comes along it should find those rows locked and select rows n to 2*n, and so on...
Is it possible to put a lock on rows when you select them, so that when someone requests the top n rows and some are locked, the next rows are returned instead of waiting for the locked ones? Seems like a long shot, but...
Another thing I was thinking of - maybe not so elegant, but it sounds simple and safe - is to keep a counter in the database for the instances that have selected from that table. The first instance will increment the counter and select the top n rows, the next one will increment the counter and select rows n*(i-1) to n*i, and so on...
Does this sound like a good idea? Do you have any better suggestions? Any thought is highly appreciated!
Thanks for your time.
Here's a sample I blogged about a while ago:
The READPAST hint is what ensures multiple processes don't block each other when polling for records to process. Plus, in this example I have a bit field to physically "lock" a record - could be a datetime if needed.
DECLARE @NextId INTEGER
BEGIN TRANSACTION
-- Find the next available item
SELECT TOP 1 @NextId = ID
FROM QueueTable WITH (UPDLOCK, READPAST)
WHERE IsBeingProcessed = 0
ORDER BY ID ASC
-- If found, flag it to prevent it being picked up again
IF (@NextId IS NOT NULL)
BEGIN
UPDATE QueueTable
SET IsBeingProcessed = 1
WHERE ID = @NextId
END
COMMIT TRANSACTION
-- Now return the queue item, if we have one
IF (@NextId IS NOT NULL)
SELECT * FROM QueueTable WHERE ID = @NextId
The simplest method is to use row locking:
BEGIN TRAN
SELECT *
FROM authors
WITH (HOLDLOCK, ROWLOCK)
WHERE au_id = '274-80-9391'
/* Do all your stuff here while the record is locked */
COMMIT TRAN
But if you are accessing your data and then closing the connection, you won't be able to use this method.
How long will you need to lock the rows for? The best way might actually be, as you say, to place a counter on the rows you select (best done using the OUTPUT clause within an UPDATE).
The best idea if you want to select records in this manner would be to use a counter in a separate table.
You really don't want to be locking rows on a production database exclusively for any great period of time, so I would recommend using a counter. That way only one of your processes can grab the counter number at a time (since the counter row is locked while it is being updated), which gives you the concurrency you need.
If you need a hand writing the tables and procedures that will do this (simply and safely as you put it!) just ask.
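For illustration, a rough sketch of such a counter (the table and column names here are made up, not from the question):
-- hypothetical single-row counter table
CREATE TABLE BatchCounter (NextBatch int NOT NULL);
INSERT INTO BatchCounter VALUES (0);
-- each process atomically claims a batch number; the X lock taken by the
-- UPDATE serialises the claims, so no two processes get the same number
UPDATE BatchCounter
SET NextBatch = NextBatch + 1
OUTPUT INSERTED.NextBatch AS MyBatch;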
EDIT: ahh, nevermind, you're working in a disconnected style. How about this:
UPDATE TOP (@n) QueueTable SET Locked = 1
OUTPUT INSERTED.Col1, INSERTED.Col2 INTO #this
WHERE Locked = 0
<do your stuff>
Perhaps you are looking for the READPAST hint?
<begin or save transaction>
INSERT INTO #this (Col1, Col2)
SELECT TOP (@n) Col1, Col2
FROM Table1 WITH (ROWLOCK, HOLDLOCK, READPAST)
<do your stuff>
<commit or rollback>
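For completeness, a fuller sketch of the first (disconnected) pattern with the temp table declared; @n, the column names and the types are assumptions, not taken from the question:
-- temp table to hold the batch this process has claimed
CREATE TABLE #this (Col1 int, Col2 varchar(50));
DECLARE @n int = 10;
-- claim up to @n unlocked rows and capture them in one atomic statement
UPDATE TOP (@n) QueueTable
SET Locked = 1
OUTPUT INSERTED.Col1, INSERTED.Col2 INTO #this
WHERE Locked = 0;
-- process the claimed rows; afterwards reset or delete them as appropriate
SELECT Col1, Col2 FROM #this;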

Get "next" row from SQL Server database and flag it in single transaction

I have a SQL Server table that I'm using as a queue, and it's being processed by a multi-threaded (and soon to be multi-server) application. I'd like a way for a process to claim the next row from the queue, flagging it as "in-process", without the possibility that multiple threads (or multiple servers) will claim the same row at the same time.
Is there a way to update a flag in a row and retrieve that row at the same time? I want something like this pseudocode, but ideally without blocking the whole table:
Block the table to prevent others from reading
Grab the next ID in the queue
Update the row of that item with a "claimed" flag (or whatever)
Release the lock and let other threads repeat the process
What's the best way to use T-SQL to accomplish this? I remember seeing a statement one time that would DELETE rows and, at the same time, deposit the DELETED rows into a temp table so you could do something else with them, but I can't for the life of me find it now.
You can use the OUTPUT clause
UPDATE myTable SET flag = 1
OUTPUT DELETED.id
WHERE
id = 1
AND
flag <> 1
Main thing is to use a combination of table hints as shown below, within a transaction.
DECLARE @NextId INTEGER
BEGIN TRANSACTION
SELECT TOP 1 @NextId = ID
FROM QueueTable WITH (UPDLOCK, ROWLOCK, READPAST)
WHERE BeingProcessed = 0
ORDER BY ID ASC
IF (@NextId IS NOT NULL)
BEGIN
UPDATE QueueTable
SET BeingProcessed = 1
WHERE ID = @NextId
END
COMMIT TRANSACTION
IF (@NextId IS NOT NULL)
SELECT * FROM QueueTable WHERE ID = @NextId
UPDLOCK will lock the next available row it finds, preventing other processes from grabbing it.
ROWLOCK will ensure only the individual row is locked (I've never found it to be a problem to omit this, as I think it will only take a row lock anyway, but it's safest to include it).
READPAST will prevent a process being blocked, waiting for another to finish.
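As a variation, the flag-and-fetch can be done in a single atomic statement; a sketch, assuming the same QueueTable and BeingProcessed column as above (note that TOP in an UPDATE cannot use ORDER BY, so this does not guarantee which qualifying row is picked):
-- claim the next available row and return it in one statement:
-- UPDLOCK/READPAST skip rows already claimed by other sessions,
-- and OUTPUT returns the row this session just flagged
UPDATE TOP (1) QueueTable WITH (UPDLOCK, ROWLOCK, READPAST)
SET BeingProcessed = 1
OUTPUT INSERTED.ID
WHERE BeingProcessed = 0;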
