MSSQL: how to properly lock rows and insert? - sql-server

I want to insert two rows in 2 different tables, but want to roll back the transaction if certain preconditions on the second table are met.
Does it work in .NET if I simply start a transaction scope and execute a SQL query to check the data in the second table before executing the insert statements? If so, what isolation level should I use?
I don't want it to lock the whole tables, as there are going to be many inserts. A UNIQUE constraint is not an option, because what I want to guarantee is that no more than 2 rows in the 2nd table have the same value (an FK to a PK column of table 1).
Thanks

Yes, you can execute a SQL query to check data in the second table before executing the insert statements.
FYI, the default isolation level is Serializable. From MSDN:
The lowest isolation level, ReadUncommitted, allows many transactions
to operate on a data store simultaneously and provides no protection
against data corruption due to interruptive transactions. The highest
isolation level, Serializable, provides a high degree of protection
against interruptive transactions, but requires that each transaction
complete before any other transactions are allowed to operate on the
data.
The isolation level of a transaction is determined when the
transaction is created. By default, the System.Transactions
infrastructure creates Serializable transactions. You can determine
the isolation level of an existing transaction using the
IsolationLevel property of a transaction.
Given your requirement, I do not think you want to use Serializable: it is the least friendly level for high-volume, multi-user systems, because it causes the most blocking.
You need to decide on the amount of protection that is required. At a minimum, you should look into READ UNCOMMITTED, READ COMMITTED, and REPEATABLE READ. The following answer goes over isolation levels in detail; from that, you can decide what level of protection is sufficient for your requirement.
Transaction isolation levels relation with locks on table
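For the specific "no more than 2 rows with the same FK value" requirement, a common pattern is to take an UPDLOCK/HOLDLOCK range lock on just the relevant key while counting, so only that key range is serialized rather than the whole table. A minimal sketch, assuming hypothetical tables Parent and Child where Child.ParentId references Parent.Id (the insert into the first table is omitted for brevity):
DECLARE @ParentId INT = 42;  -- hypothetical key value

BEGIN TRANSACTION;

-- Range-lock only the rows for this key; other keys stay unblocked.
DECLARE @Count INT;
SELECT @Count = COUNT(*)
FROM Child WITH (UPDLOCK, HOLDLOCK)
WHERE ParentId = @ParentId;

IF @Count < 2
BEGIN
    INSERT INTO Child (ParentId) VALUES (@ParentId);
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;  -- precondition failed: already 2 rows for this key
An index on Child(ParentId) keeps the range lock narrow; without one, the hints can lock far more than the single key range.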

Related

in-flight collision of DB transaction with different Isolation Level

I have some doubts with respect to transactions and isolation levels:
1) If the DB transaction level is set to Serializable / Repeatable Read and there are two concurrent transactions trying to modify the same data, then one of the transactions will fail.
In such cases, why doesn't the DB retry the failed operation? Is it good practice to retry the transaction at the application level (hoping the other transaction will be finished in the meantime)?
2) If the DB transaction level is set to READ COMMITTED / READ UNCOMMITTED (dirty read) and there are two concurrent transactions trying to modify the same data, why don't the transactions fail?
Ideally we are only controlling the read behaviour; concurrent writes should still not be allowed.
3) My application has 2 parts: one part uses a Spring-managed datasource, and the other uses an application-created datasource (this part doesn't use Spring, and the datasource is created explicitly by passing the properties).
My assumption is that the isolation level is unaffected by which datasource a connection comes from: two concurrent transactions will behave the same under a given isolation level whether they come from different datasources or the same one.
Do you see any issue with this setup? Should we strive for a single datasource across the application?
I'll wait for others to give their feedback too, but for now I'd like to give my 2 cents on this post.
As you noted, each isolation level works differently.
I'll use the following sample data set:
IF OBJECT_ID('Employees') IS NOT NULL DROP TABLE Employees
GO
CREATE TABLE Employees (
Emp_id INT IDENTITY,
Emp_name VARCHAR(20),
Emp_Telephone VARCHAR(15)
)
ALTER TABLE Employees
ADD CONSTRAINT PK_Employees PRIMARY KEY (emp_id)
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Satsara', '07436743439'
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Udhara', '045672903'
INSERT INTO Employees (Emp_name, Emp_Telephone)
SELECT 'Sithara', '58745874859'
REPEATABLE READ and SERIALIZABLE are very close to each other, but SERIALIZABLE is the highest isolation level. Both are provided to avoid dirty reads, and both need to be managed very carefully, because the way they hold locks makes them the most prone to deadlocks. If there is a deadlock, the server will definitely wipe one transaction out of the picture, and it will never rerun it on its own, because it keeps no record of the removed transaction other than the log.
REPEATABLE READ - Does not allow records already read by the transaction to be modified by another process (it locks them). But it does allow new records to be inserted (without a lock), which can affect your results while querying.
SERIALIZABLE - The difference with SERIALIZABLE is that it does not allow records to be inserted into the range read under SET TRANSACTION ISOLATION LEVEL SERIALIZABLE. So INSERT processes wait until the previous transaction commits.
REPEATABLE READ and SERIALIZABLE generally hold data locks longer than the other two options.
Example [REPEATABLE READ and SERIALIZABLE]:
The Employees table has 3 records.
Open a query window and run (QUERY 1)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM Employees;
Now try to run an insert query in a different window (QUERY 2):
INSERT INTO Employees(Emp_name, Emp_Telephone)
SELECT 'JANAKA', '3333333'
The system allows QUERY 2 to insert the new record; if you now rerun the SELECT from QUERY 1, you will see 4 records (the insert was not blocked).
Now replace QUERY 1 with the following code and try the same process to test SERIALIZABLE:
SET TRANSACTION ISOLATION LEVEL Serializable
BEGIN TRAN
SELECT * FROM Employees;
This time you can see that the INSERT in QUERY 2 is not allowed to execute and waits until QUERY 1 commits.
Only once QUERY 1 has committed is QUERY 2 allowed to execute the INSERT.
Comparing READ COMMITTED and READ UNCOMMITTED:
READ COMMITTED - Changes to the data are not visible to other processes until the records are committed. Read Committed puts shared locks on all the records it reads; if another process holds an exclusive lock on a record, it waits until that lock is released.
READ UNCOMMITTED - Not recommended: the system can read garbage (uncommitted) data because of this (in SQL Server, the NOLOCK hint). It will return uncommitted data:
SELECT * FROM Employees WITH (NOLOCK)
DEADLOCKS - Whether it is REPEATABLE READ, SERIALIZABLE, READ COMMITTED or READ UNCOMMITTED, deadlocks can occur. The only difference, as we discussed, is that REPEATABLE READ and SERIALIZABLE are more prone to deadlocks than the other two options.
Note: If you need a sample for Read Committed and Read Uncommitted, please let me know in the comment section and we can discuss; a small READ UNCOMMITTED sketch is below.
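As a small sketch of the READ UNCOMMITTED behaviour using the same Employees table (the pending value is just for illustration):
-- Session 1: modify a row but do not commit yet
BEGIN TRAN
UPDATE Employees SET Emp_Telephone = '0000000' WHERE Emp_id = 1;

-- Session 2: reads the uncommitted value '0000000' instead of blocking
SELECT * FROM Employees WITH (NOLOCK);

-- Session 1: roll back; session 2 has read data that never logically existed
ROLLBACK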
This is actually a very large topic that needs lots of samples to discuss properly. I do not know whether this explanation is enough or not, but I gave it a small try; it is hard to know where to start and when to stop.
At the same time, you asked: "Is it a good practice to retry the transaction on application level?"
In my opinion that's fine. Personally, I also retry in some situations; a minimal retry sketch follows the list below.
Different techniques are used:
Keeping a flag field to identify whether the row was updated or not, and retrying
Using an event-driven solution such as RabbitMQ or Kafka
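As a minimal sketch of such a retry, done here in T-SQL for illustration (retrying only when the batch is chosen as the deadlock victim, error 1205; the UPDATE is a placeholder for the real work):
DECLARE @retries INT = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        -- placeholder for the statements that deadlocked
        UPDATE Employees SET Emp_Telephone = '1111111' WHERE Emp_id = 1;
        COMMIT TRAN;
        BREAK;  -- success: leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN;
        SET @retries = @retries - 1;
        IF ERROR_NUMBER() = 1205 AND @retries > 0
            WAITFOR DELAY '00:00:01';  -- deadlock victim: back off, then retry
        ELSE
            THROW;                     -- not a deadlock, or out of retries
    END CATCH
END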

Row locking behaviour while updating

In Oracle databases I can start a transaction and update a row without committing. Selecting this row in another session still returns the current ("old") value.
How can I get this behaviour in SQL Server? Currently, the row is locked until the transaction ends. WITH (NOLOCK) inside the SELECT statement gives the new value from the uncommitted transaction, which is potentially dangerous.
Starting the transaction without committing:
BEGIN TRAN;
UPDATE test SET val = 'Updated' WHERE id = 1;
This works:
SELECT * FROM test WHERE id = 2;
This waits for the transaction to be committed:
SELECT * FROM test WHERE id = 1;
With Read Committed Snapshot Isolation (RCSI), versions of rows are stored in a version store, so readers can read the version of a row that existed at the time the statement started, before any in-flight changes were made, while a transaction is open, without taking shared locks on rows or pages, and without blocking writers or other readers. From this post by Paul White:
To summarize, locking read committed sees each row as it was at the time it was briefly locked and physically read; RCSI sees all rows as they were at the time the statement began. Both implementations are guaranteed to never see uncommitted data.
One cost, of course, is that if you read a prior version of the row, it can change (even many times) before you're done doing whatever it is you plan to do with it. If you're making important decisions based on some past version of the row, it may be the case that you actually want an isolation level that forces you to wait until all changes have been committed.
Another cost is that version store is not free... it requires space and I/O in tempdb, so if tempdb is already a bottleneck on your system, this is something worth testing.
(In SQL Server 2019, with Accelerated Database Recovery, the version store shifts to the user database, which increases database size but mitigates some of the tempdb contention.)
Paul's post goes on to explain some other risks and caveats.
In almost all cases, this is still way better than NOLOCK, IMHO. Lots of links about the dangers there (and why RCSI is better) here:
I'm using NOLOCK; is that bad?
And finally, from the documentation (adding one clarification from the comments):
When the READ_COMMITTED_SNAPSHOT database option is set ON, read committed isolation uses row versioning to provide statement-level read consistency. Read operations require only SCH-S table level locks and no page or row locks. That is, the SQL Server Database Engine uses row versioning to present each statement with a transactionally consistent snapshot of the data as it existed at the start of the statement. Locks are not used to protect the data from updates by other transactions. A user-defined function can return data that was committed after the time the statement containing the UDF began.
When the READ_COMMITTED_SNAPSHOT database option is set OFF (which is the default setting on-prem, but not in Azure SQL Database), read committed isolation uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation. The shared locks also block the statement from reading rows modified by other transactions until the other transaction is completed. Both implementations meet the ISO definition of read committed isolation.
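For reference, a minimal sketch of enabling RCSI (MyDb is a placeholder; the SINGLE_USER step is only to keep the ALTER from waiting behind open sessions):
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE MyDb SET MULTI_USER;

-- Verify the setting took effect
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDb';
With the option on, the blocked SELECT ... WHERE id = 1 from the question returns the last committed value instead of waiting.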

SQL Server MERGE command duplicate pk error

This stored procedure often fails when transactions happen simultaneously, because it violates the PK constraint with a duplicate key (Field, Name). I have been trying HOLDLOCK, ROWLOCK and BEGIN/COMMIT TRANSACTION, which all result in the same error.
I am trying to perform an insert statement, and if a record exists with the same key (Field, Name), update it instead.
Performance is important, so I am trying to avoid a temp table solution.
MERGE Fielddata AS TARGET
USING (VALUES (@Field, @Name, @Value, @File, @Type))
AS SOURCE (Field, Name, Value, File, Type)
ON TARGET.Field = @Field AND TARGET.Name = @Name
WHEN MATCHED THEN
UPDATE
SET Value = SOURCE.Value,
File = SOURCE.File,
Type = SOURCE.Type
WHEN NOT MATCHED THEN
INSERT (Field, Name, Value, File, Type)
VALUES (SOURCE.Field, SOURCE.Name, SOURCE.Value, SOURCE.File, SOURCE.Type);
EDIT: Testing with serializable/holdlock for 24 hours. After 30 mins: no errors.
EDIT 2: WITH (SERIALIZABLE) / SET TRANSACTION ISOLATION LEVEL SERIALIZABLE solves the duplicate key problem effectively, costing a little bit of performance in our case/scenario.
You must increase the level of transaction isolation.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SERIALIZABLE specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
Serializable Implementations
SQL Server happens to use a locking implementation of the serializable
isolation level, where physical locks are acquired and held to the end
of the transaction (hence the deprecated table hint HOLDLOCK as a
synonym for SERIALIZABLE).
This strategy is not quite enough to provide a technical guarantee of
full serializability, because new or changed data could appear in a
range of rows previously processed by the transaction. This
concurrency phenomenon is known as a phantom, and can result in
effects which could not have occurred in any serial schedule.
To ensure protection against the phantom concurrency phenomenon, locks
taken by SQL Server at the serializable isolation level may also
incorporate key-range locking to prevent new or changed rows from
appearing between previously-examined index key values. Range locks
are not always acquired under the serializable isolation level; all we
can say in general is that SQL Server always acquires sufficient locks
to meet the logical requirements of the serializable isolation level.
In fact, locking implementations quite often acquire more, and
stricter, locks than are really needed to guarantee serializability,
but I digress.
https://sqlperformance.com/2014/04/t-sql-queries/the-serializable-isolation-level
Put simply: it locks not only the rows read but also the key range into which a conflicting row could be inserted.
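In practice, the fix the asker confirmed in EDIT 2 can be applied as a hint directly on the MERGE target, so the key range is locked only for the duration of the statement. A sketch of the same statement with the hint added:
MERGE Fielddata WITH (HOLDLOCK) AS TARGET  -- range-locks the (Field, Name) key range
USING (VALUES (@Field, @Name, @Value, @File, @Type))
AS SOURCE (Field, Name, Value, File, Type)
ON TARGET.Field = SOURCE.Field AND TARGET.Name = SOURCE.Name
WHEN MATCHED THEN
UPDATE
SET Value = SOURCE.Value,
File = SOURCE.File,
Type = SOURCE.Type
WHEN NOT MATCHED THEN
INSERT (Field, Name, Value, File, Type)
VALUES (SOURCE.Field, SOURCE.Name, SOURCE.Value, SOURCE.File, SOURCE.Type);
This way the range lock covers the gap between the existence check and the insert, which is exactly where a concurrent transaction could otherwise sneak in a duplicate key.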

Deadlock occurring in Clustered Columnstore index

We are using a clustered columnstore index on our transaction table holding order fulfillments. This table is regularly updated by different sessions, but every session is specific to an order job number, so they are not trying to update the same rows at the same time. Still, we are facing deadlocks due to the scenarios below between sessions:
Row group locking & Page lock
Row group locking & Row group locking
This is not specific to one stored procedure; it is due to multiple stored procedures updating this table sequentially, one by one, as part of order fulfillment.
The sample schema of the table is very simple:
CREATE TABLE OrderFulfillments
(
OrderJobNumber INT NOT NULL,
FulfilledIndividualID BIGINT NOT NULL,
IsIndividualSuppressed BIT NOT NULL,
SuppressionReason VARCHAR(100) NULL
)
I have included a sample deadlock graph for your reference. Please let me know what approach I can take to avoid this deadlock situation. We need the clustered columnstore index on this table, as we do aggregation operations to see how many times an individual has already been fulfilled; without the columnstore index, that might be slower.
In my case, the deadlock scenario was due to lock escalation: some of the fulfillments were very big, in the 10,000s or 100k ranges, which caused locks to escalate to rowgroup level and, in some cases, page level.
I solved this by creating a temporary table at the very beginning of the transaction, doing the updates on the temporary table, and only at the end inserting the temporary table's fulfillment information into OrderFulfillments. OrderFulfillments is still read while populating the temporary table (to see how many times an individual has already been fulfilled), but that only takes shared locks, not exclusive ones.
By going to a temporary table, every session works on its own copy, and the concurrency issues are resolved; a sketch of the pattern is below.
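A minimal sketch of that staging pattern (the variable, the staged columns and the placeholder update stand in for the real fulfillment logic):
DECLARE @OrderJobNumber INT = 1001;  -- placeholder job number

BEGIN TRAN;

-- Read the shared table with shared locks only, into this session's own copy
SELECT f.FulfilledIndividualID,
       COUNT(*) AS TimesAlreadyFulfilled
INTO #Staging
FROM OrderFulfillments AS f
WHERE f.OrderJobNumber = @OrderJobNumber
GROUP BY f.FulfilledIndividualID;

-- All per-row work happens on #Staging, so any lock escalation stays local
UPDATE #Staging SET TimesAlreadyFulfilled = TimesAlreadyFulfilled + 1;  -- placeholder work

-- One final insert keeps the exclusive-lock window on the shared table short
INSERT INTO OrderFulfillments (OrderJobNumber, FulfilledIndividualID, IsIndividualSuppressed, SuppressionReason)
SELECT @OrderJobNumber, s.FulfilledIndividualID, 0, NULL
FROM #Staging AS s;

COMMIT TRAN;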
You assume NOLOCK is the same as no locking... that is incorrect.
NOLOCK is equivalent to READUNCOMMITTED.
• READUNCOMMITTED and NOLOCK hints apply only to data locks.
All queries, including those with READUNCOMMITTED and NOLOCK hints,
acquire Sch-S (schema stability) locks during compilation and
execution. Because of this, queries are blocked when a concurrent
transaction holds a Sch-M (schema modification) lock on the table.
For example, a data definition language (DDL) operation acquires a Sch-M
lock before it modifies the schema information of the table.
Any concurrent queries, including those running with READUNCOMMITTED or
NOLOCK hints, are blocked when attempting to acquire a Sch-S lock.
Conversely, a query holding a Sch-S lock blocks a concurrent
transaction that attempts to acquire a Sch-M lock.
READUNCOMMITTED and NOLOCK cannot be specified for tables modified by
insert, update, or delete operations. The SQL Server query optimizer
ignores the READUNCOMMITTED and NOLOCK hints in the FROM clause that
apply to the target table of an UPDATE or DELETE statement.
You can minimize locking contention while protecting transactions from
dirty reads of uncommitted data modifications by using either of the
following:
• The READ COMMITTED isolation level with the
READ_COMMITTED_SNAPSHOT database option set ON.
• The SNAPSHOT
isolation level. For more information about isolation levels, see SET
TRANSACTION ISOLATION LEVEL (Transact-SQL).
https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
Understand how your indexes are structured: blocking can occur if, say, a SELECT statement requires an entire page that your UPDATE is modifying concurrently.
Limit your variables when testing.
Consider splitting your DML into batches; you may find an optimal range for performing concurrent modifications of your table data. A sketch is below.
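As a sketch of that splitting idea: batching the modifications keeps each statement below the lock-escalation threshold (roughly 5,000 locks per statement), so escalations to rowgroup or page level become less likely. The table, filter and batch size are placeholders:
DECLARE @batch INT = 4000;  -- stay below the ~5,000-lock escalation threshold

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) OrderFulfillments
    SET IsIndividualSuppressed = 1
    WHERE OrderJobNumber = 1001          -- placeholder filter
      AND IsIndividualSuppressed = 0;    -- only rows not yet touched

    IF @@ROWCOUNT < @batch BREAK;        -- last partial batch: done
END
Each iteration is its own short transaction, so concurrent sessions interleave between batches instead of deadlocking inside one huge statement.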

when/what locks are held/released in READ COMMITTED isolation level

I am trying to understand isolation levels and locks in SQL Server.
I have the following scenario under the READ COMMITTED isolation level (the default).
We have a table.
create table Transactions(Tid int,amt int)
with some records
insert into Transactions values(1, 100)
insert into Transactions values(2, -50)
insert into Transactions values(3, 100)
insert into Transactions values(4, -100)
insert into Transactions values(5, 200)
Now, from MSDN I understood:
When a SELECT is fired, a shared lock is taken so no other transaction can modify the data (avoiding dirty reads). The documentation also talks about row-level, page-level and table-level locks. I thought of the following scenario:
Begin Transaction
select * from Transactions
/*
some business logic which takes 5 minutes
*/
Commit
What I want to understand is for what duration the shared lock is held, and at which granularity (row, page, table).
Will the lock be acquired only while the statement select * from Transactions runs, or will it be held for the whole 5+ minutes until we reach COMMIT?
You are asking the wrong question; you are concerned with the implementation details. What you should think of and be concerned with are the semantics of the isolation level. Kendra Little has a nice poster explaining them: Free Poster! Guide to SQL Server Isolation Levels.
Your question should be rephrased like:
select * from Items
Q: What Items will I see?
A: All committed Items
Q: What happens if there are uncommitted transactions that have inserted/deleted/updated Items?
A: your SELECT will block until all uncommitted Items are committed (or rolled back).
Q: What happens if new Items are inserted/deleted/updated while I run the query above?
A: The results are undetermined. You may see some of the modifications, may not see others, and may possibly block until some of them commit.
READ COMMITTED makes no promise once your statement has finished, irrespective of the length of the transaction. If you run the statement again you get exactly the same semantics as stated before, so the Items you saw before may change or disappear, and new ones can appear. Obviously this implies that changes can be made to Items after your SELECT.
Higher isolation levels give stronger guarantees: REPEATABLE READ guarantees that no item you've selected the first time can be modified or deleted until you commit. SERIALIZABLE adds the guarantee that no new Item can appear in your second select before you commit.
This is what you need to understand, not how the implementation mechanism works. After you master these concepts, you may ask about the implementation details. They are all described in Transaction Processing: Concepts and Techniques.
Your question is a good one. Understanding what kinds of locks are acquired allows a deeper understanding of DBMSs. In SQL Server, under all isolation levels (Read Uncommitted, Read Committed (default), Repeatable Read, Serializable), exclusive locks are acquired for write operations.
Exclusive locks are released when transaction ends, regardless of the isolation level.
The difference between the isolation levels refers to the way in which Shared (Read) Locks are acquired/released.
Under Read Uncommitted isolation level, no Shared locks are acquired. Under this isolation level the concurrency issue known as "Dirty Reads" (a transaction is allowed to read data from a row that has been modified by another running transaction and not yet committed, so it could be rolled back) can occur.
Under Read Committed isolation level, Shared Locks are acquired for the concerned records. The Shared Locks are released when the current instruction ends. This isolation level prevents "Dirty Reads" but, since the record can be updated by other concurrent transactions, "Non-Repeatable Reads" (transaction A retrieves a row, transaction B subsequently updates the row, and transaction A later retrieves the same row again. Transaction A retrieves the same row twice but sees different data) or "Phantom Reads" (in the course of a transaction, two identical queries are executed, and the collection of rows returned by the second query is different from the first) can occur.
Under Repeatable Reads isolation level, Shared Locks are acquired for the transaction duration. "Dirty Reads" and "Non-Repeatable Reads" are prevented but "Phantom Reads" can still occur.
Under Serializable isolation level, ranged Shared Locks are acquired for the transaction duration. None of the above mentioned concurrency issues occur but performance is drastically reduced and there is the risk of Deadlocks occurrence.
The lock is only acquired while select * from Transactions runs.
You can check it with the code below.
Open a SQL session and run this query:
Begin Transaction
select * from Transactions
WAITFOR DELAY '00:05'
/*
some business logic which takes 5 minutes
*/
Commit
Open another SQL session and run the query below:
Begin Transaction
Update Transactions
Set amt = ...
where Tid = ...
commit
The UPDATE proceeds immediately even though the first session's transaction is still open, which shows the shared lock was released as soon as the SELECT finished.
First, locks are only acquired when a statement runs.
To simplify, suppose your workload is separated into two statements:
select * from Transactions
update Transactions set amt = xxx where Tid = xxx
When/what locks are held/released in READ COMMITTED isolation level?
When select * from Transactions runs, shared locks are taken and released as the rows are read; nothing remains held after the statement completes.
The following update Transactions set amt = xxx where Tid = xxx adds X (exclusive) locks on the updated keys and IX (intent exclusive) locks on the page/table.
All exclusive locks are released only after commit/rollback; that means no exclusive lock is released while the transaction is running.
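If you want to observe this yourself, a third session can watch the locks the two test sessions hold via sys.dm_tran_locks (a sketch; filter further on the session IDs of your test windows):
SELECT request_session_id,
       resource_type,      -- KEY, PAGE, OBJECT, ...
       request_mode,       -- S, X, IS, IX, ...
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
ORDER BY request_session_id, resource_type;
While the UPDATE transaction is open you should see its X and IX entries persist; the SELECT session's shared locks, by contrast, disappear as soon as the statement finishes.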
