What SQL Server isolation level should I choose to prevent concurrent reads? - sql-server

I have the following transaction:
SQL inserts 1 new record into a table called tbl_document
SQL deletes all records matching a criteria in another table called tbl_attachment
SQL inserts multiple records into the tbl_attachment
Until this transaction finishes, I don't want other users to be aware of the (1) new record in tbl_document, (2) deleted records in tbl_attachment, and (3) modified records in tbl_attachment.
Would Read Committed Isolation be the correct isolation level?

The transaction isolation level of your writes doesn't matter. What matters is the isolation level of your reads. Normally, reads will not see your insert/update/delete until it is committed. The only isolation level that can see uncommitted changes is READ UNCOMMITTED. If a concurrent reader uses dirty reads, there is nothing you can do on the write side to prevent it.
READ UNCOMMITTED can either be set as an isolation level or requested explicitly with a (NOLOCK) table hint. Dirty reads can see inconsistent transaction data (e.g. a debit that does not balance the credit of an operation), can read the same row multiple times from a table (duplicate reads), and can trigger mysterious key violations.
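As a quick illustration (a minimal sketch; the accounts table and its columns are hypothetical), a concurrent reader that opts into dirty reads will see the uncommitted change either way:
-- Session 1: leave a transaction open with an uncommitted change
BEGIN TRANSACTION;
UPDATE dbo.accounts SET balance = balance - 100 WHERE id = 1;
-- no COMMIT yet

-- Session 2: either form performs a dirty read and sees the uncommitted debit
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT balance FROM dbo.accounts WHERE id = 1;
-- or, per table:
SELECT balance FROM dbo.accounts WITH (NOLOCK) WHERE id = 1;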

Yes, like this:
BEGIN TRANSACTION
insert into tbl_document ...
delete tbl_attachment where ...
insert into tbl_attachment ...
COMMIT
This may block other users until you finish and commit/rollback the transaction. Also, someone could SELECT rows from tbl_attachment after your insert into tbl_document but before your delete. If you need to prevent that, do this:
BEGIN TRANSACTION
select ... from tbl_attachment with (UPDLOCK, HOLDLOCK) where ...
insert into tbl_document ...
delete tbl_attachment where ...
insert into tbl_attachment ...
COMMIT
or just delete tbl_attachment before the insert into tbl_document and forget the select with the locking hints, as in the sketch below.
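For reference, the reordered version (same placeholders as above) would be:
BEGIN TRANSACTION
delete tbl_attachment where ...
insert into tbl_document ...
insert into tbl_attachment ...
COMMIT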

Related

Lost update in snapshot vs. all other isolation levels

Let's suppose we create a new table and enable snapshot isolation for our database:
alter database database_name set allow_snapshot_isolation on
create table marbles (id int primary key, color char(5))
insert marbles values(1, 'Black')
insert marbles values(2, 'White')
Next, in session 1, begin a snapshot transaction:
set transaction isolation level snapshot
begin tran
update marbles set color = 'Blue' where id = 2
Now, before committing the changes, run the following in session 2:
set transaction isolation level snapshot
begin tran
update marbles set color = 'Yellow' where id = 2
Then, when we commit session 1, session 2 fails with an error that the transaction was aborted - I understand that this prevents a lost update.
If we follow these steps one by one under any other isolation level (serializable, repeatable read, read committed, or read uncommitted), session 2 executes and applies its update to the table.
Could someone please explain to me why this is happening?
For me this looks like some kind of lost update, but it seems that only snapshot isolation prevents it.
Could someone please explain to me why this is happening?
Because under all the other isolation levels, the point in time at which the second session first sees the row is after the first transaction commits. Locking is a kind of time travel: a session enters a lock wait and is transported forward in time to when the resource is eventually available.
For me this looks like some kind of lost update
No, it's not. Both updates completed properly, and the final state of the row would have been the same if the transactions had been 10 minutes apart.
In a lost update scenario, each session reads the row before attempting to update it, and the results of the first transaction are needed to properly complete the second transaction, e.g. if each is incrementing a column by 1.
And under locking READ COMMITTED, REPEATABLE READ, and SERIALIZABLE the SELECT would be blocked, and no lost update would occur. And under READ_COMMITTED_SNAPSHOT the SELECT should have an UPDLOCK hint, and it would block too.
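To make the read-then-write pattern concrete, here is a minimal sketch (the counters table, its columns, and the @hits variable are hypothetical):
-- Both sessions run this sequence concurrently:
BEGIN TRANSACTION;
DECLARE @hits int;
SELECT @hits = hits FROM dbo.counters WHERE id = 1;    -- both may read the same value, say 10
UPDATE dbo.counters SET hits = @hits + 1 WHERE id = 1; -- both write 11: one increment is lost
COMMIT;

-- The fix mentioned above: take an update lock on the read, so the second
-- session blocks at the SELECT instead of reading a soon-to-be-stale value
SELECT @hits = hits FROM dbo.counters WITH (UPDLOCK) WHERE id = 1;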

SQL Server update blocked by another transaction, concurrency in update

Two SPs are executed one after another, and the second one is blocked by the first. Both are trying to update the same table. The two SPs are as follows:
CREATE PROCEDURE [dbo].[SP1]
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION ImpSchd
UPDATE Table t1 .......... -- updating a set of [n1,n2....n100] records
COMMIT TRANSACTION ImpSchd
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
2.
CREATE PROCEDURE [dbo].[SP2]
AS
BEGIN
UPDATE Table t1 .......... -- updating a set of [n101,n102.....n200] records
END
My question is: when SP1 is running under snapshot isolation, why is it blocking SP2 (when both are updating different sets of records)?
If I run the first SP for two different sets of records simultaneously, it works perfectly.
How can I overcome this situation?
If snapshot isolation has to be set in every SP that updates the same table, that would be a much larger change.
If two SPs have to update the same records in a table, how should I handle that (each SP updating different columns)?
Isolation levels only affect whether reads are blocked, so DML is not affected by them. In this case the update takes an IX lock on the table and page, followed by an X lock on each row it updates. Since you are updating in bulk, the table itself might have been locked due to lock escalation. Hope this helps.
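To check whether lock escalation is what you are seeing (a sketch; dbo.t1 stands in for the real table name), inspect the locks while SP1's transaction is open, and if an OBJECT-level X lock is confirmed, escalation can be disabled per table:
-- Run in another session while SP1's update is in flight
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();

-- If the table shows an OBJECT X lock, per-table escalation can be turned off
ALTER TABLE dbo.t1 SET (LOCK_ESCALATION = DISABLE);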

SQL Server - Is there any such thing called 'dirty write'?

Does SQL Server allow a transaction to modify the data that is currently being modified by another transaction but hasn't yet been committed? Is this possible under any of the isolation levels, let's say READ UNCOMMITTED since that is the least restrictive? Or does it completely prevent that from happening? Would you call that a 'dirty write' if it is possible?
Any RDBMS providing transactions and atomicity of transactions cannot allow dirty writes.
SQL Server must ensure that all writes can be rolled back. This goes even for a single statement because even a single statement can cause many writes and run for hours.
Imagine a row was written but needed to be rolled back, while meanwhile another write to that row has already been committed. Now we cannot roll back, because that would violate the durability guarantee provided to the other transaction: its write would be lost. (It would possibly also violate the atomicity guarantee provided to that other transaction, if the rolled-back row was one of several rows it had written.)
The only solution is to always stabilize written but uncommitted data using X-locks.
SQL Server never allows dirty writes or lost writes.
No, you can't, unless you update in the same transaction. Setting the isolation level to READ UNCOMMITTED only lets you read data from the table even if it has not been committed; you can't update it.
The READ UNCOMMITTED isolation level and the NOLOCK table hint are ignored for update or delete statements, and the statement will wait until the other transaction commits.
A dirty write did not occur in SQL Server in my experiment with READ UNCOMMITTED, which is the loosest isolation level. Basically, dirty writes are not allowed at any isolation level in most databases.
A dirty write is when a transaction updates or deletes (overwrites) uncommitted data that other transactions have inserted, updated, or deleted.
I experimented with dirty writes in MSSQL (SQL Server) using 2 command prompts.
First, I set READ UNCOMMITTED isolation level:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Then, I created a person table with id and name as shown below.
person table:
id | name
---+------
1  | John
2  | David
Now, I ran the steps below with MSSQL queries, using the 2 command prompts:
Step 1 - T1: BEGIN TRAN; GO
T1 starts.
Step 2 - T2: BEGIN TRAN; GO
T2 starts.
Step 3 - T1: UPDATE person SET name = 'Tom' WHERE id = 2; GO
T1 updates David to Tom, so this row is locked by T1 until T1 commits.
Step 4 - T2: UPDATE person SET name = 'Lisa' WHERE id = 2; GO
T2 cannot update Tom to Lisa before T1 commits because the row is locked by T1, so T2 needs to wait for T1 to unlock the row by committing. Dirty write is not allowed.
Step 5 - T1: COMMIT; GO (T2 is still waiting...)
T1 commits.
Step 6 - T2: UPDATE person SET name = 'Lisa' WHERE id = 2; GO
Now T2 can update Tom to Lisa because T1 has already committed (unlocked this row).
Step 7 - T2: COMMIT; GO
T2 commits.

How Serializable works with insert in SQL Server 2005

G'day
I think I have a misunderstanding of SERIALIZABLE. I have two tables (data, transaction) which I insert information into in a serializable transaction, so that either both rows are in, or both are out, but never in limbo.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO dbo.data (ID, data) VALUES (#Id, data)
INSERT INTO dbo.transactions(ID, info) VALUES (#ID, #info)
COMMIT TRANSACTION
I have a reconcile query which checks the data table for entries that have no transaction, running at the READ COMMITTED isolation level.
INSERT INTO reconciles (ReconcileID, DataID)
SELECT Reconcile = #ReconcileID, ID FROM Data
WHERE NOT EXISTS (SELECT 1 FROM TRANSACTIONS WHERE data.id = transactions.id)
Note that the ID is actually a composite (2-column) key, so I can't use a NOT IN operator.
My understanding was that the second query would exclude any values written into data without their transaction, since the insert was happening at SERIALIZABLE and the read was occurring at READ COMMITTED.
What I have seen instead is that the reconcile query picks up data entries that are in the data table but not in the transactions table, which I thought wasn't possible given the isolation level.
All transaction isolation levels refer exclusively to reads. There is no 'serializable' insert, just as there is no 'read committed' insert. Writes are never affected by the isolation level. Wrapping two inserts in an isolation level, any level, is a no-op, since the inserts behave identically under all isolation levels.
The second query, the INSERT ... SELECT ..., on the other hand, is affected by the isolation level because it contains a read (the SELECT). The SELECT part behaves according to the current isolation level (in this case, READ COMMITTED).
Updated
Writes in a transaction are visible outside the transaction only after the commit. If you have a sequence begin transaction; insert into A; insert into B; commit, then a reader at least at READ COMMITTED isolation will not see the insert into A before the insert into B. If your reconcile query sees a partial transaction (i.e. sees the insert into A without a corresponding insert into B), then you have some possible explanations:
the reconcile query is running at the wrong isolation level and does dirty reads
the application did commit the two inserts separately
a code defect in the application that results in inserting only into A
The most likely explanation is the last one.
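One rough way to rule out the first two explanations (a sketch reusing the statements from the question): hold the writing transaction open and run the reconcile SELECT from a second session at READ COMMITTED. It should either block on the uncommitted row or not return it, but never report the partial state:
-- Session 1: stop after the first insert, before COMMIT
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO dbo.data (ID, data) VALUES (#Id, data)
-- pause here; do not commit yet

-- Session 2 (READ COMMITTED): must not see the uncommitted row
SELECT ID FROM Data
WHERE NOT EXISTS (SELECT 1 FROM TRANSACTIONS WHERE data.id = transactions.id)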

Does SQL Server wrap Select...Insert Queries into an implicit transaction?

When I perform a SELECT/INSERT query, does SQL Server automatically create an implicit transaction and thus treat it as one atomic operation?
Take the following query that inserts a value into a table if it isn't already there:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
Is there any possibility of 'newvalue' being inserted into the table by another user between the evaluation of the WHERE clause and the execution of the INSERT clause, if it isn't explicitly wrapped in a transaction?
You are confusing transactions with locking. A transaction reverts your data back to its original state if there is an error; if not, it moves the data to the new state. You will never have your data in an intermediate state when the operations are transacted. Locking, on the other hand, is what allows or prevents multiple users from accessing the data simultaneously. To answer your question: SELECT...INSERT is atomic, and as long as no granular locks are explicitly requested, no other user will be able to insert while the SELECT...INSERT is in progress.
John, the answer to this depends on your current isolation level. If you're set to READ UNCOMMITTED you could be looking for trouble, but with a higher isolation level you should not get additional records in the table between the SELECT and the INSERT. With READ COMMITTED (the default), REPEATABLE READ, or SERIALIZABLE, you should be covered.
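If you would rather not depend on the session's ambient isolation level, one common variant (a sketch, not the only way) is to put locking hints on the existence check itself, so the probed key range stays locked until the statement completes:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (SELECT * FROM Table1 WITH (UPDLOCK, HOLDLOCK)
                  WHERE FieldA = 'newvalue');
-- HOLDLOCK takes a key-range (serializable) lock on the probed range;
-- UPDLOCK makes two concurrent callers conflict instead of both passing
-- the NOT EXISTS test and inserting duplicates.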
Using SSMS 2016, it can be verified that the SELECT/INSERT statement requests a lock (and so most likely operates atomically):
Open a new query/connection for the following transaction and set a break-point on ROLLBACK TRANSACTION before starting the debugger:
BEGIN TRANSACTION
INSERT INTO Table1 (FieldA) VALUES ('newvalue');
ROLLBACK TRANSACTION --[break-point]
While at the above break-point, execute the following from a separate query window to show any locks (may take a few seconds to register any output):
SELECT * FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
AND resource_associated_entity_id = OBJECT_ID(N'dbo.Table1');
There should be a single lock associated with the BEGIN TRANSACTION/INSERT above (since by default it runs at the READ COMMITTED isolation level):
OBJECT ** ********** * IX LOCK GRANT 1
From another instance of SSMS, open up a new query and run the following (while still stopped at the above break-point):
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
This should hang, with the string "(Executing)..." displayed in the tab title of the query window (since @@LOCK_TIMEOUT is -1 by default).
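To confirm that default beforehand, you can check the session's lock timeout (a one-line sanity check):
SELECT @@LOCK_TIMEOUT; -- returns -1 (wait indefinitely) unless SET LOCK_TIMEOUT has been used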
Re-run the query from Step 2.
Another lock corresponding to the SELECT/INSERT should now show:
OBJECT ** ********** 0 IX LOCK GRANT 1
OBJECT ** ********** 0 IX LOCK GRANT 1
ref: How to check which locks are held on a table
