Understanding SQL Server READ COMMITTED & READ UNCOMMITTED - sql-server

I am using SQL Server Express 2008 with the AdventureWorksLT2008 DB to understand the difference between READ COMMITTED and READ UNCOMMITTED.
According to Wikipedia:
http://en.wikipedia.org/wiki/Isolation_%28database_systems%29
READ COMMITTED
Data records retrieved by a query are
not prevented from modification by
some other transactions.
Assume there is a table named SalesLT.Address with a column AddressLine2 in which every row has a blank value.
Then I run this query:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
BEGIN TRANSACTION
update SalesLT.Address set AddressLine2 = 'new value'
BEGIN TRANSACTION
select AddressLine2 from SalesLT.Address
--Break Here
/*
COMMIT TRANSACTION
COMMIT TRANSACTION
*/
So you can see the first transaction hasn't committed yet, and the second one starts to query the data.
The result shows 'new value' in every row.
So why can the second transaction retrieve the uncommitted data even though the first transaction still has not committed?

When data is read inside a transaction, any changes that have been made by that transaction are visible, within that transaction only (READ UNCOMMITTED additionally exposes other transactions' uncommitted changes). So above, even though you've started a second, nested, transaction, you're still in the scope of the first transaction and can thus read the changed data and get 'the changed values'.
Another transaction, on a separate SPID for example, would block if it were using READ COMMITTED and attempted to read this data.
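To see the cross-connection behavior (as opposed to the nested-transaction case above), here is a sketch using two separate query windows against the question's table:

```sql
-- Window 1: update but do not commit
BEGIN TRANSACTION
UPDATE SalesLT.Address SET AddressLine2 = 'new value'

-- Window 2, at READ COMMITTED: this SELECT blocks until window 1 ends
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SELECT AddressLine2 FROM SalesLT.Address

-- Window 2, at READ UNCOMMITTED: returns 'new value' immediately (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT AddressLine2 FROM SalesLT.Address

-- Window 1: undo the change
ROLLBACK TRANSACTION
```

After the rollback in window 1, the dirty-read session has seen values that never existed in any committed state of the database.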

Related

Lost update in snapshot vs all the rest isolation levels

Let's suppose we create a new table and enable snapshot isolation for our database:
alter database database_name set allow_snapshot_isolation on
create table marbles (id int primary key, color char(5))
insert marbles values(1, 'Black')
insert marbles values(2, 'White')
Next, in session 1, begin a snapshot transaction:
set transaction isolation level snapshot
begin tran
update marbles set color = 'Blue' where id = 2
Now, before committing the changes, run the following in session 2:
set transaction isolation level snapshot
begin tran
update marbles set color = 'Yellow' where id = 2
Then, when we commit session 1, session 2 will fail with an error saying the transaction was aborted due to an update conflict. I understand that this is preventing a lost update.
If we follow these steps one by one under any other isolation level (serializable, repeatable read, read committed, or read uncommitted), session 2 will execute and make a new update to our table.
Could someone please explain to me why this is happening?
For me this is some kind of lost update, but it seems like only snapshot isolation prevents it.
Could someone please explain to me why this is happening?
Because under all the other isolation levels, the point in time at which the second session first sees the row is after the first transaction commits. Locking is a kind of time travel: a session enters a lock wait and is transported forward in time to when the resource eventually becomes available.
For me this is some kind of lost update
No, it's not. Both updates were properly completed, and the final state of the row would have been the same if the transactions had been 10 minutes apart.
In a lost update scenario, each session reads the row before attempting to update it, and the result of the first transaction is needed to properly complete the second transaction, e.g. if each is incrementing a column by 1.
And under locking READ COMMITTED, REPEATABLE READ, and SERIALIZABLE, the SELECT would be blocked, and no lost update would occur. Under READ_COMMITTED_SNAPSHOT, the SELECT should carry an UPDLOCK hint, and then it would block too.
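For contrast, a genuine lost-update sequence can be sketched with the marbles table (the 'Mixed' value is made up for illustration). Run the following in two sessions at locking READ COMMITTED, pausing both between the SELECT and the UPDATE:

```sql
declare @c char(5)
begin tran
-- Both sessions read here first: each sees 'White'
select @c = color from marbles where id = 2
-- Each session computes its new value from @c, then writes:
update marbles set color = 'Mixed' where id = 2
commit
-- The second committer overwrites the first; the first update is lost.
```

Adding with (updlock) to the SELECT makes the second session block at the read instead, which closes the read-then-write window.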

What is the TRANSACTION ISOLATION LEVEL that is equivalent with (NOLOCK) for all select statements?

I am building a stored procedure for reporting and I am using (NOLOCK) for all select statements.
There is no locking requirement for the scenario I am working on.
I am thinking of changing the TRANSACTION ISOLATION LEVEL at the top of the stored procedure and avoiding adding (NOLOCK) to all of the select statements. Is there a TRANSACTION ISOLATION LEVEL that is equivalent to (NOLOCK) when I set it at the top of the stored procedure?
TRANSACTION ISOLATION LEVEL : READ UNCOMMITTED
Specifies that statements can read rows that have been modified by
other transactions but not yet committed. Transactions running at the
READ UNCOMMITTED level do not issue shared locks to prevent other
transactions from modifying data read by the current transaction. READ
UNCOMMITTED transactions are also not blocked by exclusive locks that
would prevent the current transaction from reading rows that have been
modified but not committed by other transactions. When this option is
set, it is possible to read uncommitted modifications, which are
called dirty reads. Values in the data can be changed and rows can
appear or disappear in the data set before the end of the transaction.
This option has the same effect as setting NOLOCK on all tables in all
SELECT statements in a transaction. This is the least restrictive of
the isolation levels.
Note: this is not a recommended isolation level, as it allows dirty reads.
If you want to set the ISOLATION LEVEL for this SP alone, then change the SP:
CREATE PROCEDURE PRC_SP AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
--your statements
END

sql server update blocked by another transaction, concurrency in update

Two SPs are executed one after another, and the second one is blocked by the first. Both try to update the same table. The two SPs are as follows:
CREATE PROCEDURE [dbo].[SP1]
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION ImpSchd
    UPDATE Table t1 ... -- updating a set of [n1, n2, ..., n100] records
    COMMIT TRANSACTION ImpSchd
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
END
2.
CREATE PROCEDURE [dbo].[SP2]
AS
BEGIN
    UPDATE Table t1 ... -- updating a set of [n101, n102, ..., n200] records
END
My question is: when SP1 is running under snapshot isolation, why is it blocking SP2 (when both are updating different sets of records)?
If I run the first SP for two different sets of records simultaneously, it works perfectly.
How can I overcome this situation ?
If snapshot isolation has to be set in every SP that updates the same table, that would be a larger change.
If two SPs have to update the same records in a table, how should I handle that (each SP will update different columns)?
Isolation levels only affect how reads are blocked; DML statements are not shielded from blocking by any isolation level. In this case the update takes an IX lock on the table and page, followed by an X lock on each row being updated. Since you are updating in bulk, the table itself might have been locked due to lock escalation. Hope this helps.
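To confirm whether lock escalation is the culprit, you can inspect the locks held by the blocking session; if it holds an OBJECT-level X lock, escalation happened. A diagnostic sketch (dbo.t1 stands in for the real table name from the question):

```sql
-- Run while SP1 is mid-transaction; an OBJECT resource with an X request_mode
-- means the whole table is locked, which would block SP2 regardless of rows.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

-- One possible mitigation (SQL Server 2008+), at the cost of more lock memory:
ALTER TABLE dbo.t1 SET (LOCK_ESCALATION = DISABLE);
```

Disabling escalation keeps the locks at row/page granularity, so two updates touching disjoint row sets are less likely to collide; batching the bulk update into smaller transactions achieves a similar effect without the table option.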

SQL Server transaction isolation problem - global variable

SQL Server 2008 R2 (Data Center edition - I think)
I have a very specific requirement for the database.
I need to insert a row marked with a timestamp [ChangeTimeStamp]. The timestamp value is passed as a parameter and has to be unique.
Two processes can insert values at the same time, and I run into a duplicate key insertion once in a while. To avoid this, I am trying:
declare @maxChangeStamp bigint
set transaction isolation level read committed
begin transaction
select @maxChangeStamp = MAX(MaxChangeTimeStamp) from TSMChangeTimeStamp
if (@maxChangeStamp > @changeTimeStamp)
    set @maxChangeStamp = @maxChangeStamp + 1
else
    set @maxChangeStamp = @changeTimeStamp
update TSMChangeTimeStamp
set MaxChangeTimeStamp = @maxChangeStamp
commit
set @changeTimeStamp = @maxChangeStamp
-- insert statement
REPEATABLE READ - causes deadlock
READ COMMITTED - causes duplicate key inserts
@changeTimeStamp is my parameter.
TSMChangeTimeStamp holds only one value.
If anyone has a good idea how to solve this I will appreciate any help.
Don't read-increment-update; that will fail no matter what you try. Always update and use the OUTPUT clause to return the new value:
update TSMChangeTimeStamp
set MaxChangeTimeStamp += 1
output inserted.MaxChangeTimeStamp;
You can capture the output value if you need it in T-SQL. But although this will do what you're asking, you most definitely do not want to do this, especially on a system that is high-end enough to run DC edition. Generating the next timestamp will place an X lock on the timestamp resource and thus prevent every other transaction from generating a new timestamp until the current transaction commits. You achieve complete serialization of work, with only one transaction active at a time. Performance will tank to the bottom of the abyss.
You must revisit your requirement and come up with a more appropriate one. As it stands, your requirement could also be expressed as 'My system is too fast, how can I make it really, really slow?'.
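Capturing the OUTPUT value into a variable, per the pattern above, might look like this (the table variable name is made up):

```sql
declare @out table (ts bigint);

update TSMChangeTimeStamp
set MaxChangeTimeStamp += 1
output inserted.MaxChangeTimeStamp into @out;

declare @newStamp bigint;
select @newStamp = ts from @out;
```

OUTPUT cannot assign directly to a scalar variable, hence the table-variable round trip; the whole thing remains a single atomic statement, so no explicit transaction is needed around the increment itself.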
Inside the transaction, the SELECT statement acquires a shared lock, and under REPEATABLE READ or SERIALIZABLE that lock is held until the end of the transaction (snapshot isolation takes no shared locks at all). If two processes both start the SELECT at the same time, they will both acquire a shared lock.
Later, the UPDATE statement attempts to acquire an exclusive lock (or update lock). Neither one can acquire it, because the other process still holds its shared lock, which is the deadlock you saw under REPEATABLE READ.
Try using the WITH (UPDLOCK) table hint on the SELECT statement. From MSDN:
UPDLOCK
Specifies that update locks are to be taken and held until the
transaction completes. UPDLOCK takes update locks for read operations
only at the row-level or page-level. If UPDLOCK is combined with
TABLOCK, or a table-level lock is taken for some other reason, an
exclusive (X) lock will be taken instead.
When UPDLOCK is specified, the READCOMMITTED and READCOMMITTEDLOCK
isolation level hints are ignored. For example, if the isolation level
of the session is set to SERIALIZABLE and a query specifies (UPDLOCK,
READCOMMITTED), the READCOMMITTED hint is ignored and the transaction
is run using the SERIALIZABLE isolation level.
For example:
begin transaction
select @maxChangeStamp = MAX(MaxChangeTimeStamp) from TSMChangeTimeStamp with (updlock)
Note that update locks may be promoted to a table lock if there is no index for your table (Microsoft KB article 179362).
Explicitly requesting an XLOCK may also work.
Also note that your UPDATE statement does not have a WHERE clause, which causes it to lock and update every row in the table (if applicable in your case).
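Putting the suggestions together, a sketch of the question's block with the UPDLOCK hint applied (HOLDLOCK added defensively; table and variable names taken from the question):

```sql
declare @maxChangeStamp bigint
begin transaction
    -- updlock serializes concurrent readers of this row: the second session
    -- blocks here instead of deadlocking later at the UPDATE
    select @maxChangeStamp = MAX(MaxChangeTimeStamp)
    from TSMChangeTimeStamp with (updlock, holdlock)

    if (@maxChangeStamp > @changeTimeStamp)
        set @maxChangeStamp = @maxChangeStamp + 1
    else
        set @maxChangeStamp = @changeTimeStamp

    update TSMChangeTimeStamp
    set MaxChangeTimeStamp = @maxChangeStamp
commit
```

This still serializes all timestamp generation (as the other answer warns), but it does so without deadlocks or duplicate keys.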

How Serializable works with insert in SQL Server 2005

G'day
I think I have a misunderstanding of SERIALIZABLE. I have two tables (data, transactions) which I insert information into in a serializable transaction (either both rows are in, or both are out, but never in limbo).
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
INSERT INTO dbo.data (ID, data) VALUES (@Id, @data)
INSERT INTO dbo.transactions (ID, info) VALUES (@ID, @info)
COMMIT TRANSACTION
I have a reconcile query which checks the data table for entries where there is no transaction at read committed isolation level.
INSERT INTO reconciles (ReconcileID, DataID)
SELECT Reconcile = @ReconcileID, ID FROM Data
WHERE NOT EXISTS (SELECT 1 FROM TRANSACTIONS WHERE data.id = transactions.id)
Note that the ID is actually a composite (2-column) key, so I can't use a NOT IN operator.
My understanding was that the second query would exclude any values written into data without their transaction, since the insert was happening at serializable and the read was occurring at read committed.
What I have seen is that the reconcile query picks up data entries that are in the data table but not in the transactions table, which I thought wasn't possible given the isolation level.
All transaction isolation levels refer exclusively to reads. There is no 'serializable' insert, just as there is no 'read committed' insert. Writes are never affected by the isolation level. Wrapping two inserts in a transaction at any isolation level is a no-op in that respect, since the inserts behave identically under all isolation levels.
The second query, the INSERT ... SELECT ... on the other hand, by containing a read (the SELECT), is affected by isolation level. The SELECT part will behave according to the current isolation level (in this case, read committed).
Updated
Writes in a transaction are visible outside the transaction only after the commit. If you have a sequence begin transaction; insert into A; insert into B; commit, then a reader at read committed isolation or higher will not see the insert into A before the insert into B. If your reconcile query sees a partial transaction (i.e. sees the insert into A without a corresponding insert into B), then there are some possible explanations:
the reconcile query is running at the wrong isolation level and does dirty reads
the application did commit the two inserts separately
a code defect in the application that results in inserting only into A
The most likely explanation is the last one.
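To rule out the first explanation, you can check the isolation level the reconcile query actually runs under; DBCC USEROPTIONS and sys.dm_exec_sessions are standard ways to do that:

```sql
-- Shows the session's effective SET options, including the isolation level
DBCC USEROPTIONS;

-- Or via DMV (1 = read uncommitted, 2 = read committed, ...):
SELECT transaction_isolation_level
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```

If this reports read uncommitted (or the query uses NOLOCK hints), dirty reads would fully explain the "partial transaction" observations.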