Isolation Level Difference: SNAPSHOT vs READ COMMITTED SNAPSHOT? - sql-server

I'm having trouble identifying the difference between SNAPSHOT and READ COMMITTED SNAPSHOT. READ COMMITTED is a pessimistic approach to concurrency, so how does it fit with optimistic concurrency, which is what the SNAPSHOT isolation level uses?
Thank you, I'd much appreciate some enlightenment.

Both names are disturbingly misleading.
In SQL Server terminology, SNAPSHOT and READ COMMITTED SNAPSHOT are both isolation levels and also name the way SQL Server implements the isolation of concurrent data access.
The main difference: SNAPSHOT prevents non-repeatable reads and phantom reads, while under READ COMMITTED SNAPSHOT you can experience non-repeatable reads and may also experience phantom reads.
In other words, SNAPSHOT is a higher, stronger isolation level than READ COMMITTED SNAPSHOT.
Regarding only the read phenomena prevented: SNAPSHOT is equivalent to SERIALIZABLE, and READ COMMITTED SNAPSHOT is equivalent to READ COMMITTED. However, the implementation is different. SNAPSHOT and READ COMMITTED SNAPSHOT use row versions, whereas SERIALIZABLE and READ COMMITTED use blocking semantics, forcing a concurrent process to wait on the changed (or read) resource until the first transaction finishes.
I think concurrency and transaction isolation are tough enough to understand on their own, and mixing them (as almost always happens) with the optimistic/pessimistic metaphor hinders understanding more than it helps.
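To make the difference concrete, a minimal two-session sketch, assuming a hypothetical dbo.Accounts table and a database with ALLOW_SNAPSHOT_ISOLATION ON:

-- Session 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT Balance FROM dbo.Accounts WHERE Id = 1;  -- say it returns 100
-- ... meanwhile Session 2 runs and commits:
--     UPDATE dbo.Accounts SET Balance = 200 WHERE Id = 1;
SELECT Balance FROM dbo.Accounts WHERE Id = 1;  -- still 100: SNAPSHOT repeats the read
COMMIT;

Under READ COMMITTED with READ_COMMITTED_SNAPSHOT ON, the second SELECT would return 200: exactly the non-repeatable read that level allows.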

Read committed
This is the default isolation level. It is implemented with shared read locks.
Read committed snapshot
The same isolation level as Read committed, but implemented with row versioning / MVCC. The advantage is that writers don't block readers. Some people feel this should be the default. Note that it is the same isolation level in the sense that it avoids the same anomalies.
Snapshot
A stronger isolation level that allows fewer anomalies.
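For reference, both behaviours are enabled at the database level; a sketch, assuming a database named MyDb:

-- Make plain READ COMMITTED use row versioning (RCSI):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
-- Allow sessions to request the stronger SNAPSHOT level explicitly:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- In a session that wants the stronger level:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;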

Related

SQL Server Isolation Levels

This question may have been asked before, but from what I've read I still don't clearly understand what's happening. My question deals specifically with using the Read Uncommitted isolation level in SQL Server via Entity Framework (i.e. setting the isolation level on the DB object in the context when starting a transaction).
The documentation states that using read uncommitted will allow other transactions/queries to read the tables involved in the initial transaction. This works fine up until a write-based operation is performed; at that point the table is locked. I've tested this by starting a transaction in a project and stopping at a breakpoint after the update, then trying to access the related table in SQL Server Management Studio. I am not able to do so. This makes me wonder what the purpose of read uncommitted is if you can't read uncommitted data. I have read people state that there are two types of locking behaviour, and that the second type, which is not affected by the isolation level, is what is locking the table. This still doesn't explain why read uncommitted is not working: again, it is supposed to allow reading of uncommitted operations, but seems not to. A sketch of the read I'd expect to work is below.
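(A minimal sketch of the SSMS-side read, with a hypothetical table name; note that the reading session has to request the level itself:)

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.MyTable;  -- should return uncommitted ("dirty") rows without blocking
-- or, equivalently, as a per-table hint:
SELECT * FROM dbo.MyTable WITH (NOLOCK);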
Peter

In SQL Server 2005 and 2008, how to tell I'm using pessimistic concurrency model or optimistic one?

I know SQL Server 2000 had a pessimistic concurrency model, and that an optimistic model was added in SQL Server 2005. So how do I tell whether I'm using the pessimistic concurrency model or the optimistic one in SQL Server 2005 and 2008?
Thanks.
SQL 2005 (and 2008) introduces SNAPSHOT isolation. This is the way to move to optimistic concurrency. Take a look at the Transaction Isolation and the New Snapshot Isolation Level article:
Isolation level             Dirty reads   Non-repeatable reads   Phantom reads   Concurrency control
READ UNCOMMITTED            Yes           Yes                    Yes             Pessimistic
READ COMMITTED (locking)    No            Yes                    Yes             Pessimistic
READ COMMITTED (snapshot)   No            Yes                    Yes             Optimistic
REPEATABLE READ             No            No                     Yes             Pessimistic
SNAPSHOT                    No            No                     No              Optimistic
SERIALIZABLE                No            No                     No              Pessimistic
After reading some articles and documents from Microsoft, I came to the following conclusions.
On SQL Server 2005+:
If you are using the read uncommitted, repeatable read or serializable isolation level, you are using the pessimistic concurrency model.
If you are using the snapshot isolation level, you are using the optimistic concurrency model.
If you are using the read committed isolation level and the database setting read_committed_snapshot is ON, you are using the optimistic concurrency model.
If you are using the read committed isolation level and the database setting read_committed_snapshot is OFF, you are using the pessimistic concurrency model.
However, I still need confirmation. Also, if there is some code to test the concurrency model, that would be great.
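(For the code part: the relevant settings are exposed in sys.databases, so a check along these lines, assuming a database named MyDb, shows which model applies:)

SELECT name,
       is_read_committed_snapshot_on,    -- 1 => READ COMMITTED uses row versioning (optimistic)
       snapshot_isolation_state_desc     -- whether SNAPSHOT can be requested at all
FROM sys.databases
WHERE name = 'MyDb';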
Basically:
Pessimistic: you lock the record exclusively for yourself until you have finished with it. That is the read committed transaction isolation level (not uncommitted, as you said).
Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely, and it permits transactions to execute without locking any resources. The resources are checked only when transactions are trying to change data. This determines whether any conflict has occurred (for example, by checking a version number). If a conflict occurs, the application must read the data and try the change again. Optimistic concurrency control is not provided with the product, but you can build it into your application manually by tracking database access. (Source)
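Since the quoted text says optimistic control has to be built into the application, here is a minimal sketch of that pattern, assuming a hypothetical dbo.Items table that carries a rowversion column named RowVer:

DECLARE @Id int = 1, @NewName nvarchar(100) = N'New name';
DECLARE @Name nvarchar(100), @RowVer binary(8);
-- Read the row and remember its version:
SELECT @Name = Name, @RowVer = RowVer FROM dbo.Items WHERE Id = @Id;
-- Later, write back only if nobody has changed the row in the meantime:
UPDATE dbo.Items
SET Name = @NewName
WHERE Id = @Id AND RowVer = @RowVer;
IF @@ROWCOUNT = 0
    RAISERROR('Conflict: the row was changed by another transaction; re-read and retry.', 16, 1);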

Database JDBC using multicore vs isolation level overhead

Hello,
I want to get data into a database on a multicore system with an active WAL, using JDBC. I was thinking about spawning multiple threads in my application to insert data in parallel.
If the application has multiple threads, I will have to increase the isolation level to Repeatable Read, which on MVCC databases should be mapped to Snapshot isolation (see the sketch after this question).
If I were using one thread, I wouldn't need to worry about isolation levels. As far as I know, most snapshot-isolation databases analyze the write sets of all transactions that could conflict and then roll back all but one of the genuinely conflicting transactions. Specifically, I'm talking about Oracle, InnoDB and PostgreSQL.
1.) Is this analysis of the write sets expensive?
2.) Is it a good idea to multithread the inserts for higher total throughput? Real conflicts are nearly impossible, because the application layer feeds the threads conflict-free work, but the database should act as a safety net.
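(For concreteness, a sketch of the per-transaction setting being discussed, in standard SQL; through JDBC the equivalent call is Connection.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ):)

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- On MVCC databases such as PostgreSQL this is implemented with
-- snapshot-style versioning rather than lock-based rereads.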
Oracle does not support Repeatable Read. It supports only Read Committed and Serializable. I might be mistaken, but setting an isolation level of Repeatable Read for Oracle might result in a transaction with an isolation level of Serializable. In short, you are left at the mercy of the database's support for the isolation levels you desire.
I cannot speak for InnoDB and PostgreSQL, but the same would apply if they do not support the required isolation levels: the database could automatically upgrade the isolation level to a higher one to meet the desired isolation characteristics. You ought to rethink this approach if your application's desired isolation level has to be Repeatable Read.
The problem, as you've rightly inferred, is that optimistic locking can result in transaction rollbacks if a conflict is detected. Oracle reports this with the ORA-08177 SQL error. Since the error is raised when two threads access the same data range, it can be avoided if the threads work against data sets in different ranges. You will have to ensure that this is the case when dividing work across the threads.
I think the limiting factor here will be disk IO, not the overhead of moving to Repeatable Read.
Even a single thread may be able to max out the disks on the DB server especially with the amount of DB logging required on insert / update. Are you sure that's not already the case?
Also, in any multi-user system, you probably want to be running with Repeatable Read isolation anyway (Postgres only supports this and serializable). So, I don't think of this as adding any "overhead" above what I would normally see.

Implications of using Serializable and Snapshot isolation concurrently

Are there any issues using SNAPSHOT isolation to read data consistently for viewing without locking, blocking or dirty/phantom reads, while a separate process is processing continuous incoming data in serializable transactions?
We need readers (guaranteed read-only: web data sync, real-time monitoring views, etc.) to be able to read consistent data without being blocked, and without blocking the updates. We were using SNAPSHOT for everything, but had too many consistency failures, so we switched the updating process to SERIALIZABLE.
I've read about, but am not totally clear on, the impact of using different isolation levels concurrently. I've seen the lock compatibility matrix and read various other material. It seems OK, but I'd really appreciate some wise guidance from people with practical experience about any major pitfalls.
Are there any issues using SNAPSHOT isolation for the readers while SERIALIZABLE transactions are writing? Are there circumstances where it will block a SERIALIZABLE transaction? Is there a benefit to using SNAPSHOT vs READ COMMITTED (with READ_COMMITTED_SNAPSHOT ON)?
Thanks, any assistance greatly appreciated :-)
Reads performed under the SNAPSHOT isolation level read any modified data from the version store. As such, they are affected only by writes, and writes behave identically under all isolation levels. Therefore SNAPSHOT reads behave the same way no matter the isolation level of the concurrent transactions.
READ_COMMITTED_SNAPSHOT ON makes READ COMMITTED act like SNAPSHOT. In effect it is SNAPSHOT: READ_COMMITTED_SNAPSHOT was provided as a quick way to port applications to SNAPSHOT without code changes. So everything said in the first paragraph applies.
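A sketch of the reader/writer split being described, assuming a hypothetical dbo.Feed table and a database with ALLOW_SNAPSHOT_ISOLATION ON:

-- Writer process:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
INSERT dbo.Feed (Payload) VALUES (N'incoming data');
COMMIT;

-- Reader process (reads row versions, so it neither blocks on nor blocks the writer):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM dbo.Feed;  -- consistent as of the start of this transaction
COMMIT;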

Isolation Level - Serializable. When should I use this?

I understand that an Isolation level of Serializable is the most restrictive of all isolation levels. I'm curious though what sort of applications would require this level of isolation, or when I should consider using it?
Ask yourself the following question: Would it be bad if someone were to INSERT a new row into your data while your transaction is running? Would this interfere with your results in an unacceptable way? If so, use the SERIALIZABLE level.
From MSDN regarding SET TRANSACTION ISOLATION LEVEL:
SERIALIZABLE
Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
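To illustrate the HOLDLOCK remark, a sketch against a hypothetical dbo.Orders table; the two forms take equivalent key-range locks:

-- Form 1: set the level for the session
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
COMMIT;

-- Form 2: per-table hint, under the default READ COMMITTED level
BEGIN TRAN;
SELECT * FROM dbo.Orders WITH (HOLDLOCK) WHERE CustomerId = 42;
COMMIT;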
So your transaction maintains all locks throughout its lifetime, even those normally discarded after use. This makes it appear that all transactions are running one at a time, hence the name SERIALIZABLE. A note from Wikipedia regarding isolation levels:
SERIALIZABLE
This isolation level specifies that all transactions occur in a completely isolated fashion; i.e., as if all transactions in the system had executed serially, one after the other. The DBMS may execute two or more transactions at the same time only if the illusion of serial execution can be maintained.
The SERIALIZABLE isolation level is the highest isolation level based on pessimistic concurrency control where transactions are completely isolated from one another.
The ANSI/ISO SQL-92 standard covers the following read phenomena, which occur when one transaction reads data that is being changed by a second transaction:
dirty reads
non-repeatable reads
phantom reads
and the Microsoft documentation extends this with the following two:
lost updates
missing and double reads caused by row updates
The following table shows the concurrency side effects allowed by the different isolation levels:

Isolation level    Dirty read   Non-repeatable read   Phantom read
Read uncommitted   Yes          Yes                   Yes
Read committed     No           Yes                   Yes
Repeatable read    No           No                    Yes
Snapshot           No           No                    No
Serializable       No           No                    No
So the question is which read phenomena your business requirements allow, and then whether your hardware environment can handle the stricter concurrency control.
Note something very interesting about the SERIALIZABLE isolation level: it is the default isolation level specified by the SQL standard. In SQL Server, of course, the default is READ COMMITTED.
Also, the official Transaction Locking and Row Versioning Guide is a great place where many of these aspects are covered and explained.
Try accounting. Transactions in accounts are inherently serializable if you want to have proper account values AND adhere to things like credit limits.
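A sketch of that kind of accounting check, using a hypothetical dbo.Ledger table; SERIALIZABLE range-locks the account's rows so no concurrent insert can slip in between the credit-limit check and the new entry:

DECLARE @AccountId int = 1, @NewAmount money = 50, @CreditLimit money = 1000;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
IF (SELECT COALESCE(SUM(Amount), 0) FROM dbo.Ledger WHERE AccountId = @AccountId)
   + @NewAmount <= @CreditLimit
    INSERT dbo.Ledger (AccountId, Amount) VALUES (@AccountId, @NewAmount);
COMMIT;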
It behaves in such a way that when you try to update a row, it simply blocks the update until the transaction is complete.
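For example, a two-session sketch with a hypothetical table dbo.T:

-- Session 1:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN;
SELECT * FROM dbo.T WHERE Id BETWEEN 1 AND 10;  -- takes key-range locks

-- Session 2 (blocks until session 1 commits or rolls back):
UPDATE dbo.T SET Val = Val + 1 WHERE Id = 5;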
