I am using System.Data.SqlClient (4.6.1) in a .NET Core 2.2 project. SqlClient maintains a pool of connections, and it has been reported that it leaks the transaction isolation level when a pooled connection is reused for the next SQL command.
For example, this is explained in this stackoverflow answer: https://stackoverflow.com/a/25606151/1250853
I tried looking for the right way to prevent this leak, but couldn't find a satisfactory solution.
I am thinking of following this pattern:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET XACT_ABORT ON -- Turns on rollback if T-SQL statement raises a run-time error.
BEGIN TRANSACTION
SELECT * FROM MyTable;
-- removed complex statements for brevity. there are selects followed by insert.
COMMIT TRANSACTION
-- Set settings back to known defaults.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET XACT_ABORT OFF
Is this a good approach?
I would normally use two connection strings that differ (e.g. with tweaked Application Name values): one for normal connections, the other for connections where you need SERIALIZABLE.
Since the connection strings differ, they go into separate pools. You may want to adjust other pool-related settings if you think this will cause issues (e.g. limit the serializable pool to a much lower maximum if it is rarely used, and to prevent twice the default maximum number of connections from being created).
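For illustration, a hypothetical pair of connection strings; only Application Name (plus a smaller Max Pool Size for the serializable pool) differs, and that difference is enough to give each string its own pool:

Server=myServer;Database=MyDb;Integrated Security=true;Application Name=MyApp
Server=myServer;Database=MyDb;Integrated Security=true;Application Name=MyApp.Serializable;Max Pool Size=10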
I recommend never changing the transaction isolation level. If a transaction needs different locking behavior, use appropriate lock hints on selected queries.
The transaction isolation levels are blunt instruments, and often have surprising consequences.
SERIALIZABLE is especially problematic, as few people are prepared to handle the deadlocks it uses to enforce its isolation guarantees.
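For example, if a handful of statements need serializable-style guarantees, table hints scope the locking to just those statements. A minimal sketch against the hypothetical MyTable from the question (the predicate is made up):

BEGIN TRANSACTION;
-- UPDLOCK takes update locks instead of shared locks; HOLDLOCK holds them,
-- with key-range locking, until the transaction ends, as SERIALIZABLE would.
SELECT *
FROM MyTable WITH (UPDLOCK, HOLDLOCK)
WHERE SomeColumn = 42;
-- ...inserts based on what was read...
COMMIT TRANSACTION;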
Also, if you only change the transaction isolation level inside a stored procedure, SQL Server automatically reverts the session's isolation level once the procedure completes.
Answering my own question based on @Zohar Peled's suggestion in the comments:
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET XACT_ABORT ON -- Turns on rollback if T-SQL statement raises a run-time error.
BEGIN TRANSACTION
SELECT * FROM MyTable;
-- removed complex statements for brevity. there are selects followed by multiple inserts.
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET XACT_ABORT OFF
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION; -- roll back the doomed transaction before resetting session options
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET XACT_ABORT OFF;
THROW;
END CATCH
EDIT:
If you are setting the isolation level and XACT_ABORT inside a stored procedure, they are scoped to the stored procedure only, so you don't need to catch errors and turn everything back off. https://learn.microsoft.com/en-us/sql/t-sql/statements/set-statements-transact-sql?view=sql-server-ver15
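A minimal sketch of that simplification (the procedure name and body are placeholders); because both SET statements are scoped to the procedure, no TRY/CATCH reset is needed:

CREATE PROCEDURE dbo.DoSerializableWork
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;
    -- ...selects followed by inserts...
    COMMIT TRANSACTION;
    -- Both SET options revert automatically when the procedure returns,
    -- so the change cannot leak to the next command on the pooled connection.
END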
Related
I have about 20 stored procedures that consume each other, forming a tree-like dependency chain.
The stored procedures however use in-memory tables for caching and can be called concurrently from many different clients.
To protect against concurrent update / delete attempts against the in-memory tables, I am using sp_getapplock and SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT ON;.
I use a hash of the stored procedure's parameters that is unique to each stored procedure, while multiple concurrent calls to the same stored procedure with the same parameters generate the same hash. This equality of the hash for concurrent calls to the same stored proc with the same parameters gives me a useful resource name to obtain our applock against.
Below is an example:
DECLARE @LOCK_STATUS int;
BEGIN TRANSACTION
EXEC @LOCK_STATUS = sp_getapplock @Resource = [SOME_HASH_OF_PARAMETERS_TO_THE_SP], @LockMode = 'Exclusive';
...some stored proc code...
IF FAILURE
BEGIN
    ROLLBACK;
    THROW [SOME_ERROR_NUMBER], [SOME_ERROR_MESSAGE], 1;
END
...some stored proc code...
COMMIT TRANSACTION
Despite wrapping everything in an applock which should block any concurrent updates or deletes, I still get error 41302:
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
Am I using sp_getapplock incorrectly? It seems like the approach I am suggesting should work.
The second you begin your transaction involving a memory-optimized table, you get your "snapshot" for optimistic concurrency resolution, based on the time the transaction started. Unfortunately, your lock is taken after the snapshot, so it's still entirely possible to have optimistic concurrency failures.
Consider the situation where two transactions that need the same lock begin at once. They both begin their transactions "simultaneously", before either obtains the lock or modifies any rows. Their snapshots look exactly the same because no data has been modified yet. Next, one transaction obtains the lock and proceeds to make its changes while the other is blocked. This transaction commits fine, because it was first, and so the snapshot it refers to still matches the data in memory. The other transaction now obtains the lock, but its snapshot is invalid (it just doesn't know this yet). It proceeds to execute, realizes at the end that its snapshot is invalid, and throws you an error. Truthfully, the transactions don't even need to start nearly simultaneously; the second transaction just needs to start before the first commits.
You need to either enforce the lock at the application level, or through use of sp_getapplock at the session level.
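A sketch of the session-level variant (the resource name is whatever parameter hash you already compute; the timeout value is arbitrary). Taking the lock with @LockOwner = 'Session' before BEGIN TRANSACTION means the snapshot is only taken once the lock is already held:

DECLARE @LOCK_STATUS int;
-- Owner 'Session' allows the lock to be taken outside a transaction.
EXEC @LOCK_STATUS = sp_getapplock
    @Resource = '[SOME_HASH_OF_PARAMETERS_TO_THE_SP]',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',
    @LockTimeout = 10000;
IF @LOCK_STATUS < 0
    THROW 51000, 'Could not acquire application lock.', 1;

BEGIN TRANSACTION;  -- the snapshot is taken after the lock is held
-- ...stored proc code...
COMMIT TRANSACTION;

EXEC sp_releaseapplock
    @Resource = '[SOME_HASH_OF_PARAMETERS_TO_THE_SP]',
    @LockOwner = 'Session';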
Hope this helped.
I have READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION turned ON for my database. I'm still receiving a deadlock error. I'm pretty sure I know what is happening...
First transaction gets a sequence number at the beginning of its transaction.
The second transaction gets a later sequence number at the beginning of its transaction, after the first transaction has already gotten its own (so the second sequence number is more recent than the first).
The second transaction reaches the update statement first. When it checks the row versioning, it sees the record version that precedes both transactions, since the first one hasn't reached its update yet. It finds that the row's sequence number is in a committed state and goes on its merry way.
The first transaction takes its turn and, like the second, finds the same committed sequence number, because it cannot see the second transaction's version, which is newer than itself. When it tries to commit, it finds that another transaction has already updated the records it is trying to commit, and it has to roll itself back.
Here is my question: Will this rollback appear as a deadlock in a trace?
In a comment attached to the original question you said: "I'm just wondering if an update conflict will appear as a deadlock or if it will appear as something different." I actually had exactly these types of concerns when I started looking into using snapshot isolation. Eventually I realized that there is significant difference between READ_COMMITTED_SNAPSHOT and isolation level SNAPSHOT.
The former uses row versioning for reads, but continues to use exclusive locking for writes. So, READ_COMMITTED_SNAPSHOT is actually something in between pure pessimistic and pure optimistic concurrency control. Because it uses locks for writing, update conflicts are not possible, but deadlocks are. At least in SQL Server, those deadlocks will be reported as deadlocks just as they are with 'normal' pessimistic locking.
The latter (isolation level SNAPSHOT) is pure optimistic concurrency control. Row versioning is used for both reads and writes. Deadlocks are not possible, but update conflicts are. The latter are reported as update conflicts and not as deadlocks.
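For example, a two-session sketch that forces an update conflict under SNAPSHOT (it uses the Test.TestTran table named in the error message below; the column names are hypothetical):

-- Session 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
UPDATE Test.TestTran SET Col1 = 1 WHERE Id = 1;
-- leave the transaction open

-- Session 2
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
UPDATE Test.TestTran SET Col1 = 2 WHERE Id = 1;
-- blocks on Session 1's row lock...

-- Session 1
COMMIT;
-- ...and Session 2 now fails with Msg 3960 instead of deadlocking.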
The snapshot transaction is rolled back, and it receives the following error message:
Msg 3960, Level 16, State 4, Line 1
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table 'Test.TestTran' directly or indirectly in database 'TestDatabase' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
To prevent deadlocks, enable both READ_COMMITTED_SNAPSHOT and ALLOW_SNAPSHOT_ISOLATION:
ALTER DATABASE [BD] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [BD] SET ALLOW_SNAPSHOT_ISOLATION ON;
This article explains the differences:
http://technet.microsoft.com/en-us/sqlserver/gg545007.aspx
How can I set the isolation level of all my SqlCommand ExecuteNonQuery calls to be read uncommitted? (connecting to a SQL Server 2008 enterprise instance)
I am simply transforming static data and inserting the results to my own tables on a regular basis, and would like to avoid writing more code than necessary.
No, you cannot.
You need to explicitly define the isolation level when you start a transaction.
For more info on adjusting the isolation level, see the MSDN documentation on the topic.
No, you cannot.
And there is no way of changing the default transaction isolation level server-wide:
http://blogs.msdn.com/b/ialonso/archive/2012/11/26/how-to-set-the-default-transaction-isolation-level-server-wide.aspx
You can set the isolation level on the SqlTransaction object (created via SqlConnection.BeginTransaction), which you then assign to the SqlCommand's Transaction property.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.isolationlevel.aspx
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRAN
/* do stuff */
COMMIT
Note that ADODB allowed one to set the default isolation level for the connection, whereas ADO.NET will use the isolation level of the last committed transaction as the default isolation level (see the note in https://msdn.microsoft.com/en-us/library/5ha4240h(v=vs.110).aspx). See https://technet.microsoft.com/en-us/library/ms189542%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396 for details on setting the isolation level for various Microsoft database technologies.
Could someone please help me understand when to use the SNAPSHOT isolation level over READ COMMITTED SNAPSHOT in SQL Server?
I understand that in most cases READ COMMITTED SNAPSHOT works, but I am not sure when to go for SNAPSHOT isolation.
Thanks
READ COMMITTED SNAPSHOT does optimistic reads and pessimistic writes. In contrast, SNAPSHOT does optimistic reads and optimistic writes.
Microsoft recommends READ COMMITTED SNAPSHOT for most apps that need row versioning.
Read this excellent Microsoft article: Choosing Row Versioning-based Isolation Levels. It explains the benefits and costs of both isolation levels.
And here's a more thorough one:
http://msdn.microsoft.com/en-us/library/ms345124(SQL.90).aspx
(Image: isolation levels comparison table)
See the example below:
Read Committed Snapshot
Change the database property as below
ALTER DATABASE SQLAuthority
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE
GO
Session 1
USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 4
WHERE i = 1
Session 2
USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 1
Result – The query in Session 2 shows the old value (1, ONE), because the transaction in Session 1 is NOT yet committed. This is how we avoid blocking while still reading committed data.
Session 1
COMMIT
Session 2
USE SQLAuthority
GO
SELECT *
FROM DemoTable
WHERE i = 1
Result – The query in Session 2 shows no rows, because the row was updated in Session 1. So, again, we are seeing only committed data.
Snapshot Isolation Level
This isolation level has been available since SQL Server 2005. It requires a change in the application, which has to explicitly request the new isolation level.
Change the database setting using the statement below. We need to make sure that there is no open transaction in the database.
ALTER DATABASE SQLAuthority SET ALLOW_SNAPSHOT_ISOLATION ON
Now we also need to change the isolation level of the connection, as Session 2 does below.
Session 1
USE SQLAuthority
GO
BEGIN TRAN
UPDATE DemoTable
SET i = 10
WHERE i = 2
Session 2
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
GO
USE SQLAuthority
GO
BEGIN TRAN
SELECT *
FROM DemoTable
WHERE i = 2
Result – Even though we have changed the value to 10, we will still see the old record in Session 2 (2, TWO).
Now, let’s commit transaction in session 1
Session 1
COMMIT
Let’s come back to session 2 and run select again.
Session 2
SELECT *
FROM DemoTable
WHERE i = 2
We will still see the old record, because Session 2 started its transaction with snapshot isolation. Until we complete that transaction, we will not see the latest record.
Session 2
COMMIT
SELECT *
FROM DemoTable
WHERE i = 2
Now we should not see the row, as it has already been updated.
See: SQL Authority, Safari Books Online
No comparison of Snapshot and Snapshot Read Committed is complete without a discussion of the dreaded "snapshot update conflict" exception that can happen in Snapshot, but not Snapshot Read Committed.
In a nutshell, Snapshot isolation retrieves a snapshot of committed data at the start of a transaction, and then uses optimistic locking for both reads and writes. If, when attempting to commit a transaction, it turns out that something else changed some of that same data, the database will rollback the entire transaction and raise an error causing a snapshot update conflict exception in the calling code. This is because the version of data affected by the transaction is not the same at the end of the transaction as it was at the start.
Snapshot Read Committed does not suffer from this problem because it uses locking on writes (pessimistic writes) and obtains snapshot version information of all committed data at the start of each statement.
The possibility of snapshot update conflicts happening in Snapshot and NOT Snapshot Read Committed is an extremely significant difference between the two.
Still relevant. Starting from Bill's comments, I read more and made notes that might be useful to someone else.
By default, single statements (including SELECT) work on "committed" data (READ COMMITTED). The question is: do they wait for the data to be "idle", and do they stop others from working while reading?
Setting via right click DB "Properties -> Options -> Miscellaneous":
Concurrency/Blocking: Is Read Committed Snapshot On [defaults off, should be on]:
Use SNAPSHOT for select (read), do not wait for others, nor block them.
Affects operation without code changes
ALTER DATABASE <dbName> SET READ_COMMITTED_SNAPSHOT [ON|OFF]
SELECT name, is_read_committed_snapshot_on FROM sys.databases
Consistency: Allow Snapshot Isolation [defaults off, debatable – OK off]:
Allow client to request SNAPSHOT across SQL statements (transactions).
Code must request "transaction" snapshots (like SET TRANSACTION ...)
ALTER DATABASE <dbName> SET ALLOW_SNAPSHOT_ISOLATION [ON|OFF]
SELECT name, snapshot_isolation_state FROM sys.databases
To the question: it is not a choice between Read Committed Snapshot and Allow Snapshot Isolation. They are two cases of snapshot, and either can be on or off independently, with Allow Snapshot Isolation being a bit more of an advanced topic. Allow Snapshot Isolation lets code go a step further in controlling snapshot land.
The issue seems clear if you think about one row: by default the system keeps no copy, so a reader has to wait if anyone else is writing, and a writer also has to wait if anyone else is reading; the row must stay locked all the time. Enabling "Is Read Committed Snapshot On" activates "snapshot copies" in the DB to avoid these locks.
Rambling on...
In my opinion, "Is Read Committed Snapshot On" should be TRUE for any normal MS SQL Server database, and shipping with it FALSE by default is a premature optimization.
However, I'm told the single-row locking picture gets worse, not only because you may be touching multiple rows across tables, but because SQL Server also takes page-level locks (locking rows that happen to sit near each other in storage), and there is a threshold at which many locks escalate into a table lock, presumably a performance optimization taken at the risk of blocking issues in busy databases.
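If you want to observe or control that escalation, here is a sketch using the real sys.dm_tran_locks DMV and the LOCK_ESCALATION table option (the table name is hypothetical):

-- See the granularity of locks currently held in this database
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
GROUP BY resource_type, request_mode;

-- Stop a hot table from escalating to a table lock (SQL Server 2008+)
ALTER TABLE dbo.BusyTable SET (LOCK_ESCALATION = DISABLE);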
Let me describe two points that have not been mentioned.
First, let's make it clear how to use both, because it's not intuitive.
SNAPSHOT and READ_COMMITTED_SNAPSHOT are two different isolation levels.
SNAPSHOT is an isolation level you can use explicitly in your transaction as usual; note it must be set before the transaction starts:
set transaction isolation level snapshot;
begin transaction
-- ...
commit
READ_COMMITTED_SNAPSHOT can't be used like this. It is both a database-level option and an implicit/automatic isolation level. To use it, you need to enable it for the whole database:
alter database ... set read_committed_snapshot on;
What the database setting above does is this: every time you run a transaction like the following:
begin transaction
set transaction isolation level read committed;
-- ...
commit
With this option ON, all READ_COMMITTED transactions will run under the READ_COMMITTED_SNAPSHOT isolation level instead. This happens automatically, affecting all READ_COMMITTED transactions issued against a database with this setting ON. It is not possible to run a transaction under the READ_COMMITTED isolation level, because all transactions with this level are automatically converted to READ_COMMITTED_SNAPSHOT.
Secondly, you shouldn't blindly enable the READ_COMMITTED_SNAPSHOT option.
To illustrate the kind of problems it can create, imagine you have a simple events table like this:
create table Events (
id int not null identity(1, 1) primary key,
name nvarchar(450) not null
-- ...
)
And you poll it periodically with query like this:
begin transaction
set transaction isolation level read committed; -- automatically set to read committed snapshot when this setting is ON on database level
select top 100 * from Events where id > ${lastId} order by id asc;
commit
The query above doesn't need to be enclosed in a transaction with an explicit isolation level. READ_COMMITTED is the default isolation level, and if you invoke a query without wrapping it in a transaction block, it will implicitly run in a READ_COMMITTED transaction.
You'll find that under the READ_COMMITTED_SNAPSHOT isolation level, the auto-increment identity values may show gaps that are only filled in later.
You can easily simulate it with an insert like this:
begin transaction
insert into Events (name) values ('test 1');
waitfor delay '00:00:10'
commit
...followed by normal insert:
insert into Events (name) values ('test 2');
Your polling query, invoked within those 10 seconds, will return a single row with id 2.
The following poll, after updating lastId, will return nothing. The row with id 1 will only appear after the 10-second delay, once the first transaction commits.
The event with id 1 is effectively skipped.
This will not happen if you use READ_COMMITTED without READ_COMMITTED_SNAPSHOT auto promotion option.
It's worth understanding this scenario. It's not related to the fact that an IDENTITY column doesn't guarantee uniqueness, nor to the fact that an IDENTITY column doesn't guarantee strict monotonicity. Even when neither uniqueness nor strict monotonicity is violated, you still end up with gaps: the possibility of seeing commits with higher ids before seeing commits with lower ids.
Under READ_COMMITTED this problem doesn't exist.
Under READ_COMMITTED you can also see gaps, e.g. from transactions that rolled back, but those gaps are permanent: you are not skipping events, because they will never reappear. That is, you won't see lower ids appearing later, after you've already seen higher ids.
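If the database option is ON but one query, like the poller above, needs the old blocking semantics back, the READCOMMITTEDLOCK table hint restores locking reads for just that table reference; a sketch:

select top 100 *
from Events with (readcommittedlock)
where id > ${lastId}
order by id asc;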
Please understand the above issue and its implications before turning READ_COMMITTED_SNAPSHOT on.
Control of this option lies in the gray area between developer and DB admin responsibility. If you're an admin, you should not blindly enable it: developers may have relied on READ_COMMITTED isolation semantics when building the application, and turning on READ_COMMITTED_SNAPSHOT may violate those assumptions in a very implicit, hard-to-find way.
I have a bunch of utility procedures that just check for some conditions in the database and return a flag result. These procedures run at the READ UNCOMMITTED isolation level, equivalent to using WITH (NOLOCK) hints.
I also have more complex procedures that are run with SERIALIZABLE isolation level. They also happen to have these same kind of checks in them.
So I decided to call these check procedures from within those complex procedures instead of replicating the check code.
Basically it looks like this:
CREATE PROCEDURE [dbo].[CheckSomething]
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION
-- Do checks
COMMIT TRANSACTION
and
CREATE PROCEDURE [dbo].[DoSomethingImportant]
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
EXECUTE [dbo].[CheckSomething]
-- Do some work
COMMIT TRANSACTION
Would it be okay to do that? Will the temporarily activated lower isolation level somehow break the higher level's protection, or is everything perfectly safe?
EDIT: The execution goes smoothly without any errors.
It's all here for SQL Server 2005. A snippet:
When you change a transaction from one isolation level to another, resources that are read after the change are protected according to the rules of the new level. Resources that are read before the change continue to be protected according to the rules of the previous level. For example, if a transaction changed from READ COMMITTED to SERIALIZABLE, the shared locks acquired after the change are now held until the end of the transaction.

If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
In this example:
Each isolation level is applied for the scope of the stored proc
Resources locked by DoSomethingImportant stay under SERIALIZABLE
Resources used by CheckSomething are read under READ UNCOMMITTED
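A quick way to verify this scoping yourself (a sketch; sys.dm_exec_requests is a real DMV whose transaction_isolation_level column reports 4 for SERIALIZABLE):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Reports 4 (SERIALIZABLE)
SELECT transaction_isolation_level
FROM sys.dm_exec_requests
WHERE session_id = @@SPID;

EXECUTE [dbo].[CheckSomething];  -- switches to READ UNCOMMITTED internally

-- Still reports 4: the change inside the procedure did not leak out
SELECT transaction_isolation_level
FROM sys.dm_exec_requests
WHERE session_id = @@SPID;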