Can I set the isolation level in the connection string? - sql-server

How can I set the isolation level of all my SqlCommand ExecuteNonQuery calls to be read uncommitted? (connecting to a SQL Server 2008 enterprise instance)
I am simply transforming static data and inserting the results to my own tables on a regular basis, and would like to avoid writing more code than necessary.

No, you cannot.
You need to explicitly define the isolation level when you start a transaction.
For more info on adjusting the isolation level, see the MSDN documentation on the topic.

No, you cannot.
And there is no way of changing the default transaction isolation level server-wide.
http://blogs.msdn.com/b/ialonso/archive/2012/11/26/how-to-set-the-default-transaction-isolation-level-server-wide.aspx

You can set the isolation level when you create the SqlTransaction object (via SqlConnection.BeginTransaction), and then assign that transaction to the SqlCommand's Transaction property.
http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqltransaction.isolationlevel.aspx

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRAN
/* do stuff */
COMMIT

Note that ADODB allowed one to set the default isolation level for the connection, whereas ADO.NET will use the isolation level of the last committed transaction as the default isolation level (see the note in https://msdn.microsoft.com/en-us/library/5ha4240h(v=vs.110).aspx). See https://technet.microsoft.com/en-us/library/ms189542%28v=sql.105%29.aspx?f=255&MSPPError=-2147217396 for details on setting the isolation level for various Microsoft database technologies.
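This "sticky" session state interacts badly with connection pooling. A toy Python model of how a raised isolation level can leak to the next user of a pooled connection (this is a sketch, not real SqlClient code; the class and function names are invented for illustration):

```python
# Toy model (not real ADO.NET): a pooled connection keeps whatever
# session-level isolation the last user left on it.
class ToyConnection:
    def __init__(self):
        self.isolation = "READ COMMITTED"   # server/session default

pool = []

def acquire():
    # Reuse a pooled connection when available, else open a "new" one.
    return pool.pop() if pool else ToyConnection()

def release(conn):
    # Note: session settings are NOT reset on release.
    pool.append(conn)

first = acquire()
first.isolation = "SERIALIZABLE"            # some transaction raised the level
release(first)

second = acquire()                          # unrelated work gets the same object
leaked = second.isolation                   # "SERIALIZABLE", not the default
```

The point of the sketch: because release does not reset session state, the next caller silently inherits the previous caller's isolation level.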


How to prevent leak of transaction isolation level in pooled connections?

I am using System.Data.SqlClient (4.6.1) in a .NET Core 2.2 project. SqlClient maintains a pool of connections, and it has been reported that the transaction isolation level leaks when the same pooled connection is reused for the next SQL command.
For example, this is explained in this stackoverflow answer: https://stackoverflow.com/a/25606151/1250853
I tried looking for the right way to prevent this leak, but couldn't find a satisfactory solution.
I am thinking of following this pattern:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET XACT_ABORT ON -- Turns on rollback if T-SQL statement raises a run-time error.
BEGIN TRANSACTION
SELECT * FROM MyTable;
-- removed complex statements for brevity. there are selects followed by insert.
COMMIT TRANSACTION
-- Set settings back to known defaults.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET XACT_ABORT OFF
Is this a good approach?
I would normally use two separate connection strings that differ only in some harmless property (e.g. tweaked Application Name values): one for normal connections, the other for connections where you need SERIALIZABLE.
Since the connection strings are different, they go into separate pools. You may want to adjust other pool related settings if you think this will cause issues (e.g. limit the pool for the serializable to a much lower maximum if using it is rare and to prevent 2x default maximum connections from possibly being created).
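The pool-separation trick works because ADO.NET keys its pools on the exact connection string. A toy Python model of that keying (the ToyPooler class is invented for illustration, not a real API):

```python
from collections import defaultdict

# Toy model of ADO.NET-style pooling: pools are keyed by the exact
# connection string, so strings that differ only in Application Name
# never share physical connections.
class ToyPooler:
    def __init__(self):
        self._pools = defaultdict(list)
        self._counter = 0

    def acquire(self, conn_str):
        pool = self._pools[conn_str]
        if pool:
            return pool.pop()
        self._counter += 1
        return ("conn", self._counter)      # stand-in for a physical connection

    def release(self, conn_str, conn):
        self._pools[conn_str].append(conn)

pooler = ToyPooler()
normal = "Server=.;Database=db;Application Name=MyApp"
serializable = "Server=.;Database=db;Application Name=MyApp-Serializable"

c1 = pooler.acquire(normal)
pooler.release(normal, c1)
c2 = pooler.acquire(serializable)   # different key -> never reuses c1
c3 = pooler.acquire(normal)         # same key -> reuses c1
```

Connections whose session state was changed by serializable work thus can never be handed out to "normal" callers, because they live in a different pool.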
I recommend never changing the transaction isolation level. If a transaction needs different locking behavior, use appropriate lock hints on selected queries.
The transaction isolation levels are blunt instruments, and often have surprising consequences.
SERIALIZABLE is especially problematic, as few people are prepared to handle the deadlocks it uses to enforce its isolation guarantees.
Also, if you only change the transaction isolation level in a stored procedure, SQL Server will automatically revert the session's isolation level once the procedure completes.
Answering my own question based on @Zohar Peled's suggestion in the comments:
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET XACT_ABORT ON -- Turns on rollback if T-SQL statement raises a run-time error.
BEGIN TRANSACTION
SELECT * FROM MyTable;
-- removed complex statements for brevity. there are selects followed by multiple inserts.
COMMIT TRANSACTION
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
SET XACT_ABORT OFF
END TRY
BEGIN CATCH
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET XACT_ABORT OFF;
THROW;
END CATCH
EDIT:
If you are setting the isolation level and XACT_ABORT inside a stored procedure, they are scoped to the procedure only, and you don't need to catch and turn everything off. https://learn.microsoft.com/en-us/sql/t-sql/statements/set-statements-transact-sql?view=sql-server-ver15 .

Unix FreeTDS Isolation Level Sybase

According to the Sybase Documentation (http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.sqlanywhere.12.0.1/dbusage/udtisol.html) there is one paragraph:
[...] The default isolation level is 0, except for [...] and TDS connections, which have a default isolation level of 1. [...]
I'm connecting to that server using FreeTDS on Unix. So far I haven't found a way to change the isolation level to 0 (Read Uncommitted) (maybe via /etc/freetds.conf, but I haven't found anything there either). Modifying the SQL statements is not possible for me, so I'm looking for a config option.
Does anyone have an idea?
You can set the isolation level for the connection using:
SET TEMPORARY OPTION isolation_level = 0;
If you need more details check the documentation.
You can see the current isolation level with:
SELECT CONNECTION_PROPERTY('isolation_level');
That does the trick:
set TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

Is it possible to select data while a transaction is occurring?

I am using TransactionScope to ensure that data is being written to the database correctly. However, I may have a need to select some data (from another page) while the transaction is running. Would it be possible to do this? I'm very much a noob when it comes to databases.
I am using LinqToSQL and SQL Server 2005(dev)/2008(prod).
Yes, it is possible to still select data from a database while a transaction is running.
Data not affected by your transaction (for instance, rows in a table which are not being updated) can usually be read from other transactions. (In certain situations SQL Server will introduce a table lock that stops reads on all rows in the table, but these are unusual and most often a symptom of something else going on in your query or on the server.)
You need to look into Transaction Isolation Levels since these control exactly how this behaviour will work.
Here is the C# code to set the isolation level of a transaction scope.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Code within transaction
}
In general, depending on the transaction isolation level specified on a transaction (or any table hints like NOLOCK), you get different levels of data locking that protect the rest of your application from activity tied up in your transaction. With a transaction isolation level of READ UNCOMMITTED, for example, you can see the writes within that transaction as they occur. This allows dirty reads but also prevents (most) locks on data.
The other end of the scale is an isolation level like SERIALIZABLE, which ensures that your transaction activity is entirely isolated until it has committed.
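The dirty-read behaviour can be demonstrated without SQL Server. Here is a small sketch using Python's sqlite3 module as an analogy: SQLite's shared-cache PRAGMA read_uncommitted plays the role of READ UNCOMMITTED (this illustrates the concept, not SQL Server's exact semantics):

```python
import sqlite3

# Two connections to the same shared-cache in-memory database.
uri = "file:dirty_demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)

# Allow the reader to bypass the writer's table locks (dirty reads).
reader.execute("PRAGMA read_uncommitted = 1")

writer.execute("CREATE TABLE t (x INTEGER)")
writer.execute("BEGIN")                     # open transaction, not committed
writer.execute("INSERT INTO t VALUES (42)")

dirty = reader.execute("SELECT x FROM t").fetchall()   # sees the uncommitted row
writer.execute("ROLLBACK")
after = reader.execute("SELECT x FROM t").fetchall()   # the row is gone again
```

The "dirty" result vanishes on rollback, which is exactly the risk READ UNCOMMITTED / NOLOCK trades for reduced locking.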
In addition to the already provided advice, I would strongly recommend you look into snapshot isolation models. There is a good discussion at Using Snapshot Isolation. Enabling READ_COMMITTED_SNAPSHOT ON on the database can alleviate a lot of contention problems, because readers are no longer blocked by writers. Since default reads are performed under read committed isolation, this simple database option switch has immediate benefits and requires no changes in the app.
There is no free lunch, so this comes at a price, in this case additional load on tempdb; see Row Versioning Resource Usage.
If however you are using explicit isolation levels, and especially if you use the default TransactionScope Serializable mode, then you'll have to review your code to enforce the more benign ReadCommitted isolation level. If you don't know what isolation level you use, it means you use ReadCommitted.
Yes, by default a TransactionScope uses the Serializable isolation level and will hold locks on the data involved in the transaction. If you need to read while a transaction is taking place, enter another TransactionScope with TransactionOptions IsolationLevel.ReadUncommitted:
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
using (var scope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    options))
{
    // read the database
}
With a LINQ-to-SQL DataContext:
// db is DataContext
db.Transaction =
db.Connection.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
Note that there is a difference between System.Transactions.IsolationLevel and System.Data.IsolationLevel. Yes, you read that correctly.

Dirty Reads in Postgres

I have a long running function that should be inserting new rows. How do I check the progress of this function?
I was thinking dirty reads would work so I read http://www.postgresql.org/docs/8.4/interactive/sql-set-transaction.html and came up with the following code and ran it in a new session:
SET SESSION CHARACTERISTICS AS SERIALIZABLE;
SELECT * FROM MyTable;
Postgres gives me a syntax error. What am I doing wrong? If I do it right, will I see the inserted records while that long function is still running?
Thanks.
PostgreSQL does not implement a way for you to see this from outside the function, aka the READ UNCOMMITTED isolation level. (The immediate syntax error is that the statement needs the TRANSACTION keyword: SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE. But even with correct syntax, you would not see the uncommitted rows.) Your basic two options are:
Have the function use RAISE NOTICE every now and then to show you how far along you are
Use something like dblink from the function back to the same database, and update a counter table from there. Since that's a completely separate transaction, the counter will be visible as soon as that transaction commits - you don't have to wait for the main transaction (around the function call) to finish.
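The dblink pattern can be sketched in Python with two independent SQLite connections standing in for the two PostgreSQL sessions (the table and variable names are made up for illustration; the point is only that the side connection commits independently):

```python
import sqlite3

# Two fully independent databases: the "main" long transaction and an
# autonomous, autocommit side channel (the dblink stand-in).
main = sqlite3.connect(":memory:", isolation_level=None)
progress = sqlite3.connect(":memory:", isolation_level=None)

main.execute("CREATE TABLE data (x INTEGER)")
progress.execute("CREATE TABLE progress (done INTEGER)")
progress.execute("INSERT INTO progress VALUES (0)")

main.execute("BEGIN")                        # long-running transaction
for i in range(100):
    main.execute("INSERT INTO data VALUES (?)", (i,))
    if (i + 1) % 25 == 0:
        # autocommit on the side connection: visible immediately,
        # even though the main transaction is still open
        progress.execute("UPDATE progress SET done = ?", (i + 1,))

# Still inside the main transaction, yet the counter already reads 100.
seen = progress.execute("SELECT done FROM progress").fetchone()[0]
main.execute("COMMIT")
total = main.execute("SELECT COUNT(*) FROM data").fetchone()[0]
```

Because the counter lives in a separate connection/transaction, an observer can poll it for progress without waiting for the main transaction to finish.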
For versions up to 9.0: PostgreSQL Transaction Isolation
In PostgreSQL, you can request any of the four standard transaction isolation levels. But internally, there are only two distinct isolation levels, which correspond to the levels Read Committed and Serializable. When you select the level Read Uncommitted you really get Read Committed, and when you select Repeatable Read you really get Serializable, so the actual isolation level might be stricter than what you select. This is permitted by the SQL standard: the four isolation levels only define which phenomena must not happen, they do not define which phenomena must happen.
For versions from 9.1 to current(15): PostgreSQL Transaction Isolation
In PostgreSQL, you can request any of the four standard transaction isolation levels, but internally only three distinct isolation levels are implemented, i.e., PostgreSQL's Read Uncommitted mode behaves like Read Committed. This is because it is the only sensible way to map the standard isolation levels to PostgreSQL's multiversion concurrency control architecture.
Dirty reads don't occur in PostgreSQL even if the isolation level is READ UNCOMMITTED. The documentation says:
PostgreSQL's Read Uncommitted mode behaves like Read Committed.
So, unlike in other databases, READ UNCOMMITTED has the same characteristics as READ COMMITTED in PostgreSQL; in short, the two levels are identical in PostgreSQL.
And, this table below shows which anomaly occurs in which isolation level in PostgreSQL according to my experiments:
Anomaly                             Read Uncommitted  Read Committed  Repeatable Read  Serializable
Dirty Read                          No                No              No               No
Non-repeatable Read                 Yes               Yes             No               No
Phantom Read                        Yes               Yes             No               No
Lost Update                         Yes               Yes             No               No
Write Skew (Serialization Anomaly)  Yes               Yes             Yes              No
With SELECT FOR UPDATE:
Anomaly                             Read Uncommitted  Read Committed  Repeatable Read  Serializable
Dirty Read                          No                No              No               No
Non-repeatable Read                 No                No              No               No
Phantom Read                        No                No              No               No
Lost Update                         No                No              No               No
Write Skew (Serialization Anomaly)  No                No              No               No

Is it okay if from within one stored procedure I call another one that sets a lower transaction isolation level?

I have a bunch of utility procedures that just check for some conditions in the database and return a flag result. These procedures are run with READ UNCOMMITTED isolation level, equivalent to WITH NOLOCK.
I also have more complex procedures that are run with SERIALIZABLE isolation level. They also happen to have these same kind of checks in them.
So I decided to call these check procedures from within those complex procedures instead of replicating the check code.
Basically it looks like this:
CREATE PROCEDURE [dbo].[CheckSomething]
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
BEGIN TRANSACTION
-- Do checks
COMMIT TRANSACTION
and
CREATE PROCEDURE [dbo].[DoSomethingImportant]
AS
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
EXECUTE [dbo].[CheckSomething]
-- Do some work
COMMIT TRANSACTION
Would it be okay to do that? Will the temporarily activated lower isolation level somehow break the higher-level protection, or is everything perfectly safe?
EDIT: The execution goes smoothly without any errors.
It's all here for SQL Server 2005. A snippet:
When you change a transaction from one isolation level to another, resources that are read after the change are protected according to the rules of the new level. Resources that are read before the change continue to be protected according to the rules of the previous level. For example, if a transaction changed from READ COMMITTED to SERIALIZABLE, the shared locks acquired after the change are now held until the end of the transaction.

If you issue SET TRANSACTION ISOLATION LEVEL in a stored procedure or trigger, when the object returns control the isolation level is reset to the level in effect when the object was invoked. For example, if you set REPEATABLE READ in a batch, and the batch then calls a stored procedure that sets the isolation level to SERIALIZABLE, the isolation level setting reverts to REPEATABLE READ when the stored procedure returns control to the batch.
In this example:
Each isolation level is applied for the scope of the stored proc
Resources locked by DoSomethingImportant stay under SERIALIZABLE
Resources used by CheckSomething are read under READ UNCOMMITTED
