Concurrency exception in Entity Framework when loading and deleting objects - sql-server

I've got an EF class mapped to a SQL Server table. I have the following very simple Entity Framework code (using the ASP.NET Boilerplate repository wrapper over the EF DBContext):
var objectsToDelete = repository.GetAll().Where(r => r.parentId == param1).ToList();
foreach(var obj in objectsToDelete) {
repository.Delete(obj); //doesn't actually update the database yet.
}
...
repository.SaveChanges();
Note that the GetAll call will immediately read from the database.
Surprisingly to me, this is a real concurrency headache when multiple web service calls try to run this code simultaneously with the same param1 value (imagine that the user performs the exact same action twice in quick succession). To cut a long story short, a context switch can occur between getting the IDs and calling SaveChanges. So this is what happens (you can change this order around quite a bit without affecting the result):
1. Thread #1 starts a SQL transaction and starts running the above code.
2. Thread #2 starts a SQL transaction and starts running the above code (using a different DBContext).
3. Thread #1 gets the objects to delete.
4. Context switch.
5. Thread #2 gets the objects to delete - they will have the same IDs that #1 read.
6. Thread #2 marks the objects as deleted and calls SaveChanges, which issues the SQL DELETE statements.
7. Thread #2 commits its transaction and it's done.
8. Thread #1, using its (now stale) list, marks those same objects as deleted and calls SaveChanges, which issues the SQL DELETE statements.
Exception! Entity throws this:
Store update, insert, or delete statement affected an unexpected number of rows (0).
Now, even if I turn the transaction isolation level all the way up to Serializable, this doesn't help. "Serializable" specifies that statements cannot read data that has been modified but not yet committed by other transactions. But the problem occurred in step 5, and at that point nothing had been modified. Ah, but Serializable also specifies that no other transactions can modify data that has been read by the current transaction until the current transaction completes. Okay, so in the above sequence Thread #2 will block at step 6. But that just shifts the problem without addressing it. Whichever thread is the second one to attempt the delete will encounter the 'unexpected rowcount' exception.
I can't see any prevention - only treatment. It seems that I have to either swallow the exception or turn off Entity Framework's validation-on-save. And since I'm using ASP.NET Boilerplate, that doesn't seem to be very convenient. I’m puzzled that something so apparently simple and typical isn’t easier to control. What am I missing?

var objectsToDelete = repository.GetAll().Where(r => r.parentId == param1).ToList();
if (objectsToDelete.Count != 0)
{
    repository.RemoveRange(objectsToDelete);
}
...
repository.SaveChanges();
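If swallowing the concurrency exception is acceptable for this particular delete/delete race (as the question contemplates), one option is to catch it and detach the already-deleted entries so the losing request completes as a no-op. A minimal sketch against a plain EF6 DbContext rather than the ABP repository wrapper, with hypothetical context and entity names:

using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Linq;

public class ChildItemCleaner
{
    // MyDbContext / ChildItems are hypothetical names standing in for the real context.
    public void DeleteChildren(MyDbContext context, int param1)
    {
        var objectsToDelete = context.ChildItems.Where(r => r.ParentId == param1).ToList();
        context.ChildItems.RemoveRange(objectsToDelete);
        try
        {
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            // Another request deleted the same rows first; detach the stale entries
            // so the delete/delete race degrades to a no-op instead of an error.
            foreach (var entry in ex.Entries)
            {
                entry.State = EntityState.Detached;
            }
        }
    }
}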

Related

Pessimistic offline locking using Entity Framework

First I'd like to describe the mechanism of the locking solution I'd like to implement. Basically, an item can be opened in read or write mode, but if a user opens the item in write mode, no other user should be able to open it in edit mode. An item here means a case in a customer service application.
In order to do this I came up with the following: the table will contain a flag which indicates whether an item is checked out for edit, and an 'end time' until which the flag is valid. The default value for it is 3 minutes; if no user interaction happens during this time, the flag can be ignored the next time a user tries to open the same item.
On the UI side, I use jQuery to monitor whether a user is active. If he or she is, a periodic AJAX call extends the time frame so he or she can continue working on the item. When the user saves the item, the flag will be removed. The end time is necessary to handle situations when the browser crashes or when the user goes to drink a coffee and leaves the item open for an hour.
So, the question. :) If a user opens the item in edit mode, I first have to read the flag & end-time values for the item, and if I find these valid (flag not set, or set but expired), I have to update them with new values.
What kind of transaction level should I use for this in EF, if any? Or should I write stored procedures to handle the select & update in a transaction? If so, what kind of locking method should I use?
You are describing pessimistic locking; there is really no debate on that. There are detailed instructions on what you want to do in the excellent MVC/EF tutorial: http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/handling-concurrency-with-the-entity-framework-in-an-asp-net-mvc-application
There's a chapter early on about pessimistic concurrency.
Optimistic locking is still OK in this case. You can use a timestamp / rowversion column and your flag together. The flag will be used to handle your application logic - only a single user can edit the record - and the timestamp will be used to avoid a race condition when setting the flag, because only a single thread will be able to read the record and write it back. If any other thread reads the record concurrently and saves it after the first thread, it will get a concurrency exception.
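As a rough illustration of that timestamp-plus-flag combination with EF code-first (the entity, property, and method names here are made up, not from the question):

using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// Hypothetical entity for the case item described in the question.
public class CaseItem
{
    public int Id { get; set; }
    public bool IsCheckedOutForEdit { get; set; }     // the application-level flag
    public DateTime? CheckoutExpiresAt { get; set; }  // the 'end time' for the flag
    [Timestamp]
    public byte[] RowVersion { get; set; }            // EF concurrency token
}

public class CheckoutService
{
    // Returns true if the caller now holds the edit lock, false otherwise.
    public bool TryCheckOut(DbContext db, int caseId)
    {
        var item = db.Set<CaseItem>().Find(caseId);
        if (item.IsCheckedOutForEdit && item.CheckoutExpiresAt > DateTime.UtcNow)
            return false;  // someone else holds a still-valid lock

        item.IsCheckedOutForEdit = true;
        item.CheckoutExpiresAt = DateTime.UtcNow.AddMinutes(3);
        try
        {
            db.SaveChanges();  // UPDATE ... WHERE Id = @id AND RowVersion = @originalVersion
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            return false;      // another thread set the flag between our read and our write
        }
    }
}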
If you don't want to use a timestamp, a different transaction isolation level will not help you, because the isolation level doesn't force queries to lock records. You must manually write a SQL query that uses the UPDLOCK hint to lock the record when querying, and then execute the update. You can do this in a stored procedure.
The answer below is not a good way to implement pessimistic concurrency. You should not implement this at the application level; the RDBMS has better tools for this.
If you are locking a row in the db, this is by definition pessimistic.
Since you are controlling the pessimistic concurrency at the application level, I don't think it matters which transaction scope EF uses. EF will automatically start a db-level transaction when you SaveChanges.
To prevent multiple threads from executing the lock / unlock from your app, you can lock the section of code that queries & updates like so:
public class MyClassThatManagesConcurrency
{
    // Shared lock object: only one thread at a time can enter the locked section.
    private static readonly object _lock = new object();

    public void MyMethodThatManagesConcurrency()
    {
        lock (_lock)
        {
            // query for the data
            // determine if item should be unlocked
            // dbContext.SaveChanges();
        }
    }
}
With the above, no two threads will ever execute code inside the lock section at the same time. However, I am not sure why this is necessary. If all you are doing is reading the object and unlocking it when time has expired, and two threads enter the method at the same time, the item will become unlocked either way.
On the other hand, if your db row for this object has a timestamp column (not a datetime column, but a column for versioning rows), and two threads enter the method at the same time, the second will receive a concurrency exception. But unless you are versioning rows at the db level, I don't think you need to do any locking.
Reply to comment
Ok I get it now, you are right. But you are still locking at the application level, which means it should not matter which db transaction ef chooses. To prevent 2 users from unlocking the same object, use the C# lock block I posted above.

Lost Update Anomaly in Sql Server Update Command

I am very much confused.
I have a transaction in ReadCommitted Isolation level. Among other things I am also updating a counter value in it, something similar to below:
Update tblCount set counter = counter + 1
My application is a desktop application and this transaction occurs quite frequently and concurrently. We recently noticed that sometimes the counter value doesn't get updated, i.e. an increment is missed. We also insert one record on each counter update, so we are sure that the records have been inserted but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions.
I seriously doubt it is a lost update anomaly I am facing, but if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow.
Is it that this update command works in two parts? Like, first it reads the counter value (during which it doesn't get an exclusive lock) and then writes the new calculated value (when it does get an exclusive lock)?
Please help, I am really confused.
The update command does not work in two parts. It only works in one.
There's something else going on, and my first guess would be that your transaction is rolling back for another reason. Out of those 2,000 transactions, for example, one may be rolling back - especially if you're doing a ton of things concurrently - and it didn't succeed at all.
That update may not have been what caused the problem, either - you may have deadlocks involved due to other transactions, and they may be failing before the update command (or during the update command).
I'd zoom out and ask questions about the transaction's error handling. Are you doing everything in try/catch blocks? Are you capturing error levels when transactions fail? If not, you'll need to capture a trace with Profiler to find out what's going on.
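As a rough sketch of that kind of error handling around the counter update (the connection string and the accompanying INSERT are placeholders; tblCount is the table from the question):

using System;
using System.Data.SqlClient;

public static class CounterUpdater
{
    public static void IncrementCounter(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                try
                {
                    using (var cmd = new SqlCommand(
                        "UPDATE tblCount SET counter = counter + 1", conn, tran))
                    {
                        cmd.ExecuteNonQuery();
                    }
                    // ... the INSERT that accompanies each counter update goes here ...
                    tran.Commit();
                }
                catch (SqlException ex)
                {
                    tran.Rollback();
                    // Log the failure so a rolled-back or deadlocked transaction is visible.
                    Console.Error.WriteLine("Counter update failed: {0} (error {1})", ex.Message, ex.Number);
                    throw;
                }
            }
        }
    }
}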
Are you sure that the SQL is always succeeding? What I mean is, could it be something like an occasional lock time-out? Are you handling SQL exceptions in your .NET code in a way that will make you aware of them (i.e. a pop-up message or a log entry)?

Two threads adding new rows at the same time - how to prevent?

In my application, I have a couple of threads that execute some logic.
At the end, they add a new row to some table.
Before adding the new row, they check whether a previous entry with the same details already exists. If one is found, they update instead of adding.
The problem is when thread A does the check, sees that no previous entry with the same details exists, and just before it adds a new row, thread B searches the DB for the same entity. Thread B sees that no such entity exists, so it adds a new row too.
The result is that there are two rows with the same data in the table.
Note: no table key is violated, because each thread gets the next sequence value just before adding the row, and the table key is an ID that is not related to the data.
Even if I change the table key so it is a combination of the data columns, that will prevent two rows with the same data but will cause a DB error when the second thread tries to add its row.
Thank you in advance for the help, Roy.
You should be using a queue, possibly a blocking queue. Threads A and B (the producers) would add objects to the queue, and another thread C (the consumer) would poll the queue, remove the oldest object, and persist it to the DB. This prevents the problem where A and B both want to persist equal objects at the same time.
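In .NET terms, a minimal sketch of that producer/consumer arrangement could use a BlockingCollection; the row type and the PersistOrUpdate method are hypothetical stand-ins:

using System.Collections.Concurrent;
using System.Threading.Tasks;

public class RowWriter
{
    private readonly BlockingCollection<MyRow> _queue = new BlockingCollection<MyRow>();

    // Threads A and B (producers) just enqueue their rows.
    public void Enqueue(MyRow row) => _queue.Add(row);

    // Single consumer thread C: the only place that checks for an existing row and
    // inserts/updates, so the check-then-add can never interleave with another writer.
    public Task StartConsumer() => Task.Run(() =>
    {
        foreach (var row in _queue.GetConsumingEnumerable())
        {
            PersistOrUpdate(row); // hypothetical: does the exists-check plus insert/update
        }
    });

    private void PersistOrUpdate(MyRow row) { /* ... */ }
}

public class MyRow { /* the data fields */ }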
You speak of "rows" so presumably this is a SQL database?
If so, why not just use transactions?
(Unless the threads are sharing a database connection, in which case a mutex might help, but I would prefer to give each thread a separate connection.)
I would recommend avoiding locking in the client layer. Synchronized only works within one process; later you may scale so that your threads are spread across several JVMs, or indeed machines.
I would enforce uniqueness in the DB, as you suggest this will then cause an exception for the second inserter. Catch that exception and do an update if that's the business logic you need.
But consider this argument:
Sometimes either of the following sequences may occur:
A inserts values VA, then B updates them to values VB.
B inserts VB, then A updates them to VA.
If the two threads are racing, either of these two outcomes (VA or VB) is equally valid. So you can't distinguish the second case from 'A inserts VA and B just fails'!
So in fact there may be no need for the "fail and then update" case.
I think this is a job for SQL constraints, namely "UNIQUE" on the set of columns that have the data + the appropriate error handling.
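A hedged sketch of the 'let a UNIQUE constraint enforce it, catch the violation, then update' approach with plain ADO.NET follows; the table and column names are made up, and 2627/2601 are SQL Server's duplicate-key error numbers:

using System.Data.SqlClient;

public static class UpsertHelper
{
    // Assumes an open connection and a UNIQUE constraint/index on DetailKey.
    public static void InsertOrUpdate(SqlConnection conn, string detailKey, int value)
    {
        try
        {
            using (var insert = new SqlCommand(
                "INSERT INTO MyTable (DetailKey, Value) VALUES (@k, @v)", conn))
            {
                insert.Parameters.AddWithValue("@k", detailKey);
                insert.Parameters.AddWithValue("@v", value);
                insert.ExecuteNonQuery();
            }
        }
        catch (SqlException ex) when (ex.Number == 2627 || ex.Number == 2601)
        {
            // Another thread inserted the same row first: fall back to an update.
            using (var update = new SqlCommand(
                "UPDATE MyTable SET Value = @v WHERE DetailKey = @k", conn))
            {
                update.Parameters.AddWithValue("@k", detailKey);
                update.Parameters.AddWithValue("@v", value);
                update.ExecuteNonQuery();
            }
        }
    }
}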
Most database frameworks (Hibernate in Java, ActiveRecord etc in Ruby) have a form of optimistic locking. What this means is that you execute each operation on the assumption that it will work without conflict. In the special case where there is a conflict, you check this atomically at the point where you do the database operation, throw an exception, or error return code, and retry the operation in your client code after requerying etc.
This is usually implemented using a version number on each record. When a database operation is done, the row is read (including the version number), the client code updates the data, then saves it back to the database with a where clause specifying the primary key ID AND the version number being the same as it was when it was read. If it is different - this means another process has updated the row, and the operation should be retried. Usually this means re-reading the record, and doing that operation again on it with the new data from the other process.
In the case of adding, you would also want a unique index on the table, so the database refuses the operation, and you can handle that in the same code.
Pseudo code would look something like
do {
    read row from database
    if no row {
        result_code = insert new row with data
    } else {
        result_code = update row with data
    }
} while result_code == conflict_code   // retry while the version check (or unique index) reports a conflict
The benefit of this is that you don't need complicated synchronization/locking in your client code - each thread just executes in isolation, and uses the database as the consistency check (which it is very quick, and good at). Because you're not locking on some shared resource for every operation, the code can run much faster.
It also means that you can run multiple separate operating system processes to split the load and/or scale the operation over multiple servers as well without any code changes to handle conflicts.
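Concretely, the version check described above usually ends up in the WHERE clause of the UPDATE; a sketch with ADO.NET, using hypothetical table and column names:

using System.Data.SqlClient;

public static class OptimisticUpdate
{
    // Returns false if someone else updated the row since it was read.
    public static bool TryUpdate(SqlConnection conn, int id, int readVersion, string newData)
    {
        using (var cmd = new SqlCommand(
            @"UPDATE Items
                 SET Data = @data, Version = Version + 1
               WHERE Id = @id AND Version = @version", conn))
        {
            cmd.Parameters.AddWithValue("@data", newData);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@version", readVersion);
            return cmd.ExecuteNonQuery() == 1;  // 0 rows affected => conflict: re-read and retry
        }
    }
}

A return value of false means another process changed the row, so the caller re-reads and retries, exactly as in the pseudocode above.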
You need to wrap the calls to check and write the row in a critical section or mutex.
With a critical section, only one thread at a time can perform the check and the write, so both threads can't write at once.
With a mutex, the first thread would lock the mutex, perform its operations, then unlock the mutex. The second thread would attempt to do the same but the mutex lock would block until the first thread released the mutex.
Specific implementations of critical section or mutex functionality would depend on your platform.
You need to perform the act of checking for existing rows and then updating / adding rows inside a single transaction.
When you perform your check you should also acquire an update lock on those records, to indicate that you are going to write to the database based on the information that you have just read, and that no-one else should be allowed to change it.
In pseudo T-SQL (for Microsoft SQL Server):
BEGIN TRANSACTION
SELECT id FROM MyTable WITH (UPDLOCK) WHERE SomeColumn = @SomeValue
-- Perform your update here
COMMIT TRANSACTION
The update lock won't prevent people from reading those records, but it will prevent people from writing anything which might change the output of your SELECT.
Multi-threading is always mind-bending ^^.
The main thing to do is to delimit the critical resources and critical operations.
Critical resource: your table.
Critical operation: not just the add, but the whole check-then-add procedure.
You need to lock access to your table from the beginning of the check until the end of the add.
If a thread attempts to do the same while another is checking/adding, it waits until that thread finishes its operation. As simple as that.

NHibernate session.flush() fails but makes changes

We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want every logical transaction to be applied, each thread does the following:
1. Load the current row (mapped to a type).
2. Make the change to the value, based on the old value (e.g. add 50).
3. session.update(obj)
4. session.flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
5. If step 4 (flush) threw a StaleStateException, refresh the object (with LockMode.Read) and go back to step 1.
6. We only do this a certain number of times per logical transaction; if we can't commit it after X attempts, we reject the logical transaction.
7. Each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O to manageable levels. Meaning: we have a single database transaction (per thread) with multiple flushes, at least one per logical change.
What's the problem here, you ask? Well, on commit we see changes from failed logical transactions.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 (but we failed since e.g. another thread changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I do not have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately roll back the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by nhibernate.
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent, it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on the Hibernate SQL logging it will probably become obvious how this is happening.
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects it not to change in the db for as long as it's in the NHibernate session cache.
As you mentioned, you've got a multi-threaded app.
This is what happens:
1st thread loads an entity.
2nd thread loads an entity.
1st thread changes the entity.
2nd thread changes the entity and finds out that the loaded entity has been changed by something else; being afraid that it would screw up the changes the 1st thread made, it throws an exception to let the programmer know about it.
You are missing a locking mechanism. I can't tell much about how to apply that properly and elegantly. Maybe a Transaction would help.
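If a database-level lock is what's wanted here, NHibernate can request one when loading the row; a minimal sketch (the Account entity and its mapping are hypothetical):

using NHibernate;

public static class BalanceUpdater
{
    public static void UpdateBalanceWithRowLock(ISession session, int userId, decimal delta)
    {
        using (var tx = session.BeginTransaction())
        {
            // LockMode.Upgrade typically translates to SELECT ... WITH (UPDLOCK, ROWLOCK)
            // on SQL Server, so concurrent writers queue up instead of racing on the version check.
            var account = session.Get<Account>(userId, LockMode.Upgrade);
            account.Balance += delta;
            tx.Commit();
        }
    }
}

public class Account
{
    public virtual int Id { get; set; }
    public virtual decimal Balance { get; set; }
}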
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily, just for querying - at least in production code). All we had to do was force updating the db on insert/update so we could actually query something through full-text search for some specific entities.
We had StaleStateException in integration tests when we used raw ADO.NET to reset the db. The NHibernate session was alive through a bunch of tests, but every test tried to clean up the db without NHibernate being aware of it.
Here is the documentation for exceptions in the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html

Diagnosing Deadlocks in SQL Server 2005

We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database.
I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks, and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:
UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the posts table. This one always gets killed in the deadlock. Here's an example
SELECT
[t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
[t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
[t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
[t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0
To be perfectly clear, we are not seeing write / write deadlocks, but read / write.
We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems!
Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write.
The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app.
I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date?
Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here.
We are flirting with the idea of using
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly.
I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time.
Ideas? Thoughts?
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls?
Jeremy, we are sharing one static datacontext in the base Controller for the most part:
private DBContext _db;
/// <summary>
/// Gets the DataContext to be used by a Request's controllers.
/// </summary>
public DBContext DB
{
get
{
if (_db == null)
{
_db = new DBContext() { SessionName = GetType().Name };
//_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
return _db;
}
}
Do you recommend we create a new context for every Controller, or per Page, or .. more often?
According to MSDN:
http://msdn.microsoft.com/en-us/library/ms191242.aspx
When either the READ COMMITTED SNAPSHOT or ALLOW SNAPSHOT ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb.
For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead.
There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure.
Try setting this option and REMOVE all NOLOCKs from code queries unless it’s really necessary. NOLOCKs or using global methods in the database context handler to combat database transaction isolation levels are Band-Aids to the problem. NOLOCKS will mask fundamental issues with our data layer and possibly lead to selecting unreliable data, where automatic select / update row versioning appears to be the solution.
ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON
NOLOCK and READ UNCOMMITTED are a slippery slope. You should never use them unless you understand why the deadlock is happening first. It would worry me that you say, "We have added with (nolock) to all the SQL queries". Needing to add WITH NOLOCK everywhere is a sure sign that you have problems in your data layer.
The update statement itself looks a bit problematic. Do you determine the count earlier in the transaction, or just pull it from an object? AnswerCount = AnswerCount + 1 when an answer is added is probably a better way to handle this. Then you don't need a transaction to get the correct count and you don't have to worry about the concurrency issue that you are potentially exposing yourself to.
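For example, the increment can be pushed to the database as a single set-based statement; with LINQ to SQL that could be a raw command (the column names come from the UPDATE in the question, the surrounding method and parameters are hypothetical):

using System;
using System.Data.Linq;

public static class PostCounters
{
    public static void BumpAnswerCount(DataContext db, int questionId, int userId)
    {
        // Atomic server-side increment: no read-modify-write race on AnswerCount.
        db.ExecuteCommand(
            "UPDATE Posts SET AnswerCount = AnswerCount + 1, " +
            "LastActivityDate = {0}, LastActivityUserId = {1} WHERE Id = {2}",
            DateTime.UtcNow, userId, questionId);
    }
}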
One easy way to get around this type of deadlock issue without a lot of work and without enabling dirty reads is to use "Snapshot Isolation Mode" (new in SQL 2005) which will always give you a clean read of the last unmodified data. You can also catch and retry deadlocked statements fairly easily if you want to handle them gracefully.
The OP question was to ask why this problem occurred. This post hopes to answer that while leaving possible solutions to be worked out by others.
This is probably an index-related issue. For example, let's say the table Posts has a non-clustered index X which contains ParentId and one (or more) of the fields being updated (AnswerCount, LastActivityDate, LastActivityUserId).
A deadlock would occur if the SELECT command takes a shared read lock on index X to search by ParentId and then needs a shared read lock on the clustered index to get the remaining columns, while the UPDATE command takes an exclusive write lock on the clustered index and then needs an exclusive write lock on index X to update it.
You now have a situation where A locked X and is trying to get Y whereas B locked Y and is trying to get X.
Of course, we'll need the OP to update his posting with more information regarding what indexes are in play to confirm if this is actually the cause.
I'm pretty uncomfortable about this question and the attendant answers. There's a lot of "try this magic dust! No that magic dust!"
I can't see anywhere that you've analyzed the locks that are taken and determined exactly which types of locks are deadlocked.
All you've indicated is that some locks occur -- not what is deadlocking.
In SQL 2005 you can get more info about what locks are being taken out by using:
DBCC TRACEON (1222, -1)
so that when the deadlock occurs you'll have better diagnostics.
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? I originally tried the latter approach, and from what I remember, it caused unwanted locking in the DB. I now create a new context for every atomic operation.
Before burning the house down to catch a fly with NOLOCK all over, you may want to take a look at that deadlock graph you should've captured with Profiler.
Remember that a deadlock requires (at least) 2 locks. Connection 1 has Lock A, wants Lock B - and vice-versa for Connection 2. This is an unsolvable situation, and someone has to give.
What you've shown so far is solved by simple locking, which Sql Server is happy to do all day long.
I suspect you (or LINQ) are starting a transaction with that UPDATE statement in it, and SELECTing some other piece of info beforehand. But you really need to backtrack through the deadlock graph to find the locks held by each thread, and then backtrack through Profiler to find the statements that caused those locks to be granted.
I expect that there's at least 4 statements to complete this puzzle (or a statement that takes multiple locks - perhaps there's a trigger on the Posts table?).
Will you care if your user profile is a few seconds out of date?
Nope - that's perfectly acceptable. Setting the base transaction isolation level is probably the best/cleanest way to go.
Typical read/write deadlocks come from index order access. The read (T1) locates the row on index A and then looks up a projected column on index B (usually the clustered index). The write (T2) changes index B (the cluster) and then has to update index A. T1 has an S-lock on A and wants an S-lock on B; T2 has an X-lock on B and wants a U-lock on A. Deadlock, puff. T1 is killed.
This is prevalent in environments with heavy OLTP traffic and just a tad too many indexes :). The solution is to make either the read not have to jump from A to B (i.e. include the column in A, or remove the column from the projected list) or T2 not have to jump from B to A (don't update the indexed column).
Unfortunately, LINQ is not your friend here...
@Jeff - I am definitely not an expert on this, but I have had good results with instantiating a new context on almost every call. I think it's similar to creating a new Connection object on every call with ADO. The overhead isn't as bad as you would think, since connection pooling will still be used anyway.
I just use a global static helper like this:
public static class AppData
{
/// <summary>
/// Gets a new database context
/// </summary>
public static CoreDataContext DB
{
get
{
var dataContext = new CoreDataContext
{
DeferredLoadingEnabled = true
};
return dataContext;
}
}
}
and then I do something like this:
var db = AppData.DB;
var results = from p in db.Posts where p.ID == id select p;
And I would do the same thing for updates. Anyway, I don't have nearly as much traffic as you, but I was definitely getting some locking when I used a shared DataContext early on with just a handful of users. No guarantees, but it might be worth giving a try.
Update: Then again, looking at your code, you are only sharing the data context for the lifetime of that particular controller instance, which basically seems fine unless it is somehow getting used concurrently by multiple calls within the controller. In a thread on the topic, ScottGu said:
Controllers only live for a single request - so at the end of processing a request they are garbage collected (which means the DataContext is collected)...
So anyway, that might not be it, but again it's probably worth a try, perhaps in conjunction with some load testing.
Q. Why are you storing the AnswerCount in the Posts table in the first place?
An alternative approach is to eliminate the "write back" to the Posts table by not storing the AnswerCount in the table but to dynamically calculate the number of answers to the post as required.
Yes, this will mean you're running an additional query:
SELECT COUNT(*) FROM Answers WHERE post_id = @id
or more typically (if you're displaying this for the home page):
SELECT p.post_id,
p.<additional post fields>,
a.AnswerCount
FROM Posts p
INNER JOIN AnswersCount_view a
ON <join criteria>
WHERE <home page criteria>
but this typically results in an INDEX SCAN and may be more efficient in the use of resources than using READ ISOLATION.
There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues.
You definitely want READ_COMMITTED_SNAPSHOT set to on, which it is not by default. That gives you MVCC semantics. It's the same thing Oracle uses by default. Having an MVCC database is so incredibly useful, NOT using one is insane. This allows you to run the following inside a transaction:
Update USERS Set FirstName = 'foobar';
//decide to sleep for a year.
meanwhile without committing the above, everyone can continue to select from that table just fine. If you are not familiar with MVCC, you will be shocked that you were ever able to live without it. Seriously.
Setting your default to read uncommitted is not a good idea. You will undoubtedly introduce inconsistencies and end up with a problem that is worse than what you have now. Snapshot isolation might work well, but it is a drastic change to the way Sql Server works and puts a huge load on tempdb.
Here is what you should do: use try-catch (in T-SQL) to detect the deadlock condition. When it happens, just re-run the query. This is standard database programming practice.
There are good examples of this technique in Paul Nielson's Sql Server 2005 Bible.
Here is a quick template that I use:
-- Deadlock retry template
declare @lastError int;
declare @numErrors int;
set @numErrors = 0;
LockTimeoutRetry:
begin try;
    -- The query goes here
    return; -- this is the normal end of the procedure
end try begin catch
    set @lastError = @@error
    if @lastError = 1222 or @lastError = 1205 -- Lock timeout or deadlock
    begin;
        if @numErrors >= 3 -- We hit the retry limit
        begin;
            raiserror('Could not get a lock after 3 attempts', 16, 1);
            return -100;
        end;
        -- Wait and then try the transaction again
        waitfor delay '00:00:00.25';
        set @numErrors = @numErrors + 1;
        goto LockTimeoutRetry;
    end;
    -- Some other error occurred
    declare @errorMessage nvarchar(4000), @errorSeverity int
    select @errorMessage = error_message(),
           @errorSeverity = error_severity()
    raiserror(@errorMessage, @errorSeverity, 1)
    return -100
end catch;
One thing that has worked for me in the past is making sure all my queries and updates access resources (tables) in the same order.
That is, if one query updates in the order Table1, Table2 and a different query updates in the order Table2, Table1, then you might see deadlocks.
Not sure if it's possible for you to change the order of updates since you're using LINQ. But it's something to look at.
Will you care if your user profile is a few seconds out of date?
A few seconds would definitely be acceptable. It doesn't seem like it would be that long, anyways, unless a huge number of people are submitting answers at the same time.
I agree with Jeremy on this one. You ask if you should create a new data context for each controller or per page - I tend to create a new one for every independent query.
I'm building a solution at present which used to implement the static context like you do, and when I threw tons of requests at the beast of a server (million+) during stress tests, I was also getting read/write locks randomly.
As soon as I changed my strategy to use a different data context at LINQ level per query, and trusted that SQL server could work its connection pooling magic, the locks seemed to disappear.
Of course I was under some time pressure, so trying a number of things all around the same time, so I can't be 100% sure that is what fixed it, but I have a high level of confidence - let's put it that way.
You should implement dirty reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
If you don't absolutely require perfect transactional integrity with your queries, you should be using dirty reads when accessing tables with high concurrency. I assume your Posts table would be one of those.
This may give you so-called "dirty reads", which is when your query acts upon data from a transaction that hasn't been committed.
We're not running a banking site here, we don't need perfect accuracy every time
Use dirty reads. You're right in that they won't give you perfect accuracy, but they should clear up your dead locking issues.
Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly
If you implement dirty reads on "the base database context", you can always wrap your individual calls using a higher isolation level if you need the transactional integrity.
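For instance, a sketch of such a wrapper using TransactionScope, which overrides whatever default the context sets for the duration of the scope (the work delegate is a placeholder for the real LINQ calls):

using System;
using System.Transactions;

public static class IsolationHelper
{
    // Runs the given work under an explicit isolation level, overriding a
    // READ UNCOMMITTED default set at the data context level.
    public static void RunSerializable(Action work)
    {
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            work();          // the LINQ calls that need transactional integrity
            scope.Complete();
        }
    }
}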
So what's the problem with implementing a retry mechanism? There will always be the possibility of a deadlock occurring, so why not have some logic to identify it and just try again?
Won't at least some of the other options introduce performance penalties that are taken all the time when a retry system will kick in rarely?
Also, don't forget some sort of logging when a retry happens so that you don't get into that situation of rare becoming often.
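A hedged sketch of such a client-side retry, keyed on SQL Server's deadlock error number 1205 and including the logging mentioned above (the action delegate stands in for the real query):

using System;
using System.Data.SqlClient;
using System.Threading;

public static class DeadlockRetry
{
    public static void Run(Action action, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
            {
                // Chosen as deadlock victim: log it so "rare" doesn't silently become "often".
                Console.Error.WriteLine("Deadlock on attempt {0}, retrying...", attempt);
                Thread.Sleep(250 * attempt);
            }
        }
    }
}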
Now that I see Jeremy's answer, I think I remember hearing that the best practice is to use a new DataContext for each data operation. Rob Conery's written several posts about DataContext, and he always news them up rather than using a singleton.
http://blog.wekeroad.com/2007/08/17/linqtosql-ranch-dressing-for-your-database-pizza/
http://blog.wekeroad.com/mvc-storefront/mvcstore-part-9/ (see comments)
Here's the pattern we used for Video.Show (link to source view in CodePlex):
using System.Configuration;
namespace VideoShow.Data
{
public class DataContextFactory
{
public static VideoShowDataContext DataContext()
{
return new VideoShowDataContext(ConfigurationManager.ConnectionStrings["VideoShowConnectionString"].ConnectionString);
}
public static VideoShowDataContext DataContext(string connectionString)
{
return new VideoShowDataContext(connectionString);
}
}
}
Then at the service level (or even more granular, for updates):
private VideoShowDataContext dataContext = DataContextFactory.DataContext();
public VideoSearchResult GetVideos(int pageSize, int pageNumber, string sortType)
{
var videos =
from video in dataContext.Videos
where video.StatusId == (int)VideoServices.VideoStatus.Complete
orderby video.DatePublished descending
select video;
return GetSearchResult(videos, pageSize, pageNumber);
}
I would have to agree with Greg so long as setting the isolation level to read uncommitted doesn't have any ill effects on other queries.
I'd be interested to know, Jeff, how setting it at the database level would affect a query such as the following:
Begin Tran
Insert into Table (Columns) Values (Values)
Select Max(ID) From Table
Commit Tran
It's fine with me if my profile is even several minutes out of date.
Are you re-trying the read after it fails? It's certainly possible when firing a ton of random reads that a few will hit when they can't read. Most of the applications that I work with have very few writes compared to the number of reads, and I'm sure the reads are nowhere near the number you are getting.
If implementing "READ UNCOMMITTED" doesn't solve your problem, then it's tough to help without knowing a lot more about the processing. There may be some other tuning option that would help this behavior. Unless some MSSQL guru comes to the rescue, I recommend submitting the problem to the vendor.
I would continue to tune everything; how is the disk subsystem performing? What is the average disk queue length? If I/Os are backing up, the real problem might not be these two queries that are deadlocking - it might be another query that is bottlenecking the system. You mentioned a query taking 20 seconds that has been tuned; are there others?
Focus on shortening the long-running queries, I'll bet the deadlock problems will disappear.
I had the same problem, and I cannot use "IsolationLevel = IsolationLevel.ReadUncommitted" on TransactionScope because the server doesn't have DTS enabled (!).
This is what I did with an extension method:
public static void SetNoLock(this MyDataContext myDS)
{
myDS.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
So, for selects that use critical high-concurrency tables, we enable "nolock" like this:
using (MyDataContext myDS = new MyDataContext())
{
myDS.SetNoLock();
// var query = from ...my dirty queries here...
}
Suggestions are welcome!

Resources