Add object and its relationships atomically in SQL Server database

Suppose I want to insert a new Experiment into my SQL Server database, using Entity Framework 4.0:
Experiment has 1..* Tasks in it
Both Experiment and Task derive from EntityObject
Also, there is a database constraint that each Task must have exactly one "parent" Experiment linked to it
Insertion must be atomic. By atomic I mean that a reader on the database must never be able to read an Experiment that is not fully written to the database, for instance an Experiment with no Task.
All solutions I have tried so far have the issue that incomplete experiments can be read, even if only for a few seconds; i.e. the experiment does eventually get populated with its Task, quickly but not atomically.
More specifically,
my reader.exe reads, in a while(true) loop, all experiments and dumps any experiment that has no tasks.
In parallel, my writer.exe writes ~1000 experiments one by one, each with one task, and saves them to the database.
I cannot find a way to write my ReadAllExperiments and WriteOneExperiment functions so that I never read an incomplete experiment.
How am I supposed to do that?
PS:
I'm a newbie to databases; I tried transactions with the Serializable isolation level on writes, manual SQL queries for reading with UPDLOCK, etc., but did not succeed in solving this problem, so I'm stuck.
Might what I thought was a quite basic and easy need turn out to be an ill-posed problem?
Issue is unit tested here:
Entity Framework Code First: SaveChanges is not atomic

The following should actually perform what you are after, assuming you are not reading with READ UNCOMMITTED or a similar isolation level:
using (var ctx = new MyContext())
{
    var task = new Task();
    ctx.Tasks.Add(task);
    ctx.Experiments.Add(new Experiment { Task = task });
    ctx.SaveChanges(); // both inserts are sent in a single database transaction
}
If you are using READ UNCOMMITTED or similar, the Task may in this case show up before the Experiment is added; given the constraint you have described, I don't believe there should ever be a state where the Experiment can exist before the Task.

Two solutions apparently solve our issues:
The database option "Is Read Committed Snapshot On" = True (by default it is False).
The database option "Allow Snapshot Isolation" = True, plus reads done under the SNAPSHOT isolation level. We had tried reading under snapshot isolation before, but did not know about this database option. I still do not understand why we don't get an error when reading under an isolation level that is disabled at the database level.
More information:
http://www.codinghorror.com/blog/2008/08/deadlocked.html
MSDN: http://msdn.microsoft.com/en-us/library/ms173763.aspx (search for READ_COMMITTED_SNAPSHOT)
http://msdn.microsoft.com/en-us/library/ms179599%28v=sql.105%29.aspx

Related

NHibernate .SaveOrUpdate: how to tell if the row was updated or not (i.e. RowCount)

I am updating a column in a SQL table and I want to check whether it was actually updated, or whether it had already been updated so that my query didn't affect any row,
the way @@ROWCOUNT reports it in SQL Server.
In my case, I want to update a column named lockForProcessing: if it is already being processed, my query will not affect any row, which means someone else is already processing it; otherwise I will process it.
If I understand you correctly, your problem is related to a multi threading / concurrency problem, where the same table may be updated simultaneously.
You may want to have a look at Chapter 11, Transactions And Concurrency, in the NHibernate documentation.
The ISession is not threadsafe!
The entity is not stored the moment session.SaveOrUpdate() is executed, but typically after transaction.Commit().
Stored and committed are two different things.
The entity is stored after any session.Flush(). Depending on the IsolationLevel, the entity won't be seen by other transactions.
The entity is commited after a transaction.Commit(). A commit also flushes.
Maybe all you need to do is choose the right IsolationLevel when beginning transactions and then read the table row to get the current value:
using (var transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
    var current = session.Get<MyEntity>(id); // read your row (MyEntity and id are placeholders)
    transaction.Commit();
}
Maybe it is easier to create some locking or pipeline mechanism in your application code though. Without knowing more about who is accessing the database (other transactions, sessions, processes?) it is hard to answer more precisely.
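For the lockForProcessing case specifically, a single conditional UPDATE whose affected-row count you check may be all that's needed; ExecuteUpdate returns that count, playing the role of @@ROWCOUNT. A minimal sketch, assuming a mapped Job entity with Id and LockForProcessing properties (names hypothetical):

int rows = session.CreateQuery(
        "update Job set LockForProcessing = :newVal " +
        "where Id = :id and LockForProcessing = :oldVal")
    .SetParameter("newVal", true)
    .SetParameter("id", jobId)
    .SetParameter("oldVal", false)
    .ExecuteUpdate(); // number of rows affected

bool claimed = (rows == 1); // 0 means someone else is already processing it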

How to prevent interim identity holes in SQL Server

Is there a way (using config + transaction isolation levels) to ensure that there are no interim holes in a SQL Server IDENTITY column? Persistent holes are OK. The situation I am trying to avoid is when one query returns a hole but a subsequent similar query returns a row that was not yet committed when the query had been run the first time.
Your question is one of isolation levels and has nothing to do with IDENTITY; the same problem applies to the visibility of any update/insert. The first query can return results which include an uncommitted row in one and only one situation: if you use dirty reads (READ UNCOMMITTED). If you do, then you deserve all the inconsistent results you'll get, and you deserve no help.
If you want to see stable results between two consecutive reads, you must have a transaction that encompasses both reads and use the SERIALIZABLE isolation level or, better, a row-versioning-based isolation level like SNAPSHOT. My recommendation would be to enable SNAPSHOT and use it. See Using Snapshot Isolation.
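A minimal sketch of that recommendation in ADO.NET, assuming ALLOW_SNAPSHOT_ISOLATION is ON for the database (connection string and table name are hypothetical); both reads see the same point-in-time view, so no new row can appear between them:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction(System.Data.IsolationLevel.Snapshot))
    using (var cmd = new SqlCommand("SELECT MAX(Id) FROM dbo.MyTable", conn, tx))
    {
        var first = cmd.ExecuteScalar();
        // ... any amount of time later, still inside the transaction:
        var second = cmd.ExecuteScalar(); // same result as 'first'
        tx.Commit();
    }
}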
All I need is the promise that inserts to a table are committed in order of identity values they claim.
I hope you read this again and realize the impossibility of the request ('promise ... commit ...'). You can't ask for a guarantee of success before something has finished. What you're asking for eventually boils down to not allocating a new identity before the previously allocated one has committed successfully; in other words, full serialization of all insert transactions.

Is it possible to select data while a transaction is occurring?

I am using TransactionScope to ensure that data is written to the database correctly. However, I may need to select some data (from another page) while the transaction is running. Would that be possible? I'm very much a noob when it comes to databases.
I am using LinqToSQL and SQL Server 2005(dev)/2008(prod).
Yes, it is possible to still select data from a database while a transaction is running.
Data not affected by your transaction (for instance, rows in a table that are not being updated) can usually be read from other transactions. (In certain situations SQL Server will take a table lock that stops reads on all rows in the table, but those are unusual and most often a symptom of something else going on in your query or on the server.)
You need to look into Transaction Isolation Levels since these control exactly how this behaviour will work.
Here is the C# code to set the isolation level of a transaction scope.
var options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Code within the transaction
    scope.Complete(); // vote to commit
}
In general, depending on the transaction isolation level specified on a transaction (or any table hints like NOLOCK), you get different levels of data locking that protect the rest of your application from activity tied up in your transaction. With a transaction isolation level of READ UNCOMMITTED, for example, you can see the writes within that transaction as they occur. This allows for dirty reads but also prevents (most) locks on data.
The other end of the scale is an isolation level like SERIALIZABLE, which ensures that your transaction activity is entirely isolated until it has committed.
In addition to the already provided advice, I would strongly recommend you look into snapshot isolation models. There is a good discussion at Using Snapshot Isolation. Enabling READ_COMMITTED_SNAPSHOT on the database can alleviate a lot of contention problems, because readers are no longer blocked by writers. Since default reads are performed under the read committed isolation level, this simple database option switch has immediate benefits and requires no changes in the app.
There is no free lunch, so this comes at a price, in this case the price being additional load on tempdb; see Row Versioning Resource Usage.
If however you are using explicit isolation levels, and especially if you use the default TransactionScope Serializable mode, then you'll have to review your code to enforce the more benign ReadCommitted isolation level. If you don't know what isolation level you use, it means you use ReadCommitted.
Yes, by default a TransactionScope will lock the tables involved in the transaction. If you need to read while a transaction is taking place, enter another TransactionScope with TransactionOptions IsolationLevel.ReadUncommitted:
var options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted;
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
{
    // read the database
    scope.Complete();
}
With a LINQ-to-SQL DataContext:
// db is DataContext
db.Transaction =
db.Connection.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
Note that there is a difference between System.Transactions.IsolationLevel and System.Data.IsolationLevel. Yes, you read that correctly.
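To make that distinction concrete, a short sketch; the two enums live in different namespaces and are not interchangeable:

// Used by TransactionScope / TransactionOptions:
var scopeLevel = System.Transactions.IsolationLevel.ReadUncommitted;

// Used by ADO.NET's BeginTransaction (and by the LINQ-to-SQL snippet above):
var adoLevel = System.Data.IsolationLevel.ReadUncommitted;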

NHibernate session.flush() fails but makes changes

We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want all logical transactions to be applied, each thread does the following:
load the current row (mapped to a type).
make the change to the value, based on the old value (e.g. add 50).
session.update(obj)
session.flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
if step 4 (flush) threw a StaleStateException, refresh the object (with LockMode.Read) and go to step 1.
we only do this a certain number of times per logical transaction; if we can't commit after X attempts, we reject the logical transaction.
each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O at manageable levels. Meaning: we have a single database transaction (per thread) with multiple flushes, at least one per logical change.
What's the problem here, you ask? Well, on commit we see changes from failed logical transactions.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 (but failed because, e.g., another thread had changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I do not have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately roll back the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by NHibernate.
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent, it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on the Hibernate SQL logging it will probably become obvious how this is happening.
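A minimal sketch of that rollback-and-retry advice, assuming a hypothetical Account entity with a mapped version column; the key point is that after a StaleStateException the session is thrown away and the logical transaction is retried on a fresh one:

for (int attempt = 0; attempt < maxAttempts; attempt++)
{
    using (var session = sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            var account = session.Get<Account>(accountId); // tracked; no Update() call needed
            account.Balance += 50;                         // change based on the freshly loaded value
            tx.Commit();                                   // flushes; the version check happens here
            return;
        }
        catch (StaleStateException)
        {
            tx.Rollback(); // session state is now suspect: dispose it and retry from scratch
        }
    }
}
throw new InvalidOperationException("Gave up after " + maxAttempts + " attempts");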
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects it not to change in the database for as long as it sits in the NHibernate session cache.
As you mentioned, you have a multi-threaded app.
This is what happens:
1st thread loads an entity
2nd thread loads an entity
1st thread changes the entity
2nd thread changes the entity and finds out that the loaded entity has been changed by something else; afraid that it might clobber the changes the 1st thread made, it throws an exception to make the programmer aware of that.
You are missing a locking mechanism. I can't say much about how to apply one properly and elegantly; maybe a Transaction would help.
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily, just for querying, at least in production code). All we had to do was force flushing to the database on insert/update, so that we could actually query some specific entities through full-text search.
We also hit StaleStateException in integration tests where we used raw ADO.NET to reset the database. The NHibernate session stayed alive through a bunch of tests, but every test tried to clean up the database without NHibernate being aware of it.
Here is the documentation on exceptions and the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html

Diagnosing Deadlocks in SQL Server 2005

We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database.
I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks, and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:
UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the Posts table. This one is always the one killed in the deadlock. Here's an example:
SELECT
[t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
[t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
[t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
[t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0
To be perfectly clear, we are not seeing write / write deadlocks, but read / write.
We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems!
Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write.
The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app.
I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date?
Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here.
We are flirting with the idea of using
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly.
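For illustration, the per-query wrapper being described might look something like this sketch (db and parentId are hypothetical):

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
};
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    var posts = db.Posts.Where(p => p.ParentId == parentId).ToList(); // the actual LINQ read
    scope.Complete();
}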
I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time.
Ideas? Thoughts?
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls?
Jeremy, we are sharing one static datacontext in the base Controller for the most part:
private DBContext _db;

/// <summary>
/// Gets the DataContext to be used by a Request's controllers.
/// </summary>
public DBContext DB
{
    get
    {
        if (_db == null)
        {
            _db = new DBContext() { SessionName = GetType().Name };
            //_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
        }
        return _db;
    }
}
Do you recommend we create a new context for every Controller, or per Page, or .. more often?
According to MSDN:
http://msdn.microsoft.com/en-us/library/ms191242.aspx
When either the READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb.

For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead.
There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure.
Try setting this option and REMOVE all NOLOCK hints from code queries unless really necessary. NOLOCK hints, or using global methods in the database context handler to combat database transaction isolation levels, are Band-Aids for the problem. NOLOCK will mask fundamental issues with our data layer and possibly lead to selecting unreliable data, whereas automatic select/update row versioning appears to be the real solution.
ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON
NOLOCK and READ UNCOMMITTED are a slippery slope. You should never use them unless you understand why the deadlock is happening first. It would worry me that you say, "We have added with (nolock) to all the SQL queries". Needing to add WITH NOLOCK everywhere is a sure sign that you have problems in your data layer.
The UPDATE statement itself looks a bit problematic. Do you determine the count earlier in the transaction, or just pull it from an object? AnswerCount = AnswerCount + 1 when an answer is added is probably a better way to handle this. Then you don't need a transaction to get the correct count and you don't have to worry about the concurrency issue you are potentially exposing yourself to.
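A sketch of that relative update, using the columns from the deadlocking statement above (parameter names are illustrative); because it never reads the old count, there is no read-then-write window:

UPDATE [dbo].[Posts]
SET [AnswerCount] = [AnswerCount] + 1,
    [LastActivityDate] = @activityDate,
    [LastActivityUserId] = @userId
WHERE [Id] = @parentId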
One easy way to get around this type of deadlock issue without a lot of work and without enabling dirty reads is to use "Snapshot Isolation Mode" (new in SQL 2005) which will always give you a clean read of the last unmodified data. You can also catch and retry deadlocked statements fairly easily if you want to handle them gracefully.
The OP's question was to ask why this problem occurred. This post hopes to answer that while leaving possible solutions to be worked out by others.
This is probably an index-related issue. For example, let's say the table Posts has a non-clustered index X which contains the ParentId and one (or more) of the fields being updated (AnswerCount, LastActivityDate, LastActivityUserId).
A deadlock occurs if the SELECT takes a shared read lock on index X to search by ParentId and then needs a shared read lock on the clustered index to get the remaining columns, while the UPDATE takes an exclusive write lock on the clustered index and then needs an exclusive write lock on index X to update it.
You now have a situation where the SELECT locked X and is trying to get the clustered index, whereas the UPDATE locked the clustered index and is trying to get X.
Of course, we'll need the OP to update his posting with more information about which indexes are in play to confirm whether this is actually the cause.
I'm pretty uncomfortable about this question and the attendant answers. There's a lot of "try this magic dust! No that magic dust!"
I can't see anywhere that you've analyzed the locks that are taken and determined what exact types of lock are deadlocking.
All you've indicated is that some locks occur -- not what is deadlocking.
In SQL 2005 you can get more info about what locks are being taken out by using:
DBCC TRACEON (1222, -1)
so that when the deadlock occurs you'll have better diagnostics.
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? I originally tried the latter approach, and from what I remember, it caused unwanted locking in the DB. I now create a new context for every atomic operation.
Before burning the house down to catch a fly with NOLOCK all over, you may want to take a look at that deadlock graph you should've captured with Profiler.
Remember that a deadlock requires (at least) 2 locks. Connection 1 has Lock A, wants Lock B - and vice-versa for Connection 2. This is an unsolvable situation, and someone has to give.
What you've shown so far is solved by simple locking, which Sql Server is happy to do all day long.
I suspect you (or LINQ) are starting a transaction with that UPDATE statement in it, and SELECTing some other piece of info beforehand. But you really need to backtrack through the deadlock graph to find the locks held by each thread, and then backtrack through Profiler to find the statements that caused those locks to be granted.
I expect that there are at least 4 statements to complete this puzzle (or a statement that takes multiple locks -- perhaps there's a trigger on the Posts table?).
Will you care if your user profile is a few seconds out of date?
Nope - that's perfectly acceptable. Setting the base transaction isolation level is probably the best/cleanest way to go.
The typical read/write deadlock comes from index-order access. The read (T1) locates the row on index A and then looks up a projected column on index B (usually the clustered index). The write (T2) changes index B (the cluster) and then has to update index A. T1 has an S-lock on A and wants an S-lock on B; T2 has an X-lock on B and wants a U-lock on A. Deadlock, puff. T1 is killed.
This is prevalent in environments with heavy OLTP traffic and just a tad too many indexes :). The solution is either to make the read not have to jump from A to B (i.e. make the needed column an included column in A, or remove the column from the projected list), or to make T2 not have to jump from B to A (don't update the indexed column).
Unfortunately, LINQ is not your friend here...
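To make the first fix concrete, a sketch of a covering index for the hypothetical index X from the answer above: with the projected columns INCLUDEd, the SELECT is satisfied entirely from the non-clustered index and never takes the second hop to the cluster, which breaks the cycle (the UPDATE still maintains the included columns, but blocking without a cycle is not a deadlock):

CREATE NONCLUSTERED INDEX IX_Posts_ParentId
ON [dbo].[Posts] ([ParentId])
INCLUDE ([AnswerCount], [LastActivityDate], [LastActivityUserId]
         /* ...plus the remaining columns the SELECT projects */);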
@Jeff - I am definitely not an expert on this, but I have had good results with instantiating a new context on almost every call. I think it's similar to creating a new Connection object on every call with ADO. The overhead isn't as bad as you would think, since connection pooling will still be used anyway.
I just use a global static helper like this:
public static class AppData
{
    /// <summary>
    /// Gets a new database context
    /// </summary>
    public static CoreDataContext DB
    {
        get
        {
            var dataContext = new CoreDataContext
            {
                DeferredLoadingEnabled = true
            };
            return dataContext;
        }
    }
}
and then I do something like this:
var db = AppData.DB;
var results = from p in db.Posts where p.ID == id select p;
And I would do the same thing for updates. Anyway, I don't have nearly as much traffic as you, but I was definitely getting some locking when I used a shared DataContext early on with just a handful of users. No guarantees, but it might be worth a try.
Update: Then again, looking at your code, you are only sharing the data context for the lifetime of that particular controller instance, which basically seems fine unless it is somehow getting used concurrently by multiple calls within the controller. In a thread on the topic, ScottGu said:
Controllers only live for a single request - so at the end of processing a request they are garbage collected (which means the DataContext is collected)...
So anyway, that might not be it, but again it's probably worth a try, perhaps in conjunction with some load testing.
Q. Why are you storing the AnswerCount in the Posts table in the first place?
An alternative approach is to eliminate the "write back" to the Posts table by not storing the AnswerCount in the table but to dynamically calculate the number of answers to the post as required.
Yes, this will mean you're running an additional query:
SELECT COUNT(*) FROM Answers WHERE post_id = @id
or more typically (if you're displaying this for the home page):
SELECT p.post_id,
p.<additional post fields>,
a.AnswerCount
FROM Posts p
INNER JOIN AnswersCount_view a
ON <join criteria>
WHERE <home page criteria>
but this typically results in an INDEX SCAN and may be more efficient in the use of resources than using READ ISOLATION.
There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues.
You definitely want READ_COMMITTED_SNAPSHOT set to on, which it is not by default. That gives you MVCC semantics. It's the same thing Oracle uses by default. Having an MVCC database is so incredibly useful, NOT using one is insane. This allows you to run the following inside a transaction:
Update USERS Set FirstName = 'foobar';
-- decide to sleep for a year
meanwhile without committing the above, everyone can continue to select from that table just fine. If you are not familiar with MVCC, you will be shocked that you were ever able to live without it. Seriously.
Setting your default to READ UNCOMMITTED is not a good idea. You will undoubtedly introduce inconsistencies and end up with a problem that is worse than what you have now. Snapshot isolation might work well, but it is a drastic change to the way SQL Server works and puts a huge load on tempdb.
Here is what you should do: use try-catch (in T-SQL) to detect the deadlock condition. When it happens, just re-run the query. This is standard database programming practice.
There are good examples of this technique in Paul Nielson's Sql Server 2005 Bible.
Here is a quick template that I use:
-- Deadlock retry template
declare @lastError int;
declare @numErrors int;
set @numErrors = 0;

LockTimeoutRetry:
begin try
    -- The query goes here

    return; -- this is the normal end of the procedure
end try
begin catch
    set @lastError = @@error;
    if @lastError = 1222 or @lastError = 1205 -- Lock timeout or deadlock
    begin
        if @numErrors >= 3 -- We hit the retry limit
        begin
            raiserror('Could not get a lock after 3 attempts', 16, 1);
            return -100;
        end;

        -- Wait and then try the transaction again
        waitfor delay '00:00:00.25';
        set @numErrors = @numErrors + 1;
        goto LockTimeoutRetry;
    end;

    -- Some other error occurred
    declare @errorMessage nvarchar(4000), @errorSeverity int;
    select @errorMessage = error_message(),
           @errorSeverity = error_severity();
    raiserror(@errorMessage, @errorSeverity, 1);
    return -100;
end catch;
One thing that has worked for me in the past is making sure all my queries and updates access resources (tables) in the same order.
That is, if one query updates tables in the order Table1, Table2 and a different query updates them in the order Table2, Table1, then you might see deadlocks.
Not sure if it's possible for you to change the order of updates since you're using LINQ. But it's something to look at.
Will you care if your user profile is a few seconds out of date?
A few seconds would definitely be acceptable. It doesn't seem like it would be that long, anyways, unless a huge number of people are submitting answers at the same time.
I agree with Jeremy on this one. You ask if you should create a new data context for each controller or per page - I tend to create a new one for every independent query.
I'm building a solution at present which used to implement the static context like you do, and when I threw tons of requests at the beast of a server (a million plus) during stress tests, I was also getting read/write locks randomly.
As soon as I changed my strategy to use a different data context at LINQ level per query, and trusted that SQL server could work its connection pooling magic, the locks seemed to disappear.
Of course I was under some time pressure, so trying a number of things all around the same time, so I can't be 100% sure that is what fixed it, but I have a high level of confidence - let's put it that way.
You should implement dirty reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
If you don't absolutely require perfect transactional integrity with your queries, you should be using dirty reads when accessing tables with high concurrency. I assume your Posts table would be one of those.
This may give you so-called "dirty reads", which is when your query acts upon data from a transaction that hasn't been committed.
We're not running a banking site here, we don't need perfect accuracy every time
Use dirty reads. You're right in that they won't give you perfect accuracy, but they should clear up your dead locking issues.
Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly
If you implement dirty reads on "the base database context", you can always wrap your individual calls using a higher isolation level if you need the transactional integrity.
So what's the problem with implementing a retry mechanism? There will always be the possibility of a deadlock occurring, so why not have some logic to identify it and just try again?
Won't at least some of the other options introduce performance penalties that are paid all the time, whereas a retry system would kick in only rarely?
Also, don't forget some sort of logging when a retry happens, so that you don't end up in a situation where rare becomes often.
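A sketch of such an application-side retry with logging, assuming plain ADO.NET (RunQuery and Log are hypothetical); 1205 is the deadlock-victim error number also used in the T-SQL template earlier:

const int maxAttempts = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        RunQuery(); // the unit of work that hits the database
        break;
    }
    catch (SqlException ex)
    {
        if (ex.Number != 1205 || attempt >= maxAttempts)
            throw; // not a deadlock, or out of retries
        Log("Deadlock victim, attempt " + attempt); // log it so "rare" stays rare
        Thread.Sleep(250); // brief back-off before retrying
    }
}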
Now that I see Jeremy's answer, I think I remember hearing that the best practice is to use a new DataContext for each data operation. Rob Conery has written several posts about DataContext, and he always news them up rather than using a singleton.
http://blog.wekeroad.com/2007/08/17/linqtosql-ranch-dressing-for-your-database-pizza/
http://blog.wekeroad.com/mvc-storefront/mvcstore-part-9/ (see comments)
Here's the pattern we used for Video.Show (link to source view in CodePlex):
using System.Configuration;

namespace VideoShow.Data
{
    public class DataContextFactory
    {
        public static VideoShowDataContext DataContext()
        {
            return new VideoShowDataContext(ConfigurationManager.ConnectionStrings["VideoShowConnectionString"].ConnectionString);
        }

        public static VideoShowDataContext DataContext(string connectionString)
        {
            return new VideoShowDataContext(connectionString);
        }
    }
}
Then at the service level (or even more granular, for updates):
private VideoShowDataContext dataContext = DataContextFactory.DataContext();

public VideoSearchResult GetVideos(int pageSize, int pageNumber, string sortType)
{
    var videos =
        from video in dataContext.Videos
        where video.StatusId == (int)VideoServices.VideoStatus.Complete
        orderby video.DatePublished descending
        select video;
    return GetSearchResult(videos, pageSize, pageNumber);
}
I would have to agree with Greg so long as setting the isolation level to read uncommitted doesn't have any ill effects on other queries.
I'd be interested to know, Jeff, how setting it at the database level would affect a query such as the following:
Begin Tran
Insert into Table (Columns) Values (Values)
Select Max(ID) From Table
Commit Tran
It's fine with me if my profile is even several minutes out of date.
Are you retrying the read after it fails? It's certainly possible, when firing a ton of random reads, that a few will hit at a moment when they can't read. Most of the applications I work with have very few writes compared to the number of reads, and I'm sure the reads are nowhere near the number you are getting.
If implementing READ UNCOMMITTED doesn't solve your problem, then it's tough to help without knowing a lot more about the processing. There may be some other tuning option that would help this behavior. Unless some MSSQL guru comes to the rescue, I recommend submitting the problem to the vendor.
I would continue to tune everything; how is the disk subsystem performing? What is the average disk queue length? If I/Os are backing up, the real problem might not be the two queries that are deadlocking; it might be another query that is bottlenecking the system. You mentioned a query taking 20 seconds that has since been tuned; are there others?
Focus on shortening the long-running queries, and I'll bet the deadlock problems will disappear.
Had the same problem, and could not use IsolationLevel = IsolationLevel.ReadUncommitted on TransactionScope because the server doesn't have MSDTC enabled (!).
That's what I did with an extension method:
public static void SetNoLock(this MyDataContext myDS)
{
    myDS.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
So, for selects on critical high-concurrency tables, we enable "nolock" like this:
using (MyDataContext myDS = new MyDataContext())
{
    myDS.SetNoLock();
    // var query = from ... my dirty queries here ...
}
Suggestions are welcome!
