How to use SQL Server table hints while using LINQ? - sql-server

What is the way to use SQL Server table hints like NOLOCK while using LINQ?
For example, in SQL I can write "SELECT * FROM employee (NOLOCK)".
How can we write the same using LINQ?

Here's how you can apply NOLOCK: http://www.hanselman.com/blog/GettingLINQToSQLAndLINQToEntitiesToUseNOLOCK.aspx
(Quoted for posterity; all rights reserved by Scott Hanselman):
ProductsNewViewData viewData = new ProductsNewViewData();
using (var t = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions
    {
        IsolationLevel = System.Transactions.IsolationLevel.ReadUncommitted
    }))
{
    viewData.Suppliers = northwind.Suppliers.ToList();
    viewData.Categories = northwind.Categories.ToList();
}

I STRONGLY recommend reading about SQL Server transaction isolation modes before using ReadUncommitted.
Here's a very good read
http://blogs.msdn.com/b/davidlean/archive/2009/04/06/sql-server-nolock-hint-other-poor-ideas.aspx
In many cases the Snapshot isolation level should suffice. Also, if you really need to, you can set the default transaction isolation level for your database by using
SET TRANSACTION ISOLATION LEVEL -- level here
Other good ideas include packaging your context in a wrapper that encapsulates each call with the isolation level it demands (maybe you need NOLOCK 95% of the time and Serializable 5% of the time). This can be done using extension methods, or normal methods, with code like:
viewData.Categories = northwind.Categories.AsReadCommited().ToList();
Which takes your IQueryable and does the trick mentioned by Rob.
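For illustration, here is a minimal sketch of such an extension method. The AsReadCommited name above is the poster's; everything below, including materializing the query inside the scope, is an assumption about how it could be written rather than the poster's actual code:

using System.Collections.Generic;
using System.Linq;
using System.Transactions;

public static class IsolationExtensions
{
    // Sketch: run the query inside a TransactionScope with the requested
    // isolation level and materialize the results before the scope is disposed.
    public static List<T> ToListWithIsolation<T>(this IQueryable<T> source,
        System.Transactions.IsolationLevel level)
    {
        var options = new TransactionOptions { IsolationLevel = level };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            List<T> result = source.ToList();
            scope.Complete();
            return result;
        }
    }
}

Usage for the 95% "nolock" case would then look like:
viewData.Categories = northwind.Categories
    .ToListWithIsolation(System.Transactions.IsolationLevel.ReadUncommitted);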
Hope it helps
Luke

Related

Add object and its relationships atomically in SQL Server database

Suppose I want to insert a new Experiment in my SQL Server database, using Entity framework 4.0:
Experiment has 1..* Tasks in it
Both Experiment and Task derive from EntityObject
Also, there is a database constraint that each Task must have exactly one "parent" Experiment linked to it
Insertion must be atomic. What I mean by atomic is that a reader on the database must never be able to read an Experiment that is not fully written to the database, for instance an Experiment with no Task.
All solutions I tried so far have the issue that some incomplete experiments can be read even though this lasts only a few seconds; i.e. the experiment finally gets populated with its Task quickly but not atomically.
More specifically,
my reader.exe reads all experiments in a while(true) loop and dumps any experiments that have no tasks.
In parallel, my writer.exe writes ~1000 experiments, one by one, each with one task, and saves them to the database.
I cannot find a way to write my ReadAllExperiments and WriteOneExperiment functions so that I never read incomplete experiment.
How I am supposed to do that?
PS:
I'm a newbie to databases; I tried transactions with serializable isolation level on write, manual SQL requests for reading with UPDLOCK, etc. but did not succeed in solving this problem, so I'm stuck.
What I thought was quite a basic and simple need may turn out to be an ill-posed problem?
Issue is unit tested here:
Entity Framework Code First: SaveChanges is not atomic
The following should actually perform what you are after, assuming you are not reading with READ UNCOMMITTED or a similar isolation level:
using (var ctx = new MyContext())
{
    var task = new Task();
    ctx.Tasks.Add(task);
    ctx.Experiment.Add(new Experiment { Task = task });
    ctx.SaveChanges();
}
If you are using READ UNCOMMITTED or similar, the Task will show up before the Experiment is added; however, I don't believe there should ever be a state where the Experiment can exist before the Task, given the constraint you have described.
Two solutions apparently solve our issue:
The database option "Is Read Committed Snapshot On" = True (by default, it's False).
The database option "Allow Snapshot Isolation" = True, plus the read done using the Snapshot isolation level. We had tried the read using Snapshot isolation before, but did not know about this database option. I still do not understand why we don't get an error when reading under an isolation level that is disabled for the database.
More information on http://www.codinghorror.com/blog/2008/08/deadlocked.html
or on
MSDN: http://msdn.microsoft.com/en-us/library/ms173763.aspx (search for READ_COMMITTED_SNAPSHOT)
http://msdn.microsoft.com/en-us/library/ms179599%28v=sql.105%29.aspx

How to specify hint with jpa and MS SQL

I would like one of my table to have row level locking. I am currently using JPA (hibernate) with play framework.
Is this something that needs to be specified at the DB level, or can it be done programmatically?
As a workaround I ended up setting the transaction to READ_UNCOMMITTED with the code below.
try {
    Session session = (Session) Report.em().getDelegate();
    Connection connection = session.connection();
    connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
} catch (SQLException e) {
    ...
}
This seems to suit my needs but is not exactly what I was looking for.
Furthermore, session.connection() is deprecated, so this might not be a long-term solution.

Does SQLite support transactions across multiple databases?

I've done some searching and also read the FAQ on the SQLite site, no luck finding an answer to my question.
It could very well be that my database approach is flawed, but at the moment, I would like to store my data in multiple SQLite3 databases, so that means separate files. I am very worried about data corruption due to my application possibly crashing, or a power outage in the middle of changing data in my tables.
In order to ensure data integrity, I basically need to do this:
begin transaction
modify table(s) in database #1
modify table(s) in database #2
commit, or rollback if error
Is this supported by SQLite? Also, I am using sqlite.net, specifically the latest which is based on SQLite 3.6.23.1.
UPDATE
One more question -- is this something people would usually add to their unit tests? I always unit test databases, but have never had a case like this. And if so, how would you do it? It's almost like you have to pass another parameter to the method like bool test_transaction, and if it's true, throw an exception between database accesses. Then test after the call to make sure the first set of data didn't make it into the other database. But maybe this is something that's covered by the SQLite tests, and should not appear in my test cases.
Yes, transactions work across different SQLite databases, and even between SQLite and SQL Server. I have tried it a couple of times.
Some links and info
From here - Transaction between different data sources.
Since the SQLite ADO.NET 2.0 provider supports transaction enlistment, it is possible to perform a transaction spanning not only several SQLite data sources, but also other database engines such as SQL Server.
Example:
using (DbConnection cn1 = new SQLiteConnection(" ... "))
using (DbConnection cn2 = new SQLiteConnection(" ... "))
using (DbConnection cn3 = new System.Data.SqlClient.SqlConnection(" ... "))
using (TransactionScope ts = new TransactionScope())
{
    cn1.Open(); cn2.Open(); cn3.Open();
    DoWork1(cn1);
    DoWork2(cn2);
    DoWork3(cn3);
    ts.Complete();
}
How to attach a new database:
SQLiteConnection cnn = new SQLiteConnection("Data Source=C:\\myfirstdatabase.db");
cnn.Open();
using (DbCommand cmd = cnn.CreateCommand())
{
    cmd.CommandText = "ATTACH DATABASE 'c:\\myseconddatabase.db' AS [second]";
    cmd.ExecuteNonQuery();
    cmd.CommandText = "SELECT COUNT(*) FROM main.myfirsttable INNER JOIN second.mysecondtable ON main.myfirsttable.id = second.mysecondtable.myfirstid";
    object o = cmd.ExecuteScalar();
}
Yes, SQLite explicitly supports multi-database transactions (see https://www.sqlite.org/atomiccommit.html#_multi_file_commit for technical details). However, there is a fairly large caveat. If the database file is in WAL mode, then:
Transactions that involve changes against multiple ATTACHed databases
are atomic for each individual database, but are not atomic across all
databases as a set.
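Coming back to the unit-testing part of the question: one way to sketch the "throw between the two database accesses" test, assuming NUnit and System.Data.SQLite (the file names, table name, and schema are invented for illustration):

// Assumed usings: NUnit.Framework, System, System.Data.SQLite, System.Transactions.
[Test]
public void FirstWriteRollsBackWhenTheSecondNeverHappens()
{
    Assert.Throws<InvalidOperationException>(() =>
    {
        using (var ts = new TransactionScope())
        using (var cn1 = new SQLiteConnection("Data Source=db1.sqlite"))
        using (var cn2 = new SQLiteConnection("Data Source=db2.sqlite"))
        {
            cn1.Open();
            cn2.Open();

            using (var cmd = cn1.CreateCommand())
            {
                cmd.CommandText = "INSERT INTO t1 (value) VALUES ('a')";
                cmd.ExecuteNonQuery();
            }

            // Simulated crash between the two database accesses;
            // ts.Complete() is never reached, so the insert above must roll back.
            throw new InvalidOperationException("simulated failure");
        }
    });

    using (var cn1 = new SQLiteConnection("Data Source=db1.sqlite"))
    {
        cn1.Open();
        using (var cmd = cn1.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM t1";
            Assert.AreEqual(0L, cmd.ExecuteScalar());
        }
    }
}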

Is it possible to select data while a transaction is occurring?

I am using TransactionScope to ensure that data is being written to the database correctly. However, I may have a need to select some data (from another page) while the transaction is running. Would it be possible to do this? I'm very noob when it comes to databases.
I am using LinqToSQL and SQL Server 2005(dev)/2008(prod).
Yes, it is possible to still select data from a database while a transaction is running.
Data not affected by your transaction (for instance, rows in a table that are not being updated) can usually be read from other transactions. (In certain situations SQL Server will introduce a table lock that stops reads on all rows in the table, but these are unusual and most often a symptom of something else going on in your query or on the server.)
You need to look into Transaction Isolation Levels since these control exactly how this behaviour will work.
Here is the C# code to set the isolation level of a transaction scope.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Required, options))
{
    // Code within transaction
}
In general, depending on the transaction isolation level specified on a transaction (or any table hints like NOLOCK) you get different levels of data locking that protect the rest of your application from activity tied up in your transaction. With a transaction isolation level of READUNCOMMITTED for example, you can see the writes within that transaction as they occur. This allows for dirty reads but also prevents (most) locks on data.
The other end of the scale is an isolation level like SERIALIZABLE, which ensures that your transaction activity is entirely isolated until it has committed.
In addition to the already provided advice, I would strongly recommend you look into snapshot isolation models. There is a good discussion at Using Snapshot Isolation. Enabling Read Committed Snapshot ON on the database can alleviate a lot of contention problems, because readers are no longer blocked by writers. Since default reads are performed under read committed isolation, this simple database option switch has immediate benefits and requires no changes in the app.
There is no free lunch, so this comes at a price, in this case the price being additional load on tempdb; see Row Versioning Resource Usage.
If however you are using explicit isolation levels, and especially if you use the default TransactionScope Serializable mode, then you'll have to review your code to enforce the more benign ReadCommitted isolation level. If you don't know what isolation level you use, it means you use ReadCommitted.
Yes, by default a TransactionScope will lock the tables involved in the transaction. If you need to read while a transaction is taking place, enter another TransactionScope with TransactionOptions IsolationLevel.ReadUncommitted:
var options = new TransactionOptions();
options.IsolationLevel = IsolationLevel.ReadUncommitted;
using (var scope = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    options))
{
    // read the database
}
With a LINQ-to-SQL DataContext:
// db is DataContext
db.Transaction =
db.Connection.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
Note that there is a difference between System.Transactions.IsolationLevel and System.Data.IsolationLevel. Yes, you read that correctly.

Diagnosing Deadlocks in SQL Server 2005

We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database.
I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks, and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:
UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the posts table. This one always gets killed in the deadlock. Here's an example
SELECT
    [t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
    [t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
    [t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
    [t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0
To be perfectly clear, we are not seeing write / write deadlocks, but read / write.
We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems!
Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write.
The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app.
I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date?
Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here.
We are flirting with the idea of using
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly.
I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time.
Ideas? Thoughts?
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls?
Jeremy, we are sharing one static datacontext in the base Controller for the most part:
private DBContext _db;

/// <summary>
/// Gets the DataContext to be used by a Request's controllers.
/// </summary>
public DBContext DB
{
    get
    {
        if (_db == null)
        {
            _db = new DBContext() { SessionName = GetType().Name };
            //_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
        }
        return _db;
    }
}
Do you recommend we create a new context for every Controller, or per Page, or .. more often?
According to MSDN:
http://msdn.microsoft.com/en-us/library/ms191242.aspx
When either the READ COMMITTED SNAPSHOT or ALLOW SNAPSHOT ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a link list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb.
For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead.
There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure.
Try setting this option and REMOVE all NOLOCKs from code queries unless it’s really necessary. NOLOCKs or using global methods in the database context handler to combat database transaction isolation levels are Band-Aids to the problem. NOLOCKS will mask fundamental issues with our data layer and possibly lead to selecting unreliable data, where automatic select / update row versioning appears to be the solution.
ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON
NOLOCK and READ UNCOMMITTED are a slippery slope. You should never use them unless you understand why the deadlock is happening first. It would worry me that you say, "We have added with (nolock) to all the SQL queries". Needing to add WITH NOLOCK everywhere is a sure sign that you have problems in your data layer.
The update statement itself looks a bit problematic. Do you determine the count earlier in the transaction, or just pull it from an object? AnswerCount = AnswerCount + 1 when an answer is added is probably a better way to handle this. Then you don't need a transaction to get the correct count, and you don't have to worry about the concurrency issue that you are potentially exposing yourself to.
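As a hedged sketch of that suggestion (db, userId, and questionId are placeholders, not the site's actual code; the column names come from the quoted UPDATE), the count can be incremented in place through DataContext.ExecuteCommand:

// The UPDATE runs as a single atomic statement, so no prior read of the old
// count and no explicit transaction are needed for the increment.
db.ExecuteCommand(
    @"UPDATE dbo.Posts
      SET AnswerCount = AnswerCount + 1,
          LastActivityDate = {0},
          LastActivityUserId = {1}
      WHERE Id = {2}",
    DateTime.UtcNow, userId, questionId);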
One easy way to get around this type of deadlock issue without a lot of work and without enabling dirty reads is to use "Snapshot Isolation Mode" (new in SQL 2005) which will always give you a clean read of the last unmodified data. You can also catch and retry deadlocked statements fairly easily if you want to handle them gracefully.
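For the catch-and-retry side, here is a hedged application-level sketch. SQL Server reports the deadlock victim as error 1205; the helper name and retry policy below are made up for illustration:

using System;
using System.Data.SqlClient;
using System.Threading;

public static class DeadlockRetry
{
    // Re-runs the supplied query if it was chosen as a deadlock victim (error 1205).
    public static T Run<T>(Func<T> query, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return query();
            }
            catch (SqlException ex)
            {
                if (ex.Number != 1205 || attempt >= maxAttempts)
                    throw;
                Thread.Sleep(250); // brief back-off before retrying
            }
        }
    }
}

Usage with a LINQ to SQL query (db and Posts as in the question):
var answers = DeadlockRetry.Run(() => db.Posts.Where(p => p.ParentId == id).ToList(), 3);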
The OP question was to ask why this problem occured. This post hopes to answer that while leaving possible solutions to be worked out by others.
This is probably an index-related issue. For example, let's say the table Posts has a non-clustered index X which contains the ParentId and one (or more) of the field(s) being updated (AnswerCount, LastActivityDate, LastActivityUserId).
A deadlock would occur if the SELECT cmd takes a shared read lock on index X to search by ParentId and then needs a shared read lock on the clustered index to get the remaining columns, while the UPDATE cmd takes an exclusive write lock on the clustered index and needs an exclusive write lock on index X to update it.
You now have a situation where A locked X and is trying to get Y whereas B locked Y and is trying to get X.
Of course, we'll need the OP to update his posting with more information regarding what indexes are in play to confirm if this is actually the cause.
I'm pretty uncomfortable about this question and the attendant answers. There's a lot of "try this magic dust! No that magic dust!"
I can't see anywhere that you've analyzed the locks that are taken, and determined what exact types of locks are deadlocked.
All you've indicated is that some locks occur -- not what is deadlocking.
In SQL 2005 you can get more info about what locks are being taken out by using:
DBCC TRACEON (1222, -1)
so that when the deadlock occurs you'll have better diagnostics.
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? I originally tried the latter approach, and from what I remember, it caused unwanted locking in the DB. I now create a new context for every atomic operation.
Before burning the house down to catch a fly with NOLOCK all over, you may want to take a look at that deadlock graph you should've captured with Profiler.
Remember that a deadlock requires (at least) 2 locks. Connection 1 has Lock A, wants Lock B - and vice-versa for Connection 2. This is an unsolvable situation, and someone has to give.
What you've shown so far is solved by simple locking, which Sql Server is happy to do all day long.
I suspect you (or LINQ) are starting a transaction with that UPDATE statement in it, and SELECTing some other piece of info beforehand. But you really need to backtrack through the deadlock graph to find the locks held by each thread, and then backtrack through Profiler to find the statements that caused those locks to be granted.
I expect that there's at least 4 statements to complete this puzzle (or a statement that takes multiple locks - perhaps there's a trigger on the Posts table?).
Will you care if your user profile is a few seconds out of date?
Nope - that's perfectly acceptable. Setting the base transaction isolation level is probably the best/cleanest way to go.
Typical read/write deadlock comes from index order access. Read (T1) locates the row on index A and then looks up projected column on index B (usually clustered). Write (T2) changes index B (the cluster) then has to update the index A. T1 has S-Lck on A, wants S-Lck on B, T2 has X-Lck on B, wants U-Lck on A. Deadlock, puff. T1 is killed.
This is prevalent in environments with heavy OLTP traffic and just a tad too many indexes :). The solution is to make either the read not have to jump from A to B (i.e. make the column an included column in A, or remove the column from the projected list), or T2 not have to jump from B to A (don't update the indexed column).
Unfortunately, linq is not your friend here...
@Jeff - I am definitely not an expert on this, but I have had good results with instantiating a new context on almost every call. I think it's similar to creating a new Connection object on every call with ADO. The overhead isn't as bad as you would think, since connection pooling will still be used anyway.
I just use a global static helper like this:
public static class AppData
{
    /// <summary>
    /// Gets a new database context
    /// </summary>
    public static CoreDataContext DB
    {
        get
        {
            var dataContext = new CoreDataContext
            {
                DeferredLoadingEnabled = true
            };
            return dataContext;
        }
    }
}
and then I do something like this:
var db = AppData.DB;
var results = from p in db.Posts where p.ID == id select p;
And I would do the same thing for updates. Anyway, I don't have nearly as much traffic as you, but I was definitely getting some locking when I used a shared DataContext early on with just a handful of users. No guarantees, but it might be worth giving a try.
Update: Then again, looking at your code, you are only sharing the data context for the lifetime of that particular controller instance, which basically seems fine unless it is somehow getting used concurrently by multiple calls within the controller. In a thread on the topic, ScottGu said:
Controllers only live for a single request - so at the end of processing a request they are garbage collected (which means the DataContext is collected)...
So anyway, that might not be it, but again it's probably worth a try, perhaps in conjunction with some load testing.
Q. Why are you storing the AnswerCount in the Posts table in the first place?
An alternative approach is to eliminate the "write back" to the Posts table by not storing the AnswerCount in the table but to dynamically calculate the number of answers to the post as required.
Yes, this will mean you're running an additional query:
SELECT COUNT(*) FROM Answers WHERE post_id = @id
or more typically (if you're displaying this for the home page):
SELECT p.post_id,
p.<additional post fields>,
a.AnswerCount
FROM Posts p
INNER JOIN AnswersCount_view a
ON <join criteria>
WHERE <home page criteria>
but this typically results in an INDEX SCAN and may be more efficient in the use of resources than using READ ISOLATION.
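Expressed through LINQ to SQL, that first query is just an aggregate (a sketch; the Answers entity and PostId property are assumed from the pseudo-schema above):

// Translates to a server-side SELECT COUNT(*) ... WHERE post_id = @id.
int answerCount = db.Answers.Count(a => a.PostId == id);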
There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues.
You definitely want READ_COMMITTED_SNAPSHOT set to on, which it is not by default. That gives you MVCC semantics. It's the same thing Oracle uses by default. Having an MVCC database is so incredibly useful, NOT using one is insane. This allows you to run the following inside a transaction:
Update USERS Set FirstName = 'foobar';
//decide to sleep for a year.
meanwhile without committing the above, everyone can continue to select from that table just fine. If you are not familiar with MVCC, you will be shocked that you were ever able to live without it. Seriously.
Setting your default to read uncommitted is not a good idea. You will undoubtedly introduce inconsistencies and end up with a problem that is worse than what you have now. Snapshot isolation might work well, but it is a drastic change to the way SQL Server works and puts a huge load on tempdb.
Here is what you should do: use try-catch (in T-SQL) to detect the deadlock condition. When it happens, just re-run the query. This is standard database programming practice.
There are good examples of this technique in Paul Nielson's Sql Server 2005 Bible.
Here is a quick template that I use:
-- Deadlock retry template
declare @lastError int;
declare @numErrors int;
set @numErrors = 0;

LockTimeoutRetry:
begin try;
    -- The query goes here

    return; -- this is the normal end of the procedure
end try
begin catch
    set @lastError = @@error;
    if @lastError = 1222 or @lastError = 1205 -- Lock timeout or deadlock
    begin;
        if @numErrors >= 3 -- We hit the retry limit
        begin;
            raiserror('Could not get a lock after 3 attempts', 16, 1);
            return -100;
        end;

        -- Wait and then try the transaction again
        waitfor delay '00:00:00.25';
        set @numErrors = @numErrors + 1;
        goto LockTimeoutRetry;
    end;

    -- Some other error occurred
    declare @errorMessage nvarchar(4000), @errorSeverity int;
    select @errorMessage = error_message(),
           @errorSeverity = error_severity();
    raiserror(@errorMessage, @errorSeverity, 1);
    return -100;
end catch;
One thing that has worked for me in the past is making sure all my queries and updates access resources (tables) in the same order.
That is, if one query updates in order Table1, Table2 and a different query updates it in order of Table2, Table1 then you might see deadlocks.
Not sure if it's possible for you to change the order of updates since you're using LINQ. But it's something to look at.
Will you care if your user profile is a few seconds out of date?
A few seconds would definitely be acceptable. It doesn't seem like it would be that long, anyways, unless a huge number of people are submitting answers at the same time.
I agree with Jeremy on this one. You ask if you should create a new data context for each controller or per page - I tend to create a new one for every independent query.
I'm building a solution at present which used to implement the static context like you do, and when I threw tons of requests at the beast of a server (million+) during stress tests, I was also getting read/write locks randomly.
As soon as I changed my strategy to use a different data context at LINQ level per query, and trusted that SQL server could work its connection pooling magic, the locks seemed to disappear.
Of course I was under some time pressure, so trying a number of things all around the same time, so I can't be 100% sure that is what fixed it, but I have a high level of confidence - let's put it that way.
You should implement dirty reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
If you don't absolutely require perfect transactional integrity with your queries, you should be using dirty reads when accessing tables with high concurrency. I assume your Posts table would be one of those.
This may give you so-called "dirty reads", which is when your query acts upon data from a transaction that hasn't been committed.
We're not running a banking site here, we don't need perfect accuracy every time
Use dirty reads. You're right in that they won't give you perfect accuracy, but they should clear up your dead locking issues.
Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly
If you implement dirty reads on "the base database context", you can always wrap your individual calls using a higher isolation level if you need the transactional integrity.
So what's the problem with implementing a retry mechanism? There will always be the possibility of a deadlock occurring, so why not have some logic to identify it and just try again?
Won't at least some of the other options introduce performance penalties that are taken all the time when a retry system will kick in rarely?
Also, don't forget some sort of logging when a retry happens so that you don't get into that situation of rare becoming often.
Now that I see Jeremy's answer, I think I remember hearing that the best practice is to use a new DataContext for each data operation. Rob Conery's written several posts about DataContext, and he always news them up rather than using a singleton.
http://blog.wekeroad.com/2007/08/17/linqtosql-ranch-dressing-for-your-database-pizza/
http://blog.wekeroad.com/mvc-storefront/mvcstore-part-9/ (see comments)
Here's the pattern we used for Video.Show (link to source view in CodePlex):
using System.Configuration;
namespace VideoShow.Data
{
    public class DataContextFactory
    {
        public static VideoShowDataContext DataContext()
        {
            return new VideoShowDataContext(ConfigurationManager.ConnectionStrings["VideoShowConnectionString"].ConnectionString);
        }

        public static VideoShowDataContext DataContext(string connectionString)
        {
            return new VideoShowDataContext(connectionString);
        }
    }
}
Then at the service level (or even more granular, for updates):
private VideoShowDataContext dataContext = DataContextFactory.DataContext();

public VideoSearchResult GetVideos(int pageSize, int pageNumber, string sortType)
{
    var videos =
        from video in dataContext.Videos
        where video.StatusId == (int)VideoServices.VideoStatus.Complete
        orderby video.DatePublished descending
        select video;
    return GetSearchResult(videos, pageSize, pageNumber);
}
I would have to agree with Greg so long as setting the isolation level to read uncommitted doesn't have any ill effects on other queries.
I'd be interested to know, Jeff, how setting it at the database level would affect a query such as the following:
Begin Tran
Insert into Table (Columns) Values (Values)
Select Max(ID) From Table
Commit Tran
It's fine with me if my profile is even several minutes out of date.
Are you re-trying the read after it fails? It's certainly possible, when firing a ton of random reads, that a few will hit at a moment when they can't read. Most of the applications that I work with have very few writes compared to the number of reads, and I'm sure the reads are nowhere near the number you are getting.
If implementing "READ UNCOMMITTED" doesn't solve your problem, then it's tough to help without knowing a lot more about the processing. There may be some other tuning option that would help this behavior. Unless some MSSQL guru comes to the rescue, I recommend submitting the problem to the vendor.
I would continue to tune everything; how is the disk subsystem performing? What is the average disk queue length? If I/Os are backing up, the real problem might not be these two queries that are deadlocking; it might be another query that is bottlenecking the system. You mentioned a query taking 20 seconds that has been tuned; are there others?
Focus on shortening the long-running queries, I'll bet the deadlock problems will disappear.
Had the same problem, and cannot use IsolationLevel = IsolationLevel.ReadUncommitted on TransactionScope because the server doesn't have distributed transactions (DTC) enabled (!).
That's what I did with an extension method:
public static void SetNoLock(this MyDataContext myDS)
{
    myDS.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
So, for selects that hit critical high-concurrency tables, we enable "nolock" like this:
using (MyDataContext myDS = new MyDataContext())
{
    myDS.SetNoLock();

    // var query = from ... my dirty queries here ...
}
Suggestions are welcome!
