We're seeing some pernicious, but rare, deadlock conditions in the Stack Overflow SQL Server 2005 database.
I attached the profiler, set up a trace profile using this excellent article on troubleshooting deadlocks, and captured a bunch of examples. The weird thing is that the deadlocking write is always the same:
UPDATE [dbo].[Posts]
SET [AnswerCount] = @p1, [LastActivityDate] = @p2, [LastActivityUserId] = @p3
WHERE [Id] = @p0
The other deadlocking statement varies, but it's usually some kind of trivial, simple read of the posts table. This one always gets killed in the deadlock. Here's an example:
SELECT
[t0].[Id], [t0].[PostTypeId], [t0].[Score], [t0].[Views], [t0].[AnswerCount],
[t0].[AcceptedAnswerId], [t0].[IsLocked], [t0].[IsLockedEdit], [t0].[ParentId],
[t0].[CurrentRevisionId], [t0].[FirstRevisionId], [t0].[LockedReason],
[t0].[LastActivityDate], [t0].[LastActivityUserId]
FROM [dbo].[Posts] AS [t0]
WHERE [t0].[ParentId] = @p0
To be perfectly clear, we are not seeing write / write deadlocks, but read / write.
We have a mixture of LINQ and parameterized SQL queries at the moment. We have added with (nolock) to all the SQL queries. This may have helped some. We also had a single (very) poorly-written badge query that I fixed yesterday, which was taking upwards of 20 seconds to run every time, and was running every minute on top of that. I was hoping this was the source of some of the locking problems!
Unfortunately, I got another deadlock error about 2 hours ago. Same exact symptoms, same exact culprit write.
The truly strange thing is that the locking write SQL statement you see above is part of a very specific code path. It's only executed when a new answer is added to a question -- it updates the parent question with the new answer count and last date/user. This is, obviously, not that common relative to the massive number of reads we are doing! As far as I can tell, we're not doing huge numbers of writes anywhere in the app.
I realize that NOLOCK is sort of a giant hammer, but most of the queries we run here don't need to be that accurate. Will you care if your user profile is a few seconds out of date?
Using NOLOCK with Linq is a bit more difficult as Scott Hanselman discusses here.
We are flirting with the idea of using
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
on the base database context so that all our LINQ queries have this set. Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly.
I guess I'm a little frustrated that trivial reads in SQL 2005 can deadlock on writes. I could see write/write deadlocks being a huge issue, but reads? We're not running a banking site here, we don't need perfect accuracy every time.
Ideas? Thoughts?
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls?
Jeremy, we are sharing one static datacontext in the base Controller for the most part:
private DBContext _db;
/// <summary>
/// Gets the DataContext to be used by a Request's controllers.
/// </summary>
public DBContext DB
{
get
{
if (_db == null)
{
_db = new DBContext() { SessionName = GetType().Name };
//_db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
return _db;
}
}
Do you recommend we create a new context for every Controller, or per Page, or .. more often?
According to MSDN:
http://msdn.microsoft.com/en-us/library/ms191242.aspx
When either the READ COMMITTED SNAPSHOT or ALLOW SNAPSHOT ISOLATION database options are ON, logical copies (versions) are maintained for all data modifications performed in the database. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb. Each version is marked with the transaction sequence number of the transaction that made the change. The versions of modified rows are chained using a linked list. The newest row value is always stored in the current database and chained to the versioned rows stored in tempdb.
For short-running transactions, a version of a modified row may get cached in the buffer pool without getting written into the disk files of the tempdb database. If the need for the versioned row is short-lived, it will simply get dropped from the buffer pool and may not necessarily incur I/O overhead.
For short-running transactions, a
version of a modified row may get
cached in the buffer pool without
getting written into the disk files of
the tempdb database. If the need for
the versioned row is short-lived, it
will simply get dropped from the
buffer pool and may not necessarily
incur I/O overhead.
There appears to be a slight performance penalty for the extra overhead, but it may be negligible. We should test to make sure.
Try setting this option and REMOVE all NOLOCKs from code queries unless really necessary. NOLOCKs, or using global methods in the database context handler to combat database transaction isolation levels, are Band-Aids to the problem. NOLOCKs will mask fundamental issues with our data layer and possibly lead to selecting unreliable data; automatic select/update row versioning appears to be the real solution.
ALTER Database [StackOverflow.Beta] SET READ_COMMITTED_SNAPSHOT ON
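To confirm the option took effect, you can check the catalog afterwards (sys.databases is a standard view; the name matches the ALTER above):
-- is_read_committed_snapshot_on = 1 once the option is active
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'StackOverflow.Beta';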
NOLOCK and READ UNCOMMITTED are a slippery slope. You should never use them unless you understand why the deadlock is happening first. It would worry me that you say, "We have added with (nolock) to all the SQL queries". Needing to add WITH NOLOCK everywhere is a sure sign that you have problems in your data layer.
The update statement itself looks a bit problematic. Do you determine the count earlier in the transaction, or just pull it from an object? AnswerCount = AnswerCount+1 when a question is added is probably a better way to handle this. Then you don't need a transaction to get the correct count and you don't have to worry about the concurrency issue that you are potentially exposing yourself to.
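As a sketch, the increment form needs no prior read at all (@ParentId is an illustrative parameter name; the columns come from the UPDATE shown in the question):
UPDATE [dbo].[Posts]
SET [AnswerCount] = [AnswerCount] + 1, -- atomic: no read-then-write race
    [LastActivityDate] = @p2,
    [LastActivityUserId] = @p3
WHERE [Id] = @ParentId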
One easy way to get around this type of deadlock issue without a lot of work and without enabling dirty reads is to use "Snapshot Isolation Mode" (new in SQL 2005) which will always give you a clean read of the last unmodified data. You can also catch and retry deadlocked statements fairly easily if you want to handle them gracefully.
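A sketch of turning that on and using it (the database name comes from elsewhere in the thread; snapshot isolation must first be enabled at the database level):
ALTER DATABASE [StackOverflow.Beta] SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then, in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT [AnswerCount] FROM [dbo].[Posts] WHERE [Id] = @p0; -- sees the last committed version, without blocking the writer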
The OP's question asked why this problem occurred. This post hopes to answer that while leaving possible solutions to be worked out by others.
This is probably an index-related issue. For example, let's say the table Posts has a non-clustered index X which contains the ParentId and one (or more) of the fields being updated (AnswerCount, LastActivityDate, LastActivityUserId).
A deadlock would occur if the SELECT takes a shared-read lock on index X to search by ParentId and then needs a shared-read lock on the clustered index to get the remaining columns, while the UPDATE takes a write-exclusive lock on the clustered index and then needs a write-exclusive lock on index X to update it.
You now have a situation where A locked X and is trying to get Y, whereas B locked Y and is trying to get X.
Of course, we'll need the OP to update his posting with more information regarding what indexes are in play to confirm if this is actually the cause.
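Listing those indexes is a one-liner (sp_helpindex is a standard system procedure; the table name comes from the question):
-- Shows every index on Posts along with its key columns
EXEC sp_helpindex 'dbo.Posts';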
I'm pretty uncomfortable with this question and the attendant answers. There's a lot of "try this magic dust! No, that magic dust!"
I can't see anywhere that you've analyzed the locks that are taken and determined which exact types of locks are deadlocked.
All you've indicated is that some locks occur -- not what is deadlocking.
In SQL 2005 you can get more info about what locks are being taken out by using:
DBCC TRACEON (1222, -1)
so that when the deadlock occurs you'll have better diagnostics.
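You can confirm the flag is active afterwards (DBCC TRACESTATUS is the standard companion command):
-- Status = 1 means the flag is enabled; -1 checks the global scope
DBCC TRACESTATUS (1222, -1);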
Are you instantiating a new LINQ to SQL DataContext object for every operation or are you perhaps sharing the same static context for all your calls? I originally tried the latter approach, and from what I remember, it caused unwanted locking in the DB. I now create a new context for every atomic operation.
Before burning the house down to catch a fly with NOLOCK all over, you may want to take a look at that deadlock graph you should've captured with Profiler.
Remember that a deadlock requires (at least) 2 locks. Connection 1 has Lock A, wants Lock B - and vice-versa for Connection 2. This is an unsolvable situation, and someone has to give.
What you've shown so far is solved by simple locking, which Sql Server is happy to do all day long.
I suspect you (or LINQ) are starting a transaction with that UPDATE statement in it, and SELECTing some other piece of info beforehand. But you really need to backtrack through the deadlock graph to find the locks held by each thread, and then backtrack through Profiler to find the statements that caused those locks to be granted.
I expect that there's at least 4 statements to complete this puzzle (or a statement that takes multiple locks - perhaps there's a trigger on the Posts table?).
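Checking for a trigger is quick (sys.triggers is a standard catalog view; the table name is from the question):
-- Any rows returned mean a trigger fires on writes to Posts
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.Posts');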
Will you care if your user profile is a few seconds out of date?
Nope - that's perfectly acceptable. Setting the base transaction isolation level is probably the best/cleanest way to go.
A typical read/write deadlock comes from index order of access. The read (T1) locates the row on index A and then looks up the projected columns on index B (usually the clustered index). The write (T2) changes index B (the cluster) and then has to update index A. T1 has an S-lock on A and wants an S-lock on B; T2 has an X-lock on B and wants a U-lock on A. Deadlock, puff. T1 is killed.
This is prevalent in environments with heavy OLTP traffic and just a tad too many indexes :). The solution is to make either the read not have to jump from A to B (i.e. include the column in A, or remove the column from the projected list) or T2 not have to jump from B to A (don't update the indexed column).
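A sketch of the first option against the SELECT shown earlier (the index name is illustrative; to fully cover that SELECT you would INCLUDE every projected column - only the updated ones are shown here for brevity; INCLUDE requires SQL Server 2005+):
-- Lets the read resolve entirely from this index, never jumping to the cluster
CREATE NONCLUSTERED INDEX IX_Posts_ParentId_Covering
ON dbo.Posts (ParentId)
INCLUDE (AnswerCount, LastActivityDate, LastActivityUserId);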
Unfortunately, linq is not your friend here...
@Jeff - I am definitely not an expert on this, but I have had good results with instantiating a new context on almost every call. I think it's similar to creating a new Connection object on every call with ADO. The overhead isn't as bad as you would think, since connection pooling will still be used anyway.
I just use a global static helper like this:
public static class AppData
{
/// <summary>
/// Gets a new database context
/// </summary>
public static CoreDataContext DB
{
get
{
var dataContext = new CoreDataContext
{
DeferredLoadingEnabled = true
};
return dataContext;
}
}
}
and then I do something like this:
var db = AppData.DB;
var results = from p in db.Posts where p.ID == id select p;
And I would do the same thing for updates. Anyway, I don't have nearly as much traffic as you, but I was definitely getting some locking when I used a shared DataContext early on with just a handful of users. No guarantees, but it might be worth giving a try.
Update: Then again, looking at your code, you are only sharing the data context for the lifetime of that particular controller instance, which basically seems fine unless it is somehow getting used concurrently by multiple calls within the controller. In a thread on the topic, ScottGu said:
Controllers only live for a single request - so at the end of processing a request they are garbage collected (which means the DataContext is collected)...
So anyway, that might not be it, but again it's probably worth a try, perhaps in conjunction with some load testing.
Q. Why are you storing the AnswerCount in the Posts table in the first place?
An alternative approach is to eliminate the "write back" to the Posts table by not storing the AnswerCount in the table but to dynamically calculate the number of answers to the post as required.
Yes, this will mean you're running an additional query:
SELECT COUNT(*) FROM Answers WHERE post_id = @id
or more typically (if you're displaying this for the home page):
SELECT p.post_id,
p.<additional post fields>,
a.AnswerCount
FROM Posts p
INNER JOIN AnswersCount_view a
ON <join criteria>
WHERE <home page criteria>
but this typically results in an INDEX SCAN and may be more efficient in the use of resources than using READ ISOLATION.
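The AnswersCount_view referenced above isn't defined in this answer; one way to make such a view cheap to read is an indexed view (a sketch under assumed schema - in particular, the PostTypeId = 2 filter assumes answers are Posts rows pointing at their question via ParentId; indexed views require SCHEMABINDING and COUNT_BIG):
CREATE VIEW dbo.AnswersCount_view
WITH SCHEMABINDING
AS
SELECT ParentId, COUNT_BIG(*) AS AnswerCount
FROM dbo.Posts
WHERE PostTypeId = 2 -- assumed: answer rows
GROUP BY ParentId;
GO
-- The unique clustered index is what materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_AnswersCount_view ON dbo.AnswersCount_view (ParentId);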
There's more than one way to skin a cat. Premature de-normalisation of a database schema can introduce scalability issues.
You definitely want READ_COMMITTED_SNAPSHOT set to on, which it is not by default. That gives you MVCC semantics. It's the same thing Oracle uses by default. Having an MVCC database is so incredibly useful, NOT using one is insane. This allows you to run the following inside a transaction:
Update USERS Set FirstName = 'foobar';
-- decide to sleep for a year.
meanwhile without committing the above, everyone can continue to select from that table just fine. If you are not familiar with MVCC, you will be shocked that you were ever able to live without it. Seriously.
Setting your default to read uncommitted is not a good idea. You will undoubtedly introduce inconsistencies and end up with a problem that is worse than what you have now. Snapshot isolation might work well, but it is a drastic change to the way SQL Server works and puts a huge load on tempdb.
Here is what you should do: use try-catch (in T-SQL) to detect the deadlock condition. When it happens, just re-run the query. This is standard database programming practice.
There are good examples of this technique in Paul Nielson's Sql Server 2005 Bible.
Here is a quick template that I use:
-- Deadlock retry template
declare @lastError int;
declare @numErrors int;
set @numErrors = 0;
LockTimeoutRetry:
begin try
    -- The query goes here
    return; -- this is the normal end of the procedure
end try
begin catch
    set @lastError = @@error;
    if @lastError = 1222 or @lastError = 1205 -- Lock timeout or deadlock
    begin
        if @numErrors >= 3 -- We hit the retry limit
        begin
            raiserror('Could not get a lock after 3 attempts', 16, 1);
            return -100;
        end;
        -- Wait and then try the transaction again
        waitfor delay '00:00:00.25';
        set @numErrors = @numErrors + 1;
        goto LockTimeoutRetry;
    end;
    -- Some other error occurred
    declare @errorMessage nvarchar(4000), @errorSeverity int;
    select @errorMessage = error_message(),
           @errorSeverity = error_severity();
    raiserror(@errorMessage, @errorSeverity, 1);
    return -100;
end catch;
One thing that has worked for me in the past is making sure all my queries and updates access resources (tables) in the same order.
That is, if one query updates in the order Table1, Table2 and a different query updates in the order Table2, Table1, then you might see deadlocks.
Not sure if it's possible for you to change the order of updates since you're using LINQ. But it's something to look at.
Will you care if your user profile is a few seconds out of date?
A few seconds would definitely be acceptable. It doesn't seem like it would be that long, anyways, unless a huge number of people are submitting answers at the same time.
I agree with Jeremy on this one. You ask if you should create a new data context for each controller or per page - I tend to create a new one for every independent query.
I'm building a solution at present which used to implement the static context like you do, and when I threw tons of requests at the beast of a server (million+) during stress tests, I was also getting read/write locks randomly.
As soon as I changed my strategy to use a different data context at LINQ level per query, and trusted that SQL server could work its connection pooling magic, the locks seemed to disappear.
Of course I was under some time pressure, so trying a number of things all around the same time, so I can't be 100% sure that is what fixed it, but I have a high level of confidence - let's put it that way.
You should implement dirty reads.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
If you don't absolutely require perfect transactional integrity with your queries, you should be using dirty reads when accessing tables with high concurrency. I assume your Posts table would be one of those.
This may give you so-called "dirty reads", which happen when your query acts on data from a transaction that hasn't been committed.
We're not running a banking site here, we don't need perfect accuracy every time
Use dirty reads. You're right in that they won't give you perfect accuracy, but they should clear up your dead locking issues.
Without that, we'd have to wrap every LINQ call we make (well, the simple reading ones, which is the vast majority of them) in a 3-4 line transaction code block, which is ugly
If you implement dirty reads on "the base database context", you can always wrap your individual calls using a higher isolation level if you need the transactional integrity.
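For instance, an individual call that does need consistency can raise its isolation level just for that statement (a sketch; the table and parameter come from the question's SELECT):
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT [AnswerCount] FROM [dbo].[Posts] WHERE [Id] = @p0;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; -- drop back to the base level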
So what's the problem with implementing a retry mechanism? There will always be the possibility of a deadlock occurring, so why not have some logic to identify it and just try again?
Won't at least some of the other options introduce performance penalties that are paid all the time, whereas a retry system kicks in only rarely?
Also, don't forget some sort of logging when a retry happens so that you don't get into that situation of rare becoming often.
Now that I see Jeremy's answer, I think I remember hearing that the best practice is to use a new DataContext for each data operation. Rob Conery's written several posts about DataContext, and he always news them up rather than using a singleton.
http://blog.wekeroad.com/2007/08/17/linqtosql-ranch-dressing-for-your-database-pizza/
http://blog.wekeroad.com/mvc-storefront/mvcstore-part-9/ (see comments)
Here's the pattern we used for Video.Show (link to source view in CodePlex):
using System.Configuration;
namespace VideoShow.Data
{
public class DataContextFactory
{
public static VideoShowDataContext DataContext()
{
return new VideoShowDataContext(ConfigurationManager.ConnectionStrings["VideoShowConnectionString"].ConnectionString);
}
public static VideoShowDataContext DataContext(string connectionString)
{
return new VideoShowDataContext(connectionString);
}
}
}
Then at the service level (or even more granular, for updates):
private VideoShowDataContext dataContext = DataContextFactory.DataContext();
public VideoSearchResult GetVideos(int pageSize, int pageNumber, string sortType)
{
var videos =
from video in dataContext.Videos
where video.StatusId == (int)VideoServices.VideoStatus.Complete
orderby video.DatePublished descending
select video;
return GetSearchResult(videos, pageSize, pageNumber);
}
I would have to agree with Greg so long as setting the isolation level to read uncommitted doesn't have any ill effects on other queries.
I'd be interested to know, Jeff, how setting it at the database level would affect a query such as the following:
Begin Tran
Insert into Table (Columns) Values (Values)
Select Max(ID) From Table
Commit Tran
It's fine with me if my profile is even several minutes out of date.
Are you re-trying the read after it fails? It's certainly possible, when firing a ton of random reads, that a few will hit at a moment when they can't read. Most of the applications that I work with have very few writes compared to the number of reads, and I'm sure the reads are nowhere near the number you are getting.
If implementing "READ UNCOMMITTED" doesn't solve your problem, then it's tough to help without knowing a lot more about the processing. There may be some other tuning option that would help this behavior. Unless some MSSQL guru comes to the rescue, I recommend submitting the problem to the vendor.
I would continue to tune everything; how is the disk subsystem performing? What is the average disk queue length? If I/Os are backing up, the real problem might not be these two queries that are deadlocking; it might be another query that is bottlenecking the system. You mentioned a query taking 20 seconds that has been tuned, but are there others?
Focus on shortening the long-running queries, and I'll bet the deadlock problems will disappear.
Had the same problem, and I cannot use IsolationLevel.ReadUncommitted on TransactionScope because the server doesn't have DTC enabled (!).
This is what I did with an extension method:
public static void SetNoLock(this MyDataContext myDS)
{
myDS.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");
}
So, for selects that use critical high-concurrency tables, we enable the "nolock" like this:
using (MyDataContext myDS = new MyDataContext())
{
myDS.SetNoLock();
// var query = from ...my dirty queries here...
}
Suggestions are welcome!
I am updating a column in a SQL table, and I want to check whether it was updated successfully or whether it had already been updated and my query didn't do anything (as reported by @@ROWCOUNT in SQL Server).
In my case, I want to update a column named lockForProcessing: if it is already being processed, my query would not affect any row, meaning someone else is already processing it; otherwise I would process it.
If I understand you correctly, your problem is related to a multi threading / concurrency problem, where the same table may be updated simultaneously.
You may want to have a look at Chapter 11, Transactions And Concurrency.
The ISession is not threadsafe!
The entity is not stored the moment the code session.SaveOrUpdate() is executed, but typically after transaction.Commit().
Stored and committed are two different things.
The entity is stored after any session.Flush(). Depending on the IsolationLevel, the entity won't be seen by other transactions.
The entity is committed after a transaction.Commit(). A commit also flushes.
Maybe all you need to do is choose the right IsolationLevel when beginning transactions and then read the table row to get the current value:
using (var transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
session.Get<MyEntity>(id); // read your row (entity type and id are placeholders)
transaction.Commit();
}
Maybe it is easier to create some locking or pipeline mechanism in your application code though. Without knowing more about who is accessing the database (other transactions, sessions, processes?) it is hard to answer more precisely.
I am using Entity Framework, and I am inserting records into our database which include a blob field. The blob field can be up to 5 MB of data.
When inserting a record into this table, does it lock the whole table?
So if you are querying any data from the table, will it block until the insert is done (I realise there are ways around this, but I am talking by default)?
How long will it take before it causes a deadlock? Will that time depend on how much load is on the server, e.g. if there is not much load, will it take longer to cause a deadlock?
Is there a way to monitor and see what is locked at any particular time?
If each thread is doing queries on single tables, is there then a case where blocking can occur? So isn't it the case that a deadlock can only occur if you have a query which has a join and is acting on multiple tables?
This is taking into account that most of my code is just a bunch of select statements, not heaps of long running transactions or anything like that.
Holy cow, you've got a lot of questions in here, heh. Here's a few answers:
When inserting a record into this table, does it lock the whole table?
Not by default, but if you use the TABLOCK hint or if you're doing certain kinds of bulk load operations, then yes.
So if you are querying any data from the table will it block until the insert is done (I realise there are ways around this, but I am talking by default)?
This one gets a little trickier. If someone's trying to select data from a page in the table that you've got locked, then yes, you'll block 'em. You can work around that with things like the NOLOCK hint on a select statement or by using Read Committed Snapshot Isolation. For a starting point on how isolation levels work, check out Kendra Little's isolation levels poster.
How long will it take before it causes a deadlock? Will that time depend on how much load is on the server, e.g. if there is not much load will it take longer to cause a deadlock?
Deadlocks aren't based on time - they're based on dependencies. Say we've got this situation:
Query A is holding a bunch of locks, and to finish his query, he needs stuff that's locked by Query B
Query B is also holding a bunch of locks, and to finish his query, he needs stuff that's locked by Query A
Neither query can move forward (think Mexican standoff) so SQL Server calls it a draw, shoots somebody's query in the back, releases his locks, and lets the other query keep going. SQL Server picks the victim based on which one will be less expensive to roll back. If you want to get fancy, you can use SET DEADLOCK_PRIORITY LOW on particular queries to paint targets on their back, and SQL Server will shoot them first.
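For example (the table name is hypothetical; SET DEADLOCK_PRIORITY is standard T-SQL):
-- This session volunteers to be the deadlock victim
SET DEADLOCK_PRIORITY LOW;
SELECT COUNT(*) FROM dbo.SomeBigTable;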
Is there a way to monitor and see what is locked at any particular time?
Absolutely - there's Dynamic Management Views (DMVs) you can query like sys.dm_tran_locks, but the easiest way is to use Adam Machanic's free sp_WhoIsActive stored proc. It's a really slick replacement for sp_who that you can call like this:
sp_WhoIsActive @get_locks = 1
For each running query, you'll get a little XML that describes all of the locks it holds. There's also a Blocking column, so you can see who's blocking who. To interpret the locks being held, you'll want to check the Books Online descriptions of lock types.
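If you'd rather query the DMV directly, a minimal starting point looks like this (standard sys.dm_tran_locks columns):
-- One row per lock held or waited on, with session, resource, and mode
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks;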
If each thread is doing queries on single tables, is there then a case where blocking can occur? So isn't it the case that a deadlock can only occur if you have a query which has a join and is acting on multiple tables?
Believe it or not, a single query can actually deadlock itself, and yes, queries can deadlock on just one table. To learn even more about deadlocks, check out The Difficulty with Deadlocks by Jeremiah Peschka.
If you have direct control over the SQL, you can force row level locking using:
INSERT INTO MyTable WITH (ROWLOCK) (Id, BigColumn)
VALUES(...)
These two answers might be helpful:
Is it possible to force row level locking in SQL Server?
Locking a table with a select in Entity Framework
To view current held locks in Management Studio, look under the server, then under Management/Activity Monitor. It has a section for locks by object, so you should be able to see whether the inserts are really causing a problem.
Deadlock errors generally return quite quickly. Deadlock states do not occur as a result of a timeout error occurring while waiting for a lock. Deadlock is detected by SQL Server by looking for cycles in the lock requests.
The best answer I can come up with is: It depends.
The best way to check is to find your connection's SPID and use sp_lock <SPID> to check whether the lock mode is X on the TAB type. You can also verify the table name with SELECT OBJECT_NAME(objid). I also like to use the query below to check for locking.
SELECT resource_type, resource_subtype, DB_NAME(resource_database_id) AS [database], resource_database_id AS dbid,
       resource_description, resource_associated_entity_id, request_mode, request_session_id,
       CASE WHEN resource_type = 'OBJECT' THEN OBJECT_NAME(resource_associated_entity_id, resource_database_id) ELSE '' END AS OBJETO
FROM sys.dm_tran_locks (NOLOCK)
WHERE request_session_id = --SPID here
In SQL Server 2008 (and later) you can disable lock escalation on the table and enforce WITH (ROWLOCK) in your insert clause, effectively forcing a row lock. This can't be done prior to SQL Server 2008 (you can write WITH (ROWLOCK), but SQL Server can choose to ignore it).
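The 2008+ syntax for disabling escalation looks like this (the table name is illustrative):
-- Prevents row/page locks from escalating to a full table lock
ALTER TABLE dbo.MyTable SET (LOCK_ESCALATION = DISABLE);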
I'm speaking in generalities here, and I don't have much experience with BLOBs, as I usually advise developers to avoid them, especially if larger than 1 MB.
I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up.
I'm getting occasional issues with SQL Server hanging, with CPU usage at 100% on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD.
Here is the Stored Proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue.
ALTER PROCEDURE [dbo].[MostPopularRead]
AS
BEGIN
SET NOCOUNT ON;
SELECT
c.ForeignId , ct.ContentSource as ContentSource
, sum(ch.HitCount * hw.Weight) as Popularity
, (sum(ch.HitCount * hw.Weight) * 100) / @Total as [Percent]
, @Total as TotalHits
from
ContentHit ch WITH (NOLOCK)
join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId
join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId
join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId
where
ch.CreatedDate between @Then and @Now
group by
c.ForeignId , ct.ContentSource
order by
sum(ch.HitCount * hw.Weight) desc
END
The stored proc reads from the table "ContentHit", which tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit records to the ContentHit table; it's pretty trivial - it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert:
BEGIN TRAN
insert into [ContentHit]
(ContentId, HitCount, HitWeightId, ContentHitComment)
values
(@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment)
COMMIT TRAN
The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select.
When I profile the issue, I see the stored proc executes for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference, the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs.
Can someone please advise how best I can solve this problem? I don't care about reading dirty data...
EDIT:
The MostPopularRead stored proc is called very infrequently - it's called on the home page of the site, but the results are cached for a day. The pattern of events that I am seeing is: when I clear the cache, multiple requests come in for the home page, and they all hit the stored proc because it hasn't yet been cached. SQL Server then maxes out, and can only be resolved by restarting the SQL Server process. When I do this, usually the proc will execute OK (in about 200 ms) and put the data back in the cache.
EDIT 2:
I've checked the execution plan, and the query looks quite sound. As I said earlier when it does run it only takes around 200ms to execute. I've added MAXDOP 1 to the select statement to force it to use only one CPU core, but I still see the issue. When I look at the wait times I see that XE_DISPATCHER_WAIT, ONDEMAND_TASK_QUEUE, BROKER_TRANSMITTER, KSOURCE_WAKEUP and BROKER_EVENTHANDLER are taking up a massive amount of wait time.
EDIT 3:
I previously thought that this was related to Subsonic, our ORM, but having switched to ADO.NET, the error still occurs.
The issue is likely concurrency, not locking. SOS_SCHEDULER_YIELD occurs when a task voluntarily yields the scheduler for other tasks to execute. During this wait the task is waiting for its quantum to be renewed.
How often is [MostPopularRead] SP called and how long does it take to execute?
The aggregation in your query might be rather CPU-intensive, especially if there are lots of data and/or ineffective indexes. So, you might end up with high CPU pressure - basically, a demand for CPU time is too high.
I'd consider the following:
Check what other queries are executing while CPU is 100% busy? Look at sys.dm_os_waiting_tasks, sys.dm_os_tasks, sys.dm_exec_requests.
Look at the query plan of [MostPopularRead], try to optimize the query. Quite often an ineffective query is the root cause of a performance problem, and query optimization is much more straightforward than other performance improvement techniques.
If the query plan is parallel and the query is often called by multiple clients simultaneously, forcing a single-thread plan with MAXDOP=1 hint might help (abundant use of parallel plans is usually indicated by SOS_SCHEDULER_YIELD and CXPACKET waits).
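For reference, the hint goes in an OPTION clause at the end of the statement (a simplified version of the proc's query, for illustration):
SELECT c.ForeignId, sum(ch.HitCount * hw.Weight) as Popularity
from ContentHit ch
join [Content] c on ch.ContentId = c.ContentId
join HitWeight hw on ch.HitWeightId = hw.HitWeightId
group by c.ForeignId
OPTION (MAXDOP 1); -- forces a serial plan for this query only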
Also, have a look at this paper: Performance tuning with wait statistics. It gives a pretty good summary of different wait types and their impact on performance.
P.S. It is easier to use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before a query than to add (nolock) to each table.
Remove the NOLOCK hint.
Open a query window in SSMS, run SET STATISTICS IO ON, and run the query from the procedure. Let it finish and post the IO statistics messages here. Then post the table definitions and all indexes defined on them. Then somebody will be able to reply with the proper indexes you need.
As with all SQL performance problems, the text of the query is largely irrelevant without the complete schema definition.
A guesstimate covering index would be:
create index ContentHitCreatedDate
on ContentHit (CreatedDate)
include (HitCount, ContentId, HitWeightId);
Update
XE_DISPATCHER_WAIT, ONDEMAND_TASK_QUEUE, BROKER_TRANSMITTER, KSOURCE_WAKEUP and BROKER_EVENTHANDLER: you can safely ignore all these waits. They show up because they represent threads parked and waiting to dispatch XEvents, Service Broker or internal SQL thread pool work items. As they spend most of their time parked and waiting, they get accounted for unrealistic wait times. Ignore them.
If you believe ContentHit to be the source of your problem, you could add a Covering Index
CREATE INDEX IX_CONTENTHIT_CONTENTID_HITWEIGHTID_HITCOUNT
ON dbo.ContentHit (ContentID, HitWeightID, HitCount)
Take a look at the Query Plan if you want to be certain about the bottleneck in your query.
By default, SQL Server uses all cores/CPUs for every query (the max degree of parallelism setting, under advanced server properties; DoP = Degree of Parallelism), which can lead to 100% CPU even if only one core is actually waiting for some I/O.
If you search the net or this site you will find resources explaining it better than I can (like monitoring your I/O even when you appear to have a CPU-bound problem).
On one server we couldn't change the application that had a bad query locking down all resources (CPU), but by setting DoP to half the number of cores we managed to keep the server from getting "stopped". The effect of queries being less parallel was negligible in our case.
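For reference, the server-wide setting can be changed like this (4 is just an example value - the suggestion above is half your core count):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- 0 means "use all cores"; any other value caps parallelism server-wide
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;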
--
Dom
Thanks to all who posted, I got some great SQL Server perf tuning tips.
In the end we ran out of time to resolve this mystery - we found a more efficient way to collect this information and cache it in the database, so that solved the problem for us.
We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want all logical transactions to occur, each thread does the following:
load the current row (mapped to a type).
make the change to the value, based on old value. (e.g. add 50).
session.update(obj)
session.flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
if step 4 (flush) threw StaleStateException, refresh the object (with lockmode.read) and goto step 1
we only do this a certain number of times per logical transaction; if we can't commit it after X attempts, we reject the logical transaction.
each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O at manageable levels - meaning we have a single database transaction (per thread) with multiple flushes, at least one per logical change.
What's the problem here, you ask? Well, on commit we see changes from failed logical transactions.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 (but failed because e.g. another thread changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I do not have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately rollback the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by nhibernate.
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent, it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on the Hibernate SQL logging it will probably become obvious how this is happening.
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects it not to change in the db as long as it's in the NHibernate session cache.
As you mentioned - you got multi thread app.
This is what happens:
1st thread loads an entity
2nd thread loads an entity
1st thread changes the entity
2nd thread changes the entity and finds out that the loaded entity has been changed by something else; afraid of clobbering the changes the 1st thread made, it throws an exception to make the programmer aware of it.
You are missing a locking mechanism. I can't tell much about how to apply that properly and elegantly; maybe a transaction would help.
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily, just for querying - at least in production code). All we had to do was force updating the db on insert/update so we could actually query something through full-text search for some specific entities.
We hit StaleStateException in integration tests when we used raw ADO.NET to reset the db. The NHibernate session was alive through a bunch of tests, but every test tried to clean up the db without NHibernate's awareness.
Here is the documentation for exceptions in the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html
I have a query that runs each night on a table with a bunch of records (200,000+). This application simply iterates over the results (using a DbDataReader in a C# app if that's relevant) and processes each one. The processing is done outside of the database altogether. During the time that the application is iterating over the results I am unable to insert any records into the table that I am querying for. The insert statements just hang and eventually timeout. The inserts are done in completely separate applications.
Does SQL Server lock the table down while a query is being done? This seems like an overly aggressive locking policy. I could understand how there could be a conflict between the query and newly inserted records, but I would be perfectly ok if records inserted after the query started were simply not included in the results.
Any ways to avoid this?
Update:
The WITH (NOLOCK) definitely did the trick. As some of you pointed out, this isn't the cleanest approach. I can't really query everything into memory given the number of records, and some of the columns in this table are binary (some records are actually about 1 MB of total data).
The other suggestion was to query for batches of records at a time. This isn't a bad idea either, but it does bring up a new issue: database-independent queries. Right now the application can work with a variety of different databases (Oracle, MySQL, Access, etc.). Each database has its own way of limiting the rows returned in a query. But maybe this is better saved for another question?
Back on topic, the "WITH (NOLOCK)" clause is certainly SQL Server specific, is there any way to keep this out of my query (and thus preventing it from working with other databases)? Maybe I could somehow specify a parameter on the DbCommand object? Or can I specify the locking policy at the database level? That is, change some properties in SQL Server itself that will prevent the table from locking like this by default?
If you're using SQL Server 2005+, then how about giving the new MVCC snapshot isolation a try. I've had good results with it:
-- substitute your database name for [YourDatabase]
ALTER DATABASE [YourDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [YourDatabase] SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE [YourDatabase] SET MULTI_USER;
It will stop readers blocking writers and vice-versa. It eliminates many deadlocks, at very little cost.
It depends what Isolation Level you are using. You might try doing your selects using the With (NoLock) hint, that will prevent the read locks, but will also mean the data being read might change before the selecting transaction completes.
The first thing you could do is try to add "WITH (NOLOCK)" to any tables you have in your query. This will tame down the locking that SQL Server does. An example of using "NOLOCK" on a join is as follows...
SELECT COUNT(Users.UserID)
FROM Users WITH (NOLOCK)
JOIN UsersInUserGroups WITH (NOLOCK) ON
Users.UserID = UsersInUserGroups.UserID
Another option is to use a dataset instead of a datareader. A datareader is a "fire hose" technique that stays connected to the tables while your program is processing, basically handling the table row by row through the hose. A dataset uses a "disconnected" methodology where all the data is loaded into memory and then the connection is closed. Your program can then loop over the data in memory without having to worry about locking. However, if this is a really large amount of data, there may be memory issues.
Hope this helps.
If you add the WITH (NOLOCK) hint after a table name in the FROM clause it should make sure it doesn't lock, and it doesn't care about reading data that is locked. You might get "out of date" results if you are writing at the same time, but if you don't care about that then you should be fine.
I reckon your best way of avoiding this is to do it in SQL rather than in the application.
You can add a
WAITFOR DELAY '000:00:01'
at the end of each loop iteration to provide time for other processes to run - just make sure that you haven't initiated a TRANSACTION such that all other processes are locked out anyway
The query is performing a table lock, thus the inserts are failing.
It sounds to me like you're keeping a lock on the table while processing the results.
You should instead load them into an array or collection of some sort, and close the database connection.
Then process the array.
In addition, while you're doing your select use either:
WITH (NOLOCK) or WITH (READPAST)
I'm not a big fan of using lock hints as you could end up with dirty reads or other weirdness. A couple of other ideas:
Can you break the number of rows down so you don't grab 200k at a time? Is there a way to tell whether you've processed a row - a flag or a timestamp you could use in the query? Your query could be 'SELECT TOP 5000 ...', getting a different 5k each time (see the sketch below). Shorter queries mean shorter-lived locks.
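A minimal sketch of that batching pattern (table, column, and parameter names are hypothetical):
-- Grab the next batch strictly after the last processed key
SELECT TOP 5000 Id, Payload
FROM dbo.WorkTable
WHERE Id > @LastProcessedId
ORDER BY Id;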
If you can use smaller sets of rows I like the DataSet vs. IDataReader idea. You will be loading data into memory and not consuming any SQL locks, but the amount of memory can cause other problems.
-Brian
You should be able to set the isolation level at the .NET level so that you don't have to include the WITH (NOLOCK) hint.
If you want to go with the batching option, you should be able to specify the rowcount setting from the .NET level, which would tell the database to only return n records at a time. By applying these settings at the .NET level, they should become database-independent and work across all the platforms.