I have a deadlock problem, and I found that it's caused by two stored procedures called by different threads (two web service calls):
An insert SP that inserts data into table X.
A delete SP that deletes data from table X.
The deadlock report tells me the deadlock happens on a non-unique, non-clustered index of table X. Do you have any idea how to solve this problem?
Update
Based on the Read/Write Deadlock article, I think the error happens because of the following statements.
The insert statement takes the id (clustered index) first and then the non-clustered index.
The delete statement takes the non-clustered index before the id.
So I need to select the id for the delete statement like this:
SELECT id FROM X WITH(NOLOCK) WHERE [condition]
PS: Both stored procedures are called inside a transaction.
Thanks,
We'd have to see some kind of code... you mention a transaction; what isolation level is it at? One thing to try is adding the (UPDLOCK) hint to any query that you use to find the row (or check existence); so you'll take out a write lock (rather than a read lock) from the start.
When contested, this should then cause (very brief) blocking rather than a deadlock.
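For example, inside the delete procedure (a sketch only; [condition] is the placeholder from the question and @id is a hypothetical local variable):
DECLARE @id int;
-- Take an update lock while locating the row instead of a shared lock;
-- UPDLOCK is held until the surrounding transaction ends
SELECT @id = id FROM X WITH (UPDLOCK) WHERE [condition];
DELETE FROM X WHERE id = @id;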
Do the stored procedures modify anything, or just do reads? If they modify something, are there WHERE clauses on the updates so that they're sufficiently granular? If you can update the rows in smaller batches, SQL Server is less likely to deadlock, since it will only lock small portions of the index instead of the index as a whole.
If it's possible, can you post the code here that's deadlocking? If the stored procedures are too long, can you post the offending statements within them (if you know which they are)?
Without the deadlock info this is more of a guess than a proper answer... It could be an index access order issue similar to the read-write deadlock.
It could be that the select queries are the actual problem, especially if they are the same tables in both stored procedures, but in a different order. It's important to remember that reading from a table will create (shared) locks. You might want to read up on lock types.
The same can happen at the index level, as Remus posted about. The article he linked offers a good explanation but, unfortunately, no silver-bullet solution, because there isn't a single best solution for every case.
I'm not exactly an expert in this field myself, but using lock hints you may be able to ensure that the same resources get locked in the same order, preventing a deadlock. You will probably need more information from your testers to solve this effectively, though.
The quick way to get your application back doing what it's supposed to is to detect the deadlock error (1205) and rerun the transaction. Code for this can be found in the "TRY...CATCH" section of Books Online.
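A minimal sketch of that pattern (the procedure name and retry count are placeholders, not from the original post):
DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        EXEC dbo.YourDeadlockingProc;   -- hypothetical: the call that deadlocks
        BREAK;                          -- succeeded, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 1205       -- anything other than a deadlock: rethrow
            THROW;                      -- (SQL Server 2012+; use RAISERROR on 2008)
        SET @retries = @retries - 1;    -- deadlock victim: try again
    END CATCH
END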
If you're deleting and inserting, then that's affecting the clustered index, and every non-clustered index on the table also needs to have an insert/delete. So it's definitely very possible for deadlocks to occur. I would start by looking at what your clustered indexes are - try having a surrogate key for your clustered index, for example.
Unfortunately it's almost impossible to completely solve your deadlock problem without more information.
Rob
Related
I have a big heap table with 17,093,139 rows. This table is the most heavily used table in the database. Since this is a heap table, there are only non-clustered indexes on it. I rebuild/reorganize fragmented indexes on this table regularly. These days we are facing an issue very regularly:
Lots of queries accessing this table suddenly start taking longer than usual. When I check, I see that the execution plans for the queries have changed. Creating and dropping a random non-clustered index fixes the issue. What I don't get is what causes this sudden slowness at random times, and what creating and dropping an index does in the background that the index rebuild job doesn't. I need to find what exactly triggers these slowdowns so that a permanent solution can be found; I can't just keep creating and dropping an index to fix the issue every time. Any help here would be greatly appreciated.
I would start by trying to find out what is changing in the query plan, as you say, and then try to understand why it's changing. It could be parallelism, or it could be that an improper query plan is selected due to the parameters used. You could find the query plans and delete them all so that an old one is not reused. If you find that new query plans are always generated, look into parameter sniffing. If the indexes are always getting fragmented, ask why: if you are using a GUID for the primary key, that can definitely increase fragmentation in the table. I always try to use integers for the primary key. Hope some of this helps with your debugging. Good luck :)
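For example, to see which cached plans touch the table before deciding what to clear (the table name is a placeholder):
-- List cached plans whose text references the table
SELECT qs.plan_handle, qs.creation_time, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%YourHeapTable%';
-- Evict a single suspect plan rather than flushing the whole cache:
-- DBCC FREEPROCCACHE (plan_handle_from_the_query_above);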
It sounds like you are suffering from index fragmentation, which builds up as you insert or update data in your table. This article suggests methods for Fixing Index Fragmentation.
Hope this helps.
Suppose I have a table which contains relevant information. However, the data is only relevant for, let's say, 30 minutes.
After that it's just database junk, so I need to get rid of it asap.
If I wanted, I could clean this table periodically, setting an expiration datetime for each record individually and deleting expired records through a job or something. This is my #1 option, and it's what will be done unless someone convinces me otherwise.
But I think this solution may be problematic. What if someone stops the job from running and no one notices? I'm looking for something like a built-in way to insert temporary data into a table. Or a table with "volatile" data, one that automagically removes data a set amount of time after it is inserted.
And last but not least, if there's no built-in way to do that, could I implement this functionality myself in SQL Server 2008 (or 2012; we will be migrating soon)? If so, could someone give me directions as to what to look for to implement something like it?
(Sorry if the formatting ends up bad, first time using a smartphone to post on SO)
As another answer indicated, TRUNCATE TABLE is a fast way to remove the contents of a table, but it's aggressive; it will completely empty the table. Also, there are restrictions on its use; among others, it can't be used on tables which "are referenced by a FOREIGN KEY constraint".
Any more targeted removal of rows will require a DELETE statement with a WHERE clause. Having an index on relevant criteria fields (such as the insertion date) will improve performance of the deletion and might be a good idea (depending on its effect on INSERT and UPDATE statements).
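A sketch of such a deletion (the table and the InsertedAt column are assumptions, matching the 30-minute window from the question):
-- Delete expired rows in small batches to limit lock and log impact
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.VolatileData
    WHERE InsertedAt < DATEADD(MINUTE, -30, GETDATE());
    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to clean up
END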
You will need something to "trigger" the DELETE statement (or TRUNCATE statement). As you've suggested, a SQL Server Agent job is an obvious choice, but you are worried about the job being disabled or removed. Any solution will be vulnerable to someone removing your work, but there are more obscure ways to trigger an activity than a job. You could embed the deletion into the insertion process: either in whatever stored procedure or application code you have, or as an actual table trigger. Both of those methods increase the time required for an INSERT and, because they are not handled out of band by the SQL Server Agent, will require your users to wait slightly longer. If you have the right indexes and the table is reasonably sized, that might be an acceptable trade-off.
There isn't any other capability that I'm aware of for SQL Server to just start deleting data. There isn't automatic data retention policy enforcement.
See @Yuriy's comment; it's relevant.
If you really need to implement it DB-side...
TRUNCATE TABLE is a fast way to get rid of records.
If all you need is ONE table that you fill with data, use, and dispose of ASAP, you can consider truncating a (permanent) "CACHE_TEMP" table.
The scenario becomes more complicated if you are running concurrent threads/jobs and each is handling its own data.
If the data only exists for a single "job"/context, you can consider using #TEMP tables. They are volatile by nature and may be what you are looking for.
You may also be able to use table variables; they are even more short-lived than temporary tables, but it depends on details you haven't posted, so I can't say which is really better.
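Roughly the difference, as a sketch (names are illustrative):
-- #TEMP table: lives in tempdb, visible to the creating session/procedure,
-- dropped automatically when that scope ends
CREATE TABLE #WorkSet (ID int PRIMARY KEY, Payload nvarchar(200));
INSERT INTO #WorkSet (ID, Payload) VALUES (1, N'only lives for this job');
-- Table variable: scoped to the batch/procedure, even more short-lived
DECLARE @WorkSet TABLE (ID int PRIMARY KEY, Payload nvarchar(200));
INSERT INTO @WorkSet (ID, Payload) VALUES (1, N'gone at the end of the batch');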
I have a website that has a very popular forum on it and occasionally throughout the day I see several deadlocks happening between two identical (minus the data within them) update statements on the same forum. I'm not exactly sure why this is happening on this query as there are many other queries on the site that run with high concurrency without issue.
The query between the two processes is nearly identical; the deadlock graph shows it as:
update [Forum] set [DateModified] = @DateModified, [LatestLocalThreadID] = @LatestLocalThreadID where ID = 310
Can anyone shed any light on what could be causing this?
This is because there is a foreign key to ForumThreads that generates an S-lock when you set LatestLocalThreadID (to make sure the row still exists when the statement completes). A possible fix would be to prefix the update statement with
SELECT *
FROM ForumThreads WITH (XLOCK, ROWLOCK, HOLDLOCK)
WHERE ID = @LatestLocalThreadID
in order to X-lock on that. You can also try UPDLOCK as a less aggressive mode. This can of course cause deadlocks in other places, but it is the best first try.
Basically, deadlocks are prevented by always accessing the objects (tables, pages, rows) in the same order. In your example, one process accesses Forum first and ForumThreads second, and another thread does it vice versa. An update usually first searches for the rows to update and uses S-locks during the search. The rows it has identified as needing change are then locked with X-locks, and then the actual change happens.
The quick and dirty solution might be to BEGIN TRAN, then lock the objects in the order you need, do the update, and COMMIT, which releases the locks again. But this will bring down the overall throughput of your website because of blocking locks.
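Roughly like this, using the tables from the question (@DateModified and @LatestLocalThreadID are the parameters from the update in the question; the hints are assumptions, so treat this as a sketch, not a drop-in fix):
BEGIN TRAN;
-- Touch the rows in the same fixed order in every code path:
-- ForumThreads first, then Forum (or the reverse, as long as it is consistent)
SELECT ID FROM ForumThreads WITH (UPDLOCK, ROWLOCK) WHERE ID = @LatestLocalThreadID;
SELECT ID FROM Forum        WITH (UPDLOCK, ROWLOCK) WHERE ID = 310;
UPDATE Forum
SET DateModified = @DateModified,
    LatestLocalThreadID = @LatestLocalThreadID
WHERE ID = 310;
COMMIT;   -- releases the locks again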
The better way is to identify the two statements (you might edit your question and give us the other one when you found it) and the execution plan of them. It should be possible to rewrite the transactions somehow to access all objects in the same order - and prevent the deadlock.
I have a column I'm doing a LIKE '%X%' query on. I did it this way for simplicity, knowing that I'd have to revisit it as the amount of data grew.
However, one thing I did not expect was for it to lock the table, but it appears to be doing so. The query is slow, and if the query is running, other queries will not finish until it's done.
What I'm asking for are some general ideas as to why this might be happening, and advice on how I can drill down into SQL Server to get a better feel for exactly what's going on with respect to the locks.
I should mention that all of the queries are SELECT except for a service I have waking every 60 seconds to check for various things and potentially INSERT rows. This table is never updated, CRD only, not CRUD, but the issue is consistent rather than intermittent.
This is happening because LIKE '%X%' will force a complete scan. If there is an index that it can use, then the engine will scan the index, but it will need to read every row. If there is no nonclustered index, then the engine will perform a clustered index scan, or a table scan if the table is a heap.
This happens because of the LIKE %somevalue% construction. If you had a phonebook, say, and you were asked to find everyone with an X somewhere in the middle of their name, you'd have to read every entry in the index. If, on the other hand, it was LIKE 'X%', you'd know you only had to look at the entries beginning with 'X'.
Since it has to scan the entire table, it is likely to escalate the rowlocks to a table lock - this is not a flaw, SQL Server is almost always right when it determines that it would be more efficient for it to place a single lock on the table than 100,000 row locks, but it does block write operations.
My suggestion is that you try to find a better query than one involving LIKE '%X%'.
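To see the difference (the table and column names are placeholders):
-- Leading wildcard: every row (or every index entry) has to be read
SELECT ID FROM dbo.Posts WHERE Title LIKE '%X%';
-- Anchored at the start: an index on Title can be used for a seek, far fewer locks taken
SELECT ID FROM dbo.Posts WHERE Title LIKE 'X%';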
I have a stored procedure that performs a join of TableB to TableA:
SELECT <--- Nested Loop <--- TableA
                        <--- TableB
At the same time, in a transaction, rows are inserted into TableA, and then into TableB.
This situation is occasionally causing deadlocks, as the stored procedure select grabs rows from TableB, while the insert adds rows to TableA, and then each wants the other to let go of the other table:
INSERT      SELECT
=========   =========
Lock A      Lock B
Insert A    Select B
Want B      Want A
     ....deadlock...
Logic requires the INSERT to first add rows to A and then to B, while I personally don't care about the order in which SQL Server performs its join, as long as it joins.
The common recommendation for fixing deadlocks is to ensure that everyone accesses resources in the same order. But in this case SQL Server's optimizer is telling me that the opposite order is "better". I can force another join order and have a worse-performing query.
But should I?
Should I override the optimizer, now and forever, with a join order that I want it to use?
Or should I just trap native error 1205 and resubmit the SELECT statement?
The question isn't how much worse the query might perform when I override the optimizer and force it to do something non-optimal. The question is: is it better to automatically retry, rather than running worse queries?
It is better to automatically retry deadlocks. The reason is that you may fix this deadlock only to hit another one later. The behavior may change between SQL releases, if the size of the tables changes, if the server hardware specifications change, and even if the load on the server changes. If the deadlock is frequent, you should take active steps to eliminate it (an index is usually the answer), but for rare deadlocks (say every 10 minutes or so), a retry in the application can mask the deadlock. You can retry reads or writes, since the writes are, of course, surrounded by proper BEGIN TRANSACTION/COMMIT TRANSACTION to keep all write operations atomic, and hence they can be retried without problems.
Another avenue to consider is turning on read committed snapshot. When this is enabled, SELECT will simply not take any locks, yet yield consistent reads.
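Turning it on is a single database-level setting (the database name is a placeholder; the option needs a moment with no other active connections):
-- Existing READ COMMITTED queries start using row versions instead of shared locks
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;
-- Append WITH ROLLBACK IMMEDIATE if other sessions are blocking the change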
To avoid deadlocks, one of the most common recommendations is "to acquire locks in the same order" or "access objects in the same order". Clearly this makes perfect sense, but is it always feasible? Is it always possible? I keep encountering cases when I cannot follow this advice.
If I store an object in one parent table and one or more child ones, I cannot follow this advice at all. When inserting, I need to insert my parent row first. When deleting, I have to do it in the opposite order.
If I use commands that touch multiple tables or multiple rows in one table, then usually I have no control in which order locks are acquired, (assuming that I am not using hints).
So, in many cases trying to acquire locks in the same order does not prevent all deadlocks. So, we need some kind of handling deadlocks anyway - we cannot assume that we can eliminate them all. Unless, of course, we serialize all access using Service Broker or sp_getapplock.
When we retry after deadlocks, we are very likely to overwrite other processes' changes. We need to be aware that it is very likely someone else modified the data we intended to modify. Especially if all the readers run under snapshot isolation, readers cannot be involved in deadlocks, which means that all the parties involved in a deadlock are writers that modified, or attempted to modify, the same data. If we just catch the exception and automatically retry, we can overwrite someone else's changes.
These are called lost updates, and they are usually wrong. Typically the right thing to do after a deadlock is to retry on a much higher level: re-select the data and decide whether to save in the same way the original decision to save was made.
For example, if a user pushed a Save button and the saving transaction was chosen as a deadlock victim, it might be a good idea to re-display the data on the screen as it is after the deadlock.
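One way to make the save step notice that someone else got there first is an optimistic check, for example against a rowversion column (a sketch; the table and column names are assumptions, not from the original post):
-- @rowVerReadEarlier holds the RowVer value read when the data was displayed
UPDATE dbo.Documents
SET Body = @newBody
WHERE ID = @id
  AND RowVer = @rowVerReadEarlier;
IF @@ROWCOUNT = 0
    -- the row changed (or disappeared) since we read it: re-select and re-display
    SELECT ID, Body, RowVer FROM dbo.Documents WHERE ID = @id;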
Trapping and rerunning can work, but are you sure that the SELECT is always the deadlock victim? If the insert is the deadlock victim, you'll have to be much more careful about retrying.
The easiest solution in this case, I think, is to add NOLOCK or READUNCOMMITTED (same thing) to your select. People have justifiable concerns about dirty reads, but we've run NOLOCK all over the place for higher concurrency for years and have never had a problem.
I'd also do a little more research into lock semantics. For example, I believe if you set transaction isolation level to snapshot (requires 2005 or later) your problems go away.
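If you want to experiment with that (the database name is a placeholder):
-- Allow snapshot isolation at the database level (SQL Server 2005 or later)
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then, in the session running the SELECT:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;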