I am extracting data from a business system supplied by a third party to use in reporting. I am using a single SELECT statement issued from an SSIS data flow task source component that joins across multiple tables in the source system to create the dataset I want. We are using the default read-committed isolation level.
To my surprise I regularly find this extraction query is deadlocking and being selected as the victim. I didn't think a SELECT in a read-committed transaction could do this, but according to this SO answer it is possible: Can a readcommitted isolation level ever result in a deadlock (Sql Server)?
Through the use of trace flags 1204 and 1222 I've identified the conflicting statement, and the object and index in question. Essentially, the contention is over a data page in the primary key of one of the tables. I need to extract from this table using a join on its key (so I'm taking out an S lock), while the conflicting statement is performing an INSERT and requesting an IX lock on the same index data page.
(Side note: the above SO talks about this issue occurring with non-clustered indexes, but this appears to be occurring in the clustered PK. At least, that is what I believe based on my interpretation of the deadlock information in the event log and the "associatedObjectId" property.)
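For reference, the trace flags were enabled server-wide along these lines (a sketch; 1222 gives the more detailed deadlock report):

-- Write deadlock information to the SQL Server error log (server-wide)
DBCC TRACEON (1204, 1222, -1);

-- Confirm which trace flags are currently active
DBCC TRACESTATUS (-1);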
Here are my constraints:
The conflicting statement is in an encrypted stored procedure supplied by a third party as part of off-the-shelf software. There is no possibility of getting the plaintext code or having it changed.
I don't want to use dirty-reads as I need my extracted data to maintain its integrity.
It's not clear to me how or if restructuring my extract query could prevent this. The lock is on the PK of the table I'm most interested in, and I can't see any alternatives to using the PK.
I don't mind my extract query being the victim as I prefer this over interrupting the operational use of the source system. However, this does cause the SSIS execution to fail, so if it must be this way I'd like a cleaner, more graceful way to handle this situation.
Can anyone suggest ways to, preferably, prevent the deadlock, or failing that, handle the error more gracefully?
My assumption here is that you are attempting to INSERT into the same table that you are SELECTing from. If no, then a screenshot of the data flow tab would be helpful in determining the problem. If yes, then you're in luck - I have had this problem before.
Add a sort to the data flow as this is a fully blocking transformation (see below regarding blocking transformations). What this means is that the SELECT will be required to complete loading all data into the pipeline buffer before any data is allowed to pass down to the destination. Otherwise, SSIS is attempting to INSERT data while there is a lock on the table/index. You might be able to get creative with your indexing strategies here (I have not tried this). But, a fully blocking transformation will do the trick and eliminates the need for any additional indexes to the table (and the overhead that entails).
Note: never use NOLOCK query hints when selecting data from a table as an attempt to get around this. I have never tried this nor do I intend to. You (the royal you) run the risk of ingesting uncommitted data into your ETL.
Reference:
https://jorgklein.com/2008/02/28/ssis-non-blocking-semi-blocking-and-fully-blocking-components/
Related
I have an SSIS package with a task to load data. I need to both update and insert into the same destination table, and this causes a deadlock.
I am using the SSIS Multicast component.
What should I do? How can I resolve this situation?
In your OLE DB Destination, change the access mode from "FastLoad" to "Table or View". The former will take a table lock which is generally better for large inserts but in your scenario, you need the table to remain "unlocked." Your performance will suffer since you'll be issuing singleton inserts but I guess that doesn't really matter since you'll also be doing singleton updates with your "OLE DB Command"
Finally, I think you're doing this wrong. The multicast essentially duplicates a row so that you can direct it to N components. I generally see people trying to detect whether a row exists in the target and then either insert or update it based on that lookup. But that's the lookup component, not a multicast. Maybe you're doing a type 2 dimension or something but even then, there will be better ways to accomplish this versus what you're showing in the picture.
Your way seems strange; as billinkc said, you are effectively doubling the data rows and performing INSERT and UPDATE actions against the same table concurrently from two different connections/contexts. This is bound to end in a deadlock.
I would use an alternative approach: do the required transforms on the data, and then write it to an intermediate table in the Data Flow. Then, in the next SSIS task, execute a SQL Server MERGE (Microsoft's table upsert) with an OLE DB Command. This ensures you do not have a deadlock between concurrent operations, and the logic of the MERGE can be quite flexible.
Last but not least, use a dedicated staging table or a global ##temp table for the intermediate table; working with regular SQL Server #temp tables in SSIS is a little tricky. Do not forget to clean up the intermediate table before and after the MERGE, or to create and dispose of the ##temp table properly.
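A minimal sketch of what that MERGE might look like, assuming a hypothetical ##StagingOrders intermediate table and a dbo.Orders destination keyed on OrderId (all names here are illustrative):

-- Upsert from the intermediate (staging) table into the destination.
MERGE dbo.Orders AS tgt
USING ##StagingOrders AS src
    ON tgt.OrderId = src.OrderId
WHEN MATCHED THEN
    UPDATE SET tgt.Status     = src.Status,
               tgt.ModifiedOn = src.ModifiedOn
WHEN NOT MATCHED BY TARGET THEN
    INSERT (OrderId, Status, ModifiedOn)
    VALUES (src.OrderId, src.Status, src.ModifiedOn);

-- Clean up the intermediate table afterwards
TRUNCATE TABLE ##StagingOrders;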
I am debugging my database and I am finding that the replication is failing on a delete statement.
I look at the source and destination tables and they are the same. So someone deleted a row from the source and then put it back in. (The delete fails because of a FK reference to some manual data that I don't want to cascade delete.)
Is there a way to find out the PK of the row it is trying to delete?
(All Replication monitor will tell me is the name of the FK that is causing the delete statement to fail.)
There are a couple of ways. I'll tell you the easy one (because I'm lazy). Put a trace on your subscriber for non-successful stored procedure executions. You should get a hit for one called something like sp_MSdel_table (where table is the name of your table). The argument(s) to that procedure will be the primary key of the record that it's trying to delete.
Easy way number two is to modify the sproc identified in the previous method not to be angry at a missing row (after all, it's just going to delete it, so the fact that it's now missing isn't that big a deal). You might have other non-convergence issues, but at least you can get your commands flowing again. (EDIT: Just noticed the reason for your issue. I'd advise not having FK constraints at the subscriber since any referential integrity should be taken care of at the publisher. It'll make your replication faster when SQL doesn't have to check that each time it does an applicable insert, update, or delete.)
Hard way number one involves looking at the error in Replication Monitor and noting that there's a transaction id and sequence number specified. You then use a sproc in the distribution database to get the text of the command being executed.
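If you go the distribution-database route, the lookup is roughly this, using sp_browsereplcmds (the sequence number below is a placeholder; use the one reported by Replication Monitor):

USE distribution;
GO
-- Return the text of the replicated command(s) for the failing transaction.
-- Replace the placeholder sequence numbers with the ones from Replication Monitor.
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000002A000000B70004',
    @xact_seqno_end   = '0x0000002A000000B70004';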
Hard way number two involves diffing the tables, either with tablediff.exe, something like RedGate's SQLCompare, or a roll-your-own join over linked servers to show the differences. Keep this in your back pocket just in case one of the other one-row-at-a-time methods mentioned above doesn't do it for you. My threshold for such things is about three. YMMV.
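For the roll-your-own linked-server option, a quick sketch (SUBSCRIBER, MyDatabase and dbo.MyTable are placeholder names, and it assumes the column types are comparable with EXCEPT):

-- Rows that exist at the publisher but not at the subscriber.
SELECT *
FROM dbo.MyTable
EXCEPT
SELECT *
FROM SUBSCRIBER.MyDatabase.dbo.MyTable;

-- And the reverse: rows at the subscriber that are missing at the publisher.
SELECT *
FROM SUBSCRIBER.MyDatabase.dbo.MyTable
EXCEPT
SELECT *
FROM dbo.MyTable;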
I am using Entity Framework, and I am inserting records into our database which include a blob field. The blob field can be up to 5 MB of data.
When inserting a record into this table, does it lock the whole table?
So if you are querying any data from the table, will it block until the insert is done (I realise there are ways around this, but I am talking by default)?
How long will it take before it causes a deadlock? Will that time depend on how much load is on the server, e.g. if there is not much load, will it take longer to cause a deadlock?
Is there a way to monitor and see what is locked at any particular time?
If each thread is doing queries on single tables, is there then a case where blocking can occur? So isn't it the case that a deadlock can only occur if you have a query which has a join and is acting on multiple tables?
This is taking into account that most of my code is just a bunch of select statements, not heaps of long running transactions or anything like that.
Holy cow, you've got a lot of questions in here, heh. Here's a few answers:
When inserting a record into this table, does it lock the whole table?
Not by default, but if you use the TABLOCK hint or if you're doing certain kinds of bulk load operations, then yes.
So if you are querying any data from the table will it block until the insert is done (I realise there are ways around this, but I am talking by default)?
This one gets a little trickier. If someone's trying to select data from a page in the table that you've got locked, then yes, you'll block 'em. You can work around that with things like the NOLOCK hint on a select statement or by using Read Committed Snapshot Isolation. For a starting point on how isolation levels work, check out Kendra Little's isolation levels poster.
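If you'd rather avoid NOLOCK, Read Committed Snapshot Isolation is switched on at the database level with something like this (a sketch; it needs a moment of exclusive access and adds tempdb version-store overhead, so test it first):

-- Switch the default READ COMMITTED behaviour to row versioning for this database.
-- WITH ROLLBACK IMMEDIATE kicks out other sessions so the change can take effect.
ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;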
How long will it take before it causes a deadlock? Will that time depend on how much load is on the server, e.g. if there is not much load will it take longer to cause a deadlock?
Deadlocks aren't based on time - they're based on dependencies. Say we've got this situation:
Query A is holding a bunch of locks, and to finish his query, he needs stuff that's locked by Query B
Query B is also holding a bunch of locks, and to finish his query, he needs stuff that's locked by Query A
Neither query can move forward (think Mexican standoff) so SQL Server calls it a draw, shoots somebody's query in the back, releases his locks, and lets the other query keep going. SQL Server picks the victim based on which one will be less expensive to roll back. If you want to get fancy, you can use SET DEADLOCK_PRIORITY LOW on particular queries to paint targets on their back, and SQL Server will shoot them first.
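For example, a session can volunteer to be the victim along these lines (dbo.SomeBigTable is just a placeholder for your own query):

-- This session volunteers to be the deadlock victim if a deadlock occurs.
SET DEADLOCK_PRIORITY LOW;

-- Placeholder for the query you want sacrificed first.
SELECT COUNT(*) FROM dbo.SomeBigTable;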
Is there a way to monitor and see what is locked at any particular time?
Absolutely - there's Dynamic Management Views (DMVs) you can query like sys.dm_tran_locks, but the easiest way is to use Adam Machanic's free sp_WhoIsActive stored proc. It's a really slick replacement for sp_who that you can call like this:
sp_WhoIsActive @get_locks = 1
For each running query, you'll get a little XML that describes all of the locks it holds. There's also a Blocking column, so you can see who's blocking who. To interpret the locks being held, you'll want to check the Books Online descriptions of lock types.
If each thread is doing queries on single tables, is there then a case where blocking can occur? So isn't it the case that a deadlock can only occur if you have a query which has a join and is acting on multiple tables?
Believe it or not, a single query can actually deadlock itself, and yes, queries can deadlock on just one table. To learn even more about deadlocks, check out The Difficulty with Deadlocks by Jeremiah Peschka.
If you have direct control over the SQL, you can force row level locking using:
INSERT INTO MyTable WITH (ROWLOCK) (Id, BigColumn)
VALUES(...)
These two answers might be helpful:
Is it possible to force row level locking in SQL Server?
Locking a table with a select in Entity Framework
To view current held locks in Management Studio, look under the server, then under Management/Activity Monitor. It has a section for locks by object, so you should be able to see whether the inserts are really causing a problem.
Deadlock errors generally return quite quickly. A deadlock is not the result of a timeout while waiting for a lock; SQL Server detects deadlocks by looking for cycles in the lock requests.
The best answer I can come up with is: It depends.
The best way to check is to find your connection's SPID and use sp_lock <SPID> to check whether the lock mode is X on the TAB resource type. You can also verify the table name with SELECT OBJECT_NAME(objid). I also like to use the query below to check for locking.
SELECT  resource_type,
        resource_subtype,
        DB_NAME(resource_database_id) AS [database],
        resource_database_id AS dbid,
        resource_description,
        resource_associated_entity_id,
        request_mode,
        request_session_id,
        CASE WHEN resource_type = 'OBJECT'
             THEN OBJECT_NAME(resource_associated_entity_id, resource_database_id)
             ELSE '' END AS [object_name]
FROM    sys.dm_tran_locks WITH (NOLOCK)
WHERE   request_session_id = --SPID here
In SQL Server 2008 (and later) you can disable the lock escalation on the table and enforce a WITH (ROWLOCK) in your insert clause effectively forcing a rowlock. This can't be done prior to SQL Server 2008 (you can write WITH ROWLOCK, but SQL Server can choose to ignore it).
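A sketch of both pieces, assuming a hypothetical dbo.Documents table (the LOCK_ESCALATION option is SQL Server 2008+ only):

-- Stop SQL Server from escalating row/page locks on this table to a table lock.
ALTER TABLE dbo.Documents SET (LOCK_ESCALATION = DISABLE);

-- Ask for row-level locks on the insert itself.
INSERT INTO dbo.Documents WITH (ROWLOCK) (Id, BigColumn)
VALUES (1, 0x);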
I'm speaking in generalities here, and I don't have much experience with BLOBs, as I usually advise developers to avoid them, especially if larger than 1 MB.
Why are rollbacks so important?
Is it to prevent data (like data in a SQL DB) from being in an inconsistent state?
If so, how come the data "store" (the SQL DB or whatever) made it possible in the first place for the data to end up in a corrupt state?
Are there data storage mechanisms that don't have a need for "rollbacks"?
Rollbacks are important when any kind of error appears during database operation. They can really save the day when the database server crashes or a critical exception is thrown in an application that modifies the contents of the DB. When a significant DB operation is performed (updates, inserts, etc.) and the process is broken in the middle, it would be very hard to trace which operations succeeded, and using the DB afterward would be very complicated.
The "store" itself does not generally have a built-in mechanism for consistency control - this is exactly why we use rollbacks and transactions. This can be perceived as a sort of 'live backup' mechanism.
There are cases when you need to insert/update data in many related tables; if you didn't have transactional logic, then an error somewhere in the middle of the process could leave the data inconsistent.
Simple example: say you need to insert both order header data into an orders table and order lines into a lines table. You insert the order header, read its identity, and start inserting order lines - but this second insert fails for whatever reason. The only reliable way to recover from this situation is to roll back the first insert - either explicitly (when your connection to the db is still alive) or implicitly (when the link has gone down).
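A minimal T-SQL sketch of that pattern, with hypothetical Orders/OrderLines tables - the point being that the header insert does not survive if the lines insert fails:

BEGIN TRY
    BEGIN TRANSACTION;

    -- Insert the order header and capture its identity value.
    INSERT INTO dbo.Orders (CustomerId, OrderDate)
    VALUES (42, GETDATE());

    DECLARE @OrderId int = SCOPE_IDENTITY();

    -- Insert the order lines; if this fails, the header must not survive.
    INSERT INTO dbo.OrderLines (OrderId, ProductId, Quantity)
    VALUES (@OrderId, 1001, 3);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back everything, including the header insert.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise (SQL Server 2012+; use RAISERROR on older versions)
END CATCH;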
We have a few customers with large data sets and during our upgrade procedure we need to modify the schema of various tables (adding some columns, renaming others, occasionally changing data types, but that's rare).
Previously we've been going via a temporary table with the new schema, and then dropping the original and renaming the temp table but I'm hoping to speed that up dramatically by using ALTER table ... instead.
My question is what data integrity and error handling issues do I need to consider? Should I enclose all changes to a table in a transaction (and if so, how?) or will the DBMS guarantee atomicity and integrity over an ALTER operation?
We already heavily recommend customers backup their data before starting the upgrade so that should always be a fall back option.
We need to target SQL Server 2005 and Oracle, but obviously I can add conditional code if they require different approaches.
Comments for Oracle only:
Table alterations are DDL, so the concept of a transaction doesn't apply - every DDL statement locks the table for the duration of the operation and either succeeds or fails.
Adding (nullable!) columns or renaming existing columns is a relatively lightweight process and shouldn't present any problems if the table lock can be acquired.
If you're adding/modifying constraints (either NOT NULL or other more complex check constraints) Oracle will check existing data to validate the constraints unless you add the ENABLE NOVALIDATE clause to the constraint DDL. The validation of existing data can be a lengthy process for large tables.
If you're scripting the upgrade to be run as a SQL*Plus script, save yourself a lot of headaches by using the "whenever sqlerror exit sql.sqlcode" directive to abort the script on the first failure to make the review of partially implemented upgrades easier.
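A minimal sketch of such a SQL*Plus upgrade script (table, column and constraint names are illustrative; NOVALIDATE skips checking existing rows as described above):

-- upgrade.sql : abort on the first error so partial upgrades are obvious
WHENEVER SQLERROR EXIT SQL.SQLCODE

-- Lightweight change: add a nullable column
ALTER TABLE orders ADD (priority_code VARCHAR2(10));

-- Add a constraint without validating existing rows
ALTER TABLE orders
  ADD CONSTRAINT chk_priority_code
  CHECK (priority_code IN ('LOW', 'NORMAL', 'HIGH'))
  ENABLE NOVALIDATE;

EXIT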
If the upgrade must be performed on a live system where you can neither control transactions nor afford to miss them, consider using the Oracle DBMS_REDEFINITION package, which automatically creates a temporary configuration of temp tables and triggers to capture in-flight transactions while redefining the table in the "background". Warning - lots of work and a steep learning curve for this option.
If you're using SQL Server then DDL statements are transactional, so wrap them in a transaction (I don't think this applies to Oracle, though).
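On the SQL Server side, a sketch of what that wrapping can look like (illustrative object names; with XACT_ABORT ON, any error rolls the whole patch back):

-- Any run-time error aborts the batch and rolls back the open transaction.
SET XACT_ABORT ON;

BEGIN TRANSACTION;

-- Illustrative schema changes; both succeed or neither does.
ALTER TABLE dbo.Customers ADD MiddleName nvarchar(50) NULL;
EXEC sp_rename 'dbo.Customers.Surname', 'LastName', 'COLUMN';

COMMIT TRANSACTION;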
We split upgrades into individual patches that go with a particular feature. Which patches are applied go in a database_patch_history table, and it's easy to see which patches were applied and how to roll them back.
As you say, taking a backup before you start is important.
I have had to do changes like this in the past and have always been very paranoid about data loss. To help mitigate that risk I have always done tons of testing against "sandbox" databases that mirrored the target databases in schema and data as closely as possible. Test out the process as much as possible before rolling it out, just like you would any other area of the application.
If you dramatically change the data type of a column, for instance changing a VARCHAR to an INT, the DBMS will panic and you will probably lose that data. Luckily, nowadays DBMSs are intelligent enough to do some data type conversions without losing the data, but you don't want to run the risk of damaging any of it when making the alterations.
You shouldn't lose any data by renaming columns, and definitely won't by adding new columns; it's when you move the data about that you have to be concerned.
Firstly, back up the entire table, both the schema and data, so at a second's notice you can roll back to the previous schema. Secondly, look at the alterations you are trying to make and see how drastic they are - try to figure out exactly what needs to change. If you're making data type conversions, push that data to an intermediary table first with 3 columns: the foreign key (id or whatever, so you can locate the row), the old data, and the new column. Then either push the old data to the new column directly, or convert it at the application level.
When it's all in the correct types and everything's been successful, run the ALTER statements and repopulate the database! It's simple enough to do, just needs a logical thought process.
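As a rough sketch of that intermediary-table approach for a data type change (hypothetical names; an nvarchar column being converted to int):

-- 1. Stage the conversion: key, old value, and a column of the new type.
SELECT OrderId,
       LegacyAmount      AS OldValue,
       CAST(NULL AS int) AS NewValue
INTO   dbo.Orders_Conversion
FROM   dbo.Orders;

-- 2. Convert in the staging table first, so any bad values surface here
--    (or do the conversion at the application level if the rules are complex).
UPDATE dbo.Orders_Conversion
SET    NewValue = CAST(OldValue AS int)
WHERE  OldValue IS NOT NULL;

-- 3. Once every staged value converted cleanly, the in-place ALTER is safe to run.
ALTER TABLE dbo.Orders ALTER COLUMN LegacyAmount int NULL;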