SQL Server deadlock - Fix needed

I'm a newbie SQL Server DBA. At least once a day I get a deadlock on a SQL Server 2012 instance that uses a MERGE statement. No hints such as NOLOCK, UPDLOCK, or HOLDLOCK are used in the MERGE statement. It's a multi-user environment in which BizTalk reads XML messages and saves the data into SQL Server.
BizTalk processes about 300 XML messages per minute. Since it's a production server I can't just change anything without researching it first, but I have no idea how to resolve this issue. Recently two XML messages tried to update data in the same table using the same index, and the operation errored out. Could anyone help me work around this issue?

The scan phase of MERGE is performed with a shared (S) lock, optimized for the case of a single session running MERGE while concurrent sessions run SELECT. With multiple concurrent MERGE statements, this can lead to deadlocks or failures.
The solution is to add a HOLDLOCK hint on the target table. This is slightly inconsistent with other read-for-update patterns, which use UPDLOCK on the SELECT.
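For illustration, a minimal sketch of the hint, assuming hypothetical dbo.Target and dbo.Source tables keyed on Id:
MERGE dbo.Target WITH (HOLDLOCK) AS t   -- serializable range locks on the scanned key range
USING dbo.Source AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value);
HOLDLOCK (equivalent to SERIALIZABLE) keeps the key range locked between the scan and the write, so two concurrent MERGE statements cannot interleave on the same keys.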

Related

Deadlock on Azure SQL Database without transactions

It seems that despite the fact we're not using transactions at all, we get random deadlock errors from SQL Azure.
Are there non-transactional situations in which SQL Azure can get into a deadlock?
It seems that when we run a batch of UPDATE queries, it acts as if the batch were one big transaction.
All the updates are by id, and each updates a single row.
There is no such thing as "not using transactions". There is always a transaction, whether you start one explicitly or not. Read Tracking down Deadlocks in SQL Database for how to obtain the deadlock graph in SQL Azure. Connect to master and run:
SELECT * FROM sys.event_log
WHERE database_name like '<your db name>'
AND event_type = 'deadlock';
Then analyze the deadlock graph to understand the cause. Most likely you're doing scans because of missing indexes.
When you have concurrent transactions running (either implicit or explicit) you can encounter deadlocks. When you said "no transactions", that probably means your transactions are implicit.
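To illustrate the point about implicit transactions, a minimal sketch (hypothetical table dbo.T): in autocommit mode each statement is its own transaction, while an explicit transaction groups statements together:
-- Autocommit: this UPDATE is a complete transaction by itself
UPDATE dbo.T SET Col = 1 WHERE Id = 42;

-- Explicit: both updates commit or roll back as a unit
BEGIN TRANSACTION;
UPDATE dbo.T SET Col = 1 WHERE Id = 42;
UPDATE dbo.T SET Col = 2 WHERE Id = 43;
COMMIT;
Either way locks are taken, so deadlocks remain possible under concurrency.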

Using SAVE TRANSACTION with a linked server

Inside a transaction that has a savepoint, I have to make a join with a table that is on a linked server. When I try to do it, I get the error message:
"Cannot use SAVE TRANSACTION within a distributed transaction"
The remote table's data rarely changes; it is almost fixed. Is it possible to tell SQL Server to exclude this table from the transaction? I've tried a (NOLOCK) hint, but it isn't possible to use this hint on a table from a linked server.
Does anyone know of a workaround? I'm using the old SQL Server 2000.
One thing you could do is make a local copy of the remote table before you start the transaction. I know this may sound like a lot of overhead, but remote joins are frequently a performance problem anyway, and the standard fix for that is also to make a local copy.
According to this link, the ability to use SAVEPOINTs in a distributed transaction was dropped in SQL 7:
To allow application migration from Microsoft SQL Server 6.5 when savepoints inside distributed transactions are in use, Microsoft SQL Server 2000 Service Pack 1 introduces a trace flag that allows a savepoint within a distributed transaction. The trace flag is 8599 and can be turned on during SQL Server startup or within an individual session (that is, prior to enabling a distributed transaction with a BEGIN DISTRIBUTED TRANSACTION statement) by using the DBCC TRACEON command. When trace flag 8599 is set to ON, SQL Server allows you to use a savepoint within a distributed transaction.
So unfortunately, you may either have to drop the bounding ACID transaction, or change the SPROC on the remote server so that it doesn't use SAVEPOINTs.
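If the SP1 trace flag route from the quoted article is an option for you, a minimal sketch of enabling it at the session level (before starting the distributed transaction) might look like this; the savepoint name is hypothetical:
DBCC TRACEON (8599);                -- allow savepoints inside distributed transactions
BEGIN DISTRIBUTED TRANSACTION;
SAVE TRANSACTION BeforeRemoteJoin;  -- hypothetical savepoint name
-- ... joins against the linked server here ...
COMMIT TRANSACTION;
Treat this as a sketch only; the trace flag is documented for SQL Server 2000 SP1, so verify the behavior in your environment.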
On a side note: although I see you have tagged this SQL Server 2000, it's worth pointing out that SQL Server 2008 has the remote proc trans option for this.
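As a hedged sketch, that server-level option can be set with sp_configure (verify its applicability to your workload before relying on it):
EXEC sp_configure 'remote proc trans', 1;
RECONFIGURE;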
In this case, if the distributed table is not too large, I would copy it to a temp table. If possible, include any filtering to get the number of rows to a minimum. Then you can proceed normally. Another option, since the data changes rarely, is to copy the data to a permanent table and check whether anything has changed, to avoid sending too much data over the network every time you run the transaction; you could pull over only the recent changes.
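A minimal sketch of the temp-table approach, with hypothetical linked server, table, and column names:
-- Copy the (filtered) remote rows locally before the transaction starts
SELECT Id, Name, Price
INTO #RemoteCopy
FROM LinkedSrv.RemoteDb.dbo.RemoteTable
WHERE IsActive = 1;   -- filter to keep the row count down

BEGIN TRANSACTION;
SAVE TRANSACTION BeforeJoin;   -- savepoint now works: no distributed transaction involved
-- join against #RemoteCopy instead of the linked server
SELECT l.*, r.Price
FROM dbo.LocalTable AS l
JOIN #RemoteCopy AS r ON r.Id = l.RemoteId;
COMMIT TRANSACTION;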
If you wish to handle the transaction at the UI level and you have Visual Studio 2008 / .NET Framework 3.5 or later, you can wrap your logic with the TransactionScope class. If you don't have any front ends and are working only with SQL Server, kindly ignore my answer.

SQL Server 2008 R2 Distributed Partition View Update/Delete issue

I have a big table storing articles (more than 500 million records), so I use the distributed partitioned view feature of SQL Server 2008 across 3 servers.
SELECT and INSERT operations work fine, but DELETE and UPDATE actions take a long time and never complete.
In the Processes tab of Activity Monitor, I see the Wait Type field is "PREEMPTIVE_OLEDBOPS" for the UPDATE command.
Any idea what the problem is?
Note: I think the problem is with MSDTC, because the UPDATE command is not shown in SQL Profiler on the second server, but when I check MSDTC status on that server, the status column shows Update(active).
What is most likely happening is that all the data from the other server is pulled over to the machine where the query is running before the filter of your UPDATE statement is applied. This can happen when you use four-part naming. Possible solutions are:
Make sure each table has a correct check constraint which defines the minimum and maximum value of the partition table; without this, partition elimination is not going to work properly (see the sketch after this list).
Call a stored procedure with four-part naming on the other server to do the update.
Use OPENQUERY() to connect to the other server.
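For the first point, a minimal sketch of such a check constraint, assuming a hypothetical member table dbo.Articles_1 holding the first ID range:
-- Each member table of the distributed partitioned view declares its range,
-- so the optimizer can eliminate remote members that cannot match the predicate
ALTER TABLE dbo.Articles_1 WITH CHECK
ADD CONSTRAINT CK_Articles_1_Range
CHECK (ArticleID BETWEEN 1 AND 200000000);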
To serve 500 million records your server seems adequate. A setup using table partitioning with a sliding window is probably a more cost-effective way of handling the volume.

SQL Server 2005 SP Deadlock issue

I have a scheduled job with an SP running on a daily basis (SQL Server 2005). Recently I have frequently encountered a deadlock problem with this SP. Here is the error message:
Message
Executed as user: dbo. Transaction (Process ID 56) was deadlocked on thread | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001] (Error 1205). The step failed.
The SP uses some inner-joined views over several tables; one of them is a large table with several million rows of data (and it keeps growing). I am not sure whether another job or query against that table could make it inaccessible to the SP. I am going to investigate who is online by querying the server; that may expose a query or person active on the SQL Server during that time.
Has anyone had a similar issue, or is this a known SQL Server 2005 issue? Is there anything else I should do in my SP or on the SQL Server to avoid the deadlock?
Use SQL Server Profiler to track all the queries that are running; I put the output into SQL Server. This will help you figure out which ones are accessing your particular table or tables. Post your findings, and we can help you from there.
Deadlocks occur when two transactions are each holding some resources and each wants a resource that the other holds; neither can proceed because each is waiting for the other. They cannot be completely eliminated, but a lot can be done to mitigate them. Remus and Raj suggest capturing more information about them in Profiler, which I also recommend; generally, optimizing your queries (if you know which ones are involved) can also help. Here is an MSDN article that can help get you going: "Minimizing Deadlocks".
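As a complement to Profiler, one commonly used way to capture deadlock details on SQL Server 2005 is trace flag 1222, which writes the deadlock graph to the error log; a minimal sketch:
-- Global scope (-1): log the statements and resources of every deadlock
DBCC TRACEON (1222, -1);
After the next deadlock fires, the error log will show both participating statements and the locked resources, which usually points to the table and index involved.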

OpenQuery to DB2/AS400 from SQL Server 2000 causing locks

Every morning we have a process that issues numerous queries (~10,000) to DB2 on an AS400/iSeries/i6 (whatever IBM calls it nowadays). In the last 2 months, the operators have been complaining that our queries lock a couple of files, preventing them from completing their nightly processing. The queries are very simplistic, e.g.:
SELECT [FieldName]
FROM OPENQUERY([LinkedServerName], 'SELECT [FieldName] FROM [LibraryName].[FileName] WHERE [SomeField] = [SomeParameter]')
I am not an expert on the iSeries side of the house and was wondering if anyone had any insight on lock escalation from an AS400/DB2 perspective. The ID that is causing the lock has been confirmed to be the ID we registered our linked server with, and we know it is most likely us because the [Library] and [FileName] are consistent with the query we are issuing.
This has just started happening recently. Is it possible that our SELECT statements are causing the AS400 to escalate locks? The problem is that the locks are not being released without manual intervention.
Try adding "FOR READ ONLY" to the query then it won't lock records as you retrieve them.
Writes to the files on the AS/400 side from an RPG/COBOL/JPL job will cause a file lock (by default, I think). The job will be unable to get this lock while you are reading. The solution we used was: don't read the files while jobs are running. We created a big schedule sheet in Excel and put all the SQL Servers' and AS/400's jobs on it in time slots, with color coding for importance and server. That way there were no conflicts, and no out-of-date extract files either.
You might have commitment control causing a lock for a repeatable read. Check the SQL Server ODBC connection associated with <linkedServerName> to change the commitment control level.
