Deadlock on Azure SQL Database without transactions - sql-server

It seems that despite the fact that we're not using transactions at all, we get random deadlock errors from SQL Azure.
Are there non-transactional situations in which SQL Azure can get into a deadlock?
It seems that when we run a batch of UPDATE queries, it acts as if the batch were one big transaction.
All the updates are by id and each updates a single row.

There is no such thing as "not using transactions". There is always a transaction, whether you start one explicitly or not. Read Tracking down Deadlocks in SQL Database for how to obtain the deadlock graph in SQL Azure. Connect to master and run:
SELECT * FROM sys.event_log
WHERE database_name LIKE '<your db name>'
AND event_type = 'deadlock';
Then analyze the deadlock graph to understand the cause. Most likely you're doing scans because of missing indexes.
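For illustration, a hypothetical sketch (table and column names are made up): if the deadlock graph shows table scans on the updated table, an index on the id column turns each UPDATE into a single-row seek, which shrinks the lock footprint:
-- Assumed table dbo.Orders with an Id column; without an index on Id,
-- every UPDATE ... WHERE Id = @x scans and locks far more rows than it needs.
CREATE UNIQUE INDEX IX_Orders_Id ON dbo.Orders (Id);

UPDATE dbo.Orders SET Status = 'Shipped' WHERE Id = 42;  -- now an index seek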

When you have concurrent transactions running (implicit or explicit), you can encounter deadlocks. When you said "no transactions", that most likely means your transactions are implicit.

Related

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept once the records are committed.
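For example, assuming a database named MyAppDb (the name is illustrative):
-- This does not disable logging; it only lets log space be reused
-- once the transactions are committed.
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;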
"The table of the log database is locked": why is that?
Normally you log changes by inserting records, and inserting records should not lock the complete table; there should normally be no contention on insertion.
If you do more than inserts, perhaps you should reconsider that. Also look at the indexes defined on the log table; perhaps you can avoid some of them.
It sounds from the question that you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database prior to the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert the data into your log table.
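A minimal sketch of that pattern, assuming made-up columns for tbSysMasterLog and a log database named LogDb:
DECLARE @log TABLE (Action nvarchar(10), RecordId int);

BEGIN TRANSACTION;
    -- Capture what would have been logged into a variable instead.
    UPDATE dbo.Customers
    SET Name = 'New name'
    OUTPUT 'UPDATE', inserted.Id INTO @log (Action, RecordId)
    WHERE Id = 42;
COMMIT TRANSACTION;

-- Log after the commit, so tbSysMasterLog is never held locked
-- while the business transaction is still open.
INSERT INTO LogDb.dbo.tbSysMasterLog (Action, RecordId)
SELECT Action, RecordId FROM @log;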
Normally inserts are fast and can happen in parallel without blocking. There are certain things, like identity columns, that require ordering, but that is a very lightweight structure; it can be avoided by generating GUIDs so inserts are non-blocking. For something like your log table, though, an identity primary key gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log order may not match the order in which the transactions occurred, because transactions take different amounts of time to commit.
We normally log into individual tables with a similar name to the master table e.g. FooHistory or AuditFoo
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it gives you a copy of every statement run on the database (including triggers). You can log it to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive when tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading traces.
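For instance, loading a trace file into a table for analysis (the file path and table name are illustrative):
-- Import a server-side trace file into a table.
SELECT * INTO dbo.UpgradeTrace
FROM fn_trace_gettable(N'D:\Traces\app_trace.trc', DEFAULT);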
The load on the server that receives the trace log is minimal, and I have never had a locking problem on that server, so I am pretty sure that you are doing something else to cause the locks.

SQL Server deadlock - Fix needed

I'm a newbie SQL Server DBA. At least once a day I get a deadlock on a SQL Server 2012 server that is running a MERGE statement. No hints such as NOLOCK, UPDLOCK, or HOLDLOCK are used in the MERGE statement. It's a multi-user environment where BizTalk reads XML messages and saves the data into SQL Server.
BizTalk reads 300 XML messages per minute. Since it's a production server, I can't just implement anything without doing research, but I haven't got any idea how to resolve this issue. Recently two XML messages tried to update data in the same table using the same index and errored out. Could anyone help me get past this issue?
The scan phase of MERGE is performed with a shared (S) lock, optimized for the case of a single session running MERGE alongside concurrent sessions running SELECT. With multiple concurrent MERGE statements, this can lead to deadlocks or failures.
The solution is to add a HOLDLOCK hint on the target table. This is a little inconsistent with other read-for-update patterns, which use UPDLOCK on the SELECT.
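A sketch of that hint on a generic upsert-style MERGE (table and column names are placeholders, not from the question):
DECLARE @Id int = 42, @Value nvarchar(50) = N'example';

MERGE dbo.TargetTable WITH (HOLDLOCK) AS t  -- serializes the scanned key range
USING (SELECT @Id AS Id, @Value AS Value) AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value);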

Single User: Turn off locking for Microsoft SQL Server

I'm running an upgrade script against a database hosted in Microsoft SQL Server. It's taking a while. Some of the queries are not worth optimising any further, for various reasons.
I'm the only person using this database: Is there a way that I can tell SQL Server to not bother with transactions/locking?
For instance, on a DELETE ... WHERE, does SQL Server need to take exclusive locks on the rows it's about to delete? If so, can I tell it not to bother, since this is the only running query?
See SQL Query Performance - Do you feel dirty? (Dirty Reads).
Edit: This is just speculation, but if yours is the only connection to the SQL Server, you could take an exclusive lock at the table level using WITH (TABLOCKX). You sacrifice concurrency, but it could be faster.
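For example (hypothetical table and predicate):
-- One exclusive table lock up front instead of thousands of row locks.
DELETE FROM dbo.StagingRows WITH (TABLOCKX)
WHERE LoadDate < '20200101';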
Turn off autocommit (i.e. enable implicit transactions); you'll then need a single COMMIT at the end. The log file will grow correspondingly large, so be sure you've got enough disk space.
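In T-SQL that looks roughly like this (the UPDATE is a stand-in for your script's statements):
SET IMPLICIT_TRANSACTIONS ON;  -- autocommit off: statements join one open transaction

UPDATE dbo.Widgets SET Price = Price * 1.1;  -- ... the upgrade script's work ...

COMMIT;                        -- one commit at the very end
SET IMPLICIT_TRANSACTIONS OFF;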
Is tempdb on the same disk?

SQL Server 2005 SP Deadlock issue

I have a scheduled job with an SP running on a daily basis (SQL Server 2005). Recently I have frequently encountered deadlocks with this SP. Here is the error message:
Message
Executed as user: dbo. Transaction (Process ID 56) was deadlocked on thread | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001] (Error 1205). The step failed.
The SP uses some views that join several tables; one of them is a large table with several million rows of data (and it keeps growing). I am not sure whether another job or query against that table could make it inaccessible to the SP. I am going to investigate who is online by querying the server; that may expose some query or person on the SQL Server during that time.
Has anyone had a similar issue, or is this a known SQL Server 2005 issue? Is there anything else I should do in my SP or on the SQL Server to avoid the deadlock?
Use SQL Server Profiler to track all the queries that are running; I put the output into SQL Server. This will help you figure out which ones are accessing your particular table or tables. Post your findings, and we can help you with that.
Deadlocks occur when two transactions each hold some resources and each wants a resource that the other one holds: neither can proceed, as they are both waiting for each other. They cannot be completely eliminated, but a lot can be done to mitigate them. Remus and Raj suggest capturing more information about them in Profiler, which I also recommend; generally, optimizing your queries (if you know which ones are involved) can also help. Here is an MSDN article to get you going: "Minimizing Deadlocks".
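As a side note, on SQL Server 2005 you can also have deadlock details written to the error log by enabling trace flag 1222 (1204 gives the older, terser format), which avoids keeping Profiler running:
-- Write detailed deadlock graphs to the SQL Server error log,
-- server-wide, until the instance restarts.
DBCC TRACEON (1222, -1);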

OpenQuery to DB2/AS400 from SQL Server 2000 causing locks

Every morning we have a process that issues numerous queries (~10,000) to DB2 on an AS400/iSeries/i6 (whatever IBM calls it nowadays). In the last 2 months, the operators have been complaining that our query locks a couple of files, preventing them from completing their nightly processing. The queries are very simplistic, e.g.
Select [FieldName] from OpenQuery([LinkedServerName], 'Select [FieldName] from [LibraryName].[FileName] where [SomeField] = [SomeParameter]')
I am not an expert on the iSeries side of the house and was wondering if anyone had any insight into lock escalation from an AS400/DB2 perspective. The ID causing the lock has been confirmed to be the ID we registered our linked server as, and we know it's most likely us because the [Library] and [FileName] are consistent with the query we are issuing.
This has only started happening recently. Is it possible that our select statements are causing the AS400 to escalate locks? The problem is that the locks are not released without manual intervention.
Try adding "FOR READ ONLY" to the query then it won't lock records as you retrieve them.
Writes to the files on the AS/400 side from an RPG/COBOL/JPL job program will cause a file lock (by default, I think), and the job will be unable to get this lock while you are reading. The solution we used was... don't read the files when jobs are running. We created a big schedule sheet in Excel and put all the SQL Servers' and AS/400's jobs on it in time slots, with color coding for importance and server. That way there are no conflicts, and no out-of-date extract files either.
You might have commitment control causing a lock for a repeatable read. Check the SQL Server ODBC connection associated with <linkedServerName> to change the commitment control setting.
