In our application we insert records into 15 tables in a single transaction from C# code.
For this we build one INSERT statement for each table, append them all into one query, and use ExecuteNonQuery to insert the records. Because we want the inserts to succeed in all tables and don't want any inconsistent data, we run them inside a transaction.
This functionality is written in a service, and more than one service (the same service, different installations) performs this task (inserting data into the tables) concurrently.
The services insert completely different rows into the tables and are not dependent on each other in any way.
But when we run these services we get deadlocks on these insert statements.
The code is like this:
Open DB Connection
Begin Transaction
Insert data in tables
Commit Transaction.
All services perform these steps on different sets of data going into the same 15 tables.
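In C#, the pattern described above looks roughly like the sketch below. The table names, columns, parameters, and the connectionString variable are placeholders; the actual statements are not shown in the question.

// Sketch of the described pattern: 15 INSERTs concatenated into one batch and
// executed inside a single transaction. Names and values are hypothetical.
// Requires: using System.Data.SqlClient;
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        var sql =
            "INSERT INTO dbo.Table1 (ColA, ColB) VALUES (@a1, @a2); " +
            "INSERT INTO dbo.Table2 (ColA, ColB) VALUES (@b1, @b2); " +
            // ... one INSERT per table, 15 in total ...
            "INSERT INTO dbo.Table15 (ColA, ColB) VALUES (@o1, @o2);";

        using (var command = new SqlCommand(sql, connection, transaction))
        {
            command.Parameters.AddWithValue("@a1", valueA1);
            // ... add the remaining parameters ...
            command.ExecuteNonQuery();   // all 15 INSERTs run as one batch
        }

        transaction.Commit();            // locks taken by the INSERTs are held until here
    }
}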
A SQL Profiler trace suggests there are exclusive locks on these tables during the inserts.
Can you please suggest why plain INSERT statements would take table-level locks and end up in deadlocks, and what is the best way to prevent this?
You do not get deadlocks just from locking tables, even exclusive locking. If the tables are locked, the new inserts will simply wait for the existing inserts to finish (assuming you are not using a NOWAIT hint).
Deadlocks happen when you lock resources in such a way that the SQL statements cannot continue. Look for unbounded subselects or WHERE clauses that are not specific enough in your INSERT statements.
Post your SQL so we can see what you are doing.
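To illustrate the kind of circular wait described above: in the sketch below, each connection locks one row and then asks for the row the other connection already holds, so neither statement can continue. The table dbo.Demo and its rows are hypothetical; this is not the questioner's code.

// Requires: using System.Data.SqlClient; using System.Threading; using System.Threading.Tasks;
static void UpdateInOrder(string connectionString, int first, int second)
{
    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();
        using (var tx = cn.BeginTransaction())
        {
            new SqlCommand($"UPDATE dbo.Demo SET Val = Val + 1 WHERE Id = {first}", cn, tx)
                .ExecuteNonQuery();          // takes an exclusive lock on row 'first'
            Thread.Sleep(1000);              // give the other transaction time to lock its row
            new SqlCommand($"UPDATE dbo.Demo SET Val = Val + 1 WHERE Id = {second}", cn, tx)
                .ExecuteNonQuery();          // now waits for the row the other transaction holds
            tx.Commit();
        }
    }
}

// Running the two orderings concurrently deadlocks: neither can continue, so SQL Server
// picks one as the deadlock victim (error 1205) and rolls it back so the other can finish.
Parallel.Invoke(
    () => UpdateInOrder(connectionString, 1, 2),
    () => UpdateInOrder(connectionString, 2, 1));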
I've inherited a VB.NET application that INSERTs files into a varbinary(max) column in SQL Server using System.Data.SqlClient.
It creates a transaction, uses SqlCommand to INSERT the record, then UPDATEs that same record passing in the bytes; this cycle loops in 2 GB chunks until the entire file has been inserted. The transaction is then committed.
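A sketch of that pattern, rendered in C# for brevity (the inherited application is VB.NET). The table, column, chunk size, and the connectionString/fileName/filePath variables are assumptions, and the appending UPDATE is shown here with the .WRITE clause, which may differ from the actual code:

// Requires: using System; using System.Data.SqlClient; using System.IO;
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    using (var tx = cn.BeginTransaction())
    {
        // Insert the row with an empty (non-NULL) varbinary(max) value first.
        var insert = new SqlCommand(
            "INSERT INTO dbo.Documents (Name, Content) VALUES (@name, 0x); " +
            "SELECT SCOPE_IDENTITY();", cn, tx);
        insert.Parameters.AddWithValue("@name", fileName);
        var id = Convert.ToInt32(insert.ExecuteScalar());

        const int ChunkSize = 1024 * 1024;   // chunk size is an assumption
        var buffer = new byte[ChunkSize];
        using (var file = File.OpenRead(filePath))
        {
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = new byte[read];
                Array.Copy(buffer, chunk, read);

                var update = new SqlCommand(
                    "UPDATE dbo.Documents SET Content.WRITE(@chunk, NULL, NULL) WHERE Id = @id",
                    cn, tx);                 // .WRITE(@chunk, NULL, NULL) appends to the end
                update.Parameters.AddWithValue("@chunk", chunk);
                update.Parameters.AddWithValue("@id", id);
                update.ExecuteNonQuery();
            }
        }

        tx.Commit();   // no locks are released until here, which is why other sessions wait
    }
}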
This process locks the table, so if two people try to INSERT documents at the same time, the application hangs for the second person in the queue waiting for its turn.
I've looked at
Does inserting data into SQL Server lock the whole table?
There's no reference to TABLOCK anywhere in the code, and I can't find what the "certain kinds of bulk load operations" it refers to are, but this doesn't seem like a bulk operation.
Any help or ideas on how to stop it locking, or any other options, would be appreciated.
I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database a transaction is started, and therefore the table in the log database stays locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application gets blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which will limit the amount of log data kept after the records are committed.
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question that you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database before the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table, along the lines of the sketch below.
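A minimal C# sketch of that approach, assuming the explicit transaction lives in the application code; the tables, columns, message format, and variables such as connectionString and customerId are hypothetical:

// Do the business work in a transaction, remember what to log, commit first,
// and only then insert into the log table, so the log insert does not hold locks
// for the duration of the business transaction.
// Requires: using System.Data.SqlClient;
string logMessage;
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    using (var tx = cn.BeginTransaction())
    {
        var cmd = new SqlCommand(
            "UPDATE dbo.Customer SET Name = @name WHERE Id = @id", cn, tx);
        cmd.Parameters.AddWithValue("@name", newName);
        cmd.Parameters.AddWithValue("@id", customerId);
        cmd.ExecuteNonQuery();

        logMessage = $"Customer {customerId} renamed to {newName}";
        tx.Commit();   // locks on dbo.Customer are released here
    }

    // Logged outside the transaction: the log table is only touched briefly.
    var log = new SqlCommand(
        "INSERT INTO LogDb.dbo.tbSysMasterLog (LogDate, LogMessage) " +
        "VALUES (SYSDATETIME(), @msg)", cn);
    log.Parameters.AddWithValue("@msg", logMessage);
    log.ExecuteNonQuery();
}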
Normally inserts are fast and can happen in parallel without blocking each other. There are certain things, like identity columns, that require ordering, but these are very lightweight structures and can be avoided by generating GUIDs, so inserts remain non-blocking. For something like your log table, however, a primary key identity column would give you a clear sequence that is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order the transactions occurred in, due to the different times that transactions take to commit.
We normally log into individual tables with a similar name to the master table, e.g. FooHistory or AuditFoo.
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, it gives you a copy of every statement run on the database (including triggers), and you can log it to a different database server. Logging to a different server is a good idea if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading the traces.
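For the load-into-a-table step, something along these lines works; the file path and target table name are placeholders, and sys.fn_trace_gettable reads the .trc file on the server:

// Load a trace file (and its rollover files) into a table for analysis.
// Requires: using System.Data.SqlClient;
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    var cmd = new SqlCommand(
        @"SELECT * INTO dbo.TraceResults
          FROM sys.fn_trace_gettable(N'D:\Traces\MyTrace.trc', DEFAULT);", cn);
    cmd.CommandTimeout = 0;   // large trace files can take a while to load
    cmd.ExecuteNonQuery();
}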
The load on the server that receives the trace log is minimal, and I have never had a locking problem on the server receiving the trace, so I am pretty sure that something you are doing is causing the locks.
I have two applications that work on the same SQL Server table. One application uses C# SqlBulkCopy to import about two hundred thousand records into the table, and the other application queries data from the same table.
I get this message - please check the screenshot. The table has one hundred million rows. How can I fix it?
If a transaction modifies a table and accumulates more than roughly 5,000 row locks, SQL Server will attempt to escalate from row-level locking to an exclusive table lock.
So if your application #1 bulk-loads 200,000 rows into the table, that table will be exclusively locked for the duration of the loading process.
Therefore, your application #2 - or any other client - won't be able to query that table until the loading process is done.
This is normal, documented, expected behavior on the part of SQL Server.
Either make sure you load your data in batches of fewer than 5,000 rows at a time during business hours, or do the bulk loading after hours, when no one is negatively impacted by an exclusive table lock.
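If the loading application uses SqlBulkCopy, as in the question, the batching can be done with BatchSize together with UseInternalTransaction, so each batch is committed on its own and stays below the escalation threshold. A sketch, with the table name, batch size, and sourceDataTable chosen for illustration:

// Requires: using System.Data; using System.Data.SqlClient;
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    using (var bulk = new SqlBulkCopy(cn, SqlBulkCopyOptions.UseInternalTransaction, null))
    {
        bulk.DestinationTableName = "dbo.BigTable";
        bulk.BatchSize = 4000;       // each batch is its own transaction, well under 5,000 rows
        bulk.BulkCopyTimeout = 0;    // no timeout for the 200,000-row load
        bulk.WriteToServer(sourceDataTable);   // sourceDataTable: a DataTable holding the rows
    }
}

Avoid the SqlBulkCopyOptions.TableLock option here, since that explicitly requests a table-level lock for the duration of the load.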
I'd like multiple connections to an MS SQL Server database to make parallel/concurrent updates to a single table, for speed / to reduce the total time it takes to execute.
The updates are made based on looking up a primary/unique key.
Currently, this throws the error "transaction was deadlocked on lock resources with another process". I think it's because the table is being locked after the first connection runs an update transaction on it; all subsequent connections encounter a locked table being updated, and errors occur.
Is there a way in MS SQL Server to allow parallel/ concurrent updates to a single table?
Note: No incoming updates would ever 'be using' the same row at the same time -- they are all unique. They would be using the same table, however. Nevertheless, if I can somehow switch to "row locking" instead of table locking, would this solve the problem? Or if I can switch to "screw any locking, period" (is this referred to as uncommitted writing?) -- I would also do that, as it wouldn't affect data integrity with my process.
Let me know what you think. Thanks!
PS ... if I did "row locks" only, would my "commit size" matter in any way with regard to locking?
I suggest switching to row locking anyway if serialization of the updates is not absolutely necessary. Further, you should check the isolation level and locking sequence of the read transactions in which the table is involved, because they may also be part of the deadlock scenario.
Since table locking is more restrictive than row locking, the probability of running into a deadlock is higher. Further, you won't gain speed by using multiple connections, because the table locks will serialize your concurrent transactions again.
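One way to ask for row-level locking explicitly is a ROWLOCK hint on the update statement; and if lock escalation turns out to be the cause, escalation can also be disabled per table. A sketch with hypothetical table, column, and variable names (ROWLOCK is only a hint, and SQL Server can still escalate if a transaction accumulates many locks):

// Requires: using System.Data.SqlClient;
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    var cmd = new SqlCommand(
        "UPDATE dbo.Target WITH (ROWLOCK) SET Amount = @amount WHERE KeyCol = @key", cn);
    cmd.Parameters.AddWithValue("@amount", amount);
    cmd.Parameters.AddWithValue("@key", key);
    cmd.ExecuteNonQuery();

    // Optional: stop SQL Server from escalating to a table lock on this table.
    // new SqlCommand("ALTER TABLE dbo.Target SET (LOCK_ESCALATION = DISABLE)", cn).ExecuteNonQuery();
}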
I did some research and haven't found any explanation for this. So, if there's one out there, I'm sorry.
I am using TransactionScope to handle transactions with my SQL Server 2012 database. I'm also using Entity Framework.
The fact is that when I start a new transaction to insert a new record into my table, it locks the entire table, not just that row. So if I run Db.SaveChanges() without committing it, go to Management Studio, and try to get the already committed data from the same table, the query hangs and returns no data.
What I would like in this scenario is to lock just the new row, not the entire table.
Is that possible?
Thank you in advance.
One thing to be very careful of when using TransactionScope is that it uses the Serializable isolation level by default, which can cause many locking issues in SQL Server. The default isolation level in SQL Server is Read Committed, so you should consider using that in any transactions that use TransactionScope. You can factor out a method that creates your default TransactionScope and always sets it to ReadCommitted (see Why is System.Transactions TransactionScope default Isolationlevel Serializable). Also ensure that you have a using block when using TransactionScope, to make sure that the transaction is rolled back if errors occur during transaction processing (http://msdn.microsoft.com/en-us/library/yh598w02.aspx).
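A sketch of such a factory method:

// Creates a TransactionScope with ReadCommitted instead of the Serializable default.
// Call this everywhere you would otherwise write 'new TransactionScope()'.
using System.Transactions;

public static class TransactionScopeFactory
{
    public static TransactionScope CreateReadCommitted()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.DefaultTimeout
        };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}

// Usage:
using (var scope = TransactionScopeFactory.CreateReadCommitted())
{
    Db.SaveChanges();   // the Entity Framework context from the question
    scope.Complete();
}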
By default, SQL Server uses a pessimistic concurrency model, which means that as DML commands are processed (inserts, updates, deletes), it acquires exclusive locks on the data that is changing, which prevents other updates or SELECTs from completing until those locks are released. The only way to release those locks is to commit or roll back the transaction. So if you have a transaction that is inserting data into a table, and you run a SELECT * FROM myTable before the insert has completed, SQL Server will force your SELECT to wait until the open transaction has been committed or rolled back before returning the results. Normally transactions should be small and fast, and you would not notice as much of an issue. Here is more info on isolation levels and locking (http://technet.microsoft.com/en-us/library/ms378149.aspx).
In your case, it sounds like you are debugging and have hit a breakpoint in the code with the transaction open. For debugging purposes, you can add a NOLOCK hint to your query, which will show the results of the data that has been committed, along with the insert that has not yet been committed. Because NOLOCK returns uncommitted data, be very careful about using it in any production environment. Here is an example of a query with a NOLOCK hint.
SELECT * FROM myTable WITH(NOLOCK)
If you continue to run into locking issues outside of debugging, then you can also check out snapshot isolation (Great article by Kendra Little: http://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/). There are some special considerations when using snapshot isolation, such as tempdb tuning.
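If you go down that route, snapshot isolation has to be enabled on the database first, and readers then opt in; a minimal sketch, with the database name, table name, and connection string as placeholders (see the linked article for READ_COMMITTED_SNAPSHOT and the tempdb considerations):

// Requires: using System.Data.SqlClient; using System.Transactions;
// One-time setup, run by a privileged user: allow snapshot isolation on the database.
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();
    new SqlCommand("ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON", cn)
        .ExecuteNonQuery();
}

// Readers can then use snapshot isolation and see the last committed row versions
// instead of blocking behind the open insert transaction.
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var readCn = new SqlConnection(connectionString))
{
    readCn.Open();
    var count = (int)new SqlCommand("SELECT COUNT(*) FROM myTable", readCn).ExecuteScalar();
    scope.Complete();
}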