I have created a Flow to update the GUID (the unique identifier of CDS entity records) into a SQL Server table from CDS whenever a new record is created in CDS. The flow works fine if I create records one by one. But if I import multiple records (around 3,000) from SQL to CDS using Dataflows, I get the deadlock error below in the flow runs.
"Transaction (Process ID 74) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction."
The dataflow refreshes the data on a schedule, so I cannot resubmit the failed runs every time.
How can I get rid of this deadlock issue? Or is there another, more efficient approach to updating the SQL table?
I have tried options such as limiting the degree of parallelism (to 10 records) and configuring a retry policy, but with no success. If I reduce the parallelism to 1, the flow runs slowly and takes more than an hour to update 1,000 records.
If your query is a deadlock victim, you can create an Extended Events session to capture details about the event. Then, with the deadlock graph in hand, you can find the real cause of your issue.
The graph will show you exactly which resource lock is causing the deadlock and the statements involved.
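For example, a minimal sketch of an Extended Events session that captures deadlock graphs (the session and file names are placeholders); note that the built-in system_health session also records xml_deadlock_report events by default:

    CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
    ADD EVENT sqlserver.xml_deadlock_report
    ADD TARGET package0.event_file (SET filename = N'CaptureDeadlocks.xel');
    GO

    ALTER EVENT SESSION [CaptureDeadlocks] ON SERVER STATE = START;
    GO

    -- Later, read the captured deadlock graphs back from the .xel file.
    SELECT CAST(event_data AS XML) AS DeadlockGraph
    FROM sys.fn_xe_file_target_read_file(N'CaptureDeadlocks*.xel', NULL, NULL, NULL);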
You can also try changing the isolation level of the transactions on your connection with:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
To learn more:
https://learn.microsoft.com/en-us/sql/connect/jdbc/understanding-isolation-levels?view=sql-server-ver15
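For example, a minimal sketch of applying this on the connection that reads the table (the table and column names are hypothetical); note that READ UNCOMMITTED only affects reads, not the locks taken by writes:

    -- Applies to the current session/connection only.
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT RecordGuid, RecordName
    FROM dbo.CdsRecords;   -- reads without taking shared locks

    -- Restore the default when done.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;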
I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server) in a single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application is blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which will limit the amount of log data kept after the records are committed.
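If you go that route, it is a single statement (the database name below is a placeholder); keep in mind that the simple recovery model removes point-in-time restore:

    ALTER DATABASE [YourApplicationDb] SET RECOVERY SIMPLE;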
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question that you begin a transaction at the start of your triggers, and that you are logging to the other database before the commit.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table, as in the sketch below.
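A rough sketch of that pattern, assuming a hypothetical procedure with an explicit transaction (all table and column names other than tbSysMasterLog are made up); the table variable survives the COMMIT, so the log insert runs outside the original transaction:

    DECLARE @id INT, @newValue NVARCHAR(100);
    SET @id = 1;
    SET @newValue = N'example';

    -- Capture the changed rows into a table variable inside the transaction.
    DECLARE @changed TABLE (Id INT, NewValue NVARCHAR(100));

    BEGIN TRANSACTION;

    UPDATE dbo.Foo
    SET    Value = @newValue
    OUTPUT inserted.Id, inserted.Value INTO @changed (Id, NewValue)
    WHERE  Id = @id;

    COMMIT TRANSACTION;

    -- Log after the commit, so the log table is not locked for the
    -- duration of the application transaction.
    INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, RecordId, NewValue, LoggedAt)
    SELECT N'Foo', Id, NewValue, GETDATE()
    FROM   @changed;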
Normally inserts are fast and can happen in parallel without blocking. There are certain things, such as identity columns, that require ordering, but they are a very lightweight structure; they can be avoided by generating GUIDs so inserts are non-blocking. For something like your log table, though, an identity primary key gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the entries may not be in the same order in which the transactions occurred, because transactions take different amounts of time to commit.
We normally log into individual tables with a similar name to the master table e.g. FooHistory or AuditFoo
There are other options. A very lightweight method is to use a trace: this is what is used for performance tuning, and it will give you a copy of every statement run on the database (including those issued by triggers), which you can log to a different database server. Logging to a different server is a good idea if you are tracing a heavily used server, since the volume of data is massive when tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script the starting, stopping, and loading of the traces.
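For example, loading a trace file into a table can be done with fn_trace_gettable (the file path and target table name are placeholders):

    SELECT *
    INTO   dbo.ImportedTrace
    FROM   sys.fn_trace_gettable(N'C:\Traces\AppTrace.trc', DEFAULT);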
The load on the server that is getting the trace log is minimal and I have never had a locking problem on the server receiving the trace, so I am pretty sure that you are doing something to cause the locks.
We are experiencing the deadlock exception mentioned below while doing CRUD on two SQL Server tables from parallel threads by calling stored procedures. Here is the detailed scenario:
We have a desktop application that runs a code block on 100-150 parallel threads. The code block inserts into TableA using SQL bulk copy and calls three stored procedures. The stored procedures insert, update, and delete rows in TableB based on a selection from TableA.
As soon as the application starts executing the threads, SQL Server starts throwing the deadlock exception for a certain number of threads, while some of the threads run fine.
Exception Message:
Transaction (Process ID 160) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Any help in this regard is appreciated in advance.
Thanks.
Is this SQL Server or SQL Azure/Azure SQL DB? If it's "box" SQL Server, you might consider ALTER DATABASE SET READ_COMMITTED_SNAPSHOT ON. This will enable read versioning. It's still possible to encounter deadlocks in this state, but it's as close to a silver bullet as you're likely to get.
Read versioning changes the concurrency model in some subtle ways, so be sure to read about it first and make sure it's compatible with your business logic: https://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx
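A minimal sketch of enabling it (the database name is a placeholder); WITH ROLLBACK IMMEDIATE kicks out other connections so the change can take effect:

    ALTER DATABASE [YourDb]
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;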
Otherwise, Srivats's other suggestion about minimizing transaction scope is not always simple to implement but is still solid. To that list, I would add: Ensure that you have well-indexed query access paths, and verify that none of the queries within your transactions require full table or index scans.
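For instance, if the stored procedures select from TableA by a filter column, a covering index along these lines (the column names are assumptions) helps avoid full scans inside the transactions:

    CREATE NONCLUSTERED INDEX IX_TableA_BatchId
    ON dbo.TableA (BatchId)
    INCLUDE (Payload);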
The message makes it evident that transaction (Process ID 160) was deadlocked on lock resources with another process. The lock could be at different levels. The locks are not being released before another thread tries to lock that particular resource. Try killing that process ID and check the workflow for lock conflicts.
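If you go down that path, one way to inspect the locks a session holds and then terminate it (the session ID 160 is taken from the error message) is:

    -- Inspect what session 160 is holding or waiting on.
    SELECT request_session_id, resource_type, request_mode, request_status
    FROM   sys.dm_tran_locks
    WHERE  request_session_id = 160;

    -- Terminate the session only if it is truly stuck (use with care).
    KILL 160;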
I'd like multiple connections to an MS SQL Server database to make parallel/concurrent updates to a single table, for reasons of speed/reducing the total time it takes to execute.
The updates are made based on looking up a primary/ unique key.
Currently, this throws the error "transaction was deadlocked on lock resources with another process". I think it's because the table is being locked after the first connection runs an update transaction on it. All subsequent connections encounter a locked table being updated, and errors occur.
Is there a way in MS SQL Server to allow parallel/ concurrent updates to a single table?
Note: No incoming updates would ever 'be using' the same row at the same time -- they are all unique. They would be using the same table, however. Nevertheless, if I can somehow switch to "row locking" instead of table locking, would this solve the problem? Or if I can switch to "screw any locking, period" (is this referred to as uncommitted writing?) -- I would also do that, as it wouldn't affect data integrity with my process.
Let me know what you think. Thanks!
PS ... if I did "row locks" only, would my "commit size" matter in any way in regards to locking?
I suggest switching to row locking anyway if serialization of the updates is not absolutely necessary. You should also check the isolation level and locking order of the read transactions that involve the table, because they may also be part of the deadlock scenario.
Since table locking is more restrictive than row locking, the probability of running into a deadlock is higher. Furthermore, you won't gain speed by using multiple connections, because the table locks will serialize your concurrent transactions again.
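One way to nudge SQL Server toward row locks is to make sure each update seeks on the unique key (so it touches a single row) and, if needed, to add a ROWLOCK hint; the table and column names below are placeholders:

    DECLARE @id INT, @newValue NVARCHAR(100);
    SET @id = 1;
    SET @newValue = N'example';

    UPDATE dbo.TargetTable WITH (ROWLOCK)
    SET    SomeColumn   = @newValue
    WHERE  PrimaryKeyId = @id;   -- indexed unique key, so no table scan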
I need to sync DB tables with a remote DB from a mobile device (upload to the remote DB first, then download to the mobile device); the device may insert/update/delete rows in multiple tables.
The remote DB performs other operations based on the uploaded sync data. When the sync continues by downloading data to the mobile device, the remote DB is still performing the previous tasks, which causes the sync to fail. It is something like a race condition where both the sync and the DB operations want access to the remote database. How can I solve this issue? Is it possible to sync the DB and operate on the same DB at the same time?
I am using a SQL Server 2008 DB and MobiLink sync.
Edit:
Operations i do in sequence:
1. An iPhone is loaded with an application that uses MobiLink to sync data.
2. Sync means an upload (from device to remote DB) followed by a download (from remote DB to device).
3. The remote DB is the consolidated DB; the device DB is an UltraLite DB.
4. The remote DB has some triggers that fire when certain tables are updated.
5. An upload from the device to the remote DB fires those triggers when the sync upload finishes.
6. The moment the upload finishes, the download to the device starts.
7. At exactly the same moment, those DB triggers fire.
8. Now a deadlock occurs between the sync download and the trigger operations (which include update queries).
9. The sync fails with an error saying it cannot access some tables.
I did a lot of workarounds and Googling, and came up with a simple(?!) solution for the problem
(though the exact problem cannot be solved at this point; I tried my best).
Keep track of all clients who do a sync (a kind of user-details record).
Create a SQL job schedule that contains all the operations to be performed when a user syncs.
Announce a "maintenance period" every day to execute the job's tasks against the saved user/client sync details.
Keeping track of the client details every time is costly, but much needed!
The remote consolidated DB is "completely updated" only after the maintenance period.
Any approach better than this would be appreciated! All suggestions are welcome!
My understanding of your system is the following:
The mobile application sends an UPDATE statement to the SQL Server DB.
There is an ON UPDATE trigger that updates around 30 tables (i.e. at least 30 UPDATE statements in the trigger plus 1 main UPDATE statement).
The UPDATE is executed in a single transaction. This transaction ends when the trigger completes all of its updates.
The mobile application does not wait for the UPDATE to finish and sends multiple SELECT statements to get data from the database.
These SELECT statements query the same tables that the trigger above is updating.
Blocking and deadlocks occur on some query for some user, because the trigger has not completed its updates before the selects run and keeps its locks on the tables.
When optimizing, we try to make our processes easier on the computer: achieve the same result in fewer iterations and use fewer resources, or resources that are more available/less overloaded.
My suggestions for your design:
Use parameterized SPs. Every time SQL Server receives an ad hoc statement, it creates an execution plan. For 1 UPDATE statement with that trigger, the DB needs at least 31 execution plans. This happens on a busy production environment for every connection, every time the app updates the DB. It is a big waste.
How would SPs help reduce blocking?
Right now you have 1 transaction for 31 queries, where locks are taken against all tables involved and held until the transaction commits. With SPs you'll have 31 small transactions, and only 1-2 tables will be locked at a time.
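A minimal sketch of one such parameterized procedure (the object and column names are invented for illustration):

    CREATE PROCEDURE dbo.usp_UpdateOrderStatus
        @OrderId INT,
        @Status  TINYINT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- One small transaction touching a single table,
        -- instead of one long transaction spanning ~30 tables.
        UPDATE dbo.Orders
        SET    Status = @Status
        WHERE  OrderId = @OrderId;
    END;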
Another question I would like to address: how to do asynchronous updates to your database?
There is a feature in SQL Server called Service Broker. It allows you to process a message queue (rows from a queue table) automatically: it monitors the queue, takes messages from it, performs the processing you specify, and deletes processed messages from the queue.
For example, you save the parameters for your SPs as messages, and Service Broker executes the SPs with those parameters.
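A heavily simplified Service Broker sketch (all object names are placeholders) that enqueues a message for later processing; an activation procedure attached to the queue would RECEIVE the message and call the SP with the embedded parameters:

    -- One-time setup of the queue infrastructure.
    CREATE MESSAGE TYPE [//App/UpdateRequest] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//App/UpdateContract] ([//App/UpdateRequest] SENT BY INITIATOR);
    CREATE QUEUE dbo.UpdateQueue;
    CREATE SERVICE [//App/UpdateService] ON QUEUE dbo.UpdateQueue ([//App/UpdateContract]);
    GO

    -- Enqueue the SP parameters as an XML message; the caller returns immediately.
    DECLARE @handle UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [//App/UpdateService]
        TO SERVICE   '//App/UpdateService'
        ON CONTRACT  [//App/UpdateContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE [//App/UpdateRequest]
        (N'<update id="42" status="3" />');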
I have a stored procedure on SQL Server 2005 that runs a serializable transaction. Inside this transaction, it selects from a table with a ROWLOCK hint. At the end of the procedure, after the rollback/commit, it sets the transaction isolation level back to READ COMMITTED.
This procedure runs, and different processes have concurrent access controlled by these constraints, but suddenly, after some time, some processes throw a SqlException:
The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.
This is not predictable, it can happen early, or after an hour.
What can I do to solve this problem?
You have too many locks for your available memory. Increase RAM or rewrite your queries to use fewer locks.
SERIALIZABLE is a lock hog. Do you really need it?
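To see how many locks each session is holding (useful for spotting the hog), a quick check against the lock DMV:

    SELECT request_session_id,
           COUNT(*) AS lock_count
    FROM   sys.dm_tran_locks
    GROUP BY request_session_id
    ORDER BY lock_count DESC;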
I resolved this error by reducing the amount of data passed between servers. If you are selecting 1,000 records, try splitting the transaction into two batches of 500 records each, and keep reducing the batch size until the error stops.
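A rough sketch of that batching approach (the table, key, and filter column are placeholders), written so it also runs on SQL Server 2005, processing 500 rows per transaction until nothing is left:

    DECLARE @batchSize INT, @rows INT;
    SET @batchSize = 500;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Touch only a small slice per transaction so locks are held
        -- briefly and far less lock memory is needed.
        UPDATE TOP (@batchSize) dbo.SourceTable
        SET    Processed = 1
        WHERE  Processed = 0;

        SET @rows = @@ROWCOUNT;

        COMMIT TRANSACTION;
    END;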