In other words, what are the steps to acquire locks? Also, when the WITH (NOLOCK) hint is added to a query, or the READ UNCOMMITTED isolation level is used, does this avoid all or only some of the overhead associated with acquiring locks?
This is too broad for a specific answer, but in a nutshell SQL Server will employ various types of locks depending on the request being made of it. A SELECT might acquire one type of lock and an UPDATE another.
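As a rough illustration, you can observe which locks a statement takes by querying sys.dm_tran_locks from inside an open transaction (the table and column names here are hypothetical):

```sql
BEGIN TRANSACTION;

-- An UPDATE typically takes an IX lock on the table/page and X locks on the rows it touches.
UPDATE dbo.Orders              -- hypothetical table
SET    Status = 'Shipped'
WHERE  OrderId = 42;

-- Inspect the locks held by the current session.
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

ROLLBACK TRANSACTION;
```

Running the same pattern with a SELECT instead of the UPDATE shows the shared/intent-shared locks a read acquires.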
This link has a good 101 on the subject. SQL Server locking Basics
And this one too
Another good locking read
Related
In Microsoft Document on Locking behavior they state the following.
By default, a DELETE statement always acquires an intent exclusive
(IX) lock on the table object it modifies, and holds that lock until
the transaction completes. With an intent exclusive (IX) lock, no
other transactions can modify data; read operations can take place
only with the use of the NOLOCK hint or read uncommitted isolation
level.
I'm confused by their comment on updates and reads being blocked. It's my understanding that an Intent Shared (IS) lock will be taken on the table for reads, and that Intent Shared and Intent Exclusive locks are compatible. See Lock Compatibility
It's currently my understanding that these locks are compatible and that multiple actions (updates, reads, deletes) can be performed on the same table concurrently, provided they target different rows and no lock escalation takes place.
I've tried to find an answer to this, but since my confusion comes from Microsoft's official documentation, I haven't found one that restores my confidence in my mental model. I would appreciate any help.
You are correct. That doc is wrong. I've submitted a PR to get the doc fixed.
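A minimal two-session sketch of the compatibility the asker describes (the table name is hypothetical):

```sql
-- Session 1: a DELETE holds an IX lock on the table and X locks on the deleted rows.
BEGIN TRANSACTION;
DELETE FROM dbo.Accounts WHERE AccountId = 1;   -- hypothetical table
-- (transaction left open)

-- Session 2: at READ COMMITTED, this SELECT takes an IS lock on the table,
-- which is compatible with session 1's IX lock. It only blocks if it needs
-- a row-level S lock on the specific row session 1 modified.
SELECT * FROM dbo.Accounts WHERE AccountId = 2;  -- different row: not blocked
```

This is exactly why the quoted doc text was misleading: the IX table lock alone does not block readers of other rows.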
I have two questions.
I want to confirm whether a deadlock may happen if one session is querying a table that is locked by another session.
And how do I resolve the above-mentioned SQL error when more than one computer is accessing the MSSQL Server for actions such as UPDATE and DELETE?
A deadlock can occur when a transaction that has started to change data conflicts with another transaction over acquiring an exclusive lock. A blockage, even a very long one, does not necessarily lead to a deadlock.
It is possible to eradicate deadlocks entirely using the banker's algorithm, but this cripples any concurrent access to the database, effectively allowing only one user at a time!
However, we can reduce the occurrence of deadlocks by:
modeling the database correctly, i.e. respecting the normal forms
indexing tables for reading as well as updating
reducing the isolation level of transactions
switching from pessimistic locking (the default in SQL Server) to optimistic locking
avoiding unnecessary transactions
reducing the number of statements in each explicit transaction
changing the sequence of processing logic in the transaction
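For the isolation-level and optimistic-locking points above, a sketch of switching a database to row-versioning-based reads (the database name is hypothetical):

```sql
-- Enable read-committed snapshot so readers see the last committed row version
-- instead of blocking on writers.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;  -- hypothetical database name

-- Optionally allow full snapshot isolation as well.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session can then opt in explicitly:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```

Note that ALTER DATABASE ... SET READ_COMMITTED_SNAPSHOT needs (near-)exclusive access to the database to complete.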
See the link below on using WITH hints: for SELECT queries use WITH (NOLOCK); for UPDATE, DELETE, and INSERT use WITH (XLOCK).
I am making an application which reads and writes from a database and is accessed by multiple users. To avoid concurrency issues I am using a mutex. The database I am using is PostgreSQL. Its documentation says it is ACID compliant and provides various isolation levels such as read committed. So I could avoid using a mutex and put all my statements in a transaction block, and the database would take care of it. But I am not fully confident in the transaction-based approach, as I have trust issues with the database's automatic mechanisms.
My current approach:
mutex.lock();
// perform database operations
mutex.unlock();
Alternative approach:
begin transaction
// perform database operations
end transaction
Is it wise to handle this with a mutex, or should I rely on the database's mechanism?
Each user is accessing the database in a separate thread. And database operations are simple. One read and one write. That is all.
If the database is being accessed by multiple users simultaneously, an application level mutex does absolutely nothing to prevent them from stepping on each other on the database side1. You must use the locking constructs provided at the database level (transactions) to achieve what you are after.
A better use case for the application level mutex is to provide resource locking between threads running within the application (which may also be achievable with database transactions, but use the right tool for the job).
1: I have to be careful here: if an application handles multiple users in a single instance, or otherwise shares database objects outside of the database, then a mutex might be a good way to do locking. Even then, it won't protect things on the database side (meaning it's not functionality that is built into the DBMS), and it's still probably better to let the database take care of its own locks.
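For the PostgreSQL case above, a minimal sketch of letting the database do the locking instead of an application mutex (the table and column names are hypothetical):

```sql
BEGIN;

-- FOR UPDATE row-locks the selected row, so concurrent sessions doing the
-- same read-then-write pattern serialize on this one row only, not the table.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;

COMMIT;
```

Unlike the application mutex, this works correctly even when users connect from separate processes or machines, because the lock lives in the database.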
An app I am working on has to handle lots of ajax requests that needs to update some data on DB.
[Macromedia][SQLServer JDBC Driver][SQLServer]Transaction (Process ID
66) was deadlocked on lock | communication buffer resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
For reads, I've already used the WITH (NOLOCK) hint and that prevented a lot of deadlocks on reads.
What can I do to better deal with writes?
CFLock the update code in CF?
Or is there a way to ask SQL Server to lock a Row instead of a Table?
Has anyone tried implementing CQRS? It seems to solve the problem, but I am not clear on how to handle:
ID generation (right now it uses Auto Increment on DB)
How to deal with an update request failing if the server couldn't send the error back to the client right away.
Thanks
Here are my thoughts on this.
From the ColdFusion server side
I do believe that using named <cflock> tags around your ColdFusion code that updates the database could prevent the deadlock issue on the database server. Using a named lock would make each call single threaded. However, you could run into timeouts on the ColdFusion server side, waiting for the <cflock>, if transactions are taking a while. Handling it in this way on the ColdFusion server side may also slow down your application. You could do some load testing before and after to see how this method affects your app.
From the database server side
First of all let me say that I don't think deadlocks on the database server can be entirely prevented, just minimized and handled appropriately. I found this reference on TechNet for you - Minimizing Deadlocks. The first sentence from that page:
Although deadlocks cannot be completely avoided, following certain coding conventions can minimize the chance of generating a deadlock.
Here are the key points from that reference. They go into a bit more detail about each topic so please read the original source.
Minimizing deadlocks can increase transaction throughput and reduce system overhead because fewer transactions are:
Rolled back, undoing all the work performed by the transaction.
Resubmitted by applications because they were rolled back when deadlocked.
To help minimize deadlocks:
Access objects in the same order.
Avoid user interaction in transactions.
Keep transactions short and in one batch.
Use a lower isolation level.
Use a row versioning-based isolation level.
Set READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
Use snapshot isolation.
Use bound connections.
The "row versioning-based isolation level" may answer your question, "Or is there a way to ask SQL Server to lock a Row instead of a Table?". There are some notes in the original source regarding this option.
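As a sketch of row-level locking hints for the write side (the table name is hypothetical; note that hints are requests which SQL Server may still escalate under memory pressure):

```sql
-- Take an update (U) lock up front at row granularity to avoid the classic
-- S-to-X lock-conversion deadlock in read-then-write patterns.
BEGIN TRANSACTION;

SELECT Quantity
FROM   dbo.Inventory WITH (ROWLOCK, UPDLOCK)  -- hypothetical table
WHERE  ProductId = 7;

UPDATE dbo.Inventory
SET    Quantity = Quantity - 1
WHERE  ProductId = 7;

COMMIT TRANSACTION;
```

Two sessions running this block for the same product queue on the U lock instead of deadlocking; sessions touching different rows proceed in parallel.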
Here are some other references that came up during my search:
Avoiding deadlock by using NOLOCK hint
How to avoid sql deadlock?
Tips to avoid deadlocks? - This one mentions being careful when using the NOLOCK hint.
The Difficulty with Deadlocks
Using Row Versioning-based Isolation Levels
I am taking an operating systems class where we just learned about the 'readers and writers' problem: how do you deal with multiple processes that want to read and write the same memory at the same time? I'm also dealing with a version of this problem at work: I am writing an application that requires multiple users to read and write to a shared SQL Server database. Because the 'readers and writers' problem seems so well understood and discussed, I'm assuming that Microsoft has solved it for me. Meaning that I don't need to worry about setting permissions or configuring SQL Server to ensure that people are not reading and writing to the database at the same time. Specifically, can I assume that, with SQL Server 2005, by default:
No process reads while another process is writing
No two processes write at the same time
A writer will take an exclusive (X) lock on at least the row(s) they are modifying and will hold it until their transaction commits. X locks are incompatible with other X locks, so two writers cannot concurrently modify the same row.
A reader will (at default read committed isolation level) take a shared lock and release this as soon as the data is read. This is incompatible with an X lock so readers must wait for writing transactions to finish before reading modified data. SQL Server also has snapshot isolation in which readers are not blocked by writers but instead read an earlier version of the row.
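A two-session sketch of the default behavior described above (the table name is hypothetical):

```sql
-- Session 1: writer
BEGIN TRANSACTION;
UPDATE dbo.Users SET Name = 'Bob' WHERE UserId = 1;  -- X lock on the row, held until commit
-- (not yet committed)

-- Session 2: reader at the default READ COMMITTED level
SELECT Name FROM dbo.Users WHERE UserId = 1;  -- blocks until session 1 commits or rolls back

-- Session 2 alternative: with READ_COMMITTED_SNAPSHOT enabled on the database,
-- the same SELECT returns the last committed version of the row instead of blocking.
```

So both of the asker's assumptions hold at the defaults, with snapshot isolation as the opt-in exception for readers.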
Classic SQL servers like MS SQL Server use a pessimistic approach and lock rows, pages, or tables until a write operation is done. You really don't have to cope with that yourself, because, as you said, the vendor has already solved the problem. Have a look at this article for some first information; any database book will cover the problem in depth. If you are interested in this topic, I would suggest reading "Database Systems" by Connolly and Begg, for example.