I'm getting very poor performance on a stored procedure that does well over a hundred (!) updates to a couple of very small tables and it appears that all the concurrent users are constantly blocking each other.
A full rewrite of the proc is scheduled for later this year, but in the meantime I wanted to see if we can alleviate the problem by forcing row-level locking on each of the affected tables.
On Sybase, you can (or at least could circa 2007) force row-level locking for a table using this statement:
alter table titles lock datarows
On SQL Server, it appears that the only way to get the same effect is to use WITH (ROWLOCK) on every update or insert statement. And even then, it's only a hint that may be ignored.
Is there a way in SQL Server to force (or strongly favour) row-level locking for all updates to a given table?
First and foremost, make sure the updates are not table scans; in other words, make sure you have the proper indexes (UPDATE needs indexes too...). After you ensure that, and upon careful consideration, disable page locks on the index used:
ALTER INDEX ... SET (ALLOW_PAGE_LOCKS = OFF, ALLOW_ROW_LOCKS = ON);
Give this operation the same care you would give to juggling a barrel of TNT and a bottle of napalm...
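Spelled out, and assuming the updates seek on the clustered primary key, it would look something like this (the index and table names here are just examples):

ALTER INDEX PK_titles ON dbo.titles
SET (ALLOW_PAGE_LOCKS = OFF, ALLOW_ROW_LOCKS = ON);

With page locks disallowed on that index the engine can only take row locks there (or escalate to a table lock), so test under realistic concurrency before doing this in production.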
Additional information (from the comments below):
You can disable page locks on the clustered index, but you cannot disable page locks on heaps (since they are physically structured, it would not be possible to maintain a proper lock hierarchy without locking the page).
Lock escalation is a related, but different, subject. Lock escalation only comes into the picture if the statement chooses row-level lock granularity and then decides, during execution, to escalate to rowset (object) granularity. It is true that the OP may in fact be a victim of escalation, but from how I read the question I think the more likely cause is simply a missing index (i.e. a high lock granularity is chosen upfront because of the scan, and no escalation is ever triggered).
The initial granularity level is the result of the engine estimation of the task at hand. If the estimate indicates a large number of rows will have to be locked then it may opt for page granularity instead, since acquiring a large number of row locks is usually problematic. Lack of an index will trigger a scan, which will usually choose page granularity.
Also, escalation goes from row/page to rowset (object) granularity. Going through the intermediate page level first would be riddled with concurrency problems, so the 'big hammer' is used.
You can use this to disable lock escalation:
ALTER TABLE titles SET (LOCK_ESCALATION=DISABLE)
See the docs about lock escalation.
trace flag 1211
turn off page level locking via alter index
A combo of those 2 will give you row-level locking, unless of course SQL decides to go directly to table locking :)
There is also trace flag 1224, but it will ignore your request when under memory pressure; 1211 won't. Memory consumption might increase big time, so be warned: do some user activity simulation (RML or Benchmark Factory) and run perfmon to observe. I know people that have done it in the past.
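Put together, the combo looks something like this (the index and table names are only examples; 1211 affects the entire instance):

DBCC TRACEON (1211, -1);              -- disable lock escalation instance-wide

ALTER INDEX mytable_pk ON dbo.mytable
SET (ALLOW_PAGE_LOCKS = OFF, ALLOW_ROW_LOCKS = ON);

To make the flag survive a restart, add it as a -T1211 startup parameter instead of using DBCC TRACEON.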
Environment:
SQL Server 2019 on Windows Server 2019, on KVM backed by TrueNAS, 16 cores, 32 GB RAM.
Application runs 50 parallel threads all inserting into the same massive table.
This combination appears to work against the SQL Server architecture
Additional details
the problem table is both deep and wide - 20,000,000 rows with over 300 columns and 40-50 indexes
The application uses JDBC Batch API's. This particular table, due to row size, is inserting in batches of 1,000 rows.
Tables with more reasonable row sizes are inserting in batches of 10,000 rows
I can't share the actual DDL, but it's pretty mundane apart from the row simply being massive (a surrogate key BIGINT ID column, two natural key VARCHAR columns, 300 or so cargo columns, 0 BLOB/CLOB columns, then 40-50 indexes)
The primary key index DDL is "create unique index mytable_pk on dbo.mytable (keycolumn);"
The only other unique index DDL is "create unique index mytable_ndx1 on dbo.mytable (division, itemnum)";
The product that owns the database is used by hundreds of Fortune 2000 customers, so changing the data model is not an option for me or the product vendor.
Restrictions
Since the database is ultimately a third party's, any changes I make to it must be in-place. Once the data is inserted into it, I no longer have any access to it.
The database is owned by a third-party off-the-shelf application.
The primary key is a sequential integer.
Observations and metrics
Early in the process, we were bottlenecked on CPU resources.
Once we hit about 1,000,000 rows, we were single threading on latches, sometimes spending over two seconds in a latch, and rarely spending less than 500ms in a latch. Latching and IO buffer waits were both excessive. CPU dropped to about 12% usage.
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server and bottlenecking on CPU on the application which is very good from the SQL Server perspective.
After reading Microsoft's literature, I came to the conclusion that the data model is working against SQL Server's indexing architecture for tuning for massive inserts.
I will not always have the option of dropping and recreating the indexes. Is there a way to tune the table to distribute the I/O?
Now to the real question:
Is there a way to tune SQL Server, under the covers, to distribute the I/O so that sequential key values in an index do not land in the same buffer page when doing massive inserts of sequential data?
There are several well-known approaches to addressing last page insert contention in SQL Server.
Many of these are covered in the documentation at Resolve last-page insert PAGELATCH_EX contention in SQL Server. Summarising the options from that link:
1. Use OPTIMIZE_FOR_SEQUENTIAL_KEY (details)
2. Move primary key off identity column
3. Make the leading key a non-sequential column
4. Add a non-sequential value as a leading key
5. Use a GUID as a leading key
6. Use table partitioning and a computed column with a hash value
7. Switch to In-Memory OLTP
Method 7 can also be implemented as an in-memory OLTP table to handle a high rate of ingestion with regular batch moves to the final destination table. For the very highest concurrency, use natively compiled code with the in-memory table as much as possible (including for the inserts). The frequency and size of moves is dictated by your requirements.
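Method 1, for instance, can be switched on for an existing index without a rebuild (SQL Server 2019 and later); using the primary key index named in the question, it would look something like this:

ALTER INDEX mytable_pk ON dbo.mytable
SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);

It doesn't remove the last-page hotspot, it only manages the convoy of waiting threads more gracefully, so treat it as a mitigation rather than a cure.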
As mentioned in another answer, delayed durability can also improve insert performance in many cases.
Related Q & A: Solving periodic high PAGELATCH_EX Waits. Last page contention?
All that said, you haven't shown evidence of a last-page contention issue at all. More likely, you're encountering problems related to updating all those secondary indexes and a lack of memory on the instance meaning index maintenance often has to wait for pages to be brought in from storage for modification. You don't mention the type of latch you see waits on, but I imagine they'd be PAGEIOLATCH_*.
The primary solution would be to dramatically increase the memory available to SQL Server for its buffer pool so fewer IOs are necessary. Failing that, a faster storage subsystem would be required.
Have you tried using Delayed Durability?
When to use delayed transaction durability
Some of the cases in which you could benefit from using delayed transaction durability are:
You can tolerate some data loss.
If you can tolerate some data loss, for example, where individual records are not critical as long as you have most of the data, then delayed durability may be worth considering. If you cannot tolerate any data loss, do not use delayed transaction durability.
You are experiencing a bottleneck on transaction log writes.
If your performance issues are due to latency in transaction log writes, your application will likely benefit from using delayed transaction durability.
Your workloads have a high contention rate.
If your system has workloads with a high contention level much time is lost waiting for locks to be released. Delayed transaction durability reduces commit time and thus releases locks faster, which results in higher throughput.
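If you decide to try it, delayed durability is allowed at the database level and can then be requested per commit; YourDatabase below is just a placeholder name:

ALTER DATABASE YourDatabase SET DELAYED_DURABILITY = ALLOWED;

COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

With FORCED instead of ALLOWED, every commit in the database becomes delayed with no code changes, at the cost of possibly losing the last unflushed log records on a crash, which is exactly the trade-off described above.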
The short answer to your "real question" is no, because contiguous keys of a disk-based b-tree index must be stored in the same page.
I've never used SQL server, but your problem isn't specific to one database, so maybe this can still help.
When inserting a large number of rows per second, the bottlenecks are either going to be parsing overhead (which can be parallelized), index updates (which may or may not be parallelizable), primary key sequence generation, or other stuff like Postgres' large object support, but that depends on your column types and database quirks. Then, at some point, any transactional database must generate sequential transaction log entries, which is also a concurrency bottleneck.
First thing you should do is check if the inserts are grouped into transactions (not one insert per transaction). Then make sure the IO is fast, look for bottlenecks there, iowait, etc.
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server
So that eliminates some of the candidates and hints that the problem is indices.
For example if 50 threads each insert a row at the same time, and...
You have a high cardinality index with each row hitting a different page in the index, then these can be parallelized
You have a low cardinality index, most of the inserted rows have the same value in the same column, and all these threads are fighting for control of the same index page.
This can compound with index/table page splits if your fillfactor is too high, in this case all the threads will want to insert in the same index page, and it's already full, so one thread is splitting the page while all others are waiting.
Unfortunately you didn't post the table info in the question, which you should really do. But you probably know if your indices are low cardinality or high. The first thing you could do is run the same tests again, adding the indices one by one, try to see which one causes trouble.
You can also lower fillfactor so there is less chance the inserts end up in a page that is already full.
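For example (the index and table names are placeholders), rebuilding with a lower fillfactor leaves free space in each leaf page:

ALTER INDEX ix_category ON dbo.mytable
REBUILD WITH (FILLFACTOR = 80);

This only helps until the free space fills up again, so it normally goes hand in hand with regular index maintenance.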
If you find a problematic low cardinality index then you should first wonder if it's actually useful for queries; maybe you can drop it. If you want to keep it, you can hack it into a high cardinality index by adding a dummy column at the end. For example, if you have an index on (category) which has few distinct values and causes problems for inserts, you can turn it into (category, other_column), which will work just as well for selecting on category and might provide some extra features like sorting on other_column while selecting on category. However, other_column should not be the PK, a date, or any other column whose values end up in the same page across all your concurrent inserts, because that would be back to square one.
Next, you can try single-threading, or a low number of threads. Back to this:
In a second test, I dropped all of the indexes and re-ran the job. The job completed 8 times as quickly, showing zero load on the SQL server and bottlenecking on CPU on the application which is very good from the SQL Server perspective.
This may look nice at first glance but there's a problem here. Basically your application is doing the easy things (processing rows) and delegating the hard things (ie, concurrency) to the database. That's fine until it exceeds the database's capabilities, then it breaks down. Databases are excellent at handling concurrency correctly, but doing it fast is a very hard problem: coordinating several cores on a lock has a hard performance limit, caused by latency of communication between the cores, which is the speed of information propagation, in other words the speed of light, which cannot be negotiated with.
Locks are just memory held as cache lines in CPU caches. So a side effect of the way multicore systems work is, it's much faster for the same core to reacquire a lock it just released, because the line is still in its cache, so there is no slow inter-core communication involved. Likewise, several cores attempting to modify different parts of the same index page will result in cache line exchanges between them and lots of communication to determine what core owns what byte in that page. And that is surprisingly slow, it can take microseconds instead of nanoseconds.
In addition you have 50 client threads, so 50 server threads, and only 16 cores, so on the database server the OS will multitask the 50 threads between the 16 cores. This means the OS will end up putting one thread to sleep while it's holding a lock, and when that happens, performance is destroyed.
So the next test you can do is to compare insertion time, with all your indices in place, between these two scenarios:
Your current one with 50 threads
Then stop it, copy the inserted data from your main table into a temp table, truncate the main table, and insert the exact same data again with:
INSERT INTO yourtable SELECT * FROM temptable
In the second case you're inserting the same data. For the test to be valid it should be in the same order, so you might want to add an ORDER BY primary key while copying the rows into the temp table, so they come out in the proper order. I don't know if the tables are clustered, but you'll find a way to get the order correct.
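A sketch of that second scenario, using the table and key names from the question (adjust to your schema):

SELECT * INTO #staging FROM dbo.mytable;

TRUNCATE TABLE dbo.mytable;

INSERT INTO dbo.mytable
SELECT * FROM #staging
ORDER BY keycolumn;    -- re-insert in the original key order

If the surrogate key is an IDENTITY column you will also need SET IDENTITY_INSERT dbo.mytable ON and an explicit column list, and TRUNCATE will refuse to run if foreign keys reference the table (fall back to DELETE in that case).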
You can also try various orders, one of the indices may be faster if data is inserted in an order that it likes.
If the second insert is much faster than the multi-threaded one, then that gives you a clue about what you need to do. In that case the answer is probably a funnel, i.e. a process that gathers rows generated by the many threads and inserts them using a low number of threads, maybe just one.
This can simply be all the threads inserting into a non-indexed table, and a separate task flushing this table into the main one every X milliseconds.
This is really more of a discussion than a specific question about nolock.
I took over an app recently in which almost every query (and there are lots of them) has the NOLOCK option on it. Now, I am pretty new to SQL Server (I used Oracle for 10 years), but I find this pretty disturbing. So this weekend I was talking with one of my friends who runs a rather large e-commerce site (name withheld to protect the guilty), and he says he has to do this with all of his SQL Servers because he would otherwise always end up in deadlocks.
Is this just a huge shortfall of SQL Server? Is this a failure in the DB design (mine is not third normal form, but it's close)? Is anybody out there running a SQL Server app without NOLOCKs? These are issues that Oracle handles better with more granular record locks.
Is SQL Server just not able to handle big loads? Is there some better workaround than reading uncommitted data? I would love to hear what people think.
Thanks
SQL Server added snapshot isolation in SQL Server 2005; this enables you to read the latest committed value without having to wait for locks. Stack Overflow is also using snapshot isolation. The snapshot isolation level is more or less the same as what Oracle uses, which is why deadlocks are not very common on an Oracle box. Just be aware that you need plenty of tempdb space if you do enable it.
from Books Online:
When the READ_COMMITTED_SNAPSHOT database option is set ON, read committed isolation uses row versioning to provide statement-level read consistency. Read operations require only SCH-S table level locks and no page or row locks. When the READ_COMMITTED_SNAPSHOT database option is set OFF, which is the default setting, read committed isolation behaves as it did in earlier versions of SQL Server. Both implementations meet the ANSI definition of read committed isolation.
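Enabling it is a database-level change (the database name below is a placeholder); switching READ_COMMITTED_SNAPSHOT on needs a brief moment with no other active connections in that database:

ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

After that, ordinary READ COMMITTED readers use row versions stored in tempdb instead of taking shared locks, which is why the tempdb sizing warning above matters.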
If somebody says that without NOLOCK their application always gets deadlocked, then there is (more than likely) a problem with their queries. A deadlock means that two transactions cannot proceed because of resource contention and the problem cannot be resolved. An example:
Consider Transactions A and B. Both are in-flight. Transaction A has inserted a row into table X and Transaction B has inserted a row into table Y, so Transaction A has an exclusive lock on X and Transaction B has an exclusive lock on Y.
Now, Transaction A needs run a SELECT against table Y and Transaction B needs to run a SELECT against table X.
The two transactions are deadlocked: A needs resource Y and B needs resource X. Since neither transaction can proceed until the other completes, the situation cannot be resolved: neither transaction's demand for a resource can be satisfied until the other transaction releases its lock on the resource in contention (either by ROLLBACK or COMMIT, it doesn't matter).
SQL Server identifies this situation, selects one transaction or the other as the deadlock victim, aborts and rolls back that transaction, and leaves the other transaction free to proceed to its presumable completion.
Deadlocks are rare in real life (IMHO). One rectifies them by
ensuring that transaction scope is as small as possible, something SQL server does automatically (SQL Server's default transaction scope is a single statement with an implicit COMMIT), and
ensuring that transactions access resources in the same sequence. In the example above, if transactions A and B both locked resources X and Y in the same sequence, there would not be a deadlock.
Timeouts
A timeout, on the other hand, occurs when a transaction exceeds its wait time and is rolled back due to resource contention. For instance, Transaction A needs resource X. Resource X is locked by Transaction B, so Transaction A waits for the lock to be released. If the lock isn't released within the query's timeout limit, the waiting transaction is aborted and rolled back. Every query has a query timeout associated with it (the default value is 30s, I believe), after which the transaction is aborted and rolled back. The query timeout can be set to 0, in which case SQL Server will let the query wait forever.
This is probably what they are talking about. In my experience, timeouts like this usually occur in big databases when large batch jobs are updating thousands and thousands of records in a single transaction, although they can also happen because a transaction goes on too long (connect to your production database in Query Analyzer, execute BEGIN TRANSACTION, update a single row in a frequently hit table, go to lunch without executing ROLLBACK or COMMIT TRANSACTION, and see how long it takes for the production DBAs to go apes**t on you. Don't ask me how I know this).
This sort of timeout is usually what results in splattering perfectly innocent SQL with all sorts of NOLOCK hints
[TIP: if you're going to do that, just execute SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED as the first statement in your stored procedure and have done with it.]
The problem with this approach (NOLOCK/READ UNCOMMITTED) is that you can read uncommitted data from other transactions: stuff that is incomplete or that may get rolled back later, so your data integrity is compromised. You might be sending out a bill based on data with a high level of bogosity.
My general rule is that one should avoid the use of table hints insofar as possible. Let SQL Server and its query optimizer do their jobs.
The right way to avoid this sort of issue is to avoid the sort of transactions (insert a million rows all at one fell swoop, for instance) that cause the problems. The locking strategy implicit in relational database SQL is designed around small transactions of short scope. Lock should be small in scope and short in duration. Think "bank teller updating somebody's checking account with a deposit." as the underlying use case. Design your processes to work in that model and you'll be much happier all the way 'round.
Instead of inserting a million rows in one mondo insert statement, do the work in independent chunks and commit each chunk independently. If your million-row insert dies after processing 999,000 rows, all the work done is lost (not to mention that the rollback can be a b*tch, and the table is still locked during rollback as well). If you insert the million rows in blocks of 1000 rows each, committing after each block, you avoid the lock contention that causes deadlocks, as locks will be obtained and released and things will keep moving. If something goes south in the 999th block of 1000 rows and the transaction gets aborted and rolled back, you've still gotten 998,000 rows inserted; you've only lost 1000 rows of work. Restart/retry is much easier.
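In T-SQL that pattern looks roughly like this (the table and column names are invented for the example; dbo.staging holds the rows still to be moved):

DECLARE @batch int = 1000,
        @rows  int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    INSERT INTO dbo.target (id, payload)
    SELECT TOP (@batch) s.id, s.payload
    FROM dbo.staging AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.target AS t WHERE t.id = s.id);

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;    -- locks are released here, after every block
END;

If one block fails, only that block's work is rolled back, and the NOT EXISTS check makes the loop safe to restart.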
Also, lock escalation occurs in large transactions. For efficiency, the engine escalates to a broader lock as the number of locks held by a transaction increases. If a single transaction inserts/updates/deletes a single row in a table, I get a row lock. Keep doing that, and once the number of locks held by that transaction against that table hits a threshold value, SQL Server escalates the locking strategy: the individual row (or page) locks are consolidated into a single table lock (escalation goes straight to the table, or partition, level; SQL Server does not escalate row locks into page locks). From that point forward the transaction locks the entire table, and nobody else may play until the transaction commits or rolls back.
Whether you can functionally avoid the use of NOLOCK/READ UNCOMMITTED is entirely dependent on the nature of the processes hitting the underlying database (and the culture of the organization owning it).
Myself, I try to avoid its use as much as possible.
Hope this helps.
No, there is no need to use NOLOCK. Links: SO 1
As for load, we deal with 2,000 rows per second, which is small change compared to 35k TPS.
Deadlocks are caused by lock contention, usually from inconsistent write order on tables across transactions. ORMs especially are rubbish at this. We get them very infrequently. A well-written DAL should retry too, as per MSDN.
In a traditional normalized OLTP environment, NOLOCK is a code smell and almost certainly unnecessary in a properly designed system.
In a dimensional model, I used NOLOCK extensively to avoid locking very large fact and dimension tables which were being populated with later fact data (and dimensions may have been expiring). In the dimensional model, the facts either never change or never change after a certain point. Similarly, any dimension which is referenced will also be static, so for example, the NOLOCK will stop your long analysis operation on yesterday's data from blocking a dimension expiration during a data load for today's data.
You should only use NOLOCK on an unchanging table. Of course, it is then effectively the same as READ COMMITTED SNAPSHOT. Without the snapshot, you are only saving the time it takes to apply a shared lock and then remove it, which in most cases isn't necessary anyway.
As for a changing table... NOLOCK doesn't just mean getting a row before a transaction is done updating all of its rows. You can get ghost data as data pages split, or even index pages split. Or no data at all. That alone scared me away, but I think there may be even more scenarios where you simply get the wrong data.
Of course, nolock for getting rough estimates or to just check in on a process might be reasonable.
Basic rule of thumb: if you care about the data at all, and the data is changing, then do not use NOLOCK.
Do I understand correctly that table/row lock hints are used for pessimistic transaction (TX) isolation models of concurrency ONLY?
In other words, can table/row lock hints be used at all while the optimistic TX isolation provided by SQL Server (2005 and higher) is engaged?
When would one need pessimistic TX isolation levels/hints in SQL Server 2005+ if the latter provides built-in optimistic (aka snapshot aka versioning) concurrency isolation?
I have read that the pessimistic options are legacy and are not needed anymore, though I am in doubt.
Also, with optimistic (aka snapshot aka versioning) TX isolation levels built into SQL Server 2005+,
when would one need to manually code for optimistic concurrency features?
The last question is inspired by having read:
"Optimistic Concurrency in SQL Server" (September 28, 2007)
describing custom coding to provide versioning in SQL Server.
Optimistic concurrency requires more resources and is more expensive when the conflict occurs.
Two sessions can read and modify the values and the conflict only occurs when they try to apply their changes simultaneously. This means that in case of the concurrent update both values should be stored somewhere (which of course requires resources).
Also, when a conflict occurs, usually the whole transaction should be rolled back or the cursor refetched, which is expensive too.
Pessimistic concurrency model uses locking, thus downgrading concurrency but improving performance.
In the case of two concurrent tasks, it may be cheaper for the second task to wait for a lock to be released than to spend CPU time and disk I/O on two simultaneous pieces of work and then yet more on rolling back the less fortunate one and redoing it.
Say, you have a query like this:
UPDATE mytable
SET myvalue = very_complex_function(@range)
WHERE rangeid = @range
Here, very_complex_function reads some data from mytable itself. In other words, this query transforms the subset of mytable sharing the value of range.
Now, when two functions work on the same range, there may be two scenarios:
Pessimistic: the first query locks, the second query waits for it. The first query completes in 10 seconds, the second one does too. Total: 20 seconds.
Optimistic: both queries work independently (on the same input). This shares CPU time between them, plus some overhead for switching. They also have to keep their intermediate data somewhere, so the data is stored twice (which implies twice the I/O or memory). Let's say both complete almost at the same time, in 15 seconds.
But when it's time to commit the work, the second query will conflict and will have to roll back its changes (say this takes the same 15 seconds). Then it needs to reread the data and do the work again with the new set of data (10 seconds).
As a result, both queries complete later than with a pessimistic locking: 15 and 40 seconds vs. 10 and 20.
When would one need pessimistic TX isolation levels/hints in SQL Server 2005+ if the latter provides built-in optimistic (aka snapshot aka versioning) concurrency isolation?
Optimistic isolation levels are, well, optimistic. You should not use them when you expect high contention on your data.
BTW, optimistic isolation (for the read queries) was available in SQL Server 2000 too.
I have a detailed answer here: Developing Modifications that Survive Concurrency
I think there's a bit of confusion over terminology here.
The technique of optimistic locking/optimistic concurrency/... is a programming technique used to avoid the following scenario :
start transaction
read data, setting a "read" lock on it to prevent any deletes/modifications to our data
display data on user's screen
await user input, lock remains active
keep awaiting user input, lock still preventing any writes/modifications
user input never comes (for whatever reason)
transaction times out (and this usually does not happen very quickly, as the user must be given reasonable time to enter his input).
Optimistic locking replaces this with the following:
start transaction READ
read data, setting a "read" lock on it to prevent any deletes/modifications to our data
end transaction READ, releasing the read lock just set
display data on user's screen
await user input, but data can be modified/deleted meanwhile by other transactions
user input arrives
start transaction WRITE
verify that the data has remained unaltered, raising an exception if it has
apply user updates
end transaction WRITE
So the single "user transaction" to go fetch some data, and change and update them, consists of two distinct "database transactions". What is usually called "isolation levels" applies to those database transactions. The "optimistic locking" that you refer to applies to the "user transaction".
The matter is further complicated in that, broadly speaking, two completely distinct strategies are possible for the "isolating the database transactions part" :
MVCC
2-phase locking
I think the "snapshot versioning isolation level" means that the MVCC technique (well, one of its various possible variations) is being used for the database transaction. The other commonly known isolation levels apply more to transaction isolation using 2PL as the serialization(/isolation) technique. (And mixing them up can get messy ...)
I have two long-running queries that both run in transactions and access the same table, but completely separate rows in that table. These queries also perform some updates and inserts based on those queries.
It appears that when these run concurrently they encounter a lock of some kind, and it prevents the tasks from finishing; the process locks up when it goes to update one of the rows. I'm using an exclusive row lock on the rows being read, and the lock that shows up on the process is an LCK_M_IX lock.
Two questions:
When I update/insert a single row does it lock the entire table?
What can be done to work around this sort of issue?
Typically no, but it depends (most often used answer for SQL Server!)
SQL Server will have to lock the data involved in a transaction in some way. It has to lock the data in the table itself, and the data in any affected indexes, while you perform a modification. In order to improve concurrency, there are several "granularities" of locking that the server might decide to use, in order to allow multiple processes to run: row locks, page locks, and table locks are common (there are more). Which scale of locking is in play depends on how the server decides to execute a given update. Complicating things, there are also classifications of locks, like shared, exclusive, and intent exclusive, that control whether the locked object can be read and/or modified.
It's been my experience that SQL Server mainly uses page locks for changes to small portions of tables, and past some threshold will automatically escalate to a table lock, if a larger portion of a table seems (from stats) to be affected by an update or delete. The idea is that it is faster to lock a table (one lock) than obtaining and managing thousands of individual row or page locks for a big update.
To see what is happening in your specific case, you'd need to look at the query logic and, while your stuff is running, examine the locking/blocking conditions in sys.dm_tran_locks, sys.dm_os_waiting_tasks or other DMV's. You would want to discover what exactly is getting locked by what step in each of your processes, to discover why one is blocking the other.
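For example, a query along these lines (adjust the filters as needed) lists the lock requests that are waiting and who is blocking them while both jobs run:

SELECT
    tl.request_session_id,
    wt.blocking_session_id,
    tl.resource_type,
    tl.resource_associated_entity_id,
    tl.request_mode,               -- e.g. S, X, IX
    wt.wait_duration_ms
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.dm_os_waiting_tasks AS wt
    ON wt.session_id = tl.request_session_id
WHERE tl.request_status = 'WAIT';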
The short version:
No
Fix your code.
The long version:
LCK_M_IX is an intent lock, meaning the operation will place an X lock on a subordinate element. E.g. when updating a row in a table, the operation first takes an IX lock on the table before taking an X lock on the row being updated/inserted/deleted. Intent locks are a common strategy for dealing with hierarchies, like table/page/row, because the lock manager cannot understand the physical structure of the resources requested to be locked (i.e. it cannot know that an X lock on page P1 is incompatible with an S lock on row R1 because R1 is contained in P1). For more details, see Lock Modes.
The fact that you are seeing contention on intent locks means you are trying to obtain high-level object locks, like table locks. You will need to analyze your source code for the request being blocked (the one requesting the lock incompatible with LCK_M_IX) and remove the cause of the object-level lock request. What that means will depend on your source code; I cannot know what you're doing there. My guess is that you are using an erroneous lock hint.
A more general approach is to rely on SNAPSHOT ISOLATION. But this, most likely, will not solve the problem you're seeing, since snapshot isolation can only benefit row level contention issues, not applications that request table locks.
A frequent aim when using transactions: keep them as short and sweet as possible. I get the sense from the wording in your question that you are opening a transaction, then doing all kinds of things, some of which take a long time, and then expecting multiple users to be able to run this same code concurrently. Unfortunately, if you perform an insert at the beginning of that set of code, then do 40 other things before committing or rolling back, it is possible that the insert will block everyone else from running the same type of insert, essentially turning your operation from a free-for-all into a serial one.
Find out what each query is doing, and whether you are getting lock escalations that you wouldn't expect. Just because you say WITH (ROWLOCK) on a query doesn't mean SQL Server will be able to comply... if you are touching multiple indexes, indexed views, persisted computed columns, etc., then there are all kinds of reasons why your rowlock may not hold any water. You also might have things later in the transaction that are taking longer than you think, and maybe you don't realize that the locks on all of the objects involved in the transaction (not just the statement that is currently running) can be held for the duration of the transaction.
Different databases have different locking mechanisms, but ones like SQL Server and Oracle have different types of locking.
The default on SQL Server appears to be pessimistic Page locking - so if you have a small number of records then all of them may get locked.
Most databases should not lock when running a script, so I'm wondering whether you're potentially running multiple queries concurrently without transactions.
One of the classical reasons we have a database deadlock is when two transactions are inserting and updating tables in a different order.
For example, transaction A inserts in Table A then Table B.
And transaction B inserts in Table B followed by A.
Such a scenario is always at risk of a database deadlock (assuming you are not using serializable isolation level).
My questions are:
What kind of patterns do you follow in your design to make sure that all transactions are inserting and updating in the same order.
A book I was reading had a suggestion that you can sort the statements by the name of the table. Have you done something like this, or something different, that would enforce that all inserts and updates are in the same order?
What about deleting records? Deletes need to start from child tables, while updates and inserts need to start from parent tables. How do you ensure that this would not run into a deadlock?
All transactions insert/update in the same order.
Deletes: identify the records to be deleted outside a transaction, then attempt the deletion in the smallest possible transaction, e.g. looking them up by the primary key or similar identified during the lookup stage.
Small transactions generally.
Indexing and other performance tuning, both to speed up transactions and to promote index lookups over table scans.
Avoid 'hot tables', e.g. one table with incrementing counters for other tables' primary keys. Any other 'switchboard' type configuration is risky.
Especially if not using Oracle, learn the locking behaviour of the target RDBMS in detail (optimistic/pessimistic, isolation levels, etc.).
Ensure you do not allow row locks to escalate to table locks, as some RDBMSes will.
Deadlocks are no biggie. Just be prepared to retry your transactions on failure.
And keep them short. Short transactions consisting of queries that touch very few records (via the magic of indexing) are ideal to minimize deadlocks - fewer rows are locked, and for a shorter period of time.
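A minimal retry sketch for a statement that may be chosen as the deadlock victim (error 1205); the two updates inside the transaction are placeholders for your real work:

DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
        UPDATE dbo.AuditLog SET LastChange = SYSUTCDATETIME() WHERE AccountId = 1;

        COMMIT TRANSACTION;
        BREAK;                             -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;             -- deadlock victim: try again
        ELSE
            THROW;                         -- anything else, or out of retries: re-raise
    END CATCH;
END;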
You need to know that modern database engines don't lock tables; they lock rows; so deadlocks are a bit less likely.
You can also avoid locking by using MVCC and the CONSISTENT READ transaction isolation level: instead of locking, some threads will just see stale data.
Carefully design your database processes to eliminate as much as possible transactions that involve multiple tables. When I've had database design control there has never been a case of deadlock for which I could not design out the condition that caused it. That's not to say they don't exist and perhaps abound in situations outside my limited experience; but I've had no shortage of opportunities to improve designs causing these kinds of problems. One obvious strategy is to start with a chronological write-only table for insertion of new complete atomic transactions with no interdependencies, and apply their effects in an orderly asynchronous process.
Always use the database default isolation levels and locking settings unless you are absolutely sure what risks they incur, and have proven it by testing. Redesign your process if at all possible first. Then, impose the least increase in protection required to eliminate the risk (and test to prove it.) Don't increase restrictiveness "just in case" - this often leads to unintended consequences, sometimes of the type you intended to avoid.
To repeat the point from another direction, most of what you will read on this and other sites advocating the alteration of database settings to deal with transaction risks and locking problems is misleading and/or false, as demonstrated by how they conflict with each other so regularly. Sadly, especially for SQL Server, I have found no source of documentation that isn't hopelessly confusing and inadequate.
I have found that one of the best investments I ever made in avoiding deadlocks was to use a Object Relational Mapper that could order database updates. The exact order is not important, as long as every transaction writes in the same order (and deletes in exactly the reverse order).
The reason that this avoids most deadlocks out of the box is that your operations are always table A first, then table B, then table C (which perhaps depends on table B).
You can achieve a similar result as long as you exercise care in your stored procedures or data layer's access code. The only problem is that it requires great care to do it by hand, whereas an ORM with a Unit of Work concept can automate most cases.
UPDATE: A delete should run forward to verify that everything is the version you expect (you still need record version numbers or timestamps) and then delete backwards once everything verifies. As this should all happen in one transaction, the possibility of something changing out from under you shouldn't exist. The only reason for the ORM doing it backwards is to obey the key requirements, but if you do your check forward, you will have all the locks you need already in hand.
I analyze all database actions to determine, for each one, whether it needs to be in a multiple-statement transaction, and then for each such case, what the minimum isolation level is that prevents deadlocks... As you said, serializable will certainly do so...
Generally, only a very few database actions require a multiple statement transaction in the first place, and of those, only a few require serializable isolation to eliminate deadlocks.
For those that do, set the isolation level for that transaction before you begin, and reset it to whatever your default is after it commits.
Your example would only be a problem if the database locked the ENTIRE table. If your database is doing that...run :)