Isolation Level in SQL Server 2008 R2 - sql-server

I have gone through the entire Microsoft site to understand the isolation levels in SQL Server 2008 R2. However, before adopting one I would like to take suggestions from the experts at SO.
I have a PHP based web page primarily used as a dashboard. Users (not more than 5) will upload bulk data (around 40,000 rows) each day, and around 70 users will have read-only access to the database. Please note that I have a fixed schedule for these 5 upload users, but I want to mistake-proof the process against any data loss. Help me with the below questions:
What is the best isolation level I can use?
Will the default READ COMMITTED isolation help me here?
Also, is there a way to set the isolation level through SSMS for a particular database, other than T-SQL statements (i.e. a universal isolation level for the database)?
70 users will have the download option; is there a chance that the DB will get corrupted if all or most of them try to download at the same time? How do I avoid that?
Any suggestions from the experts....
Regards,
Yuvraj S

Isolation levels are really about how long shared locks on data being read are kept. But as Lieven already mentioned: they are NOT about preventing "corruption" in the database - they are about preventing readers and writers from getting in each other's way.
First up: any write operation (INSERT, UPDATE) will always require an exclusive lock on that row, and exclusive locks are not compatible with anything else - so if a given row to be updated is already locked, any UPDATE operation will have to wait - no way around this.
For reading data, SQL Server takes out shared locks - and the isolation levels are about how long those are held.
The default isolation level (READ COMMITTED) means: SQL Server will try to get a shared lock on a row and, if successful, read the contents of the row and release that lock again right away. So the lock only exists for the brief period during which the row is being read. Shared locks are compatible with other shared locks, so any number of readers can read the same rows at the same time. Shared locks are, however, incompatible with exclusive locks, so a shared lock mostly prevents UPDATEs on the same rows.
And then there's also the READ UNCOMMITTED isolation level - which basically takes out no locks; this means, it can also read rows that are currently being updated and exclusively locked - so you might get non-committed data - data that might not even really end up in the database in the end (if the transaction updating it gets rolled back) - be careful with this one!
The next level up is REPEATABLE READ, in which case the shared locks once acquired are held until the current transaction terminates. This locks more rows and for a longer period of time - reads are repeatable since those rows you have read are locked against updates "behind your back".
And the ultimate level is SERIALIZABLE, in which entire ranges of rows (defined by the WHERE clause of your SELECT) are locked until the current transaction terminates.
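For reference, and for the question about setting the level per database: a short T-SQL sketch with made-up table and database names. SET TRANSACTION ISOLATION LEVEL applies per connection; there is no per-database default isolation level you can pick in SSMS, but the READ_COMMITTED_SNAPSHOT database option changes how the default READ COMMITTED level behaves for every connection.
-- Per connection/session: affects transactions issued on this connection from here on
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- or READ UNCOMMITTED, REPEATABLE READ, SERIALIZABLE
BEGIN TRANSACTION;
    SELECT * FROM dbo.Uploads WHERE UploadDate = '20110801';   -- hypothetical table
COMMIT TRANSACTION;
-- Per database: changes what the default READ COMMITTED means for everyone
-- (row versioning instead of shared locks); database name is a placeholder
ALTER DATABASE Dashboard SET READ_COMMITTED_SNAPSHOT ON;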
Update:
More than the download part (secondary for me) I am worried about 5 users trying to update one database at the same time.
Well, don't worry - SQL Server will definitely handle this without any trouble!
If those 5 (or even 50) concurrent users are updating different rows - they won't even notice someone else is around. The updates will happen, no data will be hurt in the process - all is well.
If some of those users try to update the same row - they will be serialized. The first one will be able to get the exclusive lock on the row, do its update, release the lock and then go on. Now the second user would get its chance at it - get the exclusive lock, update the data, release lock, go on.
Of course, if you don't do anything about it, the second user's data will simply overwrite the first update. That's why there is a need for concurrency checks: you should check whether the data has changed between the time you read it and the time you want to write it; if it has changed, it means someone else already updated it in the meantime -> you need to think of a concurrency conflict resolution strategy for this case (but that's a whole other question in itself....)
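A minimal optimistic-concurrency sketch of that check, assuming a hypothetical dbo.Uploads table with a rowversion column RowVer; the version value captured when the row was read is compared again at write time:
DECLARE @id int = 42,                        -- hypothetical key of the row being edited
        @newAmount money = 100.00,
        @rowVerReadEarlier binary(8) = 0x00000000000007D0;   -- rowversion captured at read time

UPDATE dbo.Uploads
SET    Amount = @newAmount
WHERE  UploadId = @id
  AND  RowVer   = @rowVerReadEarlier;        -- only succeeds if nobody changed the row since the read

IF @@ROWCOUNT = 0
    RAISERROR('Row was changed by another user since it was read; reload and retry.', 16, 1);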

Related

Parallel/ Concurrent Updates in MS SQL Server -- how to perform? (getting around locks)

I'd like multiple connections to an MS SQL Server database to make parallel/ concurrent updates to a single table, for reasons of speed/ reducing the total time it takes to execute.
The updates are made based on looking up a primary/ unique key.
Currently, this throws an error "transaction was deadlocked on lock resources with another process". I think it's because the table is being locked after the 1st connection runs an update transaction on the table. All subsequent connections encounter a locked table being updated, and errors occur.
Is there a way in MS SQL Server to allow parallel/ concurrent updates to a single table?
Note: No incoming updates would ever 'be using' the same row at the same time -- they are all unique. They would be using the same table, however. Nevertheless, if I can somehow switch to "row locking" instead of table locking, would this solve the problem? Or if I can switch to "screw any locking, period" (is this referred to as uncommitted writing?) -- I would also do that, as it wouldn't affect data integrity with my process.
Let me know what you think. Thanks!
PS ... if I did "row locks" only, would my "commit size" matter in any way in regards to locking?
I suggest switching to row locking anyway if serializing the updates is not absolutely necessary. Further, you should check the isolation level and locking sequence of the read transactions in which the table is involved, because they may also be part of the deadlock scenario.
Since table locking is more restrictive than row locking, the probability of running into a deadlock is higher. Further, you won't gain speed by using multiple connections, because the table locks will serialize your concurrent transactions again.
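As a rough sketch (table and key names are made up), the shape to aim for is a short transaction that updates a single row by its unique key, optionally with a ROWLOCK hint; on SQL Server 2008 or later you can also stop lock escalation on the table if escalation to a table lock turns out to be the real culprit:
BEGIN TRANSACTION;
UPDATE dbo.Items WITH (ROWLOCK)        -- hint discourages coarser page/table locks
SET    Price = Price * 1.10
WHERE  ItemId = 42;                    -- unique key, so only one row qualifies
COMMIT TRANSACTION;

-- If lock escalation to a table lock is the problem (SQL Server 2008+):
-- ALTER TABLE dbo.Items SET (LOCK_ESCALATION = DISABLE);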

What is the proper way to run a long query against an active database?

We are using SQL Server 2012 EE but currently do not have the option to run queries on a read-only mirror, though that is my long-term goal. I am concerned I may run into the issue below in that scenario as well, since the mirror would also be updating the data I am querying.
I have a view that joins across several tables from two databases and is used for invoicing from existing data. Three of these tables are also actively updated by ongoing transactions. Running a report that used this view never used to be a problem, but now our database is getting much larger and I have run into some timeout problems.
First the query was timing out, so I set the command timeout to 0 and reran the query, which pegged all 4 CPUs at 100% for 90 minutes before I killed it. There were no problems with active transactions during that time. I reviewed the query, found a field I was joining on that was not indexed, created an index on that field, and reran the report, which then finished in three minutes with all the CPUs busy but not pegged. The same amount of data was queried both times, so I figured the problem was solved. Of course, later my boss ran a similar query, perhaps with somewhat more data but probably not a lot more, and our live transactions started timing out 100% of the time while his query was running. I did not get a chance to see the CPU usage during that time.
So my questions are two:
Given I have to use the live and active database, what is the proper way to run a long read-only query so that active transactions can still continue? I am considering NOLOCK but am hoping there is a better standard practice.
And what might cause SQL Server to peg all 4 CPUs at 100% without causing live transaction timeouts, yet when my boss ran his query (after I added the index and my own query ran much better), the live update transactions started timing out 100% of the time?
I know this is not a lot of info to go on. I'm not very familiar with SQL profiling and performance monitoring, but this behavior seems rather odd and I am hoping a best practice would be the correct workaround.
The default behavior of SELECT queries in the READ_COMMITTED transaction isolation level is to acquire shared locks during query execution to provide the requested data consistency (read committed data only). These locks are typically row-level and are released quickly during query execution, immediately after each row is read. There are also less granular intent locks at the page and table level that prevent concurrent updates to the data as it is being read. Depending on the particulars of the execution plan, there may even be shared locks held at the table level for the duration of the query, which will prevent updates to the table during query execution and result in readers blocking writers.
Setting the READ_COMMITTED_SNAPSHOT database option causes SQL Server to use row versioning instead of locking to provide the same read consistency. A row version store is maintained in tempdb so that when a row requested by the query has changed since the query began, the most recent committed row version is returned instead. This row-versioning behavior avoids locking and effectively provides a statement-level snapshot of the database at the time the query began. Readers do not block writers and writers do not block readers. Do not confuse the READ_COMMITTED_SNAPSHOT database option with the SNAPSHOT isolation level (a common mistake).
The downside of setting the READ_COMMITTED_SNAPSHOT option is additional resource usage. An additional 14 bytes of storage overhead is incurred for each row once the database option is enabled. Updates and deletes will generate row versions in tempdb. These versions require tempdb space for the duration of the longest-running query, and there is overhead in maintaining the version store. Also consider whether you have existing applications that depend on the readers-block-writers locking behavior. Despite this overhead, the concurrency benefits may yield better overall performance depending on your workload, while providing read integrity. See http://technet.microsoft.com/en-us/library/ms188277.aspx for more information.
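For completeness, a sketch of enabling the option; the database name is a placeholder. The ALTER requires that no other connections are active in the database, so it is typically run off-hours or pushed through with the termination clause:
ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Verify:
SELECT name, is_read_committed_snapshot_on
FROM   sys.databases
WHERE  name = 'Sales';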
Actually, I decided to create a database snapshot at the beginning of each month for reporting to run against, then delete it when it is no longer needed for reporting. This seems to work fine. I could do something similar with a database restore, but that is slightly more work. This avoids needing a second SQL EE license and lets me run reports without locking tables used by live transactions.
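A sketch of what that monthly snapshot looks like in T-SQL; the database, logical file, and path names are placeholders (on SQL Server 2012, database snapshots require Enterprise or Developer edition, which the poster already has):
CREATE DATABASE Sales_Reporting_201406 ON
    ( NAME = Sales_Data,                                     -- logical name of the source data file
      FILENAME = 'D:\Snapshots\Sales_Reporting_201406.ss' )
AS SNAPSHOT OF Sales;

-- ...month-end reports query Sales_Reporting_201406 instead of the live database...

DROP DATABASE Sales_Reporting_201406;                        -- remove it when reporting is done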

SQL Server isolation level

I have n machines writing to a DB (SQL Server) at the exact same time (each initiating a transaction). I'm setting the isolation level to SERIALIZABLE. My understanding is that whichever machine's transaction gets to the DB first gets executed, and the other transactions will be blocked while it completes.
Is that correct?
It depends - are they all performing the same activities? That is, exactly the same statements executing in the same order, with no flow control?
If not, and two connections are accessing independent objects in the DB, they can run in parallel.
If there's some overlap of resources, then some progress may be made by multiple connections until one of them wants to take a lock that another already has - at which point it will wait. There is then the possibility of deadlocks.
SERIALIZABLE:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
whichever machine's transaction gets to the DB first, gets executed and the other transactions will be blocked while this completes
No, this is incorrect. The results should be as if each transaction was executed one after another (serially, hence the isolation level's name). But the engine is free to use any implementation it likes, as long as it honors the guarantees of the serializable isolation model. And some engines actually implement it pretty much as you describe it, e.g. Redis Transactions (although Redis has no 'isolation level' concept).
For SQL Server, the transactions will execute in parallel until they hit a lock conflict. When a conflict occurs, the transaction that has the lock granted continues undisturbed, while the one that requests the lock in a conflicting mode has to wait for the lock to be freed (i.e. for the granted transaction to commit). Which transaction happens to be the requester and which one the grantee is entirely down to what is being executed. That means it may well be the case that the second machine gets the grant first and finishes first, while the first machine waits.
For a better understanding of how the locking behavior differs under the SERIALIZABLE isolation level, see Key-Range Locking.
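To make that concrete, a rough two-session sketch with a hypothetical dbo.Orders table that has an index on OrderDate; the second session is only blocked once it actually conflicts with a key-range lock held by the first:
-- Session 1 (machine A)
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate = '20140601';
-- key-range locks are now held on the index range covering '20140601',
-- so nobody else can insert, update or delete rows that would change this result

-- Session 2 (machine B): blocks at this point until session 1 commits or rolls back
INSERT INTO dbo.Orders (OrderDate, Amount) VALUES ('20140601', 10.00);

-- Session 1
COMMIT;   -- range locks released; session 2's INSERT now goes through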
Yes, this will be true for write operations at any isolation level: "My understanding is that whichever machine's transaction gets to the DB first, gets executed and the other transactions will be blocked while this completes."
The isolation level helps determine what happens when you READ data while this is going on. Serializable read operations will block your write operations, which might be the behavior you want.

Can I be a deadlock victim if I'm not executing a query within a transaction?

Let's say I open a transaction and run update queries.
BEGIN TRANSACTION
UPDATE x SET y = z WHERE w = v
The query returns successfully and the transaction stays open deliberately for a period of time before I decide to commit.
While I'm sitting on the transaction, is it ever possible that the MSSQL deadlock machinery would preempt my open transaction, which is not actually executing anything, either to clear a deadlock or to free resources as system memory/resource limits are reached?
I know about SET DEADLOCK_PRIORITY and have read the MSDN articles on the topic of deadlocks. Logically, since I'm not actively seeking to stake a claim on any additional resources, I can't imagine a scenario that would trigger a sane deadlock-avoidance algorithm.
Does anyone know for sure if it's possible that simply holding locks can make me a valid target? Similarly, could any low-resource condition trigger the killing of my SPID?
NO
For a deadlock to occur all the participants in the deadlock chain must be waiting for a resource (a lock). If your connection is idle it means it doesn't execute a request, which implies it cannot be waiting.
As for other conditions that can kill your session I can think of at least three:
administrative operations that use WITH ROLLBACK IMMEDIATE (see the sketch after this list)
a mirroring failover
intentional KILL <yourspid>, perhaps as a joke by your friendly DBA
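A sketch of that first item, with a placeholder database name; this kicks every other connection out of the database and rolls back their open transactions, idle or not:
ALTER DATABASE Sales SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ...maintenance work...

ALTER DATABASE Sales SET MULTI_USER;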
To answer your question: you can be a deadlock victim if you're not executing a query in a transaction.
It's counter-intuitive, but you can be a deadlock victim by running a SELECT statement.
It can happen if you're running a query that uses an index:
you scan indexes looking for matching rows
another process starts updating data pages
you now want to fetch the data from the data pages for the matching rows
the other process is holding locks on those data pages
you wait for the data page locks to be released
the other process finishes updating the data pages and now wants to update the indexes
you are holding read locks on the indexes
the other process waits for the index locks to be released
DEADLOCK
So, strictly speaking, you can be a deadlock victim, when you're not executing a query in a transaction. The other guy wasn't executing his UPDATE statement in a transaction either.
Nobody's explicitly using a transaction, yet there's a deadlock.
Possible problems:
SQL Server only has a finite number of locks. It is possible to run out of locks.
Other resources are finite (e.g. memory, tempdb). Holding on to these resources could cause those resources to run out.
Transaction logs - the logical transaction log cannot be freed for re-use while a transaction is open. The result could be a log that fills up; this could stop your process because it would halt the entire instance.
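A quick way to check that last point (a sketch; works on SQL Server 2005 and later):
-- Shows, per database, what is currently preventing log truncation;
-- ACTIVE_TRANSACTION means an open transaction is pinning the log
SELECT name, log_reuse_wait_desc
FROM   sys.databases;

-- Shows the oldest active transaction in the current database
DBCC OPENTRAN;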
To consider:
CASCADE: a DELETE may only name one table in the command, but a CASCADE relationship may touch other tables.
Triggers: Triggers on the modified table may affect other tables.
DELETE and UPDATE commands may use a FROM clause which touches other tables. I've never seen this, but I would not rule it out.
Transactions may time out - is that what is happening?
As you have at least one (or more) update locks taken out, and maybe some read and table-scan locks, you may be killed to help free up deadlocks created by other transactions. The deadlock recovery code in SQL Server is unlikely to be totally bug-free, and it is not normal to keep a transaction open for a long time on SQL Server. However, I would not expect that to happen often.
Some systems, when they detect deadlock-type problems, just start killing "long-lived" transactions that have not done much work so as to free up locks. Just because you are not part of the deadlock loop does not stop the system from picking on you.
To understand what is going on in your case, you will have to use SQL Server Profiler to collect all the locking- and deadlock-related events, as well as events about aborted connections and transactions, etc. Good luck - this will take some time and a good level of understanding of the Profiler events you are looking at...
The details of this sort of thing differ between database vendors and between versions of their databases. However, as most database vendors consider it bad design to keep a transaction open for a long time, doing so tends to lead to problems and to hit code paths that have not had the most testing effort.
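If all you need are the deadlock graphs rather than a full Profiler trace, one lighter-weight alternative is the deadlock trace flags, which write each deadlock to the SQL Server error log:
DBCC TRACEON (1222, -1);    -- detailed deadlock graph, instance-wide
-- DBCC TRACEON (1204, -1); -- older, lock-node oriented format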
Just because you're not in a transaction doesn't mean you're not holding locks.

Concurrency issues

Here's my situation (SQL Server):
I have a web application that utilizes nHibernate for data access, and another 3 desktop applications. All access the same database, and are likely to utilize the same tables at any one time.
Now, with the help of NH I'm batching selects in order to load an aggregate with all of its hierarchy - so I would see 4 to maybe 7 selects being issued at once (not sure if it matters).
Every few days one of the applications will get a "Transaction has been chosen as the deadlock victim" error (this usually appears on a select).
I tried changing to snapshot isolation on the database, but that didn't help - I ended up with:
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table '...' directly or indirectly in database '...' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
What suggestions do you have for this situation? What should I try, or what should I read, in order to find a solution?
EDIT:
Actually, there's no RAID in there :). The number of users per day is small (I'd say 100 per day, with hundreds of small orders on a busy day); the database is a bit bigger at about 2 GB and growing faster every day.
It's a business app, that handles orders, emails, reports, invoices and stuff like that.
Lazy loading would not be an option in this case.
I guess taking a very close look at those indexes is my best bet.
Deadlocks are complicated. A deadlock means that at least two sessions have locks and are waiting for one another to release a different lock; since both are waiting, the locks never get released, neither session can continue, and a deadlock occurs.
In other words, A has lock X, B has lock Y, now A wants Y and B wants X. Neither will give up the lock they have until they are finished with their transaction. Both will wait indefinitely until they get the other lock. SQL Server sees that this is happening and kills one of the transactions in order to prevent the deadlock. Snapshot isolation won't help you - the DB still needs to preserve atomicity of transactions.
There is no simple answer anyone can give as to why a deadlock would be occurring. You'll need to profile your application to find out.
Start here: How to debug SQL deadlocks. That's a good intro.
Next, look at Detecting and Ending Deadlocks on MSDN. That will give you a lot of good background information on why deadlocks occur, and help you understand what you're looking at/for.
There are also some previous SO questions that you might want to look at:
Diagnosing Deadlocks in SQL Server 2005
Zero SQL deadlock by design
Or, if the deadlocks are very infrequent, just write some exception-handling code into your application to retry the transaction if a deadlock occurs. Sometimes it can be extremely hard (if not nearly impossible) to prevent certain deadlocks. As long as you write transactionally-safe code, it's not the end of the world; it's completely safe to just try the transaction again.
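A minimal retry sketch in T-SQL (the same pattern applies in application code around the NHibernate session); the table, column, and key are made up, and the syntax assumes SQL Server 2012 or later because of THROW:
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- the unit of work that occasionally deadlocks (hypothetical statement)
        UPDATE dbo.Orders SET Status = 'Invoiced' WHERE OrderId = 42;

        COMMIT TRANSACTION;
        BREAK;                                      -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @retries > 1   -- 1205 = chosen as deadlock victim
        BEGIN
            SET @retries -= 1;
            WAITFOR DELAY '00:00:00.200';           -- brief pause before retrying
        END
        ELSE
            THROW;                                  -- not a deadlock (or out of retries): re-raise
    END CATCH;
END;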
Is your hardware properly configured (specifically RAID configuration)? Is it capable of matching your workload?
If hardware is all good and humming, you should ensure you have the 'right' indexes to match your query workload.
Many locking/deadlock problems can be eliminated with the correct indexes (covering indexes can take pressure off the clustered index during inserts).
BTW: turning on snapshot isolation will put increased pressure on your tempDB. How is tempDB configured? RAID 0 is preferred (and even better use an SSD if tempDB is a bottleneck).
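If you do turn on any form of snapshot isolation, a quick sketch for keeping an eye on how much of tempdb the version store is consuming:
USE tempdb;
-- Reserved version store pages, converted to MB (pages are 8 KB)
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM   sys.dm_db_file_space_usage;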
While it's not uncommon to find this error in NHibernate sessions with large numbers of users, it seems to be happening too often in your case.
Perhaps your objects are very large resulting in long-running selects? And if your selects are taking too long, that might indicate problems with your indexes (as Mitch Wheat explains)
If everything is in order, you could also try Lazy Loading to postpone your selects until when you really need your data. This might not be appropriate for your exact situation so you do have to see if it works.
