What is "with (nolock)" in SQL Server? - sql-server

Can someone explain the implications of using with (nolock) on queries, when you should/shouldn't use it?
For example, if you have a banking application with high transaction rates and a lot of data in certain tables, in what types of queries would nolock be okay? Are there cases when you should always use it/never use it?

WITH (NOLOCK) is the equivalent of using READ UNCOMMITTED as a transaction isolation level. So, you stand the risk of reading an uncommitted row that is subsequently rolled back, i.e. data that never made it into the database. So, while it can prevent reads being deadlocked by other operations, it comes with a risk. In a banking application with high transaction rates, it's probably not going to be the right solution to whatever problem you're trying to solve with it IMHO.
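To make the risk concrete, here is a minimal two-session sketch (the Accounts table, column names, and values are made up for illustration):

-- Session 1: opens a transaction and updates a balance, but has not committed yet
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance + 500 WHERE AccountId = 42;

-- Session 2: a NOLOCK read sees the uncommitted change
SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 42;

-- Session 1: changes its mind; session 2 has already read a value that never officially existed
ROLLBACK TRANSACTION;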

The question is what is worse:
a deadlock, or
a wrong value?
For financial databases, deadlocks are far worse than wrong values. I know that sounds backwards, but hear me out. The traditional example of DB transactions is you update two rows, subtracting from one and adding to another. That is wrong.
In a financial database you use business transactions. That means adding one row to each account. It is of utmost importance that these transactions complete and the rows are successfully written.
Getting the account balance temporarily wrong isn't a big deal; that is what the end-of-day reconciliation is for. And an overdraft from an account is far more likely to occur because two ATMs are being used at once than because of an uncommitted read from a database.
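If it helps, here is a rough sketch of a ledger-style business transaction, assuming a hypothetical LedgerEntries table: each transfer writes one row per account inside a single database transaction, and balances are derived (and reconciled) from the entries.

DECLARE @TransferId uniqueidentifier = NEWID();
BEGIN TRANSACTION;
-- one ledger row per account; balances are computed (and reconciled) from the entries later
INSERT INTO dbo.LedgerEntries (AccountId, Amount, TransferId) VALUES (1001, -250.00, @TransferId);
INSERT INTO dbo.LedgerEntries (AccountId, Amount, TransferId) VALUES (2002,  250.00, @TransferId);
COMMIT TRANSACTION;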
That said, SQL Server 2005 fixed most of the bugs that made NOLOCK necessary. So unless you are using SQL Server 2000 or earlier, you shouldn't need it.
Further Reading
Row-Level Versioning

Unfortunately it's not just about reading uncommitted data. In the background you may end up reading pages twice (in the case of a page split), or you may miss the pages altogether. So your results may be grossly skewed.
Check out Itzik Ben-Gan's article. Here's an excerpt:
" With the NOLOCK hint (or setting the
isolation level of the session to READ
UNCOMMITTED) you tell SQL Server that
you don't expect consistency, so there
are no guarantees. Bear in mind though
that "inconsistent data" does not only
mean that you might see uncommitted
changes that were later rolled back,
or data changes in an intermediate
state of the transaction. It also
means that in a simple query that
scans all table/index data SQL Server
may lose the scan position, or you
might end up getting the same row
twice. "

The text book example for legitimate usage of the nolock hint is report sampling against a high update OLTP database.
To take a topical example: if a large US high street bank wanted to run an hourly report looking for the first signs of a city-level run on the bank, a nolock query could scan transaction tables summing cash deposits and cash withdrawals per city. For such a report, the tiny percentage of error caused by rolled-back update transactions would not reduce the value of the report.
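A sketch of that kind of sampling query, with made-up Transactions and Branches tables:

SELECT b.City,
       SUM(CASE WHEN t.TxnType = 'DEPOSIT'    THEN t.Amount ELSE 0 END) AS CashIn,
       SUM(CASE WHEN t.TxnType = 'WITHDRAWAL' THEN t.Amount ELSE 0 END) AS CashOut
FROM dbo.Transactions AS t WITH (NOLOCK)
JOIN dbo.Branches     AS b WITH (NOLOCK) ON b.BranchId = t.BranchId
WHERE t.TxnDate >= DATEADD(HOUR, -1, GETDATE())
GROUP BY b.City;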

Not sure why you are not wrapping financial transactions in database transactions (when you transfer funds from one account to another, you don't commit one side of the transaction at a time - this is why explicit transactions exist). Even if your code is braindead to business transactions, as it sounds like it is, all transactional databases have the potential to do implicit rollbacks in the event of errors or failure. I think this discussion is way over your head.
If you are having locking problems, implement versioning and clean up your code.
NOLOCK not only returns wrong values, it returns phantom records and duplicates.
It is a common misconception that it always makes queries run faster. If there are no write locks on a table, it does not make any difference. If there are locks on the table, it may make the query faster, but there is a reason locks were invented in the first place.
In fairness, here are two special scenarios where a nolock hint may provide utility:
1) A pre-2005 SQL Server database that needs to run a long query against a live OLTP database; this may be the only way.
2) A poorly written application that locks records and returns control to the UI, leaving readers indefinitely blocked. Nolock can be helpful here if the application cannot be fixed (third party etc.) and the database is either pre-2005 or versioning cannot be turned on.

NOLOCK is equivalent to READ UNCOMMITTED, however Microsoft says you should not use it for UPDATE or DELETE statements:
For UPDATE or DELETE statements: This feature will be removed in a future version of Microsoft SQL Server. Avoid using this feature in new development work, and plan to modify applications that currently use this feature.
http://msdn.microsoft.com/en-us/library/ms187373.aspx
This article applies to SQL Server 2005, so support for NOLOCK exists if you are using that version. In order to future-proof your code (assuming you've decided to use dirty reads) you could use this in your stored procedures:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
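For instance, a sketch of a stored procedure (names are hypothetical) that opts into dirty reads once at the top rather than sprinkling table hints everywhere; the setting reverts to the caller's isolation level when the procedure returns:

CREATE PROCEDURE dbo.usp_GetRecentOrderSummary
AS
BEGIN
    -- same effect as WITH (NOLOCK) on every table read in this procedure
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT OrderId, Status, Total
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(DAY, -1, GETDATE());
END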

You can use it when you're only reading data, and you don't really care about whether or not you might be getting back data that is not committed yet.
It can be faster on a read operation, but I cannot really say by how much.
In general, I recommend against using it - reading uncommitted data can be a bit confusing at best.

Another case where it's usually okay is in a reporting database, where data is perhaps already aged and writes just don't happen. In this case, though, the option should be set at the database or table level by the administrator by changing the default isolation level.
In the general case: you can use it when you are very sure that it's okay to read old data. The important thing to remember is that it's very easy to get that wrong. For example, even if it's okay at the time you write the query, are you sure something won't change in the database in the future to make these updates more important?
I'll also 2nd the notion that it's probably not a good idea in banking app. Or inventory app. Or anywhere you're thinking about transactions.

Simple answer - whenever your SQL is not altering data, and you have a query that might interfere with other activity (via locking).
It's worth considering for any queries used for reports, especially if the query takes more than, say, 1 second.
It's especially useful if you have OLAP-type reports you're running against an OLTP database.
The first question to ask, though, is "why am I worrying about this?" In my experience, fudging the default locking behavior often takes place when someone is in "try anything" mode, and this is one case where unexpected consequences are not unlikely. Too often it's a case of premature optimization that can too easily get left embedded in an application "just in case." It's important to understand why you're doing it, what problem it solves, and whether you actually have the problem.

Short answer:
Only use WITH (NOLOCK) in SELECT statements on tables that have a clustered index.
Long answer:
WITH(NOLOCK) is often exploited as a magic way to speed up database reads.
The result set can contain rows that have not yet been committed, that are often later rolled back.
If WITH(NOLOCK) is applied to a table that has a non-clustered index then row-indexes can be changed by other transactions as the row data is being streamed into the result-table. This means that the result-set can be missing rows or display the same row multiple times.
READ COMMITTED adds an additional issue where data within a single column can be corrupted when multiple users change the same cell simultaneously.

My 2 cents - it makes sense to use WITH (NOLOCK) when you need to generate reports. At this point, the data wouldn't change much & you wouldn't want to lock those records.

If you are handling financial transactions then you will never want to use nolock. nolock is best used to select from large tables that have lots of updates and where you don't care if the record you get could possibly be out of date.
For financial records (and almost all other records in most applications) nolock would wreak havoc as you could potentially read data back from a record that was being written to and not get the correct data.

I've used it to retrieve a "next batch" of things to do. It doesn't matter in this case exactly which item, and I have a lot of users running this same query.
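Roughly the shape of such a query (table and column names are invented for illustration):

SELECT TOP (20) WorkItemId, Payload
FROM dbo.WorkQueue WITH (NOLOCK)
WHERE Status = 'PENDING'
ORDER BY CreatedAt;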

Use nolock when you are okay with the "dirty" data. Which means nolock can also read data which is in the process of being modified and/or uncommitted data.
It's generally not a good idea to use it in a high-transaction environment, and that is why it is not the default option for queries.

I use the with (nolock) hint particularly in SQL Server 2000 databases with high activity. I am not certain that it is needed in SQL Server 2005, however. I recently added that hint in a SQL Server 2000 database at the request of the client's DBA, because he was noticing a lot of SPID record locks.
All I can say is that using the hint has NOT hurt us and appears to have made the locking problem solve itself. The DBA at that particular client basically insisted that we use the hint.
By the way, the databases I deal with are back-ends to enterprise medical claims systems, so we are talking about millions of records and 20+ tables in many joins. I typically add a WITH (nolock) hint for each table in the join (unless it is a derived table, in which case you can't use that particular hint)

The simplest answer is a simple question - do you need your results to be repeatable? If yes, then NOLOCK is not appropriate under any circumstances.
If you don't need repeatability then nolocks may be useful, especially if you don't have control over all processes connecting to the target database.

Related

How long will dirty reads stay visible?

While running the stored procedure CSP1 on Azure SQL 2012 (Standard: S0), I also ran this concurrently:
SELECT TOP 10 *
FROM [affected_table] WITH (NOLOCK)
ORDER BY 1 DESC
Which returns the past 10 records in [affected_table], including those UNCOMMITTED by CSP1. When CSP1 fails and rolls back, no records have been inserted into [affected_table], yet when I re-run the same SELECT query with NOLOCK I can still see all those phantom records there.
Shouldn't these be cleared after the ROLLBACK is finished ?
Should I be clearing the cache?
Edit
After some constructive criticism I feel I diverged from the real question.
1) The dirty reads stay "visible" while the transaction is not committed or rolled back. Note this can take an undefined amount of time and can be held (blocked) longer by other concurrent transactions.
2) Sure, but issuing a rollback or a commit will not necessarily act immediately. Your (update, insert, delete) transactions take exclusive locks, and the only thing that will prevent dirty reads is using a more restrictive isolation level.
3) You never mess with the cache in production! It's okay in development and even good in some kinds of performance tests.
Managing concurrency is an integral part of any DBMS; it's done automatically without human intervention (most of the time), and in fact the engine is designed with that in mind.
Of course you can change the isolation level and even try to use hints.
If your requirements need high concurrency and allow dirty reads, be aware you cannot predict how long those locks will be held.
If you cannot allow dirty reads, stick with read committed and never use the nolock hint.
NOLOCK
is a common cargo cult in T-SQL. It was almost mandatory in the past (SQL Server 7) but with the changes in isolation architecture (SQL Server 2000) it became almost obsolete. In fact it has been considered a bad practice since SQL Server 2005.
If you read the linked MSDN documentation you will see NOLOCK is the old name for READUNCOMMITTED and it allows dirty reads.
Some people believe NOLOCK magically increases performance and cures deadlocks and balding.
IMHO you can try to get rid of all NOLOCKs.
Note: if you remove a nolock and your system develops a serious case of deadlocks, it merely means NOLOCK was masking another, deeper, more serious problem. There are scenarios where nolock is recommended, but they are rare and need the attention of a good DBA.

What is the proper way to run a long query against an active database?

We are using SQL Server 2012 EE but currently do not have the option to run queries on a R/O mirror. That is my long-term goal, but I am concerned I may run into the issue below in that scenario as well, since the mirror would also be updating the data I am querying.
I have a view that joins across several tables from two databases and is used for invoicing from existing data. Three of these tables are also actively updated by ongoing transactions. Running a report that used this view did not use to be a problem, but now our database is getting much larger and I have run into some timeout problems. First the query was timing out, so I set the command timeout to 0 and reran the query, which pegged all 4 CPUs at 100% for 90 minutes, and then I killed it. There were no problems with active transactions during that time. I reviewed the query and found a field I was joining on that was not indexed, so I created an index on that field and reran the report, which then finished in three minutes; all the CPUs were busy but not at all pegged out. Same amount of data queried both times. I figured problem solved. Of course, later my boss ran a similar query, perhaps with some more data but probably not a lot more, and our live transactions started timing out 100% while his query was running. I did not get a chance to see the CPU usage during that time.
So my questions are two:
Given I have to use the live and active database, what is the proper way to run a long R/O query so that active transactions can still continue? I am considering NOLOCK but am hoping there is a better standard practice.
And what might cause sqlserver to peg out 4 CPUs with 100% busy and not cause live transaction timeouts, yet when my boss ran his query, after I added the index and my query ran much better, the live update transactions start timing out 100%?
I know this is not a lot of info to go on. I'm not very familiar with sql profiling and performance monitoring yet this behavior seems rather odd and am hoping a best practice would be the correct workaround.
The default behavior of SELECT queries in the READ_COMMITTED transaction isolation level is to acquire shared locks during query execution to provide the requested data consistency (read committed data only). These locks are typically row-level and are released quickly during query execution, immediately after each row is read. There are also less granular intent locks at the page and table level that prevent concurrent updates to data as it is being read. Depending on the particulars of the execution plan, there may even be shared locks held at the table level for the duration of the query, which will prevent updates to the table during query execution and result in readers blocking writers.
Setting the READ_COMMITTED_SNAPSHOT database option causes SQL Server to use row versioning instead of locking to provide the same read consistency. A row version store is maintained in tempdb so that when a row requested by the query has changed since the query began, the most recent committed row version is returned instead. This row-versioning behavior avoids locking and effectively provides a statement-level snapshot of the database at the time the query began. Readers do not block writers and writers do not block readers. Do not confuse the READ_COMMITTED_SNAPSHOT database option with the SNAPSHOT isolation level (a common mistake).
The downside of setting READ_COMMITTED_SNAPSHOT is additional resource usage. An additional 14 bytes of storage overhead for each row is incurred once the database option is enabled. Updates and deletes will generate row versions in tempdb. These versions require tempdb space for the duration of the longest-running query, and there is overhead in maintaining the version store. Also consider whether you have existing applications that depend on readers-block-writers locking behavior. Despite this overhead, the concurrency benefits may yield better overall performance depending on your workload, while providing read integrity. See http://technet.microsoft.com/en-us/library/ms188277.aspx for more information.
Actually, I decided to create a snapshot at the beginning of each month for reporting to run against, then delete it when it is no longer needed for reporting. This seems to work fine. I could do something similar with a database restore, but that is slightly more work. This avoids needing a second SQL EE license, and lets me run reports without locking tables used by live transactions.
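Roughly what that looks like; the snapshot name, logical file name, and file path below are placeholders:

CREATE DATABASE Sales_Reporting_Snapshot
ON (NAME = Sales_Data, FILENAME = 'D:\Snapshots\Sales_Reporting_Snapshot.ss')
AS SNAPSHOT OF Sales;
-- ... run the month's reports against Sales_Reporting_Snapshot ...
DROP DATABASE Sales_Reporting_Snapshot;  -- when it is no longer needed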

How to figure the read/write ratio in Sql Server?

How can I query the read/write ratio in Sql Server 2005? Are there any caveats I should be aware of?
Perhaps it can be found in a DMV query, a standard report, a custom report (i.e. the Performance Dashboard), or by examining a SQL Profiler trace. I'm not sure exactly.
Why do I care?
I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users.
One of the points I'm examining is database concurrency. Sql Server uses pessimistic concurrency by default--good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot) like Jeff Atwood did with StackOverflow.
All apps are heavily read-oriented.
An UPDATE is a read for the WHERE clause followed by a write
An INSERT must check unique indexes and FKs, which are reads and why you index FK columns
At most you have 15% writes. I saw an article once discussing it, but can't find it again. More likely 1%.
I know that in our 6 million new rows per day DB, we still have a minimum of 95%+ reads (an estimate of course).
Why do you need to know?
Also: How to find out SQL Server table’s read/write statistics?
Edit, based on the question update...
I would leave DB concurrency alone until you need to change it. We've not changed anything out of the box for our 6 million rows per day plus heavy reads either.
For tuning our web app, we designed it to reduce round trips (one call = one action, multiple record sets per call, etc.).
Check out sys.dm_db_index_usage_stats:
seeks, scans, lookups are all reads
updates are writes
Keep in mind that the counters are reset with each server restart; you should look at them only after a representative load has run.
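For example, something along these lines gives a crude read:write ratio for the current database from that DMV:

SELECT SUM(s.user_seeks + s.user_scans + s.user_lookups) AS total_reads,
       SUM(s.user_updates)                               AS total_writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID();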
There are also some performance counters that can help you:
Batch Requests/sec: number of Transact-SQL command batches received per second.
Write Transactions/sec: number of transactions that wrote to the database and committed
Transactions/sec: number of transactions started for the database
From these rates you can get a pretty good estimate of read:write ratio of your requests.
after your update
Turning on the version store is probably the best avenue for dealing with concurrency. Rather than using the snapshot isolation explicitly, I'd recommend turning on read committed snapshot:
alter database <dbname> set allow_snapshot_isolation on;
alter database <dbname> set read_committed_snapshot on;
this will make read committed reads (i.e. the default ones) use snapshot instead, so it literally doesn't require any change in the app and can be quickly tested.
You should also investigate whether your reads are being executed under the serializable isolation level, which is what happens when a TransactionScope is used without explicitly specifying the isolation level.
One word of caution that the version store is not exactly free. See Row Versioning Resource Usage. And you should give a read to SQL Server 2005 Row Versioning-Based Transaction Isolation.
How about finding a ratio of num_of_writes & num_of_reads counters in sys.dm_io_virtual_file_stats?
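For example, something like this (bear in mind these counters measure physical file I/O since the last restart, so a heavily cached read workload will look lighter than it really is):

SELECT DB_NAME(vfs.database_id) AS database_name,
       SUM(vfs.num_of_reads)    AS reads,
       SUM(vfs.num_of_writes)   AS writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
GROUP BY vfs.database_id;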
I did it using SQL Server Profiler. I just opened it before running the application and tested what kind of queries are executed while I'm doing something in the application. But I think it's better just for making sure that queries work; I don't know if it is convenient for measuring server workload like this. Profiler can also save traces which you can analyse later, so it might work.

Concurrency issues

Here's my situation (SQL Server):
I have a web application that utilizes nHibernate for data access, and another 3 desktop applications. All access the same database, and are likely to utilize the same tables at any one time.
Now, with the help of NH I'm batching selects in order to load an aggregate with all of its hierarchy - so I would see 4 to maybe 7 selects being issued at once (not sure if it matters).
Every few days one of the applications will get a "Transaction has been chosen as the deadlock victim." error (this usually appears on a select).
I tried changing to snapshot isolation on the database, but that didn't help - I was ending up with:
Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table '...' directly or indirectly in database '...' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
What suggestions do you have for this situation? What should I try, or what should I read in order to find a solution?
EDIT:
Actually there's no raid in there :). The number of users per day is small (I'll say 100 per day - with hundreds of small orders on a busy day), the database is a bit bigger at about 2GB and growing faster every day.
It's a business app, that handles orders, emails, reports, invoices and stuff like that.
Lazy loading would not be an option in this case.
I guess taking a very close look at those indexes is my best bet.
Deadlocks are complicated. A deadlock means that at least two sessions have locks and are waiting for one another to release a different lock; since both are waiting, the locks never get released, neither session can continue, and a deadlock occurs.
In other words, A has lock X, B has lock Y, now A wants Y and B wants X. Neither will give up the lock they have until they are finished with their transaction. Both will wait indefinitely until they get the other lock. SQL Server sees that this is happening and kills one of the transactions in order to prevent the deadlock. Snapshot isolation won't help you - the DB still needs to preserve atomicity of transactions.
There is no simple answer anyone can give as to why a deadlock would be occurring. You'll need to profile your application to find out.
Start here: How to debug SQL deadlocks. That's a good intro.
Next, look at Detecting and Ending Deadlocks on MSDN. That will give you a lot of good background information on why deadlocks occur, and help you understand what you're looking at/for.
There are also some previous SO questions that you might want to look at:
Diagnosing Deadlocks in SQL Server 2005
Zero SQL deadlock by design
Or, if the deadlocks are very infrequent, just write some exception-handling code into your application to retry the transaction if a deadlock occurs. Sometimes it can be extremely hard (if not nearly impossible) to prevent certain deadlocks. As long as you write transactionally-safe code, it's not the end of the world; it's completely safe to just try the transaction again.
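A rough sketch of that retry pattern, expressed here in T-SQL (in practice you would often do this in the application's data layer instead); error 1205 is the deadlock-victim error, and THROW requires SQL Server 2012 or later:

DECLARE @retries int = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the work that occasionally deadlocks goes here ...
        COMMIT TRANSACTION;
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() <> 1205 OR @retries <= 1
            THROW;  -- not a deadlock, or out of retries: re-raise the error
        SET @retries = @retries - 1;  -- we were the deadlock victim: loop and try again
    END CATCH
END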
Is your hardware properly configured (specifically RAID configuration)? Is it capable of matching your workload?
If hardware is all good and humming, you should ensure you have the 'right' indexes to match your query workload.
Many locking/deadlock problems can be eliminated with the correct indexes (covering indexes can take pressure off the clustered index during inserts).
BTW: turning on snapshot isolation will put increased pressure on your tempDB. How is tempDB configured? RAID 0 is preferred (and even better use an SSD if tempDB is a bottleneck).
While it's not uncommon to find this error in NHibernate sessions with large numbers of users, it seems to be happening too often in your case.
Perhaps your objects are very large resulting in long-running selects? And if your selects are taking too long, that might indicate problems with your indexes (as Mitch Wheat explains)
If everything is in order, you could also try Lazy Loading to postpone your selects until when you really need your data. This might not be appropriate for your exact situation so you do have to see if it works.

Hints in Sql Server

Are hints really necessary for every sql statement? We have a dba who is anal about it and asks us to put hints on every select and update statements in our stored procs. Is this really necessary?
Not usually. Putting them on everything sounds like overkill.
The documentation says
Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend that join_hint, query_hint, and table_hint be used only as a last resort by experienced developers and database administrators
Your DBA is wrong.
From MS:
Because the SQL Server query optimizer typically selects the best execution plan for a query, we recommend that join hint, query hint, and table hint be used only as a last resort by experienced developers and database administrators.
Hints are merely hints. They help the optimizer to do the best job possible. But like any optimization, you should focus on the statements that are actually problems.
Depends - the query optimizer makes pretty good choices on its own. What hints are your DBAs demanding? @Ned is a little amiss - a hint explicitly tells the optimizer not to figure out a path, but to use your optimization instead.
To legislate that you should always or never use hints is somewhat ignorant of the issues that hints are there to solve. Some occasions where hints have been critical (see the sketches after this list):
NOLOCK to explicitly remove read locks from domain lookup tables queried within a transaction.
Nailing a query plan to a specific index because of statistics "drift" during heavy updates (in this instance the plan had reverted to a table scan on a 10m-row table rather than using the clustered index).
Never had to use join hints.
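For what it's worth, hedged sketches of those two cases, with made-up table and index names:

-- 1) read a lookup table inside a transaction without taking shared locks
SELECT c.CountryCode, c.CountryName
FROM dbo.Countries AS c WITH (NOLOCK);

-- 2) pin a query to a specific index when the optimizer's choice has drifted
SELECT o.OrderId, o.CustomerId, o.OrderDate
FROM dbo.Orders AS o WITH (INDEX (PK_Orders))
WHERE o.OrderId BETWEEN 100000 AND 100500;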
Normally this is just backwards. However, depending on your situation it might be desirable.
For example, we have one database (a set of databases on one server, actually) where the data is all a nightly snapshot dump of a mainframe system, used for reporting and other purposes. Aside from the batch process that recreates the databases each night, nothing does any writing to this system. In that context, the default locking scheme is not appropriate and politics between our group and the IT group that manages all our servers prevents us from changing it. So: pretty much all queries to those dbs have the with (nolock) hints.
I imagine there are other situations where you may have reporting databases with no writes, or perhaps the reverse: an archiving or logging database that is rarely read. The point is that occasionally a specialized database might be set up where the default locking scheme doesn't fit and you are not able to change it. Then you will need a plethora of pinatas... I mean hints.
But that's the exception that proves the rule. In general, the database optimizer is smarter than you are when it comes to things like locking.
