To NOLOCK or not to NOLOCK? - sql-server

I've worked in a very large organization where they had to use NOLOCK on most queries - because data was commonly updated via ETL processes during the day and locking the application for 40% of the working day was certainly not an option.
Out of habit, in my next job I went on to use NOLOCK everywhere automatically. But since reading about the warnings and risks, I've been gradually undoing this, leaving no table hints specified and letting SQL Server do its thing.
However, I'm still not comfortable that I'm doing the right thing. In the place where we used NOLOCK, I never once saw data get doubled up or corrupted, and I was there for many years. Ever since removing NOLOCK I am running into the obvious obstacle of row locks slowing or pausing queries, which gives the illusion that my DB is slow or flaky, when in actuality it's just somebody running a lengthy save somewhere (the ability for them to do so is a requirement of the application).
I would be interested to hear from anyone who has actually experienced data corruption or data duplication from NOLOCK in practice, rather than people who are going by what they've read about it on the internet. I would be especially appreciative if anybody could provide steps to reproduce it. I am trying to gauge just how risky it is, and whether the risks outweigh the obvious benefit of being able to run reports in parallel with updates.

I've seen corruption, duplication and bad results with NOLOCK. Not once, not seldom. Every deployment that relied on NOLOCK that I had a chance to look at had correctness issues. True, many (most?) were not aware and did not realize it, but the problems were there, always.
You have to realize that NOLOCK problems do not manifest as hard corruption (the kind DBCC CHECKDB would report), but as 'soft' corruption. And the problems are obvious only on certain kinds of workloads, mostly analytic ones (aggregates). They manifest as an incorrect value in a report, a balance mismatch in a ledger, a wrong department headcount and the like. These problems are visible as problems only when carefully inspected by a qualified human. And they may well vanish mysteriously on a simple refresh of the page. So you may well have all these problems and not be aware of them. Your users might accept that 'sometimes the balance is wrong, just ask for the report again and it will be OK' and never report the issue to you.
And there are some workloads that are not very sensitive to NOLOCK issues. If you display 'posts' and 'comments' you won't see much of NOLOCK issues. Maybe the 'unanswered count' is off by 2, but who will notice?
Ever since removing NOLOCK I am running into the obvious obstacle of rowlocks slowing / pausing queries
I would recommend evaluating the SNAPSHOT isolation options (including READ_COMMITTED_SNAPSHOT). You may get a free lunch.
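If you want to try it, here is a minimal sketch (the database name is a placeholder; switching READ_COMMITTED_SNAPSHOT on needs exclusive access to the database, hence the ROLLBACK IMMEDIATE):

    -- Readers get the last committed row version instead of blocking on writers.
    ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Optionally also allow explicit SNAPSHOT transactions:
    ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- A session can then opt in per transaction:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

The lunch is not entirely free: the row versions live in tempdb, so keep an eye on tempdb sizing.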

I see you've read a lot about it, but allow me to point you to a very good explanation of the dangers of using NOLOCK (that is, the READ UNCOMMITTED isolation level): SQL Server NOLOCK Hint & other poor ideas.
Apart from this, I'll make some citations and comments. The worst part of NOLOCK is this:
It creates “incredibly hard to reproduce” bugs.
The problem is that when you read uncommitted data, most of the time it is committed, so everything is alright. But it will randomly fail when a transaction is not committed. And that doesn't usually happen, right? Nope: first, a single error is a very bad thing (your customers don't like it). And second, things can get much worse, look:
The issue is that transactions do more than just update the row. Often they require an index to be updated, or they run out of space on the data page. This may require new pages to be allocated and existing rows on that page to be moved, called a page split. It is possible for your select to completely miss a number of rows and/or count other rows twice. More info on this in the linked article.
So, that means that even if the uncommitted transaction you've read gets committed, you can still read bad data. And this will happen at random times. That's ugly, very ugly!
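Since the original question asked for reproduction steps, here is the simplest one, as a two-session sketch (table and values are invented for illustration):

    -- One-time setup:
    CREATE TABLE dbo.Accounts (AccountId INT PRIMARY KEY, Balance MONEY);
    INSERT dbo.Accounts VALUES (1, 500);

    -- Session 1: update inside an open transaction, then change your mind.
    BEGIN TRANSACTION;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 1;
    -- ...pause here, do not commit yet...

    -- Session 2 (while session 1 is still open):
    -- NOLOCK happily returns the uncommitted 600.
    SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE AccountId = 1;

    -- Session 1: roll back. The balance session 2 reported never existed.
    ROLLBACK TRANSACTION;

The double-count and missed-row anomalies from page splits are harder to script deterministically, but the mechanism is the same: the NOLOCK scan reads pages in whatever intermediate state they happen to be in.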
What about corruption?
As Remus Rusanu said, it's not "hard" but "soft" corruption. And it especially affects aggregates, because you're reading what you shouldn't while they're being updated. This can lead, for example, to a wrong account balance.
Haven't you heard of big LOB apps that have procedures to rebuild account balances? Why? They should be correctly updated inside transactions! (That can be acceptable if the balances are rebuilt at critical moments, for example while calculating taxes.)
What can I do without corrupting data (and thus stay relatively safe)?
Let's say it's "quite safe" to read uncommitted data when you're not using it to update other existing data in the DB. I.e. if you use NOLOCK only for reporting purposes (without write-back) you're on the "quite safe" side. The only "tiny trouble" is that the report can show wrong data, but, at least, the data in the DB will remain consistent.
How safe this is depends on the purpose of what you're reading. If it's something informational, which is not going to be used to make decisions, it's quite safe (for example, it's not very bad to have some errors in a report of the best customers or the most-sold products). But if you're getting this information to make decisions, things can be much worse (you can make a decision on a wrong basis!).
A particular experience
I worked on the development of a 'crowded' application, with some 1,500 users, which used NOLOCK for reading data, modifying it and updating it in the DB (an HHRR/TEA company). And (apparently) there were no problems. The trick was that each employee read "atomic data" (an employee's data) to modify it, and it was nearly impossible for two people to read and modify the same data at the same time. Besides, this "atomic data" didn't influence any aggregate data. So everything was fine. But from time to time there were problems in the reporting area, which read "aggregated data" with NOLOCK. So the critical reports had to be scheduled for moments when no one was working in the DB. The small deviations in non-critical reports were overlooked and deemed acceptable.
Now you know it. You have no excuses. You decide: to NOLOCK or not to NOLOCK.

Related

Concurrent editing of same data

I recently came across a case that makes me wonder if I'm a newbie or something trivial has escaped me.
Suppose I have software run by many users that uses a table. When a user logs in to the app, a set of information from the table appears, and he just has to add to or correct some of it and save it. Now, if the software is run by many people, how can I guarantee that he is the only one working with that particular record? I mean, how can I know the record is not selected and being worked on by 2 or more users at the same time? And please, I wouldn't like the answer to be "use SELECT FOR UPDATE...", because from what I've read it has too negative an impact on the database. Thanks to all of you. Keep up the good work.
This is something that is not solved primarily by the database. The database manages isolation and locking of "concurrent transactions". But when the records are sent to the client, you usually (and hopefully) close the transaction and start a new one when the data comes back.
So you have to take care of it yourself.
There are different approaches, the ones that come into my mind are:
optimistic locking strategies (first wins)
pessimistic locking strategies
last wins
Optimistic locking: when storing, you check whether the record has been changed in the meanwhile. Usually this is done with a version counter or timestamp (see the sketch after this list). Some ORMs and frameworks may help a little to implement this.
Pessimistic locking: build a mechanism that records that someone has started to edit something and does not allow anyone else to edit the same. Especially in web projects it needs a timeout, after which the lock is released anyway.
Last wins: the second person storing the record simply overwrites the first person's changes.
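As a concrete sketch of the optimistic approach in SQL Server (table and column names are hypothetical), a ROWVERSION column gives you the version counter for free:

    -- One-time schema change: SQL Server bumps RowVer automatically on every update.
    ALTER TABLE dbo.Employee ADD RowVer ROWVERSION;

    -- Read: remember the version along with the data.
    -- (@Id, @NewName, @NewSalary, @RowVerFromRead are parameters of the save call.)
    SELECT Name, Salary, RowVer FROM dbo.Employee WHERE EmployeeId = @Id;

    -- Write: succeeds only if nobody changed the row in the meantime.
    UPDATE dbo.Employee
    SET    Name = @NewName, Salary = @NewSalary
    WHERE  EmployeeId = @Id AND RowVer = @RowVerFromRead;

    IF @@ROWCOUNT = 0
        RAISERROR('Row was modified by another user; reload and retry.', 16, 1);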
... makes me wonder if I'm a newbie ...
That's what always happens when we discover that very common stuff is still not solved by the tools and frameworks we use, and we have to solve it over and over again.
Now, if the software is run by many people, how can I guarantee that he is the only one working with that particular record?
Ah...
And please, I wouldn't like the answer to be "use SELECT FOR UPDATE...", because from what I've read it has too negative an impact on the database.
Who cares? I mean, it is the only way (keep a lock on a row) to guarantee you are the only one who can change it. Yes, this limits throughput, but then this is WHAT YOU WANT.
It is called programming - choosing the right tools for the job. In this case the impact is required because of the requirements.
The alternative - not a guarantee at the database level but at an application server - is an in-memory or in-database locking mechanism (like a table indicating which objects belong to which user).
But if you need to guarantee one record is only used by one person on db level, then you MUST keep a lock around and deal with the impact.
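In SQL Server terms, the closest thing to SELECT FOR UPDATE is a locking hint inside a transaction; a sketch with hypothetical names:

    BEGIN TRANSACTION;

    -- UPDLOCK takes an update lock that blocks other writers (and other
    -- UPDLOCK readers) until this transaction ends; ROWLOCK asks the
    -- engine to lock just this row rather than a page or the table.
    SELECT Name, Salary
    FROM   dbo.Employee WITH (UPDLOCK, ROWLOCK)
    WHERE  EmployeeId = @Id;

    -- ...the user edits, then:
    UPDATE dbo.Employee SET Salary = @NewSalary WHERE EmployeeId = @Id;

    COMMIT TRANSACTION;

Holding that transaction open across user think-time is precisely the throughput cost under discussion.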
But seriously, most programs avoid this. They deal with it either with optimistic locking (the second user submitting changes gets an error) or other programmer-level decisions, BECAUSE the cost of such guarantees is ridiculously high.
Oracle is different from SQL Server.
In Oracle, when you update a record, the old version of the data remains available (Oracle keeps it around as undo) until you commit.
Therefore anyone reading the same record will still see the old value.
If the access to this record is a write access, however, it will block until commit; only then can you write the same record.
And if two sessions end up waiting on each other's locks, a deadlock will pop up.
SQL Server, though, under its default isolation level, doesn't have the ability to read a record that has been locked for writing, so depending on which query you're running (and on lock escalation), you might end up locking an entire table.
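A quick two-session sketch of that default SQL Server behaviour (READ COMMITTED, no row versioning; names are illustrative):

    -- Session 1: the writer holds an exclusive row lock until commit/rollback.
    BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = 42;

    -- Session 2: this plain read now waits on session 1's lock.
    -- (In Oracle, or in SQL Server with READ_COMMITTED_SNAPSHOT ON,
    -- it would return the last committed value instead.)
    SELECT Status FROM dbo.Orders WHERE OrderId = 42;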
First, you could separate queries from inserts/updates by using a separate data-warehouse/reporting database. That way, a slow update that holds locks no longer blocks the reporting queries.
The next step is to identify what is causing the locks and work each case out separately.
Rebuilding indexes during working hours can cause very nasty locks; push those jobs to after hours.
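If a rebuild really has to run during the day, an online rebuild at least keeps the table available (an Enterprise Edition feature; index and table names are placeholders):

    -- Online rebuild: the table stays readable and writable during the operation.
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);

    -- Or reorganize, which is always online and can be interrupted at any time:
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;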

What (kind of) data is lost when using REPAIR_ALLOW_DATA_LOSS?

After having some trouble with my SQL Server 2012, I could only fix the data inconsistencies using DBCC CHECKDB (xxx, REPAIR_ALLOW_DATA_LOSS). The option's name implies that there will (possibly) be data loss when the database is repaired.
What data is lost, and how harmful is the loss?
For example, take a look at this log message:
The off-row data node at page (1:705), slot 0, text ID 328867287793664 is not referenced.
If that node is not referenced, and it is this node that causes the inconsistency, delete it. That shouldn't hurt anyone. Is that the kind of data loss MS is talking about?
Check Paul Randal's blog post for some additional insight into the implications of running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS:
REPAIR_ALLOW_DATA_LOSS is the repair level that DBCC CHECKDB recommends when it finds corruptions. This is because fixing nearly anything that’s not a minor non-clustered index issue requires deleting something to repair it. So, REPAIR_ALLOW_DATA_LOSS will delete things. This means it will probably delete some of your data as well. If, for instance it finds a corrupt record on a data page, it may end up having to delete the entire data page, including all the other records on the page, to fix the corruption. That could be a lot of data. For this reason, the repair level name was carefully chosen. You can’t type in REPAIR_ALLOW_DATA_LOSS without realizing that you’re probably going to lose some data as part of the operation.
I’ve been asked why this is. The purpose of repair is not to save user data. The purpose of repair is to make the database structurally consistent as fast as possible (to limit downtime) and correctly (to avoid making things worse). This means that repairs have to be engineered to be fast and reliable operations that will work in every circumstance. The simple way to do this is to delete what’s broken and fix up everything that linked to (or was linked from) the thing being deleted – whether a record or page. Trying to do anything more complicated would increase the chances of the repair not working, or even making things worse.
The ramifications of this are that running REPAIR_ALLOW_DATA_LOSS can lead to the same effect on your data as rebuilding a transaction log with in-flight transactions altering user data – your business logic, inherent and constraint-enforced relationships between tables, and the basic logical integrity of your user data could all be broken. BUT, the database is now structurally consistent and SQL Server can run on it without fear of hitting a corruption that could cause a crash.
To continue the contrived example from above, imagine your bank checking and savings accounts just happen to be stored on the same data page in the bank’s SQL Server database. The new DBA doesn’t realize that backups are necessary for disaster recovery and data preservation and so isn’t taking any. Disaster strikes again in the form of the work crew outside the data-center accidentally cutting the power and the machine hosting SQL Server powers down. This time, one of the drives has a problem while powering down and a page write doesn’t complete – causing a torn page. Unfortunately, it’s the page holding your bank accounts. As the DBA doesn’t have any backups, the only alternative to fix the torn page is to run REPAIR_ALLOW_DATA_LOSS. For this error, it will delete the page, and does so. In the process, everything else on the page is also lost, including your bank accounts!!
Source: Corruption: Last resorts that people try first…
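For completeness, the repair itself has to run in single-user mode; a minimal sketch of the usual sequence (the database name is a placeholder, and back up what you have first):

    -- Kick out other connections and switch to single-user mode.
    ALTER DATABASE BrokenDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- Run the repair; expect data to be deleted, as described above.
    DBCC CHECKDB (BrokenDb, REPAIR_ALLOW_DATA_LOSS);

    -- Return the database to normal use and verify.
    ALTER DATABASE BrokenDb SET MULTI_USER;
    DBCC CHECKDB (BrokenDb);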

Should I keep this "GlobalConnection" or create connection for every query?

I have inherited a legacy Delphi application that uses ADO to connect to SQL Server.
The application has a notion of a "Global Connection" -- that is a single connection that it opens at the start, and then keeps open all throughout the running of the application (which can be days, weeks, or longer....)
So my question is this: Should I keep this way of doing things or should I switch to a "connect-query-disconnect" mode of doing things? Does it matter?
Switching would be a non-trivial task, but I'll do it if it means better performance, data management, etc.
Well, it depends on what you're expecting to get out of it, and what kind of application it is.
There's nothing in particular wrong with using a single long-running connection, as long as the application can gracefully handle disconnections and recover or log/notify when it can't reconnect.
The problem with a connect-query-disconnect setup is that you're adding the overhead of connecting and disconnecting on every query. That's going to slow things down, and in an interactive GUI application users may notice the additional overhead. You also have to make sure that authorization is transparently handled if it isn't already.
At the same time, there may be interactive performance gains to be had if you can push all the queries off onto background threads and asynchronously update the GUI. If contention appears because the queries are serialized, you can migrate to a connection-pool system fairly readily as well and improve things even more. This has a fairly high complexity cost to it though, so now you're looking to balancing what the gains are compared to the work involved.
Right now, my ultimate response is "if it ain't broke, don't fix it." Changes along the lines you propose are a lot of work -- how much do the users of this application stand to gain? Are there other problems to solve that might benefit them more?
Edit: Okay, so it's broke. Well, slow at least, which is all the same to me. If you've ruled out problems with the SQL Server itself, and the queries are performing as fast as they can (i.e. DB schema is sane, the right indexes are available, queries aren't completely braindead, server has enough RAM and fast enough I/O, network isn't flaky, etc.), then yes, it's time to find ways to improve the performance of the app itself.
Simply moving to a connect-query-disconnect is going to make things worse, and the more queries you're issuing the bigger the drop off is going to be. It sounds like you're going to need to rearchitect the app so that you can run fewer queries, run them in the background, cache more aggressively on the client, or some combination of all 3.
Don't forget that making the clients perform better means server-side performance gets more important, since the server is probably going to be handling a higher load if clients start making multiple connections and issuing multiple queries in parallel.
As Mr. Frazier said before, the one global connection is not bad per se.
If you intend to change it, first detect WHAT the problem is. Let's see some scenarios:
1
Some screens (IOW: a set of 1..n forms for operating on a business entity) are slow. Possible causes:
insufficient filtering, resulting in a plethora of records being pulled from the database unnecessarily.
the number of records is OK, but it takes too long to render them. Solution: faster controls or intelligent rendering (e.g. virtual list views).
too many queries each time you open a screen. Possible solutions: use TClientDatasets (or any in-memory dataset) to hold infrequently modified lookup tables. A more sophisticated cache for bigger tables, or opening those datasets in other threads, can improve response times.
Scrolling in datasets with bound controls can be slow (just a reminder, because those little details are easily forgotten).
2
The whole app simply slows down. Checklist:
Are the network cards OK? A few malfunctioning network cards can wreak havoc even on well-structured networks, as they create unnecessary noise on the line.
[MSSQL DBA HAT ON] The next in the line of attack is SQL Server. Ask the DBA to trace blocks and deadlocks. Log slow queries and work on speeding them up. This relates directly to #1.1 and #1.3.
Detect whether some naive developer has done a SELECT inside a transaction. In read committed isolation it's just overhead, as it creates more network traffic. Open the query, retrieve the data and close the dataset.
Review the database schema, if you can.
Are any data-bound operations on a bulk of records (let's say, re-pricing some/most/all products) being done in the app? Make an SP or refactor the operation into a query; it'll be much faster and will reduce the load on the entire server.
Extensive operations on a group of records? Learn how to do those operations at once on the server instead of record by record. See an examination of the most-used alternatives in MSSQL MVP Erland Sommarskog's article on arrays and lists in MSSQL.
Beware of queries with WHERE clauses like: WHERE SomeFunction(table1.blabla) = @SomeParam. Most of the time, these will not use an index, causing the entire table to be read to select the desired data. If it is a big table... An index on a persisted computed column can work miracles (see the sketch below). [MSSQL HAT OFF]
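A sketch of that last point (the names and the YEAR() expression are just for illustration):

    -- Non-sargable: the function hides OrderDate from any index on it,
    -- so the whole table or index gets scanned.
    SELECT OrderId FROM dbo.Orders WHERE YEAR(OrderDate) = @Year;

    -- A persisted computed column plus an index makes the predicate seekable.
    ALTER TABLE dbo.Orders ADD OrderYear AS YEAR(OrderDate) PERSISTED;
    CREATE INDEX IX_Orders_OrderYear ON dbo.Orders (OrderYear);

    SELECT OrderId FROM dbo.Orders WHERE OrderYear = @Year;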
That's what I can think of without a bit more detail... ;-)
Database connections are time-consuming resources to create, and the rule of thumb should be: create as few as possible and reuse as much as possible. That's why some other technologies have database connection pools, which are typically established at application/service startup, then kept as long as possible and shared among threads.
From your comment, the application has performance issues, but it's difficult to make any recommendation without more details.
You should try to nail down what is slow - are all queries slow, or just some specific ones?
If just some specific ones, is there some correlation between them?
My 2 cents.

Relying on db transaction rollback in sunshine scenario

In a financial system I am currently maintaining, we are relying on the rollback mechanism of our database to simulate the results of running some large batch jobs - rolling back the transaction or committing it at the end, depending on whether we were doing a test run.
I really cannot decide what my opinion is. In a way I think this is great, because then there is no difference between the simulation and a live run - on the other hand it just feels kind of icky, like depending on code throwing exceptions to carry out your business logic.
What is your opinion on relying on database transactions to carry out your business logic?
EXAMPLE
Consider an administration system with 1000 mortgage deeds in a database. Now the user wants to run a batch job that creates the next term invoice on each deed, using a couple of advanced search criteria that decide which deeds are to be invoiced.
Before she actually does this, she does a test run (implemented by doing the actual run but ending in a transaction rollback), creating a report of which deeds will be invoiced. If it looks satisfactory, she can choose to do the actual run, which will end in a transaction commit.
Each invoice will be stamped with a batch number, allowing us to revert the changes later if needed, so it's not "dangerous" to do the batch run. The users just feel that it's better UX to be able to simulate the results first.
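For the curious, a minimal sketch of how such a dry-run switch tends to be wired up (procedure, table and column names are invented):

    CREATE PROCEDURE dbo.usp_RunInvoiceBatch
        @BatchNo INT,
        @TestRun BIT   -- 1 = simulate and roll back, 0 = commit for real
    AS
    BEGIN
        SET XACT_ABORT ON;  -- any runtime error rolls the whole batch back
        BEGIN TRANSACTION;

        -- ...select the deeds matching the search criteria and create the
        --    term invoices, stamping each with @BatchNo...

        -- The report the user reviews, produced inside the transaction:
        SELECT DeedId, InvoiceAmount FROM dbo.Invoice WHERE BatchNo = @BatchNo;

        IF @TestRun = 1
            ROLLBACK TRANSACTION;   -- simulation: undo everything
        ELSE
            COMMIT TRANSACTION;     -- live run
    END;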
CLARIFICATION
It is NOT about testing. We do have test and staging environments for that. It's about a regular user of our system wanting to simulate the results of a large operation that may seem "uncontrollable" or "irreversible" even though it isn't.
CONCLUSION
It doesn't seem like anyone has any really good arguments against our solution. As always, context means everything, so in the context of complex functional requirements exceeding performance requirements, using DB rollback to implement batch job simulations seems a viable solution.
As there is no real answer to this question, I am not choosing an answer - instead I upvoted those who actually put forth an argument.
I think it's an acceptable approach, as long as it doesn't interfere with regular processing.
The alternative would be to build a query that displays the consequences for review, but we all have had the experience of taking such an approach and not quite getting it right; or finding that the context changed between query and execution.
At the scale of 1000 rows, it's unlikely the system load is burdensome.
Before she actually does this, she does a test run (implemented by doing the actual run but ending in a transaction rollback), creating a report of which deeds will be invoiced. If it looks satisfactory, she can choose to do the actual run, which will end in a transaction commit.
That's wrong, prone to failure, and must be hell on your database logs. Unless you wrap your simulation and the actual run in a single transaction (which, judging by the timeline necessary to inspect 1000 deeds, would lead to a lot of blocked users) then there's no guaranteed consistency between test run and real run. If somebody changed data, added rows, etc. then you could end up with a different result - defeating the entire purpose of the test run.
A better pattern to do this would be for the test run to tag the records, and the real run to pick up the tagged records and process them. Or, if you have a thick client app, you can pull down the records to the client, show the report, and - if approved - push them back up.
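Sketched in SQL terms (the tagging column and all names are hypothetical):

    -- Test run: tag the candidates; the report is just a SELECT over the tags.
    UPDATE dbo.Deed
    SET    PendingBatchNo = @BatchNo
    WHERE  NextTermDate <= @CutoffDate;   -- stand-in for the real search criteria

    SELECT DeedId, NextTermAmount FROM dbo.Deed WHERE PendingBatchNo = @BatchNo;

    -- Real run: process exactly the tagged rows, so test and live cannot diverge.
    INSERT dbo.Invoice (DeedId, BatchNo, Amount)
    SELECT DeedId, @BatchNo, NextTermAmount
    FROM   dbo.Deed
    WHERE  PendingBatchNo = @BatchNo;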
We can see what the user needs to do, quite a reasonable thing. I mean how often do we get a regexp right first time? Refining a query till it does exactly what you want is not unusual.
The business consequences of not catching errors may be quite high, so doing a trial run makes sense.
Given a blank sheet of paper, I'm sure we could devise a clean implementation expressed in formal behaviours of the system rather than this somewhat back-door approach.
How much effort would I put into fixing that now? It depends on whether the current approach is actually hurting. We can imagine that in a heavily used system it could lead to contention in the database.
What I wrote about the PRO FORMA environment in that bank I worked in was also entirely a user thing.
I'm not sure exactly what you're asking here. Taking you literally:
What is your opinion on relying on database transactions to carry out your business logic?
Well, that's why we have transactions. We do rely on them. We hit an error and abort a transaction, relying on the work done in that transaction scope being rolled back. So exploiting the transactional behaviours of our systems is a jolly good thing, and we'd need to hand-roll the same thing ourselves if we didn't.
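The everyday shape of that reliance in T-SQL looks something like this:

    BEGIN TRY
        BEGIN TRANSACTION;
        -- ...several statements that must succeed or fail as a unit...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Any error undoes all the work done in the transaction scope.
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the caller knows (SQL Server 2012+)
    END CATCH;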
But I think your question is about testing in a live system and relying on not committing in order to do no damage. In an ideal world we have a live system and a test system, and we don't mess with live systems. Such ideals are rarely seen. Far more common is "patch the live system. Testing? What do you mean, testing?" So in fact you're ahead of the game compared with some.
An alternative is to have dummy data in the live system, so that some actions can actually run all the way through. Again, error prone.
A surprisingly high proportion of system outages are due to finger trouble; it's the humans who foul up.
It works - as you say. I'd worry about the concurrency of the system since the transaction will hold locks, possibly many locks. It means that your tests will hold up any live action on the system (and any live action operations will hold up your tests). It is generally better to use a test system for testing. I don't like it much, but if the risks from running a test but forgetting to roll it back are not a problem, and neither is the interference, then it is an effective way of attaining a 'what if' type calculation. But still not very clean.
When I was working in a company that was part of the "financial system", there was a project team that had decided to use the production environment to test their conversion procedure (and just rollback instead of commit at the end).
They almost got shot for it. In hindsight, it's a pity they weren't.
Your test environments that you claim you have are for the IT people's use. Get a similar "PRO-FORMA" environment and guarantee your users that it is for THEIR use exclusively.
When I worked in that bank, creating such a PRO-FORMA environment was standard procedure at every year closure.
"But it's not about testing, it's about a user wanting to simulate the results of a large job."
Paraphrased: "it's not about testing, it's about simulation".
Sometimes I wish Fabian Pascal was still in business.
(Oh, and in case anyone doesn't understand : both are about "not perpetuating the results".)

SQL Server NOLOCK on queries run for authorization

During the course of our application login, several queries are run, all around validating the login. While evaluating them I noticed that one of the queries runs without the NOLOCK hint.
There does not seem to be any particular danger from a dirty read, because the data would hardly ever change.
Thinking about it from the angle of a DoS-type attack by somebody attempting failed logins over and over again, I am suggesting that the lack of NOLOCK lowers our threshold for failure.
I believe this is an extremely unlikely result of a DoS attack (I think the web server would go first), but adding NOLOCK should take it from unlikely to impossible.
So, am I being excessive or trivial?
Having NOLOCKs or not is the least of your worries with a DoS attempt against your server.
I wouldn't sweat it.
If, as you say, the data rarely changes, having the NOLOCKs there probably doesn't hurt.
Yes, you are being excessively trivial.
If you are exposed to DoS attacks, NOLOCK on the SQL authorization call is the least of your worries. Implement some DoS detection, failure tracking + throttling, even some planned pauses that wouldn't affect a real user but would slow down an attack...
The NOLOCK could potentially improve your performance if you call that query frequently, especially within broader transactions.
Consider the nature of the dirty read - would the time window where it can occur be truly critical? E.g. where you add or remove someone from an authorized role:
In the add scenario, a dirty read would fail on that attempt (access denied).
In the remove scenario, a dirty read would work on that attempt (access granted).
If the data changes through manual operation, e.g. human interaction, the humans' margin for "latency" is typically much higher/more indeterminate than your database's!
Also a very rare case, but still: just at the moment somebody deactivates the user to prevent them from logging in, NOLOCK lets them in. It could be a rogue user/hacker/employee who needs to be locked out immediately.
You would have to be concerned about this particular scenario to forgo the performance advantage of NOLOCK.
You would protect your table far better by looking at the permissions on it. A table like this should not allow any direct access; all access should go through stored procs, with the permissions set on those instead.
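In sketch form (the role, procedure and table names are made up):

    -- No direct table access for the application's role...
    DENY SELECT, INSERT, UPDATE, DELETE ON dbo.UserAccount TO app_users;

    -- ...only the vetted entry point is callable; ownership chaining lets the
    -- procedure read the table even though direct access is denied.
    GRANT EXECUTE ON dbo.usp_ValidateLogin TO app_users;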
