Open 2 windows in SQLServer 2014 Management Studio connected to a 2014 DB.
In 1st window:
SELECT * FROM categorytypes WHERE id = 12
Output:
ID DESCRIPTION ORDER
---------------------
12 Electronics 20
Then:
begin tran
UPDATE CategoryTypes SET [order] = 20 WHERE id = 12
Now open another query window (Ctrl+N):
SELECT * FROM CategoryTypes
The query blocks indefinitely until the transaction in the first window is committed or rolled back. That is understandable, because the row with ID = 12 is locked.
SELECT * FROM CategoryTypes WHERE ID <> 12
That works fine.
SELECT * FROM CategoryTypes WHERE Description = 'MEN'
Here is the problem: why on earth does this query also block indefinitely? We know the row with ID = 12 has the description 'Electronics', not 'MEN'.
In a large application where heavy DML and SELECT operations run simultaneously against the same table, this locking behavior prevents the two from working at the same time even on different sets of records.
In Oracle this use case works, as long as the locked (dirty) row is not part of the result set.
Is there any way to avoid this and get Oracle-like locking behavior? I don't want to use NOLOCK or set my transaction to READ UNCOMMITTED.
Thanks.
In SQL Server, the default behavior is to use locking in the default READ_COMMITTED isolation level. With this default locking behavior, it is especially important to have useful indexes so that only needed data are touched. For example, the query with Description in the WHERE clause will not be blocked if you 1) have an index on Description and 2) that index is used in the query plan to locate the needed rows. Without an index on Description, a full table scan will result and the query will be blocked when it hits the uncommitted change.
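As a rough illustration (assuming the CategoryTypes table from the question and that Description is reasonably selective), an index like the following should let the filtered SELECT seek directly to the qualifying rows instead of scanning into the locked one:
CREATE NONCLUSTERED INDEX IX_CategoryTypes_Description
    ON dbo.CategoryTypes (Description);

-- with the index in place (and used by the plan), this should no longer block on the uncommitted update to ID = 12
SELECT * FROM dbo.CategoryTypes WHERE Description = 'MEN';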
If you want to avoid locking such that readers don't block writers and visa-versa, you can turn on the READ_COMMITTED_SNAPSHOT database option. SQL Server will then use row versioning instead of locking to ensure only committed data are returned.
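A minimal sketch of turning the option on (MyDatabase is a placeholder; the ALTER needs exclusive access to the database, hence the WITH ROLLBACK IMMEDIATE):
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
After this, statements running under the default READ COMMITTED level read the last committed version of a row instead of waiting on the writer's exclusive lock.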
As with other DBMS products that use row versioning, READ_COMMITTED_SNAPSHOT carries more overhead than pure in-memory locking. SQL Server adds 14 bytes of additional storage per row and uses tempdb more heavily for the row version store. Whether these overhead costs are justified depends on the concurrency benefits your workload gains.
Related
I have a SQL Server database where I am deleting rows from three tables A, B, C in batches, with some conditions, through a SQL script scheduled in a SQL job. The job runs for 2 hours because the tables contain a large amount of data. While the job is running, my front-end application is not accessible (it gives timeout errors), since the application inserts and updates data in these same tables A, B, C.
Is it possible for the front-end application to run in parallel without issues while the SQL script is running? I have checked the locks on the tables, and SQL Server is acquiring page locks. Could READ COMMITTED SNAPSHOT or SNAPSHOT isolation, or converting page locks to row locks, help here? Need advice.
Split the operation into two phases. In the first phase, collect the primary keys of the rows to delete:
create table #TempList (ID int);
insert #TempList
select ID
from YourTable
-- apply here the same conditions your current delete script uses
In the second phase, use a loop to delete those rows in small batches:
while 1 = 1
begin
    -- delete the collected rows in small batches
    delete top (1000)
    from YourTable
    where ID in (select ID from #TempList);

    if @@rowcount = 0
        break;
end
The smaller batches will allow your front end applications to continue in between them.
I suspect that SQL Server at some point escalates to a table lock, which makes the table inaccessible for both reading and updating.
To optimize locking and concurrency when dealing with large deletes, use batches. Start with 5000 rows at a time (to prevent lock escalation) and monitor how it behaves and whether it needs further tuning up or down. 5000 is a "magic number", but it's low enough that the lock manager doesn't consider escalating to a table lock, and large enough to keep performance reasonable.
Whether timeouts happen or not depends on other factors as well, but this will surely reduce them, if not eliminate them altogether. If the timeouts happen on read operations, you should be able to get rid of them. Another approach, of course, is to increase the command timeout value on the client.
Snapshot (optimistic) isolation is an option as well, READ COMMITTED SNAPSHOT more precisely, but it won't help with updates blocked by other sessions. Also, beware of version store (tempdb) growth. It is best combined with the proposed batch approach to keep the transactions small.
Also, switch to bulk-logged recovery for the duration of the delete if the database is normally in full recovery. But switch back as soon as it finishes, and take a backup.
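The switch itself looks roughly like this (MyDatabase and the backup path are placeholders):
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;
-- ... run the batched delete here ...
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';  -- take the backup recommended above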
Almost forgot -- if it's the Enterprise edition of SQL Server, partition your table; then you can just switch the partition out. It's nearly instantaneous, and the clients will never notice it.
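A hedged sketch of the partition-switch approach, assuming YourTable is already partitioned and YourTable_Staging is an empty table with an identical structure on the same filegroup:
-- metadata-only operation: moves the whole partition out of the live table almost instantly
ALTER TABLE dbo.YourTable SWITCH PARTITION 3 TO dbo.YourTable_Staging;

-- the old data can then be discarded without touching the live table
TRUNCATE TABLE dbo.YourTable_Staging;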
I'm currently having troubles with frequent deadlocks with a specific user table in SQL Server 2008. Here are some facts about this particular table:
Has a large number of rows (1 to 2 million)
All the indexes used on this table only have the "use row lock" ticked in their options
Edit: There is only one index on the table which is its primary Key
rows are frequently updated by multiple transactions, but each update touches a different row (probably a thousand or more update statements are executed against different unique rows every hour)
the table does not use partitions.
Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE (see the query below)
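For reference, I checked it with a query roughly like this (the table name is a placeholder):
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'MyUserTable';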
I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects that would incur. From what I understand, DISABLE will stop locks from escalating to the TABLE level, which, combined with the row-lock settings on the indexes, should in theory minimize the deadlocks I am encountering.
From what I have read in Determining threshold for lock escalation, it seems that locking automatically escalates when a single transaction fetches 5000 rows.
What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements?
Or is it a single sql update/select statement that fetches 5000 or more rows?
Any insight is appreciated, btw, n00b DBA here
Thanks
LOCK Escalation triggers when a statement holds more than 5000 locks on a SINGLE object. A statement holding 3000 locks each on two different indexes of the same table will not trigger escalation.
When a lock escalation is attempted and a conflicting lock exists on the object, the attempt is aborted and retried after another 1250 locks (held, not acquired).
So if your updates are performed on individual rows and you have a supporting index on the column, then lock escalation is not your issue.
You will be able to verify this using the Locks -> Lock:Escalation event in Profiler.
I suggest you capture the deadlock trace to identify the actual cause of the deadlock.
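One quick way to capture it (a sketch; trace flag 1222 writes the deadlock graph to the SQL Server error log until you turn it off):
-- enable deadlock reporting server-wide
DBCC TRACEON (1222, -1);
-- ... reproduce the deadlock, then review the error log ...
DBCC TRACEOFF (1222, -1);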
I found this article after a quick Google search on disabling table lock escalation. Although not a real answer for the OP, I think it is still relevant for one-off scripts and noteworthy here. There's a nice little trick you can use to temporarily disable table lock escalation.
Open another connection and issue something like:
BEGIN TRAN
SELECT * FROM mytable WITH (UPDLOCK, HOLDLOCK) WHERE 1=0
WAITFOR DELAY '1:00:00'
COMMIT TRAN
as the Microsoft KB article explains:
Lock escalation cannot occur if a different SPID is currently holding an incompatible table lock.
I'm wondering what the benefit of using SELECT WITH (NOLOCK) on a table is if the only other queries affecting that table are SELECT queries.
How is that handled by SQL Server? Would a SELECT query block another SELECT query?
I'm using SQL Server 2012 and a Linq-to-SQL DataContext.
(EDIT)
About performance:
Would a second SELECT have to wait for a first SELECT to finish when using a normal (locking) SELECT, versus a SELECT WITH (NOLOCK)?
A SELECT in SQL Server will place a shared lock on a table row - and a second SELECT would also require a shared lock, and those are compatible with one another.
So no - one SELECT cannot block another SELECT.
What the WITH (NOLOCK) query hint is used for is to be able to read data that's in the process of being inserted (by another connection) and that hasn't been committed yet.
Without that query hint, a SELECT might be blocked reading a table by an ongoing INSERT (or UPDATE) statement that places an exclusive lock on rows (or possibly a whole table), until that operation's transaction has been committed (or rolled back).
The problem with the WITH (NOLOCK) hint is that you might be reading data rows that, in the end, are never actually committed (if the INSERT transaction is rolled back) - so your report, for example, might show data that was never really committed to the database.
There's another query hint that might be useful - WITH (READPAST). This instructs the SELECT command to simply skip any rows it attempts to read that are locked exclusively. The SELECT will not block, and it will not read any "dirty" uncommitted data - but it might skip some rows, i.e. not show all the rows in the table.
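A quick sketch of that hint (table and column names here are just placeholders); exclusively locked rows are skipped rather than waited for:
SELECT OrderID, Status
FROM dbo.Orders WITH (READPAST)
WHERE Status = 'Pending';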
On performance: you keep focusing on the SELECT side.
A shared lock does not block reads.
A shared lock blocks updates.
If you have hundreds of shared locks, it is going to take an update a while to get an exclusive lock, as it must wait for the shared locks to clear.
By default a SELECT (read) takes a shared lock.
Shared (S) locks allow concurrent transactions to read (SELECT) a resource.
A shared lock has no effect on other SELECTs (1 or 1000).
The difference is in how NOLOCK versus a shared lock affects update and insert operations.
No other transactions can modify the data while shared (S) locks exist on the resource.
A shared lock blocks an update!
But nolock does not block an update.
This can have a huge impact on the performance of updates. It also impacts inserts.
Dirty read (nolock) just sounds dirty. You are never going to get partial data. If an update is changing John to Sally you are never going to get Jolly.
I use shared locks a lot for concurrency. Data is stale as soon as it is read. A read of John that changes to Sally the next millisecond is stale data. A read of Sally that gets rolled back to John the next millisecond is stale data. That is on the millisecond level. I have a data loader that takes 20 hours to run if users are taking shared locks and 4 hours to run if users are taking no locks. Shared locks in this case cause the data to be 16 hours stale.
Don't use NOLOCK where it doesn't belong, but it does have a place. If you are going to cut a check when a byte is set to 1 and then set it to 2 when the check is cut - that is not the time for NOLOCK.
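If you want to see the shared-lock blocking described above for yourself, here is a rough two-window demo (table and column names are made up; HOLDLOCK keeps the shared lock for the whole transaction, since under plain READ COMMITTED it is released as soon as the row has been read):
-- window 1: take and hold a shared lock
BEGIN TRAN;
SELECT * FROM dbo.Accounts WITH (HOLDLOCK) WHERE AccountID = 1;

-- window 2: this UPDATE waits until window 1 commits or rolls back
UPDATE dbo.Accounts SET OwnerName = 'Sally' WHERE AccountID = 1;

-- window 1: release the lock
COMMIT;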
I have to add one important comment. Everyone is mentioning that NOLOCK only reads dirty data. This is not precise. It is also possible that you'll get the same row twice, or that a whole row is skipped during your read. The reason is that you could be asking for data at the same time SQL Server is re-balancing the b-tree.
Check these other threads:
https://stackoverflow.com/a/5469238/2108874
http://www.sqlmag.com/article/sql-server/quaere-verum-clustered-index-scans-part-iii.aspx
With the NOLOCK hint (or setting the isolation level of the session to READ UNCOMMITTED) you tell SQL Server that you don't expect consistency, so there are no guarantees. Bear in mind though that "inconsistent data" does not only mean that you might see uncommitted changes that were later rolled back, or data changes in an intermediate state of the transaction. It also means that in a simple query that scans all table/index data SQL Server may lose the scan position, or you might end up getting the same row twice.
At my work, we have a very big system that runs on many PCs at the same time, with very big tables with hundreds of thousands of rows, and sometimes many millions of rows.
When you make a SELECT on a very big table, let's say you want to know every transaction a user has made in the past 10 years, and the primary key of the table is not built in an efficient way, the query might take several minutes to run.
Then, our application might be running on many users' PCs at the same time, accessing the same database. So if someone tries to insert into the table that another SELECT is reading (in pages that SQL Server is trying to read), a lock can occur and the two transactions block each other.
We had to add WITH (NOLOCK) to our SELECT statement, because it was a huge SELECT on a table that is used heavily by many users at the same time, and we were hitting locks all the time.
I don't know if my example is clear enough, but this is a real-life example.
The SELECT WITH (NOLOCK) allows reads of uncommitted data, which is equivalent to having the READ UNCOMMITTED isolation level set on your database. The NOLOCK keyword allows finer grained control than setting the isolation level on the entire database.
Wikipedia has a useful article: Wikipedia: Isolation (database systems)
It is also discussed at length in other Stack Overflow questions.
A SELECT with NOLOCK will return records that may or may not end up being committed; you will read dirty data.
For example, let's say a transaction inserts 1000 rows and then fails (rolls back).
When you SELECT with NOLOCK, you may still get those 1000 rows.
In my SQL Server database, the tempOrder table has millions of records, and there are 10 triggers that update tempOrder whenever another table is updated.
So I want to apply WITH (NOLOCK) to the table.
I know I can write:
SELECT * FROM tempOrder WITH (NOLOCK)
But is there any way to apply WITH (NOLOCK) directly to the table in SQL Server 2008?
The direct answer to your question is NO -- there is no option to tell SQL Server to never lock tableX. With that said, your question opens up a whole series of things that should be brought up.
Isolation Level
First, the most direct way you can accomplish what you want is to use the WITH (NOLOCK) hint or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED (a.k.a. chaos). These options apply to a single query or to the duration of the connection, respectively. If I chose this route, I would combine it with a long-running SQL Profiler trace to identify any queries taking locks on TableX.
Lock Escalation
Second, SQL Server does have a table-level LOCK_ESCALATION setting (set with ALTER TABLE ... SET (LOCK_ESCALATION = AUTO | TABLE | DISABLE)). This controls whether, and to what granularity, SQL Server consolidates many fine-grained locks into a single coarse-grained lock once a statement crosses the escalation threshold on a single database object (think index).
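For completeness, the syntax looks roughly like this (TableX stands in for the real table):
ALTER TABLE dbo.TableX SET (LOCK_ESCALATION = DISABLE);  -- other values: AUTO, TABLE (the default)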
Overriding SQL Server's lock escalation generally isn't a good idea. As the documentation states:
In most cases, the Database Engine delivers the best performance when
operating with its default settings for locking and lock escalation.
As counterintuitive as it may seem, given the scenario you described you might have better luck with fewer, broader locks instead of NOLOCK. You'll need to test this theory out with a real workload to determine whether it's worthwhile.
Snapshot Isolation
You might also check out the SNAPSHOT isolation level. There isn't enough information in your question to know, but I suspect it would help.
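If you do experiment with it, a rough sketch (the database name is a placeholder; tempOrder is the table from your question):
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- in the reading session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM dbo.tempOrder;  -- reads the last committed version without blocking writers
COMMIT;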
Dangers of NOLOCK
With that said, as you might have picked up from #GSerg's comment, NOLOCK can be evil. No-Lock is colloquially referred to as Chaos--and for good reason. When developers first encounter NOLOCK it seems like allowing dirty reads is the only implication. There are more...
Dirty data is read, giving inconsistent results (the common impression)
Wrong data -- neither consistent with the pre-write nor the post-write state of your data
Hard exceptions (like error 601 due to data movement) that terminate your query
Blank data is returned
Previously committed rows are missed
Malformed bytes are returned
But don't take my word for it:
Actual Email: "NoLOCK is the epitome of evil?"
SQL Server NOLOCK hint & other poor ideas
Is the nolock hint a bad practice
This is not a table-level configuration.
If you add (NOLOCK) to the query (it is called a table hint), you are saying that when executing this (and only this) query, it won't take shared locks on the affected tables.
Of course, you can make this behavior the default for the current connection by setting the transaction isolation level, for example: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. But again, it is valid only while that connection is open.
Perhaps if you explain in more details what you are trying to achieve, we can better help you.
You cannot change the default isolation level for a table or a database (except via the snapshot-based options); however, you can change it for all read queries in the current session:
set transaction isolation level read uncommitted
See msdn for more information.
I have seen SQL statements using NOLOCK and WITH (NOLOCK), e.g.:
select * from table1 nolock where column1 > 10
AND
select * from table1 with(nolock) where column1 > 10
Which of the above statements is correct and why?
The first statement doesn't apply any locking hint at all, whereas the second one does. When I tested this out just now on SQL Server 2005, in
select * from table1 nolock where column1 > 10 --INCORRECT
"nolock" became the alias, within that query, of table1.
select * from table1 with(nolock) where column1 > 10
performs the desired nolock functionality. Skeptical? In a separate window, run
BEGIN TRANSACTION
UPDATE table1
set SomeColumn = 'x' + SomeColumn
to lock the table, and then try each locking statement in its own window. The first will hang, waiting for the lock to be released, and the second will run immediately (and show the "dirty data"). Don't forget to issue
ROLLBACK
when you're done.
The list of deprecated features is at Deprecated Database Engine Features in SQL Server 2008:
Specifying NOLOCK or READUNCOMMITTED in the FROM clause of an UPDATE or DELETE statement.
Specifying table hints without using the WITH keyword.
HOLDLOCK table hint without parentheses.
Use of a space as a separator between table hints.
The indirect application of table hints to an invocation of a multi-statement table-valued function (TVF) through a view.
They are all in the list of features that will be removed sometime after the next release of SQL Server, meaning they'll likely be supported in the next release only under a lower database compatibility level.
That being said my 2c on the issue are as such:
Both FROM table NOLOCK and FROM table WITH (NOLOCK) are wrong. If you need dirty reads, you should use the appropriate transaction isolation level: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED. This way the isolation level used is explicitly stated and controlled from one 'knob', as opposed to being spread out through the source and subject to all the quirks of table hints (indirect application through views and TVFs, etc.).
Dirty reads are an abomination. What is needed, in 99.99% of cases, is a reduction of contention, not reading uncommitted data. Contention is reduced by writing proper queries against a well-designed schema and, if necessary, by deploying snapshot isolation. The best solution, which works almost always save for a few extreme cases, is to enable read committed snapshot in the database and let the engine work its magic:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
Then remove ALL hints from the selects.
They are both technically correct; however, specifying table hints without the WITH keyword has been deprecated since SQL Server 2005, so get used to using the WITH keyword. Short answer: use the WITH keyword.
Use "WITH (NOLOCK)".
Both are syntactically correct.
NOLOCK will become the alias for table1.
WITH (NOLOCK) is often exploited as a magic way to speed up database reads, but I try to avoid using it wherever possible.
The result set can contain rows that have not yet been committed, that are often later rolled back.
You can get an error, or the result set can be empty, be missing rows, or display the same row multiple times.
This is because other transactions are moving data at the same time you're reading it.
READ COMMITTED adds an additional issue where data can be corrupted within a single column when multiple users change the same cell simultaneously.
There are other side-effects too, which result in sacrificing the speed increase you were hoping to gain in the first place.
Now you know, never use it again.