SQL Server: do readers wait on writers (for same data)? - sql-server

I've read that older versions of SQL Server had a pessimistic locking strategy, i.e. readers wait on writers for access to the same data (at row or page level), unlike Oracle.
Is this still the case in newer versions? I've read that the locking strategy has been changed in recent versions.

What you have heard of is SNAPSHOT isolation, available since SQL Server 2005. Snapshot isolation, a.k.a. row versioning, is the default behavior in Oracle. You can make it the default in SQL Server too, by enabling READ_COMMITTED_SNAPSHOT on the database:
ALTER DATABASE [<dbname>] SET READ_COMMITTED_SNAPSHOT ON;
With row versioning, SQL Server does not acquire data locks during reads. If concurrent writes occur, the read fetches the previous committed version of the row. For more details, read Row Versioning-based Isolation Levels in the Database Engine.
You should not confuse row versioning and snapshot reads with dirty reads. Dirty reads return inconsistent data, which makes programming a challenge, to say the least (i.e. you should not use them!). Snapshot reads always return a transactionally consistent image of the data.
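A minimal sketch of the effect, assuming a hypothetical dbo.Accounts table and a database with READ_COMMITTED_SNAPSHOT enabled:
-- Session 1: modify a row and leave the transaction open
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2: instead of blocking on session 1's exclusive lock,
-- this read returns the last committed version of the row
SELECT balance FROM dbo.Accounts WHERE id = 1;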

By default SQL Server uses the READ COMMITTED isolation level, which means readers will wait for uncommitted changes to be committed (or rolled back) before reading them.
http://msdn.microsoft.com/en-us/library/ms173763.aspx
Note that if you don't care about the accuracy of the data returned, you can always set your isolation level to READ UNCOMMITTED; this will give you all the records, including the ones that have pending changes.
You can also use the SNAPSHOT isolation level, which will give you all the records, including the last committed version of rows that are currently being modified, without the in-flight modifications.
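For illustration, both levels can be selected per session (a sketch; the dbo.Orders table is hypothetical):
-- Dirty reads: may return uncommitted, in-flight changes
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;

-- Versioned reads: requires ALLOW_SNAPSHOT_ISOLATION ON for the database;
-- returns the last committed version of rows being modified elsewhere
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;
COMMIT;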

The locking strategy is something that is handled on a connection-by-connection basis; it can be set by the application and within SQL Server itself.
Read about the Transaction Isolation Levels for more details.

Related

Does DB2 for z/OS have isolation level similar to READ ONLY in Oracle?
I need to implement several big SELECTs against DB2, and I need to retrieve consistent data as it was committed at the time the queries started, so I need something like a 'snapshot' isolation level. As far as I know, in Oracle this can be achieved with the READ ONLY isolation level, but what about DB2 for z/OS?
DB2 for z/OS does not have a "read only" isolation level (nor does Oracle, as "read only" is a transaction state, not an isolation level).
You can avoid lock waits by queries if you use the currently committed concurrent access resolution option; note, however, that it does not implement "snapshot isolation" per se: a query that uses this option will see the latest committed changes, even if those changes were committed after the query started.
There is a FOR READ ONLY clause in DB2 z/OS. You add it at the end of your query.
For tables in which updates and deletes are allowed, specifying FOR READ ONLY can possibly improve the performance of FETCH operations as DB2® can do blocking and avoid exclusive locks. For example, in programs that contain dynamic SQL statements without the FOR READ ONLY or ORDER BY clause, DB2 might open cursors as if the UPDATE clause was specified.
Here is the Info Center article with more information.
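For example, appended to the end of a query (a sketch; the table and columns are hypothetical):
SELECT order_id, status
FROM orders
WHERE order_date = CURRENT DATE
FOR READ ONLY;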
If you're really looking for the DB2 version of "Serializable", then you are looking for Repeatable Read.

Transaction isolation in JET?

MSDN describes the JET transaction isolation for its OLEDB provider as follows:
Jet supports five levels of nesting in transactions. The only supported mode for transactions is Read Committed. Setting lesser levels of transactional separation implies Read Committed. Setting higher levels will cause StartTransaction to fail. Jet supports only single-phase commit.
MSDN describes Read Committed as follows:
Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.
My questions are:
What is single-phase commit? What consequence does this have for transactions and isolation?
Would the Read Committed isolation level as described above be suitable for my requirements here?
What is the best way to achieve a Serializable transaction isolation using Jet?
By question number:
Single-phase commit is used where all of your data is in one database -- the activity of the transaction is committed atomically and you're done. If you have a logical transaction which needs to be spread across multiple storage engines (like a relational database for metadata and some sort of document store for a big blob) you can use a transaction manager to coordinate the activities so that the work is persisted in both or neither, if both products support two phase commit. They are just telling you that they don't support two-phase commit, so the product is not suitable for distributed transactions.
Yes, if you check the condition in the UPDATE statement itself; otherwise you might have problems (see the sketch after this list).
They seem to be suggesting that you can't.
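A sketch of the check-the-condition-in-the-UPDATE idea from the second answer, assuming a hypothetical Version counter column on the table:
-- Optimistic update: succeeds only if the row is unchanged since it was read
UPDATE Products
SET Price = 19.99, Version = Version + 1
WHERE ProductID = 42 AND Version = 7;
-- If 0 rows were affected, another user changed the row first:
-- re-read the data and retry the change in the application.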
As an aside, I worked for decades as a consultant in quite a variety of environments. More than once I was engaged to migrate people off of Jet because of performance problems. In one case a simple "star" type query was running for two minutes because it was joining on the client rather than letting the database do it. As a direct query against the database it was sub-second. In another case there was a report which took 72 hours to run through Jet, which took 2 minutes when run directly against the database. If it generally works OK for you, you might be able to deal with such situations by using stored procedures where Jet is causing performance pain.

In SQL Server 2005 and 2008, how to tell I'm using pessimistic concurrency model or optimistic one?

I know SQL Server 2000 has a pessimistic concurrency model, and that an optimistic model was added in SQL Server 2005. So how do I tell whether I'm using the pessimistic concurrency model or the optimistic one in SQL Server 2005 and 2008?
Thanks.
SQL Server 2005 (and 2008) introduces SNAPSHOT isolation. This is the way to move to optimistic concurrency. Take a look at the Transaction Isolation and the New Snapshot Isolation Level article:
Isolation level                  Dirty reads   Non-repeatable reads   Phantom reads   Concurrency control
READ UNCOMMITTED                 Yes           Yes                    Yes             Pessimistic
READ COMMITTED (with locking)    No            Yes                    Yes             Pessimistic
READ COMMITTED (with snapshot)   No            Yes                    Yes             Optimistic
REPEATABLE READ                  No            No                     Yes             Pessimistic
SNAPSHOT                         No            No                     No              Optimistic
SERIALIZABLE                     No            No                     No              Pessimistic
After reading some articles and documents from Microsoft, I came to the following conclusion.
On SQL Server 2005+
If you are using the read uncommitted, repeatable read, or serializable isolation level, you are using a pessimistic concurrency model.
If you are using the snapshot isolation level, you are using an optimistic concurrency model.
If you are using the read committed isolation level and the database setting read_committed_snapshot is ON, then you are using an optimistic concurrency model.
If you are using the read committed isolation level and the database setting read_committed_snapshot is OFF, then you are using a pessimistic concurrency model.
However, I still need confirmation. Also, if there is some code to test the concurrency model, that would be great.
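For confirmation, the relevant database settings can be inspected directly (a sketch; works on SQL Server 2005 and later):
-- Check which concurrency-related options are enabled for the current database
SELECT name,
       snapshot_isolation_state_desc,  -- state of ALLOW_SNAPSHOT_ISOLATION
       is_read_committed_snapshot_on   -- 1 = read committed uses row versioning
FROM sys.databases
WHERE name = DB_NAME();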
Basically:
Pessimistic: you lock the record for yourself until you have finished with it. That corresponds to the read committed transaction isolation level (not read uncommitted, as you said).
Optimistic concurrency control works on the assumption that resource conflicts between multiple users are unlikely, and it permits transactions to execute without locking any resources. The resources are checked only when transactions are trying to change data. This determines whether any conflict has occurred (for example, by checking a version number). If a conflict occurs, the application must read the data and try the change again. Optimistic concurrency control is not provided with the product, but you can build it into your application manually by tracking database access. (Source)

How to figure the read/write ratio in Sql Server?

How can I query the read/write ratio in SQL Server 2005? Are there any caveats I should be aware of?
Perhaps it can be found in a DMV query, a standard report, a custom report (e.g. the Performance Dashboard), or by examining a SQL Profiler trace. I'm not sure exactly.
Why do I care?
I'm taking time to improve the performance of my web app's data layer. It deals with millions of records and thousands of users.
One of the points I'm examining is database concurrency. SQL Server uses pessimistic concurrency by default, which is good for a write-heavy app. If my app is read-heavy, I might switch it to optimistic concurrency (isolation level: read committed snapshot) like Jeff Atwood did with StackOverflow.
All apps are heavily read-oriented:
An UPDATE is a read for the WHERE clause followed by a write.
An INSERT must check unique indexes and FKs, which are reads (and is why you index FK columns).
At most you have 15% writes. I saw an article once discussing it, but can't find it again. More likely it's 1%.
I know that in our 6 million new rows per day DB, we still have a minimum of 95%+ reads (an estimate of course).
Why do you need to know?
Also: How to find out SQL Server table’s read/write statistics?
Edit, based on the question update...
I would leave DB concurrency alone until you need to change it. We've not changed anything out of the box for our 6 million rows per day plus heavy reads either.
For tuning our web app, we designed it to reduce round trips (one call = one action, multiple record sets per call, etc.).
Check out sys.dm_db_index_usage_stats:
seeks, scans, lookups are all reads
updates are writes
Keep in mind that the counters are reset with each server restart; you should look at them only after a representative load has run.
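A sketch of such a query for the current database, using the reads/writes split described above:
-- Seeks, scans and lookups count as reads; updates count as writes
SELECT SUM(user_seeks + user_scans + user_lookups) AS reads,
       SUM(user_updates)                           AS writes
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();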
There are also some performance counters that can help you:
Batch Requests/sec: number of Transact-SQL command batches received per second.
Write Transactions/sec: number of transactions that wrote to the database and committed.
Transactions/sec: number of transactions started for the database.
From these rates you can get a pretty good estimate of read:write ratio of your requests.
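These counters can also be read from T-SQL (a sketch; the */sec counters are cumulative since startup, so sample them twice and divide the difference by the elapsed time):
SELECT RTRIM(counter_name) AS counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec',
                       'Write Transactions/sec',
                       'Transactions/sec');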
After your update...
Turning on the version store is probably the best avenue for dealing with concurrency. Rather than using snapshot isolation explicitly, I'd recommend turning on read committed snapshot:
ALTER DATABASE [<dbname>] SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE [<dbname>] SET READ_COMMITTED_SNAPSHOT ON;
This will make read committed reads (i.e. the default ones) use row versioning instead, so it literally doesn't require any change in the app and can be quickly tested.
You should also investigate whether your reads are executed under the SERIALIZABLE isolation level, which is what happens when a TransactionScope is used without explicitly specifying the isolation level.
One word of caution: the version store is not exactly free. See Row Versioning Resource Usage. You should also give SQL Server 2005 Row Versioning-Based Transaction Isolation a read.
How about finding a ratio of num_of_writes & num_of_reads counters in sys.dm_io_virtual_file_stats?
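For example (a sketch; this counts physical file I/O, which is affected by caching, so it is only a rough proxy for the logical read/write ratio):
SELECT DB_NAME(database_id) AS database_name,
       SUM(num_of_reads)  AS file_reads,
       SUM(num_of_writes) AS file_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id;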
I did it using SQL Server Profiler. I just opened it before running the application and watched what kind of queries were executed while I was doing something in the application. I think this is better for making sure that queries work than for measuring server workload, but Profiler can also save traces which you can analyse later, so it might work.

Better concurrency in Oracle than SQL Server?

Is it true that better concurrency can be achieved in Oracle databases than in MS SQL Server databases? In particular in an OLTP scenario, such as an ERP system?
I've overheard an SAP consultant making this claim, referring to Oracle locking techniques such as row locking, multi-version read consistency, and the redo log.
Out of the box, Oracle will have higher transaction throughput, but this is because it defaults to MVCC. SQL Server defaults to blocking selects on uncommitted updates, but it can be changed to MVCC as well, so that difference should basically go away. See Read Committed Isolation Level.
See Enabling Row Versioning-Based Isolation Levels.
When the ALLOW_SNAPSHOT_ISOLATION database option is set ON, the instance of the Microsoft SQL Server Database Engine does not generate row versions for modified data until all active transactions that have modified data in the database complete. If there are active modification transactions, SQL Server sets the state of the option to PENDING_ON. After all of the modification transactions complete, the state of the option is changed to ON. Users cannot start a snapshot transaction in that database until the option is fully ON. The database passes through a PENDING_OFF state when the database administrator sets the ALLOW_SNAPSHOT_ISOLATION option to OFF.
He/She was probably referring to the facts that:
In Oracle readers do not block writers and writers do not block readers
Oracle does not maintain a list of row locks so there is no significant overhead in locking and locks never escalate to the table level.
Starting with SQL Server 2005 this is no longer true: you can enable snapshot isolation and your writers will not block your readers, just like in Oracle.
SQL Server has row locking, several different transaction isolation levels, and a transaction log that can be replayed.
Maybe he's referring to Access, which does not have these.
Or maybe he believes Oracle uses better defaults. He might have a better argument there, but with either DBMS, if you're talking ERP, you had better have a DBA who knows enough about the system to keep it tuned properly.
