Is it true that better concurrency can be achieved in Oracle databases than in MS SQL Server databases? In particular in an OLTP scenario, such as an ERP system?
I've overheard an SAP consultant making this claim, referring to Oracle techniques such as row locking, multi-version read consistency, and the redo log.
Out of the box, Oracle will have higher transaction throughput, but this is because it defaults to MVCC. SQL Server defaults to blocking selects on uncommitted updates, but it can be changed to use MVCC as well, so that difference should basically go away. See Read Committed Isolation Level.
See Enabling Row Versioning-Based Isolation Levels.
When the ALLOW_SNAPSHOT_ISOLATION database option is set ON, the instance of the Microsoft SQL Server Database Engine does not generate row versions for modified data until all active transactions that have modified data in the database complete. If there are active modification transactions, SQL Server sets the state of the option to PENDING_ON. After all of the modification transactions complete, the state of the option is changed to ON. Users cannot start a snapshot transaction in that database until the option is fully ON. The database passes through a PENDING_OFF state when the database administrator sets the ALLOW_SNAPSHOT_ISOLATION option to OFF.
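To illustrate, here is a minimal sketch of enabling the option and checking its state (the database name SalesDb is hypothetical):

ALTER DATABASE [SalesDb] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Verify the state has moved from PENDING_ON to fully ON
SELECT name, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = N'SalesDb';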
He/She was probably referring to the facts that:
In Oracle readers do not block writers and writers do not block readers
Oracle does not maintain a list of row locks, so there is no significant overhead in locking, and locks never escalate to the table level.
Starting with SQL Server 2005 this is no longer true: you can enable snapshot isolation, and your writers will not block your readers, just like in Oracle.
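For example, once ALLOW_SNAPSHOT_ISOLATION is ON for the database, a session can opt in like this (the table and filter are hypothetical):

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
-- Reads the last committed version of each row; concurrent writers are not blocked
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
COMMIT;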
SQL Server has row locking, several different transaction isolation levels, and a transaction log that can be replayed.
Maybe he's referring to Access, which does not have these.
Or maybe he believes Oracle uses better defaults. He might have a better argument there, but with either DBMS, if you're talking ERP, you had better have a DBA who knows enough about the system to keep it tuned properly.
Related
Does DB2 for z/OS have isolation level similar to READ ONLY in Oracle?
I need to implement several big SELECTs against DB2, and I also need to retrieve consistent data as it was committed at the time the queries started, so I need something like a 'snapshot' isolation level. As far as I know, in Oracle this can be achieved with the READ ONLY isolation level, but what about DB2 for z/OS?
DB2 for z/OS does not have a "read only" isolation level (nor does Oracle, as "read only" is a transaction state, not an isolation level).
You can avoid lock waits for queries by using the currently committed concurrent access resolution option; note, however, that it does not implement "snapshot isolation" per se: a query that uses this option will see the latest committed changes, even if those changes were committed after the query started.
There is a FOR READ ONLY clause in DB2 for z/OS. You add it at the end of your query; a sketch follows the quoted passage below.
For tables in which updates and deletes are allowed, specifying FOR READ ONLY can possibly improve the performance of FETCH operations, as DB2 can do blocking and avoid exclusive locks. For example, in programs that contain dynamic SQL statements without the FOR READ ONLY or ORDER BY clause, DB2 might open cursors as if the UPDATE clause was specified.
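A minimal sketch of appending the clause (table and column names are hypothetical):

SELECT order_id, order_status
FROM orders
WHERE order_date = CURRENT DATE
FOR READ ONLY;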
Here is the Info Center article with more information.
If you're really looking for the DB2 version of "Serializable", then you are looking for Repeatable Read.
When the initial snapshot is being generated while configuring SQL Server transactional replication, does anyone know if the snapshot agent places locks on the tables (articles) you have selected? I have some tables that contain 2+ million rows and want to know if SQL Server actually places table locks to prevent updates while the publishing database is online. If locks are placed, then I want to run the initial snapshot during off-peak hours in production.
Thanks!
In transactional replication, as in any other type of replication, the starting point is a snapshot of the database; the initial step of creating the snapshot is exactly the same for every replication type.
SQL Server does not obtain any kind of locks when creating a database snapshot; it literally is a snapshot of the database at a certain point in time, and creating it does not interfere with any transactions. Uncommitted transactions are rolled back in the snapshot once it is created.
To read more about how database snapshots work, see the MSDN article How Database Snapshots Work.
If you're running on an edition of SQL Server that supports database snapshots (as in create database [foo]... as snapshot of [bar]), then you can optionally use those as the basis of the replication snapshot. Check the @sync_method parameter of sp_addpublication. The caveat is that you still probably want to do it during a non-busy time of day because of how database snapshots work (i.e. copy-on-write will slow down any write activity), but you won't be contending on locks.
Starting with SQL Server 2005, the default @sync_method value for sp_addpublication is "concurrent", which means the tables are not locked during the snapshot agent run. Note this is not entirely true: the snapshot agent places schema locks on the tables, but the duration of that lock is mere seconds at most.
So if you set @sync_method = "concurrent", then no, updates, in theory, will not be blocked. If @sync_method = "native" (the default in SQL Server 2000) or "character", then yes, updates will be blocked.
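For example, a minimal sketch of creating a publication with the concurrent sync method (the publication name is hypothetical, and the call is assumed to run in the publication database):

EXEC sp_addpublication
    @publication = N'MyPublication',  -- hypothetical publication name
    @sync_method = N'concurrent';     -- snapshot agent holds only brief schema locks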
I wanted to see if you guys are utilizing marked transactions in your TFS backup scenario. Are there any drawbacks or gotchas to consider for this?
If I use the TFS Power Tools to create a backup plan, the following is created for me:
Tables and Stored Procedures needed for marked transactions
Scheduled Jobs
Maintenance Plans for Full, Differential, and Transaction Logs
The Backup/Restore Power Tool relies on SQL marked transactions to keep consistency across the TFS (and dependency products) databases. Source: http://intovsts.net/tag/tfs-power-tools/
Before inserting named marks into the transaction log, consider the following (source: MSDN):
Because transaction marks consume log space, use them only for transactions that play a significant role in the database recovery strategy.
After a marked transaction commits, a row is inserted in the logmarkhistory table in msdb.
If a marked transaction spans multiple databases on the same database server or on different servers, the marks must be recorded in the logs of all the affected databases.
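For illustration, a minimal sketch of a marked transaction and a restore to that mark (the database, table, mark, and file names are all hypothetical):

BEGIN TRANSACTION TfsBackupMark WITH MARK 'TFS backup sync point';
    -- Work that must stay consistent across the related databases
    UPDATE dbo.BackupAnchor SET LastMarkedAt = GETDATE();
COMMIT TRANSACTION TfsBackupMark;

-- Later, each affected database can be restored to the same mark
RESTORE LOG [Tfs_Configuration]
    FROM DISK = N'C:\Backups\Tfs_Configuration.trn'
    WITH STOPATMARK = 'TfsBackupMark';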
That kind of settles the matter of marked transactions in my backup plan. Especially since the TFS databases use full recovery mode, and the tool relies on it, there isn't much choice. :)
I've read that older versions of SQL Server had a pessimistic locking strategy, i.e. readers wait on writers for access to the same data (at row or page level), unlike Oracle.
Is this still the case in newer versions ? I've read that the locking strategy has been changed in recent versions.
What you heard of is SNAPSHOT ISOLATION, available since SQL Server 2005. Snapshot isolation, a.k.a. row versioning, is the default behavior in Oracle. You can make it the default in SQL Server too, by enabling READ_COMMITTED_SNAPSHOT on the database:
ALTER DATABASE [<dbname>] SET READ_COMMITTED_SNAPSHOT ON;
With row versioning SQL Server does not acquire data locks during reads. If concurrent writes occur, the read will fetch the previous version of the row. For more details, read Row Versioning-based Isolation Levels in the Database Engine.
You should not confuse row versioning and snapshot reads with dirty reads. Dirty reads return inconsistent data, which makes programming a challenge, to say the least (i.e. you should not use them!). Snapshot reads always offer a transactionally consistent image of the data.
By default, SQL Server uses the READ COMMITTED isolation level, which means it will wait on uncommitted changes before it tries to read them.
http://msdn.microsoft.com/en-us/library/ms173763.aspx
Note that if you don't care about the accuracy of the data returned, you can always set your isolation level to READ UNCOMMITTED; this will give you all the records, even the ones that have pending changes.
You can also use the SNAPSHOT isolation level, which will give you all the records, including the last committed version of rows that are currently being modified, without the in-flight modifications.
The locking strategy is handled on a connection-by-connection basis; it can be set by the application and within SQL Server itself.
Read about the Transaction Isolation Levels for more details.
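For example, these session-level statements switch strategies (run before your queries; SNAPSHOT requires ALLOW_SNAPSHOT_ISOLATION to be ON for the database):

-- Ignore locks; may return dirty (uncommitted) data
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Alternatively, read a consistent snapshot without blocking on writers
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;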
We have an old version of Cognos 7 running on Sql Server 2000 Enterprise.
It is issuing very badly constructed SQL commands that create many locks, which escalate and block the server.
The targeted database is built once a day and then only used for selection.
As the Cognos queries can't be changed (short of upgrading to Cognos 10), what can I do to improve this situation?
If I mark the database Read Only will this prevent the locks?
Locking does not happen in read-only databases, so this would (probably) help, assuming that locks are the only cause.
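For example, a minimal sketch assuming a hypothetical database name, run after the daily build completes:

ALTER DATABASE [CognosReporting] SET READ_ONLY WITH ROLLBACK IMMEDIATE;

-- Flip it back before the next daily rebuild
ALTER DATABASE [CognosReporting] SET READ_WRITE WITH ROLLBACK IMMEDIATE;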
If you can issue a new query at the start of a session you could also change the transaction isolation level to read uncommitted, which would cause selects to ignore locks.
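For example, issued once at the start of each session:

-- Subsequent SELECTs in this session ignore locks (and may see dirty data)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;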