Monitoring Locks - sql-server

We are using Livelink for our records management, and if a user moves a folder with lots of sub-folders it takes a lock on the database and slows the whole system down. Despite sending many warnings out to users, this is still happening. Is there any sort of monitoring tool that will give us an early warning when the locks occur?
If not, what code could I run to show the locks, along with the username of whoever is causing them?
Thanks

I have no idea what Livelink is and no clue what documents, folders and movements you're talking about. That being said, since you mention locks and you tagged your question SQL Server: in a SQL Server system, lock contention can be greatly reduced by deploying the snapshot isolation models; see Using Snapshot Isolation. Simply enabling READ_COMMITTED_SNAPSHOT will unblock any read performed under the default isolation level, READ COMMITTED:
ALTER DATABASE [dbname] SET READ_COMMITTED_SNAPSHOT ON;
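
If you just need something to run right now, here is a minimal sketch using the dynamic management views available from SQL Server 2005 onward; the column choice is illustrative, not exhaustive, and it requires VIEW SERVER STATE permission:

-- List sessions that are currently blocked, the session blocking them,
-- and the logins on both sides of the chain.
SELECT
    r.session_id          AS blocked_session,
    r.blocking_session_id AS blocking_session,
    s.login_name          AS blocked_login,
    b.login_name          AS blocking_login,
    r.wait_type,
    r.wait_time           -- milliseconds waited so far
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
LEFT JOIN sys.dm_exec_sessions b ON b.session_id = r.blocking_session_id
WHERE r.blocking_session_id <> 0;

Run it (or schedule it in an Agent job that alerts you) while the slowdown is happening, and the blocking_login column will tell you who is at the head of the blocking chain.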

Related

Solution for preventing DB deadlock in ColdFusion?

An app I am working on has to handle lots of AJAX requests that need to update some data in the DB.
[Macromedia][SQLServer JDBC Driver][SQLServer]Transaction (Process ID 66) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
For reads, I've already used the WITH (NOLOCK) hint and that prevented a lot of deadlocks on reads.
What can I do to better deal with writes?
CFLock the update code in CF?
Or is there a way to ask SQL Server to lock a Row instead of a Table?
Has anyone tried implementing CQRS? It seems to solve the problem, but I am not clear on how to handle:
ID generation (right now it uses auto-increment on the DB)
How to deal with an update request failing if the server can't send the error back to the client right away.
Thanks
Here are my thoughts on this.
From the ColdFusion server side
I do believe that using named <cflock> tags around your ColdFusion code that updates the database could prevent the deadlock issue on the database server. Using a named lock would make each call single-threaded. However, you could run into timeouts on the ColdFusion server side, waiting for the <cflock>, if transactions take a while. Handling it this way on the ColdFusion server may also slow down your application. You could do some load testing before and after to see how this method affects your app.
From the database server side
First of all, let me say that I don't think deadlocks on the database server can be entirely prevented, only minimized and handled appropriately. I found this reference on TechNet for you - Minimizing Deadlocks. The first sentence from that page:
Although deadlocks cannot be completely avoided, following certain coding conventions can minimize the chance of generating a deadlock.
Here are the key points from that reference. They go into a bit more detail about each topic so please read the original source.
Minimizing deadlocks can increase transaction throughput and reduce system overhead because fewer transactions are:
Rolled back, undoing all the work performed by the transaction.
Resubmitted by applications because they were rolled back when deadlocked.
To help minimize deadlocks:
Access objects in the same order.
Avoid user interaction in transactions.
Keep transactions short and in one batch.
Use a lower isolation level.
Use a row versioning-based isolation level.
Set READ_COMMITTED_SNAPSHOT database option ON to enable read-committed transactions to use row versioning.
Use snapshot isolation.
Use bound connections.
The "row versioning-based isolation level" may answer your question Or is there a way to ask SQL Server to lock a Row instead of a Table?. There are some notes mentioned in the original source regarding this option.
Here are some other references that came up during my search:
Avoiding deadlock by using NOLOCK hint
How to avoid sql deadlock?
Tips to avoid deadlocks? - This one mentions being careful when using the NOLOCK hint.
The Difficulty with Deadlocks
Using Row Versioning-based Isolation Levels

SQL Server transactions / concurrency confusion - must you always use table hints?

When you create a SQL Server transaction, what gets locked if you do not specify a table hint? In order to lock something, must you always use a table hint? Can you lock rows/tables outside of transactions (i.e. in ordinary queries)? I understand the concept of locking and why you'd want to use it, I'm just not sure about how to implement it in SQL Server, any advice appreciated.
You should use query hints only as a last resort, and even then only after expert analysis. In some cases, they will cause a query to perform badly. So, unless you really know what you are doing, avoid using query hints.
Locking (of various types) happens automatically every time you perform a query (unless NOLOCK is specified). The default transaction isolation level is READ COMMITTED.
What are you actually trying to do?
Understanding Locking in SQL Server
"Can you lock rows/tables outside of transactions (i.e. in ordinary queries)?"
You'd better understand that there are no ordinary queries or actions in SQL Server; they are ALL, without any exceptions, transactional. This is how ACID-ness is achieved; see, for example, [1]. If client tools or the developer do not specify a transaction explicitly with BEGIN TRANSACTION and COMMIT/ROLLBACK, then each statement runs as its own autocommit transaction.
Also, a transaction is not a synonym for engaging locks. There is a plethora of mechanisms to control concurrency without locking (for example, row versioning, etc.), and the READ UNCOMMITTED transaction "isolation" level (in that case, the absence of any isolation) does not control it at all.
Update2:
In order to lock something, must you always use a table hint?
As long as the transaction isolation level is not READ UNCOMMITTED or one of the row-versioning (snapshot) isolation levels - for example, under the default READ COMMITTED, or a level set by, say,
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
locks are issued (this is too big a topic to cover fully here [2]). Table hints, which can be used in individual statements, override these settings.
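A sketch of that override (the table is hypothetical): the session below stays at the default READ COMMITTED level, but the HOLDLOCK hint makes the single table reference behave as if the statement ran under SERIALIZABLE:

BEGIN TRANSACTION;

-- HOLDLOCK is equivalent to the SERIALIZABLE hint: shared (range) locks
-- are held until the transaction ends instead of being released after
-- the read completes.
SELECT Balance
FROM dbo.Accounts WITH (HOLDLOCK)
WHERE AccountID = 42;

-- other work here still sees the locked range protected
COMMIT TRANSACTION;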
[1] Paul S. Randal, "Understanding Logging and Recovery in SQL Server" (What is Logging?): http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx#id0060003
[2] http://en.wikipedia.org/wiki/Isolation_(database_systems)

Lock Question - When is an Update (U) lock issued?

We are trying to resolve a deadlock problem. The transaction that is getting rolled back is attempting to issue an Update (U) lock on a resource that another transaction has an Exclusive (X) lock on. According to Books Online (http://msdn.microsoft.com/en-us/library/ms175519.aspx), an Update lock is supposed to prevent deadlocks, not cause them.
So, my question is: why/when is an Update lock applied to a resource? We're a little confused about this because the resource that the Update lock is being applied to will not be updated by the process whose transaction is being rolled back.
Thanks for your help on this.
Randy
You are going to need to do a bit more research to find out what is actually locking, what isolation level each query is in, etc.
Some helpful resources.
SQL Server Transaction Isolation Levels and their Locks
SQL Server Lock Types and Lock Hints
There's a whole universe of "what if" behind what causes deadlocks (by which I mean, there's no way to tell from your initial post what's really going on). Could be table locks, could be locks on indexes; could be outstanding transactions you are not aware of; could be table header locks, could be tempdb issues (very unlikely), who knows?
The best method I've ever found to diagnose deadlocks works like so:
Fire up SQL Profiler, configure it with the "Deadlock Graph" and "Lock: Deadlock" events, and be sure to include the TextData column
While Profiler is running, cause your application to generate the deadlock
Select the "Deadlock Graph" profiler row, and you'll get a simple yet confusing graphic display of what's going on. This might help you figure out what's really going on.
If that doesn't help, the graphic is generated by Profiler based on a very detailed lump of XML. Extract this string (select the Deadlock Graph row, Ctrl+C to copy, paste in your text editor of choice, and delete all the columns but the XML), and then review the XML (in your preferred XML editor).
To-date, once I've gotten and worked through that XML, I've always been able to figure out what was causing the deadlock. It's a good way to learn how weird and convoluted some of SQL's internals can get.
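If you cannot run Profiler against the server, an alternative sketch (assuming sysadmin rights and SQL Server 2005 or later) is to have the engine write the same deadlock detail to the error log via trace flag 1222:

DBCC TRACEON (1222, -1);   -- -1 applies the flag server-wide; requires sysadmin
-- ...reproduce the deadlock, then read the log:
EXEC sp_readerrorlog;
DBCC TRACEOFF (1222, -1);  -- turn it off again when you are done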
Probably you are not handling your transaction isolation properly and are running in SERIALIZABLE isolation mode.
You should
Use the proper transaction isolation level on the connection.
Sometimes use hints in your SQL, like the NOLOCK hint, when you are basically just reading some data and not doing anything else with it in the transaction.
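A sketch of that kind of hinted read (the table is a placeholder; dirty reads are possible, so only do this where stale or uncommitted data is acceptable):

-- Read without taking shared locks; may return uncommitted rows.
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = 42;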

Is it possible to have secondary server available read-only in a log shipping scenario?

I am looking into using log shipping in a SQL Server 2005 environment. The idea was to set up frequent log shipping to a secondary server. The intent: Use the secondary server to serve report queries, thereby offloading the primary db server.
I came across this on a sqlservercentral forum thread:
When you create the log shipping you have two choices. You can configure the restore log operation to be done with the NORECOVERY or with the STANDBY option. If you use the NORECOVERY option, you cannot issue SELECT statements against it. If instead of NORECOVERY you use the STANDBY option, you can run SELECT queries on the database.
Bear in mind that with the STANDBY option, users will be kicked out without warning by the restore process when log file restores occur. Actually, when you configure log shipping with the STANDBY option, you can also select between two choices: kill all processes in the secondary database and perform the log restore, or don't perform the log restore if the database is being used. Of course, if you select the second option, the restore operation might never run if someone opens a connection to the database and doesn't close it, so it is better to use the first option.
So my questions are:
Is the above true? Can you really not use log shipping in the way I intend?
If it is true, could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
EDIT:
The first question is a duplicate of this serverfault question. But I would still like the second question answered: why is it not possible to execute SELECT statements while the transaction log is being restored?
could someone explain why you cannot execute SELECT statements to a database while the transaction log is being restored?
The short answer is that the RESTORE statement takes an exclusive lock on the database being restored.
For writes, I hope there is no need for me to explain why they are incompatible with a restore. Why does it not allow reads either? First of all, there is no way to know whether a session that holds a lock on a database is going to do a read or a write. But even if that were possible, a restore (log or backup) is an operation that updates the data pages directly. Since these updates go straight to the physical location (the page) and do not follow the logical hierarchy (metadata-partition-page-row), they would not honor possible intent locks taken by other data readers, and thus could change structures while they are being read. A SELECT table scan following the page next/prev pointers would be thrown into disarray, resulting in a corrupted read.
Well yes and no.
You can do exactly what you wish to do, in that you may offload reporting workloads to a secondary server by configuring Log Shipping to a read only copy of a database. I have set this type of architecture up on a number of occasions previously and it works very well indeed.
The caveat is that in order to restore a transaction log backup file there must be no other connections to the database in question. Hence the two choices: when the restore process runs, it will either fail, thereby prioritising user connections, or it will succeed by disconnecting all user connections in order to perform the restore.
Depending on your restore frequency, this is not necessarily a problem. You simply educate your users to the fact that, say, every hour at ten past the hour, there is a possibility that a report may fail. If this happens, simply re-run the report.
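For reference, a minimal sketch of the standby-mode restore being described here (the database name and file paths are placeholders):

-- Restore a log backup in standby mode: the database stays readable
-- between restores, and uncommitted work is parked in the standby file.
RESTORE LOG [ReportDB]
FROM DISK = N'\\backupshare\ReportDB_log.trn'
WITH STANDBY = N'D:\SQLData\ReportDB_standby.ldf';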
EDIT: You may also want to evaluate alternative architecture solutions for your business need, for example Transactional Replication, or Database Mirroring with a Database Snapshot.
If you have Enterprise Edition, you can use database mirroring plus a database snapshot to create a read-only copy of the database, available for reporting, etc. Mirroring uses "continuous" log shipping under the hood. It is frequently used in the scenario you have described.
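A sketch of the snapshot side of that setup, run on the mirror server (Enterprise Edition; the names and paths are placeholders, and since each snapshot is static it is usually dropped and recreated on a schedule):

-- Create a point-in-time, read-only snapshot of the mirrored database.
CREATE DATABASE ReportDB_Snapshot
ON ( NAME = ReportDB_Data,   -- logical file name of the source database
     FILENAME = 'D:\SQLData\ReportDB_Snapshot.ss' )
AS SNAPSHOT OF ReportDB;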
Yes, it's true.
I think the following happens:
While the transaction log is being restored, the database is locked, as large portions of it are being updated.
This is for performance reasons more than anything else.
I can see two options:
Use database mirroring.
Schedule the log shipping to only occur when the reporting system is not in use.
There is slight confusion here: the NORECOVERY flag on the restore means your database is not going to be brought out of a recovery state and into an online state - that is why the SELECT statements will not work; the database is offline. The NORECOVERY flag is there to allow you to restore multiple log files in a row (in a DR-type scenario) without bringing the database back online.
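A sketch of that restore chain (the database name and paths are placeholders):

-- NORECOVERY keeps the database in a restoring (offline) state so more
-- log backups can be applied; the final RECOVERY brings it online.
RESTORE LOG [ReportDB] FROM DISK = N'D:\Logs\log1.trn' WITH NORECOVERY;
RESTORE LOG [ReportDB] FROM DISK = N'D:\Logs\log2.trn' WITH NORECOVERY;
RESTORE LOG [ReportDB] FROM DISK = N'D:\Logs\log3.trn' WITH RECOVERY;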
If you did not want to log ship / live with those disadvantages, you could swap to one-way transactional replication, but the overhead / set-up will be more complex overall.
Would peer-to-peer replication work? Then you could run queries on one instance and save the load on the original instance.

Better concurrency in Oracle than SQL Server?

Is it true that better concurrency can be achieved in Oracle databases than in MS SQL Server databases? In particular in an OLTP scenario, such as an ERP system?
I overheard an SAP consultant making this claim, referring to Oracle locking techniques such as row locking, multi-version read consistency, and the redo log.
Out of the box, Oracle will have a higher transaction throughput but this is because it defaults to MVCC. SQL Server defaults to blocking selects on uncommitted updates but it can be changed to MVCC as well so that difference should basically go away. See Read Committed Isolation Level.
See Enabling Row Versioning-Based Isolation Levels.
When the ALLOW_SNAPSHOT_ISOLATION database option is set ON, the instance of the Microsoft SQL Server Database Engine does not generate row versions for modified data until all active transactions that have modified data in the database complete. If there are active modification transactions, SQL Server sets the state of the option to PENDING_ON. After all of the modification transactions complete, the state of the option is changed to ON. Users cannot start a snapshot transaction in that database until the option is fully ON. The database passes through a PENDING_OFF state when the database administrator sets the ALLOW_SNAPSHOT_ISOLATION option to OFF.
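Putting that together, a sketch of enabling and then using snapshot isolation (the database and table names are placeholders):

-- One-time setup: allow snapshot transactions in the database.
ALTER DATABASE [dbname] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Per session: opt in, then read a transactionally consistent view.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;  -- sees a consistent snapshot of the data
COMMIT TRANSACTION;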
He/She was probably referring to the facts that:
In Oracle readers do not block writers and writers do not block readers
Oracle does not maintain a list of row locks so there is no significant overhead in locking and locks never escalate to the table level.
Starting with SQL 2005 this is no longer true - you can enable snapshot isolation and your writers will not block your readers, just like in Oracle.
SQL Server has row locking, several different transaction isolation levels, and a transaction log that can be replayed.
Maybe he's referring to Access, which does not have these.
Or maybe he believes Oracle uses better defaults. He might have a better argument there, but with either DBMS, if you're talking ERP, you'd better have a DBA who knows enough about the system to keep it tuned properly.
