query ASE DB lock and the duration the lock is held - sybase

I am looking for a SQL query that prints out all DB locks currently held, and how long each lock has been held, for ASE.
I would like to run that query periodically so that we can monitor DB health for lock issues.

See the master..monLocks table for a list of granted locks and pending (blocked) lock requests.
The WaitTime column will give you the number of seconds a process has been waiting for a requested lock. You should be able to use the rest of the columns in that table to build the desired query (NOTE: you may need to join with other tables ... depends on what info you're looking for).
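A starting point might look like the sketch below. It assumes the usual MDA columns (SPID, DBID, ObjectID, LockType, LockState, WaitTime) and that monitoring and the mon_role are enabled; check the monLocks definition for your ASE version before relying on it.

-- Sketch: list granted and requested locks with how long requesters have waited.
select l.SPID,
       db_name(l.DBID)                 as DbName,
       object_name(l.ObjectID, l.DBID) as ObjName,
       l.LockType,
       l.LockState,                    -- e.g. granted vs. requested
       l.WaitTime                      -- seconds spent waiting for a requested lock
from master..monLocks l
order by l.WaitTime desc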

There's also the system stored proc sp_object_stats. From the docs:
"Shows lock contention, lock wait-time, and deadlock statistics for tables and indexes"
The output for a single object looks like:
Object Name: pubtune..titles (dbid=7, objid=208003772, lockscheme=Datapages)
   Page Locks     SH_PAGE        UP_PAGE        EX_PAGE
   ----------     ----------     ----------     ----------
   Grants:        94488          4052           4828
   Waits:         532            500            776
   Deadlocks:     4              0              24
   Wait-time:     20603764 ms    14265708 ms    2831556 ms
   Contention:    0.56%          10.98%         13.79%
*** Consider altering pubtune..titles to Datarows locking.
Displays the top 10 objects by default.
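For example (assuming the documented parameter order of interval, top N, database name, object name):

-- Collect lock statistics for 20 minutes and report the default top 10 objects.
sp_object_stats "00:20:00"

-- Or restrict the report to a single table.
sp_object_stats "00:20:00", 10, pubtune, titles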

Related

Prevent SQL Server concurrency insert locking

I have 45 active concurrent insert transactions; each transaction tries to insert (only insert, without any select or update) about 250 rows into some tables.
The problem is that when a transaction wants to insert the data into the tables, there are about 1000 X and IX locks (sys.dm_tran_locks) on many index rows and index pages.
I have moved the index files to an SSD but it didn't help, and I still have a lot of pending transactions; each transaction takes about 200 ms to 4000 ms to complete according to Audit Logout in SQL Profiler.
The Buffer I/O, Buffer latch, Lock, Latch and Logging wait times are 0 or very low in Activity Monitor.
I have tried to increase the number of transactions, but that didn't help either and the number of executions in Activity Monitor is still the same.
My system info:
2x E5,
SSD Raid 0 for log files,
HDD Raid 10 for data,
SSD Raid 0 for indexes,
+64GB DDR3,
SQL Server 2014 SP2
There can be multiple suggestions depending on what exactly causes the problem:
Your indexes. Every time you do an insert, SQL Server updates all indexes on the table, so one solution is to reduce the number of indexes on your tables.
IDENTITY column contention. Try replacing your IDENTITY columns with UNIQUEIDENTIFIER.
Extra I/O associated with page splits. Regularly rebuild the clustered index with a lower FILLFACTOR (extreme scenario: <50%).
PFS contention. Create multiple files in your DB and spread indexes/tables across them.
You are on SQL Server 2014. Try the In-Memory OLTP features.
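As a rough sketch of the FILLFACTOR and UNIQUEIDENTIFIER points (all object names below are made up, not from the question):

-- Rebuild the clustered index leaving free space on each page to reduce page splits.
ALTER INDEX PK_MyTable ON dbo.MyTable REBUILD WITH (FILLFACTOR = 70);

-- Spread inserts across pages by keying on a random GUID instead of an IDENTITY hotspot.
CREATE TABLE dbo.MyTable_New
(
    Id      UNIQUEIDENTIFIER NOT NULL
            CONSTRAINT DF_MyTable_New_Id DEFAULT NEWID()
            CONSTRAINT PK_MyTable_New PRIMARY KEY,
    Payload VARCHAR(250) NOT NULL
);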

Readpast lock in memory optimized table in SQL Server 2014?

I have a big problem working with memory-optimized tables in SQL Server 2014.
I know that there is no READPAST lock hint for memory-optimized tables in SQL Server, but in some scenarios this can cause a drop in performance.
Suppose that there are 20 records in one table. Each record has one column LockStatus with an initial value of Wait.
If two consumers want to pick top(10) of records, what happens?
Consumer one gets the first 10 records and changes their status to Locked; while it is using them, the second consumer tries to pick top(10) but is aborted with:
The current transaction attempted to update a record that has been updated since this transaction started. The transaction was aborted.
With a READPAST lock we could tell consumer 2 to pick the second 10 records instead of being aborted.
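For comparison, on a regular disk-based table the usual pattern is a claim query with READPAST so each consumer skips rows another consumer has locked; memory-optimized tables don't accept these hints, which is exactly the limitation described above. The table and column names below follow the example but are assumptions:

-- Disk-based sketch: claim 10 unclaimed rows, skipping rows locked by other consumers.
WITH claim AS
(
    SELECT TOP (10) LockStatus
    FROM dbo.WorkQueue WITH (READPAST, UPDLOCK, ROWLOCK)
    WHERE LockStatus = 'Wait'
)
UPDATE claim SET LockStatus = 'Locked';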

SQL Server Locking Mechanism Irritating

Open two windows in SQL Server 2014 Management Studio, connected to a SQL Server 2014 DB.
In 1st window:
SELECT * FROM categorytypes WHERE id = 12
Output:
ID DESCRIPTION ORDER
---------------------
12 Electronics 20
Then:
begin tran
UPDATE CategoryTypes SET [order] = 20 WHERE id = 12
Now go to other window (Ctrl+N):
SELECT * FROM CategoryTypes
The query will execute indefinitely until the first window's transaction is committed or rolled back. That's fine, because the ID=12 row is locked.
SELECT * FROM CategoryTypes WHERE ID <> 12
That works fine.
SELECT * FROM CategoryTypes WHERE Description = 'MEN'
Here is the problem: why the hell should this query execute indefinitely? We know that ID=12 has the description 'Electronics'.
In a large application where heavy DML and select operations run simultaneously on the same table, this kind of locking mechanism won't allow those two things to happen at the same time on different sets of records.
In Oracle this kind of use case works, as long as the locked (dirty) row is not part of the result set.
Guys, is there any way to avoid this and get Oracle-like locking behaviour? I don't want to use NOLOCK or set my transaction to READ UNCOMMITTED.
Thanks.
In SQL Server, the default behavior is to use locking in the default READ_COMMITTED isolation level. With this default locking behavior, it is especially important to have useful indexes so that only needed data are touched. For example, the query with Description in the WHERE clause will not be blocked if you 1) have an index on Description and 2) that index is used in the query plan to locate the needed rows. Without an index on Description, a full table scan will result and the query will be blocked when it hits the uncommitted change.
If you want to avoid locking such that readers don't block writers and vice versa, you can turn on the READ_COMMITTED_SNAPSHOT database option. SQL Server will then use row versioning instead of locking to ensure only committed data are returned.
Like other DBMS products that use row versioning, READ_COMMITTED_SNAPSHOT has more overhead than in-memory locking: SQL Server adds 14 bytes of additional storage per row and uses tempdb more heavily for the row version store. Whether these overhead costs are justified depends on the concurrency benefits your workload gains.
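Both suggestions are short statements (the database name below is a placeholder):

-- Let readers see the last committed row version instead of blocking on the writer's lock.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Or give the Description predicate an index so the plan can seek past the locked row.
CREATE NONCLUSTERED INDEX IX_CategoryTypes_Description
    ON dbo.CategoryTypes (Description);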

How to efficiently use LOCK_ESCALATION in SQL Server 2008

I'm currently having troubles with frequent deadlocks with a specific user table in SQL Server 2008. Here are some facts about this particular table:
Has a large number of rows (1 to 2 million)
All the indexes used on this table have only "use row lock" ticked in their options
Edit: There is only one index on the table, which is its primary key
Rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour)
The table does not use partitions.
Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE.
I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will prevent escalation to TABLE-level locks, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering.
From what I have read in Determining threshold for lock escalation, it seems that locking automatically escalates when a single transaction fetches 5000 rows.
What does a single transaction mean in this sense? A single session/connection getting 5000 rows thru individual update/select statements?
Or is it a single sql update/select statement that fetches 5000 or more rows?
Any insight is appreciated, btw, n00b DBA here
Thanks
LOCK Escalation triggers when a statement holds more than 5000 locks on a SINGLE object. A statement holding 3000 locks each on two different indexes of the same table will not trigger escalation.
When a lock escalation is attempted and a conflicting lock exists on the object, the attempt is aborted and retried after another 1250 locks (held, not acquired)
So if your updates are performed on individual rows and you have a supporting index on the column, then lock escalation is not your issue.
You will be able to verify this using the Locks > Lock:Escalation event in Profiler.
I suggest you capture the deadlock trace to identify the actual cause of the deadlock.
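For reference, checking and changing the setting the OP mentions, plus writing deadlock details to the error log, might look like this (the table name is assumed):

-- Inspect the current escalation setting reported in sys.tables.
SELECT name, lock_escalation_desc FROM sys.tables WHERE name = 'MyUserTable';

-- Disable escalation to a table lock for this table only (SQL Server 2008+).
ALTER TABLE dbo.MyUserTable SET (LOCK_ESCALATION = DISABLE);

-- Write deadlock graphs to the error log to find the actual cause.
DBCC TRACEON (1222, -1);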
I found this article after a quick Google search on disabling table lock escalation. Although not a real answer for the OP, I think it is still relevant for one-off scripts and noteworthy here. There's a nice little trick you can use to temporarily disable table lock escalation.
Open another connection and issue something like:
BEGIN TRAN
SELECT * FROM mytable WITH (UPDLOCK, HOLDLOCK) WHERE 1=0
WAITFOR DELAY '1:00:00'
COMMIT TRAN
This works because, as the Microsoft KB explains:
Lock escalation cannot occur if a different SPID is currently holding
an incompatible table lock.

SQL Server deadlock clarity?

After reading this interesting article I have some questions.
This table shows a deadlock situation:
T1 holds X lock on all rows with c1=5 on table t_lock1 while T2 holds
X lock on all rows with C1=1 on table t_lock2.
Now each of these transactions wants to update the rows previously
locked by the other. This results in a deadlock.
Question #1
Do transactions obtain locks? I know that reading from a table is done with a shared lock, and a write to a table is done with an exclusive lock (I'm talking about the default locking settings).
So it seems from this example that a transaction also holds a lock... is that correct?
Question #2
...T1 holds X lock on all rows with c1=5 on table t_lock1...
IMHO, as I've said, the locking is not per row (although that can be forced, the author didn't mention it), so why does he say: on all rows with C1=5?
Do transactions obtain locks?
No. The statement that you execute - a SELECT or an UPDATE - will acquire the locks. What depends on your transaction isolation level setting is how long the (shared) locks for a reading SELECT are held - that's all. Shared locks are normally held only very briefly, while update and exclusive locks are held until the transaction ends. The transaction might hold the locks - but it's not the transaction that acquires them...
*...T1 holds X lock on all rows with c1=5 on table t_lock1...*
IMHO, as I've said, the locking is not per row (although that can be forced, the author didn't mention it), so why does he say: on all rows with C1=5?
The locking is per row - by default. But why do you think there's only a single row with C1=5? There could be multiple - possibly thousands - and the UPDATE statement will lock all those rows affected by the UPDATE statement.
For question 1: SQL Server reads the source table's rows using U locks, then, when updating, converts them to X locks only on those rows which qualify for the update. Notice the distinction between reading many rows and then filtering them down to those which actually get written; those two sets are locked differently.
As there are no selects in your queries, only U and X locks are taken. S locks are not taken for update queries on the table being updated. This is a heuristic deadlock-avoidance scheme.
Question 2: Locking can be done at different granularities, but for low row counts it is usually per row (and this can be forced). Maybe the author assumes an index on C1, which would mean that only the rows with C1=1 need to be read and locked. All other rows wouldn't be touched.
If there were no index, SQL Server would indeed read all rows of the table, U-lock them while doing so, and then X-lock those which satisfy C1=1. The author does indeed mention that only rows with C1=1 are X-locked.
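Put together, the scenario from the article looks roughly like this (the c2 column is an assumption; only t_lock1, t_lock2, c1=5 and c1=1 come from the quoted text):

-- Session 1:
BEGIN TRAN;
UPDATE t_lock1 SET c2 = c2 + 1 WHERE c1 = 5;  -- X locks on rows with c1 = 5
-- Session 2:
BEGIN TRAN;
UPDATE t_lock2 SET c2 = c2 + 1 WHERE c1 = 1;  -- X locks on rows with c1 = 1
-- Session 1 (blocks, waiting on session 2):
UPDATE t_lock2 SET c2 = c2 + 1 WHERE c1 = 1;
-- Session 2 (blocks, waiting on session 1 -> deadlock; one session is chosen as the victim):
UPDATE t_lock1 SET c2 = c2 + 1 WHERE c1 = 5;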
