Deadlock on reorganize index - SQL Server

I have the following code:
BEGIN TRANSACTION
-- begin transaction to prevent a parallel process from truncating Table_Staging
INSERT INTO [dbo].[Table_Staging] WITH (TABLOCK) (COLUMN1)
SELECT COLUMN1
FROM [Table2]
WHERE [RegistrationDate] BETWEEN '20221101' AND '20221112'
BEGIN TRANSACTION
-- begin transaction to prevent other queries from reading Table
TRUNCATE TABLE [DB].[dbo].[Table] WITH (PARTITIONS (156));
ALTER TABLE [DB].[dbo].[Table_Staging]
SWITCH PARTITION 156 TO [DB].[dbo].[Table] PARTITION 156;
COMMIT TRANSACTION
ALTER INDEX [ics_SplitEventAggregated]
ON [DB].[dbo].[Table] REORGANIZE PARTITION = 156;
COMMIT TRANSACTION
I get this error:
Msg 1205, Level 13, State 18, Procedure sys.sp_cci_tuple_mover, Line 7 [Batch Start Line 0]
Transaction (Process ID 99) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Msg 35373, Level 16, State 1, Line 80
ALTER INDEX REORGANIZE statement failed on a clustered columnstore index with error 1205. See previous error messages for more details.
The main question is, can I somehow run index reorganization in this logic?
Or do I have to finish all the transactions and only then do the reorganization?
Second question: why does this query get a deadlock if the previous transaction has already been committed?
Table and Table_Staging both have a columnstore index.
Microsoft SQL Server 2019 (RTM-CU16) (KB5011644) - 15.0.4223.1 (X64)
Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit)
on Windows Server 2022 Standard 10.0 <X64> (Build 20348: )
Prod is on Enterprise and Standard editions.
Update #1
I removed TABLOCK from the query and the DEADLOCK problem disappeared. Do I understand correctly that the TABLOCK hint applies to both tables (although I only specify it in INSERT, not SELECT)?
Why does TRUNCATE TABLE work correctly then, since it also requires an exclusive lock? Or is it just a coincidence?

can I somehow run index reorganization in this logic?
Yes, but commit your whole transaction before the reorganize. Otherwise you're still holding an exclusive lock on the table/partition when the reorganize starts. That makes it more likely that some other session is blocked by your session, and that you, in turn, become blocked by a lock held by that session.
And if the reorganize is a deadlock victim, retry or just move on.
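A minimal sketch of that pattern, reusing the object names from the question; the retry count and back-off delay are arbitrary choices, not anything prescribed:
-- ... INSERT / TRUNCATE / SWITCH work as above, then:
COMMIT TRANSACTION;  -- finish all the partition work before touching the index
DECLARE @attempt int = 1;
WHILE @attempt <= 3
BEGIN
    BEGIN TRY
        ALTER INDEX [ics_SplitEventAggregated]
        ON [DB].[dbo].[Table] REORGANIZE PARTITION = 156;
        BREAK;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 1205
            THROW;                 -- re-raise anything that isn't a deadlock
        SET @attempt += 1;         -- deadlock victim: retry...
        WAITFOR DELAY '00:00:05';  -- ...after a short back-off
    END CATCH
END
-- after three deadlocks, give up and move on (as suggested above)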

Related

Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. in sql server 2014

I have an update stored procedure that I call from C# code, and my code runs in 3 threads at the same time. The UPDATE statement regularly throws the error "Transaction (Process ID) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction". How can I solve this in SQL Server 2014 or in the C# code?
Update stored procedure:
ALTER PROCEDURE sp_UpdateSP
@RecordID nvarchar(50),
@FileNetID nvarchar(50),
@ClassName nvarchar(150)
AS
Begin tran t1
UPDATE MYTABLE SET FilenetID=@FileNetID, DOCUMENT_TYPE=@ClassName, CONTROLID='FileAttach' where OTRECORDID=@RecordID
Commit tran t1
Table Index:
Non-Unique, Non-Clustered OTRECORDID Ascending nvarchar(255)
Thanks
I suspect the problem is caused by SQL Server performing a scan over the table because it thinks that is quicker than doing a seek on the index followed by a key lookup to find the row to update.
You can prevent these scans and force SQL to perform a seek by using the FORCESEEK hint.
Your code would become
Begin tran t1
UPDATE mt SET FilenetID=@FileNetID, DOCUMENT_TYPE=@ClassName, CONTROLID='FileAttach' FROM MYTABLE mt WITH(FORCESEEK) where OTRECORDID=@RecordID
Commit tran t1
This will be slower than the scan but will reduce the probability of deadlocks.
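One way to check whether SQL Server really chooses a scan here is to look at the estimated plan without executing the statement; a sketch, where the literal record ID is just a placeholder:
SET SHOWPLAN_TEXT ON;
GO
-- shows the chosen operator (Index Seek vs. Scan) without running the update
UPDATE MYTABLE SET CONTROLID='FileAttach' where OTRECORDID=N'some-record-id';
GO
SET SHOWPLAN_TEXT OFF;
GO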
It might not be the exact answer, but if you want a workaround to get past the problem, open Services:
Win + R > type services.msc
Find the SQL Server service (usually named SQL Server (MSSQLSERVER) if you have only one instance) and restart it; the deadlocked transaction is then gone and you can keep working.

SQL Server lock escalation

My application runs a nightly purge process to delete old records from the primary tables in my OLTP application. I was experiencing lock escalation during the purge process which was blocking concurrent inserts into the table, so I modified the purge procedure to loop through and delete records in blocks of 4900 which should be well below SQL Server's lock escalation threshold of 5000. While lock escalation was much reduced, SQL Server Profiler still reports occasional lock escalation on the following DELETE statement in the loop:
-- outer loop increments @BatchMinId and @BatchMaxId variables
BEGIN TRAN
-- limit is set at 4900
DELETE TOP (@limit) h
OUTPUT DELETED.ChildTable1Id,
DELETED.ChildTable2Id,
DELETED.ChildTable3Id,
DELETED.ChildTable4Id
INTO #ChildRecordsToDelete
FROM MainTable h WITH (ROWLOCK)
WHERE h.Id >= @BatchMinId AND h.Id <= @BatchMaxId AND h.Id < @MaxId AND
NOT EXISTS (SELECT 1 FROM OtherTable ot WHERE ot.Id = h.Id);
-- delete from ChildTables 1-4 (no additional references to MainTable)
COMMIT TRAN;
-- end loop
The "IntegerData2" column in SQL Server Profiler for the reported lock escalation events (which is supposed to be the escalated lock count) ranges from 10197 to 10222 which does not look close to any multiple of 4900 (my purge batch size) plus any multiple of 1250 (number of additional locks SQL Server may take before attempting escalation).
Given that I am explicitly limiting the DELETE statement to 4900 rows, how are more locks ever being taken, especially to the point that SQL Server is escalating to a table lock? I would like to understand this before I resort to disabling lock escalation altogether on this table.
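For reference, the last resort I mean is the table-level option introduced in SQL Server 2008; a one-liner against the table from my purge:
-- stop escalation to a table lock entirely for this table
ALTER TABLE MainTable SET (LOCK_ESCALATION = DISABLE);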
I can't comment on your question since I don't have enough reputation on this web site, so I'm commenting here.
I had a similar issue with a cleanup task running at night. The delete statement was locked by the "GHOST CLEANUP".
Have a look at this:
SQL Server Lock Timeout Exceeded Deleting Records in a Loop
Hope this helps.
One weird solution that I found at the time was:
1) Insert the records to keep into another table with the same structure (a copy).
2) Truncate the table to clean.
3) Insert the data to keep back from the copy into the now-empty table.
4) Truncate the copy table to release the space.
This trick was faster than the delete itself, because the deletion took a split second thanks to the truncate; somehow the cost of the inserts was lower than that of the deletes.
But still, I would recommend avoiding this solution (a sketch of it follows below). You could also reduce the chunk size to between 100 and 500. That increases the time the cleanup takes, but you are less likely to get lock escalation.
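A minimal sketch of that copy/truncate workaround; the table names, columns, and keep predicate are all made up for illustration:
BEGIN TRAN;
-- 1) copy the rows you want to KEEP (assumes no identity column to preserve)
INSERT INTO TableToClean_Copy (Id, Payload)
SELECT Id, Payload
FROM TableToClean
WHERE CreatedAt >= DATEADD(DAY, -30, SYSDATETIME());  -- hypothetical keep predicate
-- 2) truncate the original: done in a split second
TRUNCATE TABLE TableToClean;
-- 3) insert the kept rows back into the now-empty table
INSERT INTO TableToClean (Id, Payload)
SELECT Id, Payload FROM TableToClean_Copy;
-- 4) truncate the copy to release the space
TRUNCATE TABLE TableToClean_Copy;
COMMIT;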

ms sql commit failed, rollback not complete

I have a parent and a child table in MS SQL 2008 and am trying to perform a save call using Hibernate with cascade enabled.
Table specs that matter: the parent table has an identity column and the child table has a varchar column of length 50.
Environment: Spring, JBoss 7.1.1, Hibernate 3, JTA transactions (the transaction is started/committed/rolled back from the code by obtaining the transaction from JNDI).
The issue arises when the data I insert into the child table's varchar column is longer than 50 characters.
The INSERT on the child table is fired only when I execute the commit, and it fails because of the column length. I catch the resulting exception and perform a rollback, but the rollback fails because the transaction is in an inactive state.
Issue: the problem here is that the parent data gets committed, which is not what I want. How do I make sure that the transaction is rolled back completely?

What are implications of SET-ting ALLOW_SNAPSHOT_ISOLATION ON?

Should I run
ALTER DATABASE DbName SET ALLOW_SNAPSHOT_ISOLATION OFF
if snapshot transaction (TX) isolation (iso) is not currently being used?
In other words,
why should it be enabled in the first place?
Why isn't it enabled by default?
What is the cost of having it enabled (but temporarily not used) in SQL Server?
Update:
Enabling the snapshot TX iso level on a database does not change the default, which remains READ COMMITTED.
You may check it by running:
use someDbName;
--( 1 )
alter database someDbName set allow_snapshot_isolation ON;
dbcc useroptions;
The last row shows that the TX iso level of the current session is still read committed. So enabling the snapshot iso level without switching to it does not actually use it.
In order to use it one should issue
--( 2 )
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
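Putting ( 1 ) and ( 2 ) together, a session only reads under snapshot semantics once both have been done; a sketch, where someDbName and dbo.SomeTable are placeholders:
alter database someDbName set allow_snapshot_isolation ON;  -- ( 1 ) enable at the database level
go
use someDbName;
set transaction isolation level snapshot;  -- ( 2 ) opt in per session
begin transaction;
select * from dbo.SomeTable;  -- reads a consistent snapshot, takes no shared locks
commit;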
Update 2:
I repeated the scripts from [1] with SNAPSHOT enabled (but not switched on) and without enabling READ_COMMITTED_SNAPSHOT:
--with enabling allow_snapshot_isolation
alter database snapshottest set allow_snapshot_isolation ON
-- but without enabling read_committed_snapshot
--alter database snapshottest set read_committed_snapshot ON
-- OR with OFF
alter database snapshottest set read_committed_snapshot OFF
go
There are no results/rows from executing
select * from sys.dm_tran_version_store
after executing an INSERT, DELETE, or UPDATE.
Can you provide me with scripts illustrating that the SNAPSHOT TX iso level, enabled by ( 1 ) but not switched on by ( 2 ), produces any versions in tempdb and/or increases the size of the data by 14 bytes per row?
Really, I do not understand what the point of versioning is if it is enabled by ( 1 ) but not used (not switched on by ( 2 )).
[1] Managing TempDB in SQL Server: TempDB Basics (Version Store: Simple Example)
As soon as row versioning (a.k.a. snapshot) is enabled in the database, all writes have to be versioned. It doesn't matter under what isolation level the write occurs, since isolation levels always affect only reads. Once row versioning is enabled for the database, any insert/update/delete will:
increase the size of the data by 14 bytes per row
possibly create an image of the data before the update in the version store (tempdb)
Again, it is completely irrelevant what isolation level is used. Note that row versioning occurs also if any of the following is true:
table has a trigger
MARS is enabled on the connection
Online index operation is running on the table
All this is explained in Row Versioning Resource Usage:
Each database row may use up to 14 bytes at the end of the row for row versioning information. The row versioning information contains the transaction sequence number of the transaction that committed the version and the pointer to the versioned row. These 14 bytes are added the first time the row is modified, or when a new row is inserted, under any of these conditions:
READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION options are ON.
The table has a trigger.
Multiple Active Results Sets (MARS) is being used.
Online index build operations are currently running on the table.
...
Row versions must be stored for as long as an active transaction needs to access it. ... if it meets any of the following conditions:
It uses row versioning-based isolation.
It uses triggers, MARS, or online index build operations.
It generates row versions.
Update
:setvar dbname testsnapshot
use master;
if db_id('$(dbname)') is not null
begin
alter database [$(dbname)] set single_user with rollback immediate;
drop database [$(dbname)];
end
go
create database [$(dbname)];
go
use [$(dbname)];
go
-- create a table before row versioning is enabled
--
create table t1 (i int not null);
go
insert into t1(i) values (1);
go
-- this check will show that the records do not contain a version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 11 (lacks version info that is at least 14 bytes)
-- enable row versioning and create an identical table
--
alter database [$(dbname)] set allow_snapshot_isolation on;
go
create table t2 (i int not null);
go
set transaction isolation level read committed;
go
insert into t2(i) values (1);
go
-- This check shows that the rows in t2 have version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t2'), NULL, NULL, 'DETAILED')
-- record size: 25 (11+14)
-- this update will show that the version store has records
-- even though the isolation level is read committed
--
begin transaction;
update t1
set i += 1;
select * from sys.dm_tran_version_store;
commit;
go
-- And if we check again the row size of t1, its rows now have a version number
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 25
By default, snapshot isolation is OFF. If you turn it ON, SQL Server will maintain snapshots of data for running transactions.
Example: on connection 1 you are running a big select. On connection 2 you update some of the records that are going to be returned by the first select.
With snapshot isolation ON, SQL Server makes a temporary copy of the data affected by the update, so the SELECT returns the original data.
This additional data handling costs performance; that's why the setting is OFF by default.
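A sketch of that scenario as two SSMS sessions; the table, column, and values are made up:
-- Session 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Balance FROM dbo.Accounts WHERE Id = 1;  -- returns, say, 100
-- Session 2 (runs and commits while session 1 is still open)
UPDATE dbo.Accounts SET Balance = 200 WHERE Id = 1;
-- Session 1, same open transaction
SELECT Balance FROM dbo.Accounts WHERE Id = 1;  -- still 100: the snapshot taken at
COMMIT;                                         -- the start of the transaction is read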

Understanding locking behavior in SQL Server

I tried to reproduce the situation of question [1].
On a table created and filled with data from the wiki article "Isolation (database systems)" [2],
in SQL Server 2008 R2 SSMS, I executed:
1) first, in the first tab (window) of SSMS
-- transaction isolation level in first window does not influence results (?)
-- initially I thought that second transaction in 2) runs at the level set in first window
begin transaction
INSERT INTO users VALUES ( 3, 'Bob', 27 )
waitfor delay '00:00:22'
rollback
2) immediately after, in the second window
-- this is what I commented/uncommented
-- set transaction isolation level SERIALIZABLE
-- set transaction isolation level REPEATABLE READ
-- set transaction isolation level READ COMMITTED
-- set transaction isolation level READ UNCOMMITTED
SELECT * FROM users --WITH(NOLOCK)
Update:
Sorry, the results have been corrected.
My results, depending on the isolation level set in 2), are that the SELECT returns:
immediately (reading the uncommitted inserted row)
for all cases of SELECT with NOLOCK
for READ UNCOMMITTED (SELECT either with or without NOLOCK)
or waits for the completion of transaction 1) (only if the SELECT is without NOLOCK)
at READ COMMITTED and higher (REPEATABLE READ, SERIALIZABLE) transaction isolation levels.
These results contradict the situation described in question [1] (and explained in its answers?), for example that a SELECT with NOLOCK waits for the completion of 1).
How can my results and [1] be explained?
Update 2:
This question is really a subquestion of my question [3] (or the result of it not being answered).
Cited:
[1] Explain locking behavior in SQL Server
[2] "Isolation (database systems)": http://en.wikipedia.org/wiki/Isolation_(database_systems)
[3] Is NOLOCK the default for SELECT statements in SQL Server 2005?
There is a useful MSDN link here that talks about locking hints in SQL Server 2008. Maybe in your example it's a case of SQL Server 2008 disfavoring your table locks?
(The following snippet from that link talks about locks potentially being ignored by SQL Server 2008.)
As shown in the following example, if the transaction isolation level is set to SERIALIZABLE, and the table-level locking hint NOLOCK is used with the SELECT statement, key-range locks typically used to maintain serializable transactions are not taken.
USE AdventureWorks2008R2;
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
GO
BEGIN TRANSACTION;
GO
SELECT Title
FROM HumanResources.Employee WITH (NOLOCK);
GO
-- Get information about the locks held by
-- the transaction.
SELECT
resource_type,
resource_subtype,
request_mode
FROM sys.dm_tran_locks
WHERE request_session_id = @@spid;
-- End the transaction.
ROLLBACK;
GO
The only lock taken that references HumanResources.Employee is a schema stability (Sch-S) lock. In this case, serializability is no longer guaranteed.
In SQL Server 2008, the LOCK_ESCALATION option of ALTER TABLE can disfavor table locks, and enable HoBT locks on partitioned tables. This option is not a locking hint, but can be used to reduce lock escalation. For more information, see ALTER TABLE (Transact-SQL).
The hint in the second query overrides transaction isolation level.
SELECT ... WITH (NOLOCK) is basically identical to SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SELECT ....
With any other isolation level the locks are honored, so the second transaction waits until the locks are released by the first one.
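That behavior can be reproduced with the users table from the question above:
-- Window 1: hold an exclusive lock on the new row for 22 seconds
begin transaction
INSERT INTO users VALUES ( 3, 'Bob', 27 )
waitfor delay '00:00:22'
rollback
-- Window 2, while window 1 is still waiting:
SELECT * FROM users WITH (NOLOCK)  -- returns immediately, sees the uncommitted row
SELECT * FROM users                -- under READ COMMITTED: blocks until the rollback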
