Are there two exclusive locks on the same table? - sql-server

I am inserting rows into a single table from two instances, and both inserts complete successfully.
When a transaction updates a table, it acquires an Exclusive (X) lock on that table (resource), and there should only be a single Exclusive lock on a table while data is being inserted.
                 Granted
Requested        Exclusive (X)   Shared (S)
Exclusive (X)    No              No
Shared (S)       No              Yes
Creating a sample table:
create table TestTransaction
(
Colid int Primary Key,
name varchar(10)
)
Inserting Instance 1:
Declare @counter int = 1
Declare @countName varchar(10) = 'te'
Declare @max int = 1000000
while @counter < @max
Begin
    insert into TestTransaction
    values
    (
        @counter,
        @countName + Cast(@counter as varchar(7))
    )
    Set @counter = @counter + 1
End
Inserting Instance 2:
insert into TestTransaction
values
(2000001,'yesOUTofT')
Why is it successful?
At the same time, retrieval (SELECT) from this table is blocked because of the lock on the table.

When a transaction updates a table, it acquires an Exclusive (X) lock on that table (resource), and there should only be a single Exclusive lock on a table while data is being inserted.
That's a common myth. Locks in SQL Server are usually per-row. Various things cause them to escalate to page, partition or table level. SQL Server is designed, though, to try to lock at the smallest level first in order to allow for more concurrency.
Do not rely on any particular locking behavior in your apps if you can. Rather, make use of the isolation level setting if possible in order to obtain the required consistency guarantees you need.
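A quick way to observe this per-row behavior is a two-session experiment (a sketch, assuming the TestTransaction table from the question; the inserted values are illustrative):

```sql
-- Session 1: insert one row and keep the transaction open.
BEGIN TRANSACTION;
INSERT INTO TestTransaction (Colid, name) VALUES (42, 'te42');

-- Session 2: inspect the locks. Expect an IX (intent exclusive) lock at the
-- OBJECT and PAGE levels and an X lock on a single KEY (row) --
-- not an X lock on the whole table.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND resource_type IN ('OBJECT', 'PAGE', 'KEY');

-- Session 1: release the locks.
ROLLBACK;
```

This is why the second insert in the question succeeds: each session exclusively locks only its own row, plus compatible intent locks higher up the hierarchy.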

Related

Do SQL Server partition switch on partition A while a long running query is consuming partition B

I'm currently looking for a method to improve the overall ability to ingest and consume data at the same time in an analytical environment (data warehouse). You may have already faced a similar situation, and I'm interested in the various ways of improving it. Thanks in advance for reading about the situation and for your potential help on this matter!
The data ingestion process currently relies on a partition switching mechanism on top of Azure SQL Server. Multiple jobs run per day. Data is loaded into a staging table; once the data is ready, a partition switch operation takes out the previous data on that partition and replaces it with the new data set. The target table is configured with a clustered columnstore index. In addition, the table is configured with the lock escalation mode set to auto.
Once the data is ingested, a monitoring system automatically pushes it into some Power BI datasets in import mode by triggering a refresh at partition level in PBI. The datasets read data from SQL Server in a very strict way, partition by partition, and never read data that is currently being ingested into the data warehouse. This system guarantees that the various PBI datasets always have the latest data, with the shortest possible end-to-end delay.
When the ALTER TABLE ... SWITCH PARTITION statement is fired, it may be blocked by other running statements that are consuming data located in the same table, but in another partition. This is due to a Sch-S lock placed on the table by the SELECT query.
Having set the lock escalation mode to auto, I was expecting the Sch-S lock to be placed only at partition level, but this does not seem to be the case.
This situation is particularly annoying, as the queries run by Power BI are quite long, since a lot of records need to be moved into the dataset. As Power BI runs multiple queries in parallel, the locks can remain for a very long time in total, preventing ingestion of new data from completing.
A few additional notes about the current setup:
I'm already taking advantage of the WAIT_AT_LOW_PRIORITY feature, to ensure the partition switch process does not block other queries from running in the meantime.
The partition switch approach was chosen because we have snapshots of data, entirely replacing what was previously ingested for the same partition.
Loading data into a staging table makes computing the columnstore index faster than inserting into the final table directly.
Below is a test script showing the situation.
Hope this is all clear. Many thanks for your help!
Initialization
-- Clean any previous stuff.
drop table if exists dbo.test;
drop table if exists dbo.staging_test;
drop table if exists dbo.staging_out_test;
if exists(select 1 from sys.partition_schemes where name = 'ps_test') drop partition scheme ps_test;
if exists(select 1 from sys.partition_functions where name = 'pf_test') drop partition function pf_test;
go
-- Create partition function and scheme.
create partition function pf_test (int) as range right for values(1, 2, 3);
go
create partition scheme ps_test as partition pf_test all to ([primary]);
go
alter partition scheme ps_test next used [primary];
go
-- Data table.
create table dbo.test (
id int not null,
name varchar(100) not null
) on ps_test(id);
go
-- Staging table for data ingestion.
create table dbo.staging_test (
id int not null,
name varchar(100) not null
) on ps_test(id);
go
-- Staging table for taking out previous data.
create table dbo.staging_out_test (
id int not null,
name varchar(100) not null
) on ps_test(id);
go
-- Set clustered columnstore index on all tables.
create clustered columnstore index ix_test on dbo.test on ps_test(id);
create clustered columnstore index ix_staging_test on dbo.staging_test on ps_test(id);
create clustered columnstore index ix_staging_out_test on dbo.staging_out_test on ps_test(id);
go
-- Lock escalation mode is set to auto to allow locks at partition level.
alter table dbo.test set (lock_escalation = auto);
alter table dbo.staging_test set (lock_escalation = auto);
alter table dbo.staging_out_test set (lock_escalation = auto);
go
-- Insert few data...
insert into dbo.test (id, name) values(1, 'initial data partition 1'), (2, 'initial data partition 2');
insert into dbo.staging_test (id, name) values(1, 'new data partition 1'), (2, 'new data partition 2');
go
-- Display current data.
select * from dbo.test;
select * from dbo.staging_test;
select * from dbo.staging_out_test;
go
Long running query example (adjust variable @c to generate more or fewer records):
-- Generate a long running query hitting only one specific partition on the test table.
declare @i bigint = 1;
declare @c bigint = 100000;
with x as (
    select @i n
    union all
    select n + 1
    from x
    where n < @c
)
select
    d.name
from
    x,
    (
        select name
        from dbo.test d
        where d.id = 2
    ) d
option (maxrecursion 0)
Partition swap example (to be run while the "long running query" is running, showing the lock behavior):
select * from dbo.test;
-- Switch old data out.
alter table dbo.test switch partition $PARTITION.pf_test(1) to dbo.staging_out_test partition $PARTITION.pf_test(1) with(wait_at_low_priority (max_duration = 1 minutes, abort_after_wait = self));
-- Switch new data in.
alter table dbo.staging_test switch partition $PARTITION.pf_test(1) to dbo.test partition $PARTITION.pf_test(1) with(wait_at_low_priority (max_duration = 1 minutes, abort_after_wait = self));
go
select * from dbo.test;
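For what it's worth, one way to watch the blocking described above is to query sys.dm_tran_locks while the switch is waiting (a sketch against the test objects above; note that Sch-S and Sch-M locks are taken at the object level, which would explain why lock_escalation = auto does not help here):

```sql
-- Run while the long running query holds its Sch-S lock and the
-- ALTER TABLE ... SWITCH is waiting on its Sch-M lock. Expect a
-- GRANTed Sch-S and a WAITing Sch-M on the same OBJECT resource.
SELECT l.request_session_id, l.resource_type, l.request_mode, l.request_status
FROM sys.dm_tran_locks AS l
WHERE l.resource_database_id = DB_ID()
  AND l.resource_associated_entity_id = OBJECT_ID(N'dbo.test');
```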

Lock database table for just a couple of statements

Suppose a table in SQL Server with this structure:
TABLE t (Id INT PRIMARY KEY)
Then I have a stored procedure, which is constantly being called, that works inserting data in this table among other kind of things:
BEGIN TRAN
DECLARE @Id INT = (SELECT MAX(Id) + 1 FROM t)
INSERT t VALUES (@Id)
...
-- Stuff that takes a long time to complete
...
COMMIT
The problem with this approach is that sometimes I get a primary key violation, because two or more procedure calls get and try to insert the same Id into the table.
I have been able to solve this problem by adding a TABLOCK hint to the SELECT statement:
DECLARE @Id INT = (SELECT MAX(Id) + 1 FROM t WITH (TABLOCK))
The problem now is that successive calls to the procedure must wait for the currently executing transaction to complete before starting their work, allowing just one procedure to run at a time.
Is there any advice or trick to hold the lock just during the execution of the SELECT and INSERT statements?
Thanks.
TABLOCK is a terrible idea, since you're serialising all the calls (no concurrency).
Note that within an SP you will retain all the locks granted during the run until the transaction completes.
So you want to minimise locks except for where you really need them.
Unless you have a special case, use an internally generated id:
CREATE TABLE t (Id INT IDENTITY PRIMARY KEY)
Improved performance, concurrency, etc., since you are not dependent on querying the table to manage the id.
If you have existing data, you can (re)set the start value using DBCC:
DBCC CHECKIDENT ('t', RESEED, 100)
If you need to inject rows with a value preassigned, use:
SET IDENTITY_INSERT t ON
(and OFF again afterwards, resetting the seed as required).
[Consider whether you want this value to be the primary key, or simply unique.
In many cases where you need to reference a table's PK as an FK you'll want it as the PK for simplicity of joins, but having a business-readable value (e.g. Accounting Code, or OrderNo+OrderLine) is completely valid: that's just modelling.]
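Putting the pieces together, the procedure body might be reduced to something like this (a sketch; SCOPE_IDENTITY() is only needed if later statements use the new id):

```sql
CREATE TABLE t (Id INT IDENTITY PRIMARY KEY);
GO
BEGIN TRAN
    INSERT t DEFAULT VALUES;             -- the engine assigns Id: no MAX(Id)+1 race
    DECLARE @Id INT = SCOPE_IDENTITY();  -- capture the generated id if needed
    -- ... stuff that takes a long time to complete, using @Id ...
COMMIT
```

Because the id is generated atomically by the engine, concurrent calls no longer contend on a table lock just to compute the next value.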

UPDLOCK and HOLDLOCK query not creating the expected lock

I have the below table:
CREATE TABLE [dbo].[table1](
[id] [int] IDENTITY(1,1) NOT NULL,
[name] [nvarchar](50) NULL,
CONSTRAINT [PK_table1] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
I'm learning how SQL locks work, and I'm trying to test a situation where I want to lock a row from being read and updated. Some of the inspiration for this quest came from this article, and here's the original problem I was trying to solve.
When I run this T-SQL:
BEGIN TRANSACTION
SELECT * FROM dbo.table1 WITH (UPDLOCK, HOLDLOCK)
WAITFOR DELAY '00:00:15'
COMMIT TRANSACTION
I would expect an exclusive lock to be placed on the table, and specifically on the row (if I had a WHERE clause on the primary key)
But running this query, I can see that the GRANTed LOCK is for the request mode IX.
SELECT * FROM sys.dm_tran_locks WHERE resource_database_id = DB_ID() AND resource_associated_entity_id = OBJECT_ID(N'dbo.table1');
Also, in separate SSMS windows, I can fully query the table while the transaction is running.
Why is MSSQL not respecting the lock hints?
(SQL Server 2016)
Edit 1
Any information about how these locks work is appreciated; however, the issue at hand is that SQL Server does not seem to be enforcing the locks I'm specifying. My hunch is that this has to do with row versioning, or something related.
Edit 2
I created this GitHub gist. It requires .NET and the external library Dapper to run (available via NuGet package).
Here's the interesting thing I noticed:
SELECT statements can be run against table1 even though a previous query with UPDLOCK, HOLDLOCK has been requested.
INSERT statements cannot be run while the lock is there.
UPDATE statements against existing records cannot be run while the lock is there.
UPDATE statements against non-existing records can be run.
Here's the Console output of that Gist:
Run locking SELECT Start - 00:00:00.0165118
Run NON-locking SELECT Start - 00:00:02.0155787
Run NON-locking SELECT Finished - 00:00:02.0222536
Run INSERT Start - 00:00:04.0156334
Run UPDATE ALL Start - 00:00:06.0259382
Run UPDATE EXISTING Start - 00:00:08.0216868
Run UPDATE NON-EXISTING Start - 00:00:10.0236223
Run UPDATE NON-EXISTING Finished - 00:00:10.0268826
Run locking SELECT Finished - 00:00:31.3204120
Run INSERT Finished - 00:00:31.3209670
Run UPDATE ALL Finished - 00:00:31.3213625
Run UPDATE EXISTING Finished - 00:00:31.3219371
and I'm trying to test a situation where I want to lock a row from
being read and updated
If you want to lock a row from being read and updated you need an exclusive lock, but the UPDLOCK hint requests update locks, not exclusive locks. The query should be:
SELECT * FROM table1 WITH (XLOCK, HOLDLOCK, ROWLOCK)
WHERE Id = <some id>
Additionally, under the READ COMMITTED SNAPSHOT and SNAPSHOT isolation levels, SELECT statements don't request shared locks, just schema stability locks. Therefore, a SELECT statement can read the row even though there is an exclusive lock on it. And surprisingly, under the READ COMMITTED isolation level, SELECT statements might not request row-level shared locks. You will need to add a query hint to the SELECT statement to prevent it from reading the locked row:
SELECT * FROM dbo.Table1 WITH (REPEATABLEREAD)
WHERE id = <some id>
With the REPEATABLEREAD lock hint, the SELECT statement will request shared locks and hold them for the duration of the transaction, so it won't read exclusively locked rows. Note that using READCOMMITTEDLOCK is not enough, since SQL Server might not request shared locks under some circumstances, as described in this blog post.
Please take a look at the Lock Compatibility Table.
Under the default READ COMMITTED isolation level, and with no lock hints, SELECT statements request a shared lock for each row they read, and those locks are released immediately after the row is read. However, if you use WITH (HOLDLOCK), the shared locks are held until the transaction ends. Taking the lock compatibility table into account, a SELECT statement running under READ COMMITTED can read any row that is not locked exclusively (IX, SIX, X locks). Exclusive locks are requested by INSERT, UPDATE and DELETE statements, or by SELECT statements with the XLOCK hint.
I would expect an exclusive lock to be placed on the table, and
specifically on the row (if I had a WHERE clause on the primary
key)
I need to understand WHY SQL Server is not respecting the locking
directives given to it. (i.e. Why is an exclusive lock not on the
table, or row for that matter?)
The UPDLOCK hint doesn't request exclusive locks; it requests update locks. Additionally, the lock can be granted on resources other than the row itself: it can be granted on the table, data pages, index pages, and index keys. The complete list of resource types SQL Server can lock is: DATABASE, FILE, OBJECT, PAGE, KEY, EXTENT, RID, APPLICATION, METADATA, HOBT, and ALLOCATION_UNIT. When the ROWLOCK hint is specified, SQL Server will lock rows, not pages or extents or tables, and the actual resources SQL Server will lock are RIDs and KEYs.
@Remus Rusanu has explained it a lot better than I ever could here.
In essence, you can always read UNLESS you ask for the same lock type (or a more restrictive one). However, if you want to UPDATE or DELETE then you will be blocked. But as I said, the link above explains it really well.
Your answer is right in the documentation:
https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
Lock hints ROWLOCK, UPDLOCK, and XLOCK that acquire row-level locks may place locks on index keys rather than the actual data rows. For example, if a table has a nonclustered index, and a SELECT statement using a lock hint is handled by a covering index, a lock is acquired on the index key in the covering index rather than on the data row in the base table.
This is why you are seeing an intent exclusive (IX) lock at the object level rather than a row lock on the base table.
And this explains why you can read while running the first query:
http://aboutsqlserver.com/2011/04/14/locking-in-microsoft-sql-server-part-1-lock-types/
Update locks (U). These locks are a mix between shared and exclusive locks. SQL Server uses them with data modification statements while searching for the rows that need to be modified. For example, if you issue a statement like "UPDATE MyTable SET Column1 = 0 WHERE Column1 IS NULL", SQL Server acquires an update lock for every row it processes while searching for Column1 IS NULL. When an eligible row is found, SQL Server converts the (U) lock to (X).
Your UPDLOCK hint requests an update lock. Notice that update locks behave like shared locks while searching, and are only converted to exclusive locks when performing the actual update. Since your query is a SELECT with an update lock hint, the lock stays an update (U) lock, which is compatible with shared locks. This allows other queries to still read the rows.
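A minimal two-session sketch (using table1 from the question) shows the difference between the update lock the question takes and the exclusive lock the answers suggest:

```sql
-- Session 1: take and hold an exclusive lock on one row.
BEGIN TRANSACTION;
SELECT * FROM dbo.table1 WITH (XLOCK, HOLDLOCK, ROWLOCK) WHERE id = 1;

-- Session 2: a plain SELECT may still succeed (no row-level shared locks
-- under snapshot-based isolation), but a REPEATABLEREAD-hinted read blocks
-- until session 1 commits.
SELECT * FROM dbo.table1 WITH (REPEATABLEREAD) WHERE id = 1;  -- blocks

-- Session 1: release the lock.
COMMIT;
```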

SQL Server - Force shared lock on partition

I am using partition switching to rebuild indexes on a staging table without dropping them on the partitioned table, as in Microsoft's article.
I have what boils down to
BEGIN TRAN
ALTER INDEX IX_Working ON dbo.WorkingTable DISABLE
INSERT INTO dbo.WorkingTable ( Id, PartitionColumn, Values...)
SELECT Id, PartitionColumn, Values...
FROM PartitionedTable WITH (HOLDLOCK)
WHERE PartitionColumn <= @rightboundary
AND PartitionColumn > @leftboundary
INSERT INTO WorkingTable ( Id, PartitionColumn, Values...)
SELECT Id, PartitionColumn, Values...
FROM Imports
ALTER INDEX IX_Working ON WorkingTable REBUILD -- SLOW BIT
ALTER TABLE PartitionedTable SWITCH PARTITION @partition TO SwapTable
ALTER TABLE WorkingTable SWITCH TO PartitionedTable PARTITION @partition
TRUNCATE TABLE SwapTable
COMMIT
Now during this operation I need to block any updates to the partition being reindexed, but still allow them on other partitions. The PartitionedTable has lock escalation set to auto. I am trying to do this with HOLDLOCK, but I'm still able to do INSERT INTO PartitionedTable (Id, @somevalueInTheRange, Values...) from another connection during the slow bit.
How can I block this while still allowing selects?
Can you try using TABLOCKX along with HOLDLOCK? Currently the SELECT places a SHARED lock, which is not released because of HOLDLOCK, but a SHARED lock won't prevent inserts.
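Applied to the script in the question, the suggestion would look something like this (a sketch; note that TABLOCKX blocks readers of every partition too, which may be stricter than wanted):

```sql
INSERT INTO dbo.WorkingTable (Id, PartitionColumn, Values...)
SELECT Id, PartitionColumn, Values...
FROM PartitionedTable WITH (TABLOCKX, HOLDLOCK)  -- exclusive table lock, held
WHERE PartitionColumn <= @rightboundary          -- until the transaction commits
  AND PartitionColumn > @leftboundary
```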

SQL deadlock on delete then bulk insert

I have an issue with a deadlock in SQL Server that I haven't been able to resolve.
Basically, I have a large number of concurrent connections (from many machines) executing transactions where they first delete a range of entries and then re-insert entries within the same range with a bulk insert.
Essentially, the transaction looks like this
BEGIN TRANSACTION T1
DELETE FROM [TableName] WITH (XLOCK, HOLDLOCK) WHERE [Id] = @Id AND [SubId] = @SubId
INSERT BULK [TableName] (
[Id] Int
, [SubId] Int
, [Text] VarChar(max) COLLATE SQL_Latin1_General_CP1_CI_AS
) WITH(CHECK_CONSTRAINTS, FIRE_TRIGGERS)
COMMIT TRANSACTION T1
The bulk insert only inserts items matching the Id and SubId of the deletion in the same transaction. Furthermore, these Id and SubId entries should never overlap.
When I have enough concurrent transactions of this form, I start to see a significant number of deadlocks between these statements.
I added the locking hints XLOCK HOLDLOCK to attempt to deal with the issue, but they don't seem to be helping.
The canonical deadlock graph for this error shows:
Connection 1:
Holds RangeX-X on PK_TableName
Holds IX Page lock on the table
Requesting X Page lock on the table
Connection 2:
Holds IX Page lock on the table
Requests RangeX-X lock on the table
What do I need to do to ensure that these deadlocks don't occur?
I have been doing some reading on the RangeX-X locks and I'm not sure I fully understand what is going on with these. Do I have any options short of locking the entire table here?
Following on from Sam Saffron's answer:
Consider the READPAST hint to skip over any held locks, if @Id/@SubId are distinct.
Consider SERIALIZABLE and remove XLOCK, HOLDLOCK.
Use a separate staging table for the bulk insert, then copy from that.
It's hard to give you an accurate answer without having a list of indexes, table sizes, etc.; however, keep in mind that SQL Server cannot grab multiple locks in a single instant. It grabs locks one at a time, and if another connection already holds a lock on something the first transaction needs (and vice versa), kaboom: you have a deadlock.
In this particular instance there are a few things you can do:
Ensure there is an index on (Id, SubId); that way SQL Server will be able to grab a single range lock for the data being deleted.
If deadlocks are rare, retry the deadlocked transactions.
You can approach this with a sledgehammer and use TABLOCKX, which will never deadlock.
Get an accurate deadlock analysis using trace flag 1204: http://support.microsoft.com/kb/832524 (the more info you have about the actual deadlock, the easier it is to work around).
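The "retry" option above can be sketched as a loop around the transaction, catching deadlock victims (error 1205); the retry count and the placeholder bulk insert are illustrative:

```sql
DECLARE @retries INT = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        DELETE FROM [TableName] WHERE [Id] = @Id AND [SubId] = @SubId;
        -- bulk insert of the replacement rows goes here
        COMMIT TRANSACTION;
        BREAK;  -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries -= 1;   -- deadlock victim: try again
        ELSE
            THROW;               -- any other error, or out of retries: re-raise
    END CATCH
END
```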
