UPDATE Blocking SELECT Of Unrelated Rows - sql-server

I have TableA with Col1 as the primary key. I am running the following transaction without committing it (for test purposes).
BEGIN TRANSACTION
UPDATE TableA
SET Col3 = 0
WHERE Col2 = 'AAA'
In the meanwhile, I run the following query and see that it waits on the first transaction to complete.
SELECT *
FROM TableA
WHERE Col2 = 'BBB'
But the following query returns the results immediately:
SELECT *
FROM TableA
WHERE Col1 = '1'
So I thought that the second query might be blocked because it has to read rows holding exclusive locks taken by the first transaction in order to find rows with Col2 = 'BBB'. That's why I then created an index on Col2, so that a full scan would not be necessary, but that did not help either: the second query still waits on the first transaction.
What can be done to prevent the SELECT from being blocked (other than using NOLOCK)?
P.S: Transaction isolation level is "Read Committed".
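For reference, the usual way to stop readers being blocked by writers under READ COMMITTED (without resorting to NOLOCK) is row versioning; a minimal sketch, assuming a database named TestDb, which the question does not name:

```sql
-- With READ_COMMITTED_SNAPSHOT enabled, READ COMMITTED readers read the
-- last committed version of each row instead of waiting on shared locks,
-- so the SELECT on Col2 = 'BBB' would no longer wait for the open UPDATE.
ALTER DATABASE TestDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Note this is a database-wide setting and adds tempdb version-store overhead, so it is a trade-off rather than a per-query fix.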

Multiple SQL Server inserts fail, but go unnoticed in Mule

When I run multiple insert queries together against a SQL Server database with Mule, and the second insert fails, no row is inserted and the failure does not show up in the flow or the logs.
We use variables to collect together different SQL statements to insert into a header and detail table. I noticed last week that in some cases, the header record was there but no detail. There was nothing in the logs for this.
After some investigation it appears that Mule takes the result of the first SQL insert as the return code, regardless of whether the subsequent inserts worked or not.
I've tried changing this to a BULK UPDATE but I still get the same result.
Edit: code included for a sample insert. There are 4 insert statements; 3 will succeed and 1 will fail, yet the batch is simply reported back as successful -
insert into highjump.t_import_order(status,idoc_number,datetime_created,datetime_processed,error_message,wh_id,order_number,order_type,order_subtype,is_vas,is_shrinkwrap,is_mhe_packhold,is_consolidation,is_nonmhe_packhold,is_full_case,ship_to_account,ship_to_name,ship_to_address1,ship_to_address2,ship_to_address3,ship_to_city,ship_to_state,ship_to_zip,ship_to_country,sold_to_account,sold_to_name,sold_to_address1,sold_to_address2,sold_to_address3,sold_to_city,sold_to_state,sold_to_zip,telephone_number,sold_to_country,stock_pool,discount,box_type,service_level,telephone_number_alt,dest_type,carrier_code,route_code,inv_cat,cust_order_date,expected_ship_date,expected_delivery_date,dsv_tracking_number,postage_cost,carton_contents_type,unit_total,total_before_discount,total_after_discount,carton_cubing_indicator,req_proof_of_delivery,payment_type,is_cms,carrier_override_type,sales_org,pack_note_preference,shipper_order_id,master_order_number,currency_code,store_code,order_method,dsv_reference,email_address,ship_complete_flag,replen_type,carton_content_flags,partner_profile) values
(N'Z',N'0000000629673252','2019-04-12 09:57:38','2019-04-12 09:57:38',null,N'WST',N'6412210697',N'MCR',N'STD EU',0,0,0,0,0,0,N'MCRSHPTODE',N'Dave Smith',
N'888415936',N'PACKSTATION 432',null,N'Koettgenstr. 8',null,N'13629',N'DE',N'MCRSLDTODE',N'MCR SOLD TO DE',N'High St.',null,null,N'Street',null,N'BA330YA',null,
N'GB',N'MC01',0,N'BAG',N'10',null,N'RE','',N'01',N'W','2019-03-29 11:38:13','2019-03-29 11:38:13','2019-03-29 11:38:13',null,0,N'001',2,null,null,'91',1,
N'MCR CON - UK Orders',1,'1',null,N'N',null,N'623611121','GBP',null,null,null,N'Smith@arcor.com',null,'R',N'F', N'WWMULESFTH');
insert into highjump.t_import_order_cms
(order_id,delivery_from_date,delivery_to_date,pin_number,cms_location,cms_delivery_endpoint,cms_comm_preference,cms_dont_despatch_before,cms_market,cms_brand,is_gift,gift_message,loyalty_number,cms_dest_type,cms_time_delivery,cms_day_delivery,cms_customer_type,carrier_service_name,special_instructions) values ((select top(1) order_id from highjump.t_import_order where order_number='6412210697'),'2019-04-03','2019-04-03',null,N'432',N'PACKSTATIONPACKSTATION',null,null,null,N'CLA',null,null,null,N'PUDO',null,null,null,null,null);
insert into highjump.t_import_order_detail
(order_id,line_number,item_number,order_quantity,customer_item_number,ratio_pack_group,is_ratio_pack,ratio_pack_qty,uom,retail_price,freight_class,sales_order_number,customer_order_number,dsv_price_discount,customer_item_colour,price_paid,currency_code,customer_item_size)
values ((select top(1) order_id from highjump.t_import_order where order_number='6412210697'),00010,'261392464080',1.000,null,null,null,null,'U','0.0',null,N'623611121000010',N'623611121',null,null,99.95,null,null);
insert into highjump.t_import_order_detail (order_id,line_number,item_number,order_quantity,customer_item_number,ratio_pack_group,is_ratio_pack,ratio_pack_qty,uom,retail_price,freight_class,sales_order_number,customer_order_number,dsv_price_discount,customer_item_colour,price_paid,currency_code,customer_item_size)
values ((select top(1) order_id from highjump.t_import_order where order_number='6412210697'),00020,'261394324080',1.000,null,null,null,null,'U','0.0',null,N'623611121000020',N'623611121',null,null,89.95,null,null);
Structurally, these SQL statements look fine. It is unclear why any of them would fail or silently insert nothing; as far as I can see they should just work.
To verify, run these queries in SQL Server Management Studio; each should return 1:
select count(*) from highjump.t_import_order where order_number = '6412210697';
select count(*) from highjump.t_import_order_cms where order_id = (select top (1) order_id from highjump.t_import_order where order_number = '6412210697');
select count(*) from highjump.t_import_order_detail where line_number = 10 and order_id = (select top (1) order_id from highjump.t_import_order where order_number = '6412210697');
select count(*) from highjump.t_import_order_detail where line_number = 20 and order_id = (select top (1) order_id from highjump.t_import_order where order_number = '6412210697');
Use a transaction when executing multiple insert queries. That way, if any one of the statements raises an error, the whole batch can be rolled back:
BEGIN TRY
BEGIN TRANSACTION
-- multiple insert/delete/update statements go here
COMMIT
END TRY
BEGIN CATCH
ROLLBACK
END CATCH
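If the silent-failure aspect matters (as it does in the Mule scenario above), a variant that re-raises the error after rolling back makes the failure visible to the caller; a sketch, assuming SQL Server 2012+ for THROW:

```sql
BEGIN TRY
BEGIN TRANSACTION
-- multiple insert/delete/update statements go here
COMMIT
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK;
THROW; -- re-raise the original error so the caller (e.g. Mule) sees it
END CATCH
```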

Is it possible to produce phantom read in single SQL Server query?

All of the explanations of phantom reads I managed to find demonstrate a phantom read by running 2 select statements in one transaction (e.g. https://blobeater.blog/2017/10/26/sql-server-phantom-reads/ )
BEGIN TRAN
SELECT #1
DELAY DURING WHICH AN INSERT TAKES PLACE IN A DIFFERENT TRANSACTION
SELECT #2
END TRAN
Is it possible to reproduce a phantom read in one select statement? This would mean that select statement starts on transaction #1. Then insert runs on transaction #2 and commits. Finally select statement from transaction #1 completes, but does not return a row that transaction #2 has inserted.
The SQL Server Transaction Isolation Levels documentation defines a phantom row as one "that matches the search criteria but is not initially seen" (emphasis mine). Consequently, more than one SELECT statement is needed for a phantom read to occur.
Data inserted while a SELECT statement is executing might not be returned under the READ COMMITTED isolation level, depending on the timing, but this is not a phantom read by definition. The example below shows this behavior.
--create table with enough data for a long-running SELECT query
CREATE TABLE dbo.PhantomReadExample(
PhantomReadExampleID int NOT NULL
CONSTRAINT PK_PhantomReadExample PRIMARY KEY
, PhantomReadData char(8000) NOT NULL
);
--insert 100K rows
WITH
t10 AS (SELECT n FROM (VALUES(0),(0),(0),(0),(0),(0),(0),(0),(0),(0)) t(n))
,t1k AS (SELECT 0 AS n FROM t10 AS a CROSS JOIN t10 AS b CROSS JOIN t10 AS c)
,t1m AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS num FROM t1k AS a CROSS JOIN t1k AS b)
INSERT INTO dbo.PhantomReadExample WITH(TABLOCKX) (PhantomReadExampleID, PhantomReadData)
SELECT num*2, 'data'
FROM t1m
WHERE num <= 100000;
GO
--run this on connection 1
SELECT *
FROM dbo.PhantomReadExample
ORDER BY PhantomReadExampleID;
GO
--run this on connection 2 while the connection 1 SELECT is running
INSERT INTO dbo.PhantomReadExample(PhantomReadExampleID, PhantomReadData)
VALUES(1, 'data');
GO
Shared locks are acquired on rows as they are read during the SELECT query's scan to ensure only committed data are read, but they are released immediately after each row is read to improve concurrency. This allows other sessions to insert, update, and delete rows while the SELECT query is running.
The inserted row is not returned in this case because the ordered clustered index scan had already passed the point of the insert.
Below is the Wikipedia definition of phantom reads:
A phantom read occurs when, in the course of a transaction, new rows
are added by another transaction to the records being read.
This can occur when range locks are not acquired on performing a
SELECT ... WHERE operation. The phantom reads anomaly is a special
case of Non-repeatable reads when Transaction 1 repeats a ranged
SELECT ... WHERE query and, between both operations, Transaction 2
creates (i.e. INSERT) new rows (in the target table) which fulfill
that WHERE clause.
This is certainly possible to reproduce in a single reading query (of course other database activity must also be happening to produce the phantom rows).
Setup
CREATE TABLE Test(X INT PRIMARY KEY);
Connection 1 (leave this running)
SET NOCOUNT ON;
WHILE 1 = 1
INSERT INTO Test VALUES (CRYPT_GEN_RANDOM(4))
Connection 2
This is extremely likely to return some rows if run at the READ COMMITTED isolation level (the default for the on-premises product, and enforced with the table hint below):
WITH CTE AS
(
SELECT *
FROM Test WITH (READCOMMITTEDLOCK)
WHERE X BETWEEN 0 AND 2147483647
)
SELECT *
FROM CTE c1
FULL OUTER HASH JOIN CTE c2 ON c1.X = c2.X
WHERE (c1.X IS NULL OR c2.X IS NULL)
The returned rows are values added between the first and second read of the table for rows matching the WHERE X BETWEEN 0 AND 2147483647 predicate.

SQL Server Is there any impact if we keep the TRAN open for the SELECT and UPDATE

I came across a procedure where a transaction is held open; here is the snippet:
BEGIN TRAN
--Lot of select queries to process the business logic; let's assume 30 seconds to generate @Par3 and @Par4, as they contain XML data
IF 1= 1
BEGIN
UPDATE Table SET Col1= 'Value' WHERE Col2=@Par1 AND Col3 = @Par4
UPDATE Table2 SET Col5= 'Value' WHERE Col2=@Par1 AND Col3 = @Par4
END
COMMIT
I would like to know whether the above code will lock the tables used in the SELECT queries. I am planning to start the transaction only just before the UPDATEs.
Is the code below better than the one above?
BEGIN
--Lot of select queries to process the business logic; let's assume 30 seconds to generate @Par3 and @Par4, as they contain XML data
IF 1= 1
BEGIN
BEGIN TRAN
UPDATE Table SET Col1= 'Value' WHERE Col2=@Par1 AND Col3 = @Par4
UPDATE Table2 SET Col5= 'Value' WHERE Col2=@Par1 AND Col3 = @Par4
COMMIT
END
END
Please let me know if it makes any difference.
The default isolation level is READ COMMITTED. Without READ_COMMITTED_SNAPSHOT set to ON, your SELECT may be blocked if another transaction has performed updates/deletes/inserts. This depends on which locks are taken, which in turn depends on the data you touch.
Both variants should behave the same: without an explicit transaction, SQL Server creates one on its own and uses the default isolation level.
BEGIN TRAN
UPDATE Table SET Col1= 'Value' WHERE Col2=@Par1 AND Col3 = @Par4
COMMIT
makes no sense on its own: by default SQL Server operates in autocommit mode, so there is no need to wrap a single UPDATE statement in BEGIN TRAN..COMMIT; it will be committed automatically.
The original code opens a transaction to perform multiple updates, which means the business logic requires all of these updates to be committed together, or all rolled back if something goes wrong.
I would like to know if the above code will lock the tables which are in SELECT clause
This depends on your SELECTs and on the table structure. If the table has an index on Col2 or Col3, and your SELECTs do not touch the same rows or use READPAST, there will be no conflict.
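The READPAST option mentioned here can be sketched as follows, reusing the hypothetical table and parameter names from the question:

```sql
-- READPAST makes the SELECT skip rows that are currently locked by other
-- transactions instead of waiting for them, at the cost of possibly
-- missing those rows in the result.
SELECT Col1, Col5
FROM Table2 WITH (READPAST)
WHERE Col2 = @Par1;
```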

Insert if not exists avoiding race condition

How can I make sure following statements don't have a race condition?
IF NOT EXISTS (select col1 from Table1 where SomeId=@SomeId)
INSERT INTO Table1 values (@SomeId,...)
IF NOT EXISTS (select col1 from Table2 where SomeId=@SomeId)
INSERT INTO Table2 values (@SomeId,...)
Is this enough?
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
IF NOT EXISTS (SELECT col1 FROM Table1 WITH (UPDLOCK) WHERE SomeId=@SomeId)
INSERT INTO Table1 VALUES (@SomeId,...)
COMMIT TRAN
BEGIN TRAN
IF NOT EXISTS (SELECT col1 FROM Table2 WITH (UPDLOCK) WHERE SomeId=@SomeId)
INSERT INTO Table2 VALUES (@SomeId,...)
COMMIT TRAN
Yes, that is enough. Setting the transaction isolation level to SERIALIZABLE will take key-range locks covering SomeId=@SomeId when your SELECT runs, which prevents other processes from inserting values with the same key while your transaction is running.
The WITH(UPDLOCK) hint will cause the SELECT to obtain an update lock on the selected row(s), if they exist. This will prevent other transactions from modifying these rows (if they existed at the time of the select) while your transaction is running.
It doesn't look like you really need the WITH(UPDLOCK) hint, since you are committing the transaction right away if the record already exists. If you wanted to do something else before committing if the record does exist, you might need this hint-- but as it is, it appears you do not.
A statement is a transaction
declare @v int = 11;
insert into iden (val)
select @v
where not exists (select 1 from iden with (UPDLOCK) where val = @v)
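A common hardening of this single-statement pattern adds HOLDLOCK, so that a key-range lock is taken even when no matching row exists yet (UPDLOCK alone locks nothing if the row is absent); a sketch using the same iden table:

```sql
declare @v int = 11;
insert into iden (val)
select @v
where not exists (
    select 1 from iden with (UPDLOCK, HOLDLOCK) -- serializable-style range lock
    where val = @v
);
```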

How do I acquire write locks in SQL Server?

I need to run a query that selects ten records. Then, based on their values and some outside information, update said records.
Unfortunately I am running into deadlocks when I do this in a multi-threaded fashion. Both threads A and B run their SELECTs at the same time, acquiring shared read locks on the ten records. Then, when each tries to upgrade to an update, a deadlock occurs and one of the transactions is aborted.
So what I need to be able to say is "select and write-lock these ten records".
(Yea, I know serial transactions should be avoided, but this is a special case for me.)
Try applying UPDLOCK
BEGIN TRAN
SELECT * FROM table1
WITH (UPDLOCK, ROWLOCK)
WHERE col1 = 'value1'
UPDATE table1
set col1 = 'value2'
where col1 = 'value1'
COMMIT TRAN
