I'm trying to test some deadlock cases on a distributed view. I set up three nodes with Docker. All servers are registered as linked servers, and the distributed view is working fine.
Datanode1 has the table movie_33 and holds all movies up to id 333.
Datanode2 has the table movie_66 and holds all movies from 334 up to 666.
Datanode3 has the table movie_99 and holds all movies from 667 up to 999.
The view connects all three tables:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE VIEW [dbo].[movie]
AS
SELECT *
FROM dbo.movie_33
UNION ALL
SELECT *
FROM [172.16.1.3].Sakila.dbo.movie_66
UNION ALL
SELECT *
FROM [172.16.1.4].Sakila.dbo.movie_99
GO
Now I want one transaction to become the deadlock victim. I do it with the following code:
Window 1:
--
-- Example on (distributed) transactions in SQL Server
--
-- Deadlock tracing requires administrative permissions (sysadmin)!
--
-- IDs for additional testing:
-- 321 & 123 are both on mysql1
-- 123 & 456 are on mysql1 and mysql2
-- 456 & 654 are both on mysql2
-- 456 & 789 are on mysql2 and mysql3
--
-- Query Window 1 [TA1]
--
use sakila
-- allow the entire transaction to be aborted, if a sub-transaction fails
set xact_abort on
-- enable tracing for deadlocks and output status
dbcc traceon (1204,-1)
dbcc traceon (1222,-1)
dbcc tracestatus(-1)
-- we want TA1 to become the victim and TA2 to be successful
set deadlock_priority LOW
set transaction isolation level read committed
begin transaction
PRINT 'Start'
update dbo.movie set title='test1' where movie_id = 456 -- obtains lock for this row
-- allow other transaction to acquire locks
waitfor delay '00:00:10'
update dbo.movie set title='test1' where movie_id = 789 -- results in deadlock with TA2
rollback
Window 2:
--
-- Query Window 2 [TA2] (execute immediately after Query Window 1)
--
use sakila
-- allow the entire transaction to be aborted, if a sub-transaction fails
set xact_abort on
-- enable tracing for deadlocks and output status
dbcc traceon (1204,-1)
dbcc traceon (1222,-1)
dbcc tracestatus(-1)
-- we want TA1 to become the victim and TA2 to be successful
set deadlock_priority HIGH
set transaction isolation level read committed
begin transaction
update dbo.movie set title='test2' where movie_id = 789 -- obtains lock for this row
update dbo.movie set title='test2' where movie_id = 456 -- this row should be locked by TA1
-- we do not want to cause permanent changes to the database
rollback
When I use IDs that query rows on the same server, the deadlock test works fine (for example, 321 & 123 are both on mysql1, and 456 & 654 are both on mysql2).
When I use IDs that query rows on different servers, I get the following error:
Msg 7399, Level 16, State 1, Line 21
The OLE DB provider "MSOLEDBSQL" for linked server "172.16.1.3" reported an error. Execution terminated by the provider because a resource limit was reached.
Msg 7320, Level 16, State 2, Line 21
Cannot execute the query "UPDATE "Sakila"."dbo"."movie_66" set "title" = 'test2' WHERE "movie_id"=(456)" against OLE DB provider "MSOLEDBSQL" for linked server "172.16.1.3".
Can someone help me with the problem?
Thank you in advance
Related
I'm working in SQL Server 2016. I have two tables, one is a queue of work items (TQueue), and the second is the work items (TWork) that are being processed.
I have a script that grabs the top 100 items from TQueue that do not have a record in TWork, and then inserts those items into TWork to be processed.
For performance reasons, I want to run multiple instances of the script simultaneously. The challenge is that Script 1 grabs 100 items, and before the transaction to insert these items into TWork is committed, Script 2 grabs the same set of items and inserts them as well.
Question
I would like to block the reading of TQueue until the insert transaction into TWork has completed. Is this possible?
You may use table hints to achieve this goal: XLOCK takes exclusive row locks that are held until the transaction ends, ROWLOCK keeps them at row granularity, and READPAST makes other readers skip the locked rows instead of blocking on them.
For example:
Create Table Val (ID Int)
Insert Into Val (ID)
Values (0),(1),(2),(3),(4),(5)
First session:
Set Transaction Isolation level Read Committed
Begin Transaction
Select Top 2 * From Val With (ReadPast, XLock, RowLock)
-- Return 0,1
-- Commit has been commented for illustrative purposes.
-- Don't forget to commit the transaction later.
-- Commit
Second session:
Set Transaction Isolation level Read Committed
Begin Transaction
Select * From Val With (ReadPast, XLock, RowLock)
-- Return 2,3,4,5
Commit
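Applied to the question's scenario, a minimal sketch might look like the following (TQueue, TWork, and the ItemID column are assumptions, since the actual schema was not shown):
-- Hypothetical schema: TQueue(ItemID Int), TWork(ItemID Int).
Begin Transaction
    -- XLock holds exclusive row locks until commit; ReadPast makes a
    -- concurrent instance skip those locked rows instead of blocking,
    -- so two instances never claim the same batch.
    Insert Into TWork (ItemID)
    Select Top 100 q.ItemID
    From TQueue q With (ReadPast, XLock, RowLock)
    Where Not Exists (Select 1 From TWork w Where w.ItemID = q.ItemID)
    -- ... process the claimed items here, then release the locks ...
Commit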
I'm trying to write a stored procedure that first truncates a table on our data warehouse server, then copies data from our local database to the DWH server.
Here's the code:
USE [ARGTPAWN-DB-DWH].[DWH].[dbo].[PML];
GO
TRUNCATE TABLE [ARGTPAWN-DB-DWH].[DWH].[dbo].[PML];
GO
SELECT *
INTO [ARGTPAWN-DB-DWH].[DWH].[dbo].[PML]
FROM [14TPAWNDB001].[FLMedicaid].[dbo].[PML]
GO
And the response I am getting is:
Msg 911, Level 16, State 1, Line 1
Database 'ARGTPAWN-DB-DWH' does not exist. Make sure that the name is entered correctly.
Msg 4701, Level 16, State 1, Line 3
Cannot find the object "PML" because it does not exist or you do not have permissions.
Msg 117, Level 15, State 1, Line 7
The object name 'ARGTPAWN-DB-DWH.DWH.dbo.PML' contains more than the maximum number of prefixes. The maximum is 2.
The servers are already linked, so that is not an issue, but I'm very curious as to why this is not working.
Linked servers and distributed queries can be tricky in terms of performance...
You should consider writing the stored procedure on the database that hosts the target tables, even if you call it from the database that hosts the source tables.
On the target database:
CREATE PROCEDURE [DBO].[TARGET_SIDE_PS]
AS
-- For error handling.
DECLARE @aERROR int
DECLARE @aCOUNT int
-- Start transaction.
BEGIN TRANSACTION
-- Drop target table.
IF OBJECT_ID('dbo.PML', 'U') IS NOT NULL
DROP TABLE dbo.PML;
-- Error catching
SELECT @aERROR = @@ERROR, @aCOUNT = @@ROWCOUNT
IF @aERROR<>0
BEGIN
-- Error : do what is needed.
--
ROLLBACK TRANSACTION
RETURN 1
END
SELECT *
INTO dbo.PML
FROM [SOURCELINKEDSERVER].[SOURCEDATABASE].[dbo].[PML]
-- Error catching
SELECT @aERROR = @@ERROR, @aCOUNT = @@ROWCOUNT
IF @aERROR<>0
BEGIN
-- Error : do what is needed.
--
ROLLBACK TRANSACTION
RETURN 2
END
IF @aCOUNT <= 0
BEGIN
-- No data: do what is needed.
--
PRINT 'NO DATA !!'
END
COMMIT TRANSACTION
RETURN 0
Call the target-side stored procedure from the source side (or from the target side) and it's done, but the source-side database has to be linked to the target-side database.
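For example, a call from the source side might look like this (the server and database names are placeholders, and the linked server needs the 'rpc out' option enabled for remote procedure calls):
EXEC [TARGETLINKEDSERVER].[TARGETDATABASE].dbo.TARGET_SIDE_PS;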
The transaction can be removed, given the DROP/CREATE/INSERT sequence.
You can do the opposite (the stored procedure on the source side, with the drop and insert against the linked target-side server database), but you must know:
- the transaction will take a while.
- all the source data will be locked during the whole process.
- the INSERT will take a while.
You don't need to execute:
USE [ARGTPAWN-DB-DWH].[DWH].[dbo].[PML];
Simply execute the lines below.
What is ARGTPAWN-DB-DWH? If this is the server name it is not needed.
Your USE statement should refer to the database only.
Your SELECT * INTO... will attempt to create the table PML; if the table already exists (which it will if you are performing a truncate), the statement will fail. Use INSERT INTO... instead, or DROP TABLE... instead of truncate.
Don't use SELECT *.
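Putting those points together, one way the script could look (a sketch run from the local server, assuming both names are linked servers there and 'rpc out' is enabled for the remote TRUNCATE):
-- USE only takes a database name, and TRUNCATE cannot take a
-- four-part name, so the truncate is shipped to the linked server:
EXEC ('TRUNCATE TABLE [DWH].[dbo].[PML];') AT [ARGTPAWN-DB-DWH];
-- INSERT INTO (unlike SELECT ... INTO) works against an existing
-- remote table via a four-part name:
INSERT INTO [ARGTPAWN-DB-DWH].[DWH].[dbo].[PML]
SELECT * -- better: list the columns explicitly
FROM [14TPAWNDB001].[FLMedicaid].[dbo].[PML];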
I created a database called 'test_isolation' and a table 'person' with the following data:
name age
---- ---
test1 1
test2 2
test3 3
test4 4
test5 5
test6 6
Now the database is altered to allow snapshot isolation in session 1:
ALTER DATABASE test_isolation
SET ALLOW_SNAPSHOT_ISOLATION ON
GO
Now I create a transaction in session 2:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
BEGIN TRAN
SELECT * FROM PERSON
GO
DELETE FROM PERSON WHERE name = 'test6'
GO
SELECT * FROM PERSON
GO
The results are as expected. (Note we haven't committed this transaction yet!)
Now I execute the following query in session 3:
SELECT * FROM PERSON
The query in session 3 keeps running indefinitely, which means the table is locked.
If I go back to session 2 and commit the transaction, I'm able to run the query in session 3 and the results are as expected.
Transaction isolation level SNAPSHOT is not supposed to lock the table, right? Am I doing something wrong, or is my understanding of SNAPSHOT isolation wrong?
Please help.
You must explicitly run SET TRANSACTION ISOLATION LEVEL SNAPSHOT in session 3; otherwise session 3 will still operate as READ COMMITTED and block on the uncommitted DELETE.
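For example, session 3 would need to opt in before querying:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
GO
BEGIN TRAN
SELECT * FROM PERSON -- now reads the last committed row versions instead of blocking
COMMIT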
Alternatively, READ_COMMITTED_SNAPSHOT can be set at the database level, which makes the default READ COMMITTED level use row versioning instead of locks:
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
Greetings,
I have been analyzing a problem with a delete stored procedure. The procedure simply performs a cascading delete of a certain entity.
When I break the SP out into plain SQL in the query editor, it runs in approx. 7 seconds; however, when the SP is executed via EXEC, it takes over 1 minute.
I have tried the following with no luck:
Dropped the SP and then recreated it using WITH RECOMPILE
Added SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
The SQL runs in the editor with many concurrent connections without issue.
The EXEC Procedure hangs with or without concurrent connections
The procedure is similar to:
ALTER PROCEDURE [dbo].[DELETE_Something]
(
@SomethingID INT,
@Result INT OUT,
@ResultMessage NVARCHAR(1000) OUT
)--WITH RECOMPILE--!!!DEBUGGING
AS
--SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED--!!!DEBUGGING
SET @Result=1
BEGIN TRANSACTION
BEGIN TRY
DELETE FROM XXXXX --APPROX. 34 Records
DELETE FROM XXXX --APPROX. 227 Records
DELETE FROM XXX --APPROX. 58 Records
DELETE FROM XX --APPROX. 24 Records
DELETE FROM X --APPROX. 14 Records
DELETE FROM A -- 1 Record
DELETE FROM B -- 1 Record
DELETE FROM C -- 1 Record
DELETE FROM D --APROX. 3400 Records !!!HANGS FOR OVER ONE MINUTE TRACING THROUGH SP BUT NOT SQL
GOTO COMMIT_TRANS
END TRY
BEGIN CATCH
GOTO ROLLBACK_TRANS
END CATCH
COMMIT_TRANS:
SET @Result=1
COMMIT TRANSACTION
RETURN
ROLLBACK_TRANS:
SET @Result=0
SET @ResultMessage=CAST(ERROR_MESSAGE() AS NVARCHAR(1000))
ROLLBACK TRANSACTION
RETURN
Make sure your statistics are up to date. Assuming the DELETE statements have some reference to the parameters getting passed, you might try the OPTIMIZE FOR UNKNOWN option, if you are using SQL 2008.
As with any performance problem, you need to measure why it 'hangs'; guessing will get you nowhere fast. Use a methodical approach, like Waits and Queues. The simplest thing to do is look at wait_type, wait_time and wait_resource in sys.dm_exec_requests for the request doing the EXEC, while the procedure executes. Based on what is actually causing the blockage, you can take appropriate action.
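For example, while the EXEC is running, a query along these lines from another session shows what the request is waiting on (the session_id value is a placeholder for the spid of the session running the procedure):
SELECT session_id, status, command,
       wait_type, wait_time, wait_resource,
       blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id = 53; -- placeholder: spid of the session running the EXEC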
This was more of a parameter sniffing (or spoofing) issue. This is a rarely used SP. Using OPTION (OPTIMIZE FOR UNKNOWN) on the statement that uses the parameter against a rather large table apparently solved the problem. Thank you SqlACID for the tip.
DELETE FROM ProblemTableWithManyIndexes
WHERE TableID = @TableID
OPTION (OPTIMIZE FOR UNKNOWN)
Should I run
ALTER DATABASE DbName SET ALLOW_SNAPSHOT_ISOLATION OFF
if snapshot transaction (TX) isolation (iso) is temporarily not used?
In other words:
Why should it be enabled in the first place?
Why isn't it enabled by default?
What is the cost of having it enabled (but temporarily not used) in SQL Server?
--Update:
enabling the snapshot TX iso level on a database does not change the default away from READ COMMITTED.
You may check it by running:
use someDbName;
--( 1 )
alter database someDbName set allow_snapshot_isolation ON;
dbcc useroptions;
The last row shows that the TX iso level of the current session is still (read committed).
So, enabling the snapshot TX iso level without switching to it does not use it.
In order to actually use it, one should issue:
--( 2 )
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
Update2:
I repeated the scripts from [1] with SNAPSHOT enabled by ( 1 ) but not switched on by ( 2 ), and without enabling READ_COMMITTED_SNAPSHOT:
--with enabling allow_snapshot_isolation
alter database snapshottest set allow_snapshot_isolation ON
-- but without enabling read_committed_snapshot
--alter database snapshottest set read_committed_snapshot ON
-- OR with OFF
alter database snapshottest set read_committed_snapshot OFF
go
There are no results/rows from executing
select * from sys.dm_tran_version_store
after executing INSERT, DELETE or UPDATE.
Can you provide me with scripts illustrating that the SNAPSHOT tx iso level, enabled by ( 1 ) but not switched on by ( 2 ), produces any versions in tempdb and/or increases the size of data by 14 bytes per row?
Really, I do not understand the point of versioning if it is enabled by ( 1 ) but not used (not switched on by ( 2 )).
[1] Managing TempDB in SQL Server: TempDB Basics (Version Store: Simple Example)
As soon as row versioning (aka snapshot) is enabled in the database, all writes have to be versioned. It doesn't matter under what isolation level the write occurred, since isolation levels always affect only reads. As soon as database row versioning is enabled, any insert/update/delete will:
increase the size of data by 14 bytes per row
possibly create an image of the data before the update in the version store (tempdb)
Again, it is completely irrelevant what isolation level is used. Note that row versioning also occurs if any of the following is true:
the table has a trigger
MARS is enabled on the connection
an online index operation is running on the table
All this is explained in Row Versioning Resource Usage:
Each database row may use up to 14 bytes at the end of the row for row versioning information. The row versioning information contains the transaction sequence number of the transaction that committed the version and the pointer to the versioned row. These 14 bytes are added the first time the row is modified, or when a new row is inserted, under any of these conditions:
READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION options are ON.
The table has a trigger.
Multiple Active Result Sets (MARS) is being used.
Online index build operations are currently running on the table.
...
Row versions must be stored for as long as an active transaction needs to access it. ... if it meets any of the following conditions:
It uses row versioning-based isolation.
It uses triggers, MARS, or online index build operations.
It generates row versions.
Update
:setvar dbname testsnapshot
use master;
if db_id('$(dbname)') is not null
begin
alter database [$(dbname)] set single_user with rollback immediate;
drop database [$(dbname)];
end
go
create database [$(dbname)];
go
use [$(dbname)];
go
-- create a table before row versioning is enabled
--
create table t1 (i int not null);
go
insert into t1(i) values (1);
go
-- this check will show that the records do not contain a version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 11 (lacks version info that is at least 14 bytes)
-- enable row versioning and create an identical table
--
alter database [$(dbname)] set allow_snapshot_isolation on;
go
create table t2 (i int not null);
go
set transaction isolation level read committed;
go
insert into t2(i) values (1);
go
-- This check shows that the rows in t2 have version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t2'), NULL, NULL, 'DETAILED')
-- record size: 25 (11+14)
-- this update will show that the version store has records
-- even though the isolation level is read committed
--
begin transaction;
update t1
set i += 1;
select * from sys.dm_tran_version_store;
commit;
go
-- And if we check again the row size of t1, its rows now have a version number
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 25
By default, snapshot isolation is OFF. If you turn it ON, SQL Server will maintain snapshots of data for running transactions.
Example: on connection 1, you are running a big select. On connection 2, you update some of the records that are going to be returned by the first select.
With snapshot isolation ON, SQL Server will make a temporary copy of the data affected by the update, so the SELECT will return the original data.
Any additional data manipulation will affect performance. That's why this setting is OFF by default.
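A minimal sketch of that scenario (the table name and data are assumptions, and ALLOW_SNAPSHOT_ISOLATION is assumed to be already ON for the database):
-- Connection 1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT Title FROM dbo.Films WHERE FilmID = 1; -- e.g. returns 'Alien'
-- Connection 2 (runs and commits while connection 1 is still open)
UPDATE dbo.Films SET Title = 'Aliens' WHERE FilmID = 1;
-- Connection 1 again: still sees the row version from the start of its transaction
SELECT Title FROM dbo.Films WHERE FilmID = 1; -- still 'Alien'
COMMIT;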