What are the implications of setting ALLOW_SNAPSHOT_ISOLATION ON? - sql-server

Should I run
ALTER DATABASE DbName SET ALLOW_SNAPSHOT_ISOLATION OFF
if snapshot transaction (TX) isolation (iso) is temporarily not used?
In other words,
Why should it be enabled in the first place?
Why isn't it enabled by default?
What is the cost of having it enabled (but temporarily not used) in SQL Server?
Update:
Enabling the snapshot TX iso level on a database does not change the default away from the READ COMMITTED tx iso level.
You may check it by running:
use someDbName;
--( 1 )
alter database someDbName set allow_snapshot_isolation ON;
dbcc useroptions;
The last row shows that the tx iso level of the current session is (read committed).
So, enabling the snapshot tx iso level without switching to it does not actually use it.
In order to use it one should issue
--( 2 )
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
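For reference, a quick way to see what is enabled at the database level is to query sys.databases (a minimal sketch; the session-level isolation still has to be checked with dbcc useroptions as above):
-- database-level settings; these do not reflect the current session's isolation level
SELECT name,
       snapshot_isolation_state_desc,   -- OFF / ON / IN_TRANSITION_TO_ON / IN_TRANSITION_TO_OFF
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'someDbName';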
Update2:
I repeated the scripts from [1] with SNAPSHOT enabled (but not switched to) and without enabling READ_COMMITTED_SNAPSHOT:
--with enabling allow_snapshot_isolation
alter database snapshottest set allow_snapshot_isolation ON
-- but without enabling read_committed_snapshot
--alter database snapshottest set read_committed_snapshot ON
-- OR with OFF
alter database snapshottest set read_committed_snapshot OFF
go
There are no results/rows from executing
select * from sys.dm_tran_version_store
after executing INSERT, DELETE or UPDATE
Can you provide me with scripts illustrating that the SNAPSHOT tx iso level, enabled by ( 1 ) but not switched to by ( 2 ), produces any versions in tempdb and/or increases the data size by 14 bytes per row?
I really do not understand what the point of versioning is if it is enabled by ( 1 ) but never used (never switched to by ( 2 )).
[1]
Managing TempDB in SQL Server: TempDB Basics (Version Store: Simple Example)

As soon as row versioning (aka snapshot) is enabled in the database, all writes have to be versioned. It doesn't matter under what isolation level the write occurred, since isolation levels always affect only reads. As soon as row versioning is enabled on the database, any insert/update/delete will:
increase the size of the data by 14 bytes per row
possibly create an image of the data before the update in the version store (tempdb)
Again, it is completely irrelevant what isolation level is used. Note that row versioning also occurs if any of the following is true:
the table has a trigger
MARS is enabled on the connection
an online index operation is running on the table
All this is explained in Row Versioning Resource Usage:
Each database row may use up to 14 bytes at the end of the row for row versioning information. The row versioning information contains the transaction sequence number of the transaction that committed the version and the pointer to the versioned row. These 14 bytes are added the first time the row is modified, or when a new row is inserted, under any of these conditions:
READ_COMMITTED_SNAPSHOT or ALLOW_SNAPSHOT_ISOLATION options are ON.
The table has a trigger.
Multiple Active Result Sets (MARS) is being used.
Online index build operations are currently running on the table.
...
Row versions must be stored for as long as an active transaction needs to access it. ... if it meets any of the following conditions:
It uses row versioning-based isolation.
It uses triggers, MARS, or online index build operations.
It generates row versions.
Update
:setvar dbname testsnapshot
use master;
if db_id('$(dbname)') is not null
begin
alter database [$(dbname)] set single_user with rollback immediate;
drop database [$(dbname)];
end
go
create database [$(dbname)];
go
use [$(dbname)];
go
-- create a table before row versioning is enabled
--
create table t1 (i int not null);
go
insert into t1(i) values (1);
go
-- this check will show that the records do not contain a version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 11 (lacks version info that is at least 14 bytes)
-- enable row versioning and create an identical table
--
alter database [$(dbname)] set allow_snapshot_isolation on;
go
create table t2 (i int not null);
go
set transaction isolation level read committed;
go
insert into t2(i) values (1);
go
-- This check shows that the rows in t2 have version number
--
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t2'), NULL, NULL, 'DETAILED')
-- record size: 25 (11+14)
-- this update will show that the version store has records
-- even though the isolation level is read committed
--
begin transaction;
update t1
set i += 1;
select * from sys.dm_tran_version_store;
commit;
go
-- And if we check again the row size of t1, its rows now have a version number
select avg_record_size_in_bytes
from sys.dm_db_index_physical_stats (db_id(), object_id('t1'), NULL, NULL, 'DETAILED')
-- record size: 25
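As an aside, the overall version store footprint can be watched while such a workload runs. A sketch: sys.dm_tran_version_store_space_usage only exists on newer versions of SQL Server, so older builds can aggregate sys.dm_tran_version_store directly instead.
-- per-database version store footprint in tempdb (newer versions)
SELECT db_name(database_id) AS database_name,
       reserved_page_count,
       reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;

-- fallback for older versions: count versioned records per database
SELECT db_name(database_id) AS database_name,
       count(*) AS version_records
FROM sys.dm_tran_version_store
GROUP BY database_id;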

By default, snapshot isolation is OFF. If you turn it ON, SQL Server will maintain snapshots of data for running transactions.
Example: on connection 1, you are running a big SELECT. On connection 2, you update some of the records that are going to be returned by the first SELECT.
With snapshot isolation ON, SQL Server will keep a temporary copy of the data affected by the update, so the SELECT will return the original data.
This additional data management affects performance. That's why the setting is OFF by default.
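A minimal sketch of that scenario, assuming a hypothetical dbo.Orders table and ALLOW_SNAPSHOT_ISOLATION already ON for the database:
-- Connection 1: start a snapshot transaction and run the long SELECT
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;   -- reads a consistent snapshot, takes no shared locks

-- Connection 2: meanwhile, update some of those rows (not blocked by the reader)
UPDATE dbo.Orders SET Amount = Amount * 1.1 WHERE CustomerID = 42;

-- Connection 1: a repeated SELECT inside the same transaction
-- still returns the original rows; the pre-update images come from tempdb
SELECT * FROM dbo.Orders;
COMMIT;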

Related

Transaction causes freezing of entire database in SQL Server

I have worked mostly on PostgreSQL, but recently I was assigned to a project with SQL Server and I encountered very strange behavior of this engine. I am using a transaction in my code and connect to the server via the System.Data.SqlClient library. The code in the transaction is approximately 1000 lines long, so I would rather not copy it here, but the transaction is handled via the code below:
using (var transaction = connection.BeginTransaction(IsolationLevel.ReadCommitted))
{
//here code goes
//1. inserting new table metadata via inserts and updates
//2. creating new tables according to users project
//3. post execute actions via inserts and updates
//here is intended transaction freeze
await Task.Delay(1000 * 60 * 2);
}
During this execution I cannot perform any operation on the database (query execution in SSMS or code execution in the application, it doesn't matter). Simple selects, e.g. SELECT * FROM "TableA", hang; retrieving database properties in SSMS hangs, etc. Any independent query waits for this one transaction to be completed.
I found several articles and answers here on SO, and based on those I tried following solutions:
Use WITH (NOLOCK) or WITH (READPAST) in SELECT statement
Changing the database property Is Read Committed Snapshot On to True
Changing transaction isolation level in code (all possible levels were tested)
None of the above solutions works.
I tested on 3 different computers: a desktop and two laptops, with the same behavior (SQL Server and SSMS were installed with default options).
In this thread: Understanding locking behavior in SQL Server there is some explanation of transaction isolation levels and locks, but the problem is that WITH (NOLOCK) doesn't work for me, as mentioned in 1).
This is a very big problem for me, because my asynchronous application effectively works synchronously because of these weird locks.
Oracle and Postgres databases work perfectly fine; the problem concerns SQL Server only.
I don't use EntityFramework - I handle connections myself.
Windows 10 Pro
Microsoft SQL Server Developer (64-bit) version 15.0.2000.5
System.Data.SqlClient version 4.8.3
.NET 6.0
Any clues?
Update 1:
As pointed out in the comments, I do indeed have schema changes in my transaction: CREATE TABLE and ALTER TABLE statements mixed with standard UPDATEs and SELECTs. My app allows the user to create their own tables (with limited functionality), and when such a table is registered via INSERT there are some CREATE statements to adjust the table structure.
Update 2:
I can perform SELECT * FROM sys.dm_tran_locks
I executed DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
The problem remains.
The cause of the locking issue is DDL (CREATE TABLE, etc.) within a transaction. This will acquire and hold restrictive locks on system table metadata and block other database activity that needs access to object metadata until the transaction is committed.
This is an app design problem, as one should not routinely execute DDL in application code. If that design cannot be easily remediated, perform the DDL operations separately in a short transaction (or as autocommit statements without an explicit transaction) and handle DDL rollback in code.
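A rough sketch of that suggestion, with hypothetical table names: the DDL runs on its own as a short autocommit batch, the data changes keep their transaction, and a compensating DROP is issued from code if the data transaction fails.
-- Step 1: run the DDL on its own, outside the long data transaction (autocommit)
CREATE TABLE dbo.UserTable_1234 (Id int NOT NULL PRIMARY KEY, Payload nvarchar(200) NULL);

-- Step 2: do the metadata INSERTs/UPDATEs in the application transaction as before
BEGIN TRANSACTION;
INSERT INTO dbo.TableRegistry (TableName) VALUES (N'UserTable_1234');
-- ... other inserts/updates ...
COMMIT;

-- Step 3: if step 2 fails, compensate for the DDL from code instead of relying on rollback
-- IF OBJECT_ID(N'dbo.UserTable_1234') IS NOT NULL DROP TABLE dbo.UserTable_1234;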
You can use this useful stored proc, which I picked up somewhere along my travels. It recently helped me see the locking on a table and showed that after setting READ UNCOMMITTED it was no longer taking row/page/table locks, but still held the schema lock. I believe you may have schema locks if you are modifying schemas! Also, as commented, don't keep a transaction open long; in and out is key.
The following runs the stored proc every second, 20 times, so you get a series of snapshots of the locking; a really useful stored proc to remember.
EXEC [dbo].CheckLocks @Database = 'PriceBook'
WAITFOR DELAY '00:00:01'
GO 20
The stored proc is as follows
/*
This script can be run to find locks at the current time
We can run it as follows:
EXEC [dbo].CheckLocks @Database = 'PriceBook'
WAITFOR DELAY '00:00:01'
GO 10
*/
CREATE OR ALTER PROCEDURE [dbo].[CheckLocks]
@Database NVARCHAR(256)
AS
BEGIN
-- Get the sp_who details
IF object_id('tempdb..#WhoDetails') IS NOT NULL
BEGIN
DROP TABLE #WhoDetails
END
CREATE TABLE #WhoDetails (
[spid] INT,
[ecid] INT,
[status] VARCHAR(30),
[loginame] VARCHAR(128),
[hostname] VARCHAR(128),
[blk] VARCHAR(5),
[dbname] VARCHAR(128),
[cmd] VARCHAR(128),
[request_id] INT
)
INSERT INTO #WhoDetails EXEC sp_who
-- Get the sp_lock details
IF object_id('tempdb..#CheckLocks') IS NOT NULL
BEGIN
DROP TABLE #CheckLocks
END
CREATE TABLE #CheckLocks (
[spid] int,
[dbid] int,
[ObjId] int,
[IndId] int,
[Type] char(4),
[Resource] nchar(32),
[Mode] char(8),
[Status] char(6)
)
INSERT INTO #CheckLocks EXEC sp_lock
SELECT DISTINCT
W.[loginame],
L.[spid],
L.[dbid],
db_name(L.dbid) AS [Database],
L.[ObjId],
object_name(objID) AS [ObjectName],
L.[IndId],
L.[Type],
L.[Resource],
L.[Mode],
L.[Status]--,
--ST.text,
--IB.event_info
FROM #CheckLocks AS L
INNER JOIN #WhoDetails AS W ON W.spid = L.spid
INNER JOIN sys.dm_exec_connections AS EC ON EC.session_id = L.spid
--CROSS APPLY sys.dm_exec_sql_text(EC.most_recent_sql_handle) AS ST
--CROSS APPLY sys.dm_exec_input_buffer(EC.session_id, NULL) AS IB -- get the code that the session of interest last submitted
WHERE L.[dbid] != db_id('tempdb')
AND L.[Type] IN ('PAG', 'EXT', 'TAB')
AND L.[dbid] = db_id(@Database)
/*
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-lock-transact-sql?view=sql-server-ver15
Lock modes are as follows
------------------------------
S = Shared
U = Update
X = Exclusive
IS = Intent Shared
IU = Intent Update
IX = Intent Exclusive
Sch-S = Schema stability lock, so we can't remove tables or indexes in use
Lock Type are as follows:
------------------------------
RID = Single row lock
KEY = Lock within an index that protects a range of keys
PAG = Page level lock
EXT = Extent lock
TAB = Table Lock
DB = Database lock
*/
END
This is what you might see if you can catch the locking; this was a before and after example, left and right.

How to prevent deadlock of table in SQL Server

I have a table where values can be altered by different users, with about 100k rows of records.
I made a stored procedure that begins a transaction and, at the end, either commits or rolls back the changes depending on the situation.
The problem we're encountering is a lock on that table. For example, while the 1st user is executing the stored procedure through the system, the other users can't select from the table or execute the stored procedure, because the table is locked.
So is there any way I can avoid the lock other than using dirty reads? Or a way I can roll back the changes without using BEGIN TRAN, since that is the main reason why the table stays locked?
Yes, you can at least (quick & dirty) enable the SNAPSHOT isolation level for transactions. That will keep readers and writers from blocking each other inside the transactions (writers can still block other writers).
ALTER DATABASE MyDatabase
SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
See the documentation for details.
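One practical note: turning READ_COMMITTED_SNAPSHOT on requires that there are no other active connections in the database, so on a busy system it is usually combined with a termination option (a sketch):
ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;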

What is the fastest way to clear a SQL table?

I have a table with about 300,000 rows and 30 columns. How can I quickly clear it out? If I do a DELETE FROM MyTable query, it takes a long time to run. I'm trying the following stored procedure to basically make a copy of the table with no data, drop the original table, and rename the new table as the original:
USE [myDatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[ClearTheTable]
AS
BEGIN
SET NOCOUNT ON;
SELECT * INTO tempMyTable FROM MyTable WHERE 1 = 0;
DROP TABLE MyTable
EXEC sp_rename tempMyTable, MyTable;
END
This took nearly 7 minutes to run. Is there a faster way to do it? I don't need any logging, rollback or anything of that nature.
If it matters, I'm calling the stored procedure from a C# app. I guess I could write some code to recreate the table from scratch after doing a DROP TABLE, but I didn't want to have to recompile the app any time a column is added or changed.
Thanks!
EDIT
Here's my updated stored procedure:
USE [myDatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[ClearTheTable]
AS
BEGIN
SET NOCOUNT ON;
ALTER DATABASE myDatabase
SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE
TRUNCATE TABLE MyTable
ALTER DATABASE myDatabase
SET MULTI_USER
END
The best way to clear a table is with TRUNCATE.
Since you are creating and dropping the table, I'll assume you have no constraints.
TRUNCATE TABLE <target table>
Some advantages:
Less transaction log space is used.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data and records only the page deallocations in the transaction log.
Fewer locks are typically used.
When the DELETE statement is executed using a row lock, each row in the table is locked for deletion. TRUNCATE TABLE always locks the table (including a schema (SCH-M) lock) and page but not each row.
Without exception, zero pages are left in the table.
After a DELETE statement is executed, the table can still contain empty pages. For example, empty pages in a heap cannot be deallocated without at least an exclusive (LCK_M_X) table lock. If the delete operation does not use a table lock, the table (heap) will contain many empty pages. For indexes, the delete operation can leave empty pages behind, although these pages will be deallocated quickly by a background cleanup process.
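One caveat not covered in the quote above: TRUNCATE TABLE is rejected if the table is referenced by a FOREIGN KEY constraint, so if constraints do exist they have to be dropped and recreated around the truncate (a sketch with hypothetical names):
-- TRUNCATE fails while another table references MyTable via a foreign key
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_MyTable;
TRUNCATE TABLE dbo.MyTable;
ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_MyTable
    FOREIGN KEY (MyTableId) REFERENCES dbo.MyTable (Id);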

Reserving clean block of identity values in T-SQL for data migration

We're currently working on the following process whose goal is to move data between 2 sets of database servers while maintaining FK's and handling the fact that the destination tables already have rows with overlapping identity column values:
Extract a set of rows from a "root" table and all of its children tables' FK associated data n-levels deep along with related rows that may reside in other databases on the same instance from the source database server.
Place that extracted data set into a set of staging tables on the destination database server.
Rekey the data in the staging tables by reserving a block of identities for the destination tables and update all related child staging tables (each of these staging tables will have the same schema as the source/destination table with the addition of a "lNewIdentityID" column).
Insert the data with its new identity into the destination tables in correct order (option SET IDENTITY_INSERT 'desttable' ON will be used obviously).
I'm struggling with the block reservation portion of this process (#3). Our system is pretty much a 24 hour system except for a short weekly maintenance window. Management needs this process to NOT have to wait each week for the maintenance window to migrate data between servers. That being said, I may have 100 insert transactions competing with our migration process while it is on #3. Below is my attempt to reserve the block of identities, but I'm worried that between "SET @newIdent..." and "DBCC CHECKIDENT..." an insert transaction will complete and the migration process won't have a "clean" block of identities in a known range that it can use to rekey the staging data.
I essentially need to lock the table, get the current identity, increase the identity, and then unlock the table. I don't know how to do that in T-SQL and am looking for ideas. Thank you.
IF EXISTS (SELECT TOP 1 1 FROM sys.procedures WHERE [name]='DataMigration_ReserveBlock')
DROP PROC DataMigration_ReserveBlock
GO
CREATE PROC DataMigration_ReserveBlock (
@tableName varchar(100),
@blockSize int
)
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @newIdent bigint;
SET @newIdent = @blockSize + IDENT_CURRENT(@tableName);
DBCC CHECKIDENT (@tableName, RESEED, @newIdent);
SELECT @newIdent AS NewIdentity;
END
GO
DataMigration_ReserveBlock 'tblAddress', 1234
You could wrap it in a transaction
BEGIN TRANSACTION
...
COMMIT
It should be fast enough not to cause problems with your other insert processes. Though it would be a good idea to include try / catch logic so you could roll back if problems do occur.
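A rough sketch of that idea applied to the proc above (just an explicit transaction plus TRY/CATCH as suggested; note that this alone does not guarantee concurrent inserts cannot consume values inside the reserved range, so an exclusive table lock may still be needed for a truly clean block):
CREATE OR ALTER PROC DataMigration_ReserveBlock (
    @tableName varchar(100),
    @blockSize int
)
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    DECLARE @newIdent bigint;
    BEGIN TRY
        BEGIN TRANSACTION;
            -- read and bump the identity inside one transaction
            SET @newIdent = @blockSize + IDENT_CURRENT(@tableName);
            DBCC CHECKIDENT (@tableName, RESEED, @newIdent);
        COMMIT;
        SELECT @newIdent AS NewIdentity;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        THROW;
    END CATCH
END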

How to read original data based on isolation levels

My test table:
CREATE TABLE [dbo].[Personel](
[PersonelID] [int] NOT NULL,
[Name] [nchar](10) NULL,
CONSTRAINT [PK_Personel] PRIMARY KEY CLUSTERED
(
[PersonelID] ASC
)
)
My Test Data:
insert into Personel
values (1, 'Jack')
, (2, 'John')
, (3, 'Kevin')
Connection A:
begin tran
update Personel
set Name = 'Michael'
where PersonelID = 1
Connection B:
SET TRANSACTION ISOLATION LEVEL ????
SELECT Name
FROM Personel WITH (????)
where PersonelID = 1
Connection A starts a transaction and updates the data, but the transaction is still open. Connection B tries to read the data that is being updated.
Is there a way (an Isolation Level or a hint or combination of these two) to see the original data (Jack, not Michael) before the transaction is committed or rolled back?
You can access the old version of the data in the SNAPSHOT isolation level.
This requires that the database has snapshot isolation enabled before you start:
ALTER DATABASE <dbname> SET ALLOW_SNAPSHOT_ISOLATION ON
Then in connection B
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
select * from Personel
There are some performance considerations with snapshot isolation, because prior versions of modified rows are kept in tempdb.
Documentation reference
SNAPSHOT (aka. row versioning).
Under snapshot isolation, connection B will see the data as it was when the transaction in connection B started (even if you did not start an explicit transaction, there is an implicit transaction started by the SELECT statement). See Understanding Row Versioning-Based Isolation Levels:
Read operations performed by a snapshot transaction retrieve the last version of each row that had been committed at the time the snapshot transaction started.
SNAPSHOT support must be explicitly enabled in the database:
ALTER DATABASE <DatabaseName> SET ALLOW_SNAPSHOT_ISOLATION ON;
