SELECT statement is not blocked by an existing exclusive table lock - sql-server

For testing, I am trying to simulate a condition in which a query from our web application to our SQL Server backend would time out. The web application is configured so this happens if the query runs longer than 30 seconds. I felt the easiest way to do this would be to take and hold an exclusive lock on the table that the web application wants to query. As I understand it, an exclusive lock should prevent any additional locks (even the shared locks taken by a SELECT statement).
I used the following methodology:
CREATE A LONG-HELD LOCK
Open a first query window in SSMS and run
BEGIN TRAN;
SELECT * FROM MyTable WITH (TABLOCKX);
WAITFOR DELAY '00:02:00';
ROLLBACK;
(see https://stackoverflow.com/a/25274225/2824445 )
CONFIRM THE LOCK
I can EXEC sp_lock and see results with ObjId matching MyTable, Type of TAB, Mode of X
TRY TO GET BLOCKED BY THE LOCK
Open a second query window in SSMS and run SELECT * FROM MyTable
I would expect this to sit and wait, not returning any results until after the lock is released by the first query. Instead, the second query returns with full results immediately.
STUFF I TRIED
In the second query window, if I SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, then the second query waits until the first completes as expected. However, the point is to simulate a timeout in our web application, and I do not have any easy way to alter the transaction isolation level of the web application's connections away from the default of READ COMMITTED.
In the first window, I tried modifying the table's values inside the transaction. In this case, when the second query returns immediately, the values it shows are the unmodified values.

Figured it out. We had READ_COMMITTED_SNAPSHOT turned on, which is how the second query was able to return the previous, unmodified values in part 2 of "Stuff I tried". I was able to determine this with SELECT is_read_committed_snapshot_on FROM sys.databases WHERE name = 'MyDatabase'. Once it was turned off with ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF, I began to see the expected behavior in which the second query would wait for the first to complete.
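For reference, the diagnostic and the change were the following (substitute your own database name; note that ALTER DATABASE may need exclusive use of the database, e.g. WITH ROLLBACK IMMEDIATE, if other sessions are connected):
-- Check whether read committed snapshot isolation is enabled
SELECT is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- Turn it off so readers block on writers' exclusive locks again
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT OFF;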

Related

How to properly truncate a staging table in an ETL pipeline?

We have an ETL pipeline that runs for each CSV uploaded into a storage account (Azure). It runs some transformations on the CSV and writes the outputs to another location, also as CSV, and calls a stored procedure on the database (SQL Azure) which ingests (BULK INSERT) this resulting CSV into a staging table.
This pipeline can have concurrent executions as multiple resources can be uploading files to the storage. Hence, the staging table is getting data inserted pretty often.
Then, we have a scheduled SQL job (Elastic Job) that triggers an SP that moves the data from the staging table into the final table.
At that point, we want to truncate/empty the staging table so that we do not re-insert the same rows in the next execution of the job.
The problem is, we cannot be sure that between the load from the staging table to the final table and the truncate command, there has not been any new data written into the staging table that would be truncated without first being inserted into the final table.
Is there a way to lock the staging table while we're copying the data into the final table so that the SP (called from the ETL pipeline) trying to write to it will just wait until the lock is released? Is this achievable by using transactions or maybe some manual lock commands?
If not, what's the best approach to handle this?
I would propose a solution with two identical staging tables. Let's name them StageLoading and StageProcessing.
The load process would have the following steps:
1. At the beginning both tables are empty.
2. We load some data into StageLoading table (I assume each load is a transaction).
3. When Elastic job starts it will do:
- ALTER TABLE ... SWITCH to move all data from StageLoading to StageProcessing. This leaves StageLoading empty and ready for the next loads. It is a metadata-only operation, so it takes milliseconds, and it is fully blocking, so it will complete between loads.
- load the data from StageProcessing to final tables.
- truncate table StageProcessing.
4. Now we are ready for next Elastic job.
If we try to do the SWITCH when StageProcessing is not empty, the ALTER will fail, which means the last load process failed.
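A minimal sketch of step 3, assuming StageLoading and StageProcessing have identical schemas on the same filegroup (a requirement for SWITCH on non-partitioned tables), and with FinalTable and the column list as placeholders:
-- Metadata-only move: StageLoading becomes empty, StageProcessing receives all rows
ALTER TABLE StageLoading SWITCH TO StageProcessing;

-- Load the switched-out rows into the final table
INSERT INTO FinalTable (ColA, ColB)
SELECT ColA, ColB FROM StageProcessing;

-- Empty the processing table so the next SWITCH succeeds
TRUNCATE TABLE StageProcessing;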
I like sp_getapplock and use this method myself in a few places for its flexibility and because it gives you full control over the locking logic and wait times.
The only problem that I see is that in your case concurrent processes are not all equal.
You have SP1 that moves data from the staging table into the main table. Your system never tries to run several instances of this SP.
Another procedure, SP2, inserts data into the staging table; it can be run several times simultaneously, and that is fine.
It is easy to implement locking that prevents any concurrent run of any combination of SP1 or SP2. In other words, it is easy if the locking logic is the same for SP1 and SP2 and they are treated equally. But then you can't have several instances of SP2 running simultaneously.
It is not obvious how to implement locking that prevents concurrent runs of SP1 and SP2 while allowing several instances of SP2 to run simultaneously.
There is another approach that doesn't attempt to prevent concurrent run of SPs, but embraces and expects that simultaneous runs are possible.
One way to do it is to add an IDENTITY column to the staging table; or an automatically populated datetime column, if you can guarantee that it is unique and never decreases (which can be tricky); or a rowversion column.
The logic inside SP2 that inserts data into the staging table doesn't change.
The logic inside SP1 that moves data from the staging table into the main table needs to use these identity values.
First, read the current maximum identity value from the staging table and remember it in a variable, say @MaxID. All subsequent SELECTs, UPDATEs and DELETEs against the staging table in SP1 should include the filter WHERE ID <= @MaxID.
This ensures that if a new row happens to be added to the staging table while SP1 is running, that row is not processed and remains in the staging table until the next run of SP1.
The drawback of this approach is that you can't use TRUNCATE; you need to use DELETE with WHERE ID <= @MaxID.
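A rough sketch of SP1 under this approach (the procedure and table names here are placeholders; the staging table is assumed to have an IDENTITY column named ID):
CREATE PROCEDURE SP1_MoveStagingToFinal
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    -- Remember the high-water mark before doing anything else
    DECLARE @MaxID bigint = (SELECT MAX(ID) FROM StagingTable);
    IF @MaxID IS NULL RETURN; -- nothing to move

    BEGIN TRANSACTION;

    -- Only touch rows that existed when @MaxID was captured
    INSERT INTO FinalTable (ColA, ColB)
    SELECT ColA, ColB
    FROM StagingTable
    WHERE ID <= @MaxID;

    -- DELETE instead of TRUNCATE, so rows added after @MaxID survive for the next run
    DELETE FROM StagingTable
    WHERE ID <= @MaxID;

    COMMIT TRANSACTION;
END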
If you are OK with several instances of SP2 waiting for each other (and SP1), then you can use sp_getapplock similar to the following. I have this code in my stored procedure. You should put this logic into both SP1 and SP2.
I'm not calling sp_releaseapplock explicitly here, because the lock owner is set to Transaction and the engine will release the lock automatically when the transaction ends.
You don't have to put retry logic in the stored procedure; it can live in the external code that runs these stored procedures. In any case, your code should be ready to retry.
CREATE PROCEDURE SP2 -- or SP1
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;

    BEGIN TRANSACTION;
    BEGIN TRY
        -- Maximum number of retries
        DECLARE @VarCount int = 10;

        WHILE (@VarCount > 0)
        BEGIN
            SET @VarCount = @VarCount - 1;

            DECLARE @VarLockResult int;
            EXEC @VarLockResult = sp_getapplock
                @Resource = 'StagingTable_app_lock',
                -- this resource name should be the same in SP1 and SP2
                @LockMode = 'Exclusive',
                @LockOwner = 'Transaction',
                @LockTimeout = 60000,
                -- I'd set this timeout to be about twice the time
                -- you expect the SP to run normally
                @DbPrincipal = 'public';

            IF @VarLockResult >= 0
            BEGIN
                -- Acquired the lock

                -- for SP2
                -- INSERT INTO StagingTable ...

                -- for SP1
                -- SELECT FROM StagingTable ...
                -- TRUNCATE StagingTable ...

                -- don't retry any more
                BREAK;
            END ELSE BEGIN
                -- wait for 5 seconds and retry
                WAITFOR DELAY '00:00:05';
            END;
        END;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
        -- log error
    END CATCH;
END
This code guarantees that only one procedure is working with the staging table at any given moment. There is no concurrency. All other instances will wait.
Obviously, if you access the staging table through anything other than SP1 or SP2 (which try to acquire the lock first), that access will not be blocked.
Is there a way to lock the staging table while we're copying the data into the final table so that the SP (called from the ETL pipeline) trying to write to it will just wait until the lock is released? Is this achievable by using transactions or maybe some manual lock commands?
It looks like you are searching for a mechanism whose scope is wider than a single transaction. SQL Server/Azure SQL DB has one, and it is called an application lock:
sp_getapplock
Places a lock on an application resource.
Locks placed on a resource are associated with either the current transaction or the current session. Locks associated with the current transaction are released when the transaction commits or rolls back. Locks associated with the session are released when the session is logged out. When the server shuts down for any reason, all locks are released.
Locks can be explicitly released with sp_releaseapplock. When an application calls sp_getapplock multiple times for the same lock resource, sp_releaseapplock must be called the same number of times to release the lock. When a lock is opened with the Transaction lock owner, that lock is released when the transaction is committed or rolled back.
It basically means that your ETL tool should open a single session to the database, acquire the lock, and release it when finished. Other sessions, before trying to do anything, should try to acquire the lock (they cannot, because it is already taken), wait until it is released, and then continue to work.
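A minimal sketch of that pattern with a session-owned lock (the resource name 'StagingTableETL' is just an example):
-- In the ETL tool's session, before touching the staging table:
DECLARE @Result int;
EXEC @Result = sp_getapplock
    @Resource = 'StagingTableETL',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',
    @LockTimeout = 60000; -- wait up to 60 seconds for the lock

IF @Result >= 0
BEGIN
    -- ... work with the staging table ...

    -- Session-owned locks are not tied to a transaction, so release explicitly
    EXEC sp_releaseapplock
        @Resource = 'StagingTableETL',
        @LockOwner = 'Session';
END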
Assuming you have a single outbound job
Add an OutboundProcessing BIT DEFAULT 0 to the table
In the job, SET OutboundProcessing = 1 WHERE OutboundProcessing = 0 (claim the rows)
For the ETL, incorporate WHERE OutboundProcessing = 1 in the query that sources the data (transfer the rows)
After the ETL, DELETE FROM TABLE WHERE OutboundProcessing = 1 (remove the rows you transferred)
If the ETL fails, SET OutboundProcessing = 0 WHERE OutboundProcessing = 1
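As a sketch of those steps (StagingTable, FinalTable, and the column list stand in for your actual schema):
-- One-time schema change
ALTER TABLE StagingTable ADD OutboundProcessing BIT NOT NULL DEFAULT 0;

-- In the job: claim the rows
UPDATE StagingTable SET OutboundProcessing = 1 WHERE OutboundProcessing = 0;

-- Transfer the claimed rows
INSERT INTO FinalTable (ColA, ColB)
SELECT ColA, ColB FROM StagingTable WHERE OutboundProcessing = 1;

-- Remove the rows that were transferred
DELETE FROM StagingTable WHERE OutboundProcessing = 1;

-- If the ETL fails, release the claim instead:
-- UPDATE StagingTable SET OutboundProcessing = 0 WHERE OutboundProcessing = 1;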
I always prefer to "ID" each file I receive. If you can do this, you can associate the records from a given file throughout your load process. You haven't called out a need for this, but just saying.
However, with each file having an identity (an int/bigint identity value should do), you can then dynamically create as many load tables as you like from a "template" load table.
When a file arrives, create a new load table named with the ID of the file.
Process your data from load to final table.
Drop the load table for the file being processed.
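A rough illustration of the idea, with LoadTemplate as the template table and @FileID as the identity assigned to the incoming file (all names here are hypothetical):
DECLARE @FileID int = 42; -- identity assigned to the incoming file
DECLARE @Sql nvarchar(max);

-- Create a per-file load table by copying the template's structure (no rows)
SET @Sql = N'SELECT * INTO LoadTable_' + CAST(@FileID AS nvarchar(20))
         + N' FROM LoadTemplate WHERE 1 = 0;';
EXEC sp_executesql @Sql;

-- ... BULK INSERT the file into its LoadTable_<id>, then move the rows to the final table ...

-- Drop the per-file load table once that file has been processed
SET @Sql = N'DROP TABLE LoadTable_' + CAST(@FileID AS nvarchar(20)) + N';';
EXEC sp_executesql @Sql;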
This is somewhat similar to the other solution about using two tables (load and stage), but even in that solution you are still limited to having two files "loaded" (you're still only applying one file to the final table, though?).
Last, it is not clear whether your "Elastic Job" is detached from the actual "load" pipeline/processing or included in it. Being a job, I assume it is not included; and if it is a job, you can only run a single instance at a time? So it's not clear why it's important to load multiple files at once if you can only move one from load to final at a time. Why the rush to get files into load?

Why Is My Azure SQL Database Table Permanently Locked?

I have an isolated Azure SQL test database that has no active connections except my development machine through SSMS and a development web application instance. I am the only one using this database.
I am running some tests on a table of ~1M records where we need to do a large UPDATE to data in nearly all of the ~1M records.
DECLARE @BatchSize INT = 1000

WHILE @BatchSize > 0
BEGIN
    UPDATE TOP (@BatchSize)
        [MyTable]
    SET
        [Data] = [Data] + ' a change'
    WHERE
        [Data] IS NOT NULL

    SET @BatchSize = @@ROWCOUNT
    RAISERROR('Updated %d records', 0, 1, @BatchSize) WITH NOWAIT
END
This query works fine, and I can see my data being updated 1000 records at a time every few seconds.
Performing additional INSERT/UPDATE/DELETE commands on MyTable seems to be somewhat affected by this batch query running, but these operations do execute within a few seconds when run. I assume this is because locks are being taken on MyTable and my other commands execute in between the batch query's locking/looping iterations.
This behavior is all expected.
However, every so often while the batch query is running I notice that additional INSERT/UPDATE/DELETE commands on MyTable will no longer execute. They always time out/never finish. I assume some type of lock has occurred on MyTable, but it seems that the lock is never being released. Further, even if I cancel the long-running update batch query I can still no longer run any INSERT/UPDATE/DELETE commands on MyTable. Even after 10-15 minutes of the database sitting stale with nothing happening on it anymore I cannot execute write commands on MyTable. The only way I have found to "free up" the database from whatever is occurring is to scale it up and down to a new pricing tier. I assume that this pricing tier change is recycling/rebooting the instance or something.
I have reproduced this behavior multiple times during my testing today.
What is going on here?
Scaling the tier up or down rolls back all open transactions and disconnects server logins.
What you are seeing seems to be lock escalation. Try to serialize access to the database using sp_getapplock. You can also try lock hints.
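If this happens again, a diagnostic sketch along these lines (MyTable is the table from the question) can show who is holding or waiting on locks after the batch is cancelled, and lock escalation can be disabled per table if it turns out to be the culprit:
-- Sessions currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- Locks currently held on the table
SELECT request_session_id, request_mode, request_status, resource_type
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
  AND resource_associated_entity_id = OBJECT_ID(N'dbo.MyTable');

-- Optionally prevent escalation to a table lock on this table
-- ALTER TABLE dbo.MyTable SET (LOCK_ESCALATION = DISABLE);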

Why is my SQL transaction not executing a delete at the end?

I've got a simple SQL command that is supposed to read all of the records in from a table and then delete them all. Because there's a chance someone else could be writing to this table at the exact moment, I want to lock the table so that I'm sure that everything I delete is also everything I read.
BEGIN TRAN T1;
SELECT LotID FROM fsScannerIOInvalidCachedLots WITH (TABLOCK, HOLDLOCK);
DELETE FROM fsInvalidCachedLots;
COMMIT TRAN T1;
The really strange thing is, this USED to work. It worked for a while during testing, but now I guess something has changed, because it reads everything in but does not delete any of the records. Consequently, SQL Server spins up high CPU usage, because the next time this runs it takes significantly longer to execute, which I assume has something to do with the lock.
Any idea what could be going on here? I've tried both TABLOCK and TABLOCKX
Update: Oh yeah, something I forgot to mention: I can't query that table until after the next read the program does. What I mean is, after that statement is executed in code (and the command and connection are disposed of), if I try to query that table from within Management Studio it just hangs, which I assume means it's still locked. But then if I step through the calling program until I hit the next database connection, the moment after the first read the table is no longer locked.
Your SELECT retrieves data from a table named fsScannerIOInvalidCachedLots, but the delete is from a different table named fsInvalidCachedLots.
If you run this query with SET XACT_ABORT OFF, the transaction will not be aborted by the error from the invalid table name. In fact, SELECT @@TRANCOUNT will show you that there is an active transaction, and SELECT XACT_STATE() will return 1, meaning that it is active and no error has occurred.
On the other hand, with SET XACT_ABORT ON, the transaction is aborted: SELECT @@TRANCOUNT will return 0, and SELECT XACT_STATE() will return 0 (no active transaction).
See @@TRANCOUNT and XACT_STATE() on MSDN for more information about them.
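Assuming the intent was to delete from the same table that was read, the corrected batch would look like this (with XACT_ABORT ON so that any error aborts the whole transaction instead of leaving it open):
SET XACT_ABORT ON;

BEGIN TRAN T1;
SELECT LotID FROM fsScannerIOInvalidCachedLots WITH (TABLOCK, HOLDLOCK);
DELETE FROM fsScannerIOInvalidCachedLots;
COMMIT TRAN T1;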

In SQL Server, how can I lock a single row in a way similar to Oracle's "SELECT FOR UPDATE WAIT"?

I have a program that connects to an Oracle database and performs operations on it. I now want to adapt that program to also support an SQL Server database.
In the Oracle version, I use "SELECT FOR UPDATE WAIT" to lock specific rows I need. I use it in situations where the update is based on the result of the SELECT and other sessions can absolutely not modify it simultaneously, so they must manually lock it first. The system is highly subject to sessions trying to access the same data at the same time.
For example:
Two users try to fetch the row in the database with the highest priority, mark it as busy, perform operations on it, and mark it as available again for later use.
In Oracle, the logic would go basically like this:
BEGIN TRANSACTION;
SELECT ITEM_ID FROM TABLE_ITEM WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1'
AND ITEM_STATUS = 'available' AND ROWNUM = 1 FOR UPDATE WAIT 5;
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
Note that the queries are built dynamically in my code. Also note that when the previously most favorable row is marked as unavailable, the second user will automatically go for the next one and so on. Furthermore, different users working on different categories will not have to wait for each other's locks to be released. Worst comes to worst, after 5 seconds, an error would be returned and the operation would be cancelled.
So finally, the question is: how do I achieve the same results in SQL Server? I have been looking at locking hints which, in theory, seem like they should work. However, the only locks that prevent other locks are "UPDLOCK" and "XLOCK", which both only work at a table level.
Those locking hints that do work at a row level are all shared locks, which also do not satisfy my needs (both users could lock the same row at the same time, both mark it as unavailable and perform redundant operations on the corresponding item).
Some people seem to add a "time modified" column so sessions can verify that they are the ones who modified it, but this sounds like there would be a lot of redundant and unnecessary accesses.
You're probably looking for WITH (UPDLOCK, HOLDLOCK). This will make the SELECT take update locks rather than shared locks, so no other session can lock the row for a concurrent update. The HOLDLOCK hint tells SQL Server to keep the lock until the transaction ends.
FROM TABLE_ITEM WITH (UPDLOCK, HOLDLOCK)
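As a rough translation of the Oracle example, with SET LOCK_TIMEOUT standing in for WAIT 5 and TOP (1) for ROWNUM = 1 (a sketch, not a drop-in replacement):
SET LOCK_TIMEOUT 5000; -- give up after 5 seconds, similar to WAIT 5

DECLARE @ItemID int; -- assumed type of ITEM_ID

BEGIN TRANSACTION;

SELECT TOP (1) @ItemID = ITEM_ID
FROM TABLE_ITEM WITH (UPDLOCK, HOLDLOCK, ROWLOCK)
WHERE ITEM_PRIORITY > 10
  AND ITEM_CATEGORY = 'CT1'
  AND ITEM_STATUS = 'available';

UPDATE TABLE_ITEM
SET ITEM_STATUS = 'unavailable'
WHERE ITEM_ID = @ItemID;

COMMIT TRANSACTION;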
As the documentation says:
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply to the appropriate level of granularity.
So the solution is to use WITH (XLOCK, ROWLOCK):
BEGIN TRANSACTION;
SELECT TOP (1) ITEM_ID
FROM TABLE_ITEM WITH (XLOCK, ROWLOCK)
WHERE ITEM_PRIORITY > 10 AND ITEM_CATEGORY = 'CT1' AND ITEM_STATUS = 'available';
UPDATE [locked item_id] SET ITEM_STATUS = 'unavailable';
COMMIT TRANSACTION;
In SQL Server there are locking hints, but they do not span multiple statements the way the Oracle example you provided does. The way to do it in SQL Server is to set an isolation level on the transaction that contains the statements you want to execute. See this MSDN page, but the general structure would look something like:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
select * from ...
update ...
COMMIT TRANSACTION;
SERIALIZABLE is the highest isolation level. See the link for other options. From MSDN:
SERIALIZABLE specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Have you tried WITH (ROWLOCK)?
BEGIN TRAN
UPDATE your_table WITH (ROWLOCK)
SET your_field = a_value
WHERE <a predicate>
COMMIT TRAN

Does SQL Server wrap Select...Insert Queries into an implicit transaction?

When I perform a SELECT/INSERT query, does SQL Server automatically create an implicit transaction and thus treat it as one atomic operation?
Take the following query that inserts a value into a table if it isn't already there:
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
Is there any possibility of 'newvalue' being inserted into the table by another user between the evaluation of the WHERE clause and the execution of the INSERT clause if it isn't explicitly wrapped in a transaction?
You are confusing transactions with locking. A transaction reverts your data back to the original state if there is any error; otherwise, it moves the data to the new state. You will never have your data in an intermediate state when the operations are transacted. Locking, on the other hand, is what allows or prevents multiple users from accessing the data simultaneously. To answer your question: SELECT...INSERT is atomic, and as long as no granular locks are explicitly requested, no other user will be able to insert while the SELECT...INSERT is in progress.
John, the answer to this depends on your current isolation level. If you're set to READ UNCOMMITTED you could be looking for trouble, but with a higher isolation level, you should not get additional records in the table between the select and insert. With a READ COMMITTED (the default), REPEATABLE READ, or SERIALIZABLE isolation level, you should be covered.
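For illustration, the strictest variant of that advice would look like the following sketch; under SERIALIZABLE the engine holds range locks on the keys read by the EXISTS check until commit, so a concurrent insert of the same value has to wait (or deadlock) rather than slip in between the check and the insert:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (SELECT * FROM Table1 WHERE FieldA = 'newvalue');

COMMIT TRANSACTION;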
Using SSMS 2016, it can be verified that the Select/Insert statement requests a lock (and so most likely operates atomically):
Open a new query/connection for the following transaction and set a break-point on ROLLBACK TRANSACTION before starting the debugger:
BEGIN TRANSACTION
INSERT INTO Table1 (FieldA) VALUES ('newvalue');
ROLLBACK TRANSACTION --[break-point]
While at the above break-point, execute the following from a separate query window to show any locks (may take a few seconds to register any output):
SELECT * FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID()
AND resource_associated_entity_id = OBJECT_ID(N'dbo.Table1');
There should be a single lock associated with the BEGIN TRANSACTION/INSERT above (since by default it runs at the READ COMMITTED isolation level):
OBJECT ** ********** * IX LOCK GRANT 1
From another instance of SSMS, open up a new query and run the following (while still stopped at the above break-point):
INSERT INTO Table1 (FieldA)
SELECT 'newvalue'
WHERE NOT EXISTS (Select * FROM Table1 where FieldA='newvalue')
This should hang with the string "(Executing)..." being displayed in the tab title of the query window (since @@LOCK_TIMEOUT is -1 by default).
Re-run the query from Step 2.
Another lock corresponding to the Select/Insert should now show:
OBJECT ** ********** 0 IX LOCK GRANT 1
OBJECT ** ********** 0 IX LOCK GRANT 1
ref: How to check which locks are held on a table
