I have a process UI (over tableProcess) where multiple people will analyze processes.
The user workflow:
1 - The user accesses the process UI, which executes an SP to return how many processes are available and opens the most recent one.
I'm using:
FROM tableProcess AS process WITH (UPDLOCK, READPAST)
In the same SP I'm updating the selected row with the current date.
2 - The user confirms the action, validating/invalidating the process,
using an SP to select and update the process.
The problem is that I'm sometimes getting a lock on the entire tableProcess. Is there any workaround for that?
SP snippet:
SELECT TOP (1) @Column1, @Column2, @Column3
FROM tableProcess AS process WITH (UPDLOCK, READPAST)
WHERE process.[Date] IS NULL
ORDER BY process.AnalyseDate;

BEGIN TRAN;

UPDATE process
SET process.[Date] = GETDATE()
FROM tableProcess AS process
WHERE process.Column2 = @Column2;

COMMIT TRAN;
I have two indexes on that table:
- Column1 (PK) (unique, clustered)
- Column2 (non-unique, non-clustered)
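For what it's worth, the window between the SELECT and the UPDATE can be closed by claiming the row in a single atomic statement. The sketch below is an assumption about the intended shape, not the original SP: it reuses the question's table and column names, folds the claim into one UPDATE ... OUTPUT over a TOP (1) derived table, and assumes INT columns purely for illustration.

```sql
-- Claim the most recent unprocessed row and return it, atomically.
-- UPDLOCK/READPAST let concurrent callers skip rows being claimed
-- instead of blocking on them.
DECLARE @claimed TABLE (Column1 INT, Column2 INT, Column3 INT);

UPDATE process
SET    process.[Date] = GETDATE()
OUTPUT inserted.Column1, inserted.Column2, inserted.Column3 INTO @claimed
FROM (
    SELECT TOP (1) *
    FROM tableProcess WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE [Date] IS NULL
    ORDER BY AnalyseDate
) AS process;

SELECT Column1, Column2, Column3 FROM @claimed;
```

Because the claim and the read happen in one statement, there is no gap during which a second session can pick up the same row, and READPAST keeps the lock footprint down to the single claimed row.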
I have an SP that runs at night, and sometimes it does not finish. The tool I automate the runs with has an option to kill the job after some time if it does not finish, e.g. after one hour.
Anyway, I think the reason it sometimes does not finish in the allotted time is that it is being blocked by another session ID. How can I query the DMVs for the text of the query and find out exactly what is in the blocking session?
I have this query, and I know the blocking session ID and my session ID:
SELECT TOP (100) w.session_id, w.wait_duration_ms, w.blocking_session_id,
       w.wait_type, e.database_id, d.name
FROM sys.dm_os_waiting_tasks w
LEFT JOIN sys.dm_exec_sessions e ON w.session_id = e.session_id
LEFT JOIN sys.databases d ON e.database_id = d.database_id
WHERE w.session_id = x AND w.blocking_session_id = y
ORDER BY w.wait_duration_ms DESC
How can I get the content (e.g. name of the SP) of the blocking session ID?
You can download and create the sp_WhoIsActive routine. It will give you clear details of what's going on right now.
For example, create a table:
DROP TABLE IF EXISTS dbo.TEST;
CREATE TABLE dbo.TEST
(
[Column] INT
);
In one session execute the code below:
BEGIN TRAN;
INSERT INTO dbo.TEST
SELECT 1
-- commit tran
Then in a second session:
SELECT *
FROM dbo.TEST;
In a third one, execute the routine:
EXEC sp_WhoIsActive;
It will give you something like the below:
You can clearly see the SELECT is blocked by the session with open transaction.
As the routine returns the activity at a particular moment, you may want to record the details in a table and analyze them later.
If you suspect that the process is blocked or is a deadlock victim, it is more appropriate to create an extended events session that collects only these events. There are many examples of how to do this, and it's easy. It's good because you can analyze the deadlock graph and fix the issue more easily.
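If installing sp_WhoIsActive isn't an option, the blocking session's statement can also be pulled straight from the DMVs, which directly answers the question above. This is a sketch built on documented views (`sys.dm_os_waiting_tasks`, `sys.dm_exec_connections`, `sys.dm_exec_sql_text`); the column aliases are mine:

```sql
-- For every waiting task, resolve the text last submitted by the
-- session that is blocking it; OBJECT_NAME gives the SP name when
-- the blocking batch is a stored procedure.
SELECT w.session_id,
       w.blocking_session_id,
       w.wait_duration_ms,
       w.wait_type,
       t.text                          AS blocking_sql,
       OBJECT_NAME(t.objectid, t.dbid) AS blocking_object  -- NULL for ad hoc batches
FROM sys.dm_os_waiting_tasks AS w
JOIN sys.dm_exec_connections AS c
    ON c.session_id = w.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE w.blocking_session_id IS NOT NULL;
```

Note that `most_recent_sql_handle` shows the last batch the blocker submitted, which is usually, but not always, the statement holding the lock.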
I have a table storing a list of completed jobs. Each job is inserted into that table after completion. Multiple users can fetch and run the same jobs, but before running, each job should be checked (against the completed-jobs table I've just mentioned) to ensure it has not already been run by anyone.
In fact, the job is inserted into that table right before running; if the job fails, it is removed from the table later. I have a stored procedure to check whether a job exists in the table, but I'm not really sure about the situation where multiple users accidentally run the same job.
Here is the basic logic (for each user's app):
Check if job A already exists in the completed-jobs table:
if exists(select * from CompletedJobs where JobId = JobA_Id)
select 1
else select 0
If job A already exists (it is actually running or has completed), the current user's action stops here. Otherwise the current user can continue by first inserting job A into the completed-jobs table:
insert into CompletedJobs(...) values(...)
Then the app continues to actually run the job, and if it fails, job A is deleted from the table.
In multithreaded code I could use a lock to ensure that no other user's action happens between the check and the insert (a kind of marking of completion), so it would work safely. But in SQL Server I'm not so sure how that could be done. For example, what if two users both pass step 1 (and both get a result of 0, meaning the job is free to run)?
I guess both would then continue running the same job, which should be avoided. At the insert phase (the beginning of step 2), I could perhaps take advantage of a unique or primary key constraint to make SQL Server throw an exception so that only one user can continue successfully. But that feels a bit hacky and not a nice solution. Are there better (and more standard) solutions to this issue?
I think the primary/unique key approach is a valid one. But there are other options; for example, you can try to lock the completed-job row, and if that succeeds, run the job and then insert it into the completed-jobs table. You can lock the row even if it doesn't exist yet.
Here is the code:
DECLARE @job_id INT = 1;

SET LOCK_TIMEOUT 100;
BEGIN TRANSACTION;
BEGIN TRY
    -- This tries to exclusively lock the (possibly nonexistent) row.
    -- If it succeeds, the lock is held for the rest of the transaction.
    -- If the row is already locked, it waits 100 ms before failing
    -- with error 1222.
    IF EXISTS (SELECT * FROM completed_jobs WITH (ROWLOCK, HOLDLOCK, XLOCK) WHERE job_id = @job_id)
    BEGIN
        SELECT 1;
        COMMIT;
        RETURN;
    END

    SET LOCK_TIMEOUT -1;

    -- execute the job, insert it into the completed_jobs table,
    -- and then COMMIT to release the lock
    SELECT 0;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    SET LOCK_TIMEOUT -1;
    -- 1222: Lock request time out period exceeded.
    IF ERROR_NUMBER() = 1222
        SELECT 2;
    ELSE
        THROW;
END CATCH
The script returns:
- SELECT 0 if it completes the job
- SELECT 1 if the job is already completed
- SELECT 2 if the job is being run by someone else
Two connections can run this script concurrently as long as @job_id differs.
If two connections run this script at the same time with the same @job_id and the job is not completed yet, one of them completes the job and the other sees it as a completed job (SELECT 1) or as a running job (SELECT 2).
If a connection A executes SELECT * FROM completed_jobs WHERE job_id = @job_id while another connection B is executing this script with the same @job_id, connection A will be blocked until B completes the script. This is true only if A runs under the READ COMMITTED, REPEATABLE READ or SERIALIZABLE isolation levels. If A runs under READ UNCOMMITTED, READ COMMITTED SNAPSHOT or SNAPSHOT, it won't be blocked, and it will see the job as uncompleted.
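For completeness, the primary/unique key approach mentioned in the question can also be kept quite clean: attempt the INSERT first and treat a duplicate-key error as "someone else got there first". This is a sketch, assuming CompletedJobs has a primary key (or unique index) on JobId; error numbers 2627 and 2601 are SQL Server's documented duplicate-key errors.

```sql
DECLARE @job_id INT = 1;

BEGIN TRY
    -- Claiming the job IS the insert; the PK makes the claim atomic,
    -- so no explicit locking or check-then-insert window exists.
    INSERT INTO CompletedJobs (JobId) VALUES (@job_id);
    SELECT 0;  -- we own the job: run it, and DELETE the row on failure
END TRY
BEGIN CATCH
    -- 2627 = primary key violation, 2601 = unique index violation
    IF ERROR_NUMBER() IN (2627, 2601)
        SELECT 1;  -- someone else already claimed or completed it
    ELSE
        THROW;
END CATCH
```

Compared with the XLOCK/HOLDLOCK script, this holds no lock while the job runs, at the cost of having to delete the marker row if the job fails.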
I have a procedure which has only two update statements, both on the same table, updating data based on different columns. For example:
update table1
set column1 = somevalue, column2 = somevalue
where column3 = somevalue
update table1
set column3 = somevalue, column2 = somevalue
where column1 = somevalue
Intermittently I am getting this error:
Transaction (Process ID N; the number differs each time) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim.
The process ID points to the same stored procedure when I check in SQL Server using sp_who.
There can be a situation where both update statements update the same row. Can this be the reason for the deadlock?
CREATE PROCEDURE update_tib_details
    (@tib_id INT, @sys_id INT)
AS
BEGIN
    UPDATE tib_sys_asc
    SET tib_id = @tib_id
    WHERE sys_id = @sys_id;

    UPDATE tib_sys_asc
    SET sys_id = @sys_id
    WHERE tib_id = @tib_id;
END
This won't happen if the updates execute in the same process; a single process can't cause a deadlock with itself.
If this update is being triggered by some other process, and isn't somehow protected from concurrency, you could experience a deadlock.
A deadlock occurs when two processes are each mid-transaction and each waiting on the other to complete before it can continue. In this case, for example:
Process A starts, and updates row 1
Process B starts, and updates row 2
Process A now wants to update row 2, and must wait for Process B to commit
Process B now wants to update row 1, and must wait for Process A to commit
The database engine is pretty good at detecting these cross-dependencies, and chooses which process to kill. If Process B is killed, Process A can finally update row 2 and commit, or vice versa.
In your case you should decide what the appropriate end result is. If you don't care in which order operations complete (last in wins), then just commit after each update. If you do care, then you should take an exclusive lock around the entire operation (i.e. Process B waits until the entirety of Process A has completed).
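One way to take that exclusive lock around the whole operation is an application lock via sp_getapplock, so the two updates from different sessions can never interleave. The sketch below wraps the question's procedure; the lock resource name 'tib_sys_asc_update' is made up for illustration.

```sql
CREATE PROCEDURE update_tib_details_serialized
    (@tib_id INT, @sys_id INT)
AS
BEGIN
    BEGIN TRANSACTION;

    -- Only one session at a time may hold this named lock; any other
    -- caller waits here. The lock is released automatically at
    -- COMMIT or ROLLBACK because @LockOwner = 'Transaction'.
    EXEC sp_getapplock @Resource  = 'tib_sys_asc_update',
                       @LockMode  = 'Exclusive',
                       @LockOwner = 'Transaction';

    UPDATE tib_sys_asc SET tib_id = @tib_id WHERE sys_id = @sys_id;
    UPDATE tib_sys_asc SET sys_id = @sys_id WHERE tib_id = @tib_id;

    COMMIT TRANSACTION;
END
```

Since both updates now run under one serialized critical section, the two sessions can no longer acquire row locks in opposite orders, which removes the deadlock at the cost of some concurrency.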
I have a database table with thousands of entries, and multiple worker threads which each pick up one row at a time and do some work (roughly one second each). While picking up a row, each thread updates a flag on it (like a timestamp) so that the other threads do not pick it up. But the problem is that I end up in scenarios where multiple threads pick up the same row.
My general question is: what design approach should I follow here to ensure that each thread picks up a unique row and does its task independently?
Note: multiple threads run in parallel to hasten the processing of the database rows, so I would like as small a critical section or exclusive lock as possible.
Just to give some context, below is the stored proc which picks up rows from the table after updating the flag on each row. Please note that the stored proc is not compilable, as I have removed unnecessary portions, but that is generally its structure.
The problem happens when multiple threads execute the stored proc in parallel. The change made by the UPDATE statement (note that the update is done after taking a lock) in one thread is not visible to another thread until the transaction is committed. And as there is a SELECT statement (which takes around 50 ms) between the UPDATE and the COMMIT, in about 20% of cases the UPDATE statement in one thread picks up a row which has already been processed.
I hope I am clear enough here.
USE [mydatabase]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[GetRequest]
AS
BEGIN
    -- some variable declaration here
    BEGIN TRANSACTION

    -- check if there are blocking rows in the request table
    -- FM: Remove records that don't qualify for operation.
    -- delete operation on the table to remove rows we don't want to process
    DELETE FROM request WHERE somecondition = 1

    -- Identify the requests to process
    DECLARE @TmpTableVar TABLE (TmpRequestId INT NULL);

    UPDATE TOP (1) request
    WITH (ROWLOCK)
    SET Lock = DATEADD(mi, 5, GETDATE())
    OUTPUT INSERTED.ID INTO @TmpTableVar
    FROM request tur
    WHERE (Lock IS NULL OR GETDATE() > Lock)  -- not locked or lock expired
      AND GETDATE() > NextRetry               -- next in the queue

    IF (@@ROWCOUNT = 0)
    BEGIN
        ROLLBACK TRANSACTION
        RETURN
    END

    SELECT @RequestID = TmpRequestId FROM @TmpTableVar

    -- Get details about the request that has just been updated
    SELECT somerows
    FROM request
    WHERE somecondition = 1

    COMMIT TRANSACTION
END
The analog of a critical section in SQL Server is sp_getapplock, which is simple to use. Alternatively you can SELECT the row to update with (UPDLOCK,READPAST,ROWLOCK) table hints. Both of these require a multi-statement transaction to control the duration of the exclusive locking.
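As a sketch of the second suggestion, the SELECT ... WITH (UPDLOCK, READPAST, ROWLOCK) variant could look like the following. Table and column names are taken from the question's procedure; the ORDER BY column is an assumption.

```sql
BEGIN TRANSACTION;

DECLARE @RequestID INT;

-- UPDLOCK claims the candidate row until COMMIT; READPAST makes
-- other workers skip rows that are already claimed instead of
-- blocking, so each worker grabs a distinct row.
SELECT TOP (1) @RequestID = ID
FROM request WITH (UPDLOCK, READPAST, ROWLOCK)
WHERE (Lock IS NULL OR GETDATE() > Lock)
  AND GETDATE() > NextRetry
ORDER BY NextRetry;

IF @RequestID IS NOT NULL
BEGIN
    -- Mark the row so the claim survives past COMMIT
    UPDATE request
    SET Lock = DATEADD(MINUTE, 5, GETDATE())
    WHERE ID = @RequestID;

    -- Fetch the details while the row is still exclusively ours
    SELECT * FROM request WHERE ID = @RequestID;
END

COMMIT TRANSACTION;
```

The key point is that the claim (SELECT with UPDLOCK), the flag update, and the detail read all sit inside one short transaction, so no other worker can see the row between the claim and the commit.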
You need to start a transaction with an appropriate isolation level in SQL to isolate your row, but this can impact your performance.
Look at this sample:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
GO
SELECT ID, NAME, FLAG FROM SAMPLE_TABLE WHERE FLAG=0
GO
UPDATE SAMPLE_TABLE SET FLAG=1 WHERE ID=1
GO
COMMIT TRANSACTION
To finish: there is no single best isolation level. You need to analyze the positive and negative points of each isolation level and test your system's performance.
More information:
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql
http://www.besttechtools.com/articles/article/sql-server-isolation-levels-by-example
https://en.wikipedia.org/wiki/Isolation_(database_systems)
I have 3 stored procedures (simplified; please try to ignore why I'm updating the table twice and why the SP is called twice):
CREATE PROCEDURE SP1
AS
BEGIN
    BEGIN TRANSACTION;
    -- Updated twice
    UPDATE Customers SET Name = 'something' OUTPUT INSERTED.* WHERE Id = 1;
    UPDATE Customers SET Name = 'something';
    COMMIT TRANSACTION;
END

CREATE PROCEDURE SP2
AS
BEGIN
    BEGIN TRANSACTION;
    UPDATE Customers SET Name = 'anothername';
    COMMIT TRANSACTION;
END

CREATE PROCEDURE SP3
AS
BEGIN
    BEGIN TRANSACTION;
    -- Called twice
    EXEC SP2;
    EXEC SP2;
    COMMIT TRANSACTION;
END
The problem is that I get a deadlock from SQL Server. It says that SP1 and SP3 are both waiting for the Customers table resource. Does that make sense? Could it be because of the inner transaction in SP2? Or maybe the use of the OUTPUT clause?
The lock is a key lock on the PK of Customers. The requested lock mode of each waiting SP is U, and the owner's is X (the other object, I guess).
A few more details:
1. These are called by the same user multiple times from different processes.
2. The statements are called twice only for the sake of the example.
3. In my actual code, Customers is actually called 'Pending Instructions'. The instructions table is sampled every minute by each listener (each computer, actually).
4. The first update query gets all the pending instructions, and the second one updates the status of the entire table to completed, just to make sure that none are left in pending mode.
5. SP3 calls SP2 twice because it updates 2 proprietary instruction rows; this happens once a day.
Thanks a lot!!
Why are you surprised by this? You have written the textbook case for a deadlock and hit it.
The first update query first gets all the pending instructions and the second one updates the status of the entire table to completed.
Yes, this will deadlock. Two concurrent calls will find different 'pending' instructions (as new 'pending' instructions can be inserted in between). Then they will each proceed to attempt to update the entire table and block on the other: deadlock. Here is the timeline:
Table contains customer:1, pending
T1 (running first update of SP1) updates table and modifies customer:1
T2 inserts a new record, customer:2, pending
T3 (running first update of SP1) updates table and modifies customer:2
T1 (running second update of SP1) tries to update the whole table, and is blocked by T3
T3 (running second update of SP1) tries to update the whole table, and is blocked by T1. Deadlock.
I have good news, though: the deadlock is the best outcome you can get. A far worse outcome is when your logic misses 'pending' customers (which will happen more often). Simply stated, your SP1 will erroneously mark as 'processed' any new 'pending' customer inserted after the first update, when it was actually just skipped. Here is the timeline:
Table contains customer:1, pending
T1 (running first update of SP1) updates table and modifies customer:1
T2 inserts a new record, customer:2, pending
T1 (running second update of SP1) updates the whole table. customer:2 was pending and is reset without actually having been processed (it is not in SP1's result set).
Your business just lost an update.
So I suggest going back to the drawing board and designing SP1 properly. For instance, SP1 should only update in the second statement what it updated in the first one. Posting real code, with proper DDL, would go a long way toward getting a useful solution.
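That "only touch what you read" shape can be sketched as follows, using the question's Customers naming. The Status column and its values are invented for illustration, since the real DDL wasn't posted:

```sql
-- Capture exactly the rows the first update touched, then limit the
-- second update to those same rows instead of the whole table.
DECLARE @touched TABLE (Id INT PRIMARY KEY);

BEGIN TRANSACTION;

UPDATE Customers
SET Name = 'something'
OUTPUT INSERTED.Id INTO @touched
WHERE Status = 'pending';            -- hypothetical status column

UPDATE c
SET c.Status = 'completed'           -- second pass: only the captured rows
FROM Customers AS c
JOIN @touched AS t ON t.Id = c.Id;

COMMIT TRANSACTION;
```

With this shape, a 'pending' row inserted between the two updates is simply left for the next run instead of being silently reset, and the second update no longer scans the whole table, which also shrinks the lock footprint that caused the deadlock.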