Are SQL Server updates sequential or parallel? - sql-server

I have a procedure which contains only two update statements. Both update the same table, based on different columns. For example:
update table1
set column1 = somevalue, column2 = somevalue
where column3 = somevalue
update table1
set column3 = somevalue, column2 = somevalue
where column1 = somevalue
Intermittently I am getting this error:
Transaction (Process ID <different number>) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim.
The process ID points to this same stored procedure when I check in SQL Server using sp_who.
There can be a situation where both update statements update the same row. Can this be the cause of the deadlock?
CREATE PROCEDURE update_tib_details
(@tib_id INT, @sys_id INT)
AS
BEGIN
UPDATE tib_sys_asc
SET tib_id = @tib_id
WHERE sys_id = @sys_id
UPDATE tib_sys_asc
SET sys_id = @sys_id
WHERE tib_id = @tib_id
END

This won't happen if the updates execute in the same process; a single process can't deadlock with itself.
If this update is being triggered by some other process, and isn't somehow protected from concurrency, you could experience a deadlock.
A deadlock occurs when two processes are each mid-transaction and waiting on the other to complete before they can continue. In this case, for example:
Process A starts, and updates row 1
Process B starts, and updates row 2
Process A now wants to update row 2, and must wait for Process B to commit
Process B now wants to update row 1, and must wait for Process A to commit
The database engine is pretty good at detecting these cross dependencies, and chooses which process to kill. If Process B is killed, Process A can finally update row 2 and commit, or vice versa.
In your case you should decide what the appropriate end result should be. If you don't care in which order operations complete (last in wins), then just commit after each update. If you do care, then you should take an exclusive lock around the entire operation (i.e. Process B waits until the entirety of Process A has completed).
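For example, here is a minimal sketch of that exclusive-lock approach, applied to the procedure from the question. The app lock resource name is made up for illustration:

```sql
CREATE PROCEDURE update_tib_details
(@tib_id INT, @sys_id INT)
AS
BEGIN
    BEGIN TRANSACTION;
    -- All callers queue on the same application lock, so the pair of
    -- updates runs as one unit and two concurrent calls cannot
    -- interleave their row locks and deadlock each other.
    EXEC sp_getapplock @Resource = 'update_tib_details_lock',
                       @LockMode = 'Exclusive',
                       @LockOwner = 'Transaction';

    UPDATE tib_sys_asc SET tib_id = @tib_id WHERE sys_id = @sys_id;
    UPDATE tib_sys_asc SET sys_id = @sys_id WHERE tib_id = @tib_id;

    -- Releasing the transaction-owned app lock happens automatically here.
    COMMIT TRANSACTION;
END
```

The cost is that all callers serialize on the lock, which is the point: Process B waits until the entirety of Process A has completed.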

Related

Atomic DROP and SELECT ... INTO table

I would have thought that code like the following would be atomic: if DeleteMe exists before running this transaction, it should be dropped and recreated. Otherwise it should simply be created:
BEGIN TRANSACTION
IF OBJECT_ID('DeleteMe') IS NOT NULL
DROP TABLE DeleteMe
SELECT query.*
INTO DeleteMe
FROM (SELECT 1 AS Value) AS query
COMMIT TRANSACTION
However, it appears that executing this code multiple times concurrently can cause various combinations of the errors:
Cannot drop the table 'DeleteMe', because it does not exist or you do not have permission.
There is already an object named 'DeleteMe' in the database.
Here's a LINQPad Script to show what I mean.
var sql = @"
BEGIN TRANSACTION
IF OBJECT_ID('DeleteMe') IS NOT NULL
DROP TABLE DeleteMe
SELECT query.*
INTO DeleteMe
FROM (SELECT 1 AS Value) AS query
COMMIT TRANSACTION
";
await Task.WhenAll(Enumerable.Range(1, 50)
.Select(async i =>
{
using var connection = new SqlConnection(this.Connection.ConnectionString);
await connection.OpenAsync();
await connection.ExecuteAsync(sql);
}).Dump());
And an example of its output:
If I use SQL Server 2016's DROP TABLE IF EXISTS feature, that part at least appears to be atomic, but then another concurrent command can apparently still create the DeleteMe table between the time this one gets dropped and the time it gets created again.
Question: Is there any way to atomically drop, create, and populate a table, such that there's no time during which that table won't exist from the perspective of another concurrent connection?
Is there any way to atomically drop, create, and populate a table, such that there's no time during which that table won't exist from the perspective of another concurrent connection?
Sure. It's just like any transaction: you have to take an exclusive lock on the very first statement. In your transaction, two sessions can run IF OBJECT_ID('DeleteMe') IS NOT NULL at the same time. Then they both try to drop the object, and only one succeeds.
DROP TABLE IF EXISTS also performs the existence check before taking the exclusive schema lock on the object that would be necessary to drop it.
A simple and reliable way to get an exclusive lock is to use sp_getapplock.
eg
BEGIN TRANSACTION
exec sp_getapplock 'dropandcreate_DeleteMe', 'exclusive'
DROP TABLE IF EXISTS DeleteMe
SELECT query.*
INTO DeleteMe
FROM (SELECT 1 AS Value) AS query
COMMIT TRANSACTION
The biggest problem I see you encountering is that by dropping the object you want to lock, you have nothing left to lock (you can lock an object, but not the 'name' of an object).
Proposals that involve finding something else to lock only resolve half the issue; the process stops racing itself, but then any other process that references the DeleteMe table can still race with this process.
Take, for example, 10 concurrent instances of the process referenced in the question, each using sp_getapplock:
Those 10 concurrent instances of the process no longer race each other.
Now add one more process that only runs SELECT * FROM DeleteMe but does not use sp_getapplock:
That process CAN still fail by racing with whichever DROP/SELECT INTO process is currently active.
That leads me to conclude that NOT dropping objects is better, so that the table in use remains in existence and CAN be locked...
BEGIN TRANSACTION
TRUNCATE TABLE DeleteMe
INSERT INTO DeleteMe SELECT 1 AS Value
COMMIT TRANSACTION
The TRUNCATE implicitly takes a table lock, and a secondary process that reads from this table never sees it as empty.

Read Committed vs Repeatable Read Example

I'm trying to execute the following two queries in SQL Server Management Studio (in separate query windows). I run them in the same order I typed them here.
When the isolation level is set to READ COMMITTED they execute fine, but when it's set to REPEATABLE READ the transactions deadlock.
Can you please help me understand what is deadlocked here?
First:
begin tran
declare @a int, @b int
set @a = (select col1 from Test where id = 1)
set @b = (select col1 from Test where id = 2)
waitfor delay '00:00:10'
update Test set col1 = @a + @b where id = 1
update Test set col1 = @a - @b where id = 2
commit
Second:
begin tran
update Test set col1 = -1 where id = 1
commit
UPD: The answer is already given, but following the advice I'm inserting the deadlock graph.
In both cases the selects use a shared lock and the updates an exclusive lock.
In READ COMMITTED mode, the shared lock is released immediately after the select finishes.
In REPEATABLE READ mode, the shared locks for the selects are held until the end of the transaction, to ensure that no other session can change the data that was read. A new read within the same transaction is guaranteed to yield the same results, unless the data was changed in the current session/transaction.
Originally I thought, that you executed "First" in both sessions. Then the explanation would be trivial: both sessions acquire and get a shared lock, which then blocks the exclusive lock required for the updates.
The situation with a second session doing only an update is a little more complex. An update statement will first acquire an update lock (UPDLOCK) for selecting the rows that must be updated; this is similar to a shared lock, but crucially is not blocked by a shared lock. Next, when the data is actually updated, it tries to convert the update lock to an exclusive lock, which fails because the first session is still holding its shared locks. Now both sessions block each other.
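A common way to avoid this conversion (a sketch, not part of the original question) is to take update locks on the reads up front, so the first transaction never has to convert a shared lock into an exclusive one:

```sql
begin tran
declare @a int, @b int
-- UPDLOCK makes the reads take update locks instead of shared locks.
-- Only one session can hold a U lock on a row at a time, so the second
-- session's update simply waits here instead of deadlocking later.
set @a = (select col1 from Test with (updlock) where id = 1)
set @b = (select col1 from Test with (updlock) where id = 2)
waitfor delay '00:00:10'
update Test set col1 = @a + @b where id = 1
update Test set col1 = @a - @b where id = 2
commit
```

The trade-off is reduced concurrency: concurrent readers of the same rows that also request UPDLOCK will block until the transaction commits.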

Select query skips records during concurrent updates

I have a table that is processed concurrently by N threads.
CREATE TABLE [dbo].[Jobs]
(
[Id] BIGINT NOT NULL CONSTRAINT [PK_Jobs] PRIMARY KEY IDENTITY,
[Data] VARBINARY(MAX) NOT NULL,
[CreationTimestamp] DATETIME2(7) NOT NULL,
[Type] INT NOT NULL,
[ModificationTimestamp] DATETIME2(7) NOT NULL,
[State] INT NOT NULL,
[RowVersion] ROWVERSION NOT NULL,
[Activity] INT NULL,
[Parent_Id] BIGINT NULL
)
GO
CREATE NONCLUSTERED INDEX [IX_Jobs_Type_State_RowVersion] ON [dbo].[Jobs]([Type], [State], [RowVersion] ASC) WHERE ([State] <> 100)
GO
CREATE NONCLUSTERED INDEX [IX_Jobs_Parent_Id_State] ON [dbo].[Jobs]([Parent_Id], [State] ASC)
GO
A job is added to the table with State=0 (New); it can be consumed by any worker in this state. When a worker picks up this queue item, State changes to 50 (Processing) and the job becomes unavailable to other consumers (workers call [dbo].[Jobs_GetFirstByType] with arguments: Type=any, @CurrentState=0, @NewState=50).
CREATE PROCEDURE [dbo].[Jobs_GetFirstByType]
@Type INT,
@CurrentState INT,
@NewState INT
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
DECLARE @JobId BIGINT;
BEGIN TRAN
SELECT TOP(1)
@JobId = Id
FROM [dbo].[Jobs] WITH (UPDLOCK, READPAST)
WHERE [Type] = @Type AND [State] = @CurrentState
ORDER BY [RowVersion];
UPDATE [dbo].[Jobs]
SET [State] = @NewState,
[ModificationTimestamp] = SYSUTCDATETIME()
OUTPUT INSERTED.[Id]
,INSERTED.[RowVersion]
,INSERTED.[Data]
,INSERTED.[Type]
,INSERTED.[State]
,INSERTED.[Activity]
WHERE [Id] = @JobId;
COMMIT TRAN
END
After processing, job State can be changed to 0 (New) again or it can be once set to 100 (Completed).
CREATE PROCEDURE [dbo].[Jobs_UpdateStatus]
@Id BIGINT,
@State INT,
@Activity INT
AS
BEGIN
UPDATE j
SET j.[State] = @State,
j.[Activity] = @Activity,
j.[ModificationTimestamp] = SYSUTCDATETIME()
OUTPUT INSERTED.[Id], INSERTED.[RowVersion]
FROM [dbo].[Jobs] j
WHERE j.[Id] = @Id;
END
Jobs have a hierarchical structure; a parent job gets State=100 (Completed) only when all of its children are completed.
Some workers call a stored procedure ([dbo].[Jobs_GetCountWithExcludedState] with @ExcludedState=100) that returns the number of incomplete jobs; when it returns 0, the parent job's State can be set to 100 (Completed).
CREATE PROCEDURE [dbo].[Jobs_GetCountWithExcludedState]
@ParentId INT,
@ExcludedState INT
AS
BEGIN
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT COUNT(1)
FROM [dbo].[Jobs]
WHERE [Parent_Id] = @ParentId
AND [State] <> @ExcludedState
END
The main problem is the strange behaviour of this stored procedure. Sometimes it returns 0 for a parent job that definitely has incomplete children. I tried turning on change data tracking and some debug information (including profiling); the child jobs 100% do not have State=100 when the SP returns 0.
It seems that the SP skips records that are not in the 100 (Completed) state, but why does this happen and how can we prevent it?
UPD:
Calling [dbo].[Jobs_GetCountWithExcludedState] starts when the parent job has children. There can be no situation where a worker starts checking child jobs before they exist, because creating the children and setting the parent job's checking activity are wrapped in a transaction:
using (var ts = new TransactionScope())
{
_jobManager.AddChilds(parentJob);
parentJob.State = 0;
parentJob.Activity = 30; // in this activity worker starts checking child jobs
ts.Complete();
}
It would be very disturbing if in fact your procedure Jobs_GetCountWithExcludedState was returning a count of 0 records when there were in fact committed records matching your criteria. It's a pretty simple procedure. So there are two possibilities:
The query is failing due to an issue with SQL Server or data corruption.
There actually are no committed records matching the criteria at the time the procedure is run.
Corruption is an unlikely, but possible cause. You can check for corruption with DBCC CHECKDB.
Most likely there really are no committed job records that have a Parent_ID equal to the #ParentId parameter and are not in a state of 100 at the time it is run.
I emphasize committed because that's what the transaction will see.
You never really explain in your question how the Parent_ID gets set on the jobs. My first thought is that maybe you are checking for unprocessed child jobs and it finds none, but then another process adds it as the Parent_ID of another incomplete job. Is this a possibility?
I see you added an update showing that when you add a child job record, the updates of the parent and child records are wrapped in a transaction. This is good, but it is not the question I was asking. This is the scenario I am considering as a possibility:
A Job Record is inserted and committed for the parent.
Jobs_GetFirstByType grabs the parent job.
A worker thread processes it and calls Jobs_UpdateStatus, updating its status to 100.
Something calls Jobs_GetCountWithExcludedState with the job and returns 0.
A child job is created and attached to the completed parent job record... which makes it now incomplete again.
I'm not saying that this is what is happening... I'm just asking if it's possible and what steps are you taking to prevent it? For example, in your code above in the update to your question you are selecting a ParentJob to attach the child to outside of the transaction. Could it be that you are selecting a parent job and then it gets completed before you run the transaction that adds the child to the parent? Or maybe the last child job of a parent job completes so the worker thread checks and marks the parent complete, but some other worker thread has already selected the job to be the parent for a new child job?
There are many different scenarios that could cause the symptom you are describing. I believe that the problem is to be found in some code that you have not shared with us yet particularly about how jobs are created and the code surrounding calls to Jobs_GetCountWithExcludedState. If you can give more information I think you will be more likely to find a usable answer, otherwise the best we can do is guess all the things that could happen in the code we can't see.
Your problem is almost certainly caused by your choice of "READ COMMITTED" isolation level. The behaviour of this depends on your configuration setting for READ_COMMITTED_SNAPSHOT, but either way it allows another transaction thread to modify records that would have been seen by your SELECT, between your SELECT and your UPDATE - so you have a race condition.
Try it again with isolation level "SERIALIZABLE" and see if that fixes your problem. For more information on isolation levels, the documentation is very helpful:
https://msdn.microsoft.com/en-AU/library/ms173763.aspx
Your sql code looks fine. Therefore, the problem lies in how it is used.
Hypothesis #0
Procedure "Jobs_GetCountWithExcludedState" is called with a totally wrong ID. Because yes, sometimes the problem really is just a little mistake. I doubt this is your case, however.
Hypothesis #1
The code checking the field "Activity = 30" is doing it at the "READ UNCOMMITTED" isolation level. It would then call "Jobs_GetCountWithExcludedState" with a parentID that may not really be ready for it, because the insertion transaction may not have ended yet or may have been rolled back.
Hypothesis #2
Procedure "Jobs_GetCountWithExcludedState" is called with an ID that no longer has children. There could be many reasons why this happens.
For example:
The transaction that inserted the child jobs failed for whatever reason, but this procedure was called anyway.
A single child job was deleted and was about to be replaced.
etc.
Hypothesis #3
Procedure "Jobs_GetCountWithExcludedState" is called before the child jobs get their parentId assigned.
Conclusion
As you can see, we need more information on two things:
1. How "Jobs_GetCountWithExcludedState" is called.
2. How jobs are inserted. Is the parentId assigned at insertion time, or is it updated a bit later? Are they inserted in batches? Is there adjacent code that does other things?
These are also the places I recommend you look to verify the above hypotheses, because the problem is most likely in the program.
Possible refactoring to invalidate all those hypotheses
Have the database tell the application directly which parent tasks are completed instead.
Just like "Jobs_GetFirstByType", there could be a "Jobs_GetFirstParentJobToComplete" which returns the next uncompleted parent job whose children are all completed, if any. It could also be a view that returns all of them. Either way, "Jobs_GetCountWithExcludedState" would then no longer be used, invalidating all my hypotheses. The new procedure or view should run at READ COMMITTED or above.
I have a suggestion: review the client side and how you handle transaction and connection lifetime for each thread, because all commands run within the client's transaction.

Record locking and concurrency issues

My logical schema is as follows:
A header record can have multiple child records.
Multiple PCs can be inserting Child records, via a stored procedure that accepts details about the child record, and a value.
When a child record is inserted, a header record may need to be inserted if one doesn't exist with the specified value.
You only ever want one header record inserted for any given "value". So if two child records are inserted with the same "Value" supplied, the header should only be created once. This requires concurrency management during inserts.
Multiple PCs can be querying unprocessed header records, via a stored procedure
A header record needs to be queried if it has a specific set of child records, and the header record is unprocessed.
You only ever want one PC to query and process each header record. There should never be an instance where a header record and its children are processed by more than one PC. This requires concurrency management during selects.
So basically my header query looks like this:
BEGIN TRANSACTION;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT TOP 1
*
INTO
#unprocessed
FROM
Header h WITH (READPAST, UPDLOCK)
JOIN
Child part1 ON part1.HeaderID = h.HeaderID AND part1.Name = 'XYZ'
JOIN
Child part2 ON part1.HeaderID = part2.HeaderID
WHERE
h.Processed = 0x0;
UPDATE
Header
SET
Processed = 0x1
WHERE
HeaderID IN (SELECT [HeaderID] FROM #unprocessed);
SELECT * FROM #unprocessed
COMMIT TRAN
So the above query ensures that concurrent queries never return the same record.
I think my problem is on the insert query. Here's what I have:
DECLARE @HeaderID INT
BEGIN TRAN
--Create header record if it doesn't exist, otherwise get its HeaderID
MERGE INTO
Header WITH (HOLDLOCK) as target
USING
(
SELECT
[Value] = @Value, --stored procedure parameter
[HeaderID]
) as source ([Value], [HeaderID]) ON target.[Value] = source.[Value] AND
target.[Processed] = 0
WHEN MATCHED THEN
UPDATE SET
--Get the ID of the existing header
@HeaderID = target.[HeaderID],
[LastInsert] = sysdatetimeoffset()
WHEN NOT MATCHED THEN
INSERT
(
[Value]
)
VALUES
(
source.[Value]
); --a MERGE statement must be terminated with a semicolon
--Get new or existing ID
SELECT @HeaderID = COALESCE(@HeaderID, SCOPE_IDENTITY());
--Insert child with the new or existing HeaderID
INSERT INTO
[Correlation].[CorrelationSetPart]
(
[HeaderID],
[Name]
)
VALUES
(
@HeaderID,
@Name --stored procedure parameter
);
My problem is that insertion query is often blocked by the above selection query, and I'm receiving timeouts. The selection query is called by a broker, so it can be called fairly quickly. Is there a better way to do this? Note, I have control over the database schema.
To answer the second part of the question
You only ever want one PC to query and process each header
record. There should never be an instance where a header record and
its children are processed by more than one PC
Have a look at sp_getapplock.
I use app locks in a similar scenario. I have a table of objects that must be processed, similar to your table of headers. The client application runs several threads simultaneously. Each thread executes a stored procedure that returns the next object for processing from the table of objects. So, the main task of the stored procedure is not to do the processing itself, but to return the first object in the queue that needs processing.
The code may look something like this:
CREATE PROCEDURE [dbo].[GetNextHeaderToProcess]
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarHeaderID int = NULL;
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'GetNextHeaderToProcess_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock
-- Find the most suitable header for processing
SELECT TOP 1
@VarHeaderID = h.HeaderID
FROM
Header h
JOIN Child part1 ON part1.HeaderID = h.HeaderID AND part1.Name = 'XYZ'
JOIN Child part2 ON part1.HeaderID = part2.HeaderID
WHERE
h.Processed = 0x0
ORDER BY ....;
-- sorting is optional, but often useful
-- for example, order by some timestamp to process oldest/newest headers first
-- Mark the found Header to prevent multiple processing.
UPDATE Header
SET Processed = 2 -- in progress. Another procedure that performs the actual processing should set it to 1 when processing is complete.
WHERE HeaderID = @VarHeaderID;
-- There is no need to explicitly verify if we found anything.
-- If @VarHeaderID is null, no rows will be updated
END;
-- Return found Header, or no rows if nothing was found, or failed to acquire the lock
SELECT
@VarHeaderID AS HeaderID
WHERE
@VarHeaderID IS NOT NULL
;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
END CATCH;
END
This procedure should be called from the procedure that does actual processing. In my case the client application does the actual processing, in your case it may be another stored procedure. The idea is that we acquire the app lock for the short time here. Of course, if the actual processing is fast you can put it inside the lock, so only one header can be processed at a time.
Once the lock is acquired we look for the most suitable header to process and then set its Processed flag. Depending on the nature of your processing you can set the flag to 1 (processed) right away, or set it to some intermediary value, like 2 (in progress) and then set it to 1 (processed) later. In any case, once the flag is not zero the header will not be chosen for processing again.
These app locks are separate from normal locks that DB puts when reading and updating rows and they should not interfere with inserts. In any case, it should be better than locking the whole table as you do WITH (UPDLOCK).
Returning to the first part of the question
You only ever want one header record inserted for any given "value".
So if two child records are inserted with the same "Value" supplied,
the header should only be created once.
You can use the same approach: acquire app lock in the beginning of the inserting procedure (with some different name than the app lock used in querying procedure). Thus you would guarantee that inserts happen sequentially, not simultaneously. BTW, in practice most likely inserts can't happen simultaneously anyway. The DB would perform them sequentially internally. They will wait for each other, because each insert locks a table for update. Also, each insert is written to transaction log and all writes to transaction log are also sequential. So, just add sp_getapplock to the beginning of your inserting procedure and remove that WITH (HOLDLOCK) hint in the MERGE.
The caller of the GetNextHeaderToProcess procedure should handle correctly the situation when procedure returns no rows. This can happen if the lock acquisition timed out, or there are simply no more headers to process. Usually the processing part simply retries after a while.
Inserting procedure should check if the lock acquisition failed and retry the insert or report the problem to the caller somehow. I usually return the generated identity ID of the inserted row (the ChildID in your case) to the caller. If procedure returns 0 it means that insert failed. The caller may decide what to do.
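Following that advice, the inserting procedure's skeleton might be sketched like this (the lock resource name is made up, and the body is the MERGE / child INSERT from the question, elided here, minus the WITH (HOLDLOCK) hint):

```sql
DECLARE @HeaderID INT;
BEGIN TRAN
    DECLARE @VarLockResult INT;
    -- Use a different resource name than the querying procedure's lock,
    -- so inserts serialize only against other inserts.
    EXEC @VarLockResult = sp_getapplock
        @Resource = 'InsertHeader_app_lock',
        @LockMode = 'Exclusive',
        @LockOwner = 'Transaction',
        @LockTimeout = 60000;
    IF @VarLockResult >= 0
    BEGIN
        -- ... MERGE into Header (without HOLDLOCK) and INSERT the
        -- child row here, as in the question ...
        COMMIT TRAN;
    END
    ELSE
    BEGIN
        -- Lock acquisition timed out: roll back and let the caller
        -- retry or report the problem (e.g. by returning 0).
        ROLLBACK TRAN;
    END
```

Because the lock is transaction-owned, it is released automatically at COMMIT or ROLLBACK.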

SQL Server deadlock issue

I am using SQL Server 2008 Enterprise. I am wondering whether deadlocks are only caused by cross dependencies (e.g. task A has a lock on L1 but waits for a lock on L2, while at the same time task B has a lock on L2 but waits for a lock on L1). Are there any other reasons and scenarios which will cause a deadlock?
Are there any other ways to cause a deadlock, e.g. a timeout (a S/I/D/U statement does not return for a very long time, and a deadlock error is returned), or not being able to acquire a lock for a long time for reasons other than cross dependencies (e.g. task C needs a lock on table T, but task D has acquired the lock on table T without releasing it, so task C cannot get the lock on table T for a long time)?
EDIT 1: will this stored procedure cause a deadlock if executed by multiple threads at the same time?
create PROCEDURE [dbo].[FooProc]
(
@Param1 int
,@Param2 int
,@Param3 int
)
AS
DELETE FooTable WHERE Param1 = @Param1
INSERT INTO FooTable
(
Param1
,Param2
,Param3
)
VALUES
(
@Param1
,@Param2
,@Param3
)
DECLARE @ID bigint
SET @ID = ISNULL(@@IDENTITY,-1)
IF @ID > 0
BEGIN
SELECT IdentityStr FROM FooTable WHERE ID = @ID
END
thanks in advance,
George
Deadlocks require a cycle where resources are locked by processes that are waiting on locks held by other processes to release the locks. Any number of processes can participate in a deadlock, and the normal method for detecting deadlocks is to take a graph of the dependencies on the locks and search for cycles in that graph.
You need to have that cycle for a deadlock to exist. Anything else is just a process held up waiting for a lock to be released. A quick way to see what processes are being blocked by others is sp_who2.
If you want to troubleshoot deadlocks, the best way is to run a trace, picking up 'deadlock graph' events. This will allow you to see what's going on by telling you what queries are holding the locks.
Also there are conversion deadlocks: both processes A and B have shared locks on resource C. Both want to get exclusive locks on C.
Even if two processes compete on only one resource, they still can embrace in a deadlock. The following scripts reproduce such a scenario. In one tab, run this:
CREATE TABLE dbo.Test ( i INT ) ;
GO
INSERT INTO dbo.Test
( i )
VALUES ( 1 ) ;
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
BEGIN TRAN
SELECT i
FROM dbo.Test ;
--UPDATE dbo.Test SET i=2 ;
After this script has completed, we have an outstanding transaction holding a shared lock. In another tab, let us have that another connection have a shared lock on the same resource:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE ;
BEGIN TRAN
SELECT i
FROM dbo.Test ;
--UPDATE dbo.Test SET i=2 ;
This script completes and renders a result set, just like the first script did. Now let us highlight and execute the commented update commands in both tabs. To perform an update, each connection needs an exclusive lock. Neither connection can acquire that exclusive lock, because the other one is holding a shared lock. Although both connections are competing on only one resource, they have embraced in a conversion deadlock:
Msg 1205, Level 13, State 56, Line 1
Transaction (Process ID 59) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Also note that more than two connections may embrace in a deadlock.
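A common way to avoid this particular conversion deadlock (a sketch, not part of the original demonstration) is to read with an UPDLOCK hint, so the shared-to-exclusive conversion never happens. Only one session at a time can hold the update lock, so the other simply waits instead of deadlocking:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRAN
-- UPDLOCK: the read takes an update lock instead of a shared lock.
-- A second session running the same batch blocks on this SELECT
-- rather than both sessions holding S locks and deadlocking on the
-- conversion to X.
SELECT i
FROM dbo.Test WITH (UPDLOCK);
UPDATE dbo.Test SET i = 2;
COMMIT
```

Run the two-tab experiment again with this hint: the second tab blocks at the SELECT until the first commits, and no Msg 1205 is raised.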
Simply not releasing a lock for a long time is not a deadlock.
A deadlock is a situation where you can never move forward. It's caused by two (or more) processes that are each waiting for the others to finish, while all of those involved hold a lock that prevents the other(s) from continuing.
The only way out of a deadlock is to kill processes to free their locks; no matter how long you wait, it cannot complete on its own.
