Sometimes PostgreSQL raises a deadlock error. The trigger on the table does a SELECT ... FOR UPDATE.
The comment table:
http://pastebin.com/L1a8dbn4
Log (INSERT statements are truncated):
2012-01-26 17:21:06 MSK ERROR: deadlock detected
2012-01-26 17:21:06 MSK DETAIL: Process 2754 waits for ExclusiveLock on tuple (40224,15) of relation 735493 of database 734745; blocked by process 2053.
Process 2053 waits for ShareLock on transaction 25162240; blocked by process 2754.
Process 2754: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
Process 2053: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (4071267066, 935967, 11372945, 'text2') RETURNING comment.id;
2012-01-26 17:21:06 MSK HINT: See server log for query details.
2012-01-26 17:21:06 MSK CONTEXT: SQL statement "SELECT comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE"
PL/pgSQL function "increase_comment_counter" line 5 at SQL statement
2012-01-26 17:21:06 MSK STATEMENT: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
And the trigger on the comment table:
CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
DECLARE
    comments_count_var INTEGER;
BEGIN
    SELECT INTO comments_count_var comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE;
    UPDATE content SET comments_count = comments_count_var + 1, last_comment_dt = now() WHERE content.id = NEW.content_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER increase_comment_counter_trigger AFTER INSERT ON comment FOR EACH ROW EXECUTE PROCEDURE increase_comment_counter();
Why can this happen?
Thanks!
These are two comments being inserted with the same content_id. Merely inserting the comment takes out a SHARE lock on the content row, in order to stop another transaction from deleting that row until the first transaction has completed.
However, the trigger then goes on to upgrade the lock to EXCLUSIVE, and this can be blocked by a concurrent transaction performing the same process. Consider the following sequence of events:
1. Txn 2754: Insert Comment
2. Txn 2053: Insert Comment
3. Txn 2754: Lock Content#935967 SHARE (performed by the fkey check)
4. Txn 2053: Lock Content#935967 SHARE (performed by the fkey check)
5. Txn 2754: Trigger: Lock Content#935967 EXCLUSIVE (blocks on 2053's share lock)
6. Txn 2053: Trigger: Lock Content#935967 EXCLUSIVE (blocks on 2754's share lock)
So: deadlock.
One solution is to take an exclusive lock on the content row immediately, before inserting the comment, i.e.
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE
INSERT INTO comment(.....)
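For this to help, the explicit lock and the INSERT must run in the same transaction; a minimal sketch, reusing the values from the log above:

BEGIN;
-- Lock the parent content row exclusively before inserting the comment
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE;
INSERT INTO comment (user_id, content_id, reply_id, text)
VALUES (1756235868, 935967, 11378142, 'text1')
RETURNING comment.id;
COMMIT;

The trigger's own SELECT ... FOR UPDATE then finds the lock already held by the same transaction, so there is no share-to-exclusive upgrade left to deadlock on.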
Another solution is simply to avoid this "cached counts" pattern completely, except where you can prove it is necessary for performance. If so, consider keeping the cached count somewhere other than the content table, e.g. a dedicated table for the counter (see the sketch below). That will also cut down on the update traffic to the content table every time a comment gets added. Or maybe just re-select the count and use memcached in the application. There's no getting round the fact that wherever you store this cached count, it is going to be a choke point and has to be updated safely.
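A minimal sketch of the dedicated counter table variant (the table and column names are illustrative, and it assumes a counter row is created whenever a content row is created):

CREATE TABLE content_comment_count (
    content_id      INTEGER PRIMARY KEY REFERENCES content (id),
    comments_count  INTEGER NOT NULL DEFAULT 0,
    last_comment_dt TIMESTAMP
);

CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
BEGIN
    -- A plain UPDATE takes the row lock it needs on the counter table;
    -- the foreign key check holds no share lock there, so no lock upgrade occurs.
    UPDATE content_comment_count
    SET comments_count = comments_count + 1,
        last_comment_dt = now()
    WHERE content_id = NEW.content_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

Concurrent comments on the same content still queue behind that row lock, but they no longer deadlock against the foreign key's share lock on content.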
I have a trigger that basically gets the last number inserted (folio) based on two columns (service_type and prefix) and then increments it by one. Everything works fine, but when there are a lot of INSERT statements at the same time, the function sometimes uses the same last value and duplicates the folio column.
CREATE OR REPLACE FUNCTION public.insert_folio()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
    incremental INTEGER;
BEGIN
    -- Find the highest folio already used for this prefix / service_type
    SELECT max(f.folio) INTO incremental FROM "folios" f
    WHERE f.prefix = NEW."prefix" AND service_type = NEW."service_type";
    NEW."folio" = incremental + 1;
    IF NEW."folio" IS NULL THEN
        NEW."folio" = 1;
    END IF;
    RETURN NEW;
END;
$function$;

CREATE TRIGGER insert_folio BEFORE INSERT ON "folios" FOR EACH ROW EXECUTE PROCEDURE insert_folio();
sample:

folio                  service_type    prefix
1                      DOCUMENT        DC
1                      IMAGE           IMG
1                      IMAGE           O
2                      IMAGE           O
2 (this should be 3)   IMAGE           O
Any ideas?
Thanks!
It is because you have concurrent transactions that both see the same data. The second transaction does not see the record inserted by the first transaction until the first commits.
To prevent this behaviour, you will have to either lock the whole table on insert to block concurrent writes, or use advisory locks to serialize insertions for the same service_type and prefix (see the sketch below).
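A minimal sketch of the advisory-lock variant, assuming the default READ COMMITTED isolation (the lock key built with hashtext is just one illustrative way to map prefix/service_type to a lock id). A concurrent insert for the same pair waits until the first transaction commits and then sees its row:

CREATE OR REPLACE FUNCTION public.insert_folio()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
    incremental INTEGER;
BEGIN
    -- Serialize inserts for this (prefix, service_type) pair until commit
    PERFORM pg_advisory_xact_lock(hashtext(NEW."prefix" || '|' || NEW."service_type"));
    SELECT max(f.folio) INTO incremental
    FROM "folios" f
    WHERE f.prefix = NEW."prefix" AND service_type = NEW."service_type";
    NEW."folio" := COALESCE(incremental, 0) + 1;
    RETURN NEW;
END;
$function$;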
We need to implement some singleton pattern to ensure a stored procedure cannot be run several times simultaneously.
As I cannot see this functionality in place, I thought about implementing this via a "Lock" table.
We are in a "batch" environment so waiting a few seconds is no problem.
CREATE TABLE SHARED.LOCK
( LOCK_NAME   STRING NOT NULL PRIMARY KEY
, SESSION_ID  STRING
, ACQUIRED_AT TIMESTAMP_NTZ
);
LOCK_NAME is forced to upper case and used as a Primary Key
SESSION_ID is the current session
ACQUIRED_AT is just useful information
I then create a stored proc to "acquire" the lock $LOCK_NAME that tries to update the lock record with its own session id as long as it is not "locked" already
UPDATE SHARED.LOCK
SET LOAD_ID = $LOAD_ID
,SESSION_ID = CURRENT_SESSION()
,ACQUIRED_AT = CURRENT_TIMESTAMP()
WHERE LOCK_NAME = $LOCK_NAME
AND SESSION_ID IS NULL;
To avoid Snowflake optimistic locking side effects, I would ensure that this stored procedure is not called as part of an explicit transaction.
I then check whether I successfully "acquired" this lock
SELECT 1
FROM SHARED.LOCK
WHERE LOCK_NAME = $LOCK_NAME
AND LOAD_ID = $SESSION_ID;
If I get a record, then I have the lock.
Otherwise, I could wait X seconds and try again later, up to a certain number of attempts.
Once I am done, I can release the lock with a simple Update statement
UPDATE SHARED.LOCK
SET SESSION_ID = NULL
,ACQUIRED_AT = NULL
WHERE LOCK_NAME = $LOCK_NAME
AND SESSION_ID = $SESSION_ID;
And of course we'll have to do something about locks not released within a certain amount of time or locked by a session that is not live anymore, etc...
I think this should work... but maybe there is a simpler way to implement a singleton in Snowflake?
Any better ideas?
Depending on requirements, if the stored procedure is going to run on a schedule, a TASK could be used, which has overlap protection built in:
CREATE OR REPLACE TASK my_task
WAREHOUSE = compute_wh
SCHEDULE = '1 minute'
ALLOW_OVERLAPPING_EXECUTION = FALSE
AS
CALL procedure_call();
CREATE TASK - ALLOW_OVERLAPPING_EXECUTION :
ALLOW_OVERLAPPING_EXECUTION = TRUE | FALSE
Specifies whether to allow multiple instances of the task tree to run concurrently
FALSE ensures only one instance of a particular tree of tasks is allowed to run at a time.
Demo:
CREATE TABLE log(id INT NOT NULL IDENTITY(1,1), d TIMESTAMP);
CREATE OR REPLACE procedure insert_log()
returns string
language javascript
execute as owner
as
$$
  // Record when this run started
  snowflake.execute ({sqlText: "INSERT INTO log (d) SELECT CURRENT_TIMESTAMP()"});
  // Simulate a long-running job so that scheduled runs would otherwise overlap
  snowflake.execute ({sqlText: "CALL SYSTEM$WAIT(2, 'MINUTES')"});
  return "Succeeded.";
$$
;
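For the demo, the task would presumably be recreated to call insert_log instead of procedure_call (same definition as above, only the CALL changes):

CREATE OR REPLACE TASK my_task
WAREHOUSE = compute_wh
SCHEDULE = '1 minute'
ALLOW_OVERLAPPING_EXECUTION = FALSE
AS
CALL insert_log();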
ALTER TASK my_task RESUME;
SELECT * FROM log;
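To confirm that scheduled runs did not overlap, the task history can also be inspected (assuming access to the standard INFORMATION_SCHEMA.TASK_HISTORY table function):

SELECT name, scheduled_time, query_start_time, completed_time, state
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(TASK_NAME => 'MY_TASK'))
ORDER BY scheduled_time DESC;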
I am using Dapper on ADO.NET. So at present I am doing the following:
using (IDbConnection conn = new SqlConnection("MyConnectionString"))
{
    conn.Open();
    using (IDbTransaction transaction = conn.BeginTransaction())
    {
        // ...
However, there are various transaction isolation levels that can be set; I think these are the relevant settings.
My first question is how do I set the transaction level (where I am using Dapper)?
My second question is what is the correct level for each of the following cases? In each of these cases we have multiple instances of a web worker (Azure) service running that will be hitting the DB at the same time.
I need to run monthly charges on subscriptions. So in a transaction I need to read a record and if it's due for a charge create the invoice record and mark the record as processed. Any other read of that record for the same purpose needs to fail. But any other reads of that record that are just using it to verify that it is active need to succeed.
So what transaction do I use for the access that will be updating the processed column? And what transaction do I use for the other access that just needs to verify that the record is active?
In this case it's fine if a conflict causes the charge to not be run (we'll get it the next day). But it is critical that we not charge someone twice. And it is critical that the read to verify that the record is active succeed immediately while the other operation is in its transaction.
I need to update a record where I am setting just a couple of columns. One use case is I set a new password hash for a user record. It's fine if other access occurs during this except for deleting the record (I think that's the only problem use case). If another web service is also updating that's the user's problem for doing this in 2 places simultaneously.
But it's key that the record stay consistent. And this includes the use case of "set NumUses = NumUses + #ParamNum" so it needs to treat the read, calculation, write of the column value as an atomic action. And if I am setting 3 column values, they all get written together.
1) Assuming that the invoicing process is a stored procedure with multiple statements, your best bet is to create another "lock" table to record the fact that the invoicing job is already running, e.g.
CREATE TABLE InvoicingJob( JobStarted DATETIME, IsRunning BIT NOT NULL )
-- Table will only ever have one record
INSERT INTO InvoicingJob
SELECT NULL, 0
EXEC InvoicingProcess
ALTER PROCEDURE InvoicingProcess
AS
BEGIN
    DECLARE @InvoicingJob TABLE( IsRunning BIT )

    -- Try to acquire lock
    UPDATE InvoicingJob WITH( TABLOCK )
    SET JobStarted = GETDATE(), IsRunning = 1
    OUTPUT INSERTED.IsRunning INTO @InvoicingJob( IsRunning )
    WHERE IsRunning = 0
    -- job has been running for more than a day i.e. likely crashed without releasing the lock
    -- OR ( IsRunning = 1 AND JobStarted <= DATEADD( DAY, -1, GETDATE()) )

    IF NOT EXISTS( SELECT * FROM @InvoicingJob )
    BEGIN
        PRINT 'Another Job is already running'
        RETURN
    END
    ELSE
        RAISERROR( 'Start Job', 0, 0 ) WITH NOWAIT

    -- Do invoicing tasks
    WAITFOR DELAY '00:01:00' -- to simulate execution time

    -- Release lock
    UPDATE InvoicingJob
    SET IsRunning = 0
END
2) Read about how transactions work: https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql?view=sql-server-2017
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql?view=sql-server-2017
Your second question is quite broad.
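As for the first question: Dapper runs on top of plain ADO.NET transactions, so the isolation level can be passed to BeginTransaction (it has an IsolationLevel overload), or set at the SQL level before the work starts. A minimal T-SQL sketch of the latter (the level shown is only an example):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- statements that must not see, or be seen by, conflicting concurrent work
COMMIT TRANSACTION;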
The scenario is the following -
OrderTable with Columns "OrderId" and "OrderType"
OrderRelationTable with Columns "OrderId" and "CustId"
OrderProcessTable with Columns "OrderId", "OrderType", "CustId", and "ProcessFlag"
The flow goes like this:
Application1 creates the record in OrderTable -> then passes the record to Application2 over MQ -> Application2 inserts the passed record into OrderRelationTable -> then a trigger in the Oracle DB creates the record in OrderProcessTable.
Problem
Sometimes the record is not inserted into the third table, OrderProcessTable. I am not sure whether it is caused by timing or by something that is not correct in the trigger.
Application1 Code
boolean updated = false;
/** JDBC prepared statement execution: insert into OrderTable in Java **/
int rowCount = ps.executeUpdate();
if (rowCount > 0) {
    updated = true;
}
log.log("updated flag:" + updated);
/** I am able to see in the log that the flag is true, and the record is inserted into OrderTable **/
Application2 Code
This doesn't really matter much assuming that it is some Java JDBC code that does the insert into OrderRelationTable and it is successful.
The Trigger
Assuming the syntax is correct.
CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
    v_order_type OrderTable.OrderType%TYPE := NULL;
BEGIN
    SELECT OrderType INTO v_order_type FROM OrderTable
    WHERE OrderId = :new.OrderId
    AND OrderType IS NOT NULL
    AND rownum = 1;
    IF v_order_type IS NOT NULL THEN
        INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
    END IF;
END;
Questions -
After the Application1 code is executed, is it guaranteed that the DB will have the OrderTable record available for a SELECT statement? (Assume that the updated flag is true.)
Is there a timing issue between the app code and the trigger, for example when the trigger runs the SELECT against OrderTable? (Of course the order id matches in OrderRelationTable and OrderTable.)
Basically, right now my problem is that sometimes (rarely) the record is not inserted into OrderProcessTable via the trigger even though it should be (the order type is not null). Any idea?
There's no timing issue, as far as I can tell.
As for the trigger code: what is the purpose of the AND rownum = 1 condition? I'm not saying that it is wrong, I'm just asking. Do you expect several rows to be returned by that query? If so, is that a legal situation? Wouldn't you rather handle it with a TOO_MANY_ROWS exception handler (i.e. instead of using the ROWNUM condition)?
What happens if the SELECT returns nothing? It then raises the NO_DATA_FOUND exception, the trigger fails, and it certainly doesn't insert anything. Is that propagated so that someone (a human being) or something (an error logging procedure) sees / catches it, so that you'd know that something went wrong?
And, of course, there is the fact that V_ORDER_TYPE remains NULL, which causes the INSERT to fail (as P. Salmon already suggested).
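A minimal sketch of how those two cases could be surfaced instead of failing silently (illustrative only; the error codes and messages are assumptions, and real error handling/logging would need to fit your environment):

CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
    v_order_type OrderTable.OrderType%TYPE;
BEGIN
    SELECT OrderType INTO v_order_type
    FROM OrderTable
    WHERE OrderId = :new.OrderId
      AND OrderType IS NOT NULL;

    INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        -- Make the missing parent row visible instead of silently skipping the insert
        RAISE_APPLICATION_ERROR(-20001, 'No OrderTable row with a non-null OrderType for OrderId ' || :new.OrderId);
    WHEN TOO_MANY_ROWS THEN
        RAISE_APPLICATION_ERROR(-20002, 'More than one OrderTable row for OrderId ' || :new.OrderId);
END;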
How do I achieve a transaction involving multiple DB operations on more than one table using iBatis & Spring?
Let me explain in detail:
I have 2 tables, A & B, with a master-detail relationship. [Both tables are in a single database.]
/* Table A: */
a_id [Primary Key]
[plus other columns]
/* Table B: */
b_id [Primary Key]
a_id [Foreign Key = PK of table A]
[plus other columns]
In my DAO I have the following methods (I am using iBatis SqlMap to perform DB operations):
insertA();
insertB();
updateA();
updateB();
deleteA();
deleteB();
Each of the above operations is atomic (and can be called by the client and committed in the database via Spring/iBatis).
Up to this point everything WORKS OK! [i.e. I am able to perform INDIVIDUAL insert/update/delete on each table.]
-- NEXT, I need to perform a combination of two of the above DB operations as one ATOMIC operation;
Here is what I want to achieve from the SVC layer:
start Transaction
operation on Table-A (via method of Dao class) - op #1
operation on Table-B (via method of Dao class) - op #2
end Transaction
Example 1:
start Transaction
insertA();
insertB();
end Transaction
Example 2:
start Transaction
updateA();
updateB();
end Transaction
Here, if op#2 Fails, I want op#1 also to be Rolled back. i.e. Complete Rollback.
So, I wrote an additional method in the service layer, which calls the above DAO methods.
Before running the (Svc) code, I manually [via cmd-line] change some data in the database, so that the 2nd operation FAILS due to DB constraints.
Now, op #2 [Table-B] FAILS, but op #1 is committed in the DB, i.e. there is NO complete rollback, ONLY a PARTIAL rollback.
If op #2 Fails, shouldn't op#1 also Roll back?
Here is what I am using in ApplicationContext.xml:
"DataSourceTransactionManager" [Spring] for Transaction.
iBatis 2.3.x [SqlMapClient]
Spring 3.0
DefaultAutoCommit is set to FALSE.
In "tx:method": [service method from where ATOMIC operation is to be performed)
propagation="REQUIRED" [Tried with other values also, but no use]
rollback-for=Exception-Name-for-which-to-rollback
Is there anything else that needs to be done?
Am I doing something wrong?
Is this correct way or is there a better option?
In my opinion, you should consider data integrity: if a failure of op #2 would make the system lose data integrity, then op #1 should be rolled back along with it.
To achieve what you want, just call op #1 and op #2, wrapping op #2 in a try/catch block, something like:
try {
    start Transaction;
    // pkA is the primary key of A
    Object pkA = insertA();
    updateA(pkA);
    try {
        Object pkB = insertB(pkA);
        updateB(pkB);
    }
    catch(Exception e) {
        logger.ERROR("Error when inserting and updating B. Ignore.", e);
    }
    commit Transaction;
}
catch(Exception e) {
    logger.ERROR(e);
    rollback Transaction;
}
HTH.