I need to handle an error while inserting records into a table (ECPG, PostgreSQL), but the job should not abort, commit, or rollback on a duplicate record (primary key violation). The job should skip it and continue with the next record.
Note: SQL_CODE = sqlca.sqlcode
if ( SQL_CODE == -403 )   /* equivalently: sqlca.sqlcode == ECPG_DUPLICATE_KEY */
{
    Log_error_tab();
}
else if ( SQL_CODE != SQL_SUCCESS )
{
    Job_fail();
}
If I handle it as above, the duplicate-key error is handled by calling Log_error_tab(), but the job then fails on the next DML operation with the error "sqlerrm.sqlerrmc: current transaction is aborted, commands ignored until end of transaction block on line (sqlstate: 25P02)".
That's the way PostgreSQL works: if a statement inside a transaction fails, the transaction is aborted, and all subsequent statements will fail with that message.
So you should EXEC SQL ROLLBACK before you attempt your next SQL statement.
If you don't want to roll back the whole transaction, you can set a savepoint prior to executing the “dangerous” SQL statement:
SAVEPOINT sname
Then, when the critical part is over, you can release the savepoint:
RELEASE SAVEPOINT sname
If you hit an error, you can roll back everything done since the savepoint was set, including the failed statement, with
ROLLBACK TO SAVEPOINT sname
Note that you should use savepoints sparingly if you want decent performance.
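A minimal ECPG sketch of that pattern, reusing the question's helper functions (the table and host variable names are hypothetical):

    EXEC SQL SAVEPOINT ins_sp;
    EXEC SQL INSERT INTO mytable (id, val) VALUES (:id, :val);
    if ( sqlca.sqlcode == ECPG_DUPLICATE_KEY )      /* -403 */
    {
        Log_error_tab();
        /* undo only the failed INSERT; the transaction stays usable */
        EXEC SQL ROLLBACK TO SAVEPOINT ins_sp;
    }
    else if ( sqlca.sqlcode != 0 )                  /* 0 == success */
    {
        Job_fail();
    }
    else
    {
        /* success: the savepoint is no longer needed */
        EXEC SQL RELEASE SAVEPOINT ins_sp;
    }

Note that ROLLBACK TO SAVEPOINT leaves the savepoint itself in place, so in a per-row loop you can keep reusing the same savepoint for the next record.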
I am using Dapper on ADO.NET, so at present I am doing the following:
using (IDbConnection conn = new SqlConnection("MyConnectionString"))
{
    conn.Open();
    using (IDbTransaction transaction = conn.BeginTransaction())
    {
        // ...
However, there are various isolation levels that a transaction can be set to, and I think those are the settings in question.
My first question is: how do I set the transaction isolation level (where I am using Dapper)?
My second question is what is the correct level for each of the following cases? In each of these cases we have multiple instances of a web worker (Azure) service running that will be hitting the DB at the same time.
I need to run monthly charges on subscriptions. So in a transaction I need to read a record and, if it's due for a charge, create the invoice record and mark the record as processed. Any other read of that record for the same purpose needs to fail. But any other reads of that record that are just using it to verify that it is active need to succeed.
So what transaction do I use for the access that will be updating the processed column? And what transaction do I use for the other access that just needs to verify that the record is active?
In this case it's fine if a conflict causes the charge to not be run (we'll get it the next day). But it is critical that we not charge someone twice. And it is critical that the read to verify that the record is active succeed immediately while the other operation is in its transaction.
I need to update a record where I am setting just a couple of columns. One use case is I set a new password hash for a user record. It's fine if other access occurs during this except for deleting the record (I think that's the only problem use case). If another web service is also updating that's the user's problem for doing this in 2 places simultaneously.
But it's key that the record stay consistent. And this includes the use case of "set NumUses = NumUses + #ParamNum" so it needs to treat the read, calculation, write of the column value as an atomic action. And if I am setting 3 column values, they all get written together.
1) Assuming that the invoicing process is a stored procedure with multiple statements, your best bet is to create a separate "lock" table to record the fact that the invoicing job is already running, e.g.
CREATE TABLE InvoicingJob( JobStarted DATETIME, IsRunning BIT NOT NULL )
-- Table will only ever have one record
INSERT INTO InvoicingJob
SELECT NULL, 0
EXEC InvoicingProcess
ALTER PROCEDURE InvoicingProcess
AS
BEGIN
DECLARE @InvoicingJob TABLE( IsRunning BIT )
-- Try to acquire lock
UPDATE InvoicingJob WITH( TABLOCK )
SET JobStarted = GETDATE(), IsRunning = 1
OUTPUT INSERTED.IsRunning INTO @InvoicingJob( IsRunning )
WHERE IsRunning = 0
-- To also recover from a job that has been running for more than a day
-- (i.e. one that likely crashed without releasing the lock), add:
-- OR ( IsRunning = 1 AND JobStarted <= DATEADD( DAY, -1, GETDATE() ) )
IF NOT EXISTS( SELECT * FROM @InvoicingJob )
BEGIN
PRINT 'Another Job is already running'
RETURN
END
ELSE
RAISERROR( 'Start Job', 0, 0 ) WITH NOWAIT
-- Do invoicing tasks
WAITFOR DELAY '00:01:00' -- to simulate execution time
-- Release lock
UPDATE InvoicingJob
SET IsRunning = 0
END
2) Read about how transactions work: https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql?view=sql-server-2017
https://learn.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql?view=sql-server-2017
Your second question is quite broad.
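On the first question, a minimal sketch of the mechanics (assuming SQL Server; the level shown is only an example): with Dapper you can pass a System.Data.IsolationLevel to conn.BeginTransaction(...), which roughly corresponds to the T-SQL session setting below.

    -- applies to the current session/connection; run it before starting the transaction
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;
        -- reads and writes here run under SERIALIZABLE
    COMMIT TRANSACTION;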
Pseudo code here:
RecordSetA.Open(&DB);
try
{
    DB.m_WorkSpace.BeginTrans();
    RecordSetA.Close();          // closing the recordset inside the transaction
    DB.m_WorkSpace.CommitTrans();
}
catch (...)
{
    DB.m_WorkSpace.Rollback();
}
I can perform RecordSetA.Move() and RecordSetA.Find() inside the transaction without any problem. But if I close this recordset inside the transaction, the subsequent Rollback() or Commit() raises a DAO error: "You tried to commit or rollback a transaction without first beginning a transaction".
I know that in ODBC, Close() should not be performed inside a transaction (https://msdn.microsoft.com/en-us/library/sx0e0xze.aspx), but why can't I do so in DAO?
Hi, I have the following stored procedure...
CREATE OR REPLACE PROCEDURE DB.INSERTGOOD
(
--CapRefCursor OUT Cap_Cur_Pkg.CapCur,
p_APPLIANT_TLT IN GOODRIGHT_MANUAL.APPLICANT_TLT%TYPE,
p_APPLIANT_NME IN GOODRIGHT_MANUAL.APPLICANT_NME%TYPE,
p_APPLICANT_SURNME IN GOODRIGHT_MANUAL.APPLICANT_SURNME%TYPE,
p_COMPANY_NME IN GOODRIGHT_MANUAL.COMPANY_NME%TYPE,
p_ID_CDE IN GOODRIGHT_MANUAL.ID_CDE%TYPE,
p_ADD1 IN GOODRIGHT_MANUAL.ADD1%TYPE,
p_OCCUPATION1 IN GOODRIGHT_MANUAL.OCCUPATION1%TYPE,
p_REMARK1 IN GOODRIGHT_MANUAL.REMARK1%TYPE,
p_SOURCE IN GOODRIGHT_MANUAL.SOURCE%TYPE
)
IS
BEGIN
INSERT
INTO GOODRIGHT_MANUAL
(
SEQ_ID,
APPLICANT_TLT,
APPLICANT_NME,
APPLICANT_SURNME,
COMPANY_NME,
ID_CDE,
ADD1,
OCCUPATION1,
REMARK1,
GOODRIGHT_MANUAL.SOURCE
)
VALUES
(
goodright_seq.nextval,
p_APPLIANT_TLT,
p_APPLIANT_NME,
p_APPLICANT_SURNME,
p_COMPANY_NME,
lower(p_ID_CDE),
p_ADD1,
p_OCCUPATION1,
p_REMARK1,
p_SOURCE
);
COMMIT;
-- OPEN CapRefCursor FOR
--select 'True';
EXCEPTION
WHEN DUP_VAL_ON_INDEX
THEN ROLLBACK;
-- select 'False';
END DB.INSERTGOOD;
/
Here I want to return the string 'True' if the transaction commits successfully and 'False' if it is rolled back.
An output variable CapRefCursor is defined, but I don't know how to assign true/false to that variable and return it.
Thanks in advance.
You have defined a procedure with no OUT parameter, therefore it cannot return anything.
You have several options to return success information:
define a function instead of a procedure. A function always returns something; in your case you could define the returned string as a VARCHAR2, for example.
add an OUT parameter to the procedure. OUT parameters are logically equivalent to function return values. You can have more than one such parameter.
modify your logic so that the procedure returns nothing when it works and throws an exception when it fails.
I would go with solution (3) because:
solutions (1) and (2) are bug-prone: you may easily forget to check the return code, in which case your program will continue after a failure as if no error had happened. Ignoring errors is the surest way to turn a benign bug into a monstrosity, because it can lead to extensive data corruption. Your program may run for months without you realising that it is intermittently failing!
Exception logic is designed to overcome this problem and makes the code cleaner and clearer: no more ugly if-then-else after every single procedure call. For this reason alone, solutions (1) and (2) are considered a code smell (anti-pattern) when used extensively to return success/error state.
Less code is involved: just remove the EXCEPTION block and let the error propagate.
procedures that fail will undo their work without rolling back the whole transaction if you let the exception propagate (and don't issue intermediate commits).
Finally, in general you should not control transaction logic in your sub-procedures. A procedure that does a single insert is probably part of a larger transaction. You should not let this procedure either commit or rollback. Your calling code, be it PL/SQL, GUI or script should decide if the transaction should move forward and complete or be rolled back.
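A minimal sketch of option (3) under those guidelines, using a hypothetical cut-down version of the procedure for brevity (insertgood_demo and its two-parameter signature are illustrative; the full parameter list from the question carries over unchanged):

    CREATE OR REPLACE PROCEDURE insertgood_demo (
        p_id_cde IN GOODRIGHT_MANUAL.ID_CDE%TYPE,
        p_source IN GOODRIGHT_MANUAL.SOURCE%TYPE
    )
    IS
    BEGIN
        -- no COMMIT and no EXCEPTION handler: errors propagate to the caller
        INSERT INTO GOODRIGHT_MANUAL ( SEQ_ID, ID_CDE, GOODRIGHT_MANUAL.SOURCE )
        VALUES ( goodright_seq.nextval, lower(p_id_cde), p_source );
    END insertgood_demo;
    /

    -- the caller owns the transaction and decides commit vs rollback
    BEGIN
        insertgood_demo('ABC123', 'manual');
        COMMIT;
    EXCEPTION
        WHEN DUP_VAL_ON_INDEX THEN
            ROLLBACK;
            DBMS_OUTPUT.PUT_LINE('False');  -- duplicate key: report the failure
    END;
    /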
Sometimes PostgreSQL raises deadlock errors.
The trigger on the table does a SELECT ... FOR UPDATE.
The comment table:
http://pastebin.com/L1a8dbn4
Log (the INSERT statements are cut):
2012-01-26 17:21:06 MSK ERROR: deadlock detected
2012-01-26 17:21:06 MSK DETAIL: Process 2754 waits for ExclusiveLock on tuple (40224,15) of relation 735493 of database 734745; blocked by process 2053.
Process 2053 waits for ShareLock on transaction 25162240; blocked by process 2754.
Process 2754: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
Process 2053: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (4071267066, 935967, 11372945, 'text2') RETURNING comment.id;
2012-01-26 17:21:06 MSK HINT: See server log for query details.
2012-01-26 17:21:06 MSK CONTEXT: SQL statement "SELECT comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE"
PL/pgSQL function "increase_comment_counter" line 5 at SQL statement
2012-01-26 17:21:06 MSK STATEMENT: INSERT INTO comment (user_id, content_id, reply_id, text) VALUES (1756235868, 935967, 11378142, 'text1') RETURNING comment.id;
And trigger on table comment:
CREATE OR REPLACE FUNCTION increase_comment_counter() RETURNS TRIGGER AS $$
DECLARE
comments_count_var INTEGER;
BEGIN
SELECT INTO comments_count_var comments_count FROM content WHERE content.id = NEW.content_id FOR UPDATE;
UPDATE content SET comments_count = comments_count_var + 1, last_comment_dt = now() WHERE content.id = NEW.content_id;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER increase_comment_counter_trigger AFTER INSERT ON comment FOR EACH ROW EXECUTE PROCEDURE increase_comment_counter();
Why can this happen?
Thanks!
These are two comments being inserted with the same content_id. Merely inserting the comment will take out a SHARE lock on the content row, in order to stop another transaction deleting that row until the first transaction has completed.
However, the trigger then goes on to upgrade the lock to EXCLUSIVE, and this can be blocked by a concurrent transaction performing the same process. Consider the following sequence of events:
Txn 2754                              Txn 2053
Insert Comment
                                      Insert Comment
Lock Content#935967 SHARE
  (performed by fkey)
                                      Lock Content#935967 SHARE
                                        (performed by fkey)
Trigger
Lock Content#935967 EXCLUSIVE
  (blocks on 2053's share lock)
                                      Trigger
                                      Lock Content#935967 EXCLUSIVE
                                        (blocks on 2754's share lock)
So: deadlock.
One solution is to immediately take an exclusive lock on the content row before inserting the comment. i.e.
SELECT 1 FROM content WHERE content.id = 935967 FOR UPDATE
INSERT INTO comment(.....)
Another solution is simply to avoid this "cached counts" pattern completely, except where you can prove it is necessary for performance. If so, consider keeping the cached count somewhere other than the content table, e.g. a dedicated table for the counter. That will also cut down on the update traffic to the content table every time a comment gets added. Or maybe just re-select the count and use memcached in the application. There is no getting round the fact that wherever you store this cached count will be a choke point; it has to be updated safely.
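A rough sketch of the dedicated-counter-table variant (all names here are hypothetical, and it assumes a counter row is created together with each content row; the last_comment_dt update is left out for brevity):

    -- side table holding only the counter, one row per content row
    CREATE TABLE content_comment_count (
        content_id     BIGINT PRIMARY KEY REFERENCES content(id),
        comments_count INTEGER NOT NULL DEFAULT 0
    );

    -- a single atomic UPDATE replaces the SELECT ... FOR UPDATE + UPDATE pair,
    -- so the trigger never upgrades a share lock on the content row itself
    CREATE OR REPLACE FUNCTION bump_comment_counter() RETURNS TRIGGER AS $$
    BEGIN
        UPDATE content_comment_count
           SET comments_count = comments_count + 1
         WHERE content_id = NEW.content_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER bump_comment_counter_trigger
        AFTER INSERT ON comment
        FOR EACH ROW EXECUTE PROCEDURE bump_comment_counter();

Concurrent inserts then contend on the small side table instead of the content row, though the counter row is still the choke point the answer warns about.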
I currently have a block like the one below. With this we set AutoCommit off and then do a commit/rollback. Now at the rollback line we are getting a failure saying "rollback ineffective with AutoCommit enabled at". How could this happen, since AutoCommit was indeed disabled by the begin_work? This problem was not there for a long time and it is suddenly occurring.
On investigating further, I found that update_sql1 created a #temp table, and update_sql2, update_sql3 and update_sql4 query the same #temp table and are failing with an Invalid object name '#temp' error. Control then flows immediately to if ($@), where $dbh->{AutoCommit} turns out to be set to 1. First of all, it is really weird that update_sql2 onwards could not find the object #temp when update_sql1 was indeed successful.
Any pointers?
====
$dbh->db_Main()->begin_work;
eval {
    $dbh->do($update_sql1);
    $dbh->do($update_sql2);
    $dbh->do($update_sql3);
    $dbh->do($update_sql4);
    $dbh->commit;
    1;
};
if ($@) {
    $logger->info("inside catch");
    $logger->info("autocommit is $dbh->{AutoCommit}");
    $dbh->rollback;
}
===
Here is the full error message
Issuing rollback() due to DESTROY without explicit disconnect() of DBD::ODBC::db handle ..
rollback ineffective with AutoCommit enabled ...
Under AutoCommit, the begin starts a transaction which is then automatically committed, statement by statement. You have to turn AutoCommit off to get a real multi-statement transaction that you can commit or roll back yourself.