We have a number of configuration scripts which are a bit complicated. Those scripts use a number of stored procedures available in our database to insert the configuration data. If any script attempts to insert invalid data, the stored procedures call
RAISERROR(@msg, 16, 1)
and then the SP returns. The SPs start their own named transactions and roll back/commit those named transactions; however, LB doesn't detect the raised error and treats the execution as successful. But we don't want to continue if that happens.
failOnError is set to true in the changesets, but LB still proceeds. The failed changeset is even marked as executed in DATABASECHANGELOG, as if it had succeeded.
We also tried removing the nested (named) transactions, with no luck.
When we removed the name from the transaction and just used BEGIN TRAN, the LB execution did stop at the incorrect script, but then LB failed to commit its own transaction and couldn't release the lock, so it remained LOCKED.
Is there any way to tell LB that an error happened and make it stop?
We are using Liquibase 3.5.0 and Microsoft SQL Server.
=== EDIT
So after debugging Liquibase, we found two things:
When connected to MS SQL Server, if a RAISERROR occurs and there are also resultsets in the script, no exception is thrown unless we call statement.getMoreResults(). The same thing happens with Sybase (we tested it with Sybase too). So we thought that in LB, after executing the statement, we might need to call getMoreResults() until it either throws an exception or indicates there are no more results, which means no error happened.
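For reference, the problematic shape is a script that returns a resultset before raising the error, e.g. (a minimal sketch):
SELECT 1 AS probe;                          -- resultset is returned first
RAISERROR('invalid configuration', 16, 1);  -- only surfaces to the JDBC client once it moves past the resultset (e.g. via getMoreResults())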
A script calls a stored procedure. The stored procedure has 'BEGIN TRAN' and at the end it either COMMITs or ROLLBACKs. If a rollback occurs, it also does RAISERROR. Note that our scripts don't do any updates/inserts; they only provide the data in a temp table, so we don't do transaction handling in our scripts. In this scenario, assuming we added code to call getMoreResults(), the exception is thrown correctly, but then LB's executor tries database.rollback(), and later, in StandardLockService, before releasing the lock, it tries database.rollback() again, which ends in an exception because our SP has already rolled back the transaction. This last rollback in LB causes the error raised by JDBC to be swallowed, so not only do we lose the error that caused the failure, but the lock also remains unreleased. This is the biggest concern, because even if we fix the script and re-run it, the lock hasn't been released and we need to release it manually.
One may argue that our transaction handling is not correct, but all I am trying to say is that releasing the lock should not be affected by whether our script is correct. LB should release the lock and then throw an exception (or continue) if a script/changeset does not run successfully.
If anybody else is facing this too: in my case I had a very complex SQL script, only for MS SQL Server. It also failed to stop the execution of the LB changes when an error occurred in the SQL script, regardless of whether I used RAISERROR or THROW.
Things I needed to do to get it to work (a skeleton is sketched after this list):
remove (or comment out) all places where resultsets were created (SELECT statements)
start the SQL script with "SET NOCOUNT ON;" to suppress the "(... lines affected)" results from INSERT or UPDATE
end the SQL script with "SET NOCOUNT OFF;" so that LB can work properly right after executing the SQL script (setting EXECTYPE)
Use a precondition (https://docs.liquibase.com/concepts/advanced/preconditions.html): create another changeset and check the execution result before proceeding to the next one.
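A minimal skeleton of a script following these rules might look like this (the TRY/CATCH wrapper and the table name are my assumptions, not from the answer above):
SET NOCOUNT ON;
BEGIN TRY
    -- inserts/updates only; no SELECTs that return resultsets
    INSERT INTO dbo.SomeConfigTable (Name) VALUES ('example');  -- hypothetical table
END TRY
BEGIN CATCH
    THROW;  -- re-raise so the client (LB) sees the failure
END CATCH;
SET NOCOUNT OFF;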
Related
1st)
I have a Sequence Container. It has 4 different Execute SQL Tasks and 4 different DFTs where data is inserted into different tables.
I want to implement a transaction, with or without the MSDTC service, on package failure, i.e., all of the data should be rolled back on failure of any of the DFTs or Execute SQL Tasks.
How do I implement it? When I try to implement it with the MSDTC service I get the "OLEDB Connection" error, and without MSDTC only the last Execute SQL Task gets rolled back while the rest of the data is still inserted. How can I implement this on SSIS 2017?
2nd)
When I tried without MSDTC, by setting the connection property RetainSameConnection to TRUE and adding two more Execute SQL Tasks for begin transaction and commit, I faced an issue with the EVENT HANDLER: I was not able to log errors into a different table. Either the rollback worked or the event handler did, but not both.
As soon as the error occurred, control went to the event handler, and then everything was rolled back, including the work done by the task in the event handler.
3rd)
The Sequence Container is used for parallel execution of tasks. So when one of the 4 tasks failed, only that particular task was rolled back; the rest of the SQL tasks kept inserting data into the tables.
Thanks in Advance!! ;-)
One option I've used (without MSDTC) is to configure your OLEDB connection with RetainSameConnection=True
(via the Properties window).
Then begin a transaction before your Sequence Container and commit afterwards (all sharing the same OLEDB connection).
Works quite well and is pretty easy to implement.
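The two extra Execute SQL Tasks contain nothing more than this (a sketch):
BEGIN TRANSACTION;   -- Execute SQL Task before the Sequence Container
COMMIT TRANSACTION;  -- Execute SQL Task after the Sequence Container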
According to my scenario:
I used a sequence container (which contains the different DFTs and tasks) and added 3 more Execute SQL Tasks:
1st: Begin Transaction T1 (before the sequence container)
2nd: Commit Transaction T1 (after the sequence container)
3rd: Rollback Transaction T1 (after the sequence container), with the precedence constraint set to Failure, i.e., the Execute SQL Task containing the rollback operation executes only when the sequence container fails.
Note: I tried to roll back this way, but only the nearest Execute SQL Task was rolled back and the rest of the data was still inserted. So what's the solution? In that same Execute SQL Task I truncated the tables where rows were being inserted, so when the sequence container fails, the Execute SQL Task truncates all the data in the respective tables:
Rollback transaction T1
go
truncate table Table_name1
go
truncate table table_name2
go
truncate table table_name3
IMPORTANT: To make the above operation work, make sure RetainSameConnection is set to True in the connection manager properties (by default it is False).
Now, to log errors into user-defined tables, we make use of an event handler. The scenario was: when the sequence container failed, everything was rolled back, including the table used by the Execute SQL Task in the event handler. So what's the solution?
When you are not using the SSIS transaction properties, by default every task has TransactionOption set to Supported. The Execute SQL Task in the event handler also has Supported, so it joins the same transaction. To make the event handler work properly, change the connection of its Execute SQL Task, i.e., use a different connection and set its TransactionOption to NotSupported. It will then not join the same transaction, and when an error occurs it will log the errors into the table.
Note: we are using the sequence container for parallel execution of tasks. What if an error occurs inside the sequence container, in a task that does not allow the sequence container to fail? In that case, connect all the tasks serially. Yes, that defeats the purpose of the sequence container, but that is how I got my solution to work.
Hope it helps all! ;-)
I am writing in C++ against a SQL Server database. I have an object called SQLTransaction that, when created at the start of a code block, sends 'begin transaction' to SQL Server.
I then send one or more SQL statements to the server. If all goes well, I set a flag in the SQLTransaction object to let it know the set of commands went well. When the SQLTransaction object then goes out of scope, it sends either 'commit transaction' or 'rollback transaction' to the server, depending on the state of the flag.
It looks something like this:
{
    TSQLTransaction SQLTran;  // constructor sends 'begin transaction' (note: "TSQLTransaction SQLTran();" would declare a function, not an object)
    try
    {
        Send( SomeSQLCommand );
    }
    catch( EMSError &e )
    {
        InformOperator();
        return;               // flag never set: destructor sends 'rollback transaction'
    }
    SQLTran.commit();         // set the flag: destructor sends 'commit transaction'
}
I had a SQL statement in one of these blocks that sent a poor command, and that command threw SQL error 8114:
Error converting data type varchar to numeric
I have since fixed that particular issue. What I don't understand is the fact that I was also receiving a second SQL error with the message
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
I can't find anything that tells me this transaction couldn't or shouldn't be rolled back after the failure.
This exact same SQLTransaction object is used in many places in my application and always seemed to work correctly until now. This SQL error seems to be treated differently for some reason. Are there some errors that SQL Server automatically rolls back? I'd really like to understand what's happening here.
Thanks
There is a connection option, SET XACT_ABORT, that determines the fate of the current transaction(s) when a SQL statement throws an error. Basically, when it is OFF, the transaction (usually) survives and execution continues; when it is ON, all open transactions in the current connection are rolled back and the batch is terminated. That would explain your second error: the server had already rolled back the transaction, so your object's 'rollback transaction' found no open transaction to roll back.
The option could be set:
On the connection level (different database access drivers might have different defaults for connection options);
On the SQL Server instance level, where there is a default value.
Check whether any of these was changed recently. Also, if you capture a trace in SQL Profiler, the "ExistingConnection" event lists the current connection settings. You can always check the option's state there and rule it out if it's turned off. In that case, I would look closer at the trace; there might be additional commands sent to the server that aren't apparent from your client code.
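A quick way to see the effect for yourself (a sketch; the two batches matter, since the error terminates the first one):
SET XACT_ABORT ON;
BEGIN TRANSACTION;
SELECT CONVERT(numeric(10, 0), 'abc');  -- error 8114; the transaction is rolled back and the batch aborts
GO
ROLLBACK TRANSACTION;  -- fails: no corresponding BEGIN TRANSACTION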
Let me also back up a step: I'm trying to implement a sanity check inside an SSIS package. The idea is that the entire package runs in a read-uncommitted transaction, with the final step being a check that certain row counts are present, that kind of stuff. If they are NOT, I want to raise an exception and roll back the transaction.
If you can tell me how to do this, or, even better, suggest a better way to implement a sanity check, that would be great.
In order to fail the package if your observed rowcount differs from your expected rowcount:
Create a Package Global Variable to hold your expected rowcount. This could be derived from a RowCount step in your DFT, set manually, etc.
Edit your Execute SQL Task that provides the observed rowcount, and set the Result Set to Single Row.
In the Result Set tab of your Execute SQL Task, assign this Result Set to a variable.
Edit the precedence constraint prior to your final step. Set the Evaluation Operation to Expression and Constraint, and set the Value to Failure. In the expression, evaluate @[User::ResultSetVariable] != @[User::ExpectedRowCountVariable] (SSIS expressions use != rather than <>).
If the observed rowcount does not equal the expected rowcount, the package will fail.
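For example, the Execute SQL Task providing the observed rowcount could be as simple as this, with its Result Set set to Single Row and mapped to the variable (the table name is hypothetical):
SELECT COUNT(*) AS ObservedRowCount FROM dbo.LoadedRows;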
You can raise an error and roll back the transaction in an SSIS "Execute SQL" task with the following SQL:
Raiserror('Something went wrong',16,1)
This will cause the "Execute SQL" task to return the error to the SSIS package and the task will follow the "red" (failure) path. You can then use this path to rollback the transaction and do any tidying-up.
This approach has the advantage you can do all your processing in the Execute SQL task then call Raiserror if you need to.
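The tidy-up on the failure path can then safely roll back, e.g. (a sketch):
IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;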
I have tried a huge insert query in DB2.
INSERT INTO MY_TABLE_COPY ( SELECT * FROM MY_TABLE);
Before that, I set the following:
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGFILSIZ 70000;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGPRIMARY 50;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGSECOND 2;
db2stop force;
db2start;
and I got this error:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
SQL0964C The transaction log for the database is full.
Explanation:
All space in the transaction log is being used.
If a circular log with secondary log files is being used, an
attempt has been made to allocate and use them. When the file
system has no more space, secondary logs cannot be used.
If an archive log is used, then the file system has not provided
space to contain a new log file.
The statement cannot be processed.
User Response:
Execute a COMMIT or ROLLBACK on receipt of this message (SQLCODE)
or retry the operation.
If the database is being updated by concurrent applications,
retry the operation. Log space may be freed up when another
application finishes a transaction.
Issue more frequent commit operations. If your transactions are
not committed, log space may be freed up when the transactions
are committed. When designing an application, consider when to
commit the update transactions to prevent a log full condition.
If deadlocks are occurring, check for them more frequently.
This can be done by decreasing the database configuration
parameter DLCHKTIME. This will cause deadlocks to be detected
and resolved sooner (by ROLLBACK) which will then free log
space.
If the condition occurs often, increase the database
configuration parameter to allow a larger log file. A larger log
file requires more space but reduces the need for applications to
retry the operation.
If installing the sample database, drop it and install the
sample database again.
sqlcode : -964
sqlstate : 57011
Any suggestions?
I used the maximum values for LOGFILSIZ, LOGPRIMARY, and LOGSECOND.
The max value for LOGFILSIZ may differ across Windows, Linux, etc., but you can try a very big number and the DB will tell you the max; in my case it was 262144.
Also, LOGPRIMARY + LOGSECOND <= 256. I tried 128 for each and it worked for my huge query. The exact commands are sketched below.
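That is, using the same commands as in the question (these values are from my environment; yours may differ):
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGFILSIZ 262144;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGPRIMARY 128;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGSECOND 128;
db2stop force;
db2start;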
Instead of performing trial and error with the DB CFG parameters, you can put these INSERT statements in a stored procedure that commits at intervals, as sketched below.
Refer to the following post for details; this might help:
https://prasadspande.wordpress.com/2014/06/06/plsql-ways-updatingdeleting-the-bulk-data-from-the-table/
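The idea, sketched directly in SQL (the ID column and range size are assumptions; any key that slices the table evenly will do):
INSERT INTO MY_TABLE_COPY SELECT * FROM MY_TABLE WHERE ID BETWEEN 1 AND 1000000;
COMMIT;
INSERT INTO MY_TABLE_COPY SELECT * FROM MY_TABLE WHERE ID BETWEEN 1000001 AND 2000000;
COMMIT;
-- ...and so on, so that no single unit of work outgrows the transaction log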
Thanks
I'm helping with a deadlock hunt. The environment: Tomcat 5.5, Java 5, Microsoft SQL Server 2008, jTDS (replacing an old driver). We have a legacy connection pool.
The database code always follows this scheme:
connection = connectionPool.getConnection(); // 1
boolean previousAutoCommitStatus = connection.getAutoCommit(); // 2
connection.setAutoCommit(false); // 3
// ... use the connection ...
// execute prepared statement 4
// execute prepared statement 5
// execute prepared statement 6
connection.commit(); // 7
connection.setAutoCommit(previousAutoCommitStatus); // 8
connectionPool.releaseConnection(connection); // 9
While we hunt for the bug (pardon: the software defect), I was wondering: how does the driver work? My guess: whatever I do between (3) and (7) is queued by the driver/DBMS. Only when I call connection.commit() does the DBMS begin a new transaction, acquire every lock the operations need (I hope it is smart enough to lock the smallest possible set of objects), execute the statements, and release the locks, thus closing the transaction.
Or does the DBMS lock the table as soon as I execute a prepared statement?
EDIT: What I want to understand is whether commit() translates into a set of SQL statements beginning with "begin trans/lock table" and ending with "commit/unlock table", or whether every Java executeStatement() acquires the locks immediately.
TIA
If you are interested in the inside details:
On connection.commit(), the SQL Server JDBC implementation issues the following command:
IF @@TRANCOUNT > 0 COMMIT TRAN
@@TRANCOUNT = 0 -- no open transaction
@@TRANCOUNT = 1 -- 1 open transaction
@@TRANCOUNT = 10 -- 10 open transactions
On setting autocommit to false, it issues:
set implicit_transactions on
These are MS SQL Server-specific commands.
According to this resource, the transaction starts as soon as you call setAutoCommit(false).
I think it might still be driver-dependent, but this will be typical. See also MSDN, which says the same:
//Switch to manual transaction mode by setting
//autocommit to false. Note that this starts the first
//manual transaction.
con.setAutoCommit(false);
connection.setAutoCommit(false);
triggers "BEGIN TRAN" on the DB server, and
connection.commit();
triggers "COMMIT TRAN".
If you want to prevent locks between these two statements, set the connection's isolation level to "Read Uncommitted". You will have to ensure that this is acceptable in your scenario.
setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
What exactly goes on depends entirely on the driver implementation, so you need to check the documentation for the driver you are using to get a definitive answer.
setAutoCommit(false) does not necessarily begin the transaction on the database, and the statements do not necessarily execute on the database, or even acquire the locks, when you call the execute function in your code, as you surmised. That being said, as far as I know, shared (read) locks are ordinarily acquired when the statement is executed, and update locks when commit is called. You may be hitting a conversion deadlock (which happens when multiple statements' read locks on a given object are waiting to be converted to write locks); I'd check whether your update statements have nested selects in them that may lead to such a lock.
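If it is a conversion deadlock, one common mitigation (my suggestion, not something from this thread) is to take an update lock on the initial read so it never has to convert from shared to exclusive, e.g.:
SELECT Balance FROM dbo.Accounts WITH (UPDLOCK) WHERE AccountId = @id;  -- hypothetical table and column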