We are currently on SQL Server 2005 at work, and I am migrating an old FoxPro system to a new web application backed by SQL Server. I am using TRY...CATCH in T-SQL for transaction processing and it seems to be working very well. One of the other programmers at work was worried about this: he said he had heard of cases where the CATCH block did not always catch the error. I have beaten the sproc to death and cannot get it to fail (miss a catch), and the only issue I have found searching around the net is that it will not return the correct error number for error numbers < 5000. Has anyone experienced any other issues with TRY CATCH in T-SQL, especially one where it misses a catch? Thanks for any input you may wish to provide.
TRY ... CATCH doesn't catch every possible error, but the ones not caught are well documented in BOL under Errors Unaffected by a TRY…CATCH Construct:
TRY…CATCH constructs do not trap the following conditions:

- Warnings or informational messages that have a severity of 10 or lower.
- Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has a severity of 20 or higher and the database connection is not disrupted, TRY…CATCH will handle the error.
- Attentions, such as client-interrupt requests or broken client connections.
- When the session is ended by a system administrator by using the KILL statement.

The following types of errors are not handled by a CATCH block when they occur at the same level of execution as the TRY…CATCH construct:

- Compile errors, such as syntax errors, that prevent a batch from running.
- Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.

These errors are returned to the level that ran the batch, stored procedure, or trigger.
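The "same level of execution" rule above is easy to demonstrate. In the following sketch (the table name is invented and assumed not to exist), the first TRY...CATCH sits at the same level as the deferred-name-resolution error and never catches it, while running the same statement via EXEC pushes the error one level down, where it becomes catchable:

```sql
-- Not caught: the missing-object error occurs at the same level as the TRY...CATCH.
BEGIN TRY
    SELECT * FROM dbo.NoSuchTable;  -- dbo.NoSuchTable is assumed not to exist
END TRY
BEGIN CATCH
    PRINT 'caught';  -- never reached; the error is returned to the caller
END CATCH
GO

-- Caught: the same statement runs one execution level down inside EXEC().
BEGIN TRY
    EXEC ('SELECT * FROM dbo.NoSuchTable');
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- the "Invalid object name" error lands here
END CATCH
```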
There was one case in my experience where a TRY...CATCH block didn't catch the error. It was an error connected with collation:

Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "Latin1_General_CI_AI" in the equal to operation.

Maybe this error corresponds to one of the error types documented in BOL:

Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.
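If so, the conflict can be made catchable in the same way as other statement-level recompilation errors: run it one level down. This is only a sketch with invented temp tables:

```sql
CREATE TABLE #a (v varchar(10) COLLATE Latin1_General_CI_AS);
CREATE TABLE #b (v varchar(10) COLLATE Latin1_General_CI_AI);

-- Running the conflicting comparison one level down lets TRY...CATCH see it.
BEGIN TRY
    EXEC ('SELECT * FROM #a JOIN #b ON #a.v = #b.v');
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- the collation-conflict message lands here
END CATCH

-- The usual fix is an explicit COLLATE clause on one side of the comparison:
SELECT * FROM #a JOIN #b ON #a.v = #b.v COLLATE Latin1_General_CI_AS;
```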
TRY ... CATCH will fail to catch an error if you pass a "bad" search term to CONTAINSTABLE. For example:

DECLARE @WordList VARCHAR(800)
SET @WordList = 'crap"s'
CONTAINSTABLE(table, *, @WordList)

CONTAINSTABLE will give you a "syntax error", and any surrounding TRY ... CATCH does not catch it. This is particularly nasty because the error is caused by data, not by a "real" syntax error in your code.
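One workaround, sketched below, is to run the full-text query at a lower execution level via sp_executesql, so the data-driven "syntax error" becomes catchable (dbo.Docs is an invented full-text-indexed table):

```sql
DECLARE @WordList varchar(800) = 'crap"s';  -- the "bad" search term

BEGIN TRY
    EXEC sp_executesql
        N'SELECT [KEY] FROM CONTAINSTABLE(dbo.Docs, *, @terms)',
        N'@terms varchar(800)',
        @terms = @WordList;
END TRY
BEGIN CATCH
    -- The error now arises one level down, so the CATCH block does fire.
    PRINT 'Bad search term: ' + ERROR_MESSAGE();
END CATCH
```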
I'm working in SQL Server 2008. I built a big SQL statement that had a TRY/CATCH. I tested it by renaming a table (in dev). The statement blew up and didn't catch the error. TRY/CATCH in SQL Server is weak, but better than nothing. Here's a piece of my code; I can't include any more because of my company's restrictions.
COMMIT TRAN T1;
END TRY
BEGIN CATCH
-- Save the error.
SET @ErrorNumber = ERROR_NUMBER();
SET @ErrorMessage = ERROR_MESSAGE();
SET @ErrorLine = ERROR_LINE();
-- Put GSR.dbo.BlahBlahTable back the way it was.
ROLLBACK TRAN T1;
END CATCH
-- Output a possible error message. Knowing what line the error happened at really helps with debugging.
SELECT @ErrorNumber AS ErrorNumber, @ErrorMessage AS ErrorMessage, @ErrorLine AS LineNumber;
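For context, here is a self-contained version of the same pattern (the declarations are added by me; the transaction name and the commented-out work are placeholders):

```sql
DECLARE @ErrorNumber int, @ErrorMessage nvarchar(4000), @ErrorLine int;

BEGIN TRY
    BEGIN TRAN T1;
    -- ... work against GSR.dbo.BlahBlahTable goes here ...
    COMMIT TRAN T1;
END TRY
BEGIN CATCH
    -- Save the error before any further statement overwrites it.
    SELECT @ErrorNumber  = ERROR_NUMBER(),
           @ErrorMessage = ERROR_MESSAGE(),
           @ErrorLine    = ERROR_LINE();
    IF XACT_STATE() <> 0
        ROLLBACK TRAN T1;  -- put the table back the way it was
END CATCH

-- Knowing what line the error happened at really helps with debugging.
SELECT @ErrorNumber AS ErrorNumber, @ErrorMessage AS ErrorMessage, @ErrorLine AS LineNumber;
```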
I have never hit a situation where TRY...CATCH... failed. Neither, probably, have many of the people who read this question. This, alas, only means that if there is such a SQL bug, then we haven't seen it. The thing is, that's a pretty big "if". Believe it or not, Microsoft does put some effort into making their core software products pretty solid, and TRY...CATCH... is hardly a new concept. A quick example: in SQL 2005, I encountered a solid, demonstrable, and replicable bug while working out then-new table partitioning, and that bug had already been fixed by a patch. And TRY...CATCH... gets used a bit more frequently than table partitioning.
I'd say the burden of proof falls on your co-worker. If he "heard it somewhere", then he should try to back it up with some kind of evidence. The internet is full of proof for the old saying "just because everyone says it's so doesn't mean they're right".
Related
If I do the following query
EXEC spFoo
PRINT 'TEST'
and spFoo throws an error, it still executes the print statement.
However if I do
BEGIN TRY
EXEC cdb.spFoo
PRINT 'TEST'
END TRY
BEGIN CATCH
THROW;
END CATCH
It behaves as I expected and does not continue after an error.
Could someone please explain this behaviour to me? It still continues even if I encapsulate it in a transaction, and not just with a PRINT statement but with anything else as well. My initial thought was that it was a severity problem, but the severity was level 16. Is this normal T-SQL behaviour? If so, what motivates this design, which contradicts every other language I have ever worked with, where an error escalates directly?
I have tried and seen the same behaviour in SQL Server 2012, 2014 and 2017 across several different machines. The stored procedure in question is linked to a SQL CLR.
Severity level 16 is in the user-correctable range: the user is required to handle any errors, including deciding when termination is required.
With your first example:
EXEC spFoo
PRINT 'TEST'
these are independent statements and, although spFoo may fail, the server will move on to the next statement. Because the severity is less than 20, the batch is not automatically terminated.
With your second example,
BEGIN TRY
EXEC cdb.spFoo
PRINT 'TEST'
END TRY
BEGIN CATCH
THROW;
END CATCH
you have taken ownership of deciding what is associated with what.
Since one item in the TRY block failed, it would not move onto the next.
THROW always terminates the batch. Once you call THROW, any code that follows it will not be carried out. If that matters, you can use RAISERROR instead, which lets execution continue.
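A quick way to see the difference (both errors are severity 16, and XACT_ABORT is assumed OFF):

```sql
-- RAISERROR in the CATCH block does not terminate the batch...
BEGIN TRY
    SELECT 1/0;
END TRY
BEGIN CATCH
    RAISERROR ('handled: divide by zero', 16, 1);
    PRINT 'still runs after RAISERROR';
END CATCH
GO

-- ...but THROW does: nothing after it in the batch executes.
BEGIN TRY
    SELECT 1/0;
END TRY
BEGIN CATCH
    THROW;
    PRINT 'never reached';
END CATCH
```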
A detailed explanation of errors
Part 2 of explanations
An answer from the same person
Severity levels
One would think error handling was straightforward. Not really.
I have taken a screenshot from Erland Sommarskog's excellent article, which shows how different errors behave:

Seeing that PRINT 'TEST' is run after the failed EXEC, I guess you are in the Statement-terminating / XACT_ABORT OFF / TRY-CATCH OFF cell, which indicates "Aborts statement", meaning the next statement still runs. Catching works and skips the next step.
You should see the explanation given here for TRY...CATCH:

A TRY…CATCH construct catches all execution errors that have a severity higher than 10 that do not close the database connection.
In SQL Server, TRY...CATCH works very differently from C#/VB etc.: whether an error is caught depends on its severity.
If you need to see whether any error occurred in the previous statement, use @@ERROR. For example:

IF @@ERROR <> 0 PRINT 'TEST'

Try printing what @@ERROR returns; it may give you a hint. It returns 0 if the previous Transact-SQL statement encountered no errors. From the documentation:

@@ERROR returns an error number if the previous statement encountered an error. If the error was one of the errors in the sys.messages catalog view, then @@ERROR contains the value from the sys.messages.message_id column for that error. You can view the text associated with an @@ERROR error number in sys.messages.

Check whether @@ERROR gives any helpful information.
TRY...CATCH will not stop the query from processing. If you do not want to continue after catching the error, use RETURN; this stops processing. Also, if you are using BEGIN TRANSACTION, issue a ROLLBACK before returning, otherwise you will end up with an uncommitted transaction.
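A minimal sketch of that pattern (the table and column names are invented):

```sql
BEGIN TRAN;

UPDATE dbo.SomeTable SET SomeCol = 1;  -- dbo.SomeTable is a placeholder

IF @@ERROR <> 0
BEGIN
    ROLLBACK TRAN;  -- avoid leaving an uncommitted transaction behind
    RETURN;         -- stop processing here
END

COMMIT TRAN;
```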
We have a number of configuration scripts which are a bit complicated. Those scripts use a number of stored procedures available in our database to insert the configuration data. If any script attempts to insert invalid data, the stored procedures call

RAISERROR(@msg, 16, 1)

and then return. The SPs start their own named transactions and rollback/commit those named transactions; however, Liquibase (LB) doesn't detect the raised error and treats the execution as successful. But we don't want to continue if that happens.
failOnError in the changeSets is set to true, but LB still proceeds. Even in DATABASECHANGELOG the failed changeset is marked as executed, as if it had been successful.
We also tried removing nested transactions (named transactions), no luck.
We removed the name from the transaction and just used BEGIN TRAN; the LB execution then stops at the incorrect script, but the problem is that LB fails to commit its own transaction and can't release the lock, so it remains LOCKED.
Is there any way to tell LB that an error happened and make it stop?
We are using Liquibase 3.5.0. and Microsoft SQL Server
=== EDIT
So after debugging Liquibase, we found two things:
When connected to MS SQL Server, if a RAISERROR occurs and there are also resultsets in the script, no exception is thrown unless we make a call to statement.getMoreResults(). The same thing happens with Sybase (we tested it with Sybase too). So we thought that maybe in LB, after executing the statement, we need to call getMoreResults() until it either throws an exception or returns false, meaning no error happened.
A script makes a call to a stored procedure. The stored procedure has BEGIN TRAN and at the end either commits or rolls back; if a rollback occurs, it also calls RAISERROR. Note that our scripts don't do any update/insert themselves; they only provide the data in a temp table, so we don't do transaction handling in our scripts. In this scenario, assuming we added the call to getMoreResults(), the exception is thrown correctly, but then in LB the executor tries database.rollback(), and later again in StandardLockService, before releasing the lock, it tries database.rollback() once more, which ends in an exception because our SP has already rolled back the transaction. This last rollback in LB causes the error raised by JDBC to be swallowed; as a result, not only do we not see the error that caused the failure, but the lock also remains unreleased. That is the biggest concern, because even if we fix the script and re-run it, the lock has not been released and we need to release it manually.
One may argue that our transaction handling is not correct, but all I am trying to say is that an incorrect script should not affect releasing the lock. LB should release the lock and either throw an exception or continue when a script/changeset does not run successfully.
If anybody else is facing this: in my case I had a very complex SQL script targeting only MS SQL Server. It also failed to stop the execution of the LB changes when an error occurred in the SQL script, regardless of whether I used RAISERROR or THROW.
Things I needed to do to get it to work:

- Remove (or comment out) all places where resultsets were created (SELECT).
- Start the SQL script with SET NOCOUNT ON; to avoid results from INSERT or UPDATE ("... lines affected").
- End the SQL script with SET NOCOUNT OFF; to enable LB to work properly just after executing the SQL script (set EXECTYPE).
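Put together, the script layout looks roughly like this (the procedure name is invented):

```sql
SET NOCOUNT ON;   -- suppress "... rows affected" counts that confuse LB

-- No SELECTs that return resultsets to the client in this section.
EXEC dbo.usp_LoadConfig;  -- assumed SP that raises a severity-16 error on bad data

SET NOCOUNT OFF;  -- restore so LB can record EXECTYPE after the script
```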
Use the precondition feature (https://docs.liquibase.com/concepts/advanced/preconditions.html): create another changeset and check the execution result before proceeding to the next one.
I have an INSERT trigger on one of my tables that issues a THROW when it finds a duplicate. The problem is that my transactions seem to be implicitly rolled back at this point. This is a problem: I want to control when transactions are rolled back.
The issue can be re-created with this script:
CREATE TABLE xTable (
id int identity not null
)
go
create trigger xTrigger on xTable after insert as
print 'inserting...';
throw 1600000, 'blah', 1
go
begin tran
insert into xTable default values
rollback tran
go
drop table xTable
If you run the rollback tran - it will tell you there is no begin tran.
If I swap the THROW for a "normal" error (like SELECT 1/0) the transaction is not rolled back.
I have checked xact_abort flag - and it is off.
Using SQL Server 2012 and testing through SSMS
Any help appreciated, thanks.
EDIT
After reading the articles posted by @Dan Guzman, I came to the following conclusions/summary:
SQL Server automatically sets XACT_ABORT ON in triggers.
My example (above) does not illustrate my situation - In reality I'm creating an extended constraint using a trigger.
My use case was contrived, I was trying to test multiple situations in the SAME unit test (not a real world situation, and NOT good unit test practice).
My handling of the extended constraint check and throwing an error in the trigger is correct, however there is no real situation in which I would not want to rollback the transaction.
It can be useful to SET XACT_ABORT OFF inside a trigger for a particular case; but your transaction will still be undermined by general batch-aborting errors (like deadlocks).
Historical reasons aside, I don't agree with SQL Server's handling of this: just because there is no current situation in which you'd like to continue the transaction does not mean such a situation may never arise.
I'd like to be able to set up SQL Server to maintain the integrity of transactions when your chosen architecture is to have transactions strictly managed at their origin, i.e. "he alone who starts the transaction must finish it" (aside from the usual fail-safes, e.g. if your code is never reached due to system failure).
THROW will terminate the batch when outside the scope of TRY/CATCH (https://msdn.microsoft.com/en-us/library/ee677615.aspx). The implication here is that no further processing of the batch takes place, including the statements following the insert. You'll need to either surround your INSERT with a TRY/CATCH or use RAISERROR instead of THROW.
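Applied to the repro script above, that looks something like this (note that the trigger dooms the transaction, so the CATCH block must roll back):

```sql
BEGIN TRAN;
BEGIN TRY
    INSERT INTO xTable DEFAULT VALUES;  -- the trigger's THROW fires here
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
    IF XACT_STATE() = -1
        ROLLBACK TRAN;  -- transaction was doomed inside the trigger
END CATCH
```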
T-SQL error handling is a rather large and complex topic. I suggest you peruse the series of error-handling articles by Erland Sommarskog: http://www.sommarskog.se/error_handling/Part1.html. Most relevant here is the topic Can I Prevent the Trigger from Rolling Back the Transaction? http://www.sommarskog.se/error_handling/Part3.html#Triggers. The takeaway from a best-practices point of view is that a trigger is not the right solution if you want to enforce a business rule without rolling back the transaction.
I've been sorting out the whole nested-transaction thing in SQL Server, and I've gleaned these nuggets of understanding about the behavior of nested transactions:
- When nesting transactions, only the outermost commit will actually commit.
- "COMMIT TRAN txn_name", when nested, will always apply to the innermost transaction, even if txn_name refers to an outer transaction.
- "ROLLBACK TRAN" (no name), even in an inner transaction, will roll back all transactions.
- "ROLLBACK TRAN txn_name": txn_name must refer to the outermost transaction name; if not, it will fail.
Given these, is there any benefit to naming transactions? You cannot use the name to target a specific transaction, either for commit or rollback. Is it only for code-commenting purposes?
Thanks,
Yoni
Effectively it's just a programmer's aide-mémoire. If you're dealing with a transaction that has a number of inner transactions, giving each a meaningful name can help you make sure the transactions are appropriately nested, and may catch logic errors.
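The rules from the question are easy to watch in action with @@TRANCOUNT:

```sql
BEGIN TRAN OuterTx;
PRINT @@TRANCOUNT;      -- 1
BEGIN TRAN InnerTx;
PRINT @@TRANCOUNT;      -- 2
COMMIT TRAN InnerTx;    -- only decrements the counter; nothing is committed yet
PRINT @@TRANCOUNT;      -- 1
ROLLBACK TRAN;          -- rolls back ALL work and resets the counter
PRINT @@TRANCOUNT;      -- 0
```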
You can have procedures roll back only their own work on error, allowing the caller to decide whether to abandon the entire transaction or recover and try an alternate path. See Exception handling and nested transactions for a procedure template that allows this atomic behavior.
The idea is to roll back part of your work, like a nested transaction. It does not always work as intended.
Stored procedures using old-style error handling and savepoints may not work as intended when they are used together with TRY … CATCH blocks: Avoid mixing old and new styles of error handling.
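The linked template boils down to something like the following sketch (the procedure and savepoint names are placeholders): the procedure starts a real transaction only when none exists, otherwise it sets a savepoint, and on error it rolls back only as far as it owns.

```sql
CREATE PROCEDURE dbo.usp_DoWork  -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @trancount int = @@TRANCOUNT;
    BEGIN TRY
        IF @trancount = 0
            BEGIN TRAN;                   -- we own the transaction
        ELSE
            SAVE TRAN usp_DoWork;         -- nested: mark a savepoint instead

        -- ... the actual work goes here ...

        IF @trancount = 0
            COMMIT;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = -1
            ROLLBACK;                     -- doomed: all-or-nothing
        ELSE IF XACT_STATE() = 1 AND @trancount = 0
            ROLLBACK;                     -- our transaction, undo it all
        ELSE IF XACT_STATE() = 1 AND @trancount > 0
            ROLLBACK TRAN usp_DoWork;     -- undo only this proc's work
        THROW;                            -- let the caller decide what next
    END CATCH
END
```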
Already discussed here: @@ERROR and/or TRY - CATCH
Been working with SQL Server since it was Sybase (the early '90s, for the greenies) and I'm a bit stumped on this one.
In Oracle and DB2, you can pass a SQL batch or script to a stored procedure to test if it can be parsed, then execute conditional logic based on the result, like this pseudocode example:
if (TrySQLParse(LoadSQLFile(filename)) == 1)
{ execute logic if parse succeeds }
else
{ execute logic if parse fails }
I'm looking for a system proc or similar function in SQL Server 2008 -- not SHOWPLAN or the like -- to parse a large set of scripts from within a TSQL procedure, then conditionally control exception handling and script execution based on the results. But, I can't seem to find a similar straightforward gizmo in TSQL.
Any ideas?
The general hacky way to do this in any technology that does a full parse/compile before execution is to prepend the code in question with something that causes execution to stop. For example, to check if a vbscript passes syntax checking without actually running it, I prepend:
Wscript.exit(1)
This way I see a syntax error if there are any, or if there are none then the first action is to exit the script and ignore the rest of the code.
I think the analog in the SQL world is to raise a high-severity error. If you use severity 20+, it kills the connection, so if there are multiple batches in the script they are all skipped. I can't confirm that there is 100.00000% no way some kind of SQL injection could make it past this prepended error, but I can't see any way that there could be. An example is to stick this at the front of the code block in question:
raiserror ('syntax checking, disregard error', 20, 1) with log
So this errors out with a syntax error:
raiserror ('syntax checking, disregard error', 20, 1) with log
create table t1()
go
create table t2()
go
While this errors out with the runtime error (and t1/t2 are not created):
raiserror ('syntax checking, disregard error', 20, 1) with log
create table t1(i int)
go
create table t2( i int)
go
And to round out your options, you could reference the assembly C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.SqlServer.SqlParser.dll in a CLR utility (outside of the db) and do something like:

SqlScript script = Parser.Parse(@"create proc sp1 as select 'abc' as abc1");
You could call EXEC(), passing in the script as a string, and wrap it in a TRY/CATCH.
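A sketch of that approach: compile errors inside EXEC() occur one execution level down, so the outer TRY...CATCH can see them. Note the caveat that, unlike a pure parse, this actually executes the script if it is valid:

```sql
DECLARE @script nvarchar(max) = N'create table t1()';  -- invalid: empty column list

BEGIN TRY
    EXEC (@script);
    PRINT 'script parsed and ran';
END TRY
BEGIN CATCH
    PRINT 'parse/run failed: ' + ERROR_MESSAGE();
END CATCH
```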
There isn't a mechanism in SQL Server to do this. You might be able to do it with a CLR component and SMO, but it seems like a lot of work for questionable gain.
How about wrapping the script in a try/catch block, and executing the "if fails" code in the catch block?
Potentially very dangerous. Google "SQL injection" and see for yourself.