I am a C# developer learning more T-SQL. I wrote a script like this:
begin transaction
--Insert into several tables
end transaction
But I was told that was not a good idea and to use something like this:
BEGIN TRANSACTION;
BEGIN TRY
-- Generate a constraint violation error.
DELETE FROM Production.Product
WHERE ProductID = 980;
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH;
IF @@TRANCOUNT > 0
COMMIT TRANSACTION;
GO
I don't see why the second example is more correct. Wouldn't the first one work the same way? It seems the first one would either update all tables, or none at all. I also don't see why checking @@TRANCOUNT is necessary before the commit.
Only open a transaction once you are inside the TRY block, just before the actual statements, and commit it straight away. Do not wait for control to reach the end of the batch to commit your transactions.
Once you are in the TRY block and have opened a transaction, if something goes wrong, control jumps to the CATCH block. Simply roll back your transaction there and do other error handling as required.
I have added a little check before actually rolling back the transaction, testing for an open transaction with @@TRANCOUNT. It doesn't really add much in this scenario. It is more useful when you do some validation checks in your TRY block before you open a transaction (checking parameter values and other stuff) and raise an error if any of the checks fail. In that case control jumps to the CATCH block without a transaction ever having been opened; there you can check for an open transaction and roll back only if one exists. In your case as written, you don't really need to check for an open transaction, as you will not enter the CATCH block unless something goes wrong inside your transaction.
BEGIN TRY
BEGIN TRANSACTION
-- Multiple Inserts
INSERT INTO....
INSERT INTO....
INSERT INTO....
COMMIT TRANSACTION
PRINT 'Rows inserted successfully...'
END TRY
BEGIN CATCH
IF (@@TRANCOUNT > 0)
BEGIN
ROLLBACK TRANSACTION
PRINT 'Error detected, all changes reversed'
END
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage
END CATCH
I want to give my perspective here as a C# developer:
In the simple scenario given above (just inserting into a few tables from a script), there is no reason to add a TRY/CATCH, as it adds no benefit to the transaction. Both examples produce exactly the same result: either all tables are inserted, or none are. The state of the database remains consistent. (Since COMMIT TRANSACTION is never reached after an error, SQL Server rolls the open transaction back when the batch aborts or the connection closes.)
However, there are times when you can do things in the try/catch that are not possible with the integrated error handling. For example, logging the error to an error table.
In my C# experience, the only time to use a try/catch is when there are things outside of the developer's control, such as attempting to open a file. In such a case, the only way to handle an exception generated by the .NET Framework is through try/catch.
If I were doing a stored procedure, and wanted to manually check on the state of data and manually call ROLLBACK TRANSACTION, I could see that. But it still would not require a try/catch.
I'm in two minds about TRY...CATCH in T-SQL.
While it's a potentially-useful addition to the language, the fact that it's available is not always a reason to use it.
Taking your
DELETE FROM Table WHERE...
example (which I realise is only an example). The only way that will ever fail with an error is if the code has got seriously out of whack from the schema. (for example, if someone creates a foreign key with Table on the PK end of it).
Given proper testing, that kind of mismatch between the code and the schema should never make it into production. Assuming that it does, IMHO the "rude", unmediated error message that will result is a better indication of what's gone wrong than a "polite" wrapping of it into a SELECT statement to return to the client. (Which can amount to the try/squelch antipattern SeanLange mentions).
For more complicated scenarios, I can see a use for TRY...CATCH. Though IMHO it's no substitute for careful validation of input parameters.
Related
I assumed an uncaught THROW inside a transaction would roll it back. But testing with this code, it seems the transaction stays open. Is this normal behavior?
begin transaction;
throw 50001, N'Exception', 1;
commit transaction;
I use this query:
select * from sys.sysprocesses where open_tran = 1;
to list transactions, and I see one open. Once I run COMMIT on that connection, it closes.
So, when I THROW, do I always need to roll back myself first? And what if some other code throws inside my transaction but outside my code?
Normally it's not an issue, as closing the connection ends the transaction. But if I take an sp_getapplock bound to the transaction, the lock stays held if I throw without a manual rollback.
I think you misunderstand the THROW here. As you have written it, the transaction is left open because the commit transaction; is never executed. The begin transaction; increments the @@TRANCOUNT counter, but THROW does not automatically decrement it (there is no automatic rollback).
When THROW executes, it tries to find a CATCH block; if none is found, the error is reported and the rest of the batch is terminated.
To quote the MSDN:
If a TRY...CATCH construct is not available, the statement batch is
terminated. The line number and procedure where the exception is
raised are set. The severity is set to 16.
In your simple case you could extend it like this:
BEGIN TRY
begin transaction;
throw 50001, N'Exception', 1;
END TRY
BEGIN CATCH
IF <boolean_expression_for_commit>
commit transaction;
ELSE
rollback transaction;
END CATCH
That will depend on your use case. Both COMMIT and ROLLBACK decrement the @@TRANCOUNT counter, so your transaction will be closed either way, based on the <boolean_expression_for_commit>.
EDIT The OP wanted to know more about XACT_ABORT
If XACT_ABORT is set to ON, THROW causes a rollback of the whole transaction even when no CATCH is present. That decrements the transaction counter and closes the open transaction.
There is a catch, however. If a developer wants to write a log entry in a CATCH block, the transaction is doomed there, so any logging done inside the transaction is rolled back as well (no log is created) unless you roll back first.
Things can get weird with SET XACT_ABORT OFF (the default). It may or may not leave the transaction open, depending on the severity of the error. If the error is considered severe enough, it will still roll back. In your case a simple THROW is not severe enough.
Note: this is also a reason why THROW should now be used instead of RAISERROR. THROW follows the XACT_ABORT setting; RAISERROR ignores it.
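A quick way to see the difference between the two settings (a sketch to run on a scratch connection, one batch at a time):

```sql
-- With XACT_ABORT ON, an uncaught THROW dooms and rolls back the transaction:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
THROW 50001, N'Exception', 1;
GO
SELECT @@TRANCOUNT AS OpenTransactions;  -- 0: the transaction was rolled back
GO

-- With XACT_ABORT OFF (the default), the same THROW leaves it open:
SET XACT_ABORT OFF;
BEGIN TRANSACTION;
THROW 50001, N'Exception', 1;
GO
SELECT @@TRANCOUNT AS OpenTransactions;  -- 1: still open; clean up manually
IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
GO
```

The second batch reproduces exactly what the question observed: the batch terminates, but the transaction survives it.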
Using SQL Server 2014:
I am going through the following article that includes useful patterns for TSQL error handling:
https://msdn.microsoft.com/en-IN/library/ms175976.aspx
I like to log errors so that later on I can query, monitor, track, and inspect the errors that took place in my application's stored procedures.
I was thinking of creating a table and inserting the error details as a row in the CATCH block; however, I am concerned this might not be a good pattern, or there might be a built-in SQL Server feature that can log the errors generated by the ;THROW statement.
What would be the best way to log the errors?
Update 1
I should mention that I always set XACT_ABORT on top of my SPs:
SET XACT_ABORT, NOCOUNT ON
Is it safe to assume that there is no way to log errors when XACT_ABORT is ON?
Update 2
The SET XACT_ABORT ON is according to this post:
http://www.sommarskog.se/error_handling/Part1.html#jumpXACT_ABORT
Can xp_logevent be a better alternative than adding an error record to a log table?
You have to be very careful with logging from CATCH blocks. First and foremost, you must check XACT_STATE() and honor it. If XACT_STATE() is -1 ('uncommittable transaction'), you cannot do any transactional operation, so the INSERT will fail. You must roll back first, then insert. But you cannot simply roll back, because you may be in XACT_STATE() 0 (no transaction), in which case the rollback would fail. And if XACT_STATE() is 1, you are still inside the original transaction, your INSERT may still be rolled back later, and you'll lose all track of this error ever occurring.
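A CATCH block that honors XACT_STATE() along those lines might look like this (a sketch; the dbo.ErrorLog table is hypothetical, and whether to roll back in state 1 is a design choice — here we roll back so the log row cannot be undone later):

```sql
BEGIN CATCH
    DECLARE @msg nvarchar(4000) = ERROR_MESSAGE(),
            @num int            = ERROR_NUMBER();

    -- -1: doomed transaction; must roll back before any transactional write
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;

    -- 1: still in the original transaction; roll back so the log row below
    -- cannot be rolled back later along with it
    IF XACT_STATE() = 1
        ROLLBACK TRANSACTION;

    -- 0 at this point: safe to write the log row outside any transaction
    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
    VALUES (@num, @msg, SYSDATETIME());
END CATCH;
```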
Another approach to consider is to generate a user defined profiler event using sp_trace_generateevent and have a system trace monitoring your user event ID. This works in any xact_state state and has the advantage of keeping the record even if the encompassing transaction will roll back later.
I should mention that I always set XACT_ABORT
Stop doing this. Read Exception handling and nested transactions for a good SP pattern vis-a-vis error handling and transactions.
Yes it is better.
If you want to store the error details, try this:
declare @Error_msg_desc varchar(500)
,@Error_err_code int
,@Error_sev_num int
,@Error_proc_nm varchar(100)
,@Error_line_num int
begin try
select 1/0
end try
begin catch
select @Error_err_code = ERROR_NUMBER()
,@Error_msg_desc = ERROR_MESSAGE()
,@Error_sev_num = ERROR_SEVERITY()
,@Error_proc_nm = ERROR_PROCEDURE()
,@Error_line_num = ERROR_LINE()
--create SqlLog Table
--Insert into Log Table
Insert into Sqllog values(@Error_err_code,@Error_msg_desc,@Error_sev_num,@Error_proc_nm,@Error_line_num)
end catch
I am new to T-SQL programming. I need to write a main procedure that executes multiple transactions. How could I structure the program so that each transaction will not abort? Instead, the procedure should raise the error and report it back to the main program in output parameters after all the transactions finish running. Please provide me with pseudocode if you can. Thanks.
You need to follow the template from Exception handling and nested transactions
create procedure [usp_my_procedure_name]
as
begin
set nocount on;
declare @trancount int;
set @trancount = @@trancount;
begin try
if @trancount = 0
begin transaction
else
save transaction usp_my_procedure_name;
-- Do the actual work here
lbexit:
if @trancount = 0
commit;
end try
begin catch
declare @error int, @message varchar(4000), @xstate int;
select @error = ERROR_NUMBER(), @message = ERROR_MESSAGE(), @xstate = XACT_STATE();
if @xstate = -1
rollback;
if @xstate = 1 and @trancount = 0
rollback
if @xstate = 1 and @trancount > 0
rollback transaction usp_my_procedure_name;
raiserror ('usp_my_procedure_name: %d: %s', 16, 1, @error, @message);
end catch
end
go
As you can see, you can't always continue, because sometimes the exception has already aborted the transaction by the time you catch it (the typical example being deadlock exception 1205). And you must use a savepoint and revert to the savepoint in case of an exception, to keep the database consistent. However, you do not abort the caller's work, if possible.
You could use try/catch
BOL - TRY/CATCH
Here's an example
I have previously encapsulated logic in stored procedures and put EXEC statements inside the TRY/CATCH block. In the CATCH block you can use this link to get error information (see example B in the link):
BOL - ERROR_MESSAGE
Something similar to -
BEGIN TRY
BEGIN TRAN
EXEC StoredProcedure01
EXEC StoredProcedure02
COMMIT
END TRY
BEGIN CATCH
ROLLBACK TRAN
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
I might consider trying this: each transaction is a separate stored proc, with an overall stored proc that calls each in turn. Save the error information you want in a table variable. In the catch block of each proc, roll back that transaction and then insert the data from the table variable, with the error information, into a logging table. Do not return a failure to the calling proc.
If you want to report the errors out of the main proc in real time, you can do a select from the logging table at the end.
It would work best if you create a batch ID at the start of the calling proc and have that be an input variable to each of the procs you call, and also include that value in the information you add to the logging table. Then if the procs fail multiple times during non-working hours, you have the errors for all of them and can see the batch they were associated with. This helps tremendously in tracking down problems.
You will need to give some thought to what information you will want for each proc when designing your logging table. I would suggest that part of what you store is any input variables that are sent into the proc. Also, if you are using dynamic SQL, store the generated SQL as well. If you can identify the user who ran the proc, that too is useful, for tracking down permissions issues for instance.
Having a logging table is far more useful than just returning errors at run time. You can look for trends, see if the same error is frequently happening, look at the information that caused failures and the information that succeeded if you choose to also log the variables for successful runs.
None of this is easy, out-of-the-box code to write. It requires a great deal of design on your part to determine exactly what information will be useful over time in troubleshooting issues with the process. How detailed you need to get is a business decision based both on how the data is used and on what you will want to know about a failure. As such, we cannot make this determination for you.
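A minimal sketch of that pattern (all names here — dbo.ProcErrorLog, usp_Step1, usp_RunBatch, @BatchId — are hypothetical placeholders, not a prescribed schema):

```sql
CREATE TABLE dbo.ProcErrorLog (
    BatchId      uniqueidentifier NOT NULL,
    ProcName     sysname          NOT NULL,
    ErrorNumber  int              NULL,
    ErrorMessage nvarchar(4000)   NULL,
    LoggedAt     datetime2        NOT NULL DEFAULT SYSDATETIME()
);
GO

CREATE PROCEDURE dbo.usp_Step1 @BatchId uniqueidentifier
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... this step's actual work here ...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        INSERT INTO dbo.ProcErrorLog (BatchId, ProcName, ErrorNumber, ErrorMessage)
        VALUES (@BatchId, OBJECT_NAME(@@PROCID), ERROR_NUMBER(), ERROR_MESSAGE());
        -- deliberately no THROW / failure return: the caller keeps going
    END CATCH;
END;
GO

CREATE PROCEDURE dbo.usp_RunBatch
AS
BEGIN
    DECLARE @BatchId uniqueidentifier = NEWID();
    EXEC dbo.usp_Step1 @BatchId;   -- each step logs its own errors
    -- EXEC dbo.usp_Step2 @BatchId;  etc.
    SELECT * FROM dbo.ProcErrorLog
    WHERE BatchId = @BatchId;      -- report this batch's errors at the end
END;
```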
If I run a sql statement such as the following:
SELECT 1/0;
Is there a way to capture the statement "SELECT 1/0;" in an error message? The following does not give me the SQL that failed:
BEGIN TRY
SELECT 1/0;
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
Also, I want to see if I can avoid using a TRY/CATCH at every statement. I have an SP that executes a lot of SQL statements between a TRY and a CATCH. I want to know which of the many SQL statements in the TRY...CATCH block failed.
All I have found so far is giving the error message details but not the T-SQL that failed.
You can leverage the fact that table variables are not rolled back.
After each statement, insert a success line into a table variable for logging.
If the proc succeeds, nothing needs to be returned from the table variable. If it hits the catch block, though, you can roll back the transaction and either return the SELECT from the table variable or, better, insert that information into a log table. If your proc sets a lot of variables, I would also log those values in this table so you can see what they were at the time the proc failed. By putting it into a logging table, you have a record for all the times the proc fails; so if it fails on Friday night, and fails several times but not every time over the weekend, you have your data about what worked and what the variables were at the time of failure. This is especially useful if you use dynamic SQL, because you could log the generated SQL statement as well.
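A sketch of the idea (the step names and the dbo.TableA target are made up for illustration). The table variable survives the ROLLBACK, so its contents can still be read, or written to a log table, in the CATCH block:

```sql
DECLARE @Progress table (StepName sysname, CompletedAt datetime2);

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.TableA DEFAULT VALUES;                    -- step 1
    INSERT INTO @Progress VALUES ('Insert TableA', SYSDATETIME());

    SELECT 1/0;                                               -- step 2 fails
    INSERT INTO @Progress VALUES ('Divide', SYSDATETIME());   -- never reached

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- undoes TableA, but NOT the table variable
    -- the newest row in @Progress is the last statement that succeeded
    SELECT TOP (1) StepName AS LastGoodStep
    FROM @Progress
    ORDER BY CompletedAt DESC;
END CATCH;
```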
If your stored procedure contains a bunch of statements, it's always good to set up a log variable at the beginning of the stored procedure. You can adjust the value of this variable to control whether steps are logged. In your current situation, you can set it to 1 and start logging all the necessary steps. That will help you find out what was executed and where the error is occurring.
Ex: DECLARE @bLog BIT = 0 -- Default when you do not want to log
SET @bLog = 1
IF (@bLog = 1)
BEGIN
----- Add log here, back up results from previous steps, etc.
END
You can use the line number for this, but if you want to know which statement had the problem, you have to number the statements yourself, setting the number in a variable before each one, like this:
Declare @StmNo as int
BEGIN TRY
set @StmNo=1
SELECT GETDATE();
set @StmNo=2
SELECT 1/0;
END TRY
BEGIN CATCH
SELECT
@StmNo AS StatementNumber,
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
Will TRY...CATCH capture all errors that @@ERROR can? In the following code fragment, is it worthwhile to check @@ERROR? Will RETURN 1111 ever occur?
SET XACT_ABORT ON
DECLARE @Error int
BEGIN TRANSACTION
BEGIN TRY
--do sql command here <<<<<<<<<<<
SELECT @Error=@@ERROR
IF @Error!=0
BEGIN
IF XACT_STATE()!=0
BEGIN
ROLLBACK TRANSACTION
END
RETURN 1111
END
END TRY
BEGIN CATCH
IF XACT_STATE()!=0
BEGIN
ROLLBACK TRANSACTION
END
RETURN 2222
END CATCH
IF XACT_STATE()=1
BEGIN
COMMIT
END
RETURN 0
The following article is a must read by Erland Sommarskog, SQL Server MVP: Implementing Error Handling with Stored Procedures
Also note that your TRY block may fail, and your CATCH block may be bypassed.
One more thing: stored procedures using old-style error handling and savepoints may not work as intended when used together with TRY...CATCH blocks. Avoid mixing old and new styles of error handling.
TRY/CATCH traps more. It's hugely and amazingly better.
DECLARE @foo int
SET @foo = 'bob' --batch-aborting error pre-SQL 2005
SELECT @@ERROR
GO
SELECT @@ERROR --detects 245. But not much use, really, if the batch was a stored proc
GO
DECLARE @foo int
BEGIN TRY
SET @foo = 'bob'
SELECT @@ERROR
END TRY
BEGIN CATCH
SELECT ERROR_MESSAGE(), ERROR_NUMBER()
END CATCH
GO
Using TRY/CATCH in triggers also works. Trigger rollbacks used to be batch aborting too: no longer if TRY/CATCH is used in the trigger too.
Your example would be better if the BEGIN/ROLLBACK/COMMIT is inside, not outside, the construct
TRY...CATCH will not trap everything.
Here is some code to demonstrate that:
BEGIN TRY
BEGIN TRANSACTION TranA
DECLARE @cond INT;
SET @cond = 'A';
END TRY
BEGIN CATCH
PRINT 'a'
END CATCH;
COMMIT TRAN TranA
Server: Msg 3930, Level 16, State 1, Line 9
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Server: Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
I don't believe control will ever reach the RETURN statement: once you're in a TRY block, any error raised will transfer control to the CATCH block. However, there are some very serious errors that can cause the batch, or even the connection itself, to abort (Erland Sommarskog has written on the topic of errors in SQL Server here and here; unfortunately, he hasn't updated those articles to include TRY...CATCH). I'm not sure if you can CATCH those kinds of errors, but then, @@ERROR is no good for them either.
It has been my experience that, as per Books Online, TRY...CATCH blocks will trap all events that would generate errors (and thus set @@ERROR to a non-zero value). I can think of no circumstances where this would not apply. So no, the return value would never be set to 1111, and it would not be worthwhile to include that @@ERROR check.
However, error handling can be very critical, and I'd hedge my bets for fringe situations such as DTC, linked servers, Notification Services or Service Broker, and other SQL Server features that I've had very little experience with. If you can, test your more bizarre situations to see what will actually happen.
The whole point of TRY...CATCH is that you don't have to check @@ERROR after every statement.
So it's not worthwhile.