I have SQL code that checks some parameters and raises an error if they are invalid. I also want to insert an error record into an Errors table. The problem is that when the error is raised, the whole transaction is rolled back, including the error record. I want to roll back the whole transaction except the error record.
I tried creating a separate transaction and committing it, with no luck.
IF @Input IS NULL
BEGIN
    INSERT INTO [dbo].[Errors] (Field1) VALUES ('Input is null')
    RAISERROR ('some message', 16, 1)
    RETURN -1
END
Is there a way to isolate the insert statement alone in a separate transaction?
Edit:
This stored procedure is called from other procedures and needs to be rolled back, even from the outside, so I probably need to separate this insert statement into its own transaction.
You need to do this in a separate transaction. A NOLOCK hint changes whether the table is locked throughout the course of the transaction, but it does not affect what is rolled back. If an error occurs, the whole transaction is rolled back. Can you add a TRY...CATCH block to the code that calls the stored procedure? If you try the stored procedure and it throws an error, you catch it there; in your CATCH block, you can call a different stored procedure that records the error in the necessary table.
It sounds like the issue you are having is where to call the error-logging stored procedure from. This can be tricky, and it depends on how you are handling the transaction and where you are catching the errors. Say code A calls code B, and B calls the first stored procedure. When the stored procedure throws an error, if you handle that error in code A, everything you did in B is rolled back. If you insert the error record in code B, that will be rolled back from code A as well.
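A minimal sketch of that caller-side pattern (dbo.ValidateInput and dbo.LogError are hypothetical names standing in for your procedures):

DECLARE @SomeValue INT = NULL;
BEGIN TRY
    BEGIN TRANSACTION
    EXEC dbo.ValidateInput @Input = @SomeValue   -- hypothetical proc that may RAISERROR
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    DECLARE @Msg NVARCHAR(2048) = ERROR_MESSAGE();
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;                    -- undo the work first...
    EXEC dbo.LogError @Message = @Msg;           -- ...then log, where no rollback can remove it
END CATCH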
Related
I have a SQL Server stored procedure that is returning me the very common error
"db_ErrorCode Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0."
What I've found after Googling this is that it's really saying there's an error happening before the transaction is committed.
There's a
BEGIN TRY
BEGIN TRANSACTION
At the beginning of the SP, and
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
SELECT @ErrorNumber = ERROR_NUMBER(),
       @ErrorLine = ERROR_LINE(),
       @ErrorMessage = ERROR_MESSAGE()
RAISERROR (@Flag, 18, 120);
END CATCH
END
The problem is that there are about 1100 lines of code between those lines, and if there's a problem, the entire SP needs to be rolled back, so we can't put TRY/CATCH statements in between. And why does my final CATCH block not return the actual error, instead of giving me that unhelpful transaction count error?
I can't answer why SQL Server gives the error messages it does, especially not without seeing the whole 1100 lines of code. However, if you want to know what to do to be able to pinpoint the error, I can give you some hints.
First, in any large stored proc I always have an @Debug variable as an input parameter. Make it the last parameter and give it a default value of 0 (not in debug mode). If it has a default value, then adding it as the last parameter should not break existing calls of the code.
When you want to debug, you can then add tests or results that show you which steps have completed or what the results of various operations were. Wrap these steps in IF statements like:
IF @Debug = 1
BEGIN
    <add your tests here>
END
You may add this code after every significant step in the proc, or have one block with multiple checks later in the proc, or both. I tend to put checks that show the state of things as the proc steps along, and ones that show what the results should be at the end.
That code will only execute when you are in debug mode. The kinds of things you might put in include printing or selecting the variables at that point in the proc, printing the name of the step you are on, running the SELECT that would normally be the basis of an INSERT, showing the results after an operation, etc.
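For example, a debug block might look like this (the step and variable names are purely illustrative):

IF @Debug = 1
BEGIN
    PRINT 'Completed step: load staging rows';
    SELECT @RowsLoaded AS RowsLoaded, @CurrentBatchId AS CurrentBatchId;
    -- or run the SELECT that the next INSERT will be based on, without inserting
END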
Another thing you can do is create a table variable to store the steps as they complete. Table variables stay in scope after the rollback, so you can then do a SELECT and see which steps completed before the rollback.
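A sketch of that approach, with illustrative step names:

DECLARE @Steps TABLE (StepName VARCHAR(100), CompletedAt DATETIME DEFAULT GETDATE());
BEGIN TRY
    BEGIN TRANSACTION;
    INSERT INTO @Steps (StepName) VALUES ('Step 1: load');
    -- ... step 1 work ...
    INSERT INTO @Steps (StepName) VALUES ('Step 2: transform');
    -- ... step 2 work ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    SELECT * FROM @Steps;  -- the table variable survives the rollback
END CATCH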
When trying to validate a user-supplied GUID within a stored procedure, a simple approach was used: take the user input as a CHAR(36), then explicitly CAST it as a UNIQUEIDENTIFIER within a TRY...CATCH. The CATCH then bubbles the error up with a custom error description using RAISERROR.
Running the stored procedure manually, everything performs as expected and the error is raised.
Creating a tSQLt test to call the unit (the procedure with GUID validation), handle the error that is output, and compare it with the expected error continually fails with a transaction error; tSQLt has detected the error and handled it within the tSQLt framework.
This suggests to me that the severity of a failure to CAST to a different datatype is being handled by tSQLt, and it is preventing the TRY/CATCH within the stored procedure from handling it. Much like nested procedures sometimes ignore the TRY/CATCH within the child procedure and bubble up to the parent procedure; for example, when the child proc references a table that doesn't exist.
Has anyone had a similar issue? I ask simply to validate my current line of thinking.
I've removed the test and it's being tested elsewhere, but this has left a 'hole' in my DB unit tests.
Finally, I should mention that I know I can perform a different validation on the supplied CHAR parameter, other than a CAST, and raise an error that way, but this is a tSQLt question and not a T-SQL question.
EDIT
Example of the code:
@sGUID is a CHAR(36) parameter passed to the procedure.
BEGIN TRY
    SELECT CAST(@sGUID AS UNIQUEIDENTIFIER)
END TRY
BEGIN CATCH
    RAISERROR('Invalid GUID format',16,1)
END CATCH
The SELECT line never triggers the CATCH; tSQLt appears to intervene beforehand and throws the ROLLBACK transaction error.
When you call RAISERROR(), you're terminating the transaction that tSQLt is running, hence the transaction error you're seeing.
To improve this for the purpose of unit testing, one option you might consider is to replace the RAISERROR() statement with a call to a custom stored procedure that contains only the RAISERROR(). That way, you can unit-test that stored procedure separately.
BEGIN TRY
    SELECT CAST(@sGUID AS UNIQUEIDENTIFIER)
END TRY
BEGIN CATCH
    EXEC dbo.customprocedure
    --RAISERROR('Invalid GUID format',16,1)
END CATCH
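A minimal version of that custom procedure might be nothing more than a wrapper around the RAISERROR call:

CREATE PROCEDURE dbo.customprocedure
AS
BEGIN
    -- the only job of this proc is to raise the validation error
    RAISERROR('Invalid GUID format', 16, 1);
END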
In a script used for interactive analysis of subsets of data, it is often useful to store the results of queries into temporary tables for further analysis.
Many of my analysis scripts contain this structure:
CREATE TABLE #Results (
a INT NOT NULL,
b INT NOT NULL,
c INT NOT NULL
);
INSERT INTO #Results (a, b, c)
SELECT a, b, c
FROM ...
SELECT *
FROM #Results;
In SQL Server, temporary tables are connection-scoped, so the query results persist after the initial query execution. When the subset of data I want to analyze is expensive to calculate, I use this method instead of using a table variable because the subset persists across different batches of queries.
The setup part of the script is run once, and following queries (SELECT * FROM #Results is a placeholder here) are run as often as necessary.
Occasionally, I want to refresh the subset of data in the temporary table, so I run the entire script again. One way to do this would be to create a new connection by copying the script to a new query window in Management Studio, but I find this difficult to manage.
Instead, my usual workaround is to precede the create statement with a conditional drop statement like this:
IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
BEGIN
DROP TABLE #Results;
END;
This statement correctly handles two situations:
On the first run when the table does not exist: do nothing.
On subsequent runs when the table does exist: drop the table.
Production scripts written by me always use this method because it raises no errors in the two expected situations.
Some equivalent scripts written by my fellow developers sometimes handle these two situations using exception handling:
BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
I believe in the database world it is better always to ask permission than seek forgiveness, so this method makes me uneasy.
The second method swallows an error while taking no action to handle non-exceptional behavior (table does not exist). Also, it is possible that an error would be raised for a reason other than that the table does not exist.
The Wise Owl warns about the same thing:
Of the two methods, the [OBJECT_ID method] is more difficult to understand but
probably better: with the [BEGIN TRY method], you run the risk of trapping
the wrong error!
But it does not explain what the practical risks are.
In practice, the BEGIN TRY method has never caused problems in systems I maintain, so I'm happy for it to stay there.
What possible dangers are there in managing temporary table existence using BEGIN TRY method? What unexpected errors are likely to be concealed by the empty catch block?
What possible dangers? What unexpected errors are likely to be concealed?
If the TRY...CATCH block is inside a transaction, it will cause a failure.
BEGIN
BEGIN TRANSACTION t1;
SELECT 1
BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
COMMIT TRANSACTION t1;
END
This batch will fail with an error like this:
Msg 3930, Level 16, State 1, Line 7
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
Books Online documents this behavior:
Uncommittable Transactions and XACT_STATE
If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION.
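A sketch of a CATCH block that checks XACT_STATE() before deciding what to do, along the lines of example C in that documentation topic:

BEGIN TRY
    BEGIN TRANSACTION;
    -- work that might fail goes here
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;   -- uncommittable: rollback is the only option
    ELSE IF XACT_STATE() = 1
        COMMIT TRANSACTION;     -- the error did not doom the transaction
END CATCH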
Now replace the TRY/CATCH with the test method:
BEGIN
BEGIN TRANSACTION t1;
SELECT 1
IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
BEGIN
DROP TABLE #Results;
END;
COMMIT TRANSACTION t1;
END
and run it again. The transaction commits without any error.
A better solution may be to use a table variable rather than a temporary table, i.e.:
declare @results table(
a INT NOT NULL,
b INT NOT NULL,
c INT NOT NULL
);
I also think that a TRY block is dangerous because it can hide an unexpected problem. Some programming languages can catch only selected errors and leave unexpected ones uncaught; if your programming language has this functionality, use it (T-SQL can't catch a specific error).
For your scenario, I can say that I code exactly like you, with this TRY...CATCH block.
The desirable behavior would be:
begin try
drop table #my_temp_table
end try
begin catch __table_dont_exists_error__
end catch
But this doesn't exist! Instead you can write something like:
begin try
drop table #my_temp_table
end try
begin catch
    declare @err_n int, @err_d varchar(MAX)
    SELECT
        @err_n = ERROR_NUMBER(),
        @err_d = ERROR_MESSAGE();
    IF @err_n <> 3701
        raiserror( @err_d, 16, 1 )
end catch
This re-raises the error whenever the failure to drop the table is something other than 'table does not exist' (error 3701).
Note that for your exact issue all this code may not be worth it, but it can be useful for other situations. For your problem, the elegant solution is to drop the table only if it exists, or to use a table variable.
Not in your question, but possibly overlooked, are the resources used by the temp table. I always drop the table at the end of the script so it does not tie up resources. What if you put a million rows in the table? I also test for the table at the start of the script to handle the case where there was an error in the last run and the table was not dropped. If you want to reuse the temp table, at least clear out the rows.
A table variable is another option. It is lighter weight but has limitations. Avoid a table variable if you are going to use it in a query join, as the query optimizer does not handle a table variable as well as it does a temp table.
SQL documentation:
If more than one temporary table is created inside a single stored procedure or batch, they must have different names.
If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, the Database Engine must be able to distinguish the tables created by the different users. The Database Engine does this by internally appending a numeric suffix to each local temporary table name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for the suffix, table_name specified for a local temporary name cannot exceed 116 characters.
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
I'm trying to insert a duplicate value into a primary key column, which raises a primary key violation error. I want to log this error inside the CATCH block.
Code Block :-
SET XACT_ABORT OFF
BEGIN TRY
BEGIN TRAN
INSERT INTO #Calender values (9,'Oct')
INSERT INTO #Calender values (2,'Unknown')
COMMIT TRAN
END TRY
BEGIN CATCH
Insert into #LogError values (1,'Error while inserting a duplicate value')
if @@TRANCOUNT > 0
rollback tran
raiserror('Error while inserting a duplicate value ',16,20)
END CATCH
When I execute the above code, it prints the custom error message from the CATCH block but doesn't insert the row into the #LogError table:
Error while inserting a duplicate value
But when I use SET XACT_ABORT ON, I get a different error message, and it still doesn't insert the error message into the table:
The current transaction cannot be committed and cannot support operations
that write to the log file. Roll back the transaction.
My questions are:
1. How do I log the error into the table?
2. Why do I get a different error message when I set XACT_ABORT ON? Is it good practice to set XACT_ABORT ON before every transaction?
It does insert the record into #LogError, but then you roll back the transaction, which removes it.
You need to do the insert after the rollback, or insert into a table variable instead (table variables are not affected by the rollback).
When an error is encountered in the TRY block, this can leave your transaction in a doomed state. You should test the value of XACT_STATE() (see example C in the TRY...CATCH topic) in the CATCH block to check for this before doing anything that writes to the log or trying to commit.
When XACT_ABORT is ON, any error of severity > 10 in a TRY block will have this effect.
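For the code in the question, a corrected version rolls back first and logs afterwards, so the rollback cannot remove the log row (this works with XACT_ABORT either ON or OFF, since the doomed transaction is rolled back before any write):

SET XACT_ABORT OFF
BEGIN TRY
    BEGIN TRAN
    INSERT INTO #Calender values (9,'Oct')
    INSERT INTO #Calender values (2,'Unknown')
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN            -- undo the attempted inserts first
    INSERT INTO #LogError VALUES (1, 'Error while inserting a duplicate value')
    RAISERROR('Error while inserting a duplicate value', 16, 1)
END CATCH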
As SQL Server doesn't support autonomous transactions (nested, independent transactions), it's not possible to use a database table to log SP execution activity/error messages (in fact, you can, under some conditions, use a CLR stored procedure with a custom connect string, making its own non-local connection).
To work around this missing functionality, I've developed a toolbox (100% T-SQL) based on an XML parameter passed by reference (an OUTPUT parameter), which is filled during SP execution and can be saved into a dedicated database table at the end.
Disclaimer: I'm an Iorga employee (cofounder) and I developed the following LGPL v3 toolbox. My objective is not Iorga/self promotion but sharing knowledge and simplifying T-SQL developers' lives.
See, teach, and enhance it as you wish: SPLogger
Today (October 19th, 2015) I released version 1.3, including a Unit Test System based on SPLogger.
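The general shape of the XML OUTPUT-parameter approach looks roughly like this (a simplified sketch with hypothetical names, not the actual SPLogger API):

CREATE PROCEDURE dbo.DoWork
    @Log XML OUTPUT
AS
BEGIN
    -- append a log entry to the XML passed by reference
    SET @Log.modify('insert <entry msg="step 1 done"/> as last into (/log)[1]');
    -- ... real work here ...
END
GO
-- Caller: the XML variable survives any rollback, so it can be saved afterwards.
DECLARE @Log XML = N'<log/>';
BEGIN TRY
    BEGIN TRAN;
    EXEC dbo.DoWork @Log OUTPUT;
    COMMIT TRAN;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRAN;
END CATCH;
INSERT INTO dbo.SPLog (LogXml) VALUES (@Log);  -- hypothetical log table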
I have a stored procedure that calls several others, one of which is failing to insert a row into a table due to a duplicate primary key.
The error raised is
Msg 2627, Level 14, State 1, Procedure ..., Line 16
Violation of PRIMARY KEY constraint '...'. Cannot insert duplicate key in object '...'.
I am calling this from an Excel spreadsheet via VBA, with the usual On Error handling in place, but the routine is failing silently without triggering the error handler.
I'm not sure if this is down to the stored-proc-within-stored-proc nesting or the severity of the error being too low.
Has anyone experienced anything like this and can suggest a workaround?
My initial attempt was to put a BEGIN TRY / BEGIN CATCH block around the stored procedure call, with the CATCH running RAISERROR at a higher severity, but it doesn't seem to be triggering.
Thanks
In the outer proc, add an explicit transaction: BEGIN TRANSACTION at the beginning and COMMIT TRANSACTION at the end.
Then, before the BEGIN TRANSACTION, add SET XACT_ABORT ON;. That will take care of batch-aborting failures.
After the inner proc that errors, check the error value for statement-level errors, e.g.:
IF @@ERROR <> 0
BEGIN
ROLLBACK TRANSACTION;
RETURN 1;
END
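Put together, the outer proc body would look something like this (the inner proc names are illustrative):

SET XACT_ABORT ON;
BEGIN TRANSACTION;

EXEC dbo.InnerProcA;
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION;
    RETURN 1;
END

EXEC dbo.InnerProcB;  -- the one that may hit the duplicate key
IF @@ERROR <> 0
BEGIN
    ROLLBACK TRANSACTION;
    RETURN 1;
END

COMMIT TRANSACTION;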
We do this with OUTPUT parameters in T-SQL, though I only know the SQL side of it: you'll need to Google getting an output parameter back from SQL with VBA. Declare an output parameter as a varchar, and set it immediately to '':
@MyError VARCHAR(500) OUTPUT   -- in the procedure's parameter list, not a DECLARE
SET @MyError = ''              -- first statement in the procedure body
Do your normal error checking in T-SQL and, if you find an error, SET a description into the @MyError parameter. Your VBA code will always get the @MyError message back, but it will normally be an empty string, ''. If it isn't, then you go to your error handling in VBA.
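On the T-SQL side, a sketch of such a procedure (dbo.DoInserts and dbo.Target are hypothetical names):

CREATE PROCEDURE dbo.DoInserts
    @MyError VARCHAR(500) OUTPUT
AS
BEGIN
    SET @MyError = '';
    BEGIN TRY
        INSERT INTO dbo.Target (Id) VALUES (1);
    END TRY
    BEGIN CATCH
        SET @MyError = ERROR_MESSAGE();  -- VBA checks for a non-empty string
    END CATCH
END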