Bypassing error in SQL Server

While inserting multiple rows into a table in SQL Server, if I get an error, how can I just log that error and move on to insert the next record, bypassing the error?

You can't when all the inserts run in one go (like in an INSERT INTO ... SELECT). A database implements ACID, and one of its components (A, Atomicity) means (according to Wikipedia):
Atomicity requires that each transaction is "all or nothing"
That means you have to separate your action into single statements, which in SQL Server are single transactions by default (in Oracle, on the other hand, this still won't work, since it runs all subsequent statements in one transaction by default).
You can, however, create separate insert statements and put every statement in a TRY...CATCH.
Sample:
-- first
begin try
insert into ...
end try
begin catch
-- log error
end catch;
-- second
begin try
insert into ...
end try
begin catch
-- log error
end catch;
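Here is a minimal, self-contained sketch of the pattern; the #Numbers and #ErrorLog tables are hypothetical stand-ins, not from the question:
-- hypothetical tables for the demo
create table #Numbers (n int primary key);
create table #ErrorLog (ErrorNumber int, ErrorMessage nvarchar(4000));

begin try
    insert into #Numbers (n) values (1);   -- succeeds
end try
begin catch
    insert into #ErrorLog values (ERROR_NUMBER(), ERROR_MESSAGE());
end catch;

begin try
    insert into #Numbers (n) values (1);   -- primary key violation: logged, not fatal
end try
begin catch
    insert into #ErrorLog values (ERROR_NUMBER(), ERROR_MESSAGE());
end catch;

begin try
    insert into #Numbers (n) values (2);   -- still runs despite the earlier failure
end try
begin catch
    insert into #ErrorLog values (ERROR_NUMBER(), ERROR_MESSAGE());
end catch;

select * from #ErrorLog;  -- one row: error 2627, the duplicate key message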

Related

Stored procedure with multiple updates, insert and deletes. What happens if last DML command fails?

So, if I have multiple DML commands inside a stored procedure in SQL Server, if the last one fails, will all the other ones rollback? Considering I am not inside a transaction scope!
You need your stored procedure to use TRY...CATCH and a TRANSACTION.
BEGIN TRAN
BEGIN TRY
    -- your INSERT/UPDATE/DELETE statements go here
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    -- if error, roll back any changes done by any of the SQL statements
    ROLLBACK TRANSACTION
END CATCH
Refer to this: http://techfunda.com/howto/192/transaction-in-stored-procedure
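For illustration, a minimal sketch of such a procedure; the table and procedure names are hypothetical, and THROW requires SQL Server 2012 or later:
CREATE PROCEDURE dbo.TransferAmount
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 1;
        UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE Id = 2;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- undoes every statement above, not just the last one
        THROW;  -- re-raise so the caller still sees the original error
    END CATCH
END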

What are the dangers of BEGIN TRY DROP TABLE?

In a script used for interactive analysis of subsets of data, it is often useful to store the results of queries into temporary tables for further analysis.
Many of my analysis scripts contain this structure:
CREATE TABLE #Results (
a INT NOT NULL,
b INT NOT NULL,
c INT NOT NULL
);
INSERT INTO #Results (a, b, c)
SELECT a, b, c
FROM ...
SELECT *
FROM #Results;
In SQL Server, temporary tables are connection-scoped, so the query results persist after the initial query execution. When the subset of data I want to analyze is expensive to calculate, I use this method instead of using a table variable because the subset persists across different batches of queries.
The setup part of the script is run once, and following queries (SELECT * FROM #Results is a placeholder here) are run as often as necessary.
Occasionally, I want to refresh the subset of data in the temporary table, so I run the entire script again. One way to do this would be to create a new connection by copying the script to a new query window in Management Studio, but I find this difficult to manage.
Instead, my usual workaround is to precede the create statement with a conditional drop statement like this:
IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
BEGIN
DROP TABLE #Results;
END;
This statement correctly handles two situations:
On the first run when the table does not exist: do nothing.
On subsequent runs when the table does exist: drop the table.
Production scripts written by me would always use this method because it raises no errors in the two expected situations.
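As a side note not in the original question: on SQL Server 2016 and later, the conditional drop can be collapsed into a single statement that likewise raises no error when the table is absent:
DROP TABLE IF EXISTS #Results;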
Some equivalent scripts written by my fellow developers sometimes handle these two situations using exception handling:
BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
I believe in the database world it is better always to ask permission than seek forgiveness, so this method makes me uneasy.
The second method swallows an error while taking no action to handle non-exceptional behavior (table does not exist). Also, it is possible that an error would be raised for a reason other than that the table does not exist.
The Wise Owl warns about the same thing:
Of the two methods, the [OBJECT_ID method] is more difficult to understand but
probably better: with the [BEGIN TRY method], you run the risk of trapping
the wrong error!
But it does not explain what the practical risks are.
In practice, the BEGIN TRY method has never caused problems in systems I maintain, so I'm happy for it to stay there.
What possible dangers are there in managing temporary table existence using BEGIN TRY method? What unexpected errors are likely to be concealed by the empty catch block?
What possible dangers? What unexpected errors are likely to be concealed?
If the TRY...CATCH block runs inside a transaction, the error it swallows still leaves the transaction uncommittable, and the batch fails at COMMIT.
BEGIN
    BEGIN TRANSACTION t1;
    SELECT 1
    BEGIN TRY DROP TABLE #Results END TRY BEGIN CATCH END CATCH
    COMMIT TRANSACTION t1;
END
This batch will fail with an error like this:
Msg 3930, Level 16, State 1, Line 7
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
Books Online documents this behavior:
Uncommittable Transactions and XACT_STATE
If an error generated in a TRY block causes the state of the current transaction to be invalidated, the transaction is classified as an uncommittable transaction. An error that ordinarily ends a transaction outside a TRY block causes a transaction to enter an uncommittable state when the error occurs inside a TRY block. An uncommittable transaction can only perform read operations or a ROLLBACK TRANSACTION. The transaction cannot execute any Transact-SQL statements that would generate a write operation or a COMMIT TRANSACTION.
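A CATCH block can detect this doomed state explicitly; a minimal sketch using XACT_STATE(), not part of the original answer:
BEGIN TRANSACTION;
BEGIN TRY
    DROP TABLE #DoesNotExist;  -- fails; the transaction becomes uncommittable
END TRY
BEGIN CATCH
    IF XACT_STATE() = -1  -- -1 means the transaction can only be rolled back
        ROLLBACK TRANSACTION;
END CATCH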
Now replace the TRY/CATCH in the failing batch with the test method:
BEGIN
    BEGIN TRANSACTION t1;
    SELECT 1
    IF OBJECT_ID(N'tempdb.dbo.#Results', 'U') IS NOT NULL
    BEGIN
        DROP TABLE #Results;
    END;
    COMMIT TRANSACTION t1;
END
and run it again. The transaction will commit without any error.
A better solution may be to use a table variable rather than a temporary table, e.g.:
declare @results table (
    a INT NOT NULL,
    b INT NOT NULL,
    c INT NOT NULL
);
I also think that a TRY block is dangerous because it can hide an unexpected problem. Some programming languages can catch only selected errors and let unexpected ones propagate; if your programming language has this functionality, use it (T-SQL's CATCH cannot filter for a specific error).
For your scenario, I can say that I code exactly like you, with this TRY...CATCH block.
The desirable behavior would be:
begin try
drop table #my_temp_table
end try
begin catch __table_dont_exists_error__
end catch
But this doesn't exist! So you can write something like:
begin try
    drop table #my_temp_table
end try
begin catch
    declare @err_n int, @err_d varchar(MAX)
    SELECT
        @err_n = ERROR_NUMBER(),
        @err_d = ERROR_MESSAGE();
    IF @err_n <> 3701
        raiserror( @err_d, 16, 1 )
end catch
This re-raises the error whenever the failure to drop the table is something other than "table does not exist" (error 3701).
Notice that for your issue all this code isn't worth it, but it can be useful for other approaches. For your problem, the elegant solution is to drop the table only if it exists, or to use a table variable.
Not in your question, but possibly overlooked, are the resources used by the temp table. I always drop the table at the end of the script so it does not tie up resources. What if you put a million rows in the table? I also test for the table at the start of the script to handle the case where there was an error in the last run and the table was not dropped. If you want to reuse the temp table, then at least clear out the rows.
A table variable is another option. It is lighter weight and has limitations. Avoid a table variable if you are going to use it in a query join, as the query optimizer does not handle a table variable as well as it does a temp table.
SQL documentation:
If more than one temporary table is created inside a single stored procedure or batch, they must have different names.
If a local temporary table is created in a stored procedure or application that can be executed at the same time by several users, the Database Engine must be able to distinguish the tables created by the different users. The Database Engine does this by internally appending a numeric suffix to each local temporary table name. The full name of a temporary table as stored in the sysobjects table in tempdb is made up of the table name specified in the CREATE TABLE statement and the system-generated numeric suffix. To allow for the suffix, table_name specified for a local temporary name cannot exceed 116 characters.
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
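A quick illustration of the session scoping described above, using a hypothetical #Scratch table; run in a single query window, where each GO ends a batch but not the session:
CREATE TABLE #Scratch (x int);
GO
INSERT INTO #Scratch VALUES (1);  -- same session, so the table is still visible
GO
SELECT * FROM #Scratch;           -- returns the row inserted in the previous batch
GO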

Log custom error message using try and catch block in sql server

I'm trying to insert a duplicate value into a primary key column, which raises a primary key violation error. I want to log this error inside the CATCH block.
Code block:
SET XACT_ABORT OFF
BEGIN TRY
    BEGIN TRAN
    INSERT INTO #Calender VALUES (9, 'Oct')
    INSERT INTO #Calender VALUES (2, 'Unknown')
    COMMIT TRAN
END TRY
BEGIN CATCH
    INSERT INTO #LogError VALUES (1, 'Error while inserting a duplicate value')
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN
    RAISERROR('Error while inserting a duplicate value', 16, 20)
END CATCH
When I execute the above code, it prints out the custom error message from the CATCH block but doesn't insert the row into the #LogError table:
Error while inserting a duplicate value
But when I use SET XACT_ABORT ON I get a different error message, and it still doesn't insert the error message into the table:
The current transaction cannot be committed and cannot support operations
that write to the log file. Roll back the transaction.
My questions are:
1. How do I log the error into the table?
2. Why do I get a different error message when I set XACT_ABORT ON? Is it good practice to set XACT_ABORT ON before every transaction?
It does insert the record into #LogError, but then you roll back the transaction, which removes it.
You need to do the insert after the rollback, or insert into a table variable instead (table variables are not affected by a rollback).
When an error is encountered in the try block this can leave your transaction in a doomed state. You should test the value of XACT_STATE() (see example c in the TRY ... CATCH topic) in the catch block to check for this before doing anything that writes to the log or trying to commit.
When XACT_ABORT is ON, any error of severity > 10 in a TRY block will have this effect.
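Putting those points together, a hedged sketch of a corrected CATCH block, using the question's #Calender and #LogError tables (the @msg variable is the addition; it captures the error before the rollback, and the log insert happens after it):
SET XACT_ABORT OFF
BEGIN TRY
    BEGIN TRAN
    INSERT INTO #Calender VALUES (9, 'Oct')
    INSERT INTO #Calender VALUES (2, 'Unknown')
    COMMIT TRAN
END TRY
BEGIN CATCH
    DECLARE @msg nvarchar(4000) = ERROR_MESSAGE();  -- capture before rolling back
    IF @@TRANCOUNT > 0
        ROLLBACK TRAN  -- ends the transaction (and would have discarded any logged row)
    INSERT INTO #LogError VALUES (1, @msg)  -- after the rollback, so it sticks
    RAISERROR('Error while inserting a duplicate value', 16, 1)
END CATCH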
As SQL Server doesn't support autonomous transactions (nested, independent transactions), it's not possible to use a database table to log SP execution activity/error messages (in fact you can, under some conditions, use a CLR stored procedure with a custom connection string that opens its own, non-local connection).
To work around this missing functionality, I've developed a toolbox (100% T-SQL) based on an XML parameter passed by reference (an OUTPUT parameter) which is filled during SP execution and can be saved into a dedicated database table at the end.
Disclaimer: I'm an Iorga employee (cofounder) and I developed the following LGPL v3 toolbox. My objective is not Iorga/self-promotion but sharing knowledge and simplifying T-SQL developers' lives.
See, teach, and enhance SPLogger as you wish.
Today (October 19th, 2015) I released version 1.3, including a Unit Test System based on SPLogger.

Ignoring errors in Trigger

I have a stored procedure which is called inside a trigger on Insert/Update/Delete.
The problem is that there is a certain code block inside this SP which is not critical.
Hence I want to ignore any errors arising from this code block.
I inserted this code block inside a TRY CATCH block. But to my surprise I got the following error:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
Then I tried using SAVE & ROLLBACK TRANSACTION along with TRY CATCH, that too failed with the following error:
The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.
My server version is: Microsoft SQL Server 2008 (SP2) - 10.0.4279.0 (X64)
Sample DDL:
IF OBJECT_ID('TestTrigger') IS NOT NULL
DROP TRIGGER TestTrigger
GO
IF OBJECT_ID('TestProcedure') IS NOT NULL
DROP PROCEDURE TestProcedure
GO
IF OBJECT_ID('TestTable') IS NOT NULL
DROP TABLE TestTable
GO
CREATE TABLE TestTable (Data VARCHAR(20))
GO
CREATE PROC TestProcedure
AS
BEGIN
    SAVE TRANSACTION Fallback
    BEGIN TRY
        DECLARE @a INT = 1/0
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION Fallback
    END CATCH
END
GO
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
EXEC TestProcedure
END
GO
Code to replicate the error:
BEGIN TRANSACTION
INSERT INTO TestTable VALUES('data')
IF @@ERROR > 0
ROLLBACK TRANSACTION
ELSE
COMMIT TRANSACTION
GO
I was going through the same torment, and I just solved it!!!
Just add this single line at the very first step of your TRIGGER and you're going to be fine:
SET XACT_ABORT OFF;
In my case, I'm handling the error by feeding a specific table with the batch that caused the error and the error variables from SQL Server.
Inside a trigger, XACT_ABORT defaults to ON, so the entire transaction won't be committed even if you're handling the error inside a TRY...CATCH block (just as I'm doing). Setting its value to OFF allows the transaction to be committed even when an error occurs.
However, I didn't test it when the error is not handled...
For more info:
SET XACT_ABORT (Transact-SQL) | Microsoft Docs
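Applied to the question's repro, the answer's suggestion would look something like this (a sketch, not tested against every failure mode):
ALTER TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET XACT_ABORT OFF;  -- let the TRY...CATCH in the called procedure swallow the error
    EXEC TestProcedure;
END
GO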
I'd suggest re-architecting this so that you don't poison the original transaction - maybe have the trigger send a Service Broker message (or just insert relevant data into some form of queue table), so that the "non-critical" part can take place in a completely independent transaction.
E.g. your trigger becomes:
CREATE TRIGGER TestTrigger
ON TestTable
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    INSERT INTO QueueTable (Col1, Col2)
    SELECT COALESCE(i.Col1, d.Col1), COALESCE(i.Col2, d.Col2)
    FROM inserted i
    FULL OUTER JOIN deleted d
        ON i.Col1 = d.Col1 -- assumes Col1 is the key; a comma (cross) join would return no rows for pure INSERTs or DELETEs
END
GO
You shouldn't do anything inside a trigger that might fail, unless you do want to force the transaction that initiated the trigger action to also fail.
This is a very similar question to Why try catch does not suppress exception in trigger
Also see the answer here T-SQL try catch transaction in trigger
I don't think you can use savepoints inside a trigger. I mean, you can, but I googled it and saw a few people saying that they don't work. If you replace your SAVE TRANSACTION with a BEGIN TRANSACTION, it compiles. Of course that is not necessary, because you have the outer transaction control and the inner rollback would roll back everything.

SQL Server - results of one statement in transaction immediately visible to the next one?

This:
use test;
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRANSACTION;
EXEC sp_RENAME 'table1.asd' , 'ads', 'COLUMN';
INSERT INTO table1 (ads) VALUES (12);
COMMIT
is a simple example that demonstrates what I would like to do.
I want to alter the table in some way and perform inserts/deletes in one transaction (or other modifications to the table).
The problem is that the results from sp_RENAME are never immediately visible to the INSERT statement. I've played with different transaction isolation levels - it's always the same (therefore the transaction never commits).
Normally I would just use GO statements for this to be in separate batches, but I need that in one batch, because...
My real task is to write a script that adds identity and FK to a table (this requires creating another table with the new schema, performing identity inserts from the old one, renaming the table and applying constraints). I need to play it safe - if any part of the procedure fails I have to rollback the whole transaction. This is why I wanted to do something like this:
BEGIN TRAN
--some statement
IF (@@ERROR <> 0) GOTO ERR_HANDLER
-- some other statement
IF (@@ERROR <> 0) GOTO ERR_HANDLER
COMMIT TRAN
RETURN 0
ERR_HANDLER:
PRINT 'Unexpected error occurred!'
ROLLBACK TRAN
RETURN 1
Since labels work only inside a batch I cannot use GO statements.
So how can I:
make statements (i.e. ALTER TABLE, sp_RENAME) have an immediate effect?
or
write the whole solution some other way so that it is safe to run in a production DB?
The issue is that compilation of the batch fails when it encounters the reference to the renamed column, so the entire batch never gets executed - it's not that the effects of the transaction are not visible.
You can put your statements referencing the new name of the column in an EXEC('') block to defer compilation until after the column is renamed.
EXEC sp_rename 'table1.asd' , 'ads', 'COLUMN';
EXEC('INSERT INTO table1 (ads) VALUES (12);')
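To get the asker's all-or-nothing behavior as well, the deferred statement can be combined with XACT_ABORT, which rolls the whole transaction back on any runtime error; a minimal sketch using the question's table1:
SET XACT_ABORT ON;  -- any runtime error aborts and rolls back the whole transaction
BEGIN TRANSACTION;
EXEC sp_rename 'table1.asd', 'ads', 'COLUMN';
EXEC('INSERT INTO table1 (ads) VALUES (12);');
COMMIT TRANSACTION;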
