I have a query that I want to execute via a linked server. The query looks like this:
USE db1;
SET XACT_ABORT ON;
DECLARE @statement NVARCHAR(MAX);
SET @statement = 'EXECUTE (''INSERT INTO T1(V1, V2) VALUES (1, 2)'') AT LS1';
BEGIN TRY
    BEGIN TRANSACTION
    EXEC sp_executesql @statement
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF XACT_STATE() = -1
    BEGIN
        PRINT ERROR_MESSAGE()
        ROLLBACK TRANSACTION
    END
    IF XACT_STATE() = 1
    BEGIN
        PRINT 'COMMIT OPEN TRANSACTION'
        COMMIT TRANSACTION
    END
    INSERT INTO tblerrmsg (errornumber, errorseverity, errorstate, errorline, errormessage) EXECUTE usp_geterrorinfo;
END CATCH
This fails with an entry in my TblErrMsg table.
ErrorNumber = 8501, ErrorSeverity = 16, ErrorState = 3, ErrorLine = 1, ErrorMessage = MSDTC on server 'XXX' is unavailable.
So I researched the specific error message and checked whether the Distributed Transaction Coordinator service was running on the server; it was already running, and even a restart of the service did not change anything. Next I tried removing the transaction and executing the following instead:
USE db1;
DECLARE @statement NVARCHAR(MAX);
SET @statement = 'EXECUTE (''INSERT INTO T1(V1, V2) VALUES (1, 2)'') AT LS1';
BEGIN TRY
    EXEC sp_executesql @statement
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE()
END CATCH
And this time it worked: there were no errors and the INSERT went through. So I'm wondering what the problem really is. Apparently there is no issue with executing the statement itself, nor with the linked server connection.
Has anyone ever had a similar problem, or has an explanation for this behavior?
It's possible you're missing the keyword DISTRIBUTED when specifying the explicit transaction.
Instead of
BEGIN TRANSACTION
Try
BEGIN DISTRIBUTED TRANSACTION
According to the Docs this is the way to specify "... the start of a Transact-SQL distributed transaction managed by Microsoft Distributed Transaction Coordinator (MS DTC)"
Without an explicit transaction, the default behavior (AFAIK) is that the work done across the linked server is not guaranteed to be atomic and cannot be rolled back; that is why it works when no explicit transaction is specified.
When BEGIN TRANSACTION is specified without DISTRIBUTED, the statement asks for something that is not possible once the batch contains a linked server reference.
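For illustration, here is the original batch with the keyword added. This is only a sketch: MS DTC still has to be running and reachable (with network access allowed) on both servers, otherwise the 8501 error will remain.
USE db1;
SET XACT_ABORT ON;
DECLARE @statement NVARCHAR(MAX);
SET @statement = 'EXECUTE (''INSERT INTO T1(V1, V2) VALUES (1, 2)'') AT LS1';
BEGIN TRY
    BEGIN DISTRIBUTED TRANSACTION;  -- promotes the work to an MS DTC transaction
    EXEC sp_executesql @statement;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
    PRINT ERROR_MESSAGE();
END CATCH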
Related
I have a larger stored procedure which utilizes several TRY/CATCH blocks in order to catch and log individual errors. I have also wrapped a transaction around the entire contents of the procedure, so as to be able to roll back the entire thing in the event of an error raised somewhere along the way (in order to prevent a lot of messy cleanup); XACT_ABORT has been enabled since it would otherwise not roll back the entire transaction.
Key component:
There is a table in my database which gets a record inserted each time this procedure is run with the results of operations and details on what went wrong.
Funny thing is happening - actually, when I finally figured out what was wrong, it was pretty obvious... the insert statement into my log table is getting rolled back as well; hence, if I am not running this out of SSMS, I will not be able to see that this was even run, as the rollback removes all traces of activity.
Question:
Would it be possible to have the entire transaction roll back with the exception of this single insert statement? I would still want to preserve the error message which I compile during the running of the stored procedure.
Thanks so much!
~Eli
Update 6/28
Here's a code sample of what I'm looking at. The key difference between this and the samples posted by @Alex and @gameiswar is that in my case, the try/catch blocks are all nested inside the single transaction. The purpose of this is to have multiple catches (for the multiple tables), though we would want the entire mess to be rolled back even if only the last update failed.
SET XACT_ABORT ON;
BEGIN TRANSACTION
DECLARE @message AS VARCHAR(MAX) = '';
-- TABLE 1
BEGIN TRY
    UPDATE xx
    SET yy = zz
END TRY
BEGIN CATCH
    SET @message = 'TABLE 1 ' + ERROR_MESSAGE();
    INSERT INTO LOGTABLE
    SELECT
        GETDATE(),
        @message
    RETURN;
END CATCH
-- TABLE 2
BEGIN TRY
    UPDATE sss
    SET tt = xyz
END TRY
BEGIN CATCH
    SET @message = 'TABLE 2 ' + ERROR_MESSAGE();
    INSERT INTO LOGTABLE
    SELECT
        GETDATE(),
        @message
    RETURN;
END CATCH
COMMIT TRANSACTION
You can try something like the below, which ensures you log the operation. This takes advantage of the fact that table variables don't get rolled back.
Pseudo code only, to give you the idea:
create table test1
(
    id int primary key
)
create table logg
(
    errmsg varchar(max)
)
declare @errmsg varchar(max)
set xact_abort on
begin try
    begin tran
    insert into test1
    select 1
    insert into test1
    select 1
    commit
end try
begin catch
    set @errmsg = ERROR_MESSAGE()
    select @errmsg as "in block"
    if @@trancount > 0
        rollback tran
end catch
set xact_abort off
select @errmsg as "after block";
insert into logg
select @errmsg
select * from logg
OK... I was able to solve this using a combination of the great suggestions put forth by Alex and GameisWar, with the addition of the T-SQL GOTO control flow statement.
The basic idea was to store the error message in a variable (which survives a rollback), then have the CATCH send you to a FAILURE label which does the following:
Rollback the transaction
Insert a record into the log table, using the data from the aforementioned variable
Exit the stored procedure
I also use a second GOTO statement to make sure that a successful run will skip over the FAILURE section and commit the transaction.
Below is a code snippet of what the test SQL looked like. It worked like a charm, and I have already implemented this and tested it (successfully) in our production environment.
I really appreciate all the help and input!
SET XACT_ABORT ON
DECLARE @MESSAGE VARCHAR(MAX) = '';
BEGIN TRANSACTION
BEGIN TRY
    INSERT INTO TEST_TABLE VALUES ('TEST'); -- WORKS FINE
END TRY
BEGIN CATCH
    SET @MESSAGE = 'ERROR - SECTION 1: ' + ERROR_MESSAGE();
    GOTO FAILURE;
END CATCH
BEGIN TRY
    INSERT INTO TEST_TABLE VALUES ('TEST2'); -- WORKS FINE
    INSERT INTO TEST_TABLE VALUES ('ANOTHER TEST'); -- ERRORS OUT, DATA WOULD BE TRUNCATED
END TRY
BEGIN CATCH
    SET @MESSAGE = 'ERROR - SECTION 2: ' + ERROR_MESSAGE();
    GOTO FAILURE;
END CATCH
GOTO SUCCESS;
FAILURE:
ROLLBACK
INSERT INTO LOGG SELECT @MESSAGE
RETURN;
SUCCESS:
COMMIT TRANSACTION
I don't know the details, but IMHO the general logic can be like this.
--set XACT_ABORT ON --do not include it
declare @result varchar(max) --collect details in case you need it
begin transaction
begin try
    --your logic here
    --if something is wrong: RAISERROR(...@result)
    --everything OK
    commit
end try
begin catch
    --collect error_message() and other info into @result
    rollback
end catch
insert into log(result) values (@result)
I am new to T-SQL programming. I need to write a main procedure that executes multiple transactions. How can I structure the program so that an error in one transaction does not abort the whole run? Instead, the procedure should raise the error and report it back to the main program in output parameters after all the transactions finish running. Please provide me with pseudo code if you can. Thanks.
You need to follow the template from Exception handling and nested transactions
create procedure [usp_my_procedure_name]
as
begin
    set nocount on;
    declare @trancount int;
    set @trancount = @@trancount;
    begin try
        if @trancount = 0
            begin transaction
        else
            save transaction usp_my_procedure_name;
        -- Do the actual work here
lbexit:
        if @trancount = 0
            commit;
    end try
    begin catch
        declare @error int, @message varchar(4000), @xstate int;
        select @error = ERROR_NUMBER(), @message = ERROR_MESSAGE(), @xstate = XACT_STATE();
        if @xstate = -1
            rollback;
        if @xstate = 1 and @trancount = 0
            rollback
        if @xstate = 1 and @trancount > 0
            rollback transaction usp_my_procedure_name;
        raiserror ('usp_my_procedure_name: %d: %s', 16, 1, @error, @message);
    end catch
end
go
As you can see, you can't always continue, because sometimes the exception has already aborted the transaction by the time you catch it (the typical example being deadlock exception 1205). You must also use a savepoint and revert to the savepoint in case of an exception, to keep the database consistent. However, you do not abort the caller's work, if possible.
You could use try/catch
BOL - TRY/CATCH
Here's an example
I have previously encapsulated logic into stored procedures and put in exec statements in the TRY/CATCH block. In the CATCH you can use this link to get error information (example B in the link)
BOL - ERROR_MESSAGE
Something similar to -
BEGIN TRY
BEGIN TRAN
EXEC StoredProcedure01
EXEC StoredProcedure02
COMMIT
END TRY
BEGIN CATCH
ROLLBACK TRAN
SELECT
ERROR_NUMBER() AS ErrorNumber
,ERROR_SEVERITY() AS ErrorSeverity
,ERROR_STATE() AS ErrorState
,ERROR_PROCEDURE() AS ErrorProcedure
,ERROR_LINE() AS ErrorLine
,ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO
I might consider trying this. Each transaction is a separate stored proc, with an overall stored proc that calls each in turn. Save the error information you want in a table variable. In the catch block of each proc, roll back that transaction and then insert the data from the table variable with the error information into a logging table. Do not return a failure to the calling proc.
If you want to report the errors out of the main proc in real time, you can do a select from the logging table at the end.
It would work best if you create a batchid at the start of the calling proc, have that be an input variable to each of the procs you call, and also include it in the information you add to the logging table. Then if the procs fail multiple times during non-working hours, you have the errors for all of them and can see the batch they were associated with. This helps tremendously in tracking down problems.
You will need to give some thought as to what information you will want for each proc when designing your logging table. I would suggest that part of what you store is any input variables that are sent into the proc. Also, if you are using dynamic SQL, then store the generated SQL as well. If you can identify the user who ran the proc, that too is useful for tracking down permissions issues, for instance.
Having a logging table is far more useful than just returning errors at run time. You can look for trends, see if the same error is frequently happening, look at the information that caused failures and the information that succeeded if you choose to also log the variables for successful runs.
None of this is out-of-the-box, easy-to-write code. It requires a great deal of design on your part to determine exactly what information will be useful over time in troubleshooting issues with the process. How detailed you need to get is a business decision based on both how the data is used and what you will want to know about a failure. As such, we cannot make this determination for you.
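As a rough, hypothetical sketch of the shape being described (every table, procedure and parameter name here is made up), each worker proc swallows its own error, logs it with the batch id, and the caller reports at the end:
CREATE TABLE dbo.ProcErrorLog (
    LogID        int IDENTITY(1,1) PRIMARY KEY,
    BatchID      uniqueidentifier NOT NULL,
    ProcName     sysname NOT NULL,
    InputParams  nvarchar(max) NULL,
    ErrorNumber  int NULL,
    ErrorMessage nvarchar(4000) NULL,
    LoggedAt     datetime2 NOT NULL DEFAULT SYSDATETIME()
);
GO
CREATE PROCEDURE dbo.usp_Step1
    @BatchID uniqueidentifier,
    @SomeParam int
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        -- real work for this step goes here
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        INSERT INTO dbo.ProcErrorLog (BatchID, ProcName, InputParams, ErrorNumber, ErrorMessage)
        VALUES (@BatchID, OBJECT_NAME(@@PROCID), CONCAT('@SomeParam=', @SomeParam),
                ERROR_NUMBER(), ERROR_MESSAGE());
        -- deliberately do NOT rethrow, so the calling proc keeps going
    END CATCH
END
GO
CREATE PROCEDURE dbo.usp_Main
AS
BEGIN
    DECLARE @BatchID uniqueidentifier = NEWID();
    EXEC dbo.usp_Step1 @BatchID = @BatchID, @SomeParam = 1;
    EXEC dbo.usp_Step1 @BatchID = @BatchID, @SomeParam = 2;
    -- report everything that went wrong in this batch, in real time, at the end
    SELECT * FROM dbo.ProcErrorLog WHERE BatchID = @BatchID;
END
GO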
I have a stored procedure that inserts data into a table. This procedure is called from an ASP.NET application, which handles the transaction start, commit and rollback functionality. Inside the stored procedure there is no transaction.
In this scenario my application is working fine and it is hosted live. Now, inside the stored procedure, I have to add new functionality that inserts into another table in another database via a linked server, and if an error occurs I have to store it in the database.
We want to implement this insertion in such a way that the existing stored procedure keeps working as before.
Note that if an error comes from the linked server insertion, the entire process is rolled back, and a savepoint does not work either. We could do this in .NET code, but this affects more than 40 modules, so I have to do it in the stored procedure. So, how can we implement this?
MSDN SAVE TRANSACTION
CREATE PROCEDURE [dbo].[SaveCustomer]
    @Firstname nvarchar(50),
    @Lastname nvarchar(50)
AS
DECLARE @ReturnCode int = 1 -- 1 - success
    ,@NewID int
    ---------------------------------------------- These variables are for TRY/CATCH RAISERROR and BEGIN TRAN/SAVE POINT use
    ,@tranCounter int -- @tranCounter > 0 means an active transaction was started before the procedure was called.
    ,@errorMessage nvarchar(4000) -- echo error information to the caller. Message text.
    ,@errorSeverity int -- Severity.
    ,@errorState int; -- State
SET @tranCounter = @@TRANCOUNT;
IF @tranCounter > 0
    SAVE TRANSACTION SaveCustomer_Tran;
ELSE
    BEGIN TRANSACTION;
BEGIN TRY
    INSERT dbo.Customer (
        Firstname
        ,Lastname)
    VALUES (
        @Firstname
        ,@Lastname)
    SET @NewID = scope_identity()
    IF @tranCounter = 0
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @tranCounter = 0
        ROLLBACK TRANSACTION;
    ELSE
        IF XACT_STATE() <> -1
            ROLLBACK TRANSACTION SaveCustomer_Tran;
    SELECT @errorMessage = 'Error in [SaveCustomer]: ' + ERROR_MESSAGE(), @errorSeverity = ERROR_SEVERITY(), @errorState = ERROR_STATE();
    RAISERROR (@errorMessage, @errorSeverity, @errorState);
    SET @ReturnCode = -1
END CATCH
SELECT @ReturnCode as [ReturnCode], @NewID as [NewID]
GO
There are two accepted ways to escape the current transaction rollback tarpit.
One way is to use a loopback connection. Either via a linked server, or from SQLCLR, connect back to the current server and write the INSERT, making sure DTC enlistment is not allowed. As the loopback connection is a different transaction, you can safely roll back the original transaction and preserve the INSERT. This approach has the drawback that it is very easy to deadlock yourself.
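A minimal sketch of the linked-server flavor of this, assuming a made-up loopback linked server named LOOPBACK and a hypothetical dbo.ErrorLog table; the essential setting is 'remote proc transaction promotion' = false, so the remote INSERT is not enlisted (via DTC) in the local transaction and therefore survives the local rollback:
-- one-time setup (all names are illustrative)
EXEC sp_addlinkedserver @server = N'LOOPBACK', @srvproduct = N'', @provider = N'SQLNCLI', @datasrc = @@SERVERNAME;
EXEC sp_serveroption N'LOOPBACK', N'rpc out', N'true';
EXEC sp_serveroption N'LOOPBACK', N'remote proc transaction promotion', N'false';
GO
-- inside the transaction that will later be rolled back: this INSERT commits
-- on the loopback connection in its own transaction, so the rollback of the
-- local transaction does not remove it
EXEC (N'INSERT INTO dbo.ErrorLog (message) VALUES (''something went wrong'')') AT LOOPBACK;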
The other, less known, way is to use sp_trace_generateevent to fire off a user-configurable event. This may not seem like much, but you can create event notifications for these events and then implement handling that does the INSERT when processing the event. E.g. see Sql progress logging in transaction. This approach has the drawback of being difficult.
I am in the process of creating a stored procedure. This stored procedure runs local as well as external stored procedures. For simplicity, I'll call the local server [LOCAL] and the remote server [REMOTE].
Here's a simple topology:
The procedure
USE [LOCAL]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[monthlyRollUp]
AS
SET NOCOUNT, XACT_ABORT ON
BEGIN TRY
EXEC [REMOTE].[DB].[table].[sp]
--This transaction should only begin if the remote procedure does not fail
BEGIN TRAN
EXEC [LOCAL].[DB].[table].[sp1]
COMMIT
BEGIN TRAN
EXEC [LOCAL].[DB].[table].[sp2]
COMMIT
BEGIN TRAN
EXEC [LOCAL].[DB].[table].[sp3]
COMMIT
BEGIN TRAN
EXEC [LOCAL].[DB].[table].[sp4]
COMMIT
END TRY
BEGIN CATCH
-- Insert error into log table
INSERT INTO [dbo].[log_table] (stamp, errorNumber,
errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(),
ERROR_LINE(), ERROR_MESSAGE()
END CATCH
GO
When using a transaction on the remote procedure, it throws this error:
OLE DB provider ... returned message "The partner transaction manager has disabled its support for remote/network transactions.".
I get that I'm unable to run a transaction locally for a remote procedure.
How can I ensure that this procedure will exit and roll back if any part of the procedure fails?
Notes
With regards to combining the simple procedures, some of them are used individually.
IMO the easiest way is to:
Add a return value to the remote proc.
Wrap the remote proc's body in a transaction and TRY/CATCH (inside the remote proc). If an error happens, return false.
In the local stored proc, if false, simply do not continue.
I also fail to understand the reason behind the multiple BEGIN TRAN / COMMIT pairs in the local proc. I mean, if this is a month-end rollup, shouldn't this be one big transaction rather than a bunch of small ones? Otherwise your transactions 1 and 2 may commit successfully, but 3 will fail, and that's that.
Names are made up, of course:
CREATE PROC [remote].db.REMOTE_PROC (
    @return_value int OUTPUT
)
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRAN
        -- ... do stuff ...
        SET @return_value = 1;
        COMMIT;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK; -- clean up the aborted transaction before leaving the proc
        SET @return_value = 0;
    END CATCH
END
and the local proc
CREATE PROC [local].db.[monthlyRollUp] AS
BEGIN
    SET XACT_ABORT ON;
    DECLARE @ret int;
    EXECUTE [remote].dbo.REMOTE_PROC @return_value = @ret OUTPUT;
    IF @ret = 0
    BEGIN
        PRINT 'ERROR :('
        RETURN
    END
    BEGIN TRAN
    -- one big transaction here
    EXEC [LOCAL].[DB].[table].[sp1];
    EXEC [LOCAL].[DB].[table].[sp2];
    EXEC [LOCAL].[DB].[table].[sp3];
    EXEC [LOCAL].[DB].[table].[sp4];
    COMMIT;
END;
AFAIR, [remote].dbo.REMOTE_PROC runs in its own transaction space and returns 1 if successful. The local proc checks the return value and decides whether to proceed or not.
sp1, sp2, sp3 and sp4 all run in one single transaction, as having a separate transaction for each of them does not really make much sense to me.
You can try executing each stored procedure in a separate TRY/CATCH block and checking for the corresponding ERROR_NUMBER in each CATCH block. If ERROR_NUMBER() matches the error you are getting, you can simply return or RAISERROR as per your requirement.
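A minimal sketch of that idea, reusing the question's placeholder names (7391 below is only an illustrative error number for a failed distributed operation; substitute the number you actually see, and note that THROW needs SQL Server 2012+):
BEGIN TRY
    EXEC [REMOTE].[DB].[table].[sp]
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 7391  -- the specific error you expect from the remote call
        RETURN;               -- or log it, per your requirement
    THROW;                    -- rethrow anything unexpected
END CATCH
BEGIN TRY
    EXEC [LOCAL].[DB].[table].[sp1]
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 7391
        RETURN;
    THROW;
END CATCH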
Is it causing a fatal error? Please check what the error severity in the exception is.
I might be a little unclear on what you want. If you need the entire monthlyRollUp SP to roll back on a failure of either the remote or local procedures, then you will need a distributed transaction coordinator. This will allow the servers to communicate the information about the transaction and coordinate the commits. I.e., both servers have to indicate that all necessary locks were gained and then coordinate commits on both servers so that the operation is atomic. Here is one example of setting up a DTC:
http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/
If you don't want the remote procedures to participate/affect the transaction, you can try setting:
SET REMOTE_PROC_TRANSACTIONS OFF;
http://msdn.microsoft.com/en-us/library/ms178549%28SQL.90%29.aspx
I haven't used that setting before though so I'm not sure if it will accomplish what you need.
If you can't or don't want to use DTC, and don't want to use CLR, then you need to call the remote sp last, as you won't be able to roll back the remote sp call.
SET NOCOUNT, XACT_ABORT ON
SET REMOTE_PROC_TRANSACTIONS OFF;
BEGIN TRY
DECLARE @ret INT
BEGIN TRAN
--Perform these in a transaction, so they all rollback together
EXEC [LOCAL].[DB].[table].[sp1]
EXEC [LOCAL].[DB].[table].[sp2]
EXEC [LOCAL].[DB].[table].[sp3]
EXEC [LOCAL].[DB].[table].[sp4]
--We call remote sp last so that if it fails we rollback the above transactions
--We'll have to assume that remote sp takes care of itself on error.
EXEC [REMOTE].[DB].[table].[sp]
COMMIT
END TRY
BEGIN CATCH
--We rollback
ROLLBACK
-- Insert error into log table
INSERT INTO [dbo].[log_table] (stamp, errorNumber,
errorSeverity, errorState, errorProcedure, errorLine, errorMessage)
SELECT GETDATE(), ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(),ERROR_PROCEDURE(),
ERROR_LINE(), ERROR_MESSAGE()
END CATCH
If the local sp's depend on results from the remote stored procedure, then you can use a CLR sp (will need EXTERNAL_ACCESS permissions) and manage the transactions explicitly (basically, a roll your own DTC, but no two-phase commit. You're effectively delaying the remote commit.)
//C# fragment to roll your own "DTC" This is not true two-phase commit, but
//may be sufficient to meet your needs. The edge case is that if you get an error
//while trying to commit the remote transaction, you cannot roll back the local tran.
using(SqlConnection cnRemote = new SqlConnection("<cnstring to remote>"))
{
try {
cnRemote.Open();
//Start remote transaction and call remote stored proc
SqlTransaction trnRemote = cnRemote.BeginTransaction("RemoteTran");
SqlCommand cmdRemote = cnRemote.CreateCommand();
cmdRemote.Connection = cnRemote;
cmdRemote.Transaction = trnRemote;
cmdRemote.CommandType = CommandType.StoredProcedure;
cmdRemote.CommandText = "[dbo].[sp1]";
cmdRemote.ExecuteNonQuery();
using(SqlConnection cnLocal = new SqlConnection("context connection=true"))
{
cnLocal.Open();
SqlTransaction trnLocal = cnLocal.BeginTransaction("LocalTran");
SqlCommand cmdLocal = cnLocal.CreateCommand();
cmdLocal.Connection = cnLocal;
cmdLocal.Transaction = trnLocal;
cmdLocal.CommandType = CommandType.StoredProcedure;
cmdLocal.CommandText = "[dbo].[sp1]";
cmdLocal.ExecuteNonQuery();
cmdLocal.CommandText = "[dbo].[sp2]";
cmdLocal.ExecuteNonQuery();
cmdLocal.CommandText = "[dbo].[sp3]";
cmdLocal.ExecuteNonQuery();
cmdLocal.CommandText = "[dbo].[sp4]";
cmdLocal.ExecuteNonQuery();
//Commit local transaction
trnLocal.Commit();
}
//Commit remote transaction
trnRemote.Commit();
} // try
catch (Exception ex)
{
//Cleanup stuff goes here. rollback remote tran if needed, log error, etc.
}
}
I'm having a similar issue to The current transaction cannot be committed and cannot support operations that write to the log file, but I have a follow-up question.
The answer there references Using TRY...CATCH in Transact-SQL, which I'll come back to in a second...
My code (inherited, of course) has the simplified form:
SET NOCOUNT ON
SET XACT_ABORT ON
CREATE TABLE #tmp
SET @transaction = 'insert_backtest_results'
BEGIN TRANSACTION @transaction
BEGIN TRY
--do some bulk insert stuff into #tmp
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION @transaction
SET @errorMessage = 'bulk insert error importing results for backtest '
+ CAST(@backtest_id as VARCHAR) +
'; check backtestfiles$ directory for error files ' +
' error_number: ' + CAST(ERROR_NUMBER() AS VARCHAR) +
' error_message: ' + CAST(ERROR_MESSAGE() AS VARCHAR(200)) +
' error_severity: ' + CAST(ERROR_SEVERITY() AS VARCHAR) +
' error_state ' + CAST(ERROR_STATE() AS VARCHAR) +
' error_line: ' + CAST(ERROR_LINE() AS VARCHAR)
RAISERROR(#errorMessage, 16, 1)
RETURN -666
END CATCH
BEGIN TRY
EXEC usp_other_stuff_1 @whatever
EXEC usp_other_stuff_2 @whatever
-- a LOT of "normal" logic here... inserts, updates, etc...
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION @transaction
SET @errorMessage = 'error importing results for backtest '
+ CAST(@backtest_id as VARCHAR) +
' error_number: ' + CAST(ERROR_NUMBER() AS VARCHAR) +
' error_message: ' + CAST(ERROR_MESSAGE() AS VARCHAR(200)) +
' error_severity: ' + CAST(ERROR_SEVERITY() AS VARCHAR) +
' error_state ' + CAST(ERROR_STATE() AS VARCHAR) +
' error_line: ' + CAST(ERROR_LINE() AS VARCHAR)
RAISERROR(#errorMessage, 16, 1)
RETURN -777
END CATCH
RETURN 0
I think I have enough information to just play with it and figure it out myself... unfortunately reproducing the error is proving damn near impossible. So I'm hoping that asking here will help clarify my understanding of the problem and solution.
This stored procedure is, intermittently, throwing errors like this one:
error importing results for backtest 9649 error_number: 3930 error_message: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction. error_severity: 16 error_state 1 error_line: 217
So obviously the error is coming from the 2nd catch block
Based on what I've read in Using TRY...CATCH in Transact-SQL, I think what's happening is that when the exception is thrown, the use of XACT_ABORT is causing the transaction to be "terminated and rolled back"... and then the first line of the BEGIN CATCH is blindly attempting to roll back again.
I don't know why the original developer enabled XACT_ABORT, so I'm thinking the better solution (than removing it) would be to use XACT_STATE() to only roll back if there is a transaction (<>0). Does that sound reasonable? Am I missing something?
Also, the mention of logging in the error message makes me wonder: Is there another problem, potentially with configuration? Is our use of RAISERROR() in this scenario contributing to the problem? Does that get logged, in some sort of case where logging isn't possible, as the error message alludes to?
You always need to check XACT_STATE(), regardless of the XACT_ABORT setting. I have an example of a template for stored procedures that need to handle transactions in the TRY/CATCH context at Exception handling and nested transactions:
create procedure [usp_my_procedure_name]
as
begin
    set nocount on;
    declare @trancount int;
    set @trancount = @@trancount;
    begin try
        if @trancount = 0
            begin transaction
        else
            save transaction usp_my_procedure_name;
        -- Do the actual work here
lbexit:
        if @trancount = 0
            commit;
    end try
    begin catch
        declare @error int, @message varchar(4000), @xstate int;
        select @error = ERROR_NUMBER(),
               @message = ERROR_MESSAGE(),
               @xstate = XACT_STATE();
        if @xstate = -1
            rollback;
        if @xstate = 1 and @trancount = 0
            rollback
        if @xstate = 1 and @trancount > 0
            rollback transaction usp_my_procedure_name;
        raiserror ('usp_my_procedure_name: %d: %s', 16, 1, @error, @message);
    end catch
end
There are a few misunderstandings in the discussion above.
First, you can always ROLLBACK a transaction... no matter what the state of the transaction. So you only have to check the XACT_STATE before a COMMIT, not before a rollback.
As for the error in the code, you will want to put the transaction inside the TRY. Then in your CATCH, the first thing you should do is the following:
IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION @transaction
Then, after the statement above, then you can send an email or whatever is needed. (FYI: If you send the email BEFORE the rollback, then you will definitely get the "cannot... write to log file" error.)
This issue was from last year, so I hope you have resolved this by now :-)
Remus pointed you in the right direction.
As a rule of thumb... the TRY will immediately jump to the CATCH when there is an error. Then, when you're in the CATCH, you can use the XACT_STATE to decide whether you can commit. But if you always want to ROLLBACK in the catch, then you don't need to check the state at all.
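Applied to the question's code, a minimal sketch of that ordering (roll back first, only then build the message and raise it; the names come from the original snippet and the body is trimmed to a placeholder):
DECLARE @transaction VARCHAR(50) = 'insert_backtest_results';
DECLARE @errorMessage VARCHAR(MAX);
BEGIN TRY
    BEGIN TRANSACTION @transaction
    -- bulk insert and the rest of the work goes here
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION  -- roll back before doing anything else
    SET @errorMessage = 'error importing results: ' + ERROR_MESSAGE();
    RAISERROR(@errorMessage, 16, 1);  -- safe to raise/notify only after the rollback
END CATCH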
I have encountered this error while updating records in a table which has a trigger enabled.
For example - I have a trigger 'Trigger1' on table 'Table1'.
When I tried to update 'Table1' using the update query, it threw the same error. This is because if you are updating more than one record in your query, 'Trigger1' will throw this error, as it doesn't support updating multiple entries while it is enabled on the same table.
I tried disabling the trigger before the update, then performed the update operation, and it completed without any error.
DISABLE TRIGGER Trigger1 ON Table1;
-- update query here
ENABLE TRIGGER Trigger1 ON Table1;
I encountered a similar issue to the above and was receiving the same error message. The above answers were helpful but not quite what I needed, which was actually a bit simpler.
I had a stored procedure that was structured as below:
SET XACT_ABORT ON
BEGIN TRY
--Stored procedure logic
BEGIN TRANSACTION
--Transaction logic
COMMIT TRANSACTION
--More stored procedure logic
END TRY
BEGIN CATCH
--Handle errors gracefully
END CATCH
TRY...CATCH was used to handle errors in the stored procedure logic. Just one part of the procedure contained a transaction, and if an error occurred during this it would not get picked up by the CATCH block, but would error out with the SQL Transaction Error message.
This was resolved by adding another TRY...CATCH wrapper that would ROLLBACK the transaction and THROW the error. This meant any errors in this step could be handled gracefully in the main CATCH block, as per the rest of the stored procedure.
SET XACT_ABORT ON
BEGIN TRY
--Stored procedure logic
BEGIN TRY
BEGIN TRANSACTION;
--Transaction logic
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK;
THROW;
END CATCH
--More stored procedure logic
END TRY
BEGIN CATCH
--Handle errors gracefully
END CATCH
None of this helped me so here is what fixed my issue.
A teammate configured a server trigger to monitor DDL changes.
Once I disabled it, I could install the package; then I enabled it again and the package is still working.
Had the exact same error in a procedure.
It turns out the user running it (a technical user in our case) did not have sufficient rights to create a temporary table.
EXEC sp_addrolemember 'db_ddladmin', 'username_here';
did the trick