I have a SQL Server stored procedure that is returning me the very common error
"db_ErrorCode Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0."
What I've found after Googling this is that it's really saying there's an error happening before the transaction is committed.
There's a
BEGIN TRY
BEGIN TRANSACTION
At the beginning of the SP, and
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
SELECT @ErrorNumber = ERROR_NUMBER(),
@ErrorLine = ERROR_LINE(),
@ErrorMessage = ERROR_MESSAGE()
RAISERROR (@Flag, 18, 120);
END CATCH
END
The problem is that there are about 1100 lines of code in between those lines, and if there's a problem, the entire SP needs to be rolled back, so we can't put TRY/CATCH statements in between. And why does my final CATCH block not return the actual error, instead of giving me that unhelpful transaction count error?
I can't answer why SQL Server gives the error messages it does, especially not without seeing the whole 1100 lines of code. However, if you want to know how to pinpoint the error, I can give you some hints.
First, in any large stored proc I always have a @Debug variable as an input parameter. Make it the last parameter and give it a default value of 0 (not in debug mode). If it has a default value, adding it as the last parameter should not break existing calls of the code.
When you want to debug, you can then add tests or results that show you what steps have been completed or what the results of various operations were. Wrap these steps in if statements like
IF @Debug = 1
BEGIN
<add your tests here>
END
You may add this code after every significant step in the proc, or have one block with multiple checks later in the proc, or both. I tend to put in checks that show the state of the data as the proc progresses, and ones that show what the final results should be.
That code will only execute when you are in debug mode. The kinds of things you might put in include printing or selecting the variables at that point in the proc, printing the name of the step you are on, running the SELECT that would normally be the basis of an INSERT, showing the results after an operation, etc.
Another thing you can do is create a table variable to store the steps as they complete. Table variables stay in scope after a rollback, so you can then select from it and see which steps completed before the rollback.
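A minimal sketch of that table-variable technique (the table and step names here are hypothetical, not from the original proc):

```sql
-- Hypothetical illustration: table variables keep their rows after a rollback
DECLARE @StepLog TABLE (StepName VARCHAR(100), LoggedAt DATETIME DEFAULT GETDATE());

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO @StepLog (StepName) VALUES ('Step 1: staging load');
    -- ... real work here ...

    INSERT INTO @StepLog (StepName) VALUES ('Step 2: merge');
    -- ... more work that might fail ...

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- The rollback does not undo inserts into the table variable,
    -- so this shows every step that completed before the failure.
    SELECT StepName, LoggedAt FROM @StepLog;
END CATCH
```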
I have some TSQL code with the following pattern:
BEGIN TRAN;
DECLARE @Set int;
BEGIN TRY
SET @Set = dbo.UDFThatCanTriggerAnError('ByCastingTheDescriptiveErrorToInt');
END TRY
BEGIN CATCH
SET @Set = -1; --Default value;
END CATCH
--Really in another sproc called from here but shown here for brevity
SAVE TRAN Testing;
ROLLBACK TRAN Testing;
--Back to the original sproc
ROLLBACK TRAN;
(In reality the rollbacks are only triggered if there are further errors inside the sprocs, but hopefully that gives the idea.)
In testing on SQL Server 2008 R2 SP1, the transaction save operation consistently throws the error:
The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction.
If I change the UDF call to one which doesn't raise the error, or replace the line with a different way of setting the variable, the code completes.
It'd be so much easier if TRY..CATCH blocks here could be made to work in the same way the analogous blocks do in .Net and let me elect to reset the error flag, rather than automatically assuming every last error is terminal - this one definitely isn't, and is much less code than any alternative way of controlling flow.
So - is there a way to reset the error status inside the Catch block or do I need to rewrite the logic to avoid the caught error being thrown in the first place?
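A sketch (not from the original post) of how to inspect this state: an error raised inside a UDF, or any run-time error while XACT_ABORT is ON, leaves the transaction "doomed" (uncommittable), which XACT_STATE() reports as -1; a doomed transaction cannot run statements that write to the log, including SAVE TRAN, and can only be rolled back in full.

```sql
SET XACT_ABORT ON;  -- with this ON, any run-time error dooms the transaction
BEGIN TRAN;
BEGIN TRY
    SELECT CAST('not a number' AS INT);  -- raises a conversion error
END TRY
BEGIN CATCH
    -- XACT_STATE() = -1 means the transaction is uncommittable: no writes
    -- (including SAVE TRAN) are allowed, only a full ROLLBACK.
    SELECT XACT_STATE() AS TranState;
    IF XACT_STATE() = -1
        ROLLBACK TRAN;
END CATCH
```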
For so long, I've omitted using SQL Transactions, mostly out of ignorance.
But let's say I have a procedure like this:
CREATE PROCEDURE CreatePerson
AS
BEGIN
declare @NewPerson INT
INSERT INTO PersonTable ( Columns... ) VALUES ( @Parameters... )
SET @NewPerson = SCOPE_IDENTITY()
INSERT INTO AnotherTable ( PersonID, CreatedOn ) VALUES ( @NewPerson, getdate() )
END
GO
In the above example, the second insert depends on the first, as in it will fail if the first one fails.
Secondly, and for whatever reason, transactions are confusing me as far as proper implementation. I see one example here, another there, and I just opened up adventureworks to find another example with try, catch, rollback, etc.
I'm not logging errors. Should I use a transaction here? Is it worth it?
If so, how should it be properly implemented? Based on the examples I've seen:
CREATE PROCEDURE CreatePerson
AS
BEGIN TRANSACTION
....
COMMIT TRANSACTION
GO
Or:
CREATE PROCEDURE CreatePerson
AS
BEGIN
BEGIN TRANSACTION
COMMIT TRANSACTION
END
GO
Or:
CREATE PROCEDURE CreatePerson
AS
BEGIN
BEGIN TRY
BEGIN TRANSACTION
...
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END
END CATCH
END
Lastly, in my real code, I have more like 5 separate inserts all based on the newly generated ID for person. If you were me, what would you do? This question is perhaps redundant or a duplicate, but for whatever reason I can't seem to reconcile in my mind the best way to handle this.
Another area of confusion is the rollback. If a transaction must be committed as a single unit of operation, what happens if you don't use the rollback? Or is the rollback needed only in a Try/Catch similar to vb.net/c# error handling?
You are probably missing the point of this: transactions are supposed to make a set of separate actions into one unit, so if one fails, you can roll back and your database will stay as if nothing happened.
This is easier to see if, let's say, you are saving the details of a purchase in a store. You save the customer's data (like name or address), but somehow in between, the details are lost (server crash). So now you know that John Doe bought something, but you don't know what. Your data integrity is at stake.
Your third sample code is correct if you want to handle transactions in the SP. To return an error, you can try:
RETURN @@ERROR
after the ROLLBACK. Also, please read about:
set xact_abort on
as in: SQL Server - transactions roll back on error?
If the first insert succeeds and the second fails, you will have a database in a bad state, because SQL Server cannot read your mind. It will leave the first change in the database even though you probably wanted it all to succeed or all to fail.
To ensure this, you should wrap all the statements in a transaction, as you illustrated in the last example. It's important to have a CATCH block so any half-completed transaction is explicitly rolled back and the resources used by the transaction are released as soon as possible.
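Putting that together for the multi-insert case described above (a sketch only; the parameter, table, and column names are placeholders, not the original procedure's):

```sql
CREATE PROCEDURE CreatePerson
    @Name VARCHAR(100)  -- hypothetical parameter
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @NewPerson INT;

    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO PersonTable (Name) VALUES (@Name);
        SET @NewPerson = SCOPE_IDENTITY();

        -- Every dependent insert uses the new ID; if any of them fails,
        -- the CATCH block undoes them all, including the first insert.
        INSERT INTO AnotherTable (PersonID, CreatedOn) VALUES (@NewPerson, GETDATE());

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the caller sees the original error (SQL Server 2012+)
    END CATCH
END
GO
```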
I have a stored procedure where I need to cast to a type, but do not know if the cast will succeed. In an imperative language, I would use some sort of TryCast pattern. I figured that this would be equivalent in T-SQL:
begin try
select cast(@someValue as SomeType)
end try begin catch end catch
On the surface, it does appear to be equivalent. If @SomeTypeVar is uninitialized and the cast fails, I get NULL to work with; the correct value if the cast succeeds.
I used this same code in a stored procedure, but that yields an error: Msg 2812, Level 16, State 62, Line 20. The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction. Some research led me to other questions on Stack Overflow and this table of times when try-catch fails in T-SQL:
TRY…CATCH constructs do not trap the following conditions:
- Warnings or informational messages that have a severity of 10 or lower.
- Errors that have a severity of 20 or higher that stop the SQL Server Database Engine task processing for the session. If an error occurs that has a severity of 20 or higher and the database connection is not disrupted, TRY…CATCH will handle the error.
- Attentions, such as client-interrupt requests or broken client connections.
- When the session is ended by a system administrator by using the KILL statement.
The following types of errors are not handled by a CATCH block when they occur at the same level of execution as the TRY…CATCH construct:
- Compile errors, such as syntax errors, that prevent a batch from running.
- Errors that occur during statement-level recompilation, such as object name resolution errors that occur after compilation because of deferred name resolution.
These errors are returned to the level that ran the batch, stored procedure, or trigger.
At first, I thought I fell into the statement-level recompilation bucket (as my error level is 16) until I tried to bisect the problem. The minimal reproduction is as follows:
create procedure failsInTransactions
as
begin
begin try
select cast(@someValue as SomeType)
end try begin catch end catch
end
and the calling code:
begin tran
exec failsInTransactions
commit
This yields the error I discussed above. However, I remembered that if a stored procedure doesn't have any parameters, you can call it without exec. This:
begin tran
failsInTransactions
commit
succeeds with Command(s) completed successfully. Further experimentation led me to another error with level 16:
begin try
select 1/0
end try begin catch end catch
which works in both cases, producing no rows of output.
I have two questions:
Why is there different behavior calling the procedure with and without exec?
Why does another error of the same error level proceed after the catch?
The EXECUTE keyword is optional only if the call is the first statement in the batch. It is not related to parameters and is required in all other contexts. Microsoft inherited this odd behavior from the Sybase code base, along with many other lax T-SQL parsing rules. I suggest you follow a strict T-SQL coding style to avoid gotchas.
The code below runs without error because it is not executing a proc at all. Since there are no semicolon statement terminators, the stored procedure name becomes part of the BEGIN TRAN statement and is interpreted as a transaction name.
begin tran
failsInTransactions
commit
You will get the expected syntax error during compilation if you add statement terminators and this will lead you down the path to specify EXEC.
begin tran;
EXEC failsInTransactions;
commit;
Be aware that not using statement terminators is deprecated so I suggest you get in the habit of specifying them. See https://www.dbdelta.com/always-use-semicolon-statement-terminators/.
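As a side note on the original TryCast goal (not part of this answer): on SQL Server 2012 and later, TRY_CAST avoids the TRY…CATCH wrapper entirely by returning NULL when the conversion fails, so the transaction is never put into an uncommittable state:

```sql
-- TRY_CAST returns NULL instead of raising an error (SQL Server 2012+)
SELECT TRY_CAST('123' AS INT) AS Ok,      -- 123
       TRY_CAST('abc' AS INT) AS Fails;   -- NULL
```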
I have SQL code that checks some parameters; if they are invalid, it raises an error, and I also want to insert an error record into an errors table. The problem is that if an error happens, the whole transaction is rolled back, including the error record. I want to roll back the whole transaction except the error record.
I tried creating a separate transaction and commit it with no luck.
IF @Input IS NULL
BEGIN
insert into [dbo].Errors (Field1) values ('Input is null')
RAISERROR ('some message', 16, 1)
RETURN -1
END
Is there a way to isolate the insert statement alone in a separate transaction?
Edit:
This stored procedure is called from other procedures and needs to be rolled back, even from outside, so I probably need to separate this insert statement into its own transaction.
You need to do this in a separate transaction. Locking does not help here: whether or not the table is locked throughout the course of the transaction has no effect on what is rolled back. If an error occurs, the whole transaction is rolled back. Can you add a TRY/CATCH block to the code that calls the stored procedure? If you 'try' the stored procedure and it throws an error, you should catch the error. In your catch block, you could call a different stored procedure that records the error in the necessary table.
It sounds like the issue that you are having is where to call the stored procedure that records the errors from. This can be tricky and it all depends on how you are handling the transaction and where you are catching the errors. Let's say you have code a that calls code b and b calls the first stored procedure. Now, when the stored procedure throws an error, if you are handling that error in code a, everything that you did in b is rolled back. If you try to insert the error record in code b, that will be rolled back from code a as well.
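A sketch of that caller-side pattern (the procedure names ValidateInput and LogError here are hypothetical, purely for illustration):

```sql
-- Caller-side logging: the error record is written in its own transaction,
-- after the failed work has already been rolled back.
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC dbo.ValidateInput @Input = NULL;  -- hypothetical proc that raises on bad input
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Nothing is pending now, so this insert cannot be undone
    -- by the rollback of the failed transaction above.
    DECLARE @Msg NVARCHAR(2048) = ERROR_MESSAGE();
    EXEC dbo.LogError @Message = @Msg;
END CATCH
```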
I have a very long-running stored procedure in SQL Server 2005 that I'm trying to debug, and I'm using the 'print' command to do it. The problem is, I'm only getting the messages back from SQL Server at the very end of my sproc - I'd like to be able to flush the message buffer and see these messages immediately during the sproc's runtime, rather than at the very end.
Use the RAISERROR function:
RAISERROR( 'This message will show up right away...',0,1) WITH NOWAIT
You shouldn't completely replace all your prints with raiserror. If you have a loop or large cursor somewhere just do it once or twice per iteration or even just every several iterations.
Also: I first learned about RAISERROR at this link, which I now consider the definitive source on SQL Server Error handling and definitely worth a read:
http://www.sommarskog.se/error-handling-I.html
Building on the answer by @JoelCoehoorn, my approach is to leave all my PRINT statements in place, and simply follow them with the RAISERROR statement to cause the flush.
For example:
PRINT 'MyVariableName: ' + @MyVariableName
RAISERROR(N'', 0, 1) WITH NOWAIT
The advantage of this approach is that the PRINT statements can concatenate strings, whereas the RAISERROR cannot. (So either way you have the same number of lines of code, as you'd have to declare and set a variable to use in RAISERROR).
If, like me, you use AutoHotKey or SSMSBoost or an equivalent tool, you can easily set up a shortcut such as "]flush" to enter the RAISERROR line for you. This saves you time if it is the same line of code every time, i.e. does not need to be customised to hold specific text or a variable.
Yes... The first parameter of the RAISERROR function needs an NVARCHAR variable. So try the following;
-- Replace PRINT function
DECLARE @strMsg NVARCHAR(100)
SELECT @strMsg = 'Here''s your message...'
RAISERROR (@strMsg, 0, 1) WITH NOWAIT
OR
RAISERROR (N'Here''s your message...', 0, 1) WITH NOWAIT
Another, better option is to not depend on PRINT or RAISERROR at all and instead load your "print" statements into a ##Temp table in tempdb or a permanent table in your database, which gives you visibility of the data immediately via a SELECT statement from another window. This works best for me. A permanent table also serves as a log of what happened in the past. PRINT statements are handy for errors, but with a log table you can also determine the exact point of failure from the last logged value for that particular execution (assuming you track the overall execution start time in your log table).
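A minimal sketch of such a log table (the names are illustrative only):

```sql
-- A permanent debug/log table; query it from a second window while the proc runs.
CREATE TABLE dbo.ProcLog (
    ID        INT IDENTITY(1,1) PRIMARY KEY,
    ExecStart DATETIME NOT NULL,   -- ties rows to one execution of the proc
    Msg       NVARCHAR(1024) NOT NULL,
    LoggedAt  DATETIME NOT NULL DEFAULT GETDATE()
);
GO

-- Inside the procedure, instead of PRINT:
DECLARE @ExecStart DATETIME = GETDATE();
INSERT INTO dbo.ProcLog (ExecStart, Msg) VALUES (@ExecStart, N'Step 1 complete');
```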
Just for the reference, if you work in scripts (batch processing), not in stored procedure, flushing output is triggered by the GO command, e.g.
print 'test'
print 'test'
go
In general, my conclusion is the following: the output of a T-SQL script execution, whether run in the SSMS GUI or with sqlcmd.exe, is flushed to the file, standard output, or GUI window at the first GO statement or at the end of the script.
Flushing inside a stored procedure works differently, since you cannot place GO inside one.
Reference: tsql Go statement
To extend Eric Isaac's answer, here is how to use the table approach correctly:
Firstly, if your sp uses a transaction, you won't be able monitor the contents of the table live, unless you use the READ UNCOMMITTED option:
SELECT *
FROM table_log WITH (READUNCOMMITTED);
or
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT *
FROM table_log;
To solve rollback issues, put an increasing ID on the log table, and use this code:
SET XACT_ABORT OFF;
BEGIN TRY
BEGIN TRANSACTION mytran;
-- already committed logs are not affected by a potential rollback
-- so only save logs created in this transaction
DECLARE @max_log_id INT = (SELECT MAX(ID) FROM table_log);
/*
* do stuff, log the stuff
*/
COMMIT TRANSACTION mytran;
END TRY
BEGIN CATCH
DECLARE @log_table_saverollback TABLE
(
ID INT,
Msg NVARCHAR(1024),
LogTime DATETIME
);
INSERT INTO @log_table_saverollback(ID, Msg, LogTime)
SELECT ID, Msg, LogTime
FROM table_log
WHERE ID > @max_log_id;
ROLLBACK TRANSACTION mytran; -- this deletes new log entries from the log table
SET IDENTITY_INSERT table_log ON;
INSERT INTO table_log(ID, Msg, LogTime)
SELECT ID, Msg, LogTime
FROM @log_table_saverollback;
SET IDENTITY_INSERT table_log OFF;
END CATCH
Notice these important details:
SET XACT_ABORT OFF; prevents SQL Server from shutting down the entire transaction instead of running your CATCH block; always include it if you use this technique.
Use a table variable (declared with @), not a temp table (declared with #): temp table contents are also affected by rollbacks, while table variable contents are not.