(Image: the two tables, tblProduct and tblProductSales, with sample data.)
I have two tables, tblProduct and tblProductSales.
I want to see how BEGIN TRAN and ROLLBACK work.
There is one INSERT statement and one UPDATE statement in the stored procedure. If either statement fails, both should roll back, but the rollback is not working even when the UPDATE statement fails. Any suggestions, please?
CREATE PROCEDURE spbasam_ProductSales
@ProductId int,
@QtyNeeded int
AS
BEGIN
    -- Check whether there is enough stock
    DECLARE @productavailabiltycount int
    SELECT @productavailabiltycount = QtyAvailable FROM dbo.tblProduct WHERE ProductId = @ProductId
    -- check to see if you have enough
    IF (@productavailabiltycount < @QtyNeeded)
    BEGIN
        RAISERROR('Not enough stock available', 16, 1)
    END
    ELSE
    BEGIN
        BEGIN TRY
            BEGIN TRAN
            -- Step 1: reduce the quantity in tblProduct
            UPDATE dbo.tblProduct SET QtyAvailable = @productavailabiltycount - @QtyNeeded
            WHERE ProductId = @ProductId
            -- Step 2: insert into the tblProductSales table
            -- First get the current max ProductSalesId
            DECLARE @maxcountid int
            SELECT @maxcountid = MAX(ProductSalesId) FROM dbo.tblProductSales
            INSERT INTO dbo.tblProductSales VALUES (@maxcountid + 1, @ProductId, @QtyNeeded)
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION
            SELECT
                ERROR_NUMBER()    AS ErrorNumber,
                ERROR_MESSAGE()   AS ErrorMessage,
                ERROR_PROCEDURE() AS ErrorProcedure,
                ERROR_STATE()     AS ErrorState,
                ERROR_LINE()      AS ErrorLine
        END CATCH
    END
END

EXEC spbasam_ProductSales 1, 10
If the UPDATE fails because of a SQL error, an error will be raised and execution will jump to the CATCH block. However, if the UPDATE fails because the SQL simply doesn't find a record to update, no error will be raised and the following code will continue to execute. So you should add a check of how many rows were updated, right after the UPDATE statement, and raise your own error if nothing was updated.
...
UPDATE dbo.tblProduct SET QtyAvailable = QtyAvailable - @QtyNeeded
WHERE ProductId = @ProductId AND QtyAvailable >= @QtyNeeded
IF @@ROWCOUNT = 0
    RAISERROR('Insufficient quantity available.', 16, 1)
...
This will kick execution to the catch block if no records were updated.
And, per @JacobH's suggestion in the comments on the original post, it would be good to NEVER allow the available quantity to go negative. If you add that additional check to the WHERE clause, the quantity can't go negative, and when one user orders at the same time as another, the second user won't be able to complete their order, because the quantity will already have been decremented by the time their order is processed. (Good catch, Jacob!)
Really, you could rewrite the whole thing to eliminate the variable @productavailabiltycount, since you can simply check availability inside the transaction and only update the record if there is enough quantity available for the current transaction.
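A sketch of that consolidated version (my own rewrite, untested against the original tables; the THROW re-raise requires SQL Server 2012 or later):

```sql
CREATE PROCEDURE spbasam_ProductSales
    @ProductId int,
    @QtyNeeded int
AS
BEGIN
    BEGIN TRY
        BEGIN TRAN
        -- Decrement stock only when enough is available;
        -- the WHERE clause performs the availability check atomically.
        UPDATE dbo.tblProduct
        SET QtyAvailable = QtyAvailable - @QtyNeeded
        WHERE ProductId = @ProductId
          AND QtyAvailable >= @QtyNeeded

        IF @@ROWCOUNT = 0
            RAISERROR('Not enough stock available', 16, 1)

        INSERT INTO dbo.tblProductSales
        SELECT MAX(ProductSalesId) + 1, @ProductId, @QtyNeeded
        FROM dbo.tblProductSales

        COMMIT TRAN
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRAN;
        THROW;  -- re-raise so the caller sees the original error
    END CATCH
END
```

Because the availability check and the decrement happen in one statement, there is no window for another session to change the quantity in between.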
Related
The catch block does not execute the first time the script runs after any edit is made to it, but works fine from the second run onwards.
Below is some script to demonstrate the problem.
use master
go
print 'rollback demo'
declare @errordemo bit = 1
select 'Transaction Count Before =', @@TRANCOUNT
begin try
begin transaction
if (@errordemo = 0) select 'abc' as column1 into MyTestTable
insert into MyTestTable values ('xyz')
commit transaction
end try
begin catch
rollback transaction
end catch
go
select 'Transaction Count After =', @@TRANCOUNT
go
-- this is only to bring the system back to its previous state
if (@@TRANCOUNT > 0) rollback transaction
go
if exists(select * from sys.tables where name = 'MyTestTable') drop table MyTestTable
go
Copy the above script in SQL Server Management Studio and run it.
You will get the following result, together with an error:
Transaction Count Before = 0
Transaction Count After = 1
Press F5 again and again and it will execute successfully with:
Transaction Count Before = 0
Transaction Count After = 0
which means the catch block executed.
Now comment out the first line, print 'rollback demo', or change its text. You will get the error again. Press F5 again any number of times and there is no error. Repeat by uncommenting that line (or making any other change to the script) and you will see a predictable, reproducible pattern.
What is going on here?
Below are some screenshots to show what is happening.
When successful:
When unsuccessful:
This is because your error is a binding error, which cannot be caught in a TRY...CATCH block in the same scope.
When you reference a non-existing object, SQL Server does not even try to check its columns; it does not compile that piece of code and leaves it for execution time.
This is called deferred name resolution.
Only when it comes to execute that statement does it check for the table and raise an error.
This is a compilation error and cannot be caught within the same scope (only in an outer scope).
There is a Connect item that you can check:
Try-catch should capture the parse errors.
So in fact your catch block cannot be reached when this error is raised.
On the next execution, if not a single character of your query has changed, the cached plan is used, so the batch is not compiled the second time.
But if you change your query text (try adding -- in any part of it), or if you instruct the server not to cache the plan (using the RECOMPILE option) like this:
declare @errordemo bit = 1
select 'Transaction Count Before =', @@TRANCOUNT
begin try
begin transaction
if (@errordemo = 0) select 'abc' as column1 into dbo.MyTestTable
print @errordemo
insert into dbo.MyTestTable values ('xyz')
option (recompile) -- <<< forces a fresh compilation every time
commit transaction
end try
begin catch
print 'catch block'
rollback transaction
end catch
go
select 'Transaction Count After =', @@TRANCOUNT
go
if (@@TRANCOUNT > 0) rollback transaction
then the plan will not be cached and the query will be compiled every time, so you'll see the compilation error every time and your catch block will never be reached.
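As an aside, if you do need to catch a deferred-name-resolution error, one workaround (my suggestion, not part of the original demo) is to push the failing statement into an inner scope with sp_executesql; the inner batch compiles separately, so the outer CATCH can see the failure:

```sql
begin try
    begin transaction
    -- The dynamic batch is compiled in its own scope, so a deferred
    -- name resolution failure inside it IS catchable out here.
    exec sp_executesql N'insert into MyTestTable values (''xyz'')';
    commit transaction
end try
begin catch
    print 'catch block: ' + error_message();
    if @@TRANCOUNT > 0 rollback transaction;
end catch
```

This prints 'catch block: ...' reliably on every run, whether or not the plan was cached.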
Here is the plan cached: plan
And here you can see that there is no real insert plan:
This is how a real INSERT plan looks:
UPDATE
I tried to reproduce the same thing with a SELECT query to find any difference in the plan, but I was not able to extract the plan for the SELECT from the plan cache.
The entry for it exists and has the same size as the INSERT plan, but it's not possible to see the plan itself; it seems it's not cached, even though the entry does exist...
To reproduce it you can use the following code:
/*select query F7CA8D53-E171-4B5F-8CEA-B19461819C0D*/
declare @errordemo bit = 1
select 'Transaction Count Before =', @@TRANCOUNT
begin try
begin transaction
if (@errordemo = 0) select 'abc' as column1 into dbo.MyTestTable
print @errordemo
select * from MyTestTable
commit transaction
end try
begin catch
print 'catch block'
rollback transaction
end catch
go
select 'Transaction Count After =', @@TRANCOUNT
go
if (@@TRANCOUNT > 0) rollback transaction;
go
-------------------------------------------
-------------------------------------------
/*insert query C7D24848-E2BB-46E7-8B1B-334406789CF9*/
declare @errordemo bit = 1
select 'Transaction Count Before =', @@TRANCOUNT
begin try
begin transaction
if (@errordemo = 0) select 'abc' as column1 into dbo.MyTestTable
print @errordemo
insert into MyTestTable values(1)
commit transaction
end try
begin catch
print 'catch block'
rollback transaction
end catch
go
select 'Transaction Count After =', @@TRANCOUNT
go
if (@@TRANCOUNT > 0) rollback transaction
go
-----------------------
-----------------------
select *
from sys.dm_exec_cached_plans p
cross apply sys.dm_exec_query_plan(p.plan_handle) pl
cross apply sys.dm_exec_sql_text(p.plan_handle) t
where (t.text like '%F7CA8D53-E171-4B5F-8CEA-B19461819C0D%' -- select
    or t.text like '%C7D24848-E2BB-46E7-8B1B-334406789CF9%') -- insert
  and t.text not like '%sys.dm_exec_cached_plans%'
I have a larger stored procedure which utilizes several TRY/CATCH blocks in order to catch and log individual errors. I have also wrapped a transaction around the entire contents of the procedure, so as to be able to roll back the entire thing in the event of an error somewhere along the way (to avoid a lot of messy cleanup); XACT_ABORT has been enabled, since otherwise an error would not roll back the entire transaction.
Key component:
There is a table in my database which gets a record inserted each time this procedure is run with the results of operations and details on what went wrong.
A funny thing was happening - actually, when I finally figured out what was wrong, it was pretty obvious: the INSERT statement into my log table was getting rolled back as well. Hence, if I am not running this from SSMS, I will not be able to see that the procedure even ran, as the rollback removes all traces of activity.
Question:
Would it be possible to have the entire transaction roll back with the exception of this single insert statement? I would still want to preserve the error message which I compile during the running of the stored procedure.
Thanks so much!
~Eli
Update 6/28
Here's a code sample of what I'm looking at. The key difference between this and the samples posted by @Alex and @gameiswar is that in my case, the try/catch blocks are all nested inside the single transaction. The purpose of this is to have multiple catches (for the multiple tables), though we would want the entire mess to be rolled back even if only the last update failed.
SET XACT_ABORT ON;
BEGIN TRANSACTION
DECLARE @message AS VARCHAR(MAX) = '';
-- TABLE 1
BEGIN TRY
    UPDATE xx
    SET yy = zz
END TRY
BEGIN CATCH
    SET @message = 'TABLE 1 ' + ERROR_MESSAGE();
    INSERT INTO LOGTABLE
    SELECT
        GETDATE(),
        @message
    RETURN;
END CATCH
-- TABLE 2
BEGIN TRY
    UPDATE sss
    SET tt = xyz
END TRY
BEGIN CATCH
    SET @message = 'TABLE 2 ' + ERROR_MESSAGE();
    INSERT INTO LOGTABLE
    SELECT
        GETDATE(),
        @message
    RETURN;
END CATCH
COMMIT TRANSACTION
You can try something like the below, which ensures you log the operation. This takes advantage of the fact that variable values survive a rollback.
Pseudocode only, to give you the idea:
create table test1
(
    id int primary key
)
create table logg
(
    errmsg varchar(max)
)

declare @errmsg varchar(max)
set xact_abort on
begin try
begin tran
insert into test1
select 1
insert into test1
select 1
commit
end try
begin catch
set @errmsg = ERROR_MESSAGE()
select @errmsg as "in block"
if @@trancount > 0
rollback tran
end catch
set xact_abort off
select @errmsg as "after block";
insert into logg
select @errmsg
select * from logg
OK... I was able to solve this using a combination of the great suggestions put forth by Alex and GameisWar, with the addition of the T-SQL GOTO control-flow statement.
The basic idea was to store the error message in a variable, which survives a rollback, then have the CATCH send you to a FAILURE label, which does the following:
Rollback the transaction
Insert a record into the log table, using the data from the aforementioned variable
Exit the stored procedure
I also use a second GOTO statement to make sure that a successful run will skip over the FAILURE section and commit the transaction.
Below is a code snippet of what the test SQL looked like. It worked like a charm, and I have already implemented this and tested it (successfully) in our production environment.
I really appreciate all the help and input!
SET XACT_ABORT ON
DECLARE @MESSAGE VARCHAR(MAX) = '';
BEGIN TRANSACTION
BEGIN TRY
    INSERT INTO TEST_TABLE VALUES ('TEST'); -- WORKS FINE
END TRY
BEGIN CATCH
    SET @MESSAGE = 'ERROR - SECTION 1: ' + ERROR_MESSAGE();
    GOTO FAILURE;
END CATCH
BEGIN TRY
    INSERT INTO TEST_TABLE VALUES ('TEST2'); -- WORKS FINE
    INSERT INTO TEST_TABLE VALUES ('ANOTHER TEST'); -- ERRORS OUT, DATA WOULD BE TRUNCATED
END TRY
BEGIN CATCH
    SET @MESSAGE = 'ERROR - SECTION 2: ' + ERROR_MESSAGE();
    GOTO FAILURE;
END CATCH
GOTO SUCCESS;
FAILURE:
ROLLBACK
INSERT INTO LOGG SELECT @MESSAGE
RETURN;
SUCCESS:
COMMIT TRANSACTION
I don't know the details, but IMHO the general logic can be like this:
--set XACT_ABORT ON --do not include it
declare @result varchar(max) --collect details in case you need them
begin transaction
begin try
    --your logic here
    --if something is wrong, RAISERROR(...@result)
    --everything OK
    commit
end try
begin catch
    --collect error_message() and other details into @result
    rollback
end catch
insert log(result) values (@result)
I'm new to SQL Server error handling, and my English isn't too clear, so I apologize in advance for any misunderstandings.
The problem: I insert multiple records into a table. The table has an AFTER INSERT trigger, which processes the records one by one in a FETCH WHILE cycle with a cursor. If an error happens, everything rolls back, so if there is just one wrong field among the inserted records, I lose all of them. The insert rolls back too, so I can't even find the wrong record. So I need to handle the errors inside the cursor, to roll back only the wrong record.
I made a test database with 3 tables:
tA
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tB
ID int (PK, identity)
Timestamp datetime (default: getdate())
VarSmallint smallint
VarTinyint tinyint
String varchar(20)
tC
ID int PK
Timestamp datetime
VarTinyint1 tinyint
VarTinyint2 tinyint
String varchar(10)
tA contains 3 records, 1 of them wrong. I insert this content into tB.
tB has the trigger and inserts the records into tC one by one.
tC has only tinyint columns, so inserting values greater than 255 is a problem. This is where the error occurs in the test!
My trigger is:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
    IF @@ROWCOUNT = 0
        RETURN;
    SET NOCOUNT ON;
    DECLARE
        @ID AS int,
        @Timestamp AS datetime,
        @VarSmallint AS smallint,
        @VarTinyint AS tinyint,
        @String AS varchar(20)
    DECLARE curNyers CURSOR DYNAMIC
    FOR
        SELECT
            [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
        FROM INSERTED
        ORDER BY [ID]
    OPEN curNyers
    FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            BEGIN TRAN
            INSERT INTO [dbo].[tC]([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
            VALUES (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String)
            COMMIT TRAN
        END TRY
        BEGIN CATCH
            ROLLBACK TRAN
            INSERT INTO [dbo].[tErrorLog]([ErrorTime], [UserName], [ErrorNumber],
                                          [ErrorSeverity], [ErrorState],
                                          [ErrorProcedure], [ErrorLine],
                                          [ErrorMessage], [RecordID])
            VALUES (SYSDATETIME(), SUSER_NAME(), ERROR_NUMBER(),
                    ERROR_SEVERITY(), ERROR_STATE(),
                    ERROR_PROCEDURE(), ERROR_LINE(),
                    ERROR_MESSAGE(), @ID)
        END CATCH
        FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String
    END
    CLOSE curNyers
    DEALLOCATE curNyers
END
If I insert 2 good records with 1 wrong one, everything rolls back and I get an error:
Msg 3609, Level 16, State 1, Line 1
The transaction ended in the trigger. The batch has been aborted.
Please help me! How to modify this trigger to work well?
If I insert the wrong record, I need:
All the inserted records in tB
All the good records in tC
Error logged in tErrorLog
Thanks!
You have TWO major disasters in your trigger:
do not use a cursor inside a trigger - that's just horrible! Triggers fire whenever a given operation happens - you have little control over when and how many times they fire. Therefore, in order not to compromise your system performance too much, triggers should be very small, fast, nimble - do not do any heavy lifting and extensive processing in a trigger. A cursor is anything but nimble and fast - it's a resource-hog, processor-hog, memory-leaking monster - AVOID those whenever you can, and most definitely inside a trigger! (and you don't need them, 99% of the cases, anyway)
You can rewrite your whole logic into this one single, fast, set-based statement:
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
INSERT INTO [dbo].[tC]([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
SELECT
[ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
FROM
INSERTED
END
Never call COMMIT TRAN inside a trigger. The trigger executes inside the context and transaction of the statement that caused it to fire - if everything is OK, just let the trigger finish, and the transaction will be committed just fine. If you need to abort, call ROLLBACK. But never, ever call COMMIT TRAN in the middle of a trigger.
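For the abort path, a minimal sketch (my own example, not taken from the question; the validation condition is illustrative) is to validate the inserted rows set-based and roll back the firing statement:

```sql
ALTER TRIGGER [dbo].[trg_tB]
ON [dbo].[tB]
AFTER INSERT
AS
BEGIN
    -- Reject the whole statement if any inserted row would overflow
    -- the tinyint columns in tC (an assumed validation rule).
    IF EXISTS (SELECT * FROM INSERTED WHERE VarSmallint > 255)
    BEGIN
        ROLLBACK TRANSACTION;  -- undo the statement that fired the trigger
        RAISERROR('trg_tB: invalid row rejected', 16, 1);
        RETURN;
    END
    -- otherwise do nothing extra; the caller's transaction commits normally
END
```

Note there is no COMMIT anywhere: the trigger either lets the outer transaction proceed or rolls it back.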
I deleted the TRIGGER and copy-pasted its code into a STORED PROCEDURE.
Then I added a column Status to tB, with a default of 0.
1 means "Record processed OK", 2 means "Record processing fault".
I fill the cursor with WHERE Status = 0.
In the TRY section I update the status to 1; in the CATCH section I update it to 2.
I have no jobs, so I run the SP from the Windows scheduler with a batch file using the SQLCMD command.
Now the processing works well; moreover, it worked well on the first try. Thanks for the help!
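For reference, the status-driven loop described above might look roughly like this (procedure name, cursor options, and the exact column list are assumptions based on the description and the earlier trigger):

```sql
CREATE PROCEDURE dbo.spProcess_tB
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @ID int, @Timestamp datetime,
            @VarSmallint smallint, @VarTinyint tinyint, @String varchar(20);

    DECLARE curNyers CURSOR LOCAL FAST_FORWARD FOR
        SELECT [ID], [Timestamp], [VarSmallint], [VarTinyint], [String]
        FROM dbo.tB
        WHERE Status = 0          -- only unprocessed rows
        ORDER BY [ID];

    OPEN curNyers;
    FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            INSERT INTO dbo.tC ([ID], [Timestamp], [VarTinyint1], [VarTinyint2], [String])
            VALUES (@ID, @Timestamp, @VarSmallint, @VarTinyint, @String);
            UPDATE dbo.tB SET Status = 1 WHERE ID = @ID;  -- processed OK
        END TRY
        BEGIN CATCH
            UPDATE dbo.tB SET Status = 2 WHERE ID = @ID;  -- processing fault
            -- log ERROR_NUMBER(), ERROR_MESSAGE() etc. into tErrorLog here
        END CATCH;
        FETCH NEXT FROM curNyers INTO @ID, @Timestamp, @VarSmallint, @VarTinyint, @String;
    END
    CLOSE curNyers;
    DEALLOCATE curNyers;
END
```

Because there is no outer transaction, each row succeeds or fails independently, and the good rows survive even when one row errors out.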
I have written a procedure like the below lines of code:
ALTER PROCEDURE [dbo].[CountrySave]
(
    @CountryId uniqueidentifier,
    @CountryName nvarchar(max)
)
AS
begin tran
if exists (select * from Country where CountryID = @CountryId)
begin
    update Country set
        CountryID = @CountryId,
        CountryName = @CountryName
    where CountryID = @CountryId
end
else
begin
    insert INTO Country(CountryID, CountryName) values
    (NewID(), @CountryName)
end
When executed, it throws the error message: "Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1. A transaction that was started in a MARS batch is still active at the end of the batch. The transaction is rolled back."
Please help...
Add COMMIT TRAN
ALTER PROCEDURE [dbo].[CountrySave]
    @CountryId uniqueidentifier,
    @CountryName nvarchar(max)
AS
BEGIN
    BEGIN TRY
        BEGIN TRAN
        if exists (select * from Country where CountryID = @CountryId)
        begin
            update Country
            set CountryID = @CountryId,
                CountryName = @CountryName
            where CountryID = @CountryId;
        end
        else
        begin
            insert INTO Country(CountryID, CountryName)
            values(NewID(), @CountryName)
        end
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        /* Error occurred; log it */
        ROLLBACK
    END CATCH
END
The error message is fairly clear. When you open (begin) a transaction, you will need to do something at the end of it as well.
So either you ROLLBACK the transaction (in case one of the statements within the transaction fails), or you COMMIT the transaction in order to actually implement all changes your statements made.
From MSDN:
BEGIN TRANSACTION represents a point at which the data referenced by a
connection is logically and physically consistent. If errors are
encountered, all data modifications made after the BEGIN TRANSACTION
can be rolled back to return the data to this known state of
consistency. Each transaction lasts until either it completes without
errors and COMMIT TRANSACTION is issued to make the modifications a
permanent part of the database, or errors are encountered and all
modifications are erased with a ROLLBACK TRANSACTION statement.
More information: https://msdn.microsoft.com/en-us/library/ms188929.aspx
Your problem is that you begin a transaction but never commit it or roll it back.
Try this structure for your procedure, worked very well for me in the past:
CREATE PROCEDURE [dbo].SomeProc
(@Parameter INT)
AS
BEGIN
    --if you want this to be the only active transaction, uncomment this:
    --IF @@TRANCOUNT > 0
    --BEGIN
    --    RAISERROR('Other transactions are active at the moment - please try again later',16,1)
    --END
    BEGIN TRANSACTION
    BEGIN TRY
        /*
        DO SOMETHING
        */
        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        --Custom error could be raised here
        --RAISERROR('Something bad happened when doing something',16,1)
        ROLLBACK TRANSACTION
    END CATCH
END
I have a stored procedure which does a lot of probing of the database to determine whether some records should be updated.
Each record (Order) has a TIMESTAMP column called [RowVersion].
I store the candidate record IDs and RowVersions in a table variable called @Ids:
DECLARE @Ids TABLE (id int, [RowVersion] Binary(8))
I get the count of candidates with the following:
DECLARE @FoundCount int
SELECT @FoundCount = COUNT(*) FROM @Ids
Since records may change between when I SELECT and when I eventually try to UPDATE, I need a way to check concurrency and ROLLBACK TRANSACTION if that check fails.
What i have so far
BEGIN TRANSACTION
-- create new combinable order group
INSERT INTO CombinableOrders DEFAULT VALUES
-- update orders found into new group
UPDATE Orders
SET Orders.CombinableOrder_Id = SCOPE_IDENTITY()
FROM Orders AS Orders
INNER JOIN @Ids AS Ids
    ON Orders.Id = Ids.Id
    AND Orders.[RowVersion] = Ids.[RowVersion]
-- if the rows updated don't match the rows found, there must be a concurrency issue; roll back
IF (@@ROWCOUNT != @FoundCount)
BEGIN
    ROLLBACK TRANSACTION
    SET @Updated = -1
END
ELSE
    COMMIT
In the above, I'm filtering the UPDATE with the stored [RowVersion]; this should skip any records that have since been changed (hopefully).
However, I'm not quite sure whether I'm using transactions, or optimistic concurrency with regard to TIMESTAMP, correctly, or if there are better ways to achieve my goals.
It's difficult to understand what logic you are trying to implement.
But, if you absolutely must perform several non-atomic actions in a procedure and make sure that the whole block of code is not executed again while it is running (for example, by another user), consider using sp_getapplock.
Places a lock on an application resource.
Your procedure may look similar to this:
CREATE PROCEDURE [dbo].[YourProcedure]
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    BEGIN TRY
        DECLARE @VarLockResult int;
        EXEC @VarLockResult = sp_getapplock
            @Resource = 'UniqueStringFor_app_lock',
            @LockMode = 'Exclusive',
            @LockOwner = 'Transaction',
            @LockTimeout = 60000,
            @DbPrincipal = 'public';
        IF @VarLockResult >= 0
        BEGIN
            -- Acquired the lock
            -- perform your complex processing
            -- populate table with IDs
            -- update other tables using IDs
            -- ...
        END;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        ROLLBACK TRANSACTION;
    END CATCH;
END
When you SELECT the data, use the HOLDLOCK and UPDLOCK hints while inside an explicit transaction. It will affect the concurrency of OTHER transactions, but not yours.
http://msdn.microsoft.com/en-us/library/ms187373.aspx
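A sketch of that approach, using the tables from the question (the WHERE clause selecting candidates is an assumption, since the question doesn't show it):

```sql
BEGIN TRANSACTION

DECLARE @Ids TABLE (Id int, [RowVersion] binary(8));

-- Take and hold update locks on the candidate rows so no other
-- transaction can modify them between this SELECT and the UPDATE below.
INSERT INTO @Ids (Id, [RowVersion])
SELECT Id, [RowVersion]
FROM Orders WITH (UPDLOCK, HOLDLOCK)
WHERE CombinableOrder_Id IS NULL;  -- assumed candidate filter

INSERT INTO CombinableOrders DEFAULT VALUES;

-- No RowVersion re-check needed: the held locks guarantee the rows
-- are unchanged since they were selected.
UPDATE Orders
SET CombinableOrder_Id = SCOPE_IDENTITY()
FROM Orders
INNER JOIN @Ids AS Ids ON Orders.Id = Ids.Id;

COMMIT
```

This trades the optimistic RowVersion comparison for pessimistic locking: the rowcount check becomes unnecessary, at the cost of blocking other writers for the duration of the transaction.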