Catch a cancelled stored procedure in SQL Server

I have a stored procedure in SQL server that looks something like this:
DECLARE @MyID int
insert into Record (StartTimestamp) values (GETDATE())
SET @MyID = SCOPE_IDENTITY()
begin try
-- do something
UPDATE Record SET EndTimestamp = GETDATE() WHERE ID = @MyID
end try
begin catch
UPDATE Record SET EndTimestamp = GETDATE(), Error = ERROR_MESSAGE() WHERE ID = @MyID
end catch
It gets called from an application and takes a few seconds to run. If a user cancels it while it's running, I end up with a StartTimestamp in the Record table but no error and no EndTimestamp. I always want to know that the user initiated this stored proc, but I also want to record when it finished, whether by success, error, or cancellation.
Is there any way to do that in SQL Server?
Thanks

Maybe you could rework the cancellation logic? Instead of calling SqlCommand.Cancel from the business layer, execute a second proc that sets a flag somewhere, then have the first proc check for that flag while it's running...
The whole point of SqlCommand.Cancel is that it really stops execution, halting the SP and rolling back. I think you'll have to approach this from the business layer.
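A minimal sketch of that flag-based approach (my illustration, not tested code; the CancelRequests table and its columns are invented names):

-- Hypothetical signalling table, written to by a second "cancel" proc called from the application:
-- CREATE TABLE CancelRequests (RecordID int PRIMARY KEY)
DECLARE @MyID int
insert into Record (StartTimestamp) values (GETDATE())
SET @MyID = SCOPE_IDENTITY()
begin try
    -- split the work into batches and poll the flag between batches
    WHILE (1 = 1)
    begin
        IF EXISTS (SELECT 1 FROM CancelRequests WHERE RecordID = @MyID)
        begin
            UPDATE Record SET EndTimestamp = GETDATE(), Error = 'Cancelled by user' WHERE ID = @MyID
            RETURN
        end
        -- do one batch of work here, then BREAK when everything is done
        BREAK
    end
    UPDATE Record SET EndTimestamp = GETDATE() WHERE ID = @MyID
end try
begin catch
    UPDATE Record SET EndTimestamp = GETDATE(), Error = ERROR_MESSAGE() WHERE ID = @MyID
end catch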
GJ

Related

Commit Transaction take too long?

I have a stored procedure that has the following code:
BEGIN TRY
--BEGIN TRANSACTION @TranName
DECLARE @ID int
INSERT INTO [dbo].[a] ([Comment],[Type_Id],[CreatedBy])
VALUES ('test',1,2)
SET @ID = SCOPE_IDENTITY()
INSERT INTO [dbo].[b] ([Can_ID],[Com_ID],[Cal_ID],[CreatedBy])
VALUES (1,@ID,null,2)
UPDATE c SET LastUpdated = GETDATE(), LastUpdatedBy = 2 WHERE b.id = @ID
--COMMIT TRANSACTION @TranName
SELECT * from [View] where a.id = @ID
END TRY
BEGIN CATCH
--ROLLBACK TRANSACTION @TranName
END CATCH
Each of the statements in there, run individually (as it is now), runs fast. But when we remove the comments from the transaction code, the script's run time increases from 1 second to more than 2 minutes.
The system has been running for quite a while now, and this wasn't a problem before. I've been searching the documentation on how SQL Server handles transactions, in case there is anything that may affect performance, and the only thing that comes to mind is the transaction log... but each of these individual statements runs in its own transaction anyway, so any ideas?
As Jens suggested, the problem was caused by blocking on some tables; after restarting the SQL Server service the locks disappeared and the DB started working properly again.
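For future reference, a quick way to check for that kind of blocking without restarting the service is to query the DMVs (a generic sketch, not from the original thread):

-- List requests that are currently blocked and the session blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;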

Statement execution in stored procedure in SQL Server

I have a stored procedure (simplified version) as below. In general it pushes the pending items to an email queue and updates the status of those items to 'Pushed'. This stored procedure is called at a 60-second interval.
ALTER PROCEDURE [dbo].[CreateEmailQueue]
AS
BEGIN
insert into EmailQueue
select *
from actionstatus
where actionstatus='Pending'
update actionstatus
set actionstatus = 'Pushed'
where actionstatus = 'Pending'
END
It works fine, but I found one exception record. The exception's action status had been changed to "Pushed" but the record did not appear in the EmailQueue. I suspect that the action status record was created after the insert but before the update. I am not 100% sure, though, because I assumed that SQL Server always executes a stored procedure in an isolated block, so data changes wouldn't affect the data set inside the execution. Could someone explain what actually happens inside the stored procedure, review it, and advise whether it is OK as-is or should be put in a transaction block? And are there any performance or locking issues with doing this in a transaction? Thanks for your input.
You must use transactions inside the stored procedure. SQL Server doesn't wrap stored procedure calls in a transaction automatically. You can check this easily by executing something like this:
insert ...
-- this will fail
select 1/0
The insert will be done, and then you'll get a division-by-zero error.
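Applied to the CreateEmailQueue procedure above, a sketch of that fix: wrap the insert and update in one explicit transaction. The UPDLOCK/HOLDLOCK hints are my addition, meant to keep a new 'Pending' row from slipping in between the two statements; treat this as a starting point rather than a drop-in replacement.

ALTER PROCEDURE [dbo].[CreateEmailQueue]
AS
BEGIN
    SET XACT_ABORT ON   -- ensure an error rolls back the whole transaction
    BEGIN TRAN
        insert into EmailQueue
        select *
        from actionstatus WITH (UPDLOCK, HOLDLOCK)   -- hold the 'Pending' rows (and range) until commit
        where actionstatus = 'Pending'

        update actionstatus
        set actionstatus = 'Pushed'
        where actionstatus = 'Pending'
    COMMIT TRAN
END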

SQL Server Stored Procedure Error Handling

I have a stored procedure which runs automatically every morning in SQL Server 2008 R2; part of this stored procedure involves executing other stored procedures. The format can be summarised thus:
BEGIN TRY
-- Various SQL Commands
EXECUTE storedprocedure1
EXECUTE storedprocedure2
-- etc
END TRY
BEGIN CATCH
--This logs the error to a table
EXECUTE errortrappingprocedure
END CATCH
storedprocedure1 and storedprocedure2 basically truncate a table and select into it from another table. Something along the lines of:
BEGIN TRY
TRUNCATE TABLE Table1
INSERT INTO Table1 (A, B, C)
SELECT A, B, C FROM MainTable
END TRY
BEGIN CATCH
EXECUTE errortrappingprocedure
END CATCH
The error trapping procedure contains this:
INSERT INTO
[Internal].dbo.Error_Trapping
(
[Error_Number],
[Error_Severity],
[Error_State],
[Error_Procedure],
[Error_Line],
[Error_Message],
[Error_DateTime]
)
(
SELECT
ERROR_NUMBER(),
ERROR_SEVERITY(),
ERROR_STATE(),
ERROR_PROCEDURE(),
ERROR_LINE(),
ERROR_MESSAGE(),
GETDATE()
)
99% of the time this works; however, occasionally we find that storedprocedure1 hasn't completed, with Table1 only partly populated. Yet no errors are logged in our error table. I've tested the error trapping procedure and it does work.
When I later run storedprocedure1 manually it completes fine. No data in the source table will have changed by that point, so it's obviously not a problem with the data; something else happened in that instant which caused the procedure to fail. Is there a better way for me to log errors here, or somewhere else within the database I can look, to try and find out why it is failing?
Try using SET ARITHABORT (see link). It should force a ROLLBACK in your case. The answer from @Kartic also seems reasonable.
I also recommend reading about implicit and explicit transactions - I think this is your problem. You have several implicit transactions, and when the error happens you are in the middle of the job - so only part of it is rolled back and you end up with some data in those tables.
There are some types of errors that a TRY...CATCH block will not handle; look here for more information: https://technet.microsoft.com/en-us/library/ms179296(v=sql.105).aspx . Those errors you should handle in your application.
I also think you might have a transaction management problem in your application.
I am not sure I understood you completely. The code below is too big for a comment, so I am posting it as an answer for your reference. If this is not what you want, I'll delete it.
Can we add the transaction handling part as well?
DECLARE @err_msg NVARCHAR(MAX)
BEGIN TRY
BEGIN TRAN
-- Your code goes here
COMMIT TRAN
END TRY
BEGIN CATCH
SET @err_msg = ERROR_MESSAGE()
SET @err_msg = REPLACE(@err_msg, '''', '''''')
ROLLBACK TRAN
-- Do something with @err_msg
END CATCH
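As a refinement of the template above (my addition, not part of the original answer), the CATCH block can check XACT_STATE() before rolling back, so it never issues a ROLLBACK when no transaction is open:

DECLARE @err_msg NVARCHAR(MAX)
BEGIN TRY
    BEGIN TRAN
    -- Your code goes here
    COMMIT TRAN
END TRY
BEGIN CATCH
    SET @err_msg = ERROR_MESSAGE()
    IF XACT_STATE() <> 0   -- 1 = committable, -1 = doomed; either way a transaction is still open
        ROLLBACK TRAN
    -- Do something with @err_msg, e.g. EXECUTE errortrappingprocedure
END CATCH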

How can I make a stored procedure commit immediately?

EDIT: This question is no longer valid as the issue was something else. Please see my explanation below in my answer.
I'm not sure of the etiquette, so I'll leave this question in its current state.
I have a stored procedure that writes some data to a table.
I'm using Microsoft Practices Enterprise library for making my stored procedure call.
I invoke the stored procedure using a call to ExecuteNonQuery.
After ExecuteNonQuery returns I invoke a 3rd party library. It calls back to me on a separate thread in about 100 ms.
I then invoke another stored procedure to pull the data I had just written.
In about 99% of cases the data is returned. Once in a while it returns no rows (i.e. it can't find the data). If I put a conditional breakpoint in the debugger to detect this condition and manually rerun the stored procedure, it always returns my data.
This makes me believe the writing stored procedure is working but just not committing when it's called.
I'm fairly novice when it comes to SQL, so it's entirely possible that I'm doing something wrong. I would have thought that the writing stored procedure would block until its contents were committed to the DB.
Writing Stored Procedure
ALTER PROCEDURE [dbo].[spWrite]
@guid varchar(50),
@data varchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- see if this guid has already been added to the table
DECLARE @foundGuid varchar(50);
SELECT @foundGuid = [guid] from [dbo].[Details] where [guid] = @guid;
IF @foundGuid IS NULL
-- first time we've seen this guid
INSERT INTO [dbo].[Details] ( [guid], data ) VALUES (@guid, @data)
ELSE
-- updating or verifying order
UPDATE [dbo].[Details] SET data = @data WHERE [guid] = @guid
END
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
Reading Stored Procedure
ALTER PROCEDURE [dbo].[spRead]
@guid varchar(50)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT * from [dbo].[Details] where [guid] = @guid;
END
To actually block other transactions and manually commit,
maybe adding
BEGIN TRANSACTION
--place your
--transactions you wish to do here
--if everything was okay
COMMIT TRANSACTION
--or
--ROLLBACK TRANSACTION if something went wrong
could help you?
I’m not familiar with the data access tools you mention, but from your description I would guess that either the process does not wait for the stored procedure to complete execution before proceeding to the next steps, or ye olde “something else” is messing with the data in between your write and read calls.
One way to tell what’s going on is to use SQL Profiler. Fire it up, monitor all possible query execution events on the database (including stored procedure and stored procedures line start/stop events), watch the Text and Started/Ended columns, correlate this with the times you are seeing while tracing the application, and that should help you figure out what’s going on there. (SQL Profiler can be complex to use, but there are many sources on the web that explain it, and it is well worth learning how to use it.)
I'll leave my answer below as there are comments on it...
OK, I feel shame that I had simplified my question too much. What was actually happening is two things:
1) the inserting procedure is actually running on a separate machine (distributed system).
2) the inserting procedure actually inserts data into two tables without a transaction.
This means the query can run at the same time and find the tables in a state where one has been written to and the second hasn't yet had its write committed.
A simple transaction fixes this, as the reading query can handle either no write or a full write, but couldn't handle the case of one table written to and the other having a pending commit.
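A sketch of that fix, with both inserts wrapped in one transaction so a concurrent reader sees either none or all of the writes (the second table name here is invented for illustration; the real procedure writes to two different tables):

BEGIN TRY
    BEGIN TRAN
        INSERT INTO [dbo].[Details] ([guid], data) VALUES (@guid, @data)
        INSERT INTO [dbo].[DetailsAudit] ([guid], data) VALUES (@guid, @data)   -- hypothetical second table
    COMMIT TRAN
END TRY
BEGIN CATCH
    IF XACT_STATE() <> 0
        ROLLBACK TRAN
    ;THROW   -- re-raise to the caller (SQL Server 2012+; use RAISERROR on older versions)
END CATCH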
Well it turns out that when I created the stored procedure the MSSQLadmin tool added a line to it by default:
SET NOCOUNT ON;
If I turn that to:
SET NOCOUNT OFF;
then my procedure actually commits to the database properly. Strange that this default would actually end up causing problems.
An easy way using TRY-CATCH; like it if useful:
BEGIN TRAN
BEGIN TRY
INSERT INTO meals
(
...
)
VALUES (...)
COMMIT TRAN
END TRY
BEGIN CATCH
ROLLBACK TRAN
SET @resp = CONVERT(varchar(20), ERROR_LINE()) + ': ' + ERROR_MESSAGE()
END CATCH

How do you write a recursive stored procedure

I simply want a stored procedure that calculates a unique id (separate from the identity column) and inserts it. If it fails, it just calls itself to regenerate said id. I have been looking for an example but can't find one, and I am not sure how to get the SP to call itself and set the appropriate output parameter. I would also appreciate someone pointing out how to test this SP.
Edit
What I have now come up with is the following. (Note: I already have an identity column; I need a secondary id column.)
ALTER PROCEDURE [dbo].[DataInstance_Insert]
@DataContainerId int out,
@ModelEntityId int,
@ParentDataContainerId int,
@DataInstanceId int out
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
WHILE (@DataContainerId is null)
EXEC DataContainer_Insert @ModelEntityId, @ParentDataContainerId, @DataContainerId output
INSERT INTO DataInstance (DataContainerId, ModelEntityId)
VALUES (@DataContainerId, @ModelEntityId)
SELECT @DataInstanceId = scope_identity()
END
ALTER PROCEDURE [dbo].[DataContainer_Insert]
@ModelEntityId int,
@ParentDataContainerId int,
@DataContainerId int out
AS
BEGIN
BEGIN TRY
SET NOCOUNT ON;
DECLARE @ReferenceId int
SELECT @ReferenceId = isnull(Max(ReferenceId)+1,1) from DataContainer Where ModelEntityId=@ModelEntityId
INSERT INTO DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId)
SELECT @DataContainerId = scope_identity()
END TRY
BEGIN CATCH
END CATCH
END
In CATCH blocks you must check the XACT_STATE() value. You may be in a doomed transaction (-1), in which case you are forced to roll back. Or your transaction may have already been rolled back, and you should not continue to work under the assumption of an existing transaction. For a template procedure that handles T-SQL exceptions, TRY/CATCH blocks, and transactions correctly, see Exception handling and nested transactions.
Never, in any language, do recursive calls in exception blocks. You don't check why you hit an exception, so you don't know whether it's OK to try again. What if the exception is 652, read-only filegroup? Or your database is at max size? You'll recurse until you hit a stack overflow...
Code that reads a value, makes a decision based on that value, then writes something is always going to fail under concurrency unless properly protected. You need to wrap the SELECT and INSERT in a transaction, and your SELECT must run under the SERIALIZABLE isolation level.
And finally, ignoring the blatantly wrong code in your post, here is how you call a stored procedure passing in OUTPUT arguments:
exec DataContainer_Insert @SomeData, @DataContainerId OUTPUT;
Better yet, why not make UserID an identity column instead of trying to re-implement an identity column manually?
BTW: I think you meant
VALUES (@DataContainerId + 1, SomeData)
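A sketch of the protection described above, applied to DataContainer_Insert: the SELECT takes UPDLOCK/HOLDLOCK hints inside a transaction so two concurrent callers cannot compute the same ReferenceId (my illustration of the fix, not code from the answer):

ALTER PROCEDURE [dbo].[DataContainer_Insert]
    @ModelEntityId int,
    @ParentDataContainerId int,
    @DataContainerId int out
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;
    DECLARE @ReferenceId int
    BEGIN TRAN
        -- range-lock the rows for this ModelEntityId until commit
        SELECT @ReferenceId = isnull(Max(ReferenceId) + 1, 1)
        FROM DataContainer WITH (UPDLOCK, HOLDLOCK)
        WHERE ModelEntityId = @ModelEntityId

        INSERT INTO DataContainer (ReferenceId, ModelEntityId, ParentDataContainerId)
        VALUES (@ReferenceId, @ModelEntityId, @ParentDataContainerId)

        SELECT @DataContainerId = scope_identity()
    COMMIT TRAN
END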
Why not use the NEWID() T-SQL function? (Assuming SQL Server 2005/2008.)
That SP will never do a successful insert: you have an identity property on the DataContainer table but you are inserting the ID. In that case you would need to SET IDENTITY_INSERT ON, but then scope_identity() won't work.
A PK violation also might not be trapped, so you might also need to check XACT_STATE().
Why are you messing around with MAX? Use scope_identity() and be done with it.

Resources