Commit Transaction takes too long? - sql-server

I have a stored procedure that has the following code:
BEGIN TRY
    --BEGIN TRANSACTION @TranName
    DECLARE @ID int
    INSERT INTO [dbo].[a] ([Comment],[Type_Id],[CreatedBy])
    VALUES ('test',1,2)
    SET @ID = SCOPE_IDENTITY()
    INSERT INTO [dbo].[b] ([Can_ID],[Com_ID],[Cal_ID],[CreatedBy])
    VALUES (1,@ID,null,2)
    UPDATE c SET LastUpdated = GETDATE(), LastUpdatedBy = 2 WHERE c.id = @ID
    --COMMIT TRANSACTION @TranName
    SELECT * FROM [View] WHERE id = @ID
END TRY
BEGIN CATCH
    --ROLLBACK TRANSACTION @TranName
END CATCH
Each of the statements runs fast individually (as it is now, with the transaction lines commented out). But when we remove the comments from the transaction code, the script's run time increases from 1 second to more than 2 minutes.
The system has been running for quite a while and this wasn't a problem before. I've been searching the documentation on how SQL Server handles transactions in case anything there could affect performance, and the only thing that comes to mind is the transaction log. But each of these statements should ideally run in an individual transaction as well. Any ideas?

As Jens suggested, the problem was blocking on some tables; after restarting the SQL Server service the locks disappeared and the database started working properly again.
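For anyone hitting the same symptom, a quick way to confirm blocking before resorting to a service restart is to look at the active requests (a minimal sketch; the session id in the commented KILL is only a placeholder):

-- list sessions that are currently blocked and which session is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

-- if a single orphaned session is holding the locks, killing it is less drastic
-- than restarting the whole SQL Server service
-- KILL 53;   -- 53 is a placeholder for the actual blocking session_id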

Related

SQL Server stored procedures reading data before insert completed

I'm new to SQL Server and stored procedures and could do with a couple of pointers regarding transaction handling on a bug I've inherited.
I have two stored procedures: the first inserts a record passed into it, then calls a second one whose first action is to read what was just inserted.
But sometimes it completes successfully without processing the data. My suspicion is that the selects are happening before the insert has 'hit' the table and retrieve no records, and the stored procedure doesn't handle that.
I don't have time to re-engineer just yet, but the transaction handling looks suspect. Below is a rough outline of what the stored procedures do.
create procedure sp1
    (@id, @pbody)
as
begin
    begin try
        set nocount on;
        begin
            insert into tbl1 (id, tbody)
            values (@id, @pbody)
            exec sp2 @id
        end
    end try
    begin catch
        execute sperror
    end catch
end
go
create procedure sp2 (@id)
as
begin
    begin try
        set nocount on;
        declare @vbody varchar(max)
        select @vbody = tbody -- I don't believe this step always retrieves the row inserted by sp1
        from tbl1 with (nolock)
        where id = @id
        create table #tmp1 (id, msg)
        insert into #tmp1
        select id, msg
        from openjson........
        while exists(select top 1 * from #tmp1) -- this looks similar to above, not sure the insert has finished before the read
        begin
            ** do some stuff **
        end
    end try
    begin catch
        execute sperror
    end catch
end
go
sp2 is using the WITH (NOLOCK) query hint, which can have unintended side-effects. Missing rows is just one of them.
Using NOLOCK? Here's How You'll Get the Wrong Query Results. - Brent Ozar Unlimited®
I'd strongly recommend removing that hint unless you really understand what it does and have a very good reason for using it.
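As a sketch of the suggested fix (reusing the table and variable names from the outline above): sp1 and sp2 run on the same session, so the row inserted by sp1 is visible to sp2 even if it is not yet committed, and the read works without the hint.

declare @vbody varchar(max)
-- read without WITH (NOLOCK); since sp2 is called from sp1 on the same session,
-- sp1's insert is visible here anyway, and dropping the hint avoids missed rows
select @vbody = tbody
from tbl1
where id = @id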

The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION when running an UPDATE statement

I got an error when running an update statement.
For one record it worked fine, but for a chunk of records it gives me an error.
Also, why does it tell me that 64801 row(s) were affected, then 1 row(s), and then 0? How should I interpret that?
This is the script:
update tblQuotes
set QuoteStatusID = 11, --Not Taken Up
QuoteStatusReasonID = 9 --"Not Competitive"
where CAST(EffectiveDate as DATE) < CAST('2013-11-27' as DATE)
and CompanyLocationGuid = '32828BB4-E1FA-489F-9764-75D8AF7A78F1' -- Plaza Insurance Company
and LineGUID = '623AA353-9DFE-4463-97D7-0FD398400B6D' --Commercial Auto
I added a BEGIN TRANSACTION statement, but it still doesn't work.
BEGIN TRANSACTION
update tblQuotes
set QuoteStatusID = 11, --Not Taken Up
QuoteStatusReasonID = 9 --"Not Competitive"
where CAST(EffectiveDate as DATE) < CAST('2017-11-27' as DATE)
AND CompanyLocationGuid = '32828BB4-E1FA-489F-9764-75D8AF7A78F1' -- Plaza Insurance Company
and LineGUID = '623AA353-9DFE-4463-97D7-0FD398400B6D' --Commercial Auto
IF @@TRANCOUNT > 0
COMMIT TRANSACTION
In my opinion this is a "flaw", if not a "bug" in SQL Server. When you COMMIT a transaction, @@TRANCOUNT is decremented by 1. When you ROLLBACK any transaction, all transactions in the calling stack are rolled back! This means that any calling procedure that tries to commit or rollback will have this error and you've lost the integrity of your calling stack.
I worked through this when building a mechanism to do unit testing on SQL Server. I get around it by always using named transactions, as shown in the example below. You can obviously also check XACT_STATE. The point is simply that, rather than blindly committing and rolling back anonymous transactions, if you manage transactions by name or transaction id you have better control.
For unit testing, I write a stored procedure as a test that calls the procedure under test. The unit test is in either serializable or snapshot mode and ONLY includes a rollback statement. I call the procedure under test, validate the results, build test output (pass/fail, parameters, etc.) as XML output, then everything gets rolled back. This gets around the need to build "mock data". I can use the data on any environment as the transaction is always rolled back.
--
-- get @procedure from object_name(@@procid)
-------------------------------------------------
DECLARE @procedure SYSNAME = N'a_procedure_name_is_a_synonym_so_can_be_longer_than_a_transaction_name'
      , @transaction_id BIGINT;
DECLARE @transaction_name NVARCHAR(32) = RIGHT(@procedure + N'_tx', 32);
--
BEGIN TRANSACTION @transaction_name;
BEGIN
    SELECT @transaction_id = [transaction_id]
    FROM [sys].[dm_tran_active_transactions]
    WHERE [name] = @transaction_name;

    SELECT *
    FROM [sys].[dm_tran_active_transactions]
    WHERE [name] = @transaction_name;

    -- Perform work here
END;
IF EXISTS
   (SELECT *
    FROM [sys].[dm_tran_active_transactions]
    WHERE [name] = @transaction_name)
    ROLLBACK TRANSACTION @transaction_name;
This error means that in SQL Server you have issued a COMMIT or COMMIT TRANSACTION without a corresponding BEGIN TRANSACTION, or that the number of commits is greater than the number of begins. To avoid it, check for an open transaction on the current session before committing.
So a normal COMMIT TRANSACTION becomes:
IF @@TRANCOUNT > 0
COMMIT TRANSACTION
There is a trigger that ensures the integrity of the QuoteStatusID. So in my WHERE clause I have to specify exactly which current status ID a policy has to have in order to be updated.
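Roughly, the fix was to add the expected current status to the WHERE clause (a sketch; the status value 1 below is illustrative, the real value depends on what transition the trigger allows):

update tblQuotes
set QuoteStatusID = 11, --Not Taken Up
    QuoteStatusReasonID = 9 --"Not Competitive"
where CAST(EffectiveDate as DATE) < CAST('2017-11-27' as DATE)
and CompanyLocationGuid = '32828BB4-E1FA-489F-9764-75D8AF7A78F1' -- Plaza Insurance Company
and LineGUID = '623AA353-9DFE-4463-97D7-0FD398400B6D' --Commercial Auto
and QuoteStatusID = 1 -- illustrative: only rows currently in a status the trigger permits to change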

In SQL Server 2005 emulating autonomous transaction

I need to keep some log data in different tables even if my transaction is rolled back.
I have already learned that in SQL Server it is impossible to do something like this:
begin tran t1
insert ...
insert ...
select ...
begin tran t2
insert into log
commit tran t2
rollback tran t1
select * from log -- IS EMPTY ALWAYS
So I tried hacking around SQL Server: I made a CLR procedure that exports the data needed for the log to the local server disk in XML format. The CLR code is as simple as it can be:
File.WriteAllText(fileName, xmlLog.Value.ToString());
Before I release this to production databases, I would love to hear your thoughts about this technique.
Here are a few questions:
Is there a better way to accomplish an autonomous transaction in SQL Server 2005?
How bad is it to hold my transaction uncommitted while SQL Server is executing the CLR code (the amount of data written by SQL is relatively small, about 50-60 records of 3 integers and 4 floats)?
I would suggest using a Table Variable as it is not affected by the Transaction (this is one of the methods listed in the blog noted by Martin below the question). Consider doing this, which will work in SQL Server 2005:
DECLARE @TempLog TABLE (FieldList...)

BEGIN TRY
    BEGIN TRAN

    INSERT...
    INSERT INTO @TempLog (FieldList...) VALUES (@Variables or StaticValues...)

    INSERT...
    INSERT INTO @TempLog (FieldList...) VALUES (@Variables or StaticValues...)

    COMMIT TRAN
END TRY
BEGIN CATCH
    IF (@@TRANCOUNT > 0)
    BEGIN
        ROLLBACK TRAN
    END

    /* Maybe add a Log message to note that we ran into an error */
    INSERT INTO @TempLog (FieldList...) VALUES (@Variables or StaticValues...)
END CATCH

INSERT INTO RealLogTable (FieldList...)
SELECT FieldList...
FROM @TempLog
Please note that while we are making use of the fact that table variables are not affected by the transaction, this does create a potential gap: the code could COMMIT but then hit an error (or the server could crash) before the INSERT INTO RealLogTable, and you would lose the logging for the data that did make it in. At that point there is a disconnect: the data exists, but as far as RealLogTable is concerned there is no record of it being inserted. That is simply the obvious trade-off for being able to bypass the transaction.

How can I ensure that nested transactions are committed independently of each other?

If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others?
In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)?
I have a stored procedure defined something like this in SQL Server 2000:
CREATE PROCEDURE toplevel_proc ..
AS
BEGIN
...
while @row_count <= @max_rows
begin
    select @parameter ... where rownum = @row_count
    exec nested_proc @parameter
    select @row_count = @row_count + 1
end
END
First off, there is no such thing as a nested transaction in SQL Server
However, you can use SAVEPOINTs as per this example (too long to reproduce here sorry) from fellow SO user Remus Rusanu
Edit: AlexKuznetsov mentioned (he deleted his answer though) that this won't work if a transaction is doomed. This can happen with SET XACT_ABORT ON or some trigger errors.
From BOL:
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement.
I also found the following from another thread here:
Be aware that SQL Server transactions aren't really nested in the way you might think. Once an explicit transaction is started, a subsequent BEGIN TRAN increments @@TRANCOUNT while a COMMIT decrements the value. The entire outermost transaction is committed when a COMMIT results in a zero @@TRANCOUNT. But a ROLLBACK without a savepoint rolls back all work including the outermost transaction.
If you need nested transaction behavior, you'll need to use SAVE TRANSACTION instead of BEGIN TRAN and use ROLLBACK TRAN [savepoint_name] instead of ROLLBACK TRAN.
So it would appear possible.
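As a rough sketch of what that would look like inside the loop of toplevel_proc (the savepoint name is mine, and the simple @@ERROR check keeps the syntax valid on SQL Server 2000; as noted above, a doomed transaction, e.g. under SET XACT_ABORT ON, would still roll everything back):

BEGIN TRANSACTION

WHILE @row_count <= @max_rows
BEGIN
    SAVE TRANSACTION nested_call  -- a savepoint, not a true nested transaction

    select @parameter ... where rownum = @row_count
    EXEC nested_proc @parameter

    IF @@ERROR <> 0
        -- undo only the work done since the savepoint; earlier iterations survive
        ROLLBACK TRANSACTION nested_call

    SELECT @row_count = @row_count + 1
END

COMMIT TRANSACTION  -- commits whatever was not rolled back to a savepoint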

TSQL logging inside transaction

I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback
commit
-- end code
You could say just write the log before the transaction starts, but that is not as easy because the transaction starts before this s-proc is run (i.e. the code is part of a bigger transaction).
So, in short, is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (#temp) to hold the log info. Table variables survive a transaction rollback.
See this article.
I do this one of two ways, depending on my needs at the time. Both involve using a variable, which retains its value following a rollback.
1) Declare a @Log varchar(max) variable and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to it as you go through the transaction. I'll insert the @Log value into the real log table after the commit or the rollback, as necessary; usually only when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Declare a table variable: DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)). Insert into it as you progress through the transaction. I like using the OUTPUT clause to insert the actual IDs of rows used in the transaction (along with other columns carrying messages, like 'DELETE item 1234') into this table. I'll insert its contents into the actual log table after the commit or the rollback, as necessary.
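A minimal sketch of the first approach (the permanent log table and its column are placeholders):

DECLARE @Log varchar(max)

BEGIN TRAN
    SET @Log = ISNULL(@Log + '; ', '') + 'step 1 done'
    -- ... main work ...
    SET @Log = ISNULL(@Log + '; ', '') + 'step 2 done'
ROLLBACK TRAN  -- or COMMIT TRAN

-- the variable keeps its value after the rollback,
-- so it can still be written to a permanent table here
INSERT INTO dbo.LogTable (LogText) VALUES (@Log)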
If the parent transaction rolls back, the logging data will roll back as well - SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and enter and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc.
The C# code starts a transaction, then calls an s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour you can use named transactions:
begin transaction a
create table #a (i int)
select * from #a
save transaction b
create table #b (i int)
select * from #a
select * from #b
rollback transaction b
select * from #a
rollback transaction a
In SQL Server, if you want a 'sub-transaction' you should use save transaction xxxx, which works like an Oracle savepoint.
