For some reason we have code like this (inside a procedure) on SQL Server 2008 R2:
update foo set value = 'xxx' where id = 123
exec usp_analyzeFoo @id = 123
where usp_analyzeFoo calls out to a web service that opens its own connection to the database and tries to update table foo itself. The web service uses native code and connects to hardware (which is why I didn't want this code running inside SQL Server); it works something out and updates foo. But it does so from a different connection, outside the current transaction.
This worked until somebody wrapped these calls in a transaction. Now the update command locks table foo, and the web service (which connects through its own database connection) is blocked by that transaction and times out.
Is there a reasonable solution to this problem? Something along the lines of calling the web service after the current transaction is over, scheduling it to run several seconds later (it's not time-critical), or something similar?
You can call that SP with a job, as Justicator commented (a sketch follows the code below), or you can put the calls in two separate transactions.
If you don't explicitly wrap them in a transaction, both operations will run as their own autocommit transactions:
update   -- autocommit tran
exec SP  -- autocommit tran
When you wrapped them, you asked for both operations to run as one atomic transaction. That cannot work here, because the SP calls a third-party web service that tries to run its own transaction:
begin tran
update
exec SP -- the third-party call is blocked by that update, because it's not committed yet
commit tran -- never reached, since the exec never completes: the web service times out waiting on our uncommitted update
You can separate the transactions if you wish; they will work like they did before that (bad) wrap:
begin tran
update
commit tran
begin tran
exec SP
commit tran
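If you go the job route, a minimal sketch might look like the following. The queue table dbo.analyzeQueue and the job name AnalyzeFoo are made-up placeholders: jobs take no parameters, so the id is queued in a table the job reads. sp_start_job returns immediately and the job runs on its own connection, so the web-service call happens only after the commit:
begin tran
update foo set value = 'xxx' where id = 123
insert into dbo.analyzeQueue (id) values (123) -- hypothetical work queue the job polls
commit tran
-- fire-and-forget: the job runs asynchronously on its own connection
exec msdb.dbo.sp_start_job @job_name = 'AnalyzeFoo'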
Related
For so long, I've omitted using SQL Transactions, mostly out of ignorance.
But let's say I have a procedure like this:
CREATE PROCEDURE CreatePerson
AS
BEGIN
declare @NewPerson INT
INSERT INTO PersonTable ( Columns... ) VALUES ( @Parameters... )
SET @NewPerson = SCOPE_IDENTITY()
INSERT INTO AnotherTable ( PersonID, CreatedOn ) VALUES ( @NewPerson, getdate() )
END
GO
In the above example, the second insert depends on the first, in that it will fail if the first one fails.
Secondly, and for whatever reason, transactions are confusing me as far as proper implementation goes. I see one example here, another there, and I just opened up AdventureWorks to find another example with try, catch, rollback, etc.
I'm not logging errors. Should I use a transaction here? Is it worth it?
If so, how should it be properly implemented? Based on the examples I've seen:
CREATE PROCEDURE CreatePerson
AS
BEGIN TRANSACTION
....
COMMIT TRANSACTION
GO
Or:
CREATE PROCEDURE CreatePerson
AS
BEGIN
BEGIN TRANSACTION
COMMIT TRANSACTION
END
GO
Or:
CREATE PROCEDURE CreatePerson
AS
BEGIN
BEGIN TRY
BEGIN TRANSACTION
...
COMMIT TRANSACTION
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION
END
END CATCH
END
Lastly, in my real code, I have more like 5 separate inserts, all based on the newly generated person ID. If you were me, what would you do? This question is perhaps redundant or a duplicate, but for whatever reason I can't seem to reconcile in my mind the best way to handle this.
Another area of confusion is the rollback. If a transaction must be committed as a single unit of operation, what happens if you don't use the rollback? Or is the rollback needed only in a TRY/CATCH, similar to VB.NET/C# error handling?
You are probably missing the point of this: transactions are supposed to make a set of separate actions into one, so if one fails, you can roll back and your database will stay as if nothing happened.
This is easier to see if, let's say, you are saving the details of a purchase in a store. You save the customer's data (like name and address), but somewhere in between, the details get lost (server crash). So now you know that John Doe bought something, but you don't know what. Your data integrity is at stake.
Your third sample code is correct if you want to handle transactions in the SP. To return an error, you can try:
RETURN @@ERROR
after the ROLLBACK. Also, please read up on:
set xact_abort on
as in: SQL Server - transactions roll back on error?
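Putting that together, a minimal sketch of the whole procedure might look like this. The table and column placeholders come from the question; ERROR_NUMBER() is used instead of @@ERROR in the CATCH block, since @@ERROR is reset by every statement, including the ROLLBACK:
CREATE PROCEDURE CreatePerson
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- any run-time error dooms the transaction and forces a full rollback
    DECLARE @NewPerson INT;
    BEGIN TRY
        BEGIN TRANSACTION;
        INSERT INTO PersonTable ( Columns... ) VALUES ( @Parameters... );
        SET @NewPerson = SCOPE_IDENTITY();
        INSERT INTO AnotherTable ( PersonID, CreatedOn ) VALUES ( @NewPerson, getdate() );
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        RETURN ERROR_NUMBER();  -- or re-raise with RAISERROR if the caller needs details
    END CATCH
END
GO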
If the first insert succeeds and the second fails, you will have a database in a bad state, because SQL Server cannot read your mind. It will leave the first insert (change) in the database even though you probably wanted it all to succeed or all to fail.
To ensure that, you should wrap all the statements in a transaction, as you illustrated in the last example. It's important to have a CATCH block so any half-completed transaction is explicitly rolled back and the resources used by the transaction are released as soon as possible.
On most pages I have read that "DDL commands have autocommit in SQL Server". If I am not wrong, this statement simply means that we don't need an explicit COMMIT command for DDL commands.
Then why does...
1) Alter TABLE EMP Add Age INT;
UPDATE EMP SET Age=20;
fail, saying Invalid column name 'Age', while
2) BEGIN TRAN
Alter TABLE EMP Add Age INT;
ROLLBACK
can be rolled back successfully?
Maybe I am wrong about the concept of autocommit; please explain it with an example where it actually has an effect.
Thanks for any help.
Autocommit Transactions:
A connection to an instance of the Database Engine operates in autocommit mode until a BEGIN TRANSACTION statement starts...
So, your second example doesn't apply. Further down:
In autocommit mode, it sometimes appears as if an instance of the Database Engine has rolled back an entire batch instead of just one SQL statement. This happens if the error encountered is a compile error, not a run-time error. A compile error prevents the Database Engine from building an execution plan, so nothing in the batch is executed.
Which is what your first example deals with.
So, neither one is actually dealing with Autocommit transactions.
So, let's take a statement like:
Alter TABLE EMP Add Age INT;
If you have an open connection in Autocommit mode, execute the above, and it completes without errors, then you will find that this connection has no open transactions, and the changes are visible to any other connection immediately.
If you have an open connection in Implicit Transactions mode, execute the above, and it completes without errors, then you will find that this connection has an open transaction. Other connections will be blocked on any operations that require a schema lock on EMP, until you execute either COMMIT or ROLLBACK.
If you have an open connection, in which you have executed BEGIN TRANSACTION, execute the above, and it completes without errors - then you'll be in the same situation as for Implicit Transactions. However, having COMMITed or ROLLBACKed, your connection will revert to either Autocommit mode or Implicit Transactions mode (whichever was active before the call to BEGIN TRANSACTION).
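A quick way to see the difference from a single SSMS window is to check @@TRANCOUNT after each statement (a sketch; the Salary column is made up so the second ALTER doesn't collide with the Age example):
-- Autocommit (the default): the DDL commits by itself
ALTER TABLE EMP ADD Age INT;
SELECT @@TRANCOUNT;  -- 0: no transaction is left open

-- Implicit transactions mode: the DDL silently opens a transaction
SET IMPLICIT_TRANSACTIONS ON;
ALTER TABLE EMP ADD Salary INT;
SELECT @@TRANCOUNT;  -- 1: other connections are now blocked on EMP
COMMIT;              -- or ROLLBACK
SET IMPLICIT_TRANSACTIONS OFF;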
If you have an explicit BEGIN TRAN, as in your second example, then you have to commit it (you can roll back as well).
If you don't specify it explicitly, as in your first example, then it is autocommit.
Autocommit is the default mode in SQL Server, and it can be turned off if required.
If you add a batch separator (GO) to your code it will work, because the ALTER TABLE is then compiled and committed before the batch containing the UPDATE is compiled:
Alter TABLE EMP Add Age INT;
go
UPDATE EMP SET Age=20;
If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others?
In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)?
I have a stored procedure defined something like this in SQL Server 2000:
CREATE PROCEDURE toplevel_proc ..
AS
BEGIN
...
while @row_count <= @max_rows
begin
select @parameter ... where rownum = @row_count
exec nested_proc @parameter
select @row_count = @row_count + 1
end
end
END
First off, there is no such thing as a nested transaction in SQL Server.
However, you can use savepoints, as per this example (too long to reproduce here, sorry) from fellow SO user Remus Rusanu.
Edit: AlexKuznetsov mentioned (he deleted his answer though) that this won't work if a transaction is doomed. This can happen with SET XACT_ABORT ON or some trigger errors.
From BOL:
ROLLBACK TRANSACTION without a savepoint_name or transaction_name rolls back to the beginning of the transaction. When nesting transactions, this same statement rolls back all inner transactions to the outermost BEGIN TRANSACTION statement.
I also found the following from another thread here:
Be aware that SQL Server transactions aren't really nested in the way you might think. Once an explicit transaction is started, a subsequent BEGIN TRAN increments @@TRANCOUNT while a COMMIT decrements the value. The entire outermost transaction is committed when a COMMIT results in a zero @@TRANCOUNT. But a ROLLBACK without a savepoint rolls back all work, including the outermost transaction.
If you need nested transaction behavior, you'll need to use SAVE TRANSACTION instead of BEGIN TRAN and use ROLLBACK TRAN [savepoint_name] instead of ROLLBACK TRAN.
So it would appear possible.
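For completeness, here is a sketch of the usual savepoint pattern for the nested procedure. Note the assumptions: it needs SQL Server 2005 or later (TRY/CATCH and XACT_STATE() don't exist on the 2000 in the question), the body is a placeholder, and, as the Edit above warns, a doomed transaction can only be rolled back in full:
CREATE PROCEDURE nested_proc @parameter INT
AS
BEGIN
    DECLARE @tc INT;
    SET @tc = @@TRANCOUNT;
    IF @tc > 0
        SAVE TRANSACTION nested_proc_sp;  -- savepoint inside the caller's transaction
    ELSE
        BEGIN TRANSACTION;
    BEGIN TRY
        -- ... the actual work goes here ...
        IF @tc = 0
            COMMIT TRANSACTION;  -- we own the transaction, so commit it
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = -1
            ROLLBACK TRANSACTION;  -- doomed: only a full rollback is allowed
        ELSE IF @tc > 0
            ROLLBACK TRANSACTION nested_proc_sp;  -- undo this call's work only
        ELSE
            ROLLBACK TRANSACTION;
    END CATCH
END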
I'm trying to write to a log file inside a transaction so that the log survives even if the transaction is rolled back.
--start code
begin tran
insert [something] into dbo.logtable
[[main code here]]
rollback -- or commit, depending on the outcome; either way the log row must survive
-- end code
You could say just do the log before the transaction starts, but that is not as easy, because the transaction starts before this s-proc is run (i.e. the code is part of a bigger transaction).
So, in short: is there a way to write a special statement inside a transaction that is not part of the transaction? I hope my question makes sense.
Use a table variable (@table_variable) to hold the log info. Table variables survive a transaction rollback.
See this article.
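A sketch of the idea, reusing dbo.logtable from the question (the column names are made up): the rollback undoes the main work, but the table variable keeps its rows, so they can be copied to the real log table afterwards.
DECLARE @log TABLE (msg varchar(500));

begin tran
INSERT INTO @log VALUES ('starting the main work');
[[main code here]]
rollback -- the main work is undone, but @log still has its row

INSERT INTO dbo.logtable SELECT msg FROM @log; -- persisted outside the transaction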
I do this one of two ways, depending on my needs at the time. Both involve using a variable, which retains its value following a rollback.
1) Create a variable with DECLARE @Log varchar(max) and use SET @Log = ISNULL(@Log + '; ', '') + 'Your new log info here'. Keep appending to this as you go through the transaction. I'll insert this into the log after the commit or the rollback, as necessary. I'll usually only insert the @Log value into the real log table when there is an error (in the CATCH block) or if I'm trying to debug a problem.
2) Create a table variable with DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000)). Insert into this as you progress through your transaction. I like using the OUTPUT clause to insert the actual IDs (and other columns with messages, like 'DELETE item 1234') of rows used in the transaction into this table. I will insert this table into the actual log table after the commit or the rollback, as necessary.
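A sketch of the OUTPUT variant from option 2 (dbo.Items, ItemID, and ExpiredOn are made-up names):
DECLARE @LogTable table (RowID int identity(1,1) primary key, RowValue varchar(5000));

DELETE FROM dbo.Items
OUTPUT 'DELETE item ' + CAST(deleted.ItemID AS varchar(20))
INTO @LogTable (RowValue)
WHERE ExpiredOn < GETDATE();

-- after the COMMIT or ROLLBACK, @LogTable still holds the messages:
INSERT INTO dbo.logtable SELECT RowValue FROM @LogTable;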
If the parent transaction rolls back, the logging data will roll back as well; SQL Server does not support proper nested transactions. One possibility is to use a CLR stored procedure to do the logging. This can open its own connection to the database outside the transaction and insert and commit the log data.
Log output to a table, use a time delay, and use WITH(NOLOCK) to see it.
It looks like @arvid wanted to debug the operation of the stored procedure, and is able to alter the stored proc:
The C# code starts a transaction, then calls an s-proc, and at the end it commits or rolls back the transaction. I only have easy access to the s-proc.
I had a similar situation. So I modified the stored procedure to log my desired output to a table. Then I put a time delay at the end of the stored procedure
WAITFOR DELAY '00:00:12'; -- 12 second delay, adjust as desired
and in another SSMS window, quickly read the table at the READ UNCOMMITTED isolation level (the WITH(NOLOCK) below):
SELECT * FROM dbo.NicksLogTable WITH(NOLOCK);
It's not the solution you want if you need a permanent record of the logs (edit: including where transactions get rolled back), but it suits my purpose to be able to debug the code in a temporary fashion, especially when linked servers, xp_cmdshell, and creating file tables are all disabled :-(
Apologies for bumping a 12-year old thread, but Microsoft deserves an equal caning for not implementing nested transactions or autonomous transactions in that time period.
If you want to emulate nested transaction behaviour, you can use a named transaction together with a savepoint:
begin transaction a
create table #a (i int)
select * from #a
save transaction b
create table #b (i int)
select * from #a
select * from #b
rollback transaction b
select * from #a
rollback transaction a
In SQL Server, if you want a 'sub-transaction' you should use save transaction xxxx, which works like a SAVEPOINT in Oracle.
I am trying to write some integration tests for some SQL server stored procedures and functions. I'd like to have a database that has a known set of test data in it, and then wrap each test in a transaction, rolling it back when complete so that tests are effectively independent.
The stored procedures/functions do anything from fairly simple join queries, to complex filtering with many layers of joins, to insertion of data to multiple tables.
There are a couple of stored procedures that actually use transactions, and so these are harder to test. I'll show an example of the overall code that is executed, but keep in mind this would normally be in two different spots (test setup/teardown, and the actual stored procedure). For this sample, I'm also using a very simple temp table:
CREATE TABLE #test (
val nvarchar(500)
)
Example:
-- for this example, just ensuring that the table is empty
delete from #test
go
-- begin of test setup code --
begin transaction
go
-- end of test setup code --
-- begin of code under test --
insert into #test values('aaaa')
begin transaction
go
insert into #test values('bbbbb')
rollback transaction
go
insert into #test values('ccccc')
-- Example select #1:
select * from #test
-- end of code under test --
-- begin of test teardown --
rollback transaction
go
-- end of test teardown
-- checking that #temp is still empty, like it was before test
-- Example select #2:
select * from #test
The problem here is that at "Example select #1", I would expect 'aaaa' and 'ccccc' to be in the table, but actually only 'ccccc' is in the table, as SQL Server actually rolls back ALL open transactions, not just the innermost one (see http://abdulaleemkhan.blogspot.com/2006/07/nested-t-sql-transactions.html). In addition, the second rollback causes an error, and although this can be avoided with:
-- begin of test teardown --
if ##trancount > 0
begin
rollback transaction
end
go
-- end of test teardown
it doesn't solve the real problem: at "Example select #2", we still get 'ccccc' in the table -- it no longer gets rolled back, because no transaction was active when it was inserted.
Is there a way around this? Is there a better strategy for this type of testing?
Note: I'm not sure if the codebase ever does anything after a rollback or not (the insert 'ccccc' part) -- but if it ever does, either intentionally or accidentally, it would be possible for tests to break in strange ways, as unexpected data can be left over from another test.
Somewhat similar to Nested stored procedures containing TRY CATCH ROLLBACK pattern? but there is no real solution to the problem posed here.
The rollbacks in your code don't nest; they roll back everything to the first BEGIN TRANSACTION.
For every BEGIN TRANSACTION, @@trancount gets incremented by one; however, any ROLLBACK sets @@trancount back to zero.
If you want to roll back only a portion of a transaction, you need to use transaction savepoints (see the sketch below the link). You can look them up in BOL, for more info than I can enter here.
http://msdn.microsoft.com/en-us/library/ms188378.aspx
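Applied to the example above, the code under test would use a savepoint instead of the nested BEGIN TRANSACTION, so that 'aaaa' and 'ccccc' both survive until the teardown rollback (a sketch; the savepoint name inner_sp is arbitrary):
begin transaction -- test setup
insert into #test values('aaaa')
save transaction inner_sp -- instead of a nested begin transaction
insert into #test values('bbbbb')
rollback transaction inner_sp -- undoes only 'bbbbb'; @@trancount stays at 1
insert into #test values('ccccc')
select * from #test -- 'aaaa' and 'ccccc'
rollback transaction -- teardown: #test is empty again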
I'd like to have a database that has a known set of test data in it, and then wrap each test in a transaction, rolling it back when complete so that tests are effectively independent.
Don't. For one, you won't actually test the functionality, because in the real world the procedures will commit. Second, and this is far more important, you will get a gazillion false failures and will need to implement workarounds to read dirty data, because you don't actually commit, and you cannot do any proper verification.
Instead, have a database backup with the well-known set, then quickly restore it before the test. Group tests into suites that can all run on a fresh database restore without affecting each other, so you reduce the number of restores needed.
You can also use database snapshots: take a snapshot at suite startup, then restore the database from the snapshot before each test; see How to: Revert a Database to a Database Snapshot (Transact-SQL).
Or combine the two methods: suite setup (i.e. the unit-test @class setup method) restores the database from the .bak file and creates a snapshot, then each test restores the database from the snapshot.
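A sketch of the snapshot half (the database name, logical file name, and path are placeholders; the restore also requires exclusive access to the database and that no other snapshots exist):
-- suite setup: snapshot the freshly restored test database
CREATE DATABASE TestDb_snap
ON (NAME = TestDb_data, FILENAME = 'C:\Snapshots\TestDb_snap.ss')
AS SNAPSHOT OF TestDb;

-- before each test: revert to the snapshot
USE master;
RESTORE DATABASE TestDb FROM DATABASE_SNAPSHOT = 'TestDb_snap';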
I had similar issues with that kind of setup, and my take was to create a "SetupTest" script and a "ClearTest" script to run before and after test execution. Unless you're talking about a huge volume of data here (which would make the test execution too slow), this should work well and make the tests repeatable, because you know that every time you run that test suite, the correct data will be waiting for it.