Script for running DDL statements in a transaction - sql-server

I am trying to execute a bunch of DDL statements in a transaction scope. I want all the statements that relate to a model change request to run as a single transaction, so that the DDL statements all fail or succeed together. My objective is not to leave the DB in an inconsistent state after the execution of the group of DDL statements.
I have found that SQL Server 2008 R2 supports transactions on DDL statements. I am not talking about DROP DB kind of DDL statements - I am referring to CREATE TABLE, ALTER TABLE, DROP TABLE, etc.
I have read the following related threads but did not find an answer.
Is it possible to run multiple DDL statements inside a transaction (within SQL Server)?
Unit testing DDL statements that need to be in a transaction
DDL scripts in a transaction block takes effect even when there are errors
What I need is a template script for executing a set of DDL statements as a transaction, rolling them back if one of the statements fails, and printing the error or storing it in an error table. Can anyone help?
In my research I have found multiple alternatives, but I am not sure which one to pick as I am new to this area. I need some help from experienced hands.
Use XACT_ABORT in the transaction scope to abort on the first error within the transaction
Use a TRY...CATCH block and put the DDL statements, wrapped in a transaction, inside the TRY block
Here are the pages that I have read through.
http://msdn.microsoft.com/en-us/library/ms179296.aspx
http://msdn.microsoft.com/en-us/library/ms188792.aspx
http://www.codeproject.com/KB/database/sqlservertransactions.aspx

Download Red Gate SQL Compare and see how the scripts are generated there.
This does transactional DDL and can be extended for logging.
TRY/CATCH doesn't span batches, which makes it trickier to use without dynamic SQL.
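A minimal template along those lines, assuming a single batch (no GO between the statements, because of the TRY/CATCH limitation above) and an error table of your own design (dbo.ErrorLog here is illustrative):

SET XACT_ABORT ON;  -- any run-time error aborts the batch and rolls back the open transaction

BEGIN TRY
    BEGIN TRANSACTION;

    -- Illustrative DDL; replace with the statements for your model change request
    CREATE TABLE dbo.NewTable (Id INT NOT NULL PRIMARY KEY);
    ALTER TABLE dbo.ExistingTable ADD NewColumn INT NULL;
    DROP TABLE dbo.ObsoleteTable;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- This insert runs in its own autocommit transaction, after the rollback
    INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, LoggedAt)
    VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), SYSDATETIME());

    PRINT ERROR_MESSAGE();
END CATCH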

Related

Execute a statement within a transaction without enlisting it in that transaction

I have some SQL statements in a batch that I want to profile for performance. To that end, I have created a stored procedure that logs execution times.
However, I also want to be able to roll back the changes the main batch performs, while still retaining the performance logs.
The alternative is to run the batch, copy the performance data to another database, restore the database from backup, re-apply all the changes made that I want to profile, plus any more, and start again. That is rather more time-consuming than not including the act of logging in the transaction.
Let us say we have this situation:
DECLARE @StartTime DATETIME2;

BEGIN TRANSACTION;

SET @StartTime = SYSDATETIME();
-- Do stuff here
UPDATE ABC SET x = dbo.fn_LongRunningFunction(x);
EXECUTE usp_Log 'Do stuff', @StartTime;

SET @StartTime = SYSDATETIME();
-- Do more stuff here
EXEC usp_LongRunningSproc;
EXECUTE usp_Log 'Do more stuff', @StartTime;

ROLLBACK;
How can I persist the results that usp_Log saves to a table without rolling them back along with the changes that take place elsewhere in the transaction?
It seems to me that ideally usp_Log would somehow not enlist itself into the transaction that may be rolled back.
I'm looking for a solution that can be implemented in the most reliable way, with the least coding or work possible, and with the least impact on the performance of the script being profiled.
EDIT
The script that is being profiled is extremely time-consuming - taking from an hour to several days - and I need to be able to see intermediate profiling results before the transaction completes or is rolled back. I cannot afford to wait for the end of the batch before being able to view the logs.
You can use a table variable for this. Table variables, like normal variables, are not affected by ROLLBACK. You would need to insert your performance log data into a table variable, then insert it into a normal table at the end of the procedure, after all COMMIT and ROLLBACK statements.
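For example, a rough sketch of that approach (@PerfLog and dbo.PerfLog are illustrative names):

DECLARE @PerfLog TABLE (Step NVARCHAR(100), StartTime DATETIME2, EndTime DATETIME2);
DECLARE @StartTime DATETIME2;

BEGIN TRANSACTION;

SET @StartTime = SYSDATETIME();
-- ... work being profiled ...
INSERT INTO @PerfLog (Step, StartTime, EndTime)
VALUES ('Do stuff', @StartTime, SYSDATETIME());

ROLLBACK;  -- undoes the profiled work, but @PerfLog keeps its rows

-- Persist the log only after the transaction has ended
INSERT INTO dbo.PerfLog (Step, StartTime, EndTime)
SELECT Step, StartTime, EndTime FROM @PerfLog;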
It might sound a bit overkill (given the purpose), but you can create a CLR stored procedure which will take over the progress logging, and open a separate connection inside it for writing log data.
Normally, it is recommended to use the context connection within CLR objects whenever possible, as it simplifies many things. In your particular case, however, you wish to disentangle yourself from the context (especially from the current transaction), so a regular connection is the way to go.
Caveat emptor: if you never dabbled with CLR programming within SQL Server before, you may find the learning curve a bit too steep. That, and the amount of server reconfiguration (both the SQL Server instance and the underlying OS) required to make it work might also seem to be prohibitively expensive, and not worth the hassle. Still, I would consider it a viable approach.
So, as Roger mentions above, SQLCLR is one option. However, since SQLCLR is not permitted, you are out of luck.
In SQL Server 2017 there is another option and that is to use the SQL Server extensibility framework and the support for Python.
You can use this to have Python code which calls back into your SQL Server instance and executes the usp_log procedure.
Another, rather obscure, option is to bind other sessions to the long-running transaction for monitoring.
At the beginning of the transaction call sp_getbindtoken and display the bind token.
Then in another session call sp_bindsession, and you can examine the intermediate state of the transaction.
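A rough sketch of that flow, assuming the log is written to a dbo.PerfLog table:

-- Session 1: the long-running transaction
DECLARE @Token VARCHAR(255);
BEGIN TRANSACTION;
EXEC sp_getbindtoken @Token OUTPUT;
PRINT @Token;  -- copy this value to the monitoring session
-- ... long-running work and usp_Log calls ...

-- Session 2: bind to the same transaction and inspect its uncommitted state
EXEC sp_bindsession '<token value printed by session 1>';
SELECT * FROM dbo.PerfLog;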
Or you can read the logs with (NOLOCK).
Or you can use RAISERROR WITH LOG to send debug messages to the client and mirror them to the SQL Log.
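For instance (severity 10 keeps the message informational; NOWAIT pushes it to the client immediately):

RAISERROR ('usp_Log: step ''%s'' finished', 10, 1, 'Do stuff') WITH LOG, NOWAIT;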
Or you can use custom user-configurable trace events, and monitor them in SQL Trace or XEvents.
Or you can use a Loopback linked server configured to not propagate the transaction.
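A hedged sketch of that last option (server, database and procedure names are illustrative); the key setting is turning off 'remote proc transaction promotion' so that calls through the linked server are not enlisted in the caller's transaction:

DECLARE @Self SYSNAME = @@SERVERNAME;
EXEC sp_addlinkedserver @server = N'loopback', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = @Self;
EXEC sp_serveroption 'loopback', 'rpc out', 'true';
EXEC sp_serveroption 'loopback', 'remote proc transaction promotion', 'false';

-- Inside the profiled batch (@StartTime as declared there), this call
-- runs in its own transaction on the remote side of the loopback:
EXEC loopback.MyDb.dbo.usp_Log 'Do stuff', @StartTime;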

What is the fastest method for autonomous transaction in SQL Server?

I am trying to simulate an autonomous transaction in SQL Server, but the problem is that using a CLR DLL procedure (which uses a different session) slows performance down by about 5 times.
To be clear:
Let's assume that in one transaction I am calling a procedure for every one of 100k rows in a table, which gives 100k proc calls in one transaction. If any of these procedures fails, I want to roll back the entire transaction (AFTER all the procedure calls), but I need to keep the logs from the procedures that fail (in case of failure, insert into an ErrorLog table).
The problem is that in such a case I open 100k connections, and that is costly in terms of performance.
Using a table variable is not a solution, because I am not able to control every transaction (some are controlled by the frontend), and using a loopback (to the same server) is not recommended in production (from what I have read), so the solution was to use CLR to get a different session.
Is there any way to create one extra session per session and reuse it for all those inserts, instead of creating a new connection every time? Or is my understanding of CLR wrong, and must it open a new connection every time? (From what I have read, the context connection uses the same session it was called from, so in case of a rollback it would delete my logs from the ErrorLog table.)
You can use "SAVE TRAN XXXX" in the procedure and a "ROLLBACK TRAN XXXX" where an exception occurs. The Insert into the ERRORLOG table should be after the "ROLLBACK TRAN XXXX in the Procedure not to be rolled back.
Hope this helps...

SSIS transaction without MSDTC

One of the packages is going to be implemented using SQL Server Integration Services (SSIS) transactions without MSDTC.
An Execute SQL task has been placed before the data flow (Df_Insert) to begin the transaction. There are several update steps and index creation steps after this first data flow (Df_Insert). There is also an update script, in another sequence container, that needs to be part of this transaction.
Is there any way to include only Df_Insert and the update scripts in the transaction?
(The original post included a screenshot of the control flow.)
From a SQL transaction point of view, ALL DML statements (inserts, updates, deletes) between BEGIN TRAN and COMMIT are part of the transaction and cannot be excluded from it. Your requirement - committing only the DFT (data flow task) and the update script - implies that the update, update2 and delete steps produce temporary data that is used by your update script and discarded later on.
Approach - rework your logic to move the results of update, update2 and possibly delete into TEMP tables and use them afterwards. A regular #temp_table will be fine, since you have to use RetainSameConnection=true anyway for a transaction without MSDTC.
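In T-SQL terms the pattern looks roughly like this, with each statement issued from an Execute SQL Task on the same connection manager (RetainSameConnection = True); object names are illustrative:

-- 1) Preparatory steps (outside the transaction) stage their results in a
--    session-scoped temp table, which RetainSameConnection keeps alive:
SELECT Id, SomeColumn INTO #staging FROM dbo.SourceTable;

-- 2) Execute SQL Task placed before Df_Insert:
BEGIN TRANSACTION;

-- 3) Df_Insert and the update script run here, reading from #staging, e.g.:
UPDATE t SET t.SomeColumn = s.SomeColumn
FROM dbo.TargetTable AS t
JOIN #staging AS s ON s.Id = t.Id;

-- 4) Execute SQL Task placed after the update script:
COMMIT TRANSACTION;  -- or ROLLBACK TRANSACTION on the failure path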

SQL Server 2005 how do I see what variables are being passed into a proc

I'm trying to debug a stored procedure. I'm creating a log table (just a regular table) and inserting the stored proc input values into it; however, this happens in the middle of a transaction and my inserts are getting rolled back. Is there any way to commit my inserts so that they are saved even when a rollback is issued?
Thanks
In code, if you are calling procedures in a nested fashion, only inserts made outside the outermost BEGIN TRAN ... ROLLBACK TRAN will produce records that are not rolled back. However, SQL Server Profiler allows you to see exactly what Transact-SQL statements are submitted to the server and how the server accesses the database to return result sets.
Check out the SQL Server Profiler to see what is being passed into your procs...

What is the difference between SET XACT_ABORT ON and a TRY/CATCH block for transaction handling in SQL Server 2005?

I need to improve some existing stored procedures in my project for better transaction handling. I understand I can use the SET XACT_ABORT ON statement in my procedure so that the transaction is automatically rolled back in case of errors. I can also use a TRY/CATCH block for error handling and roll back the transaction in the CATCH block when errors occur. My question is: what is the main difference between these two, and why should I use one over the other? Are there any guidelines I should follow when deciding between them?
TRY/CATCH blocks are new with SQL Server 2005 and allow you to handle errors rather than just having the work rolled back - TRY/CATCH blocks restrict you to a single batch, but of course that's moot within a stored procedure. If your procedures must remain compatible with previous versions of SQL Server, you might consider XACT_ABORT if it helps, but I would submit that TRY/CATCH is the way to go going forward.
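To make the contrast concrete, here is a rough sketch of both approaches (table names are illustrative):

-- XACT_ABORT alone: any run-time error aborts the batch and rolls back the transaction.
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE Id = 1;
    UPDATE dbo.Account SET Balance = Balance + 100 WHERE Id = 2;
COMMIT TRANSACTION;

-- TRY/CATCH: the error is caught, so you can log it and decide how to respond.
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE dbo.Account SET Balance = Balance - 100 WHERE Id = 1;
        UPDATE dbo.Account SET Balance = Balance + 100 WHERE Id = 2;
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    INSERT INTO dbo.ErrorLog (ErrorMessage) VALUES (ERROR_MESSAGE());
END CATCH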
