SQL Server 2005 Transactions for Deployment Scripts - sql-server

I want to wrap my SQL deployment script in a transaction (containing a bunch of schema changes). I am doing this because if one part of it fails, I want the db to revert to what it was before I ran the script.
I have a few simple questions I would like to have resolved prior to pushing these changes:
Is it necessary to explicitly call COMMIT on the transaction at the bottom of the script?
Is it necessary to explicitly check for errors and call ROLLBACK at the bottom, or will simply using a transaction provide this effect?

Yes, you need to explicitly COMMIT at the bottom of the script.
Yes, you need to explicitly check for errors and ROLLBACK; merely opening a transaction does not make it roll itself back when a statement fails.
You should also investigate SET XACT_ABORT ON, which instructs SQL Server to roll back the entire transaction and abort the batch when a run-time error occurs.
This article Error Handling in SQL 2005 and Later is worth reading.
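Putting those pieces together, a minimal sketch of such a deployment script (the schema changes shown are placeholders, not your actual changes):

SET XACT_ABORT ON;

BEGIN TRY
    BEGIN TRANSACTION;

    -- Schema changes go here (illustrative examples):
    ALTER TABLE dbo.Customers ADD Region nvarchar(50) NULL;
    CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Re-raise so the caller sees the failure
    -- (THROW does not exist on SQL Server 2005, so use RAISERROR)
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;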

Related

Execute a statement within a transaction without enlisting it in that transaction

I have some SQL statements in a batch that I want to profile for performance. To that end, I have created a stored procedure that logs execution times.
However, I also want to be able to roll back the changes the main batch performs, while still retaining the performance logs.
The alternative is to run the batch, copy the performance data to another database, restore the database from backup, re-apply all the changes made that I want to profile, plus any more, and start again. That is rather more time-consuming than not including the act of logging in the transaction.
Let us say we have this situation:
DECLARE @StartTime datetime2;

BEGIN TRANSACTION

SET @StartTime = SYSDATETIME();
-- Do stuff here
UPDATE ABC SET x = dbo.fn_LongRunningFunction(x);
EXECUTE usp_Log 'Do stuff', @StartTime;

SET @StartTime = SYSDATETIME();
-- Do more stuff here
EXEC usp_LongRunningSproc;
EXECUTE usp_Log 'Do more stuff', @StartTime;

ROLLBACK
How can I persist the results that usp_Log saves to a table without rolling them back along with the changes that take place elsewhere in the transaction?
It seems to me that ideally usp_Log would somehow not enlist itself into the transaction that may be rolled back.
I'm looking for a solution that can be implemented in the most reliable way, with the least coding or work possible, and with the least impact on the performance of the script being profiled.
EDIT
The script that is being profiled is extremely time-consuming - taking from an hour to several days - and I need to be able to see intermediate profiling results before the transaction completes or is rolled back. I cannot afford to wait for the end of the batch before being able to view the logs.
You can use a table variable for this. Table variables, like normal variables, are not affected by ROLLBACK. You would need to insert your performance log data into a table variable, then insert it into a normal table at the end of the procedure, after all COMMIT and ROLLBACK statements.
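A minimal sketch of that approach (table and column names are illustrative):

DECLARE @PerfLog TABLE (
    Step     nvarchar(100),
    LoggedAt datetime2
);

BEGIN TRANSACTION

-- ... the work being profiled runs here ...
INSERT INTO @PerfLog (Step, LoggedAt) VALUES ('Do stuff', SYSDATETIME());

ROLLBACK   -- the table variable keeps its rows

-- Persist the surviving rows once every COMMIT/ROLLBACK has run
INSERT INTO dbo.PerfLogTable (Step, LoggedAt)
SELECT Step, LoggedAt FROM @PerfLog;

The trade-off is that the rows only reach the permanent table when the batch ends, so this does not by itself provide the intermediate visibility the edit above asks for.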
It might sound a bit overkill (given the purpose), but you can create a CLR stored procedure which will take over the progress logging, and open a separate connection inside it for writing log data.
Normally, it is recommended to use the context connection within CLR objects whenever possible, as it simplifies many things. In your particular case, however, you want to disentangle from the context (especially from the current transaction), so a regular connection is the way to go.
Caveat emptor: if you never dabbled with CLR programming within SQL Server before, you may find the learning curve a bit too steep. That, and the amount of server reconfiguration (both the SQL Server instance and the underlying OS) required to make it work might also seem to be prohibitively expensive, and not worth the hassle. Still, I would consider it a viable approach.
So, as Roger mentions above, SQLCLR is one option. However, if SQLCLR is not permitted in your environment, you are out of luck on that front.
In SQL Server 2017 there is another option and that is to use the SQL Server extensibility framework and the support for Python.
You can use this to have Python code which calls back into your SQL Server instance and executes the usp_log procedure.
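A hedged sketch of that idea, assuming Machine Learning Services is installed, external scripts are enabled, and the pyodbc module is available in the Python environment (the connection string and procedure signature are placeholders):

DECLARE @start datetime2;
SET @start = SYSDATETIME();

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pyodbc

# Separate loopback connection; autocommit keeps these writes outside
# the calling transaction, so the log rows survive a ROLLBACK
conn = pyodbc.connect(
    "Driver={SQL Server};Server=localhost;Database=MyDb;Trusted_Connection=yes;",
    autocommit=True)
conn.execute("EXEC dbo.usp_Log ?, ?", "Do stuff", start_time)
conn.close()
',
    @params = N'@start_time datetime2',
    @start_time = @start;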
Another, rather obscure, option is to bind other sessions to the long-running transaction for monitoring.
At the beginning of the transaction call sp_getbindtoken and display the bind token.
Then in another session call sp_bindsession, and you can examine the intermediate state of the transaction.
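In rough outline (the token value is a placeholder you copy between sessions):

-- Session 1, inside the long-running transaction:
DECLARE @token varchar(255);
EXEC sp_getbindtoken @token OUTPUT;
SELECT @token AS BindToken;   -- copy this value to the monitoring session

-- Session 2 (a separate connection):
EXEC sp_bindsession '<token copied from session 1>';
SELECT * FROM dbo.PerfLogTable;   -- sees the transaction's uncommitted state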
Or you can read the log table from another session WITH (NOLOCK), which lets you see the uncommitted rows.
Or you can use RAISERROR WITH LOG to send debug messages to the client and mirror them to the SQL Log.
Or you can use custom user-configurable trace events, and monitor them in SQL Trace or XEvents.
Or you can use a Loopback linked server configured to not propagate the transaction.
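For instance, the RAISERROR option above might look like this (the message text is illustrative, and WITH NOWAIT is an addition here that pushes the message to the client immediately):

RAISERROR('Finished step: Do stuff', 10, 1) WITH LOG, NOWAIT;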

Data locks caused by non-committed transactions

My web application is connected to a SQL Server 2016 Express database, and we have been plagued by data locks in certain areas of the system.
My colleague noticed just today that, when the KILL command was used to kill a long-running transaction, several transactions that had ostensibly already been committed were rolled back.
I have checked, using @vladV's script from "In SQL Server, how do I know what transaction mode I'm currently using?", that the database does in fact seem to be in auto-commit mode.
So it must be that something in the database is opening a new transaction and not committing it.
So I found in the database four stored procedures which contain the following
SET IMPLICIT_TRANSACTIONS ON
... code ...
IF @@TRANCOUNT > 0 COMMIT WORK
Am I right in saying that in some/most situations such a stored procedure would leave transactions open, even after exiting the stored procedure, and that this could be the source of the data-lock problems?
And if so, then could I just remedy the code by doing
SET IMPLICIT_TRANSACTIONS OFF
when the stored procedure exits?
Am I right in saying that in some/most situations such a stored procedure would leave transactions open
Some. It depends on what comes after. With IMPLICIT_TRANSACTIONS ON, SQL Server does not actually start a transaction until you run a statement that accesses the database.
could I just remedy the code by doing SET IMPLICIT_TRANSACTIONS OFF
No. That won't end any open transactions.
Note that COMMIT doesn't reduce @@TRANCOUNT to 0; it decrements it by 1. So if you have multiple BEGIN TRAN statements, or an explicit BEGIN TRAN after a transaction has implicitly begun, you will need multiple COMMITs.
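A quick sketch of the counting behaviour:

SET IMPLICIT_TRANSACTIONS ON;

SELECT @@TRANCOUNT;                  -- 0: no data access yet
SELECT TOP 1 name FROM sys.objects;  -- an implicit transaction starts here
SELECT @@TRANCOUNT;                  -- 1
BEGIN TRAN;                          -- nests inside the implicit transaction
SELECT @@TRANCOUNT;                  -- 2
COMMIT;                              -- decrements to 1: still open!
COMMIT;                              -- decrements to 0: now really committed
SELECT @@TRANCOUNT;                  -- 0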
You might try
WHILE @@TRANCOUNT > 0 COMMIT TRANSACTION
which will definitely commit any outstanding transactions.

What happens to connections when taking a SQL Server database offline?

I have recently tried a big merge of 2 databases. We recreated the schema from Database 2 in Database 1 and created a script to transfer all data from Database 2 into Database 1. This script takes about 35 min to run and has transaction handling with:
BEGIN TRANSACTION
...
IF (@@ERROR <> 0)
    ROLLBACK TRANSACTION
ELSE
    COMMIT TRANSACTION
The full script is a bit sensitive, but here is some SQL that has the same structure: http://pastebin.com/GWJ3ZnkF
We ran the script and all data was transferred without errors. We tested the systems running against the new combined database (and removed access rights to the old database).
But as a last task we wanted to take the old database offline to make sure no one used that database. To do this we used:
ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
This was bad. After this line of SQL all the data in the combined database that we had just copied was suddenly gone. I first assumed the transfer wasn't really finished, so the ROLLBACK IMMEDIATE sounds like it performed a rollback on my transaction...
But why? Wasn't the transaction already committed?
Also, I tried running the same script again a few times, but after every attempt no data was copied, even though the script reported success. I have no idea why... did it remember my offline rollback somehow?
What is really happening to my connections?
Sounds like you had a pending uncommitted transaction and you forced it to roll back, losing some of the work. The rest is explained by how your script is structured. It is unlikely your script ran as a single transaction from start to finish. Only the last transaction was rolled back, so the database was left in a 'half copied' state. Your script probably performs various checks, and this intermediate state sends it down the 'ELSE' branches, where it does not do the intended work (i.e. it apparently does nothing).
Without the exact script, this is all speculation anyway.
Right now you need to restore the database to a consistent state, the one before your data copy. Use the backup you took before the data move (you did take a backup, right?). For extra credit, make sure your script is idempotent and works correctly on a half-updated database.
I'd double-check to make sure that there are no outstanding transactions. Either go through the file and count the number of BEGIN TRANSACTION vs COMMIT TRANSACTION lines, or add a statement to the end of it to SELECT @@TRANCOUNT to ensure that there are no open transactions remaining.
If your data has been committed, there should be no way it can be lost by disconnecting you.
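For example, appended at the very end of the script (a sketch):

SELECT @@TRANCOUNT AS OpenTransactionCount;  -- should be 0 at this point

IF @@TRANCOUNT > 0
    ROLLBACK TRANSACTION;  -- a single ROLLBACK closes every nesting level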
WITH ROLLBACK IMMEDIATE:
All incomplete transactions will be rolled back and any other connections to the database will be immediately disconnected.
Sounds like someone got the two databases mixed up, or maybe there is an outstanding transaction... Can you post your entire script?
Ref: ALTER DATABASE.
Rather than only checking @@ERROR, inspect @@TRANCOUNT as well.

How to rollback SQL update command?

In SQL Server 2008 R2, is it possible to do a rollback on a single update command?
I know there are other questions like this on SO, but I haven't seen one specific to 2008 R2, and hence I may get the same answer; if that is the case then we can close this thread.
I ran a simple update without any transaction commands:
UPDATE myTable SET col1=somevalue WHERE....
Of course you can use explicit transactions such as
BEGIN TRAN
UPDATE ...
ROLLBACK
but I don't think you are asking about that?
If you have SET IMPLICIT_TRANSACTIONS ON, the command will not be committed or rolled back until you do so explicitly, but this is not the default behaviour.
By default transactions are auto-committed, so when the command finishes successfully the results of the update are committed. If the update encounters an error, including the connection being killed mid-update, it is automatically rolled back.
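If you want a safety net next time, a sketch of the implicit-transactions route (the WHERE clause and value are placeholders):

SET IMPLICIT_TRANSACTIONS ON;

UPDATE myTable SET col1 = 'somevalue' WHERE id = 1;  -- an implicit transaction starts here

-- Inspect the result, then decide:
ROLLBACK;      -- undo the update
-- ...or COMMIT; to keep it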
If your database is in full recovery mode you might want to try reading the transaction log, finding which rows were affected, and then reverting the update.
However, this is not supported out of the box, because Microsoft stores the transaction log in its own format, which is not well documented.
The solution is to use commands such as DBCC LOG or fn_dblog, or a third-party tool such as ApexSQL Log, which does all of this automatically but comes at a price.
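For instance, a quick look at row-modification records with the undocumented fn_dblog (a sketch; unsupported, so treat it as read-only exploration):

SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
FROM fn_dblog(NULL, NULL)    -- NULL, NULL = scan the full available log
WHERE Operation = 'LOP_MODIFY_ROW';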
If you need more details, here are a couple of posts on reading the transaction log:
Read the log file (*.LDF) in sql server 2008
SQL Server Transaction Log Explorer/Analyzer

SQL Server UNDO

I am a part-time developer (full-time student) and the company I am working for uses SQL Server 2005. The thing I find strange about SQL Server is that if you run a script that involves inserting, updating, etc., there isn't any real way to undo it except a rollback or using transactions.
You might say, what's wrong with those two options? Well, if for example someone runs an update statement and forgets to put in a WHERE clause, you suddenly find yourself with 13k rows updated, and suddenly all the clients in that table are named 'bob'. Now you have the wrath of 13k Bobs to face, since that "someone" forgot to use a transaction, and if you roll the whole database back you are going to undo critical changes that were needed in other fields.
In my studies I use Oracle. In Oracle you can run the script first and then commit it if you find there aren't any mistakes. I was wondering if there was something I missed in SQL Server, since I am still relatively new to the working developer world.
I don't believe you missed anything. Using transactions to guard against these kinds of errors is the best mechanism, and it is the same mechanism Oracle uses to protect the end user. The difference is that Oracle implicitly begins a transaction for you, whereas in SQL Server you must do so explicitly.
SET IMPLICIT_TRANSACTIONS is what you are probably looking for.
I'm no database/SQL Server expert and I'm not sure if this is what you're looking for, but there is the possibility of creating snapshots of a database. A snapshot allows you to revert the database to that state at any time.
Check this link for more information:
http://msdn.microsoft.com/en-us/library/ms175158.aspx
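A minimal sketch (the database name, logical file name, and path are placeholders):

-- Create a snapshot of MyDb (NAME must match the logical data file name in sys.database_files)
CREATE DATABASE MyDb_Snapshot
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb;

-- Revert the whole database to the snapshot if something goes wrong
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot';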
I think transactions work well. You could roll the DB back (to a previous backup or a point in the log), but I think transactions are much simpler.
How about this: never make changes to a production database that have not first been tested on your development server, and always make a backup before trying anything unproven.
From what I understand, SQL Server 2008 added an Auditing feature that logs all changes made by users to the various databases and also has the option to roll them back after the fact.
Again, this is from what I've read or overheard from our DBA, but might be worth looking into.
EDIT: After looking into it, it appears to only give the ability to roll back schema changes (via DDL triggers), not data modifications.
If I am doing something with any risk in SQL Server, I write the script like this:
BEGIN TRAN
Insert .... whatever
Update .... whatever
-- COMMIT
The last line is a comment on purpose: I first run the lines before it, then make sure there's no error, and then highlight just the word COMMIT and execute that. This works because in Management Studio you can select part of the T-SQL and execute only the selected portion.
There are a couple of advantages over implicit transactions: implicit transactions work too, but they're not the default for SQL Server, so you have to remember to turn them on or set options to do so. Also, if the option is on all the time, I find it's easy for people to "forget" and leave uncommitted transactions open, which can block others. That's mainly because it's not the default behaviour, so SQL Server folks aren't used to it.
