SSDT creating a rollback deployment script?

We can create a SQL Server deployment script with TFS & SSDT, but is there a way to create a rollback script as well, so that we can roll back the deployment?
Thanks.

Because SSDT (and similar products) all work by comparing the schema in the project against a live database to bring the database in sync with the model, there's no direct way to create a rollback script. There are also considerations regarding data changed, added, or removed through pre- or post-deploy scripts.
That being said, there are a handful of options.
Take a snapshot each time you do a release. You can use that snapshot from the prior release to do another compare for rollback purposes.
Maintain a prior version elsewhere - perhaps do a schema compare from your production system against your local machine. You can use that to compare against production and do a rollback.
Generate a dacpac of the existing system prior to release (use SQLPackage or SSDT to do that). You can use that to deploy that version of the schema back to the database if something goes wrong.
Take a database snapshot prior to release. Best case, you don't need it and can drop the snapshot; worst case, you can use it to roll back (see the T-SQL sketch after these options). Of course, you need to watch out for space and I/O, as you'll be maintaining that original state elsewhere.
Run your changes through several environments to minimize the need for a rollback. Ideally if you've run this through Development, QA, and Staging/User Acceptance environments, your code and releases should be solid enough to be able to release without any issues.
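For the database snapshot option, a minimal T-SQL sketch would look something like this (the database, logical file, and path names are assumptions, and on older versions of SQL Server database snapshots require Enterprise Edition):
-- Before the release: create a snapshot of the target database
-- (MyDb_Data must be the logical name of MyDb's data file)
CREATE DATABASE MyDb_PreRelease
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_PreRelease.ss')
AS SNAPSHOT OF MyDb;
-- If the release fails: revert the database to the snapshot
-- (requires exclusive access; any other snapshots of MyDb must be dropped first)
USE master;
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_PreRelease';
-- If the release succeeds: drop the snapshot to free the sparse file
DROP DATABASE MyDb_PreRelease;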
You'll need to code accordingly for rolling back data changes. That can be a bit trickier, as each scenario is different: you'll need to write a script that can undo whatever changes were part of your release. If you inserted a row, you'll need a rollback script to delete it; if you updated a bunch of data, you'll either need a backup of that data or some other way to get it back.
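As one illustration of undoing a data change, the release script can stash the rows it is about to modify in a side table, which the hand-written rollback script then restores (the table, key, and column names here are purely hypothetical):
-- In the release: keep a copy of the rows the UPDATE will touch
SELECT CustomerId, Discount
INTO dbo.Customer_PreRelease
FROM dbo.Customer
WHERE Region = 'EMEA';
UPDATE dbo.Customer
SET Discount = Discount + 5
WHERE Region = 'EMEA';
-- In the rollback script: put the original values back and tidy up
UPDATE c
SET c.Discount = b.Discount
FROM dbo.Customer AS c
JOIN dbo.Customer_PreRelease AS b ON b.CustomerId = c.CustomerId;
DROP TABLE dbo.Customer_PreRelease;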

Before making any changes to a database project I take a snapshot (a dacpac), which I can compare the modified database project against to generate a release script. Although it's easy enough to swap the source and target to do a reverse schema compare, I've discovered it won't let me generate an update script (which would be the rollback script) from the reverse comparison, presumably because the target is a database project.
To get around that problem and generate a rollback script I do the following:
1. Deploy the modified database project to my (localdb) development database.
2. Check out a previous version of the database project from source control, from before the changes were made.
3. Run a schema compare from the previous version of the database project to the (localdb) development database.
4. Use the schema compare to generate an update script. This update script will be a rollback script.
Although it would be nice to be able to generate a rollback script more directly, the four-step process above takes less than five minutes.

Related

How to do Flyway deployments in multiple stages

We are using Flyway successfully on a number of applications and now we need to run some of them in multiple stages:
Run database clean up scripts (truncate tables, etc.)
Do some deployment steps
Run other database scripts (schema changes, insert new data, etc.)
The first step is likely to be needed on multiple releases, so we could mark it as a repeatable migration instead of a normal one. However, it shouldn't be run on all releases, only when it is needed.
Do you know how this can be done, please? I assume we can first run the repeatable migration scripts, then add the other scripts to the migration folder and run the second migration step.
Can we choose when to run those repeatable migrations? E.g. providing a flag or a specific folder?
Would Flyway complain if we modify those repeatable scripts? E.g. if we want to add more columns/tables to the clean up scripts. We may be able to solve it by running a repair command.
Thanks
Repeatable migrations are only run again if their checksum has changed since they were last applied. The checksum is calculated from the file content, so you trigger a repeatable migration to be re-applied by changing it. This includes placeholder values, so for example a repeatable migration containing the ${flyway:timestamp} placeholder will always be re-applied because its value is constantly changing.
By including ${flyway:timestamp} in the repeatable migration (e.g. in a comment) it will always be re-applied if it's found, but you can control when this happens by storing it in a separate folder which is only configured when you want it to run.
To include the repeatable migration:
./flyway migrate "-locations=migrations,repeatable_migration"
To exclude it:
./flyway migrate "-locations=migrations" "-ignoreMissingMigrations=true"
This means you can modify the repeatable script without causing any problems, since the checksum is changing anyway - no need to run repair.
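As an illustration, a repeatable clean-up migration along these lines is re-applied every time its location is included, because the timestamp placeholder changes the file's checksum on each run (the file name R__cleanup.sql and the table names are assumptions):
-- R__cleanup.sql, kept in the repeatable_migration folder
-- The placeholder below is replaced at run time, so the checksum always differs
-- and Flyway re-applies this script whenever its location is configured.
-- Run at: ${flyway:timestamp}
TRUNCATE TABLE audit_log;
TRUNCATE TABLE staging_import;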

Tips for manually writing SQL Server upgrade scripts

We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000 and do not have access to automated tools, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can import the database to a local machine that has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set the option in the Advanced settings to script for SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if a particular generated chunk won't run, you can just write that piece of SQL manually.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and then just script the changes to a SQL file, which you can edit by hand before running it. I guess that's about as close as you can get to writing it manually without actually writing it manually.
If you *need* to write it all manually, I would suggest getting some IntelliSense-type tools to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migrations can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards (a small sketch follows these points).
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
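For the row-count check mentioned above, a small sketch along these lines works on SQL Server 2000 (the table and column names are made up):
-- Capture the source row count before the data move
DECLARE @SourceRows int, @TargetRows int
SELECT @SourceRows = COUNT(*) FROM dbo.OldOrders
-- The migration itself: copy/transform rows into the new table
INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate FROM dbo.OldOrders
-- Account for every row afterwards
SELECT @TargetRows = COUNT(*) FROM dbo.Orders
IF @SourceRows <> @TargetRows
    RAISERROR('Row count mismatch: %d source rows, %d target rows.', 16, 1, @SourceRows, @TargetRows)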
Good luck!

Are Distributed Transactions a good idea for enabling rollback of database upgrades in Windows Installer Custom Actions?

I've outgrown the SQL Server custom actions available in WiX, so I'm taking the bold step of creating my own using Deployment Tools Foundation. I want to be a good citizen and make sure that mine support rollback. But what's the best way of doing it?
I need to support SQL Server 2005 and later, all editions.
The problem, as I see it, is that Windows Installer works in two phases: it does the work, storing undo information as it goes. Then, when all the pieces are in place it either commits (deleting the undo information) or does a rollback.
This means that standard transactions won't do the job. They would have to be completed inside my Execute custom action, and I wouldn't get a chance to roll them back later.
I've considered taking a copy-only backup of the database that I can restore in the rollback action if necessary, but I think this approach, whilst simple, has shortcomings. I don't know how big our databases will get, for example, so I can't guarantee that there will be space available to hold the backup on the target machine. Also, backup and restore can take a while to complete, and I don't want typical installs (where rollback doesn't happen) to be unnecessarily slow.
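For what it's worth, the backup-based approach I have in mind would be roughly this (the database name and path are placeholders):
-- Before the upgrade: copy-only backup, so the existing backup chain is not disturbed
BACKUP DATABASE MyAppDb
TO DISK = 'C:\Temp\MyAppDb_PreUpgrade.bak'
WITH COPY_ONLY, INIT;
-- Rollback custom action: restore it over the upgraded database
-- (the database must not be in use at that point)
RESTORE DATABASE MyAppDb
FROM DISK = 'C:\Temp\MyAppDb_PreUpgrade.bak'
WITH REPLACE;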
So that brings me to my current favoured idea: make sure the Distributed Transaction Coordinator is started up, then initialise a Distributed Transaction before making changes, then either committing it or rolling it back in the appropriate custom actions.
It seems I can use the members of the TransactionInterop class to export a cookie that will enable me to share the transaction between my different custom actions.
Can anyone with experience of this kind of thing say if it is likely to work?
Some database/instance operations cannot be done inside a transaction (e.g. CREATE/ALTER/DROP ENDPOINT), and other operations cannot be done inside a distributed transaction (e.g. SAVE TRANSACTION). So you won't be able to do them at all in your proposed plan. Also, your DB upgrade scripts will all have to work correctly when run inside an uncommitted transaction.
I would say that there are fewer risks in going down the backup/restore path (or alternatively creating a database snapshot and restoring from the snapshot on rollback, with the drawback of requiring Enterprise Edition).
Also an option is to have an undo script for every do script run during upgrade, and have the undo script run during rollback and remove the effects of the installation. I understand that this is a hard problem, probably doubles the amount of scripts that have to be developed (and tested...) and requires some serious developer discipline.
I've done quite a few installers with SQL scripts over the years, and I've kind of come to the opinion that it's only suited for simple databases, like "here's my VB app with a local MSDE / MySQL database" or "here's my local store for code table lookups and temporary commits while we wait to sync it somewhere else".
Once you get into industrial-strength, heavy-lifting enterprise app situations, I like to get my DB configuration out of the installer and into the application as a first-run story. You can do a lot heavier lifting with C# there and not be constrained by MSI.

SQL Server 2008 Auto Backup

We want to have our test servers' databases updated from our production server databases on a nightly basis to ensure we're developing against the most recent data. We, however, want to ensure that any fn, sp, etc. that we're currently working on in the development environment doesn't get overwritten by the backup process. What we were thinking of doing was having a pre-backup program that saves objects selected by our developers and a post-backup program to add them back in after the backup process is complete.
I was wondering what other developers have been doing in a situation like this. Is there an existing tool to do this for us that can run automatically on a daily basis and allow us to set objects not to overwrite (without requiring the attention of a sysadmin to run it daily)?
All the objects in our databases are maintained in code - tables, views, triggers, stored procedures, everything - if we expect to find it in the database then it should be in DDL in code that we can run. Actual schema changes are versioned - so there's a table in the database that says this is schema version "n", and if this is not the current version (according to the update code) then we make the necessary changes.
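A minimal sketch of that versioning pattern (the table and object names are assumptions):
-- One-row table recording which schema version this database is at
CREATE TABLE dbo.SchemaVersion (CurrentVersion int NOT NULL);
INSERT INTO dbo.SchemaVersion (CurrentVersion) VALUES (1);
-- Update code: only apply a change if the database is behind that version
IF (SELECT CurrentVersion FROM dbo.SchemaVersion) < 2
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;
    UPDATE dbo.SchemaVersion SET CurrentVersion = 2;
END;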
We endeavour to separate out triggers and views - we don't, although we probably should, do much with SPs and FNs - with drop-and-recreate code that is valid for the current schema version. Accordingly, it should be "safe" to drop and recreate anything that isn't a table, although there will be sequencing issues with both the drop and the create if there are dependencies between objects. The nice thing about this generally is that we can confidently bring a schema from new to current and have confidence that any instance of the schema is consistent.
Expanding to your case: if you have the ability to run the schema update code, including the code to recreate all the database objects according to the current definitions, then your problem should substantially go away - backup, restore, run the schema maintenance logic. This would have the further benefit that you can introduce schema (table) changes on the dev servers and still keep the same update logic.
I know this isn't a completely generic solution, and it's worth noting that it probably works better with a database per developer (I'm an old-fashioned programmer, so I see all problems as having code-based solutions (-:), but as a general approach I think it has considerable merit because it gives you a consistent mechanism to address a number of problems.

Nightly importable or attachable copies of production database

We would like to be able to nightly make a copy/backup/snapshot of a production database so that we can import it in the dev environment.
We don't want to log ship to the dev environment because it needs to be something we can reset whenever we like to the last taken copy of the production database.
We need to be able to clear certain logging and/or otherwise useless or heavy tables that would just bloat the copy.
We prefer the attach/detach method as opposed to something like the SQL Server Publishing Wizard because of how much faster an attach is than an import.
I should mention we only have SQL Server Standard, so some features won't be available.
What's the best way to do this?
I'd say use the detach/attach procedures documented on MSDN inside a SQL Agent job (use master.dbo.xp_cmdshell to perform the file copy).
You might want to put the big, huge tables in their own partition and have this partition belong to a different filegroup. You would then back up and restore only the main filegroup.
You might also want to consider doing incremental backups - say, a full backup every weekend and an incremental every night. I haven't done filegroup backups, so I don't know if these work well together.
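The weekend-full plus nightly-differential schedule would look roughly like this in a SQL Agent job (paths are placeholders; SQL Server's differential backups are its closest equivalent to an incremental):
-- Weekend job: full backup
BACKUP DATABASE ProdDb
TO DISK = 'D:\Backups\ProdDb_full.bak'
WITH INIT;
-- Nightly job: differential backup of everything changed since the last full
BACKUP DATABASE ProdDb
TO DISK = 'D:\Backups\ProdDb_diff.bak'
WITH DIFFERENTIAL, INIT;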
I'm guessing that you are already doing regular backups of your production database? If you aren't, stop reading this reply and go set it up right now.
I'd recommend that you write a script that runs automatically, say once a day, and that:
Drops your current test database.
Restores your current production backup to your test environment.
You can write a simple script to do this and execute it using the isql.exe command line tool.
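A minimal sketch of such a script (the database names, backup path, logical file names, and target file locations are all placeholders):
-- Drop the current test database
-- (if connections are open, ALTER DATABASE TestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE first)
USE master;
IF DB_ID('TestDb') IS NOT NULL
    DROP DATABASE TestDb;
-- Restore the latest production backup as the test database
RESTORE DATABASE TestDb
FROM DISK = 'D:\Backups\ProdDb.bak'
WITH REPLACE,
    MOVE 'ProdDb_Data' TO 'E:\TestData\TestDb.mdf',
    MOVE 'ProdDb_Log' TO 'E:\TestData\TestDb_log.ldf';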
