When using Continuous or Automated Deployment, how do you deploy databases? - database

I'm looking at implementing Team City and Octopus Deploy for CI and Deployment on demand. However, database deployment is going to be tricky as many are old .net applications with messy databases.
Redgate seems to have a nice plug-in for Team City, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) I'm struggling with.

We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration, included the RoundhousE exe in the project along with a folder where we keep the db change scripts for RoundhousE. We then took advantage of how Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively) and added a Deploy.ps1 to the project with the following in it:
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

# Use the Octopus variables when running under Octopus Deploy,
# otherwise fall back to sensible local defaults.
if ($OctopusParameters) {
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target database with the change-script folder.
&$roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir
In there you can see that we check for any Octopus variables (parameters) that are passed in when Octopus runs the deploy script; otherwise we fall back to some default values, and then we simply call the RoundhousE executable.
Then you just need to have that project as part of what gets packaged for Octopus, and then add a step in Octopus to deploy that package and it will execute that as part of each deployment.

We've looked at the RedGate solution and pretty much reached the same conclusion you have; unfortunately, it's the cost that is putting us off that route.
The only thing I can think of is to generate version-controlled DB migration scripts based upon your existing database and then execute these as part of your build process. If you're looking at .NET projects in future (that don't use a CMS), you could potentially consider using Entity Framework Code First Migrations.
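If Code First Migrations is an option, the flow from the Package Manager Console looks roughly like this (a minimal sketch assuming EF 6; the project name and migration name are placeholders):
Enable-Migrations -ProjectName MyApp.Data                      # run once to scaffold a Migrations folder
Add-Migration AddCustomerEmailColumn -ProjectName MyApp.Data   # scaffold a migration from the latest model changes
# Generate a SQL script for the build/deploy pipeline instead of touching a database directly:
Update-Database -Script -SourceMigration $InitialDatabase -ProjectName MyApp.Data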

I remember looking into this a while back, and it seems there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a Development or Testing server isn't so bad, as the data is probably replaceable... but the idea of auto-updating a UAT or Production server might send the willies up the backs of an Operations team, who might be responsible for the database, or at least for restoring it if it wasn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at using Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like Team City, CruiseControl.Net, etc.
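A rough sketch of that command-line route (switch names recalled from memory and worth checking against the SQL Compare documentation; the install path, server and database names are invented):
$sqlcompare = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"
# Script out the differences between the source (dev/build) and target (staging) databases,
# so the change script can be reviewed before it is applied.
& $sqlcompare /Server1:BUILDSQL /Database1:WidgetDev /Server2:STAGESQL /Database2:WidgetStaging /ScriptFile:.\artifacts\upgrade.sql
# Check $LASTEXITCODE against the documented exit codes before deciding whether to deploy.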

The risk and complexity come in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern: some objects will have the "old" data structure until they are updated by the newly released code. In this situation your code would potentially need to support different data structures. Missing properties, or those with a different type, should probably be covered in a well-written, defensively coded application anyway.
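For example, a defensively coded reader might default a property that older documents don't carry yet (a tiny sketch; the JSON shape and the Status property are invented):
$doc = '{ "Name": "Widget", "Price": "9.99" }' | ConvertFrom-Json
# Default a property that only exists in documents written by the new release.
$status = if ($doc.PSObject.Properties['Status']) { $doc.Status } else { 'Unknown' }
# Coerce a property whose type changed between releases (string -> decimal here).
$price = [decimal]$doc.Price
Write-Output "$($doc.Name): $price ($status)"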
I can see the risk in running scripts against the production database, however the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!

Database deployment automation is a real challenge, especially when trying to apply the build-once-deploy-many approach used for native application code.
With build-once-deploy-many, you compile the code once, create binaries, and then copy them across environments. From the database point of view, the equivalent would be to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.
What I know works for database deployment automation (disclaimer - I work at DBmaestro), based on what I hear from my customers, is the build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.

Consider a simple solution we have tried successfully, described in this thread - How to continuously deliver a SQL-based app?
Disclaimer - I work at CloudMunch

We use Octopus Deploy and database projects in our Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to a NuGet server.
The release process then uses the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
The previously created script is executed in the next step with the SQLCMD.exe utility.
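As a rough sketch of those two steps (file, server and profile names are placeholders; check the SqlPackage.exe arguments against your SSDT version):
# Step 1 - generate the upgrade script from the dacpac and attach it as a release artifact.
& SqlPackage.exe /Action:Script /SourceFile:".\MyDatabase.dacpac" /Profile:".\Staging.publish.xml" /OutputPath:".\MyDatabase.Staging.sql"
# Step 2 - after any manual approval, run the previously generated script.
& sqlcmd.exe -S StagingSqlServer -d MyDatabase -i ".\MyDatabase.Staging.sql" -b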
This separation of the create and execute steps gives us the possibility of having a manual step in between, so that someone can verify the script before it is executed on the Live environment; not to mention that the script, saved as an artifact in the release, can always be referred to at any later point.
If there is demand, I can provide more details and the step scripts.

Related

Managing different publish profiles for each developer in SSDT

In our current dev workflow there is a main database, DbMain. A process takes the latest version of the project, automatically deploys it there, and then triggers the unit tests. Since we want to always have a working version of the project in source control, each developer should be sure that they check in working code and that all tests will pass.
For this purpose we decided to create an individual database for each developer, following the naming convention DbMain_XX (where XX are the developer's initials). Before checking in, every developer is supposed to publish all their changes to that database manually and run the unit tests. For this, it is useful to set up a publish config that is a copy of the main publish config, differing only in the database name.
That means we will end up with a lot of different publish profiles in the solution, which is quite a mess.
If we don't add these profiles to source control, the .sqlproj file will still reference them, so the project will reference files that don't exist.
So, the actual question: can I have a single publish profile for all developers, where the database name is changed using a variable, e.g. DbName_$(dev_initials)? Or can each developer have their own publish config only locally, without breaking the project?
UPDATE:
Following Peter Schott's comments:
I can create a local publish profile, but if I don't add it to source control there will still be an entry in the .sqlproj file while the file itself is missing.
Running tests locally has at least two disadvantages. The first is that everybody would need to install SQL Server locally; we mainly work via virtual machines and disk space is quite limited there. The other is that developers will inevitably forget, or simply not bother, to run the tests manually every time. Sometimes they will push changes to the repo without building and/or running the tests. We would like to avoid such situations and catch a failed build as soon as possible.
Another approach that was mentioned is to have one common build database, and in my case we have one (DbMain). All developers can use it for their needs, but we will eventually hit the situation where two developers publish at the same time, which causes a lot of confusion when figuring out what really went wrong.
A common approach to this kind of thing - not only for SSDT publish profiles but for config files in general - is to commit a generic version of the file with a name like DbMain.publish.xml.template, instruct each developer to rename their copy to DbMain.publish.xml (or whatever) and .gitignore this local copy, allowing developers to make whatever changes they want while inheriting the common settings from the .template version of the file.
Publish profiles don't need to be added to the .sqlproj to be used at deploy time; this is merely a convenience in Visual Studio to make them easier to find and edit, so you don't need to worry about broken references.
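So one workable pattern is a single shared profile plus a per-developer override at publish time, for example (a sketch; the DEV_INITIALS environment variable is something you'd define yourselves, and the override behaviour is worth verifying against your SqlPackage version):
# Publish using the shared profile, but point it at the developer's own database.
& SqlPackage.exe /Action:Publish /SourceFile:".\bin\Debug\DbMain.dacpac" /Profile:".\DbMain.publish.xml" /TargetDatabaseName:"DbMain_$env:DEV_INITIALS"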
You are right in wanting to avoid multiple developers publishing to a common "build" database; this is a recipe for frustration.
Really, you want the "build" database to be published to as part of your CI process, meaning after the developers have pushed their changes.

Integration tests in Continuous Integration environment: Database and filesystem state

I'm trying to implement automated integration tests for my application. It's a very complex monster. You could say that its database and part of the filesystem are part of its state, because it saves image files in the hard drive, and references to those in the DB. The software needs all those, in a coherent state, to work properly.
Back to writing tests: to run any relevant test, I need some image files in the filesystem and certain records filled in the database. I thought of putting all of these in a separate folder called TestEnvironmentData in the repository and retrieving them from the Continuous Integration server (Team City), but a colleague said the repo is quite full as it is and that I should set up a special directory, and databases, only on the Continuous Integration server. I don't like that, because then the tests' success depends on me manually maintaining stuff on the server, and restoring the initial state before every test becomes cumbersome.
What do you guys do when you need to write integration tests for an app like this? The main goal is having an automated test harness to approach a large scale refactoring. There's lots of spaghetti code and the app's current architecture is hardly unit testable, that's why I decided on integration tests first.
Any alternative approach is welcome.
Developer repeatability is key when setting up a Continuous Integration server. I have set one up for my last three employers, and I have found the key to success is developers being able to run the same tests from their dev system and get the same results as the CI server.
The easiest way to do this is to check the test artifacts into source control, but you could also use Dropbox or a network share that you copy them from in one of the build steps.
For a .NET solution I have always used MSBuild, as you can most easily replicate the build process of Visual Studio and get the same binaries/deployables. As for keeping your database in sync so that tests are repeatable: in the past I used the MbUnit test framework and its [Rollback] attribute, as it would roll back any changes to SQL Server that happened in the test. I believe NUnit now has this attribute as well.
The CI server is great for finding code that breaks existing functionality but unless developers can reproduce the error on their machine they won't trust the CI server for some time.
First of all, we use Maven to build our code. It's like ant, but it relies on convention instead of configuration for many things, like Ruby On Rails does. One of those conventions is a standardized directory structure:
(project)----src----main----(language)
          |      |       \--resources
          |      \--test----(language)
          |              \--resources
          \--target---...
Using a directory structure like this makes it easy to keep your application resources and testing resources near each other, yet still be able to build for test or build for production, or just build both but just package up the application parts after running the tests.
As far as resetting the database between tests, how you do that is greatly dependent on the DBMS you're using. For instance, if you're using MySQL it's very easy to get the test data the way you want and do a mysqldump to a file you then load before the test. With other DBMSs you may have to drop and recreate the tables and reload the data, or make separate tables for the starting point and use a CREATE/SELECT sql statement to duplicate it each time.
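With MySQL that boils down to something like this (a sketch; the credentials and database name are placeholders):
# Capture the known-good starting state once.
& mysqldump --user=test --password=secret --result-file=snapshot.sql test_db
# Restore it before each test run so every test starts from the same state.
Get-Content .\snapshot.sql -Raw | & mysql --user=test --password=secret test_db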
There really is no reliable way around the "reset the database between tests" step.

version control/maintaining development local copies and working live copies and databases

This is a subject of common discussion, but through all my research I have not actually found a sound answer to this.
I develop my websites offline, and then launch them live through my hosting account.
I use CodeIgniter, and on that basis there are some fundamental differences between my offline and online copies, namely base URLs and database configurations. As such I cannot simply develop and test my websites offline and then upload them, as that requires small configuration changes which are easy to overlook and could lead to a non-working live website.
The other factor is that when I am developing offline, I might add a database table or a column whilst creating some functionality. When I upload my local developments to my host, they often do not work because I have forgotten to upload the new database structure. Obviously this cannot happen - there cannot be any opportunity for a damaged or broken live website.
Further to this, I'd like to have logs of my development - version control of sorts - such that if I develop a feature and then something else stops working, I can easily look backwards to at least see the code changes which could have caused the problem.
My fourth requirement is as follows: if I go away on holiday for a week without my development laptop and then get a bug report, I have no way of fixing it. If I fix it on the live copy, not only is it dangerous, but I'll inevitably forget to update my local copy - so when I update the live copy next time, that fix will be lost. Is there a way that, on any computer, I can access my development setup, edit and test, push to the live site, and also commit it so that my laptop's local copy stays up to date?
So, in general I'm looking for a solution to make my development process more efficient and reliable. Any ideas?
Thanks
Don't deploy by simply copying. Deploy by using a script (I use Apache Ant) that will automate the copy of specific files for each environment, the replacement of some values, etc.
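The answer above uses Apache Ant; the same idea sketched in PowerShell might look like this (the folder layout, placeholder tokens and per-environment JSON file are all invented for illustration):
$environment = "production"                      # or "local", "staging", ...
$buildDir    = ".\build\$environment"

# 1. Copy the site into an environment-specific build folder.
Remove-Item $buildDir -Recurse -Force -ErrorAction SilentlyContinue
Copy-Item .\src $buildDir -Recurse

# 2. Replace placeholders (base URL, DB settings) with values for this environment.
$cfg        = Get-Content ".\deploy\$environment.json" -Raw | ConvertFrom-Json
$configFile = "$buildDir\application\config\config.php"
(Get-Content $configFile -Raw) -replace '@BASE_URL@', $cfg.base_url | Set-Content $configFile

# 3. Upload $buildDir with your transfer tool of choice instead of copying files by hand.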
This just needs rigor. Make a todo list while developing, and check that every modification on the server is done. You might also test the deploy procedure on a pre-production server which has a similar configuration to the production server, make sure everything is OK, and then apply the same, tested procedure to the production server.
Just use a version control system. SVN or Git are two free candidates.
Make your version control server available from anywhere. If it's an open-source project, free hosting solutions exist. Of course, if you don't have a development computer available, you'll have to check out the whole project and probably install some tools to be able to develop, test and deploy. Just try to make it as easy as possible, or always have your laptop available. If you plan to work, have your toolbox with you. If you don't plan to work, then don't work. When you have finished some development, commit to the server. When you go back to your laptop, update your working copy from the server.
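Concretely, with Git and a hosted repository (the URL here is made up), the holiday scenario becomes:
# On any machine with access to the hosted repository:
git clone https://example.com/me/mysite.git
cd mysite
# ...fix the bug, test locally, then:
git commit -am "Fix broken image path on gallery page"
git push origin master

# Later, back on the laptop, bring the local copy up to date:
git pull origin master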
Small additions and clarifications to JB's answer:
Use any VCS which can work well with branches - your local and prod systems are good candidates for separate branches, where you share common code but have branch-specific config. It'll require some changes to your everyday workflow (code in "test", merge finished work into "prod", deploy - by tools, not by hand - only after the merge...), but it's a fair price.
Change your workflow, again. As JB noted - don't deploy by hand, don't deploy the wrong branch, don't deploy "prod" before the merge is finished. Build tools are rather smart these days; you can check such preconditions inside the build.
Just use a VCS; a DVCS may be somewhat better. I'd say a strong "no-no" to Git as a first VCS, but you have a wide choice even without it - SVN (poor branching/merging compared to a DVCS), Bazaar (not the tool of my dreams, but who knows), Mercurial, Fossil SCM, Monotone.
Don't work on live; never do anything outside your SCM. One source of changes is the rule of a happy developer. Either don't work at all in your free time, or have the codebase always reachable (free code hosting - Google Code, SourceForge, BitBucket, GitHub, Assembla, Launchpad - or your own server): get it as needed, change, save, deploy.

SQL Server Database Management with Continuous Integration

Let's say we have a continuous integration server. When I check in, the post-hook pulls the latest code, runs the tests, packages everything. What is the best way to also automate the database changes?
Ideally, I'd build an installer that could either build a database from scratch or update an existing one using some automated syncing method.
I've recently bumped into an article, that might be of use.
The author explained some of the best continuous integration practices including testing, processing and automation.
Here are some of the key takeaways:
In many shops code is unit tested at the point of commit. For databases, it is preferable to run all unit tests at once and in sequence against a QA database, rather than development, as part of the Test step.
The test step is a critical part of any CI/CD process. Test scripts, including unit tests themselves, should also be versioned in source control, extracted at the point of the Build step and executed
Pulling data from production is appealing as a quick expedient, but is never a good idea
The best approach is using a tool or script to quickly, repeatedly and reliably create synthetic test data for your transactional tables
Running unit tests to produce manual summary results for human consumption defeats the purpose of automation. We need machine readable results, that can allow an automated process to abort, branch and/or continue.
Running a CI process that requires 100% of tests to pass is akin to not having CI at all if the workflow pipeline is set up atomically to stop on failure, which it should be. To thread the needle, tests should have built-in thresholds that raise an error based on either the percentage of tests failing or, in some cases, whether certain high-priority tests fail (see the sketch after this list).
All processes should ultimately produce a Boolean result of pass or fail, but some non-automated processes can easily find their way into your CI workflow pipeline (e.g. unit testing). Software should be plug-n-play into any workflow pipeline, taking known inputs and producing expected outputs – like pass, fail.
CI/CD process should be aborted on failure and a notification email should be immediately sent vs continuing to cycle the pipeline.
The CI process should not cycle again until any errors in the last build are fixed. On failure, the entire team should get the failure notification, including as many details as to what failed as possible.
If a pipeline takes 1 hour, from start to finish, to complete, including all the testing, then all the build intervals should be set to no less than one hour and all new commits should be queued, and applied to the next build.
No plain text passwords should exist in automation scripts
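A minimal sketch of the threshold idea mentioned above (the result-file format and the 5% figure are invented for illustration):
[xml]$results = Get-Content .\artifacts\unit-test-results.xml
$total  = [int]$results.testsuite.tests
$failed = [int]$results.testsuite.failures
$failureRate = if ($total -gt 0) { $failed / $total } else { 0 }
if ($failureRate -gt 0.05) {
    # Machine-readable pass/fail: a non-zero exit code aborts the pipeline.
    Write-Error "Failure rate $([math]::Round($failureRate * 100, 1))% exceeds the 5% threshold."
    exit 1
}
exit 0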
If you have the opportunity to define and control the whole database management and db creation process, have a serious look at DB Ghost - it's more than just a tool - it's a process.
If you like it and can implement it, you'll get great returns on it - but it's a bit of an "all-or-nothing" kind of approach. Recommended.
I would caution against using a db backup as a development artifact; most CI best practices suggest that you manage the schema, procedures, triggers, and views as first-class development artifacts. The side effect is that you can take this one step further and use them to build a new database whenever you want; ideally you also have some data that can be pushed into the database.
Here is a cliff notes version to get your feet wet, but there is lots out there in this space:
http://www.infoq.com/news/2008/02/versioning_databases_series
I like some of the ideas that Scott Ambler has here as well, the site is good but the book is surprisingly deep for such a difficult set of problems.
http://www.agiledata.org/
http://www.amazon.com/exec/obidos/ASIN/0321293533/ambysoftinc
Red Gate's solution is quite robust and it works out of the box.
But the best thing is that you can integrate it with your continuous integration process. I use it with Msbuild and Hudson.
This post quickly explains how it works:
http://blog.vincentbrouillet.com/post/2011/02/10/Database-schema-synchronisation-with-RedGate
If you need to know more about this, feel free to ask.
The Red Gate approach using SQL Source Control and the SQL Compare Pro command line is detailed with code samples here:
http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf
Troy Hunt wrote an article on Simple Talk entitled "Continuous Integration for SQL Server Databases":
http://www.simple-talk.com/content/article.aspx?article=1247
Have you looked at FluentMigrator? The default download includes NAnt scripts that would be easy to add to a CI build. It's free, open source and easy to use, and it works with a wide variety of databases.
The latest version (5.0) of DB Ghost doesn't suffer from the "non ASCII character" problem (it just means that the file is UTF8 encoded) and it should be able to do exactly what you need.
Also, the tools can actually be used standalone to perform the various functions (scripting, building, comparing, upgrading and packaging) if you want; it's just that using them all together provides a full end-to-end process, making the overall value greater than the sum of its parts.
In essence, to make changes to the schema you update individual object creation scripts and per-table insert scripts (for reference data) that are held under source control just like you were developing a “day one” greenfield database. The DB Ghost tools are used to enable the whole thing by building these scripts into a brand new database (using continuous integration if required) and then comparing and upgrading a target database, which can be a copy of the production database. This process produces a delta script which can be used on the real production database during go-live.
You can even produce a Visual Studio database project and add it into any solutions you currently have.
Malc
I know this post is old, but we have a new solution that takes the following approach:
Developers script individual SQL changes and commit them to source control.
Our program (OneScript) pulls the change script files from source control, filters and sorts them, and generates a single release script file.
That release script file is then applied to a database to do a release.
Our home page here explains this process in more detail and has a link to an example that does these steps automatically from a Subversion hook. So soon after a commit, the developer receives an email saying if the release was successful or had errors. The PowerScript code is included.
Disclaimer - I work at the company that makes OneScript.

What's the best way to create ClickOnce deployments

Our team develops distributed winform apps. We use ClickOnce for deployment and are very pleased with it.
However, we've found the pain point with ClickOnce is in creating the deployments. We have the standard dev/test/production environments and need to be able to create deployments for each of these that install and update separate from one another. Also, we want control over what assemblies get deployed. Just because an assembly was compiled doesn't mean we want it deployed.
The obvious first choice for creating deployments is Visual Studio. However, VS really doesn't address the issues stated. The next in line is the SDK tool, Mage. Mage works OK but creating deployments is rather tedious and we don't want every developer having our code signing certificate and password.
What we ended up doing was rolling our own deployment app that uses the command line version of Mage to create the ClickOnce manifest files.
I'm satisfied with our current solution, but it seems like there would be an industry-wide, accepted approach to this problem. Is there?
I would look at using MSBuild. It has built-in tasks for handling ClickOnce deployments. I included some references which will help you get started, if you want to go down this path. It is what I use, and I have found it fits my needs. With a good build process using MSBuild, you should be able to squash the pains you have felt.
Here is a detailed post on how ClickOnce manifest generation works with MSBuild.
I've used NAnt to run the overall build strategy, but pass parameters into MSBuild to compile and create the deployment package.
Basically, NAnt calls into MSBuild for each environment you need to deploy to and generates a separate deployment output for each. You end up with a folder and all the ClickOnce files you need for every environment, which you can just copy out to the server.
This is how we handled multiple production environments as well -- we had separate instances of our application for the US, Canada, and Europe, so each build would end up creating nine deployments, three each for dev, qa, and prod.
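For what it's worth, the per-environment MSBuild call can be driven from a small script like this (a sketch; URLs, paths and the version number are placeholders, and the ClickOnce signing properties are omitted):
$environments = @{
    dev  = "https://deploy.example.com/myapp/dev/"
    qa   = "https://deploy.example.com/myapp/qa/"
    prod = "https://deploy.example.com/myapp/prod/"
}
foreach ($name in $environments.Keys) {
    # One ClickOnce publish per environment, each with its own install URL and output folder.
    & msbuild .\MyApp\MyApp.csproj /t:Clean,Publish /p:Configuration=Release "/p:PublishDir=.\publish\$name\" "/p:InstallUrl=$($environments[$name])" /p:ApplicationVersion=1.0.0.42
}
# Copy each .\publish\<environment>\ folder out to the matching server.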
