Version control for SQL Server

I'm working on a legacy SQL Server database with no version control. I've tried importing it into a Visual Studio 2017 database project, but it takes more than an hour to load ("your project will be ready after 1200000 operations are completed"), and it usually crashes in less time than it took to load.
Does anyone have any suggestions for a version control system I can try that will cope with real-life databases?

Baseline your database and call this version 0.1.0.
As you need to make changes to it, such as adding columns or data, script each change and add it to the source control of your choice. Call the file something like:
Version-0.1.1.sql
As you make more and more changes, the number of files grows:
Version-0.1.2.sql
Version-0.1.3.sql
Version-0.1.4.sql
Of course you will test these before you deploy to live. Since you are working with a legacy system, I would probably shy away from investing in expensive tools.
To bring a database to a particular version, you run the scripts in order (a minimal sketch of this follows below). Each script should include error handling that deals with anything that may go wrong inside it.
It is a manual process, but its strong points are that it is cheap, easy to understand, and a methodical way to manage change.
Note: obviously deploy scripts to a UAT environment before running them directly on live.
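To illustrate the "run the scripts in order" step, here is a minimal sketch that applies the versioned files with sqlcmd; the server, database, and folder names are placeholders, and it assumes sqlcmd.exe is on the PATH:

$server = ".\SqlExpress"
$database = "LegacyDb"
$scriptsDir = ".\db-scripts"

# Naive lexical sort: fine for Version-0.1.1 .. Version-0.1.9, but pad or parse
# the numbers if you expect more than nine patches per minor version.
Get-ChildItem -Path $scriptsDir -Filter "Version-*.sql" | Sort-Object Name | ForEach-Object {
    Write-Host "Applying $($_.Name)"
    # -b makes sqlcmd return a non-zero exit code when a script fails
    sqlcmd -S $server -d $database -i $_.FullName -b
    if ($LASTEXITCODE -ne 0) {
        throw "Script $($_.Name) failed; stopping so later versions are not applied."
    }
}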

I have had a lot of success using Flyway with both SQL Server and Postgres. It allows you to create numbered versions as betelgeuce described in his answer, but it also offers the additional protection of ensuring your earlier versions haven't been changed before deploying any new changes.
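For reference, a minimal sketch of how that looks with the Flyway command line; the connection details and folder are placeholder values. Flyway records each applied script (and its checksum) in its schema history table, which is how it detects when an already-applied version has been edited.

# Migrations live in a folder and follow the V<version>__<description>.sql naming scheme,
# e.g. .\migrations\V0_1_1__Add_customer_email.sql
flyway migrate `
    -url="jdbc:sqlserver://localhost;databaseName=LegacyDb" `
    -user="deploy" `
    -password="********" `
    -locations="filesystem:.\migrations"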

Related

When using Continuous or Automated Deployment, how do you deploy databases?

I'm looking at implementing TeamCity and Octopus Deploy for CI and deployment on demand. However, database deployment is going to be tricky, as many are old .NET applications with messy databases.
Red Gate seems to have a nice plug-in for TeamCity, but the price will probably be a stumbling block.
What do you use? I'm happy to execute scripts, but it's the comparison aspect (i.e. what has changed) I'm struggling with.
We utilize a free tool called RoundhousE for handling database changes with our project, and it was rather easy to use it with Octopus Deploy.
We created a new project in our solution called DatabaseMigration, which includes the RoundhousE exe and a folder where we keep the db change scripts for RoundhousE. We then took advantage of the fact that Octopus can call PowerShell scripts before, during, and after deployment (PreDeploy.ps1, Deploy.ps1, and PostDeploy.ps1 respectively) and added a Deploy.ps1 to the project with the following in it:
$roundhouse_exe_path = ".\rh.exe"
$scripts_dir = ".\Databases\DatabaseName"
$roundhouse_output_dir = ".\output"

# Use the Octopus variables when they are available, otherwise fall back to local defaults
if ($OctopusParameters) {
    $env = $OctopusParameters["RoundhousE.ENV"]
    $db_server = $OctopusParameters["SqlServerInstance"]
    $db_name = $OctopusParameters["DatabaseName"]
} else {
    $env = "LOCAL"
    $db_server = ".\SqlExpress"
    $db_name = "DatabaseName"
}

# Run RoundhousE against the target database, pointing it at the scripts folder
& $roundhouse_exe_path -s $db_server -d $db_name -f $scripts_dir --env $env --silent -o $roundhouse_output_dir
There you can see that we check for any Octopus variables (parameters) passed in when Octopus runs the deploy script, fall back to some default values otherwise, and then simply call the RoundhousE executable.
Then you just need to include that project in what gets packaged for Octopus and add a step in Octopus to deploy that package; the script will run as part of each deployment.
We've looked at the Red Gate solution and pretty much reached the same conclusion you have; unfortunately, it's the cost that is putting us off that route.
The only thing I can think of is to generate version-controlled DB migration scripts based on your existing database, and then execute these as part of your build process. If you're looking at .NET projects in future (that don't use a CMS), you could potentially consider using Entity Framework Code First migrations.
I remember looking into this a while back, and it seems there's a whole lot of trust you'd have to put into this sort of process. Auto-deploying to a development or testing server isn't so bad, as the data is probably replaceable... but the idea of auto-updating a UAT or production server might send the willies up the backs of an operations team, who might be responsible for the database, or at least for restoring it if it isn't quite right.
Having said that, I do think it's the way to go, as it's far too easy to be scared of database deployment scripts, and that's when things get forgotten or missed.
I seem to remember looking at Red Gate's SQL Compare and SQL Data Compare tools, as (I think) there was a command-line way into them, which would work well with scripted deployment processes like TeamCity, CruiseControl.NET, etc.
The risk and complexity come in more when using relational databases. In a NoSQL database where everything is a "document", I guess continuous deployment is not such a concern: some objects will have the "old" data structure until they are updated by the newly released code. In this situation your code would potentially need to support different data structures; missing properties, or those with a different type, should probably be handled by a well-written, defensively coded application anyway.
I can see the risk in running scripts against the production database, however the point of CI and Continuous Delivery is that these scripts will be run and tested in other environments first to iron out any "gotchas" :-)
This doesn't reduce the amount of finger crossing and wincing when you actually push the button to deploy though!
Database deployment automation is a real challenge, especially when trying to follow the build-once-deploy-many approach used for native application code.
With build once, deploy many, you compile the code, create binaries, and then copy them across the environments. From the database point of view, the equivalent is to generate the scripts once and execute them in all environments. This approach doesn't handle merges from different branches, out-of-process changes (a critical fix in production), etc.
What I know works for database deployment automation (disclaimer: I work at DBmaestro), as I hear from my customers, is a build-and-deploy-on-demand approach. With this method you build the database delta script as part of the deploy (execute) process. Using baseline-aware analysis, the solution knows whether to generate the deploy script for the change, protect the target and not revert it, or pause and allow you to merge changes and resolve the conflict.
Consider a simple solution we have tried successfully, described in this thread: How to continuously deliver a SQL-based app?
Disclaimer - I work at CloudMunch
We use Octopus Deploy and database projects in a Visual Studio solution.
The build agent creates a NuGet package using OctoPack, with a dacpac file and publish profiles inside, and pushes it to a NuGet server.
The release process then uses the SqlPackage.exe utility to generate the update script for the release environment and adds it as an artifact to the release.
The previously created script is executed in the next step with the SQLCMD.exe utility.
This separation of the create and execute steps gives us the possibility of a manual intervention step in between, so that someone can verify the script before it is executed on the live environment; not to mention that a script saved as an artifact in the release can always be referred to at any later point.
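For illustration, the generate-then-execute split can be sketched like this; the file names, publish profile, and server are placeholders, and it assumes SqlPackage.exe and sqlcmd are on the PATH:

# Step 1 (release step): generate the upgrade script from the dacpac, but do not run it yet.
# The generated script is attached to the release as an artifact for review.
& SqlPackage.exe /Action:Script `
    /SourceFile:".\Database.dacpac" `
    /Profile:".\Release.publish.xml" `
    /OutputPath:".\upgrade.sql"

# Step 2 (after the manual verification step): execute the previously generated script.
# -b aborts with a non-zero exit code if the script raises an error.
sqlcmd -S "sql-prod-01" -d "MyDatabase" -i ".\upgrade.sql" -b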
If there is demand, I can provide more details and the step scripts.

Twist to the standard “SQL database change workflow best practices”

Background
ASP.NET/C# Web App
MS SQL
Environments
Production
UAT
Test
Dev
We create patch scripts (XML and SQL) that are source controlled in Mercurial. We have a command-line utility that installs patches to the DB (utility.exe install –patch) from a Release folder that the build packages. Patches have metadata that determines when a patch should run, and we log installed patches in a table in the target DB. All of this was covered in the 3-year-old question:
SQL Server database change workflow best practices
Our Problem/Twist
I think this works well for tables, views, functions and stored procedures. We struggle with application configuration data. Here are some touch points on application configurations.
1. New client. A BA performs a system study and fit analysis. Out of this comes a configuration Word document describing which application configurations need to be set up. Note that some of these may also come in phases over time. We need to get these new configurations into the system for the developer and client UAT.
2. A developer works on a feature request or bug fix. A new configuration change comes out of that change. The configuration needs to make it into the system for testing and promotion to UAT and up.
3. QA finds that the developer missed an associated configuration change. That configuration needs to make it into the system for promotion to UAT and up.
4. A build goes to UAT. The client performs acceptance testing but finds they really want to change another unassociated configuration and have it promoted with the changes. In other words, they found they want to change a business process via a configuration. The configuration needs to make it into the system for promotion to PRD.
5. As the client operates in PRD they may tweak application settings. These configurations need to make it into the system for future development and testing.
The general issue is making sure we account for all the configurations and don't accidentally miss any during promotions, which causes grief.
Our Attempts At A Process
a. We have had a member of the QA team write patches (XML and SQL) and check those in. This requires a build to make sure they get into the package. This approach really only took care of item 1 above, and we fell apart on the other items. The nice thing is that for the items that made it into the patches, it was just an install with the utility.
b. A developer threw together a Config page in the application. All the configurations could be uploaded and downloaded via an XML document, but it requires the app to be running. For item 1, a member of the QA team would manually set up configurations in the application and then download the Config.xml file. This XML file would be used to upload configurations in other environments. We would use a text diff tool to look at differences between config.xml files from different environments. This addressed item 1 and the other items, but had problems: not all configurations made it into the XML document (which just needs to be fixed by the developer); some configurations didn't have a UI in the application, so for those you still had to go to the database manually; comparing the XML documents with a text diff was difficult at times (mostly due to sorting, but I'm sure there are other issues); the XML was not very human-readable; and finally the XML document did not allow for deleting existing incorrect or outdated configs.
c. Recently we went with option b, but over time, for a new client, we just started tracking configs manually and promoting them by hand (UI and DB) through the promotions. Needless to say, lots of human errors.
So we have been looking at solutions. Eventually it would be great to get as much automation in as possible. I'm looking at going with the scripting approach and focusing on process and documentation, and at using Redgate Data Compare in addition to what we had been doing with the compare on config.xml. With Redgate we have to create views, though, and there is no way to create update scripts from that approach except to update the scripts manually; it does at least allow a comparison without the app running. I'm also looking at pulling the configs out of our normal patches and making them a system independent of the build (utility.exe –patch –config). When I say focus on process, I mean things like: if we compare and find a config change, whether reported by the client or not, we still script it; it just means we have to have a process in place to quickly revalidate the config install before promoting to the next level. As for documentation, I'm looking at making the original QA document a living document instead of just an upfront document. The goal is to try to improve clarity and reduce missed configurations during promotion. Unfortunately it doesn't improve speed of delivery.
Does anyone have any recommendations or best practices to pass along? Thanks.
Can I ask exactly what you mean by application configuration? I'm interpreting that as both:
Config files in the web application
Static reference data inside the database
Full disclosure: I work for Red Gate. You might be interested in taking a look at Deployment Manager, a deployment tool that deploys applications, databases and configuration. It's free for up to 5 projects and target servers.
The approach it uses is to package application code and the database state into packages. These packages can be deployed into dev, test, staging and production environments. The same package is deployed to each environment.
Any application configuration that needs to change between environments is handled in one of the ways below:
Variable substitution in web.config. The tool allows you to specify override values for variables in these files, and set these per environment/server
Substituting the web.config file per environment.
Custom PowerShell scripts that are run pre/post deploy. You could use these to execute custom SQL based on the environment or server (see the sketch after this list).
Static data within the database, using SQL Source Control's static data feature. I've written a blog post about how to supply different sets of static data to different environments/customers.
This allows you to source control the application configurations and deploy them to different environments.
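As a rough illustration of the pre/post deploy script idea in the list above, a PostDeploy.ps1 could apply environment-specific SQL along these lines; the parameter names, folder layout, and config scripts are illustrative, not part of Deployment Manager itself:

# Hypothetical PostDeploy.ps1: run the configuration script that matches the target environment.
param(
    [string]$Environment = "Test",
    [string]$SqlServer = ".\SqlExpress",
    [string]$Database = "MyAppDb"
)

# e.g. .\config\Test.sql, .\config\Staging.sql, .\config\Production.sql
$configScript = ".\config\$Environment.sql"

if (Test-Path $configScript) {
    Write-Host "Applying configuration for $Environment"
    sqlcmd -S $SqlServer -d $Database -i $configScript -b
    if ($LASTEXITCODE -ne 0) { throw "Configuration script failed for $Environment" }
} else {
    Write-Warning "No configuration script found for $Environment"
}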

Committing Stored Procedures to SVN Repository

My current development environment for C# projects is Visual Studio, with a SQL Server database, using VisualSVN to connect to my SVN repository. To manage revisions of my stored procedures, views, etc., I save the ALTER scripts to a folder watched by my SVN client so they get included in the repository.
I have checked out some (now older) posts like these (How to keep Stored Procedures and other scripts in SVN/Other repository? and Is there a SVN plugin for SQL Server Management Studio 2005 or 2008?) and have seen recommendations for these tools: http://www.red-gate.com/products/sql-development/sql-source-control/ and http://www.zeusedit.com/agent/ssms/ms_ssms.html .
As I infrequently work with projects doing much DB-side programming, this has never been a major bother (a dozen scripts in a folder with some naming scheme is not much to manage manually), but I have just inherited a project with a few hundred views and 1000+ Stored Procedures which have never been included in version control.
My question is:
What process do others follow for managing the versioning of their SQL Server code - is there an accepted, clever, or otherwise obvious approach I am missing here? I am currently leaning towards purchasing one of the aforementioned tools, but am looking for advice from the community before I do this.
I realize this may result in a tool recommendation rather than a code solution but posted to SO as I think this is the appropriate crowd to ask this of.
I would recommend you go with something like the Red Gate tool, and treat any SQL database the same way you'd treat your C# source code; manually keeping track of the ALTER statements will trip you up sooner or later as the number of modifications grows. I can't speak for the Zeus Edit tool, but having used the Red Gate one, it "just works". Another benefit of using a tool like this is that it can manage your migration scripts, so you can make a bunch of changes on your development version and then generate a single update script to update your testing database, etc., including data changes, which is IMHO the biggest PITA to manage manually.
The other thing to consider: even if changes are infrequent and you get away with manually tracking the ALTER statements, what if someone else ends up working on the same project? Now you have another potential source of mismanaged change scripts...
Anyway, do let us know how you get on and best of luck with it!
I’ve been maintaining a database with around 800+ db objects in it. We've always just scripted the database objects to a svn-watched folder as you describe. We have had some issues with this method, mostly with people forgetting to script new or modified objects. At the end of the day it hasn't been a huge problem for our project, but yours may be different.
We've looked into a couple of tools, but they always assume you are starting from scratch, and we have almost 10 years of history we'd like to preserve. In the end we just ended up settling back into our text-based manual solution. It's cheap and easy.
Another option you might want to look into is setting up a Visual Studio Database Project. It will script all your objects and provide some deployment options as well. My opinion was that it tried to be a little too tightly integrated for our tastes - we have a few named references to linked databases that it just wouldn't give up on.

version control/maintaining development local copies and working live copies and databases

This is a subject of common discussion, but through all my research I have not actually found a sound answer to this.
I develop my websites offline, and then launch them live through my hosting account.
I use CodeIgniter, and so there are some fundamental differences between my offline and online copies, namely base URLs and database configurations. As such, I cannot simply develop and test my websites offline and then upload them, as that requires small configuration changes which are easy to overlook and could lead to a non-working live website.
The other factor is that while developing offline, I might add a database table or a column while creating some functionality. When I upload my local developments to my host, they often do not work because I have forgotten to upload the new database structure. Obviously this cannot happen - there cannot be any opportunity for a damaged or broken live website.
Further to this, I'd like to have logs of my development - version control of sorts - such that if I develop a feature and then something else stops working, I can easily look back to at least see the code changes which could have caused it.
My fourth requirement is as follows: if I go away on holiday for a week without my development laptop and then get a bug report, I have no way of fixing it. If I fix it on the live copy, not only is it dangerous, but I'll inevitably forget to update my local copy, so when I next update the live copy that change will be lost. Is there a way that, on any computer, I can access my development setup, edit and test, launch to the live site, while also committing it so that my laptop's local copy stays up to date?
So yes, in general I'm looking for a solution to make my development process more efficient and suitable. Any ideas?
Thanks
Don't deploy by simply copying. Deploy by using a script (I use Apache Ant) that will automate the copy of specific files for each environment, the replacement of some values, etc.
This just needs rigor. Make a todo list while developing, and check that every modification on the server is done. You might also test the deploy procedure on a pre-production server which has a similar configuration to the production server, make sure everything is OK, and then apply the same, tested procedure on the production server.
Just use a version control system. SVN or Git are two free candidates.
Make your version control server available from anywhere. If it's an open-source project, free hosting solutions exist. Of course, if you don't have a development computer available, you'll have to check out the whole project, and probably install some tools to be able to develop, test and deploy. Just try to make it as easy as possible, or always have your laptop available. If you plan to work, have your toolbox with you. If you don't plan to work, then don't work. When you have finished some development, commit to the server. When you go back to your laptop, update your working copy from the server.
Small additions and clarifications to JB's answer:
Use any VCS that works well with branches - your local and prod systems are good candidates for separate branches, where you share common code but have branch-specific config. It'll require some changes to your everyday workflow (code in "test", merge finished work into "prod", deploy - by tools, not by hand - only after the merge...), but it's a fair price.
Change your workflow, again. As JB noted: don't deploy by hand, don't deploy the wrong branch, don't deploy "prod" before the merge is finished. Build tools are rather smart these days; you can check such preconditions inside the builder.
Just use a VCS; maybe a DVCS will be somewhat better. I'd say a strong "no" to Git as a first VCS, but you have a wide choice even without it: SVN (poor branching/merging compared to a DVCS), Bazaar (not the tool of my dreams, but who knows), Mercurial, Fossil SCM, Monotone.
Don't work on live, and never do anything outside your SCM. A single source of changes is the rule of a happy developer. Either don't work at all in your free time, or have the codebase always reachable (free code hosting - Google Code, SourceForge, Bitbucket, GitHub, Assembla, Launchpad - or your own server): get it as needed, change, save, deploy.

SQL Server Database Management with Continuous Integration

Let's say we have a continuous integration server. When I check in, the post-hook pulls the latest code, runs the tests, packages everything. What is the best way to also automate the database changes?
Ideally, I'd build an installer that could either build a database from scratch or update an existing one using some automated syncing method.
I've recently bumped into an article that might be of use.
The author explained some of the best continuous integration practices including testing, processing and automation.
Here are some of the key takeaways:
In many shops, code is unit tested at the point of commit. For databases, it is preferable to run all unit tests at once and in sequence against a QA database, rather than against development, as part of the Test step.
The test step is a critical part of any CI/CD process. Test scripts, including unit tests themselves, should also be versioned in source control, extracted at the point of the Build step and executed
Pulling data from production is appealing as a quick expedient, but is never a good idea
The best approach is using a tool or script to quickly, repeatedly and reliably create synthetic test data for your transactional tables
Running unit tests to produce manual summary results for human consumption defeats the purpose of automation. We need machine readable results, that can allow an automated process to abort, branch and/or continue.
Running a CI process that requires 100% of tests to pass is akin to not having CI at all if the workflow pipeline is set up atomically to stop on failure, which it should be. To thread the needle, tests should have built-in thresholds that raise an error based on either the percentage of tests failing or, in some cases, when certain high-priority tests fail.
All processes should ultimately produce a Boolean result of pass or fail, but some non-automated processes can easily find their way into your CI workflow pipeline (e.g. unit testing). Software should plug-and-play into any workflow pipeline, taking known inputs and producing expected outputs such as pass or fail (see the sketch after this list).
The CI/CD process should be aborted on failure and a notification email sent immediately, rather than continuing to cycle the pipeline.
The CI process should not cycle again until any errors in the last build are fixed. On failure, the entire team should get the failure notification, including as many details as to what failed as possible.
If a pipeline takes 1 hour, from start to finish, to complete, including all the testing, then all the build intervals should be set to no less than one hour and all new commits should be queued, and applied to the next build.
No plain text passwords should exist in automation scripts
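To make the pass/fail point above concrete, a build step only needs to surface a non-zero exit code for the pipeline to stop; a minimal sketch, where the test script name and connection details are placeholders:

# Run the database unit-test script against the QA database and turn the result
# into a machine-readable pass/fail for the CI server.
sqlcmd -S "qa-sql-01" -d "MyAppDb_QA" -i ".\tests\run_unit_tests.sql" -b
if ($LASTEXITCODE -ne 0) {
    Write-Error "Database unit tests failed; aborting the pipeline."
    exit 1    # a non-zero exit code stops the CI/CD pipeline at this step
}
Write-Host "All database unit tests passed."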
If you have the opportunity to define and control the whole database management and db creation process, have a serious look at DB Ghost - it's more than just a tool - it's a process.
If you like it and can implement it, you'll get great returns on it - but it's a bit of an "all-or-nothing" kind of approach. Recommended.
I would caution against using a db backup as a development artifact; most CI best practices suggest that you manage the schema, procedures, triggers, and views as first-class development artifacts. The side effect is that you can take this one step further and use them to build a new database whenever you want; ideally you also have some data that can be pushed into the database.
Here is a Cliffs Notes version to get your feet wet, but there is lots out there in this space:
http://www.infoq.com/news/2008/02/versioning_databases_series
I like some of the ideas that Scott Ambler has here as well; the site is good, but the book is surprisingly deep for such a difficult set of problems.
http://www.agiledata.org/
http://www.amazon.com/exec/obidos/ASIN/0321293533/ambysoftinc
Red Gate is a quite robust solution and it works out of the box.
But the best thing is that you can integrate it with your continuous integration process. I use it with Msbuild and Hudson.
Here's a quick explanation of how it works:
http://blog.vincentbrouillet.com/post/2011/02/10/Database-schema-synchronisation-with-RedGate
if you need to know more about this, feel free to ask
The Red Gate approach using SQL Source Control and the SQL Compare Pro command line is detailed with code samples here:
http://downloads.red-gate.com/HelpPDF/ContinuousIntegrationForDatabasesUsingRedGateSQLTools.pdf
Troy Hunt wrote an article on Simple Talk entitled "Continuous Integration for SQL Server Databases":
http://www.simple-talk.com/content/article.aspx?article=1247
Have you looked at FluentMigrator? The default download includes Nant scripts that would be easy to add in to a CI. Free, open source and easy to use. Works for a wide variety of databases.
The latest version (5.0) of DB Ghost doesn't suffer from the "non ASCII character" problem (it just means that the file is UTF8 encoded) and it should be able to do exactly what you need.
Also, the tools can actually be used standalone to perform the various functions (scripting, building, comparing, upgrading and packaging) if you want; it's just that using them all together provides a full end-to-end process, making the overall value greater than the sum of its parts.
In essence, to make changes to the schema you update individual object creation scripts and per-table insert scripts (for reference data) that are held under source control, just as if you were developing a “day one” greenfield database. The DB Ghost tools enable the whole thing by building these scripts into a brand new database (using continuous integration if required) and then comparing and upgrading a target database, which can be a copy of the production database. This process produces a delta script which can be used on the real production database during go-live.
You can even produce a Visual Studio database project and add it into any solutions you currently have.
Malc
I know this post is old, but we have a new solution that takes the following approach:
Developers script individual SQL changes and commit them to source control.
Our program (OneScript) pulls the change script files from source control, filters and sorts them, and generates a single release script file.
That release script file is then applied to a database to do a release.
Our home page here explains this process in more detail and has a link to an example that does these steps automatically from a Subversion hook, so soon after a commit the developer receives an email saying whether the release was successful or had errors. The PowerShell code is included.
Disclaimer: I work at the company that makes OneScript.
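The general filter-sort-and-concatenate idea can be sketched as follows; this is a generic sketch of the approach, not OneScript itself, and the folder layout and names are invented for the example:

# Collect the individual change scripts, sort them, and build a single release script.
$changeScriptsDir = ".\db\changes"        # where developers commit their .sql change scripts
$releaseScript = ".\db\release\release.sql"

Get-ChildItem -Path $changeScriptsDir -Filter "*.sql" |
    Sort-Object Name |
    ForEach-Object {
        "-- Source: $($_.Name)"
        Get-Content $_.FullName
        "GO"
    } |
    Set-Content $releaseScript

# The single release script can then be applied to a target database:
sqlcmd -S ".\SqlExpress" -d "MyDatabase" -i $releaseScript -b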
