Deploying MSSQL change scripts - sql-server

So I am in the throes of developing our Continuous Integration practices. We are a .NET/MSSQL shop. We will all soon be on VS2012. We have settled on CruiseControl.NET as our CI server, using MSBuild to compile our projects. We use SVN for source control (possibly switching to Git later, but that's another discussion). I'm leaning towards using InstallShield to deploy code packages (usually web apps and/or batch executables) to our QA and production servers. (CCNet would build these MSIs as part of our CI.) We are also starting to include unit testing in our projects, and will use NUnit integrated with CCNet to run the tests automatically upon check-in.
So far this works for our standard web app/exe development. Where it does not fit in (yet) is with our MSSQL change management, or lack thereof. It's been pretty cowboy how we've done this. Some folks have used Migrator.Net. Others just do a schema compare with Redgate SQL Compare and generate a script. Still others have hand-written SQL scripts. These may or may not be in SVN. "Source control" at the db level is basically "we have backups of our databases." Boo, hiss. Needless to say, if we want some consistency with our CI and with our deployments, we need to settle on something. So far I am leaning towards using VS SQL projects to handle the change management and deployment.
Note: we (developers) are not supposed to push changes. Sys admins do that. So we can't run anything to deploy code or SQL.
So, 2 problems to solve (I think):
What "technique" to use so that our CI server blows away a CI version of the database so that unit tests can be tested against it. I've settled that VS2012 SQL projects can do that. CCNet can run msbuild against the db project, which recreates the database. This is fairly easy.
How to generate change scripts for our QA and prod environments? This one I'm stuck on.
VS can do a schema compare and then generate the SQL script -- but it is dependent on sqlcmd, so our sys admins would have to run sqlcmd from the command prompt to deploy it... probably not ideal. Right?
I could run msbuild again to deploy... but I don't want the database re-created, I just want changes deployed.
So what are the options here? I need something self-contained for the admins to run -- and to check in to SVN. Should I make another MSI for database deployments? Can CCNet/MSBuild make some other kind of "deployment package" for database changes (not re-creation) that the sys admins can double-click and go?
How do you all handle this?
Thanks
Tom

Check out the SQL Server Data Tools package from the Microsoft site.
This will register a new SQL Server 2012 Database project type to contain the definitions for all of your database structures. Building the project produces a DACPAC file describing the full schema (and, if configured, a create script) that you can use to deploy your database.
Then, for upgrading an existing database, use the SqlPackage.exe tool with the built DACPAC and the target database server name to generate an Update.sql script.
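As a rough sketch of that SqlPackage call (the server and path names here are placeholders, not prescribed by the tool):

    # Generate an incremental Update.sql by comparing the built DACPAC
    # against the target database. Run from a PowerShell prompt.
    SqlPackage.exe /Action:Script `
        /SourceFile:"bin\Release\MyDatabase.dacpac" `
        /TargetServerName:"QA-SQL01" `
        /TargetDatabaseName:"MyDatabase" `
        /OutputPath:"Update.sql"

The resulting Update.sql is self-contained, so it can be checked into SVN and handed to the sys admins to run.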
Update: On the issue of how you're running unit tests, you could add helper methods that invoke the create script by launching a process and passing it the path to the output create.sql script, then have your tests tear down the database using the same method but with a DROP DATABASE statement.
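A minimal sketch of that setup/teardown, assuming sqlcmd is on the PATH and using a placeholder database name:

    sqlcmd -S "(local)" -i ".\bin\Release\create.sql"      # set up: run the generated create script
    # ... run the NUnit tests here ...
    sqlcmd -S "(local)" -Q "DROP DATABASE MyDatabase_CI"   # tear down when the tests finish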

Related

How to build CI/CD for MS SQL Server?

I'm trying to build a CI/CD process for my Microsoft SQL Server database projects. It will work with Azure DevOps pipelines.
I have all databases in Visual Studio database projects with Git as source control. My objective is to have something that lets me release databases through DevOps pipelines to the different environments:
DEV
UAT
PROD
I was thinking of using DBGhost (http://www.innovartis.co.uk/), but I can't find up-to-date information about this tool (only very old material), and there is very little on the internet about it and how to use it (is it still in use?).
I would like to use a mix of DBGhost and DevOps: DBGhost for source scripting, building, comparing, synchronizing, creating delta scripts, and upgrading, and DevOps to run the releases (which would call the builds created by DBGhost).
If you have any ideas using this or other methods, I would be grateful, because currently all releases are manual, which is not advisable.
We have this configured in our environment using just DevOps. Our database is in a Visual Studio database project. The MSBuild task builds the project and generates a DACPAC file as an artifact, and the Release uses the "SQL Server Database Deploy" task to deploy this to the database. The deploy task needs to use an account with enough privileges to create the database, logins, etc., and it takes care of performing the schema compare, generating the delta scripts, and executing them. If your deploy is going to make changes that could result in data loss, such as removing columns, you will need to include the additional argument /p:BlockOnPossibleDataLoss=false in the deploy task. This flag is not recommended unless you know there will be changes that cause data loss; without it, any deploy that would result in data loss will fail.
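For reference, the deploy task is essentially a wrapper around SqlPackage.exe, so the equivalent command line looks roughly like this (server and file names are placeholders):

    SqlPackage.exe /Action:Publish `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetServerName:"PROD-SQL01" `
        /TargetDatabaseName:"MyDatabase" `
        /p:BlockOnPossibleDataLoss=false   # only add this when data-losing changes are intended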

Deploy SQL server database on multiple server using DevOps (an mac as development laptop)

Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project: each tenant will use the same web application (multiple instances behind a load balancer), BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server can host several databases).
FYI, the application and SQL Server are hosted on AWS, and our CI/CD is Azure DevOps.
And one last (but not least) limitation: we are working in VS Code only (Mac & Linux laptops).
So, we looked at some solutions:
Using database projects (.sqlproj) + DACPAC generation deployed using DevOps, but .sqlproj projects aren't supported in VS Code
Using migrations: doesn't work with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a SQL script that covers every possible case
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL Scripts for each deployment, and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, committed to your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
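For the EF route, generating such a script is a one-liner; a sketch (the output path is just an example), where --idempotent makes the script safe to run against a database at any earlier migration:

    dotnet ef migrations script --idempotent --output .\artifacts\migrate.sql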
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script and go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
And preparing, reviewing, testing, and editing the database change scripts is not something that everyone on the dev team needs to do. So you can always consider jumping onto a Windows VM for that task.
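To address the multi-database part of the question: once a change script has been reviewed, fanning it out to every tenant database is a plain loop. A hedged sketch, where the tenant list file and script path are hypothetical:

    # Apply one reviewed change script to every tenant database.
    $tenants = Get-Content .\tenant-databases.txt          # one "server,database" pair per line
    foreach ($t in $tenants) {
        $server, $db = $t -split ','
        sqlcmd -S $server -d $db -i .\changes\v42.sql -b   # -b: abort on the first error
    }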

Deploying a database package to SQL Server through Octopus & TeamCity

I am implementing CI/CD for a SQL Server database through Redgate software and TeamCity. I managed to build and push the NuGet database package to Octopus, and I can see the NuGet package in the Library section of Octopus. But I am facing issues deploying that package to SQL Server: I can't find the built-in step template "Deploy a NuGet package" in the Octopus process section. I have also tried the "Deploy a package" step template, but it didn't work. I am following this guide:
https://documentation.red-gate.com/sr1/worked-examples/deploying-a-database-package-using-octopus-deploy-step-templates
Any help will be highly appreciated.
Good question. To use Redgate's tooling with Octopus Deploy you will need to install the step templates they provide. I recommend "Redgate - Create Database Release" and "Redgate - Deploy from Database Release". While browsing the step templates you might notice one that deploys directly from a package. The state-based functionality for SQL Change Automation works by comparing the state of the database stored in the NuGet package with the destination database. Each time it runs, it creates a new set of delta scripts to apply. Because of that, the recommended process is:
Download the database package onto the jump box.
Create the delta script by comparing the package on the jump box with the database on SQL Server.
Review the delta script (can be skipped in dev and test).
Run the script on SQL Server using the tentacle on the jump box.
Let's go ahead and walk through each one. The download a package step is very straightforward, no custom settings aside from picking the package name.
The Redgate - Create Database Release step is a little more interesting. This is the step which generates the actual delta script that will be run on the database. What trips up most people is the Export Path. The export path is where the delta script will be exported to. This needs to be a directory outside of the Octopus Deploy tentacle folder. This is because the "Redgate - Deploy from Database Release" step needs access to that path and the Tentacle folder will be different for each step.
What I like to do is use a project variable.
The full value of the variable is:
C:\RedGate\#{Octopus.Project.Name}\#{Octopus.Release.Number}\Database\Export
The next step is approving the database release. I recommend creating a custom team to be responsible for this. My preference is to skip this step in Dev and QA.
The create database release step makes use of the artifact functionality built into Octopus Deploy. This allows the approver to download the files and review them.
The final step is deploying the database release. This step takes the delta script from the export path and runs it on the target server. This is why I recommend putting the export path in a variable.
Some other general items to help you get going. First, don't install Tentacles directly onto SQL Server instances. In production, the typical SQL Server setup is a cluster, or multiple nodes with Always On high availability, where access to SQL Server is handled via a virtual IP.
If you were to install tentacles on both nodes, Octopus Deploy would attempt to run the change script on both nodes at the same time (by default). That will cause a lot of drama. I recommend using a jump box because you will need something to sit between Octopus Deploy and SQL Server. When you get comfortable with that I'd recommend using workers (but that is a bit of scope creep, so I won't cover that).
If you would like to know more on how to wire this up, check out the blog post I wrote (and copied from for this answer) here.
I also have written an entire series on database deployments with Octopus Deploy, which you can find here.
Finally, our documentation covers jump boxes and permissions you will need for the user doing the database deployments.
Hope that helps!

Proper structure of asp.net website and database in visual studio

My main problem is where does database go?
The project will be on SVN and is developed using the ASP.NET MVC repository pattern. Where do I put the SQL Server database (MDF file)? If I put it in App_Data, then my team mates can check out the source along with the database and run it, with the database deployed in the VS instance.
The problems with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio. Putting it in App_Data makes no sense there.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Management Studio, my problems are:
How do I keep this consistent with source control (team mates have to re-script the db if the schema changes)?
The connection string again (I'd like to automatically use the right string on the production server).
Is there a solution to all my problems above? Maybe some pattern or tool that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database you can also get into issues where a database change is made (i.e. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs; each developer has one of the supported databases, giving us really good cross-platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (is ANSI SQL), and a specific dbpatch for the three databases that require vendor specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
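A sketch of what such an update runner can look like (the dbo.applied_patches bookkeeping table is an assumption for illustration, not part of the setup described above):

    # Apply every dbpatch that has not yet been recorded as run on this machine.
    $applied = sqlcmd -S . -d DevDb -h -1 -W -Q "SET NOCOUNT ON; SELECT name FROM dbo.applied_patches"
    Get-ChildItem .\dbpatches -Recurse -Filter *.dbpatch | Sort-Object FullName | ForEach-Object {
        if ($applied -notcontains $_.Name) {
            sqlcmd -S . -d DevDb -i $_.FullName -b
            sqlcmd -S . -d DevDb -Q "INSERT INTO dbo.applied_patches (name) VALUES ('$($_.Name)')"
        }
    }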
Connection strings are maintained in NHibernate.config; however, if an NHibernate.User.config is present it is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database and sets the appropriate dialects etc.
When pushing to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when building packages for release. The NAnt script populates the template config files with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SQL Compare from Redgate or the built-in database compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (e.g. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start, but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is database migrations (migrating from one version to the next). Here is a Stack Overflow question for .NET tools/libraries that make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a folder structure of SQL scripts in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/

What is the best website/web-app upgrade process?

We have a great process for upgrading our clients' websites as far as updating html/js code and assets is concerned (by using Subversion) that we are very happy with.
However, when it comes to upgrading databases, we are without any formal process.
If we add new tables/fields to our development database, when it comes to rolling it out to the production server we have to remember our changes and replicate them. We cannot simply copy the development database on top of the production database as client data would be lost (e.g. blog posts, account info etc).
We are also now in the process of building a web-app which is going to come across the same issues.
Does anyone have a solution that makes this process easier and less prone to error? How do big web-apps get round the problem?
Thanks.
I think that adding controls to the development process is paramount. At one of my past jobs, we had to script out all database changes. These scripts were then passed to the DBA with instructions on what environment to deploy them in. At the end of the day, you can implement technical solutions, but if the project is properly documented (IF!!!) then when it comes time for deployment, the developers should remember to migrate scripts, along with code files. My $.02
In my opinion your code should always be able to create your database from scratch, and therefore it should handle upgrades too. It should check a field in the database to see what version the schema is at and handle the upgrades to the latest version.
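A minimal sketch of that version check, assuming a hypothetical dbo.schema_version table and numbered upgrade scripts:

    # Read the current schema version and apply any missing upgrade scripts in order.
    $current = [int](sqlcmd -S . -d AppDb -h -1 -W -Q "SET NOCOUNT ON; SELECT MAX(version) FROM dbo.schema_version")
    $target  = 42                                          # the version this build of the code expects
    if ($current -lt $target) {
        ($current + 1)..$target | ForEach-Object {
            sqlcmd -S . -d AppDb -i ".\upgrades\$_.sql" -b # each script is expected to bump schema_version
        }
    }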
I had some good luck with: http://anantgarg.com/2009/04/22/bulletproof-subversion-web-workflow/
The author has a database versioning workflow (with PHP script), which is decent.
Some frameworks have tools which deal with database upgrades. For example, Rails migrations are pretty nice.
If no convenient tool is available for your platform you could try scripting modifications to your development database.
In my company we use this model for some of our largest projects:
Say X is the version of our application that has just been deployed, and it is no different from the latest development version.
We create a new directory for the scripts, naming it for the next version (X + 1), and add it to the Subversion repository.
When a developer wants to make a modification to the development database, he creates a .sql script with a name like "1 - does something.sql" that makes the modifications (they must be non-destructive), saves it, and then runs it on the development database. He commits the web app code and the SQL scripts. Each developer does the same, maintaining the order of execution of the scripts.
When we need to deploy version X+1, we copy the X+1 web app code and the scripts to the production server, back up the database, run the SQL scripts one by one on the production database, and deploy the new web application code.
After that we open a new (X + 2) SQL script directory and repeat the process...
We basically have a similar approach to Senad's: we maintain a changes.sql file in our repo that developers put their changes in. When we deploy to production, we:
Run a test deployment to the QA server:
first reproduce the production environment (app & db) in the QA server
run changes.sql against the qa db
deploy the app to qa
run integration tests.
When we are sure the app runs fine in QA with the scripted changes to the db (i.e. nobody forgot to include their db changes in changes.sql, or references, etc.), we:
backup the production database
run the scripts in the changes.sql file against the production db
deploy the app
clear the changes.sql file
All of the deployment runs through automated scripts, so we know we can reproduce it.
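A condensed sketch of what such a script might contain (server, database, and paths are placeholders):

    # Back up, apply the accumulated changes, then reset for the next cycle.
    sqlcmd -S PROD-SQL01 -Q "BACKUP DATABASE AppDb TO DISK = 'D:\backups\AppDb_pre_deploy.bak'"
    sqlcmd -S PROD-SQL01 -d AppDb -i .\changes.sql -b
    # ... deploy the application ...
    Clear-Content .\changes.sql                            # empty changes.sql for the next release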
Hope this helps.
We have a migrations/ folder inside almost every project, and there are so-called "up" and "down" scripts (SQL). Every developer is obliged to write his own up/down scripts and to verify them against the testing environment.
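For illustration, applying such a pair with sqlcmd might look like this (the file names are hypothetical):

    sqlcmd -S . -d AppDb -i .\migrations\0042_add_email.up.sql -b     # apply the change
    sqlcmd -S . -d AppDb -i .\migrations\0042_add_email.down.sql -b   # roll it back if needed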
There are other tools and frameworks for migrations, but we haven't had the time to test them...
Some are: DoctrineDB, Rails migrations, Propel (I think...); Capistrano can do it also.
