How to build CI/CD for MS SQL Server?

I'm trying to build CI/CD for my Microsoft SQL Server database projects. It will work with Azure DevOps pipelines.
I have all databases in Visual Studio database projects with Git as source control. My objective is to have something that lets me release databases, using DevOps pipelines, to the different environments:
DEV
UAT
PROD
I was thinking of using DBGhost (http://www.innovartis.co.uk/), but I can't find updated information about this tool (only very old info), and there is very little on the internet about it and how to use it (is it still in use?).
I would like to use a mix of DBGhost and DevOps: DBGhost for scripting the source, building, comparing, synchronizing, creating delta scripts, and upgrading; and DevOps to make the releases (which would call the builds created by DBGhost).
If you have any ideas using this or other methods I'd be grateful, because currently all releases are manual, which is not advisable.

We have this configured in our environment using just DevOps. Our database is in a Visual Studio database project. The MSBuild task builds the project and generates a DACPAC file as an artifact, and the release uses the "SQL Server Database Deploy" task to deploy it to the database. The deploy task needs an account with enough privileges to create the database, logins, etc., but it takes care of performing the schema compare, generating the delta scripts, and executing them. If your deploy will make changes that could result in data loss, such as removing columns, you will need to pass the additional argument /p:BlockOnPossibleDataLoss=false to the deploy task. This flag is not recommended unless you know there will be changes that cause data loss; without it, any deploy that would result in data loss will fail.
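For reference, the deploy task essentially wraps SqlPackage.exe for DACPAC deployments. A minimal hand-run equivalent looks roughly like this (server, database, and file names are placeholders):

REM Publish the dacpac; the data-loss flag mirrors the task argument above.
SqlPackage.exe /Action:Publish ^
    /SourceFile:"MyDatabase.dacpac" ^
    /TargetServerName:"sql-dev.example.com" ^
    /TargetDatabaseName:"MyDatabase" ^
    /p:BlockOnPossibleDataLoss=false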

Deploy SQL Server database on multiple servers using DevOps (a Mac as a development laptop)

Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project: each tenant will use the same web application (multiple instances behind a load balancer), BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server instance can host several databases).
FYI, the application and SQL Server are hosted on AWS, and our CI/CD is Azure DevOps.
And the last (but not least) limitation: we are working in VS Code only (Mac & Linux laptops).
So, we looked at some solutions:
Using database projects (.sqlproj) + DACPAC generation deployed using DevOps, but that's not available in VS Code
Using migrations: doesn't work with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a SQL script that covers every possible case
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL scripts for each deployment and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, committed to your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
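As a sketch of the EF route, EF Core can emit a single idempotent upgrade script from the migrations in a project (the project path and output name here are illustrative):

REM Generate one idempotent script covering all migrations; safe to run
REM against a database at any migration level.
dotnet ef migrations script --idempotent --project src/MyApp.Data --output artifacts/upgrade.sql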
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script and go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
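For the SSDT route, a sketch of generating (rather than applying) the change script with SqlPackage, assuming the project has already been built to a dacpac (names are placeholders):

REM Diff the built dacpac against the target and write the change script
REM out for review instead of executing it.
SqlPackage.exe /Action:Script ^
    /SourceFile:"MyDatabase.dacpac" ^
    /TargetServerName:"prod-sql.example.com" ^
    /TargetDatabaseName:"MyDatabase" ^
    /OutputPath:"artifacts/changes.sql"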
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
And preparing, reviewing, testing, and editing the database change scripts is not something that every dev on the team needs to do. So you can always consider jumping onto a Windows VM for that task.
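On the multi-tenant aspect: once you have a single reviewed change script, fanning it out to every tenant database is just a loop. A minimal sketch in a .bat file, assuming a tenants.txt with one "server,database" pair per line (that file and its format are assumptions):

REM Apply the reviewed change script to every tenant database listed
REM in tenants.txt; -b makes sqlcmd stop on the first error.
for /f "tokens=1,2 delims=," %%s in (tenants.txt) do (
    sqlcmd -S %%s -d %%t -b -i upgrade.sql
)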

Deploying a database package to SQL Server through Octopus & TeamCity

I am implementing CI/CD for a SQL Server database through Redgate software and TeamCity. I managed to build and push the NuGet database package to Octopus, and I can see the package in the Library section of Octopus. But I am facing issues deploying that package to SQL Server: I can't find the built-in step template "Deploy a NuGet package" in the Octopus process section. I have also tried the "Deploy a package" step template, but it didn't work. I am following this guide:
https://documentation.red-gate.com/sr1/worked-examples/deploying-a-database-package-using-octopus-deploy-step-templates
Any help will be highly appreciated.
Good question. To use Redgate's tooling with Octopus Deploy you will need to install the step templates they provide. I recommend "Redgate - Create Database Release" and "Redgate - Deploy from Database Release". While browsing the step templates you might notice one that deploys directly from a package. The state-based functionality of SQL Change Automation works by comparing the state of the database stored in the NuGet package with the destination database. Each time it runs, it creates a new set of delta scripts to apply. Because of that, the recommended process is:
Download the database package onto the jump box.
Create the delta script by comparing the package on the jump box with the database on SQL Server.
Review the delta script (can be skipped in dev and test).
Run the script on SQL Server using the tentacle on the jump box.
Let's walk through each one. The "Download a package" step is very straightforward: no custom settings aside from picking the package name.
The Redgate - Create Database Release step is a little more interesting. This is the step which generates the actual delta script that will be run on the database. What trips up most people is the Export Path. The export path is where the delta script will be exported to. This needs to be a directory outside of the Octopus Deploy tentacle folder. This is because the "Redgate - Deploy from Database Release" step needs access to that path and the Tentacle folder will be different for each step.
What I like to do is use a project variable.
The full value of the variable is:
C:\RedGate\#{Octopus.Project.Name}\#{Octopus.Release.Number}\Database\Export
The next step is approving the database release. I recommend creating a custom team to be responsible for this. My preference is to skip this step in Dev and QA.
The create database release step makes use of the artifact functionality built into Octopus Deploy. This allows the approver to download the files and review them.
The final step is deploying the database release. This step takes the delta script in the export data path and runs it on the target server. This is why I recommend putting the export path in a variable.
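For intuition only, the deploy step amounts to executing the exported delta against the target from the jump box; conceptually something like the following (this is not what the step template literally runs, and the expanded path and delta file name are placeholders):

REM Run the exported delta script against the target database.
sqlcmd -S sql-prod.example.com -d MyDatabase -b -i "C:\RedGate\MyProject\1.0.1\Database\Export\Delta.sql"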
Some other general items to help get going. First, don't install Tentacles directly onto SQL Server instances. In production, the typical SQL Server setup is a cluster, or multiple nodes with Always On high availability. Access to SQL Server is handled via a virtual IP.
If you were to install tentacles on both nodes, Octopus Deploy would attempt to run the change script on both nodes at the same time (by default). That will cause a lot of drama. I recommend using a jump box because you will need something to sit between Octopus Deploy and SQL Server. When you get comfortable with that I'd recommend using workers (but that is a bit of scope creep, so I won't cover that).
If you would like to know more on how to wire this up, check out the blog post I wrote (and copied from for this answer) here.
I also have written an entire series on database deployments with Octopus Deploy, which you can find here.
Finally, our documentation covers jump boxes and permissions you will need for the user doing the database deployments.
Hope that helps!

Azure continuous deployment from GitHub and database upgrades

I have a Web application that I usually deploy using Web Deploy directly from Visual Studio (whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler, I will be configuring both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config files: Web.Primary.config and Web.Secondary.config and configured app settings on Azure portal of each web app by adding additional value SCM_BUILD_ARGS and set them to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will transform correct config file with specific external services' configurations (DB connection, mail server, etc.).
Now, the additional step that I would like to include in continuous deployment is to run a set of SQL scripts from my repo that I have used to manually upgrade the database during Web Deploy in VS. The individual scripts each perform a specific database upgrade step:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns without defaults into the new model; I will somehow have to mitigate this by adding an additional script that adds the missing columns to the backup tables and gives them defaults, but that's a completely different issue.
I used to also have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against specific database instance and executed these scripts in predefined order (as above). Hence I had batches:
Recreate Local.bat - this one used additional SQL scripts to not restore from backup but rather to recreate an empty DB with only lookup tables being populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore database from backup tables discarding any invalid data I may have created while debugging/testing since last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
So, to automate the deployment process I have been doing manually in VS, I would now like to also execute these scripts against the appropriate Azure SQL DB during continuous deployment. I suppose I should run them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I've been looking for resources on the web but can't find any relevant information on adding deployment steps that execute these scripts. I'd think this is an everyday scenario, as it's hard to imagine web apps not requiring databases these days.
Maybe it's just my process that is wrong for DB upgrade/deployment so let me also know if there is any other normal way that does DB upgrade/migration with continuous deployment on Azure... I may change my process to accommodate for this.
Note 1: I'm not using Entity Framework or any other full blown ORM. I'm rather using NPoco and all my DB logic is built in SPs that DAL is using.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it this way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.)
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
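A sketch of that publish call against an Azure SQL database (server, user, and file names are placeholders):

REM Publish the built dacpac straight to the Azure SQL database.
SqlPackage.exe /Action:Publish ^
    /SourceFile:"bin\Release\MyDB.dacpac" ^
    /TargetServerName:"myserver.database.windows.net" ^
    /TargetDatabaseName:"MyDB" ^
    /TargetUser:"deployuser" /TargetPassword:"%SQL_PASSWORD%"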
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?

Deploying MSSQL change scripts

So I am in the throes of developing our continuous integration practices. We are a .NET/MSSQL shop, and we will all soon be on VS2012. We have settled on CruiseControl.NET as our CI server, using msbuild to compile our projects. We use SVN for source control (possibly switching to Git later, but that's another discussion). I'm leaning towards using InstallShield to deploy code packages (usually web apps and/or batch executables) to our QA and production servers. (CCNet would build these MSIs as part of our CI.) We are also starting to include unit testing in our projects, and will use NUnit integrated with CCNet to run the tests automatically upon check-in.
So far this works for our standard web app/exe development. Where it does not fit in (yet) is with our MSSQL change management, or lack thereof. It's been pretty cowboy how we've done this. Some folks have used Migrator.Net. Others just do a SQL Compare with Redgate and generate a script. Still others have hand-written sql scripts. It may or may not be in SVN. "Source control" at the db level is basically "we have backups of our databases." Boo, hiss. Needless to say that if we want some consistency with our CI and with our deployments, we need to settle on something. So far I am leaning towards using VS SQL projects to handle the change management and deployment.
Note: we (developers) are not supposed to push changes. Sys admins do that. So we can't run anything to deploy code or sql.
So, 2 problems to solve (I think):
What "technique" to use so that our CI server blows away a CI version of the database so that unit tests can be tested against it. I've settled that VS2012 SQL projects can do that. CCNet can run msbuild against the db project, which recreates the database. This is fairly easy.
How to generate change scripts for our QA and prod environments? This one I'm stuck on.
VS can do a schema compare and then generate the sql script -- but it is dependent on sqlcmd. So our sys admins would have to run sqlcmd from the command prompt to deploy it... probably not ideal. Right?
I could run msbuild again to deploy... but I don't want the database re-created, I just want changes deployed.
So what are the options here? I need something self-contained for the admins to run -- and check-in to SVN. Should I make another msi for database deployments? Can CCNet/msbuild make some other kind of "deployment package" for database changes (not re-creation) where the sys admins can double-click and go?
How do you all handle this?
Thanks
Tom
Check out the SQL Server Data Tools package from the Microsoft site.
This will register a new SQL Server 2012 Database project type that contains the definitions for all of your database structures. Upon build, it generates a create script that you can use to deploy your database.
Then, for upgrading your database, use the SqlPackage.exe tool, pointing it at the build output and the target server/database name, to generate an Update.sql script.
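A sketch of that invocation, using /Action:Script so the output is an Update.sql the sys admins can review and run themselves (server and file names are placeholders):

REM Compare the built dacpac to the target database and write the
REM upgrade statements to Update.sql instead of executing them.
SqlPackage.exe /Action:Script ^
    /SourceFile:"MyDb.dacpac" ^
    /TargetServerName:"QA-SQL01" ^
    /TargetDatabaseName:"MyDb" ^
    /OutputPath:"Update.sql"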
Update: On the issue of how you're running unit tests, you could create supplemental test methods that invoke the create scripts by launching a process and passing the path to the output create.sql script, then have your tests 'tear down' the database using the same method but with a drop database statement.
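A minimal sketch of that setup/teardown from the command line (the scratch database name and script path are placeholders):

REM Setup: create a scratch database and run the generated create script.
sqlcmd -S . -Q "CREATE DATABASE UnitTestDb"
sqlcmd -S . -d UnitTestDb -b -i bin\Debug\create.sql
REM Teardown: kick out connections and drop the scratch database.
sqlcmd -S . -Q "ALTER DATABASE UnitTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE UnitTestDb"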

Proper structure of asp.net website and database in visual studio

My main problem is: where does the database go?
The project will be on SVN and is developed using ASP.NET MVC with the repository pattern. Where do I put the SQL Server database (mdf file)? If I put it in App_Data, then my other team mates can check out the source and database and run it with the database deployed in the VS instance.
The problem with this method are:
I cannot use SQL Management Studio with this database.
Most web hosts require me to deploy the database using their UI or SQL Management Studio; putting it in App_Data makes no sense there.
The connection string has to be edited each time I move from testing locally to testing on the web host.
If I create the database using SQL Management studio, my problems are:
How do I keep this consistent with the source control (team mates have to re-script the db if the schema changes).
Connection string again. (I'd like to automatically use the string when on production server).
Is there a solution to all my problems above? Maybe some form of patterns of tools that I am missing?
Basically your two points are correct - unless you're working off a central database, everyone will have to update their database when changes are made by someone else. If you're working off a central database, you can also get into issues where a database change is made (i.e. a column dropped) and the corresponding source code isn't checked in. Then you're all dead in the water until the source code is checked in, or the database is rolled back. Using a central database also means developers have no control over when database schema changes are pushed to them.
We have the database installed on each developer's machine (especially good since we target different DBs, each developer has one of the supported databases giving us really good cross platform testing as we go).
Then there is the central 'development' database which the 'development' environment points to. It is built by continuous integration on each check-in, and upon a successful build/test it is published to development.
Changes that developers make to the database schema on their local machine need to be checked into source control. They are database upgrade scripts that make the required changes to the database from version X to version Y. The database is versioned. When a customer upgrades, these database scripts are run on their database to bring it up from their current version to the required version they're installing.
These dbpatch files are stored in the following structure:
./dbpatches
    ./23
        ./common
            ./CONV-2345.dbpatch
        ./pgsql
            ./CONV-2323.dbpatch
        ./oracle
            ./CONV-2323.dbpatch
        ./mssql
            ./CONV-2323.dbpatch
In the above tree, version 23 has one common dbpatch that is run on any database (is ANSI SQL), and a specific dbpatch for the three databases that require vendor specific SQL.
We have a database update script that developers can run which runs any dbpatch that hasn't been run on their development machine yet (irrespective of version - since multiple dbpatches may be committed to source control during a single version's development).
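A minimal sketch of such a runner as a batch file, assuming an applied_patches tracking table in the target database (the table, its layout, and the database name are assumptions; the real runner would also apply only common/ plus the matching vendor folder rather than everything):

setlocal enabledelayedexpansion
REM Walk every dbpatch under .\dbpatches, apply the ones not yet
REM recorded in the assumed applied_patches table, then record them.
for /r dbpatches %%f in (*.dbpatch) do (
    for /f "usebackq" %%c in (`sqlcmd -S . -d MyDb -h -1 -W -Q "SET NOCOUNT ON; SELECT COUNT(*) FROM applied_patches WHERE name = '%%~nxf'"`) do set CNT=%%c
    if "!CNT!"=="0" (
        sqlcmd -S . -d MyDb -b -i "%%f" || exit /b 1
        sqlcmd -S . -d MyDb -b -Q "INSERT INTO applied_patches (name) VALUES ('%%~nxf')"
    )
)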
Connection strings are maintained in NHibernate.config; if present, NHibernate.User.config is used instead, and NHibernate.User.config is excluded from source control. Each developer has their own NHibernate.User.config, which points to their local database, sets the appropriate dialect, etc.
When being pushed to development we have a NAnt script which does variable substitution in the config templates for us. This same script is used when going to staging as well as when doing packages for release. The NAnt script populates a templates config file with variable values from the environment's settings file.
Use Management Studio or Visual Studio's Server Explorer. App_Data isn't used much "in the real world".
This is always a problem. Use a tool like SqlCompare from Redgate or the built in Database Compare tools of Visual Studio 2010.
Use Web.Config transformations to automatically update the connection string.
I'm not an expert by any means but here's what my partner and I did for our most recent ASP.NET MVC project:
Connection strings were always the same since we were both running SQL Server Express on our development machines, as were our staging and production servers. You can just use a dot instead of the computer name (eg. ".\SQLEXPRESS" or ".\SQL_Named_Instance").
Alternatively you could also use web.config transformations for deploying to different machines.
As far as the database itself, we just created a "Database Updates" folder in the SVN repository and added new SQL scripts when updates needed to be made. I always thought it was a good idea to have an organized collection of database change scripts anyway.
A common solution to this type of problem is to have the database versioning handled in code rather than storing the database itself in version control. The code is typically executed on app_start but could be triggered in other ways (build/deploy process). Then developers can run their own local databases or use a shared development database. The common term for this is called database migrations (migrating from one version to the next). Here is a stackoverflow question for .net tools/libraries to make this easier: https://stackoverflow.com/questions/8033/database-migration-library-for-net
This is the only way I would handle this on projects with multiple developers. I've used this successfully with teams of over 50 developers and it's worked great.
The Red Gate solution would be to use SQL Source Control, which integrates into SSMS. It maintains a SQL scripts folder structure in source control, which you can keep in the same folder/repository as your app code.
http://www.red-gate.com/products/SQL_Source_Control/
