Strategy to auto-update database on new app version deploy

We have a Java EE web application built with Maven, using JSF 2.2, Tomcat 7 as our server, and MySQL 5.5 as our database. As we develop new features, we sometimes need to change our database structure. At the moment we have to do all of this manually:
Wait until we have no clients online (around midnight)
Go to Tomcat manager
Undeploy context
Deploy new context
Go to phpMyAdmin and execute the SQL scripts
While our application is still "small", this process is viable, but we are looking to automate it. We already know about Jenkins, which can read our Git repository, build the .war using Maven, and (we are not sure yet) deploy it to Tomcat.
But I am not sure how to automate the execution of our SQL scripts when we deploy a new version. It needs to be robust, so that it does not mess up our database by, for example, running the same script twice.
My question is whether there is a better deployment process, focused on database changes, that can help here.

There are solutions for this out there. One of them is called Liquibase.

You can use Liquibase to apply incremental database changes, together with Jenkins to automate the build process.

To add to the previous answer about Liquibase: you could use Flyway too.
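
Both tools address the "don't run it twice" worry from the question: each applied change is recorded in a tracking table inside the database itself, and anything already recorded is skipped on the next run. As a minimal sketch of what a migration looks like, here is a changeset in Liquibase's "formatted SQL" style (the table and column names are hypothetical):

    --liquibase formatted sql

    --changeset yourname:add-customer-phone
    ALTER TABLE customer ADD phone VARCHAR(20);
    --rollback ALTER TABLE customer DROP COLUMN phone;

Liquibase records the changeset id in its DATABASECHANGELOG table the first time it runs, so redeploying the same version is a no-op. Flyway achieves the same thing with versioned files named like V2__add_customer_phone.sql, tracked in its schema history table. Both ship Maven plugins, so Jenkins can run the migrations as part of the existing build.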

Related

Deploying a database package to SQL Server through Octopus & TeamCity

I am implementing CI/CD for a SQL Server database through Redgate software and TeamCity. I managed to build and push the NuGet database package to Octopus, and I can see the package in the Library section of Octopus. But I am facing issues deploying that package to SQL Server. I can't find the built-in step template "Deploy a NuGet package" in the Octopus process section. I have also tried the "Deploy a package" step template, but it didn't work. I am following this guide:
https://documentation.red-gate.com/sr1/worked-examples/deploying-a-database-package-using-octopus-deploy-step-templates
Any help will be highly appreciated.
Good question. To use Redgate's tooling with Octopus Deploy you will need to install the step templates they provide. I recommend "Redgate - Create Database Release" and "Redgate - Deploy from Database Release". When you are browsing the step templates you might notice one that deploys directly from a package. The state-based functionality for SQL Change Automation works by comparing the state of the database stored in the NuGet package with the destination database. Each time it runs, it creates a new set of delta scripts to apply. Because of that, the recommended process is:
Download the database package onto the jump box.
Create the delta script by comparing the package on the jump box with the database on SQL Server.
Review the delta script (can be skipped in dev and test).
Run the script on SQL Server using the tentacle on the jump box.
Let's walk through each one. The "Download a package" step is very straightforward: there are no custom settings aside from picking the package name.
The "Redgate - Create Database Release" step is a little more interesting. This is the step that generates the actual delta script which will be run on the database. What trips up most people is the Export Path: this is where the delta script will be exported to. It needs to be a directory outside of the Octopus Deploy Tentacle folder, because the "Redgate - Deploy from Database Release" step needs access to that path, and the Tentacle's work folder is different for each step.
What I like to do is use a project variable.
The full value of the variable is:
C:\RedGate\#{Octopus.Project.Name}\#{Octopus.Release.Number}\Database\Export
The next step is approving the database release. I recommend creating a custom team to be responsible for this. My preference is to skip this step in Dev and QA.
The create database release step makes use of the artifact functionality built into Octopus Deploy. This allows the approver to download the files and review them.
The final step is deploying the database release. This step takes the delta script from the export path and runs it on the target server. This is why I recommend putting the export path in a variable.
Some other general items to help you get going. First, don't install Tentacles directly onto SQL Server instances. In production, the typical SQL Server setup is a cluster, or multiple nodes with AlwaysOn high availability; access to SQL Server is handled via a virtual IP.
If you were to install tentacles on both nodes, Octopus Deploy would attempt to run the change script on both nodes at the same time (by default). That will cause a lot of drama. I recommend using a jump box because you will need something to sit between Octopus Deploy and SQL Server. When you get comfortable with that I'd recommend using workers (but that is a bit of scope creep, so I won't cover that).
If you would like to know more on how to wire this up, check out the blog post I wrote (and copied from for this answer) here.
I also have written an entire series on database deployments with Octopus Deploy, which you can find here.
Finally, our documentation covers jump boxes and permissions you will need for the user doing the database deployments.
Hope that helps!

Building an Azure Deployment With Database Migration Scripts

We are very new to Azure. We have a large existing website (multiple instances with different customers) with an associated Windows service and a SQL Server 2008 database.
We are in the process of migrating the website to Azure in development. We are creating an Azure worker role to wrap the code that the Windows service currently executes every hour, but my main concern is deployment. In an ideal world we would automate the deployment, but for now we want to get to the point where we can build a single deployment package for a customer that runs the database migration scripts against the Azure SQL database and migrates the web role and site, while a holding page is displayed to let users know the site is being updated. It would also be great if there were a way to roll back if something goes wrong.
Despite my research, I cannot find an answer for the process above. Everything seems to involve deploying the database changes (which will include custom scripts for data migration) and then publishing the site to Azure from Visual Studio. But since we want to deploy this multiple times, it would be ideal to build a package that we run against each customer site/database when we/they are ready to migrate to the next version, as this may not always happen at the same time.
Our current deployment strategy is a mess. We stop the application in IIS and then start another one which shows a holding page saying the site is being updated. We then stop the Windows service and manually copy the latest version over the existing website and Windows service. We then run a SQL script containing the database changes, and restart the Windows service and the IIS application.
This is not sustainable, but in my research I have not come across a better option going forward. I have looked at MSBuild, but it is completely new to me, and finding somewhere to start with such unfamiliar technology is proving hard. I also looked at FluentMigrator and considered running our database changes from application_start, but I am not sure I am comfortable with that.
As we are looking to move to Azure, where the site will run with 5 different configurations, we are currently looking at publishing profiles and the hosted build controller, but I cannot figure out how to deploy the database changes with these.
I would appreciate some insight into how others handle deployments like this, taking into account that we are using all the latest tech (VS2013, VSO, Azure, etc.) and that we need to be able to change the configuration for different customers. I am assuming publishing profiles are the best way to do that, but I may be wrong.

Grails database migration changelog "rebase"

Is there an easy way to do a "git rebase"-like operation for Grails Database Migration plugin changelog scripts?
I already have several changelog scripts on top of the initial changelog from an old domain model. Now I'm deploying the application to a new environment and there's no need to migrate the database contents.
I could delete the scripts and generate a fresh initial script from the current domain model, but then I'd have to install Grails in the old environment and execute dbm-clear-checksums there, right?
Is there an easier way to tell dbm that I don't want to create an old domain and patch it up to the current level?
Run the dbm-changelog-sync script - it marks everything as having been run.

Deploying MSSQL change scripts

So I am in the throes of developing our continuous integration practices. We are a .Net/MSSQL shop, and we will all soon be on VS2012. We have settled on CruiseControl.Net as our CI server, using msbuild to compile our projects. We use SVN for source control (possibly switching to Git later, but that's another discussion). I'm leaning towards using InstallShield to deploy code packages (usually web apps and/or batch executables) to our QA and production servers. (CCNet would build these MSIs as part of our CI.) We are also starting to include unit testing in our projects, and will use NUnit integrated with CCNet to run tests automatically upon check-in.
So far this works for our standard web app/exe development. Where it does not fit in (yet) is with our MSSQL change management, or lack thereof. It's been pretty cowboy how we've done this. Some folks have used Migrator.Net. Others just do a SQL Compare with Redgate and generate a script. Still others have hand-written SQL scripts, which may or may not be in SVN. "Source control" at the db level is basically "we have backups of our databases." Boo, hiss. Needless to say, if we want some consistency in our CI and our deployments, we need to settle on something. So far I am leaning towards using VS SQL projects to handle the change management and deployment.
Note: we (developers) are not supposed to push changes. Sys admins do that. So we can't run anything to deploy code or sql.
So, 2 problems to solve (I think):
What "technique" to use so that our CI server blows away a CI copy of the database so that unit tests can run against it. I've settled on VS2012 SQL projects for that: CCNet can run msbuild against the db project, which recreates the database. This is fairly easy.
How to generate change scripts for our QA and prod environments? This one I'm stuck on.
VS can do a schema compare and then generate the sql script -- but it is dependent on sqlcmd. So our sys admins would have to run sqlcmd from the command prompt to deploy it... probably not ideal. Right?
I could run msbuild again to deploy... but I don't want the database re-created, I just want changes deployed.
So what are the options here? I need something self-contained for the admins to run -- and check-in to SVN. Should I make another msi for database deployments? Can CCNet/msbuild make some other kind of "deployment package" for database changes (not re-creation) where the sys admins can double-click and go?
How do you all handle this?
Thanks
Tom
Check out the SQL Server Data Tools package from the Microsoft site.
This will register a new "SQL Server 2012 Database" project type to contain the definitions of all your database structures. Upon build, this will generate a create script that you can use to deploy your database.
Then, for upgrading your database, use the SqlPackage.exe tool with the build output and the target database server name to generate an Update.sql script.
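To make the distinction concrete: the generated Update.sql contains only the statements needed to bring the target database in line with the project, rather than a full create. A hypothetical excerpt (the object names are invented for illustration):

    -- Hypothetical excerpt of a generated Update.sql delta script
    ALTER TABLE [dbo].[Order] ADD [ShippedDate] DATETIME2 NULL;
    GO
    CREATE NONCLUSTERED INDEX [IX_Order_ShippedDate]
        ON [dbo].[Order] ([ShippedDate]);
    GO

Since it is a plain SQL file, it is something the sys admins can review and check into SVN before it is run.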
Update: On the issue of how you're running unit tests, you could create supplemental methods that invoke the create script by launching a process and passing the path to the output create.sql script, then have your tests 'tear down' the database using the same method but with a drop database statement.

What is the best website/web-app upgrade process?

We have a great process for upgrading our clients' websites as far as updating html/js code and assets is concerned (by using Subversion) that we are very happy with.
However, when it comes to upgrading databases, we are without any formal process.
If we add new tables/fields to our development database, when it comes to rolling it out to the production server we have to remember our changes and replicate them. We cannot simply copy the development database on top of the production database as client data would be lost (e.g. blog posts, account info etc).
We are also now in the process of building a web-app which is going to come across the same issues.
Does anyone have a solution that makes this process easier and less prone to error? How do big web-apps get round the problem?
Thanks.
I think that adding controls to the development process is paramount. At one of my past jobs, we had to script out all database changes. These scripts were then passed to the DBA with instructions on what environment to deploy them in. At the end of the day, you can implement technical solutions, but if the project is properly documented (IF!!!) then when it comes time for deployment, the developers should remember to migrate scripts, along with code files. My $.02
In my opinion, your code should always be able to create your database from scratch, and therefore it should handle upgrades too. It should check a field in the database to see what version the schema is at, and apply the upgrades needed to reach the latest version.
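A minimal sketch of that idea, with a hypothetical single-row version table (the table and column names are invented):

    -- Version table the application checks at startup
    CREATE TABLE schema_version (version INT NOT NULL);
    INSERT INTO schema_version (version) VALUES (1);

    -- Upgrade to version 2: run only when the stored version is below 2
    ALTER TABLE blog_post ADD COLUMN summary VARCHAR(255);
    UPDATE schema_version SET version = 2;

At startup the application reads the stored version and applies, in order, each upgrade block with a higher number, updating the row as it goes.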
I had some good luck with: http://anantgarg.com/2009/04/22/bulletproof-subversion-web-workflow/
The author has a database versioning workflow (with PHP script), which is decent.
Some frameworks have tools which deal with database upgrades. For example, Rails migrations are pretty nice.
If no convenient tool is available for your platform you could try scripting modifications to your development database.
In my company we use this model for some of our largest projects:
Say X is the version of our application that has just been deployed, and it is no different from the latest development version.
We create a new directory for the scripts, naming it, for example, "version X+1", and add it to the Subversion repository.
When a developer wants to modify the development database, he creates a .sql script with a name like "1 - does something.sql" that makes the modifications (the scripts must be indestructible, i.e. safe to run more than once; see the sketch below), saves it, and then runs it on the development database. He commits the web app code together with the SQL scripts. Each developer does the same, maintaining the order in which the scripts are executed.
When we need to deploy version X+1, we copy the X+1 web app code and the scripts to the production server, back up the database, run the SQL scripts one by one against the production database, and deploy the new web application code.
After that we open a new (X+2) SQL script directory and repeat the process...
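A minimal sketch of such an indestructible script, assuming MySQL and invented table/column names; the guard makes the script safe to run a second time:

    -- "1 - add customer phone.sql" (hypothetical name and schema)
    -- Only add the column if it is not already there.
    SET @col_exists := (SELECT COUNT(*) FROM information_schema.columns
                        WHERE table_schema = DATABASE()
                          AND table_name = 'customer'
                          AND column_name = 'phone');
    SET @ddl := IF(@col_exists = 0,
                   'ALTER TABLE customer ADD COLUMN phone VARCHAR(20)',
                   'SELECT ''column already exists''');
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;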
We basically have a similar approach to Senad's: we maintain a changes.sql file in our repo that developers put their changes in. When we deploy to production, we:
Run a test deployment to the QA server:
first reproduce the production environment (app & db) in the QA server
run changes.sql against the qa db
deploy the app to qa
run integration tests.
When we are sure the app runs fine in QA with the scripted changes to the db (i.e. nobody forgot to include their db changes in changes.sql, missed references, etc.), we:
backup the production database
run the scripts in the changes.sql file against the production db
deploy the app
clear the changes.sql file
All of the deployment is run through automated scripts, so we know we can reproduce it.
Hope this helps.
We have a migrations/ folder inside almost every project, containing so-called "up" and "down" scripts (SQL). Every developer is obliged to write his own up/down scripts and to verify them against the testing environment (a minimal example follows below).
There are other tools and frameworks for migrations, but we haven't had the time to test them...
Some are: DoctrineDB, Rails migrations, Propel (I think...); Capistrano can do it also.
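A minimal sketch of such a pair, with hypothetical file names and schema:

    -- migrations/0002_add_customer_phone.up.sql
    ALTER TABLE customer ADD COLUMN phone VARCHAR(20);

    -- migrations/0002_add_customer_phone.down.sql (reverses the change)
    ALTER TABLE customer DROP COLUMN phone;

The down script must exactly reverse its up counterpart; that is what makes it possible to roll the database back when a deploy goes wrong.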
