How to have the same database state between distributed Vagrant boxes?

Scenario:
I built an original Vagrant box, which was installed on several computers. Git tracks changes to the code, but when someone locally adds data to the database (say, MySQL) or changes the schema, I'd like to somehow automate the task of keeping all Vagrant boxes updated with those database changes.
The same question goes for installing new packages/software locally on a Vagrant box and then distributing those changes to the other installations. I did this with a custom shell script that executes on vagrant up, but it seems like the wrong approach for the database (dump, drop, import).
Can this be done with a provisioning tool? All the Google results point to the initial building and provisioning of a box; I've had no luck finding out how to deal with local changes.
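For illustration, here is a minimal sketch of the dump-then-import step described above, written in Python rather than shell; the dump location, database name, and credentials are hypothetical. The idea is that a provisioner run on vagrant up re-imports a shared MySQL dump only when the dump has changed since the last import.

```python
#!/usr/bin/env python3
"""Hypothetical provisioning step run on `vagrant up`.

Assumes the shared dump lives in the synced project folder (/vagrant)
and that the mysql client is installed inside the box.
"""
import hashlib
import pathlib
import subprocess

DUMP = pathlib.Path("/vagrant/db/seed.sql")        # dump shared with the host
STAMP = pathlib.Path("/var/lib/.last_db_import")   # remembers what was imported last

def main():
    digest = hashlib.sha256(DUMP.read_bytes()).hexdigest()
    if STAMP.exists() and STAMP.read_text() == digest:
        print("Database already up to date; skipping import.")
        return
    # Drop-and-import semantics: mysqldump includes DROP TABLE statements by
    # default, so every box ends up with the same state as the shared dump.
    with DUMP.open("rb") as sql:
        subprocess.run(
            ["mysql", "-u", "root", "--password=vagrant", "myapp"],
            stdin=sql,
            check=True,
        )
    STAMP.write_text(digest)
    print(f"Database re-imported from {DUMP}")

if __name__ == "__main__":
    main()
```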

Related

Automating Database Deployment with TeamCity

We are currently using TeamCity and I am wondering if it is possible to have it handle our database process. Here is what I am trying to accomplish.
User runs a build
TeamCity remotes into the database server (or tells a program to via the command line)
SQL script is run that updates record(s)
Copies the MDF/LDF back to TeamCity for manipulation in the build
Alternatively, it could work like this if it is easier:
User logs in to the database server and runs a batch file which does the following:
SQL script is run that updates record(s)
MDF/LDF is copied and then uploaded to the repository
Build process is called through a web hook with a parameter
I can't seem to find anything that even gets me started. Any help getting pointed in the right direction would be helpful.
From your description above I am going to guess you are trying to make a copy of a shared (development) database, which you then want to modify and run tests against on the CI server.
There is nothing to stop you doing what you describe with TeamCity (as it can run any arbitrary code as a build step) but it is obviously a bit clunky and it provides no specific support for what you are trying to do.
Some alternative approaches:
Consider connecting directly to your shared database, but place all your operations within a transaction so you can discard all changes. If your database offers the capability, consider database snapshots.
Deploy a completely new database on the CI server when you need one. Automate the schema deployment and populate it with test data. Use a lightweight database such as SQL Server LocalDB or SQLite.
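As a minimal sketch of that second option, assuming SQLite and an invented one-table schema, a CI step could build a throwaway database with schema and test data before the tests run:

```python
"""Sketch of "deploy a completely new database per build", using SQLite."""
import os
import sqlite3
import tempfile

# Invented schema and test data, standing in for the real deployment scripts.
SCHEMA = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);"
TEST_ROWS = [("Alice",), ("Bob",)]

def build_test_database(path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)                                             # deploy the schema
    conn.executemany("INSERT INTO customers(name) VALUES (?)", TEST_ROWS)  # load test data
    conn.commit()
    return conn

if __name__ == "__main__":
    db_file = os.path.join(tempfile.mkdtemp(), "ci_test.db")   # throwaway per build
    conn = build_test_database(db_file)
    print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0], "test rows loaded")
    conn.close()
```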

Building Deployment Azure With Database Migration Scripts

We are very new to Azure. We have a large existing website (multiple instances for different customers) with an associated Windows service and a SQL Server 2008 database.
We are in the process of migrating the website in development to Azure. We are creating an Azure worker role to wrap the code that the Windows service executes every hour, but my main concern is deployment. In an ideal world we would automate the deployment, but for now we want to reach the point where we can build a single deployment package for a customer that runs the database migration scripts against the Azure SQL database and deploys the web role and site, while a holding page is displayed to let users know the site is currently being updated. It would also be great if there were a way to roll back if something goes wrong.
Despite my research I cannot find an answer for the process above. Everything seems to involve deploying the database changes (which will include custom scripts for data migration) and then publishing the site to Azure from Visual Studio. Since we want to deploy this multiple times, it would be ideal to build a package that we run against each customer's site/database when we or they are ready to migrate to the next version, as this may not always happen at the same time.
Our current deployment strategy is a mess: we stop the application in IIS and start another one that shows a holding page saying the site is being updated. We then stop the Windows service and manually copy the latest version over the existing website and Windows service. We then run a SQL script containing any database changes, and finally restart the Windows service and the IIS application.
This is not sustainable, but in my research I have not come across a better option. I have looked at MSBuild, but it is completely new to me, and finding somewhere to start with such unfamiliar technology is proving hard. I also looked at FluentMigrator and considered running our database changes from Application_Start, but I am not sure I am comfortable with that.
As we are looking to move to Azure, where we will have the site running with five different configurations, we are currently looking at publishing profiles and the hosted build controller, but I cannot figure out how to deploy the database alongside these changes.
I would appreciate some insight into how others handle deployments like this, taking into account that we are using all the latest tech (VS2013, VSO, Azure, etc.) and that we need to be able to change the configuration for different customers. I am assuming publishing profiles are the best way to do that, but I may be wrong.

Cannot attach database after syncing with git

I have a problem attaching a MS SQL Server database after syncing with git. My steps are:
1. I use Dropbox to keep the bare repository instead of GitHub.
2. The working repository is stored on the C drive. This repository contains my code and my database (MS SQL Server 2008). I commit and push changes to the bare repository (in Dropbox).
3. On the second computer, I clone that project from Dropbox (the bare repository).
4. At first, the second computer can attach the database. It then edits the database, detaches it, and pushes the changes to the bare repository in Dropbox.
5. The problem occurs here: the first computer cannot pull the changes from the bare repository. It shows "permission denied". I use SmartGit as a GUI for git. Besides, when I do these steps again, the other problem is that the first computer can pull the changes but can no longer attach the database.
I guess that when the second computer edits the database, it applies its own permissions to the database files, which is why the first one cannot pull the changes or attach the database.
The purpose of the above steps is to keep my database in Dropbox using git so that I can work with the database from any computer without copying it. Before, I used Dropbox alone to do this, but it cannot sync the database, which is why I thought of using git. I do not need git to version-control my database; I just want to keep it in one place for portable working.
If you think the above method is not practical, could you suggest a way of doing this? Thank you so much in advance.
Update:
Problem is solved.
Resolution: two ways:
- Generate a script from MS SQL Server using msdeploy and version-control that script (I'm using this method); see the sketch after this list.
- Use a data-tier application (DAC), but I'm new to this and will play with it later.
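A rough sketch of that first option, assuming the generated script is committed as db/MyDatabase.sql (a made-up path) and that the sqlcmd tool and a local SQL Server instance are available: each machine rebuilds its local database from the versioned script instead of sharing MDF/LDF files.

```python
"""Rebuild a local database from the version-controlled script (sketch only)."""
import subprocess

SCRIPT = "db/MyDatabase.sql"   # script generated from SQL Server and committed to git
SERVER = r".\SQLEXPRESS"       # local instance name is an assumption

# Drop and recreate the database, then replay the versioned script into it.
subprocess.run(
    ["sqlcmd", "-S", SERVER, "-Q",
     "IF DB_ID('MyDatabase') IS NOT NULL DROP DATABASE MyDatabase; CREATE DATABASE MyDatabase;"],
    check=True,
)
subprocess.run(["sqlcmd", "-S", SERVER, "-d", "MyDatabase", "-i", SCRIPT], check=True)
```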

Build Action of Database

I have a WPF application. My question is: what should the build action be for the database in my case? I have a database, but every time I build my solution the database entries are removed and a new database is created. What build action should I set for my database to overcome this problem?
How come you remove all of your database data? What happens to your data then? Would that even be possible in a live system?
To overcome this problem, you should keep track of your DB changes as DB scripts. This means that for any DB change, you should keep its relevant script somewhere in your source control. The scripts should be traceable: if you want to migrate from release A to release D, you should know which scripts need to be executed for the DB migration.
Here is the solution we use in our project.
Each developer keeps a script in source control for any DB change.
The scripts are named after SVN revision numbers, e.g. 2835.sql.
While building the application, all scripts are copied into the application installer. By comparing the currently installed version with the new version, the installer knows which scripts need to be executed.
This way, the migration becomes an easy process (a rough sketch follows below).
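Here is an illustrative sketch of that numbered-script scheme; the migrations/ folder, server name, and sqlcmd invocation are assumptions rather than the actual installer from the answer. Only scripts whose revision number is greater than the installed version (and no greater than the target version) are executed, in order.

```python
"""Apply revision-numbered migration scripts (e.g. 2835.sql) in order."""
import pathlib
import subprocess

MIGRATIONS = pathlib.Path("migrations")   # folder shipped with the installer

def pending_scripts(installed_version: int, target_version: int):
    # Sort by the numeric file stem so 2835.sql runs before 2901.sql.
    scripts = sorted(MIGRATIONS.glob("*.sql"), key=lambda p: int(p.stem))
    return [p for p in scripts if installed_version < int(p.stem) <= target_version]

def upgrade(installed_version: int, target_version: int) -> None:
    for script in pending_scripts(installed_version, target_version):
        print("applying", script.name)
        subprocess.run(
            ["sqlcmd", "-S", r".\SQLEXPRESS", "-d", "MyApp", "-i", str(script)],
            check=True,
        )

if __name__ == "__main__":
    upgrade(installed_version=2835, target_version=2901)
```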

Managing SQL Server database version control in large teams

For the last few years I was the only developer that handled the databases we created for our web projects. That meant that I got full control of version management. I can't keep up with doing all the database work anymore and I want to bring some other developers into the cycle.
We use Tortoise SVN and store all repositories on a dedicated server in-house. Some clients require us not to have their real data on our office servers so we only keep scripts that can generate the structure of their database along with scripts to create useful fake data. Other times our clients want us to have their most up to date information on our development machines.
So what workflow do larger development teams use to handle version management and sharing of databases? Most developers prefer to deploy the database to an instance of SQL Server on their development machine. Should we:
Keep the scripts for each database in SVN and make developers export new scripts if they make even minor changes
Detach databases after changes have been made and commit MDF file to SVN
Put all development copies on a server on the in-house network and force developers to connect via remote desktop to make modifications
Some other option I haven't thought of
Never have an MDF file in the development source tree. MDFs are a result of deploying an application, not part of the application sources. Thinking of the database file as development source is a shortcut to hell.
All the development deliverables should be scripts that deploy or upgrade the database. Any change, no matter how small, takes the form of a script. Some recommend using diff tools, but I think they are a rat hole. I champion versioning the database metadata and having scripts that upgrade from version N to version N+1. At deployment the application can check the currently deployed version, and it then runs all the upgrade scripts that bring the version up to current. There is no script that deploys the current version directly: a new deployment first deploys v0 of the database and then goes through all the version upgrades, including dropping objects that are no longer used. While this may sound a bit extreme, it is exactly how SQL Server itself keeps track of the various changes occurring in the database between releases.
As simple text scripts, all the database upgrade scripts are stored in version control just like any other sources, with tracking of changes, diff-ing and check-in reviews.
For a more detailed discussion and some examples, see Version Control and your Database.
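As a toy illustration of that v0-to-current upgrade scheme, here is a sketch that stores the schema version in SQLite's built-in user_version pragma; a SQL Server implementation would keep the version in its own metadata table, and the tables and upgrade steps here are invented for the example.

```python
"""Upgrade a database from whatever version it is at, one step at a time."""
import sqlite3

# Each entry upgrades the schema from version N to N + 1 (v0 is an empty database).
UPGRADES = {
    0: "CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);",
    1: "ALTER TABLE orders ADD COLUMN placed_at TEXT;",
    2: "CREATE INDEX ix_orders_placed_at ON orders(placed_at);",
}

def upgrade(conn: sqlite3.Connection) -> int:
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    while current in UPGRADES:                   # replay every step up to the latest
        conn.executescript(UPGRADES[current])
        current += 1
        conn.execute(f"PRAGMA user_version = {current}")
    conn.commit()
    return current

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")           # a fresh deployment starts at v0
    print("database now at schema version", upgrade(conn))
```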
Option (1). Each developer can have their own up-to-date local copy of the DB ("up to date" meaning recreated from the latest version-controlled scripts: base + incremental changes + base data + run data). In order to make this work, you should have the ability to 'one-click' deploy any database locally.
You really cannot go wrong with a tool like Visual Studio Database Edition. This is a version of VS that manages database schemas and much more, including deployments (updates) to target server(s).
VSDE integrates with TFS so all your database schema is under TFS version control. This becomes the "source of truth" for your schema management.
Typically developers will work against a local development database, and keep its schema up to date by synchronizing it with the schema in the VSDE project. Then, when the developer is satisfied with his/her changes, they are checked into TFS, and a build and then deployment can be done.
VSDE also supports refactoring, schema compares, data compares, test data generation and more. It's a great tool, and we use it to manage our schemas.
In a previous company (which used Agile in monthly iterations), .sql files were checked into version control, and an optional part of the full build process was to rebuild the database from production and then apply each .sql file in order.
At the end of the iteration, the .sql instructions were merged into the script that creates the production build of the database, and the script files were moved out. So you're only applying updates from the current iteration, not going back to the beginning of the project.
Have you looked at a product called DB Ghost? I have not personally used it, but it looks comprehensive and may offer an alternative for point 4 in your question.

Resources