Repeatable Flyway Migration - sql-server

How do I achieve repeatable migration of SQL scripts to every database? I have a segment called API, and this needs to be deployed to all the existing databases in SQL Server.
Though I am able to repeatedly run/execute the set of scripts based on the naming convention, I am not able to run them on every database.
As of now, I have a data-system.json file where all the databases and segments are registered, and I am using this to run a particular segment against a single database.

I'm not 100% sure what you're asking, but in reference to the first part of your question:
How do I achieve repeatable migration of SQL scripts to every database?
If you want to run your Flyway scripts on multiple databases, you can use the 'migrate' command in the Flyway CLI, invoking it once per target database (https://flywaydb.org/documentation/command/migrate).
You can configure the environment specific info (e.g. login credentials) using environment variables (https://flywaydb.org/documentation/envvars).
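For example, a minimal sketch in PowerShell that loops over target databases and overrides the JDBC URL via an environment variable; the database names and script location are placeholders standing in for whatever your data-system.json registers:

# placeholder database list - in practice this would come from data-system.json
$databases = @("CustomerDb", "ReportingDb")
foreach ($db in $databases) {
    # FLYWAY_URL overrides flyway.url for this invocation
    $env:FLYWAY_URL = "jdbc:sqlserver://localhost:1433;databaseName=$db"
    flyway -locations="filesystem:sql/api" migrate
}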
Thanks

Related

Automating Database Deployment with TeamCity

We are currently using TeamCity and I am wondering if it is possible to have it handle our database process. Here is what I am trying to accomplish.
User runs a build
TeamCity remotes into database server (or tells a program to via command line)
SQL script is run that updates record(s)
Copies the MDF/LDF back to TeamCity for manipulation in the build
Alternatively it could work like this if this is easier
User logs in to database server and runs batch file which does the following:
SQL script is run that updates record(s)
MDF/LDF is copied and then uploaded to repository
Build process is called through web hook with parameter
I can't seem to find anything that even gets me started. Any help getting pointed in the right direction would be helpful.
From your description above I am going to guess you are trying to make a copy of a shared (development) database, which you then want to modify and run tests against on the CI server.
There is nothing to stop you doing what you describe with TeamCity (as it can run any arbitrary code as a build step) but it is obviously a bit clunky and it provides no specific support for what you are trying to do.
Some alternative approaches:
Consider connecting directly to your shared database, but place all your operations within a transaction so you can discard all changes. If your database offers the capability, consider database snapshots.
Deploy a completely new database on the CI server when you need one. Automate the schema deployment and populate it with test data. Use a lightweight database such as SQL Server LocalDB or SQLite.
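For the second alternative, a minimal sketch using SQL Server LocalDB from a CI build step might look like this (the instance and script names are placeholders):

# create and start a throwaway LocalDB instance for this build
sqllocaldb create CiTestDb -s
# deploy the schema and test data (script names are placeholders)
sqlcmd -S "(localdb)\CiTestDb" -i create-schema.sql
sqlcmd -S "(localdb)\CiTestDb" -i load-test-data.sql
# ... run the tests against (localdb)\CiTestDb here ...
# tear the instance down when the build finishes
sqllocaldb stop CiTestDb
sqllocaldb delete CiTestDb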

Azure continuous deployment from GitHub and database upgrades

I have a Web application that I usually deployed using Web Deploy directly from Visual Studio (whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler, I will be configuring both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config transform files, Web.Primary.config and Web.Secondary.config, and configured the app settings of each web app in the Azure portal by adding an additional value SCM_BUILD_ARGS, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which I understand will apply the correct config transform with the specific external services' configurations (DB connection, mail server, etc.).
Now the additional step that I would like to include in continuous deployment is to run a set of SQL scripts that I have in my repo, which I used to run manually to upgrade the database during Web Deploy in VS. The individual scripts each do a specific database upgrade step:
back up current tables - the backup creates a set of Backup_OriginalTableName tables that are copied from the existing ones and populated with the existing data
drop whole DB model - all non-backup objects are dropped: procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns to tables in the new model that don't have defaults defined on them; I will somehow have to mitigate this problem by adding an additional script that will add the missing columns to the backup tables and give them some defaults, but that's a completely different issue.
I used to also have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against specific database instance and executed these scripts in predefined order (as above). Hence I had batches:
Recreate Local.bat - this one used additional SQL scripts to not restore from backup but rather to recreate an empty DB with only lookup tables being populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore database from backup tables discarding any invalid data I may have created while debugging/testing since last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
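For reference, each upgrade batch boils down to a sequence of sqlcmd calls in the order listed above; the server, database, and script names here are placeholders:

sqlcmd -S myserver -d MyAppDb -i "01-backup-tables.sql"
sqlcmd -S myserver -d MyAppDb -i "02-drop-model.sql"
sqlcmd -S myserver -d MyAppDb -i "03-create-model.sql"
sqlcmd -S myserver -d MyAppDb -i "04-create-user-types.sql"
sqlcmd -S myserver -d MyAppDb -i "05-create-user-functions.sql"
sqlcmd -S myserver -d MyAppDb -i "06-create-stored-procedures.sql"
sqlcmd -S myserver -d MyAppDb -i "07-restore-data-from-backup.sql"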
So to support the whole deployment process I have been doing manually in VS, I would now also like to execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should be running them right after code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure this somewhere in the Azure portal? I was looking for resources on the Web, but I can't seem to find any relevant information on how to add deployment steps that execute these scripts. I think this is an everyday scenario, as it's hard to think of web apps not requiring databases these days.
Maybe it's just my process for DB upgrade/deployment that is wrong, so let me also know if there is any other normal way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full blown ORM. I'm rather using NPoco and all my DB logic is built in SPs that DAL is using.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it this way, as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.).
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?
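For illustration, a SqlPackage.exe publish against an Azure SQL DB looks roughly like this (server, database, and credentials are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:"MyDB.dacpac" /TargetServerName:"yourserver.database.windows.net" /TargetDatabaseName:"MyDB" /TargetUser:"youruser" /TargetPassword:"yourpassword"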

SQL Server Data Tools 2012 - deployment package for multiple databases

In our company we have a database solution that contains three SQL Server instances, each with different databases. Each instance has some jobs and replication.
As of now, we are maintaining creation and update scripts manually and executing them with bat files.
Our deployment package contains scripts for all objects including jobs and replication.
We want to automate our process to build and test deployment packages after every svn commit - continuous integration. We also have branches for every release; a release corresponds to a database version. Different clients have different releases/versions installed. We need to be able to create a deployment package for any branch.
Can we use SQL Server Data Tools 2012 for our needs? I have only seen tutorials for a single database, and I don't know how to use it in a more complex environment.
Optionally we could use Data Tools for maintaining the schema scripts and write the scripts for jobs/replication manually. But can we use the build process to combine it all into one package?
You should be able to use SSDT for this, by way of Publish Profiles. Create a publish profile for each instance and set up your CI jobs accordingly.
Standardizing your database names across instances (especially if they're all for the same product) would help.
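As a hypothetical example, each CI job would make one SqlPackage.exe call per database, each with its own publish profile (file names here are placeholders):

SqlPackage.exe /Action:Publish /SourceFile:"OurDatabase.dacpac" /Profile:"Instance1.publish.xml"
SqlPackage.exe /Action:Publish /SourceFile:"OurDatabase.dacpac" /Profile:"Instance2.publish.xml"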

Recreate database from RedGate checked-in scripts

We've got a SQL Server instance with some 15-20 databases, which we check into TFS with the help of RedGate. I'm working on a script to be able to replicate the instance (so a developer could run a local instance when needed, for example) with the help of these scripts. What I'm worried about is the dependencies between these scripts.
In TFS, RedGate has created these folders with .sql files for each database:
Functions
Security
Stored Procedures
Tables
Triggers
Types
Views
I did a quick test with PowerShell, just looping over these folders to execute the SQL, but I think that might not always work. Is there a strict ordering which I can follow? Or is there some simpler way to do this? To clarify, I want to be able to start with a completely empty SQL Server instance and end up with a fully configured one according to what is in TFS (without data, but that is OK). Using PowerShell is not a requirement, so if it is simpler to do it some other way, that is preferable.
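For reference, the quick test amounted to roughly the following, assuming the Invoke-Sqlcmd cmdlet and placeholder server/database names; the folder order is only a guess at the dependencies:

# guessed ordering - types before tables, tables before views, etc.
$folders = "Security", "Types", "Tables", "Functions", "Views", "Stored Procedures", "Triggers"
foreach ($folder in $folders) {
    Get-ChildItem ".\$folder" -Filter *.sql | ForEach-Object {
        # this may still fail when an object references one that is not created yet
        Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDb" -InputFile $_.FullName
    }
}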
If you're already using RedGate, they have a ton of articles on how to move changes from source control to the database. Here's one which describes moving database code from TFS using the sqlcompare command line:
http://www.codeproject.com/Articles/168595/Continuous-Integration-for-Database-Development
If you compare against an empty database, it will create the script you are looking for.
The only reliable way to deploy the database from scripts folders would be to use Red Gate SQL Compare. If you run the .sql files using PowerShell, the objects may not be created in the right order. Even if you run them in an order that makes sense (functions, then tables, then views...), you still may have dependency issues.
SQL Compare reads all of the scripts and uses them to construct a "virtual" database in memory, then it calculates a dependency matrix for it so when the deployment script is created, things are done in the correct order. That will prevent SQL Server from throwing dependency-related errors.
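A hypothetical command line for that kind of deployment, with paths and names as placeholders:

sqlcompare /Scripts1:"C:\tfs\MyDatabase" /Server2:"localhost" /Database2:"MyDatabase" /Synchronize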
If you are using Visual Studio with the database project option, it includes a Schema Compare that will allow you to compare what is in the database project in TFS to the local instance. It will create a script for you to have those objects created in the local instance. I have not tried doing this for a complete instance.
At most, you might have to create the databases in the local instance and then let Visual Studio see that the tables and other objects are not there.
You could also just take the last backup of each database and let the developer restore them to their local instance. However, this can vary by environment depending on security policy and what type of data is in the database.
I tend to just use PowerShell to build the scripts for me. I have more control over what is scripted out and when, so when I rerun the scripts on the local instance I can do it in the order it needs to be done in. It may take a little more time, but I get better-functioning scripts to work with, and PS is just my preference. There are some good scripts already written in the SQL community that can help you with this. Jen McCown did a blog post collecting all the posts her husband has written on doing just this, right here.
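A minimal sketch of that kind of PowerShell scripting with SMO, assuming placeholder server, database, and output paths:

# load SMO and connect (server/database names are placeholders)
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
$db = $server.Databases["MyDb"]
foreach ($table in $db.Tables) {
    # Script() returns the CREATE statements for the object
    $table.Script() | Out-File ".\Tables\$($table.Schema).$($table.Name).sql"
}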
I've blogged about how to build a database from a set of .sql files using the SQL Compare command line.
http://geekswithblogs.net/SQLDev/archive/2012/04/16/how-to-build-a-database-from-source-control.aspx
The post is more from the point of view of setting up continuous integration, but the principles are the same.

Exporting database on oracle

I have an Oracle DB on Windows Server 2003. How do I export it with all the data and move it to another Windows server?
Use RMAN to take a full backup. Then restore it on the new server.
See Clone using RMAN Article
You can use Oracle Data Pump to export and import the database. Quote from the documentation:
Oracle Data Pump is a feature of Oracle Database 11g Release 2 that enables very fast bulk data and metadata movement between Oracle databases.
The procedure is like this:
Export the existing database using the expdp utility
Install the Oracle database server on the new Windows server
Import the database on the new server using the impdp utility
Check this link: Oracle Data Pump. There you will find the complete documentation and examples of how to use this utility.
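For illustration, the export and import commands look roughly like this (credentials, connect strings, and file names are placeholders):

expdp system/password@sourcedb full=Y directory=DATA_PUMP_DIR dumpfile=fulldb.dmp logfile=expfull.log
impdp system/password@targetdb full=Y directory=DATA_PUMP_DIR dumpfile=fulldb.dmp logfile=impfull.log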
If you want to create an exact copy of an existing database on a new server with the same operating system (though not necessarily the same O/S version) and the same Oracle version, the quickest and least problematic method is to just copy the database files. This is often referred to as database cloning, and it is a common method DBAs use to set up development and test databases that are intended to be exact duplicates of production databases.
Stop all instances of the database on the existing system. You could log in to each instance "as sysdba" using SQL*Plus and run the "shutdown immediate" command. You could also stop the Windows services for the instances. They are named OracleServiceSID, where "SID" is the instance name. Usually there is just one instance, but there can be multiple instances for a single database. All instances must be stopped for this procedure.
Locate the database files. Look for an "oradata" folder somewhere below the Oracle root folder and then find the folder for the database SID in there. (There could be multiple oradata folders. You need to find the one that has the folder named for the SID of your database.) There are also the files in the Admin folder for the SID, as well as the %ORACLE_HOME%/database folder. If DBCA was used to create the database, the location of all of these files varies by Oracle version.
Once you have identified all of the files for the database, you can use any method at your disposal to copy these files to the same locations on the new server. (Note: The database files, control files, and redo logs must be placed in the same locations (i.e., file system paths) where they exist on the old server. Otherwise, configuration files must be changed and commands must be run to alter the database's internal file paths.) The parameter file (initSID.ora) and server parameter file (spfileSID.ora) must be placed in the %ORACLE_HOME%/database folder.
On the new server, you must run the oradim utility. (Note: oradim is an Oracle utility, specific to Windows, that is used to create, maintain, and delete instance services.) Here is a sample command:
oradim -new -sid yourdbsid -startmode automatic
Start up the database with SQL*Plus, and you should be in business.
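That is, roughly:

sqlplus / as sysdba
SQL> startup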
This is a general overview of the process, but it should help you get the job done quickly and easily. The problem with other tools is the need to create an empty database on the target server before loading the data by whatever means. If the target server has a different version of Oracle, it will be necessary to run data dictionary scripts to upgrade or downgrade the database. (Note: A downgrade may not always be possible.) If the new server has a different O/S, then the above procedure would require additional steps that would significantly increase its complexity.
It is also possible to duplicate a database using RMAN. Google the words "clone oracle database using rman" to find some good sites on how this is done using that tool. If you are not already using RMAN, the procedure I have described above would probably be the way to go.
