Flyway migration with an existing PostgreSQL database using Hasura

I've developed a web platform that uses a PostgreSQL database along with Hasura to provide a GraphQL interface. The platform is deployed in a Google Cloud environment: the database runs in a Google Cloud SQL instance, and Hasura and a simple Node.js server run on Cloud Run instances.
Since the database will keep growing, I need a secure and reliable way to keep track of changes made in the development environment so they can later be deployed to the production database.
The bulk of the edits to the database schema are done using the Hasura Console, and for now I just need a solution to track the schema changes made in the development environment so that only the needed changes are deployed to production.
Reading about migrations I found Flyway as a solution to keep track of these changes. However, I still have some concerns about introducing Flyway into the project, and a couple of questions arise:
Is it possible to use the PostgreSQL (pgAdmin) backup files as migrations?
How could I migrate from the development database to the production database? Just by adding the remote URL of the Google Cloud SQL instance and then running the migration?
There's not much need to keep track of changes to the data in production.
Is there a better option to manage changes between the development and production databases?
If I made frequent schema backups (using the pgAdmin Backup tool) and ran restores on the production database, would that do what I want?

Is it possible to use the PostgreSQL (pgAdmin) backup files as migrations?
I think you are going the wrong way. Flyway is about the migration scripts you execute to move the database from one version to the next. A backup file contains the whole database. If you want to replace the whole database with a new version of it, you could simply drop the old one and create the new one, but you would lose your data that way. You can of course use Flyway to restore a backup for you, but all you gain that way is the version table. And if you upgrade across several versions, multiple restores would be performed, which is unnecessary.
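To illustrate the difference: instead of a whole-database backup, Flyway expects small versioned SQL scripts, each describing one incremental change, and applies only the ones the target database has not seen yet. A minimal sketch (the file names and the table are hypothetical):

    # Flyway scans its configured locations for versioned scripts,
    # applies the pending ones in order, and records each one in its
    # schema history table.
    cat > sql/V1__create_customers.sql <<'SQL'
    CREATE TABLE customers (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );
    SQL
    cat > sql/V2__add_customer_email.sql <<'SQL'
    ALTER TABLE customers ADD COLUMN email text;
    SQL
    flyway migrate   # runs V1 then V2, each exactly once per database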
How could I migrate from the development database to the production database? Just by adding the remote URL of the Google Cloud SQL instance and then running the migration?
I tried googling (I entered "Google Cloud SQL flyway") and the first result pointed me to Umberto D'Ovido's post Setup Flyway with Google Cloud SQL. I'm sure with a little effort you'll find the instructions.
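In essence, pointing Flyway at the production instance is a matter of supplying its JDBC URL and credentials. A hedged sketch (host, database name, and user are placeholders):

    # Run the pending migrations against the Cloud SQL instance.
    # <PUBLIC_IP> is the instance's public IP; alternatively, run the
    # Cloud SQL Auth Proxy and point the URL at localhost.
    flyway -url=jdbc:postgresql://<PUBLIC_IP>:5432/mydb \
           -user=postgres -password="$DB_PASSWORD" migrate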

Related

Deploy a SQL Server database on multiple servers using DevOps (with a Mac as development laptop)

Hi,
We are currently working on a .NET Core project that will use multiple databases with the same structure.
In short, this is a multi-tenant project and each tenant will use the same web application (multiple instances behind a load balancer) BUT each tenant will have its own database.
We are searching for the best solution to ease our deployment process.
When the application (and DB) is updated, we need to update the database structure on all SQL Servers and all databases (one SQL Server can contain x databases).
FYI, the application and SQL Servers are hosted on AWS, and our CI/CD is Azure DevOps.
And the last (but not least) limitation: we are working in VSCode only (Mac & Linux laptops).
So, we looked at some solutions:
Using Database projects (.sqlproj) + DACPAC generation deployed using DevOps, but it's not available in VSCode
Using Migrations: not working with multiple databases and dynamic connection strings
Using SQL scripts: too complicated to maintain by hand a SQL script that takes care of all possible cases
So could someone give us some advice to solve this problem?
The general solution here is to generate SQL scripts for each deployment and integrate those into your CI/CD process.
You could use EF Migrations to generate a SQL script that is then tested, checked into your repo as a first-class asset, and deployed by your CI/CD pipeline. Or you could use SSDT to manage the schema and generate change scripts. But those aren't the only reasonable ways.
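For instance, EF Core's command-line tooling can emit such a script, and it works from VSCode on Mac and Linux. A sketch (the output path is illustrative):

    # Generate one idempotent SQL script covering all migrations; it can
    # safely be run against tenant databases at any schema version.
    dotnet ef migrations script --idempotent --output artifacts/upgrade.sql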
If you are modifying the schema by hand without using SSDT, you would normally just use a tool to generate the change script. And go from there.
There are many tools (including SSDT) that help you diff a development environment against a target production schema and generate the change scripts, e.g. Redgate ReadyRoll.
Note that if you intend to perform online schema updates you need to review the change scripts manually for offline DDL operations, and to ensure that your code/database changes have the correct forward and backward compatibility to support a rollout while the application is online.
And preparing, reviewing, testing, and editing the database change scripts is not something that everyone on the dev team needs to do. So you can always consider jumping onto a Windows VM for that task.
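Once a change script exists, fanning it out to every tenant database can be a simple scripted step in the pipeline. A hedged sketch (the server, database names, and authentication are all assumptions):

    # Apply the same upgrade script to each tenant database; in Azure
    # DevOps this could be a script step of the release pipeline.
    for DB in tenant_a tenant_b tenant_c; do
        sqlcmd -S my-sql-server.example.com -d "$DB" \
               -U deploy_user -P "$DEPLOY_PASSWORD" \
               -b -i artifacts/upgrade.sql     # -b aborts on first error
    done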

SQL Server: restore the schema only

We have a database with test data in it in the Dev environment that we use for development, and another DB in production. I want to refresh only the schema from production to the Dev environment, but I need to keep all the data that is in the Dev environment.
Is there a way to copy the database schema alone from production and refresh the schema in Dev without losing the Dev environment data?
Any help gratefully received.
You could try using schema compare from SQL Server Data Tools.
You could use SQL Compare from Redgate. It's a paid product, but if it's just a one-off you could use the trial version.
Schema compare can be done with SSDT. Make a database schema project, import the schema from production, and generate a change script against Dev.
That said, your setup is broken: you should have deployable change scripts, or how do you expect to move changes back in an orderly fashion?
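With SSDT's command-line companion SqlPackage, the schema-only refresh can be scripted. A hedged sketch (server and database names are placeholders):

    # 1. Extract the production schema into a dacpac (schema only,
    #    no table data is included by default).
    SqlPackage /Action:Extract /SourceServerName:prod-server \
               /SourceDatabaseName:AppDb /TargetFile:AppDb.dacpac
    # 2. Generate a change script against Dev for review; /Action:Publish
    #    would apply the changes directly instead.
    SqlPackage /Action:Script /SourceFile:AppDb.dacpac \
               /TargetServerName:dev-server /TargetDatabaseName:AppDb \
               /OutputPath:refresh_dev.sql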

Azure continuous deployment from GitHub and database upgrades

I have a web application that I usually deploy using Web Deploy directly from Visual Studio (from whatever branch I am currently using in VS - normally master). But now I'm introducing a second web app on Azure that will be built from the same repo but a different branch. To make things simpler I will be configuring both web apps on Azure to integrate directly with GitHub and associate each with a specific branch.
I also added two additional web.config files, Web.Primary.config and Web.Secondary.config, and configured the app settings in the Azure portal of each web app by adding the value SCM_BUILD_ARGS, set to
SCM_BUILD_ARGS=-p:PublishProfile=Primary // in primary web app
SCM_BUILD_ARGS=-p:PublishProfile=Secondary // in secondary web app
which, as I understand it, will transform the correct config file with the specific external services' configurations (DB connection, mail server, etc.).
Now the additional step that I would like to include in continuous deployment is running a set of SQL scripts that I have in my repo and that I used to run manually to upgrade the database during Web Deploy from VS. The individual scripts each perform a specific database upgrade step:
backup current tables - backup creates a set of Backup_OriginalTableName tables that are copied from existing ones and populated with existing data
drop whole DB model - all non-backup objects are being dropped from procedures, functions, types, views, tables...
create model - creates all tables, views and indices
create user types
create user functions
create stored procedures
restore data to new tables from backup tables - this step may occasionally break if we introduce new non-nullable columns without defaults to tables in the new model; I will somehow have to mitigate this by adding an additional script that adds the missing columns to the backup tables and gives them some defaults, but that's a completely different issue.
I used to also have a set of batch files (BAT) in my VS solution that simply executed sqlcmd against a specific database instance and ran these scripts in the predefined order above; a sketch follows the list below. Hence I had these batches:
Recreate Local.bat - this one used additional SQL scripts that, instead of restoring from backup, recreate an empty DB with only the lookup tables populated and some default data for development purposes (like predefined test users)
Restore Local.bat - I used this script to simply restore the database from the backup tables, discarding any invalid data I may have created while debugging/testing since the last DB recreate/upgrade/restore
Upgrade Local.bat - upgrade local development DB executing scripts mentioned above
Upgrade Production.bat - upgrade production DB on Azure executing scripts mentioned above
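As a sketch, each upgrade batch boils down to running the scripts in order and stopping on the first failure (server, database, and script names are illustrative; the real batches also pass credentials):

    SQLCMD="sqlcmd -S myserver.database.windows.net -d AppDb -b"
    $SQLCMD -i 01_backup_tables.sql            # Backup_* copies of all tables
    $SQLCMD -i 02_drop_model.sql               # drop all non-backup objects
    $SQLCMD -i 03_create_model.sql             # tables, views, indices
    $SQLCMD -i 04_create_user_types.sql
    $SQLCMD -i 05_create_user_functions.sql
    $SQLCMD -i 06_create_stored_procedures.sql
    $SQLCMD -i 07_restore_from_backup.sql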
So to support the whole deployment process I am now doing manually in VS, I would like to also execute these scripts against the specific Azure SQL DB during continuous deployment. I suppose I should run them right after the code deployment, because if that fails, the DB shouldn't be upgraded either.
I'm a bit confused about where and how to do this. Can I configure it somewhere in the Azure portal? I was looking for resources on the web but I can't seem to find any relevant information on how to add deployment steps that execute these scripts. I would think this is an everyday scenario, as it's hard to think of web apps not requiring databases these days.
Maybe it's just my DB upgrade/deployment process that is wrong, so let me know if there is another common way to do DB upgrades/migrations with continuous deployment on Azure... I may change my process to accommodate this.
Note 1: I'm not using Entity Framework or any other full-blown ORM. I'm using NPoco, and all my DB logic is built into SPs that the DAL uses.
Note 2: I'm aware of the recently introduced staging capabilities of Azure, but my apps are on a cheaper plan that doesn't support staging, and I want to keep it that way as I may be introducing additional web apps along the way that will use additional code branches and resources (DB, mail, etc.)
It sounds to me like your db project is a good candidate for SSDT and inclusion in source control. You can create a MyDB.sqlproj that builds your db as a dacpac, and then you can use SqlPackage.exe Publish to accomplish your deployment to Azure.
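A hedged sketch of that publish step (the file name, server, and credentials are placeholders):

    # SqlPackage diffs the dacpac against the target database and
    # applies the necessary changes.
    SqlPackage.exe /Action:Publish /SourceFile:MyDB.dacpac \
        /TargetConnectionString:"Server=tcp:myserver.database.windows.net;Database=MyDb;User ID=deploy;Password=$DB_PASSWORD"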
We recently brought our databases under source control and follow a similar process to build and automatically deploy them (but not to a SQL Azure DB). We've found the source control, SSDT tooling support, and automated deployment options to be worth the effort of setting up and maintaining our project this way.
This SO question has some good notes for Azure deployment of a dacpac using SSDT:
How to publish DACPAC file to a SQL Server database project via SQLPackage.exe of SSDT?

Flyway/Liquibase for Database Structure and DBUnit for Database Inserts?

I have the following scenario for my application:
1 Production Server
1 Test Server
n Development Computers
For database migration we use Hibernate Schema Update for the schema and DBUnit for filling in all the production data (on all servers/computers). When the schema update is done I generate a new DTD file for the new schema, so I can do a fresh import of the DBUnit XML. The application updates the database at startup with the XML file (only on development and test servers/computers!).
Of course this approach is not optimal and is fragile. So I looked at Liquibase and Flyway. Both seem to be great tools, but what I do not get is: how do I migrate the data? In my case, I dump the data of the production system once a week and add it to the application's source control as a DBUnit XML file, so all developers have "fresh" data and the test server has current production data, too.
The problem I see with Liquibase and Flyway is that there is no way to automatically diff the database data and generate the data migration changes.
So my idea involves the following steps:
Set Hibernate to validate instead of update.
When a STRUCTURAL database change is needed, I add it to the migration script for the major version
No database inserts are in the migration script.
Generate a new DTD for DBunit based on the new database structure
Generate the DBUnit XML from the production database.
Another idea would be to utilize Flyway's JavaMigration and provide an initial database dump based on DBUnit. All other changes to database data would be handled in migration scripts. But still there is the problem: how do I diff the current migration script state against the production database state?
It would be awesome if anyone could provide hints on how to handle my scenario :)
If your goal is to use dumps of the PROD database in DEV and TEST environments, I would:
Configure the DB migration tool to run on application startup (both Flyway and Liquibase support this through their respective APIs)
Package all the DB structure migrations together with the app
Dump both data and structure from PROD
This way, when the PROD database is restored to DEV or TEST, the old metadata table of the migration tool is restored as well.
When the app starts, the migration tool will discover that the db structure is outdated and upgrade it to the newest version. Done.
No need to use DBUnit for this.
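Concretely, the workflow could look like this (the commands are illustrative, and the dump/restore tooling depends on your RDBMS):

    # Restore the PROD dump into DEV; the migration tool's metadata
    # table travels with the dump, so DEV now records PROD's version.
    # (Dump/restore commands are RDBMS-specific and omitted here.)
    # Then, at application startup via the API, or explicitly:
    flyway -url=jdbc:<rdbms>://<host>/<db> -user=<user> migrate
    # applies only the migrations PROD did not yet have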
The short answer is that all your changes would be done through Liquibase or Flyway.
We use Flyway, with the same prod/test/development setup.
We make all db changes (structure or metadata) using Flyway migration scripts, stored in source control. Each time we do a new deployment to an environment, we first run the migration scripts there (using either the command line tool or the maven plugin). The code first goes to development environment, gets integration tested there and keeps going to test and production.
The main thing to watch out for is that Flyway requires linear versioning of the migration files, so if two developers check in migrations at the same time, one of them will have to rename theirs.
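A hypothetical example of such a collision and its resolution (the script names are invented):

    # Two developers both checked in a V7 script:
    $ ls sql/
    V7__add_orders_index.sql  V7__add_customer_email.sql   # duplicate V7: Flyway errors
    # Resolution: rename the later change to the next free version.
    $ mv sql/V7__add_customer_email.sql sql/V8__add_customer_email.sql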

How do I manage version control when developing with SQL Server Express?

I am developing a website using SQL Server Express on my development machine. My web hosting company is providing me with SQL Server 2005.
At the moment all I have is a database that I develop with and a database that is on the live server. I do not have the original scripts to generate the schema but I can auto generate the create scripts individually or for the entire database.
I am now putting my code into source control and I would like to know how I manage my database schema. What do I put into it? Create commands? Alter scripts?
The database is very small at the moment and it is not hard to maintain the two databases, but I am concerned that going forward it will get out of hand. Do you have any tips for keeping the live database in sync when deploying new code?
EDIT: Any ideas as to what should go into source control? Should the DDL scripts go there?
Deploy schema changes as DDL upgrade scripts and, if you haven't already, add a table to contain the schema version number, which you update at the end of each upgrade script.
EDIT: Yes, all your scripts should go into source control, including the DDL scripts.
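To make the version-table pattern concrete, here is a minimal sketch (the table, column, and instance names are illustrative; sqlcmd is used here, but any client works):

    # Each DDL upgrade script ends by bumping the schema version.
    cat > upgrade_0002.sql <<'SQL'
    -- Upgrade schema from version 1 to version 2.
    ALTER TABLE Customers ADD Email NVARCHAR(256) NULL;
    UPDATE SchemaVersion SET Version = 2;
    SQL
    sqlcmd -S "localhost\SQLEXPRESS" -d MyDb -b -i upgrade_0002.sql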
I typically keep a testing copy of the live database on my local or virtual development box, which I routinely refresh from the prod database. The testing copy is meant for unrestricted experimentation. When I have something that I believe is ready for deployment, I move it to my development database, which mirrors the prod DB and is not used for playing around. If the development DB passes all my tests, I deploy the script to the production DB.
