Rewrite existing Liquibase script - database

I have old Liquibase scripts (inherited; they use raw sql tags everywhere instead of createTable, insert, etc.). I want to rewrite them because the old scripts do not work with H2 (they only work with Oracle).
How can I do that? (There is a lot of data in the existing database that I need to keep, but I want to completely rewrite the script that builds the schema up.)

One option is to use the generateChangeLog command against your existing Oracle database to generate a new changelog file. There are some things generateChangeLog misses, such as stored procedures, but it will capture all the tables, columns, indexes, foreign keys, etc. The data types will be Oracle-specific, but you can do a simple search/replace to turn them into something more generic.
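For example, the Oracle-specific types in the generated changelog usually map to generic equivalents like this (the table and columns here are made up for illustration):
-- As generated from Oracle:
CREATE TABLE student (name VARCHAR2(25), credits NUMBER(10, 0));
-- After a search/replace into more generic types that H2 also accepts:
CREATE TABLE student (name VARCHAR(25), credits NUMERIC(10, 0));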
Alternatively, you could take your existing changelog file as a starting point and edit it as heavily as you need. Liquibase will not care if you take changeSets out, but when you modify an existing changeSet you will need to add a validCheckSum tag to let Liquibase know that the change is expected.
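In an XML changelog, validCheckSum is a child element of the changeSet. If you use Liquibase's formatted SQL syntax instead, the equivalent directive looks like the sketch below (the author, id, and checksum are placeholders; when a changeSet has changed, Liquibase's validation error reports the checksum it expects, which you can paste in):
--liquibase formatted sql

--changeset alice:create-student
--validCheckSum: 8:00000000000000000000000000000000
CREATE TABLE student (name VARCHAR(25));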
When you do find changeSets that need to be different between Oracle and H2, you can use the dbms attribute on the changeSet to target one database type or the other.
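For example, in the same formatted SQL style, the same logical change can be written once per database, with each variant guarded by dbms (the author/id values are again placeholders):
--changeset alice:create-student-oracle dbms:oracle
CREATE TABLE student (name VARCHAR2(25));

--changeset alice:create-student-h2 dbms:h2
CREATE TABLE student (name VARCHAR(25));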

How to run raw SQL to deploy database changes

We intend to create DACPAC files using SQL database projects and distribute them automatically to several environments (DEV/QA/PROD) using Azure Pipelines. I can make changes to the schema for a table, view, function, or procedure, but I'm not sure how we can update specific data in a table. I am sure this is a very common use case, but unfortunately I am having a hard time implementing it.
Any idea how I can automate creating/updating/deleting a row in a table?
E.g.: UPDATE myTable SET myColumn = 5 WHERE someColumn = 'condition'
In your database project you can add a Post-Deployment Script.
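A post-deployment script runs on every publish, so its statements need to be idempotent. A minimal T-SQL sketch, reusing the hypothetical myTable from the question:
-- Script.PostDeployment.sql (re-runs on every deployment, so keep it idempotent)
IF EXISTS (SELECT 1 FROM dbo.myTable WHERE someColumn = 'condition')
BEGIN
    UPDATE dbo.myTable SET myColumn = 5 WHERE someColumn = 'condition';
END
ELSE
BEGIN
    INSERT INTO dbo.myTable (someColumn, myColumn) VALUES ('condition', 5);
END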
Do not. Seriously. I found DACPAC always to be WAY too limiting for serious operations. Look at how the SQL is generated and realize how little control you have.
The standard approach is to have deployment scripts that you generate and that make the changes in the database, plus a table in the db tracking which scripts have executed (possibly with a checksum so you do not need to change the name of a script to update it).
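A minimal sketch of such a tracking table (the names are illustrative, not taken from any particular framework):
CREATE TABLE dbo.SchemaChangeLog (
    ScriptName NVARCHAR(255) NOT NULL PRIMARY KEY,
    Checksum   CHAR(64)      NOT NULL, -- e.g. hex-encoded SHA-256 of the script body
    ExecutedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
);
The deployment runner skips any script whose name is already recorded with an unchanged checksum and re-runs it when the checksum differs.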
You can easily generate them partially by schema compare (and then generate the change script), but hand-maintained scripts also allow you to do things like data scrubbing and multi-step transformations that DACPAC by design cannot do efficiently and easily.
There are plenty of frameworks for this around. They generally belong in the category of developer tools.

Is it possible to automatically generate migration scripts by comparing db and code?

I'm seriously confused about how Flyway generally works to maintain the db as code. Suppose I have the following V0 script:
CREATE TABLE student (
    name VARCHAR(25)
)
That would be my initial db. Now, suppose I want to add a new column. Why am I being forced to write a V1 script like this one?
ALTER TABLE student ADD COLUMN surname VARCHAR(25)
What I'd like to do would be to simply update the V0 script like this:
CREATE TABLE student (
    name VARCHAR(25),
    surname VARCHAR(25)
)
Then the tool, by comparing against the actual db, should be able to understand that a new column should be created!
This is how tools for other code (Java, JavaScript, ...) work, and that is how I would like db-as-code tools to behave.
So my question is: is there a way to achieve this behavior without dropping/recreating the db?
I tagged this question with flyway and liquibase, but feel free to suggest other tools that would fit my needs.
Whatever way you develop the database, there is no way to achieve this behavior without dropping and recreating the db, because the CREATE TABLE statement assumes that the table you specify isn't already there. You can't use a CREATE OR ALTER statement, because that isn't supported for tables even where the RDBMS you use supports the syntax.
In the early stages of a database project, you can work much more quickly with a build script that you use to create a database with tables, views and so on. You can then insert some data, try it out, maybe run a few tests, and then tear it down. Flyway Community supports this: you just have a single migration script, starting from an empty database, that you repeatedly 'clean' and 'migrate' until you reach your first version. Flyway takes care of the tear-down process and gives you a fresh start by wiping your configured schemas completely clean.
Flyway Teams supports a special type of migration, the 'repeatable' migration, which lets you use SQL files that you can alter. However, you would need to add logic that drops the table if it already exists before your CREATE TABLE statement executes. This avoids having to run 'flyway clean', but it is a lot of extra work. It also means that you lose the whole advantage of a version representing an exact state of a database.
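A minimal sketch of what such a repeatable migration could look like (Flyway re-applies a repeatable migration whenever its checksum changes; note that dropping the table also throws away its data, which is why this only suits early development):
-- R__student.sql
DROP TABLE IF EXISTS student;
CREATE TABLE student (
    name    VARCHAR(25),
    surname VARCHAR(25)
);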
At some point, you are going to use migrations, because you're likely to have copies of the database to keep up to date. Whatever tool you use to update a development or production database, you are going to need a migration for this because of the existing data in tables.
Flyway Enterprise supports the automatic generation of a migration if you are using Oracle or SQL Server. SQL Compare is provided to compare two versions of a database and produce a migration script from one version to the next. This allows you to use a build script as you suggest, compare it with the current version of the database, and generate a migration script to get from one to the other.

Is it possible to emit sql from grails/groovy newInstance/saves?

My team is looking into db migration tools (e.g., Flyway, Liquibase), so I'm thinking about how to incorporate the changes I make to the db contents using my Groovy + Grails service method. I'm not referring to changes to columns and/or tables (i.e., domain classes); I'm referring to inserts/updates of rows which represent configuration values for the associated webapp.
My service method is written to be used somewhat interactively. That is, when I'm adding or updating rows in various tables (i.e., newInstance or save), it helps me navigate various db constraints and make sure all the foreign keys and my own business logic are set correctly. I run it repeatedly (rolling back each time afterwards using setRollbackOnly()) until I've found something I'm happy with. The method is written in Groovy, and I don't want to rewrite it in SQL.
Is there a way to get groovy/grails to emit the sql it would execute instead of executing the sql? That is, give me something I could copy/paste into a Flyway migration or Liquibase changeset?
I looked into logging, but I'd have to somehow process that output to substitute the values in and get the proper column names, and even then I'd need to distinguish the lines that actually change the db (maybe I could just extract the inserts and updates). I also looked into these grails database migration scripts, but they appear to either look at domain classes (which isn't where my changes are happening) or at the entire database (which would sweep up a lot of user data too).
Thanks!

Copy data from one db to another using liquibase

Is it possible to migrate data from one database table into another database table using Liquibase?
Right now we run Liquibase changesets on two different databases, since we have two executions in our Maven POM file. But is it possible to write one changeset that selects data from a table in one database and copies it into a table in the other database?
You could use your preferred scripting language to query the data from the source table and generate insert statements from the result. Once you have the insert statements, put them in a Liquibase formatted SQL file and run it against your target database.
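For example, the generated file could be a formatted SQL changelog along these lines (the changeset author/id, table, and rows are placeholders):
--liquibase formatted sql

--changeset data-copy:1
INSERT INTO student (name, surname) VALUES ('Ada', 'Lovelace');
INSERT INTO student (name, surname) VALUES ('Alan', 'Turing');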
Ideally, these files would have been created when the data was originally inserted. If your database already existed before you started using Liquibase, then it may be a good idea to restore from a backup taken on the day you started using Liquibase and sync it from there.

Compare SQL Server DB schema & data (at the same time) and generate scripts

I've got a reasonably large/complicated DB which I need to upgrade in the field from version 1 to version 2. There are a lot of changes in schema and, importantly, data between the two.
Yes, I know this should have been version controlled à la:
http://www.codinghorror.com/blog/2008/02/get-your-database-under-version-control.html
but it wasn't - it will be when I am done.
So, the current problem: I'm faced with the choice of either going through all the commits or trying to diff between two versions of the db. So far I've tried:
http://opendbiff.codeplex.com/
http://www.sqldelta.com/
http://www.red-gate.com/
However, none of them seem to be able to successfully generate schema upgrade scripts, because they don't also handle the data at the same time. This results in foreign key violations when adding new keys to tables: the referenced table is new, and while its schema has been created, the data it contains has not. Well, it could be, but that requires me to use a different part of the tool and then mix the two scripts together.
I know this may look like a duplicate of:
What is best tool to compare two SQL Server databases (schema and data)?
which is where I found most of the existing tools I've tried, but so far I've not managed to get any of them to produce a working schema migration script (I'm really not too fussed about most of the data, but I do need the data that the foreign keys depend on, which tbh is all the difference, as I have deployed both the old version and the new version).
Am I expecting too much?
Should I give up and start manually stitching together what I do have?
Or do I go through all the commits and manually create upgrade scripts?
I can't think of more powerful tools available than the ones you seem to have tried. If those fail, my homegrown versioning system probably won't help you much either.
However, you should be able to generate an update script and then manually edit it to add the data transformations to it.
And/or you could disable the foreign key constraints for the time that the update script runs.
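For instance, on SQL Server you could wrap the generated script like this (sp_MSforeachtable is an undocumented but long-standing helper procedure; re-enabling WITH CHECK makes SQL Server re-validate the existing data afterwards):
-- Disable all foreign key and check constraints before the upgrade runs
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
-- ... schema and data changes go here ...
-- Re-enable and re-validate all constraints afterwards
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';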
There is no such thing as doing schema and data "at the same time". Even if you have them in one big script, you would still be doing the schema first and then the data. If the schema script creates a new table and adds a constraint to it, there is no reason you should get a referential integrity violation error, as there are no rows in those tables yet.
In any case, you should give our xSQL Schema Compare and Data Compare tools a try; you will be impressed with the performance and the level of control you get.
