Sync database between multiple users - reactjs

I am looking for a solution to sync the DB between multiple developers (us at the office).
We use WordPress and MAMP (for now; MAMP/headless WP and NPM/React in the future) and we want to use AppVeyor (or similar) to deploy to a dev server and a live server. We want the DB to be synced everywhere, or at least among us and the dev server, with a separate (free-standing) DB on the live server.
Can this be done with Liquidbase or is there a better option?
Thanks :)

I don't know a whole lot about WordPress and how it uses the database, but in theory this should be possible as long as you are talking about syncing the schema changes. If you are also trying to sync the data, then Liquibase is not the right tool for the job.
To do this with Liquibase, try installing it with the installer and working through some of the examples to get a feel for how the tool works. The examples use a local H2 in-memory database, so it is pretty painless to try things and start over if you mess things up.
After getting a feel for things, you will want to use the Liquibase generateChangeLog command to create an initial changelog containing all the instructions for creating the schema as it exists on the database at the time you run generateChangeLog. Then test that you can run liquibase update against a separate database and have WordPress use that database successfully.
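For example (connection details and file names here are invented for illustration, not taken from the question):
# capture the existing schema as a changelog
liquibase --url=jdbc:mysql://localhost:3306/wp_dev --username=dev --password=secret --changeLogFile=changelog.xml generateChangeLog
# recreate the schema on a separate, empty database and point WordPress at it
liquibase --url=jdbc:mysql://localhost:3306/wp_copy --username=dev --password=secret --changeLogFile=changelog.xml update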
Once you have proven that workflow, you can continue by following this pattern:
1. Before making changes to the WordPress schema, run liquibase snapshot to create a JSON-formatted snapshot of the "DEV" schema - the schema you are changing in development mode. You will need additional options to generate the JSON format snapshot.
2. Make the desired changes to the WordPress "DEV" schema, most likely by using the WordPress app itself.
3. Use liquibase diffChangeLog to compare the JSON snapshot to the newly-altered "DEV" schema. This will add changesets to the existing changelog file that describe how to alter the schema to create the desired changes.
4. Use liquibase changelogSync on the "DEV" schema to update the Liquibase tracking tables so that Liquibase knows that the changes in the changelog already exist in that database.
5. Use liquibase update against the "PROD" database to have the new schema changes show up in that environment.
This workflow is described in the Liquibase docs for the snapshot command.
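A minimal command-line sketch of that loop (URLs, credentials, and file names are hypothetical, and exact flags vary between Liquibase versions):
# 1. snapshot the DEV schema before changing it
liquibase --url=jdbc:mysql://localhost:3306/wp_dev --username=dev --password=secret --outputFile=dev-before.json snapshot --snapshotFormat=json
# 2. after changing the schema through WordPress, diff the snapshot against DEV
liquibase --url=offline:mysql?snapshot=dev-before.json --referenceUrl=jdbc:mysql://localhost:3306/wp_dev --referenceUsername=dev --referencePassword=secret --changeLogFile=changelog.xml diffChangeLog
# 3. mark the new changesets as already applied on DEV
liquibase --url=jdbc:mysql://localhost:3306/wp_dev --username=dev --password=secret --changeLogFile=changelog.xml changelogSync
# 4. apply them to the PROD database
liquibase --url=jdbc:mysql://localhost:3306/wp_prod --username=deploy --password=secret --changeLogFile=changelog.xml update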
ps - there is no d in Liquibase :-)

Related

How to remove prisma models with data in production

I want to delete 2 models from a PostgreSQL production database that are no longer used but still have data in them.
I am a bit afraid of removing them from the schema, running prisma migrate dev, and then having issues in production with the generated migration.
Therefore, my idea was to clean the database before:
1. Make a query to the prod database to remove all the data from the related tables.
2. Remove the models from the schema locally and run the migration.
3. Push the migration to prod with the DB tables empty.
What do you guys think?
Should I do that, or just not be afraid: delete the models locally, run prisma migrate dev, and then push the migration to prod?
Are there any other ways to accomplish what I want?
Thanks in advance!
The approach you listed looks good.
Another alternative would be to delete the models from the schema file and run the npx prisma migrate dev command with the --create-only flag to create the migration file without applying it to the database.
You can inspect the migration file first and if it looks good to you then you can invoke npx prisma migrate dev again to apply it.
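In commands, that flow looks roughly like this (the migration name is just an example):
npx prisma migrate dev --name remove_unused_models --create-only
# review prisma/migrations/<timestamp>_remove_unused_models/migration.sql, then apply it:
npx prisma migrate dev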
Your plan seems like a good one! It ensures that the migration will not be affected by any data that is still present in the tables.
Another option you could consider is to rename the tables in the database, rather than removing them entirely. This will allow you to retrieve the data later (if necessary), but it will also allow you to remove the models from your schema without any issues.
So I would do this:
1. Rename the tables.
2. Remove the models from the schema locally.
3. Push the migration to prod.
4. Once you are sure the data is no longer needed, drop the renamed tables from step 1.
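A sketch of how the rename idea could be combined with Prisma's tooling (this is my interpretation - the answer doesn't spell out the mechanics - and the model/table names are hypothetical):
npx prisma migrate dev --name archive_unused_models --create-only
# in the generated migration.sql, replace the DROP TABLE statements with renames, e.g.:
#   ALTER TABLE "ObsoleteModel" RENAME TO "ObsoleteModel_archived";
npx prisma migrate dev
Note that hand-edited migrations and tables living outside the schema can confuse later prisma migrate dev runs, so treat this as a starting point rather than a recipe.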

run liquibase on multiple databases at different versions

I am trying to integrate Liquibase with our Spring/Hibernate web-app to replace our existing home-grown solution. So far Liquibase is great, but there's one use-case that is important to us and I don't know if Liquibase supports it or not, which is this:
We deploy our web app to clients who host the webapp and the database (MySQL) themselves. So, suppose we deploy to our first client (client1) with a new, clean DB schema (generated from Hibernate mappings) and no entries in the Liquibase changelog. We then develop some schema changes and redeploy the application to client1, and Liquibase does its stuff and applies the changesets - all great so far.
Now, we deploy to a new client, client2, again with a new database schema generated from Hibernate mappings. But this time there are changesets present (for the changes made between the client1 and client2 deployments) that don't need to be applied, as they're already part of the new schema. However, because the DATABASECHANGELOG table is empty, Liquibase will try to apply the changesets and will probably fail with SQL errors.
What we'd like is for new deployments to new clients to 'know' at what changeset they are (relative to the first deployment to client 1), so it only applies subsequent updates.
There seem to be several possibilities for this, probably more I've not thought of:
populate DATABASECHANGELOG with fake entries to fool Liquibase into thinking the changesets have already been applied.
always deploy our first, baseline schema to subsequent clients and run updates sequentially, so we never deploy a 'new' schema derived from Hibernate mappings after client1.
use our own tracking system (e.g., map a DB version to an application version, and a DB version to a changeset).
Is this a problem, or am I just not understanding how to use Liquibase properly? I would be grateful for any advice from people who've dealt with this sort of use-case before. We'd really like to avoid deployment-specific changesets if at all possible - there will be dozens, if not hundreds, of deployments to handle.
Thanks,
Richard
We have a similar setup.
But we get Liquibase into the game earlier: before we officially release the software, we set up the Liquibase changesets and let Liquibase handle the database.
We did not want to lose the advantage of letting Hibernate generate the DB during the development phase, so we also use Hibernate while developing.
But right before the version is stable, we run the liquibase diff tool on the database and let it create a changeset for the Hibernate-generated tables.
This changeset is then corrected manually, since the liquibase diff tool produces some flaws.
Once the changeset is ready we ship this with the software.
We maintain a reference system that keeps the database version of the last officially released version. For the next release, we run the liquibase diff tool with the current development version against the reference DB. That spits out the differences for the next version. These are also corrected manually, and finally you have a changeset that migrates the DB to the next version.
Hope this gives you an idea of one way to use liquibase and hibernate together.
I usually suggest always running the same changelog file against all your different databases. That way you don't have to deal with manually marking changesets as run, using preconditions, or anything else. Most importantly, every database will follow the same upgrade path, so you know they are going to update consistently without any unexpected problems.
You can use the liquibase hibernate extension to automatically append changeSets to your changelog based on your hibernate mapping, but when it comes time to deploy your changes to the databases you just run your liquibase changelog file and not try to use hibernate's schema generation logic at all.
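With the extension on the classpath, that can look roughly like this (package name, dialect, and connection details are placeholders, and the exact reference URL syntax depends on the extension version):
# append changesets derived from the Hibernate mappings to the shared changelog
liquibase --changeLogFile=changelog.xml --url=jdbc:mysql://localhost:3306/appdb_dev --username=app --password=secret --referenceUrl="hibernate:spring:com.example.model?dialect=org.hibernate.dialect.MySQL5Dialect" diffChangeLog
# deploy by running the same changelog against every database
liquibase --changeLogFile=changelog.xml --url=jdbc:mysql://client1.example.com:3306/appdb --username=app --password=secret update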
For option 1 above (populate with fake entries), I've just discovered the changelogSync command, which looks like it marks all changeset entries as applied, even if they haven't been.
But is this better or worse than genuinely applying the changes, from a baseline schema?
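For what it's worth, the invocation against a freshly created client database would be something like this (connection details hypothetical):
# client2's schema was generated already up to date, so record all changesets as applied without running them
liquibase --changeLogFile=changelog.xml --url=jdbc:mysql://client2.example.com:3306/appdb --username=app --password=secret changelogSync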

Grails database migration changelog "rebase"

Is there an easy way to do a "git rebase"-like operation for Grails Database Migration plugin changelog scripts?
I already have several changelog scripts on top of the initial changelog from an old domain model. Now I'm deploying the application to a new environment and there's no need to migrate the database contents.
I could delete the scripts and generate a fresh initial script from the current domain model, but then I'd have to install Grails in the old environment and execute dbm-clear-checksums there, right?
Is there an easier way to tell dbm that I don't want to create an old domain and patch it to the current level?
Run the dbm-changelog-sync script - it marks everything as having been run.
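That is, from the application directory:
# records every changeset in the changelog as executed without running it
grails dbm-changelog-sync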

Flyway/Liquibase for Database Structure and DBUnit for Database Inserts?

I have the following scenario for my application:
1 Production Server
1 Test Server
n Development Computers
For database migration we use Hibernate Schema Update for the schema and DBUnit for filling in all the production data (on all servers/computers). When the schema update is done, I generate a new DTD file for the new schema, so I can do a fresh import of the DBUnit XML. The application updates the database at startup with the XML file (only on development and test servers/computers!).
Of course this approach is not optimal and fragile. So I looked at Liquibase and Flyway. Both seem to be great tools, but what I do not get is: How do I migrate the data? In my case, I dump the data of the production system once a week and add it to the applications source control as a DBUnit XML file, so all developers have "fresh" data and the test server has current production data, too.
The problem I see with Liquibase and Flyway is that there is no way to automatically diff the database data and generate the migration changes.
So my idea is the following:
Set Hibernate to validate instead of update.
When a STRUCTURAL database change is needed, I add it to the migration script for the major version.
No database inserts go into the migration script.
Generate a new DTD for DBUnit based on the new database structure.
Generate the DBUnit XML from the production database.
Another idea would be to utilize Flyway's JavaMigration and provide an initial database dump based on DBUnit. All other changes to database data would be handled in migration scripts. But still there is the problem: how to diff the current migration script state against the production database state?
It would be awesome if anyone could provide me hints how to handle my scenario :)
If your goal is to use dumps of the PROD database in DEV and TEST environments, I would:
Configure the DB migration tool to run on application startup (both Flyway and Liquibase support this through their respective APIs)
Package all the DB structure migrations together with the app
Dump both data and structure from PROD
This way, when the PROD database is restored to DEV or TEST, the old metadata table of the migration tool is restored as well.
When the app starts, the migration tool will discover that the db structure is outdated and upgrade it to the newest version. Done.
No need to use DBUnit for this.
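A rough sketch of that restore-then-migrate cycle, with hypothetical database names and using Flyway's command line instead of the startup API:
# dump structure and data (including the migration tool's metadata table) from PROD
mysqldump -u dumper -p prod_db > prod_dump.sql
# restore into the DEV database
mysql -u dev -p dev_db < prod_dump.sql
# bring the restored schema up to the latest packaged migrations
flyway -url=jdbc:mysql://localhost:3306/dev_db -user=dev -password=secret migrate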
The short answer is that all your changes would be done through Liquibase or Flyway.
We use Flyway, with the same prod/test/development setup.
We make all DB changes (structure or metadata) using Flyway migration scripts, stored in source control. Each time we do a new deployment to an environment, we first run the migration scripts there (using either the command line tool or the Maven plugin). The code first goes to the development environment, gets integration tested there, and moves on to test and production.
The main thing to watch out for is that Flyway requires linear versioning of the files, so if two developers check in migrations at the same time, one of them will have to rename theirs.
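For reference, the version is encoded in the migration file names, along these lines (names invented for illustration):
sql/V1__initial_schema.sql
sql/V2__add_customer_table.sql
sql/V3__add_customer_email_index.sql
If two developers both check in a V4__ file, Flyway rejects the duplicate version number, which is what forces one of them to rename.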

How to update a database schema without losing your data with Hibernate?

Imagine you are developing a Java EE app using Hibernate and JBoss. You have a running server that holds some important data. You release a new version of the app once in a while (every 1-2 weeks), each release bringing a bunch of changes in the persistence layer:
New entities
Removed entities
Attribute type changes
Attribute name changes
Relationship changes
How do you effectively set up a system that updates the database schema and preserves the data? As far as I know (I may be mistaken), Hibernate doesn't perform alter column or drop/alter constraint.
Thank you,
Artem B.
LiquiBase is your best bet. It has a hibernate integration mode that uses Hibernate's hbm2ddl to compare your database and your hibernate mapping, but rather than updating the database automatically, it outputs a liquibase changelog file which can be inspected before actually running.
While more convenient, any tool that does a comparison of your database and your hibernate mappings is going to make mistakes. See http://www.liquibase.org/2007/06/the-problem-with-database-diffs.html for examples. With liquibase you build up a list of database changes as you develop in a format that can survive code with branches and merges.
I personally keep track of all changes in a migration SQL script.
You can use the https://github.com/Devskiller/jpa2ddl tool, which provides Maven and Gradle plugins and is capable of generating automated schema migrations for Flyway based on JPA entities. It also takes all properties, dialects, user types, naming strategies, etc. into account.
For one app I use SchemaUpdate, which is built in to Hibernate, straight from a bootstrap class so the schema is checked every time the app starts up. That takes care of adding new columns or tables which is mostly what happens to a mature app. To handle special cases, like dropping columns, the bootstrap just manually runs the ddl in a try/catch so if it's already been dropped once, it just silently throws an error. I'm not sure I'd do this with mission critical data in a production app, but in several years and hundreds of deployments, I've never had a problem with it.
As a further response to what Nathan Voxland said about LiquiBase, here's an example of executing a migration under Windows for a MySQL database:
Put the MySQL connector under the lib folder of the Liquibase distribution, for example.
Create a properties file liquibase.properties in the root of the Liquibase distribution and insert these lines:
driver: com.mysql.jdbc.Driver
classpath: lib\\mysql-connector-java-5.1.30.jar
url: jdbc:mysql://localhost:3306/OLDdatabase
username: root
password: pwd
Generate or retrieve an updated database under another name, for example NEWdatabase.
Now extract the differences into a file Migration.xml with the following command line:
liquibase diffChangeLog --referenceUrl="jdbc:mysql://localhost:3306/NEWdatabase"
--referenceUsername=root --referencePassword=pwd > C:\Users\ME\Desktop\Migration.xml
Finally, execute the update using the newly generated Migration.xml file:
java -jar liquibase.jar --changeLogFile="C:\Users\ME\Desktop\Migration.xml" update
NB: all these commands should be executed from the Liquibase home directory, where liquibase.bat/.sh and liquibase.jar are present.
I use the hbm2ddl ant task to generate my ddl. There is an option that will perform alter tables/columns in your database.
Please see the "update" attribute of the hbm2ddl ant task:
http://www.hibernate.org/hib_docs/tools/reference/en/html/ant.html#d0e1137
update (default: false): Try and create an update script representing the "delta" between what is in the database and what the mappings specify. Ignores create/update attributes. (Do not use against production databases, no guarantees at all that the proper delta can be generated nor that the underlying database can actually execute the needed operations)
You can also use DBMigrate. It's similar to Liquibase:
Similar to 'rake migrate' for Ruby on Rails, this library lets you manage database upgrades for your Java applications.
