I want to delete 2 models from a PostgreSQL production database that are no longer used but still contain data.
I am a bit afraid of removing them from the schema, running prisma migrate dev, and then having issues in production with the generated migration.
Therefore, my idea was to clean the database before:
1. Make a query to the prod database to remove all the data from the related tables (see the sketch after this list).
2. Remove the models from the schema locally and run the migration.
3. Push the migration to prod while the DB tables are empty.
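For step 1, the cleanup query would be something like this (the table names here are just placeholders for the real ones):

-- Empty the unused tables; CASCADE also clears rows that reference them.
-- "OldModelA" / "OldModelB" are placeholder names.
TRUNCATE TABLE "OldModelA", "OldModelB" CASCADE;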
What do you guys think?
Should I do that, or just not be afraid: delete the models locally, run prisma migrate dev, and then push the migration to prod?
Are there any other ways to accomplish what I want?
Thanks in advance!
The approach you listed looks good.
Another alternative would be to delete the models from the schema file and run the npx prisma migrate dev command with the --create-only flag to just create the migration file without applying it to the database.
You can inspect the migration file first, and if it looks good to you, invoke npx prisma migrate dev again to apply it.
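For two dropped models, the generated migration should contain little more than DROP statements, roughly like this (table names are hypothetical):

-- Roughly what the --create-only migration might contain
-- ("OldModelA"/"OldModelB" are hypothetical table names):
DROP TABLE "OldModelA";
DROP TABLE "OldModelB";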
Your plan seems like a good one! It ensures that the migration will not be affected by any data still present in the tables.
Another option you could consider is to rename the tables in the database rather than removing them entirely. That way you can still retrieve the data later (if necessary), while also removing the models from your schema without any issues.
So I would do this:
1. Rename the tables (see the sketch after this list).
2. Remove the models from the schema locally and generate the migration.
3. Push the migration to prod.
4. Once everything works, remove the renamed tables from step 1.
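A sketch of step 1 in plain SQL (the names are placeholders for your actual tables):

-- Keep the data around under a different name (placeholder names):
ALTER TABLE "OldModelA" RENAME TO "OldModelA_archive";
ALTER TABLE "OldModelB" RENAME TO "OldModelB_archive";
-- Step 4, once you are sure the data is no longer needed:
-- DROP TABLE "OldModelA_archive", "OldModelB_archive";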
I currently have 2 databases used by 2 services (let's call them database/service A and database/service B), both of them with their own schemas.
I need to migrate some of the tables from DB A into DB B, and once that's all completed, re-point service A to DB B. I know I could easily do the schema migration using the pg_dump utility, and that seems to be the "easy" bit.
The problem I have is that both services use Flyway for database version control, so when I re-point service A to DB B there are a bunch of migrations clashing on the same version numbers with checksum mismatches.
I've seen that there's a "baseline" functionality in Flyway (https://flywaydb.org/documentation/command/baseline), but at first look that doesn't seem to be what I need.
How could I resolve this problem?
On first considering this problem, the immediate answer is that your move from DbA to DbB is done through one migration on top of the existing migrations in DbB. You don't try to modify the database outside of the Flyway process; instead, you incorporate the change into the Flyway process. Flyway is agnostic to the set of changes you introduce, so you're just adding another change to the existing set. This shouldn't require a repair or a baseline to get to the required point.
Let's say the last migration for DbB is V6.3__XXX; we just add V6.4__MigratingDbA to the chain of changes, and that script contains the necessary set of changes. That should do it.
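Since pg_dump is mentioned, assuming PostgreSQL, that script might be sketched like this (the table and column names are invented for illustration):

-- V6.4__MigratingDbA.sql (sketch; all names are hypothetical)
-- Recreate the tables that are moving over from DB A:
CREATE TABLE customer_orders (
    id          BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL,
    total       NUMERIC(12,2)
);
-- Load the data separately, e.g. by restoring a pg_dump of DB A's
-- tables, or with COPY from an exported file:
-- COPY customer_orders FROM '/path/to/export.csv' WITH (FORMAT csv);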
Grant's answer is definitely the best, but if the database objects for the two services are completely independent, an alternative is to have two Flyway configurations which refer to the script collections for each service and which have distinct history tables. The problem is if there are dependencies between the two services; the migrations from one service would then need to know the current state of play in the other, which could get you in a tangle when actually executing them.
I am looking for a solution to sync a DB between multiple developers (us at the office...).
We use WordPress and MAMP (for now; MAMP/headless WP and NPM/React in the future). We want to use AppVeyor (or similar) to deploy to a dev server and a live server, and we want the DB to be synced everywhere, or at least among us and the dev server, with a separate free-standing one on the live server.
Can this be done with Liquidbase or is there a better option?
Thanks :)
I don't know a whole lot about WordPress and how it uses the database, but in theory this should be possible as long as you are talking about syncing the schema changes. If you are also trying to sync the data, then Liquibase is not the right tool for the job.
To get started with Liquibase, install it using the installer and work through some of the examples to get an idea of how the tool works. The examples use a local H2 in-memory database, so it is pretty painless to try things and start over if you mess something up.
After getting a feel for things, you will want to use the Liquibase generateChangeLog command to create the initial changelog that contains all the instructions for creating the schema as it exists on the database you are using when you run generateChangeLog. Then test that you can run liquibase update on a separate database and have WordPress use that database successfully.
Once you have proven that workflow, you can continue by following this pattern:
1. Before making changes to the WordPress schema, run liquibase snapshot to create a JSON-formatted snapshot of the "DEV" schema - the schema you are changing in development. You will need additional options to generate the snapshot in JSON format.
2. Make the desired changes to the WordPress "DEV" schema, most likely by using the WordPress app itself.
3. Use liquibase diffChangeLog to compare the JSON snapshot to the newly-altered "DEV" schema. This will append changesets to the existing changelog file that describe how to alter the schema to create the desired changes.
4. Use liquibase changeLogSync on the "DEV" schema to update the Liquibase tracking tables so that Liquibase knows the changes in the changelog already exist in that database.
5. Use liquibase update against the "PROD" database to have the new schema changes show up in that environment.
This workflow is described in the Liquibase docs for the snapshot command.
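As an illustration, a changeset appended by diffChangeLog might look like this, shown here in Liquibase's SQL changelog format (the author id, table, and column are all invented):

--liquibase formatted sql

--changeset dev:add-example-flag
-- Hypothetical change picked up by the diff: a column added to a
-- WordPress table while working in "DEV".
ALTER TABLE wp_usermeta ADD COLUMN example_flag SMALLINT DEFAULT 0;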
ps - there is no d in Liquibase :-)
Hey folks and moderators,
Let's say I have deployed a Laravel application to a production server; how do I modify and update the application without affecting production data?
Assume I want to release the next version of the application with a few additional columns in the users table.
The question is: should I clone the database from live to staging?
What is the right way to modify the staging application and deploy to production without affecting the production database, even though there are additional tables/columns in staging?
Currently I run two different environments, drop the production tables, and import from staging. That doesn't sound efficient.
Are there any better ideas for improving the path from staging to production?
Thank you!
I've tried searching around, but unfortunately with no luck.
Assume I want to release the next version of the application with a few additional columns in the users table.
Yes, write a new migration like this:
Schema::table('users', function (Blueprint $table) {
    $table->string('email');              // adding a new column
    $table->string('name', 50)->change(); // changing an existing column
});
That's for adding; for changing column types, renaming, and dropping columns, you will also have to install doctrine/dbal: composer require doctrine/dbal
See more info in the docs: https://laravel.com/docs/5.6/migrations#creating-columns
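For reference, the migration above boils down to SQL roughly like this (assuming MySQL; 255 is Laravel's default string length):

-- Roughly what the Blueprint calls above execute (MySQL assumed):
ALTER TABLE users ADD email VARCHAR(255) NOT NULL;
ALTER TABLE users MODIFY name VARCHAR(50) NOT NULL;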
The question is: should I clone the database from live to staging? What is the right way to modify the staging application and deploy to production without affecting the production database, even though there are additional tables/columns in staging?
...
Are there any better ideas for improving the path from staging to production?
I'm assuming you are using Git, a single server, and that you can take your app down for a while, ideally doing the update when the fewest users are on the system.
Push to repository
ssh to production server
php artisan down --message="Your Awesome Message About your update" --retry=60
BACK UP YOUR DATABASE (Just in case something goes wrong!)
git pull
composer update
php artisan migrate
php artisan up
Just be sure that your migrations always have rollbacks (a working down() method)!
You can make another backup/dump after migrating if you want to use real data in staging or development.
I have a SQL Server CE database on my 'live' host that I deployed a few weeks ago. It has a migration history of two old migrations. Then I have my dev database, that has gone through umpteen migrations, and several delete and recreate moments.
Now I would like to use EF migrations to build a migration that will update the production db to match my code-first model on dev. I thought that if I cleared the prod migration history, and ran Add-Migration, EF would compare database and model, and generate a migration class to bring the db up to date with the model.
What really happens is that the generated migration tries to create the whole db: all tables, FKs, and indexes. How do I get a proper update-only migration using EF migrations?
If you still have the deployed migration on your Dev box, you can create a script that will bring the deployed version up to date:
Update-Database -Script -SourceMigration: VersionDeployed -TargetMigration: CurrentMigration
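The -Script flag writes the SQL out instead of applying it, so you can review exactly what would run against PROD. The output has roughly this shape (object names below are hypothetical):

-- Rough shape of the generated script (hypothetical object names):
ALTER TABLE [Users] ADD [Email] [nvarchar](256) NULL;
-- EF also records each applied migration in its history table:
INSERT INTO [__MigrationHistory] ([MigrationId], [ContextKey], [Model], [ProductVersion])
VALUES (N'201501011200000_AddEmail', N'MyApp.Migrations.Configuration', 0x0 /* model blob */, N'6.1.3');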
You could also try bringing the PROD database down to your dev machine together with its migration history (don't clear it). EF should then compare the model in the last migration to the current model based on the code.
http://cpratt.co/migrating-production-database-with-entity-framework-code-first/
I have the following scenario for my application:
1 Production Server
1 Test Server
n Development Computers
For database migration we use Hibernate Schema Update for the schema and DBUnit for filling in all the production data (on all servers/computers). When the schema update is done, I generate a new DTD file for the new schema so I can do a fresh import of the DBUnit XML. The application updates the database at startup with the XML file (only on development and test servers/computers!)
Of course this approach is not optimal and is fragile. So I looked at Liquibase and Flyway. Both seem to be great tools, but what I don't get is: how do I migrate the data? In my case, I dump the data of the production system once a week and add it to the application's source control as a DBUnit XML file, so all developers have "fresh" data and the test server has current production data, too.
The problem I see with Liquibase and Flyway is that there is no solution for doing automated diffs of the database data and generating the migration changes automatically.
So my idea is the following, with these steps:
1. Set Hibernate to validate instead of update.
2. When a STRUCTURAL database change is needed, add it to the migration script for the major version.
3. Keep database inserts out of the migration scripts.
4. Generate a new DTD for DBUnit based on the new database structure.
5. Generate the DBUnit XML from the production database.
Another idea would be to utilize Flyway's JavaMigration and provide an initial database dump based on DBUnit. All other changes to database data would be handled in migration scripts. But the problem remains: how to diff the current migration script state against the production database state?
It would be awesome if anyone could give me hints on how to handle my scenario :)
If your goal is to use dumps of the PROD database in DEV and TEST environments, I would:
Configure the DB migration tool to run on application startup (both Flyway and Liquibase support this through their respective APIs)
Package all the DB structure migrations together with the app
Dump both data and structure from PROD
This way, when the PROD database is restored to DEV or TEST, the old metadata table of the migration tool is restored as well.
When the app starts, the migration tool will discover that the db structure is outdated and upgrade it to the newest version. Done.
No need to use DBUnit for this.
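If you want to verify what state a restored database thinks it is in, you can peek at the migration tool's own metadata, e.g. for Flyway (table and column names vary between Flyway versions):

-- Flyway's history table records every applied migration
-- (schema_version in older versions, flyway_schema_history in newer ones):
SELECT version, description, success FROM schema_version ORDER BY installed_rank;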
The short answer is that all your changes would be done through Liquibase or Flyway.
We use Flyway, with the same prod/test/development setup.
We make all db changes (structure or metadata) using Flyway migration scripts, stored in source control. Each time we do a new deployment to an environment, we first run the migration scripts there (using either the command-line tool or the Maven plugin). The code first goes to the development environment, gets integration-tested there, and then moves on to test and production.
The main thing to watch out for is that Flyway requires linear versioning of the files, so if two developers check in migrations at the same time, one of them will have to rename theirs.
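For reference, each Flyway migration is just a versioned SQL file, and the version prefix is what has to stay linear, e.g.:

-- V7.1__add_customer_email.sql (version and names are just examples)
ALTER TABLE customer ADD COLUMN email VARCHAR(255);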