I have made some changes to a database using Code First Entity Framework migrations (let's say db2).
I now want to revert those changes and get back to the original database. As we want to retain the data in the old database (db1), I can't simply clone it.
Can someone please confirm the right process to do this?
I am assuming I will need to roll db2 back to the original state it was in when it was cloned from db1.
I would then switch context so I am pointing at db1.
I then add a migration to generate all the database changes.
I then run Update-Database to apply the changes.
Is this correct?
I will then need to run a migration to bring db1 up to date.
You can use the -TargetMigration parameter to migrate to a specific version:
Update-Database -TargetMigration: db1
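Note that -TargetMigration expects the name of a migration, not a database, so "db1" above stands for whichever migration matches the schema db2 had when it was cloned from db1. A rough sketch in the Package Manager Console, with made-up migration and connection-string names:

# roll db2 back to the migration that matches db1's original schema
Update-Database -TargetMigration InitialCreate -ConnectionStringName Db2

# then point at db1 and apply the pending migrations there
Update-Database -ConnectionStringName Db1

Rolling all the way back to an empty database is also possible with Update-Database -TargetMigration $InitialDatabase.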
I have several tables that I created through migrations. What happens if I change the table structure directly from phpMyAdmin without using a migration? If my backend team then pulls my project and runs the "php artisan migrate" command, will their database be the same as mine?
If you make changes directly through phpMyAdmin, they will only exist in your own database. You should change the migrations, or, if you can't change the original ones because you don't want to reset the database, create a new migration that alters the table. Check the documentation for altering tables here: https://laravel.com/docs/5.7/migrations#modifying-columns
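For example, instead of touching the table in phpMyAdmin, you could scaffold an alter migration that everyone on the team receives when they pull and migrate (the migration and table names below are made up):

php artisan make:migration add_status_to_posts_table --table=posts
# edit the generated migration to add/alter the column, then run:
php artisan migrate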
I read that the __MigrationHistory table is used solely by EF.
I have a team working on the same project, and we always have problems with the versioning files EF6 creates in the application: I have to delete them every time I create a migration and someone else created another one before me (the same happens to the other team members).
Is there a way to restore a previous version of a database schema using the data in the __MigrationHistory table alone? Or is it not possible without the versioning files EF6 creates in the application?
The clean way is to define the Down() method in the migration files correctly. Then you can go back to a certain version of the DB with the following command:
Update-Database -TargetMigration <Name of last migration to be applied>
Sometimes, you can also make EF happy by just adding an empty migration after the two migrations to be "merged". This empty migration is just there so EF can update its internal state and write it to __MigrationHistory correctly. This should get rid of the "Unable to update database to match the current model because there are pending changes and automatic migration is disabled." error.
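On EF6, one way to scaffold such an empty "merge" migration (the migration name is made up) is the -IgnoreChanges switch:

Add-Migration MergeTeamBranches -IgnoreChanges
Update-Database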
To prevent the problems you have described with migrations being created in parallel, we always use the following process:
Only one team should have changes to the DB model checked out at any time
Before adding a new migration, always get the latest version and run Update-Database
Only now make the changes to the POCOs / ModelBuilder / DbContext
Add your migration with Add-Migration and also define the Down() method (see the sketch after this list)
Check in your changes before anyone else is allowed to make changes to the DB model
Track the migrations to be applied in an Excel file (for maintenance / support)
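In Package Manager Console terms, the middle steps of that list roughly look like this (the migration name is made up):

# get the latest source, then bring your local DB up to date
Update-Database
# make your model changes, then scaffold the migration and review its Down() method
Add-Migration AddCustomerStatus
# apply it locally before checking in
Update-Database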
Note that model changes are tracked per DbContext, so maybe it is possible to split the DbContext into one separate context for each team. This would result in one set of migrations per DbContext, i.e. per team.
I want to manually remove a flyway migration that has run successfully against the database. It is the last migration that ran.
Will this work:
Manually revert the changes that were performed in the migration script (it added a column, so I will drop this column)
Remove the entry for the migration from the schema_version table
Is there anything else I need to do?
Yes, that will work, but in addition you will need to remove the offending migration script if you don't want it to run again on the next migrate.
You could also leverage repair if you wanted to keep the migration around but make alterations to it.
Ideally, if you want to undo the changes done by a migration, you should create another migration script that does so. This practice is recommended because it avoids modifying the DB state outside of Flyway.
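A sketch with the Flyway command line, assuming connection details are already set up in flyway.conf:

flyway info      # see which migrations have been applied
flyway repair    # realign checksums after altering an already-applied script
flyway migrate   # apply anything still pending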
We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single DB (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Browser > right click on project > New Project).
Once we build that project, it creates one .sql file.
Problem: if we run that file on a client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the schema requirements but deletes the existing records :-( ]
What we need: only the changes that are not yet present in the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database to compare it with our latest DBDB. We can only send them some magic script file which will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Any way you do it, you will need some way to get hold of the schema that's running at the client:
IF you ship a versioned product, it is easiest to deploy version N-1 of that to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull that DB up to the current schema.
IF you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema on site (maybe using SSDT there) and then let SSDT create the delta.
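In the versioned case, one way to produce that migration script is SqlPackage's Script action against a dev copy of the previous version (server and database names below are made up; the dacpac comes from your project's build output):

SqlPackage.exe /Action:Script /SourceFile:DBDB.dacpac /TargetServerName:DevServer /TargetDatabaseName:DBDB_previous /OutputPath:Upgrade_DBDB.sql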
Option: You can skip using the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in if-clauses that check for the old state, so the changes will only be made once and only if the old state exists. This way, it doesn't really matter from which state to which state you are going, as the script determines for each step if and what to do.
The last is the most flexible, but requires deep testing in its own right and of course should have been started way before the situation you are in now, where you no longer know what the changes have been. But it can help for next time.
This only applies to schema changes on the tables, because you can always fall back to just dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
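For the hand-written approach above, each step is a guarded DDL statement; a minimal sketch (table and column names are made up), shown here wrapped in Invoke-Sqlcmd so it can be smoke-tested against a dev copy first:

# the guard makes the change idempotent: it only runs if the old state (no Status column) exists
Invoke-Sqlcmd -ServerInstance "DevServer" -Database "DBDB" -Query "IF COL_LENGTH('dbo.Customer', 'Status') IS NULL ALTER TABLE dbo.Customer ADD Status int NULL;"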
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html
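For the first option, the client-side call could look roughly like this (the server name is made up; CreateNewDatabase=False and BlockOnPossibleDataLoss=True are the stock publish properties that keep an existing database and its data from being dropped):

SqlPackage.exe /Action:Publish /SourceFile:DBDB.dacpac /TargetServerName:ClientServer /TargetDatabaseName:DBDB /p:CreateNewDatabase=False /p:BlockOnPossibleDataLoss=True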
I am relatively new to MS SQL Server. I need to create a test database from an existing test database with the same schema, then get the data from production and fill the newly created empty database. For this I was using Generate Scripts in SSMS, but now I need to do it on a regular basis in a job. Please guide me on how I can create empty databases automatically at a given point in time.
You will have a very hard time automating the Generate Scripts wizard. I would suggest using something like Red Gate's SQL Compare (or any alternative that supports a command line). You can create a new, empty database, then script a compare/deploy using the command line from SQL Server Agent.
Another, more icky, alternative is to deploy your schema and modules to the model database. You can keep this in sync using SQL Compare (or alternatives), or just be diligent about deploying schema/module changes; then, when you create a new database, it will automatically inherit the current state of your schema/modules. The problem with this approach (other than depending on you keeping model in sync) is that all new databases will inherit this schema, since there is currently no way to have multiple model databases.
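A rough outline of such a job step, assuming the SqlServer PowerShell module for Invoke-Sqlcmd; the SQL Compare switches are quoted from memory, so check the command-line documentation for your version:

# create the new, empty database
Invoke-Sqlcmd -ServerInstance "TestServer" -Query "CREATE DATABASE TestDB_New;"
# deploy the schema from the existing test database into it
SQLCompare.exe /Server1:TestServer /Database1:TestDB /Server2:TestServer /Database2:TestDB_New /Synchronize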
Have you considered restoring backups?
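If a periodic refresh from a backup (schema plus data) is acceptable, that can be scripted in the same job; a minimal sketch with made-up names, paths and logical file names:

Invoke-Sqlcmd -ServerInstance "TestServer" -Query "RESTORE DATABASE TestDB_New FROM DISK = N'C:\Backups\Prod.bak' WITH MOVE 'ProdData' TO N'C:\Data\TestDB_New.mdf', MOVE 'ProdLog' TO N'C:\Data\TestDB_New_log.ldf', REPLACE;"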
To add to Aaron's already good answer, I've been using SQLDelta for years - I think it's excellent.
(I have no connection to SqlDelta, other than being a very satisfied customer)