EF6 - Restoring previous database schema from __MigrationHistory - sql-server

I read that the __MigrationHistory table is used solely by EF.
I have a team working on the same project, and we always have problems with the versioning (migration) files EF6 creates in the application: I have to delete them every time I create a migration and someone else has created another one before me (the same happens to the other team members).
Is there a way to restore a previous version of a database schema using the data in the __MigrationHistory table alone? Or is it not possible without the versioning files EF6 creates in the application?

The clean way is to define the Down() method in the migration files correctly. Then you can go back to a certain version of the DB with the following command:
Update-Database -TargetMigration <Name of last migration to be applied>
Sometimes you can also make EF happy by just adding an empty migration after the two migrations to be "merged". This empty migration is only there so EF can update its internal state and write it to __MigrationHistory correctly. This should get rid of the "Unable to update database to match the current model because there are pending changes and automatic migration is disabled." error.
To prevent the problems with migrations being created in parallel you have described, we always use the following process:
Only one team should have changes to the DB model checked out at any time
Before adding a new migration, always get latest version and apply Update-Database
Only now make the changes to the POCOs / ModelBuilder / DbContext
Add your migration with Add-Migration and also define the Down() method
Check in your changes before anyone else is allowed to make changes to the DB model
Track the migrations to be applied in an Excel file (for maintenance / support)
Note that model changes are tracked per DbContext, so it may be possible to split the DbContext into one separate context for each team. This would result in one set of migrations per DbContext, i.e. per team.
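If you want to see what EF has recorded so far, you can inspect the history table directly; a quick query, assuming the default EF6 table name and dbo schema (each DbContext is distinguished by its ContextKey):

SELECT MigrationId, ContextKey, ProductVersion
FROM dbo.__MigrationHistory
ORDER BY ContextKey, MigrationId;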

Related

How to remove all migrations without unapplying them

I have a .NET Core project with lots of migrations in it. I also have the database (it was given to me, not generated by the migrations on my PC).
When I try to add a migration I get an error that there are pending migrations and that I first need to update the database, and, as you can guess, running the update-database command gives me:
object ... already exists error
If I drop the database, the update-database command will generate the whole database, but there is a lot of data in the database that recreating it with migrations would wipe out.
I thought of generating a data script from the database, then creating the database with migrations and then running the script, but the data script is very large and running it has lots of other issues.
I just need to remove the old migrations without unapplying them (as unapplying would also remove my tables from the database).
Also note that there is no __MigrationHistory table in the database.
If there is no __MigrationHistory table in the database and you want to skip the migrations, you can comment out the bodies of the Up and Down methods in the migration files and then apply the migrations. This does not affect your database; it only adds the migration records to the history table.
Note: this problem comes up from time to time, and the approach above will most likely fix it without affecting your data. There are, of course, other workarounds for this kind of problem.
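As an alternative sketch (not from the answer above): if the database already matches the latest model, you can record the migrations as applied by inserting rows into the history table yourself. For EF Core the table is dbo.__EFMigrationsHistory (EF6 uses dbo.__MigrationHistory with a different shape); the migration id and product version below are placeholders you would replace with your own values.

IF OBJECT_ID(N'dbo.__EFMigrationsHistory') IS NULL
    CREATE TABLE dbo.__EFMigrationsHistory (
        MigrationId    nvarchar(150) NOT NULL CONSTRAINT PK___EFMigrationsHistory PRIMARY KEY,
        ProductVersion nvarchar(32)  NOT NULL
    );

-- One row per migration that should count as already applied (placeholder values):
INSERT INTO dbo.__EFMigrationsHistory (MigrationId, ProductVersion)
VALUES (N'20230101000000_InitialCreate', N'6.0.0');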

VSTS build - Incremental database deployment in distributed environment

I have a SQL Server database working with a .NET MVC 5 application (Visual Studio 2015). My database code is source-controlled using an SSDT project. I am using SqlPackage.exe to deploy the database to the staging environment using the .dacpac file created by the SSDT project build, run from a PowerShell task in the VSTS build.
This way I can make DB schema changes in a source-controlled way. Now the problem is the master data insertion for the database.
I use a SQL script file with the data insertion statements, which is executed as a post-deployment script. This file is also source-controlled.
The problem is that initially we prepared the insertion script to target a sprint (taking sprint n as a base), which works well for the first release. But if we update some master data in the next sprint, how should the master data inserts be updated?
Add new update/insert queries at the end of the script file? In this case the post-deployment script will be executed by CI and will try to insert the data again and again in subsequent builds, which will eventually fail if we have made schema changes to the master tables of this database.
Update the existing insert queries in the data insertion script? In this case we also have trouble, because at the post-build event the whole data set will be re-inserted.
Maintain separate data insertion scripts for each sprint and update the script reference to the new file for the post-build event of SSDT? This approach takes manual effort and is error-prone, because the developer has to remember this process. The other problem with this approach is setting up one more database server in the distributed server farm: multiple data insertion scripts will throw errors, because SSDT has the latest schema and will create a database from it, but the older data scripts insert data for a previous schema (a sprint-wise DB schema that was changed in later sprints).
Can anyone suggest the best approach, one with less manual effort that still covers all the cases above?
Thanks
Rupendra
Make sure your pre- and post-deployment scripts are always idempotent. However you want to implement that is up to you. The scripts should be able to be run any number of times and always produce correct results.
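For example, a minimal idempotent seed statement could look like this (the table and values are purely illustrative):

-- Safe to run on every deployment: only inserts the row if it is missing.
IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'NEW')
    INSERT INTO dbo.OrderStatus (StatusCode, Description)
    VALUES ('NEW', 'Newly created order');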
So if a schema change would affect the deployment scripts, updating the scripts is part of that change and accompanies it in source control.
Versioning of your database is already a built in feature of SSDT. In the project file itself, there is a node for the version. And there is a whole slew of versioning build tasks in VSTS you can use for free to version it as well. When SqlPackage.exe publishes your project with the database version already set, a record is updated in msdb.dbo.sysdac_instances. It is so much easier than trying to manage, update, etc. your own home-grown version solution. And you're not cluttering up your application's database with tables and other objects not related to the application itself.
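For instance, you can inspect what SqlPackage has recorded with a query along these lines (check the exact column list of the view on your SQL Server version):

-- DAC instances registered by SqlPackage, including the dacpac version.
SELECT instance_name, type_version, date_created
FROM msdb.dbo.sysdac_instances;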
I agree with keeping sprint information out of the mix.
In our projects, I label source on successful builds with the build number, which of course creates a point in time marker in source that is linked to a specific build.
I would suggest using MERGE statements instead of inserts. This way you are protected against duplicate inserts within a sprint's scope.
The next thing is how to distinguish the inserts of different sprints. I would suggest implementing version numbering to keep the database in sync with the sprints, so create a table DbVersion(version int).
Then in post deployment script do something like this:
DECLARE @version int = (SELECT ISNULL(MAX(version), 0) FROM DbVersion);

IF @version < 1
BEGIN
    -- inserts/merges for sprint 1
END
IF @version < 2
BEGIN
    -- inserts/merges for sprint 2
END
-- ...
-- record the version of the latest sprint handled above
INSERT INTO DbVersion (version) VALUES (@currentVersion);
What I have done on most projects is to create MERGE scripts, one per table, that populate "master" or "static" data. There are tools such as https://github.com/readyroll/generate-sql-merge that can be used to help generate these scripts.
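A per-table merge script for static data might look roughly like this (the table, columns and rows are placeholders, not taken from the question or the tool):

-- Keep dbo.OrderStatus exactly in sync with the rows defined in source control.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, 'NEW',     'Newly created order'),
    (2, 'SHIPPED', 'Order has been shipped')
) AS source (StatusId, StatusCode, Description)
    ON target.StatusId = source.StatusId
WHEN MATCHED THEN
    UPDATE SET StatusCode = source.StatusCode, Description = source.Description
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusCode, Description)
    VALUES (source.StatusId, source.StatusCode, source.Description)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;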
These get called from a post-deployment script, rather than in a post-build action. I normally create a single post-deployment script for the project (you're only allowed one anyway!) and then include all the individual static data scripts using the :r syntax. A post-deploy script is just a .sql file with a build action of "Post-Deploy"; it can be created "manually" or by using the "Add New Object" dialog in SSDT and selecting Script -> Post-Deployment Script.
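A minimal post-deployment script that pulls in such per-table scripts could look like this (the file paths are illustrative and must match files in your project):

-- Post-deployment script: include one static-data merge script per table.
:r .\StaticData\dbo.OrderStatus.data.sql
:r .\StaticData\dbo.Country.data.sql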
These files (including the post-deploy script) can then be versioned along with the rest of your source files; if you make a change to the table definition that requires a change in the merge statement that populates the data, then these changes can be committed together.
When you build the dacpac, all the master data will be included, and since you are using merge rather than insert, you are guaranteed that at the end of the deployment the contents of the tables will match the contents of your source control, just as SSDT/sqlpackage guarantees that the structure of your tables matches the structure of their definitions in source control.
I'm not clear on how the notion of a "sprint" comes into this, unless a "sprint" means a "release"; in this case the dacpac that is built and released at the end of the sprint will contain all the changes, both structural and "master data" added during the sprint. I think it's probably wise to keep the notion of a "sprint" well away from your source control!

migrating database back to original database entity framework migrations

I have made some changes to a database using code-first Entity Framework migrations (let's say db2).
I now want to revert the changes back to the original database. As we want to retain the data in the old database (db1), I can't simply clone it.
Can someone please confirm the right process to do this?
I am assuming I will need to perform a rollback on db2 back to the original state it was in when it was cloned from db1.
I would then switch context so I am pointing at db1.
I then add a migration to generate all the database changes.
I then perform update-database to make the changes.
Is this correct?
I will then need to run a migration to br
You can use the -TargetMigration parameter in order to migrate to a specific version:
Update-Database -TargetMigration db1

SSDT implementation: Alter table instead of Create

We are just trying to implement SSDT in our project.
We have lots of clients for one of our products, which is built on a single DB (DBDB) with tables and stored procedures only.
We created one SSDT project for database DBDB (using VS 2012 > SQL Server Object Browser > right-click on project > New Project).
Once we build that project, it creates one .sql file.
Problem: if we run that file on the client's DBDB, it creates all the tables again and deletes all the records in them [this fulfills the requirements but deletes the existing records :-( ].
What we need: only the changes that are not yet present in the client's DBDB should be applied.
Note: we have no direct access to the client's DBDB database for comparing it with our latest DBDB. We can only send them some magic script file that will update their DBDB to the latest state.
The only way to update the client's DB is to compare the DB schemas and then apply the delta. Whichever way you do it, you will need some way to get hold of the schema that is running at the client:
If you ship a versioned product, it is easiest to deploy version N-1 to your development server and compare that to the version N you are going to ship. This way, SSDT can generate the migration script you need to ship to the client to pull that DB up to the current schema.
If you don't have a versioned product, or your client might have altered the schema, you will need to find a way to extract the schema on site (maybe using SSDT there) and then let SSDT create the delta.
Option: You can skip the compare feature of SSDT altogether, but then you need to write your migration script yourself. For each modification to the schema, you write the DDL statements yourself and wrap them in IF clauses that check for the old state, so the changes will only be made once and only if the old state exists. This way it doesn't really matter from which state to which state you are going, as the script determines for each step if and what to do.
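A hand-written, re-runnable change of that kind could look like this (table and column names are illustrative):

-- Only add the column if it is not already there.
IF COL_LENGTH('dbo.Customers', 'Email') IS NULL
    ALTER TABLE dbo.Customers ADD Email nvarchar(256) NULL;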
The last option is the most flexible, but it requires thorough testing of its own, and of course it should have been started well before the situation you are in now, where you no longer know what the changes have been. But it can help next time.
This only applies to schema changes on the tables, because you can always fall back to simply dropping and recreating ALL stored procedures, since nothing is lost by dropping them.
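For the procedures, a drop-and-recreate pattern keeps the script re-runnable; a sketch with an illustrative procedure (on SQL Server 2016 SP1 and later, CREATE OR ALTER PROCEDURE achieves the same):

IF OBJECT_ID(N'dbo.GetCustomerOrders', N'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomerOrders;
GO
-- Recreate the procedure from its current definition in source control.
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerId int
AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
GO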
It sounds like you may not be pushing the changes correctly. You have a couple of options if you've built a SQL Project.
Give them the dacpac and have them use SQLPackage to update their own database.
Generate an update script against your customer's "current" version and give that to them.
In any case, it sounds like your publish option might be set to drop and recreate the database each time. I've written quite a few articles on SSDT SQL Projects and getting started that might be helpful here: http://schottsql.blogspot.com/2013/10/all-ssdt-articles.html

How to partially migrate a database to a new system over time?

We are in the process of a multi-year project where we're building a new system and a new database to eventually replace the old system and database. The users are using the new and old systems as we're changing them.
The problem we keep running into is when an object in one system is dependent on an object in the other system. We've been using views, but have run into a limitation with one of the technologies (Entity Framework) and are considering other options.
The other option we're looking at right now is replication. My boss isn't excited about the extra maintenance that replication would cause. So, what other options are there for getting dependent data into the database that needs it?
Update:
The technologies we're using are SQL Server 2008 and Entity Framework. Both databases are in the same SQL Server instance, so linked servers shouldn't be necessary.
The limitation we're facing with Entity Framework is that we can't seem to create the relationships between the table-based entities and the view-based entities. As far as I know, no relationship can exist in the database between a view and a table, so the edmx diagram can't infer it. And I cannot seem to create the relationship manually without getting errors. It thinks all columns in the view are keys.
If I leave it that way I get an error like this for each column in the view:
Association End key property [...] is not mapped.
If I try to change the "Entity Key" property to false on the columns that are not the key I get this error:
All the key properties of the EntitySet [...] must be mapped to all the key properties [...] of table viewName.
According to this forum post it sounds like a limitation of the Entity Framework.
Update #2
I should also mention the main limitation of the Entity Framework is that it only supports one database at a time. So we need the old data to appear to be in the new database for the Entity Framework to see it. We only need read access of the old system data in the new system.
You can use linked server queries to leave the data where it is, but connect to it from the other db.
Depending on how up to date the data in each DB needs to be and whether one data source can remain read-only, you can:
Use the Copy Database Wizard to create an SSIS package that you can run periodically as a SQL Agent task
Use snapshot replication
Create a custom BCP in/out process to get the data to the other DB
Use transactional replication, which can be near-realtime
If data needs to be read-write in both databases then you can use:
Transactional replication with updatable subscriptions
Merge replication
As you go down the list, the amount of work involved in maintaining the solution increases. Using linked server queries will work best if it's the right fit for what you're trying to achieve.
EDIT: If they're on the same server then, as suggested by another user, you should be able to access the table with servername.databasename.schema.tablename. It looks like it's an Entity Framework issue and not a DB issue.
I don't know about EntityToSql but I know in LinqToSql you can connect to multiple databases/servers in one .dbml if you prefix the tables with:
ServerName.DatabaseName.SchemaName.TableName
MyServer.MyOldDatabase.dbo.Customers
I have been able to click on a table in the .dbml, copy and paste it into the .dbml of the other project, prefix the name, and set up the relationships, and it works... Like I said, this was in LinqToSql; I have not tried it with EntityToSql. I would give it a shot before you go through all the work of replication and such.
If Linq-to-Entities cannot cross DBs, then replication, or something that emulates it, is the only thing that will work.
For performance purposes you probably want either Merge replication or Transactional with queued (not immediate) updating.
Thanks for the responses. We're going to try adding triggers to the old database tables to insert/update/delete records in the new tables of the new database. This way we can continue to use Entity Framework and also do any data transformations we need.
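A sketch of such a sync trigger, with illustrative database, table and column names:

CREATE TRIGGER dbo.trg_Customers_Insert
ON dbo.Customers
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Mirror new rows from the old table into the new database.
    -- (If the target column is an identity, SET IDENTITY_INSERT would also be needed.)
    INSERT INTO NewDatabase.dbo.Customers (CustomerId, Name, Email)
    SELECT CustomerId, Name, Email
    FROM inserted;
END;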
Once the UI functions move over to the new system for a particular feature, we'll remove the table from the old database and add a view to the old database with the same name that points to the new database table for backwards compatibility.
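The backwards-compatibility view in the old database could then be as simple as this (names are illustrative):

CREATE VIEW dbo.Customers
AS
SELECT CustomerId, Name, Email
FROM NewDatabase.dbo.Customers;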
One thing that I realized needs to happen before we can do this is that we have to search all our code and SQL for @@IDENTITY and replace it with SCOPE_IDENTITY(), so the triggers don't mess up the IDs in the old system.
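The reason for the @@IDENTITY sweep: @@IDENTITY returns the last identity value generated in the session, including one generated inside a trigger, whereas SCOPE_IDENTITY() only looks at the current scope. A small illustration (table names are placeholders):

INSERT INTO dbo.Orders (CustomerId) VALUES (42);
SELECT SCOPE_IDENTITY(); -- identity generated by the insert into dbo.Orders
SELECT @@IDENTITY;       -- may instead return an identity generated inside a sync trigger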

Resources