I have a .NET Core project with lots of migrations in it. I also have the database (it was given to me and was not generated by running the migrations on my PC).
When I try to add a migration, I get an error saying there are pending migrations and that I first need to update the database. As you can guess, running the update-database command then gives me:
object ... already exists error
If I drop the database, the update-database command will generate the whole database from scratch; however, there is a lot of data in the database, and recreating it with migrations would wipe that data out.
I thought of generating a data script from the database, creating the database with migrations, and then running the script, but the data script is very large and running it has lots of other issues.
I just need to remove the old migrations without unapplying them (since unapplying them would also drop my tables from the database).
Also note that there is no __MigrationHistory table in the database.
If there is no __MigrationHistory table in the database and you want to skip the migrations, you can comment out the bodies of the Up and Down methods in the migration files and then apply the migrations. This does not affect your database; it only adds the migration records to the __MigrationHistory table.
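For illustration, here is a minimal sketch of what such a commented-out migration might look like in an EF Core project (the class, table, and column names are hypothetical):

    using Microsoft.EntityFrameworkCore.Migrations;

    // Hypothetical migration whose operations are commented out: applying it
    // changes nothing in the schema, it only records the migration as applied
    // in the migrations history table.
    public partial class AddBuildingAddress : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // The column already exists in the database that was handed to us,
            // so the operation is commented out on purpose.
            // migrationBuilder.AddColumn<string>(
            //     name: "Address",
            //     table: "Building",
            //     nullable: true);
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // migrationBuilder.DropColumn(
            //     name: "Address",
            //     table: "Building");
        }
    }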
Note: this problem comes up from time to time, and the approach above will most probably fix it without affecting your data. There are, of course, other workarounds that can be applied to this kind of problem.
I am deploying a DACPAC via SqlPackage.exe to database servers that have a large volume of transaction replication in SQL Server. The DACPAC is built as the output of a SQL Server Database Project. When I attempt to deploy the DACPAC to a database with replication enabled, the SqlPackage execution returns errors such as: Error SQL72035: [dbo].[SomeObject] is replicated and cannot be modified.
I found the DoNotAlterReplicatedObjects parameter, which skips altering objects that have replication turned on and would silence those errors, but that isn't what I want. Instead, I want to alter all objects, regardless of replication, as part of the deployment.
The only option that I can think of to deploy the DACPAC to these replicated databases is to:
1. remove the replication through a script before deploying,
2. deploy the DACPAC via SqlPackage,
3. reconstruct the replication via scripts after deploying.
Unfortunately, the database is so heavily replicated that step #3 above would take over 7 hours to complete, so this is not a practical solution.
Is there a better way to use SQL Server Database Projects and DACPACs to deploy to databases with a lot of replication?
Any assistance would be appreciated. Thank you in advance for your advice.
We solved the issue by doing the following; hopefully this will work for others as well. The high-level idea is that you need to disable "Do not ALTER replicated objects" and enable "Ignore column order".
There are a couple of ways to do this.
If you are using the SqlPackage tool in your deployment pipeline, use the DoNotAlterReplicatedObjects and IgnoreColumnOrder publish properties (see this link), i.e. /p:DoNotAlterReplicatedObjects=False /p:IgnoreColumnOrder=True.
If you are using the C# or PowerShell Dac classes, use the DacDeployOptions.DoNotAlterReplicatedObjects and DacDeployOptions.IgnoreColumnOrder properties.
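For example, a minimal DacFx sketch that sets both options (the connection string, database name, and dacpac path below are placeholders):

    using Microsoft.SqlServer.Dac;

    // Deploy a dacpac while allowing replicated objects to be altered
    // and appending new columns instead of rebuilding tables.
    var services = new DacServices("Server=.;Database=master;Integrated Security=true");
    var options = new DacDeployOptions
    {
        DoNotAlterReplicatedObjects = false, // allow ALTERs on replicated objects
        IgnoreColumnOrder = true             // append columns rather than drop/recreate the table
    };

    using (var package = DacPackage.Load(@"C:\build\MyDatabase.dacpac"))
    {
        services.Deploy(package, "MyDatabase", upgradeExisting: true, options: options);
    }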
You can also directly modify the "Advanced Publish Settings" in the Visual Studio IDE: uncheck the Do not ALTER replicated objects checkbox and tick the Ignore column order checkbox. See this Stack Overflow answer for an example of the Ignore column order checkbox.
Our theory on why this works is that ALTER TABLE can only append a column to the end of a table, so the only way to add a column at a specific position is to drop and recreate the table. The Ignore column order option tells the publisher to append the column to the end regardless of where the column is positioned in the script.
The place this could be a problem is if you do an INSERT without specifying a column list, because you expect the columns to be in a specific order and they are not.
Another potential side effect you could run into is that a table created from scratch by the DACPAC could have a different column order than a table altered by the DACPAC. We have been using this solution for a few months without issues, but the above are things to be aware of.
I hope that this helps.
I created a DACPAC project, and its deployment hasn't proceeded past QA yet. There were only two tables in it. I just did a refactoring: I deleted the two tables and added one new one. Deployment worked locally; however, it failed in QA, and I'm getting warnings and errors about adding or renaming columns. I wasn't expecting this, since I deleted the existing tables and added a new one.
How can I get past these warnings and errors? Is it safe/advisable to delete the refactorlog file? I'm assuming that's the reason the deployment is trying to update from a previous state, instead of just doing a fresh deployment (which is what I want).
I created a pre-deployment script to drop the two original tables. I was hoping the deployment would just create the new table.
I got this working by deleting the DB in our QA environment. Then I cleared the contents of the refactorlog in the DACPAC, and redeployed.
I believe another alternative would have been to clear the __RefactorLog table in the DB while also clearing the contents of the refactorlog in the DACPAC, then redeploying.
I read that the __MigrationHistory table is used solely by EF.
I have a team working on the same project, and we always have problems with the versioning files EF6 creates in the application: I have to delete them every time I create a migration and someone else created another one before me (the same happens to the other team members).
Is there a way to restore a previous version of a database schema using the data in the __MigrationHistory table alone? Or is it not possible without the versioning files EF6 creates in the application?
The clean way is to define the Down() method in the migration files correctly. Then you can go back to a certain version of the DB with the following command:
Update-Database -TargetMigration <Name of last migration to be applied>
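For example, a migration whose Down() exactly reverses its Up() can be stepped back past with that command (the table and column names here are only illustrative):

    using System.Data.Entity.Migrations;

    public partial class AddCustomerEmail : DbMigration
    {
        public override void Up()
        {
            AddColumn("dbo.Customers", "Email", c => c.String(maxLength: 256));
        }

        public override void Down()
        {
            // Reverses Up() exactly, so Update-Database -TargetMigration can
            // roll the database back past this migration safely.
            DropColumn("dbo.Customers", "Email");
        }
    }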
Sometimes you can also make EF happy by just adding an empty migration after the two migrations to be "merged". This empty migration is only there so EF can update its internal state and write it to __MigrationHistory correctly. This should get rid of the "Unable to update database to match the current model because there are pending changes and automatic migration is disabled." error.
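In EF6 such an empty migration can be generated with Add-Migration <Name> -IgnoreChanges; the resulting class (the name is illustrative) is just an empty shell:

    using System.Data.Entity.Migrations;

    // Empty "merge" migration: it changes nothing in the schema and only lets
    // EF record the current model state in __MigrationHistory.
    public partial class MergeTeamMigrations : DbMigration
    {
        public override void Up()
        {
        }

        public override void Down()
        {
        }
    }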
To prevent the problems with migrations being created in parallel you have described, we always use the following process:
Only one team should have changes to the DB model checked out at any time
Before adding a new migration, always get latest version and apply Update-Database
Only now make the changes to the POCOs / ModelBuilder / DbContext
Add your migration with Add-Migration and also define the Down() method
Check in your changes before anyone else is allowed to make changes to the DB model
Track the migrations to be applied in an Excel file (for maintenance / support)
Note that model changes are tracked per DbContext, so it may be possible to split the DbContext into one separate context per team. This would result in one set of migrations per DbContext, i.e. per team (see the sketch below).
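A rough sketch of what such a split might look like in EF6 (all class names, entity types, and folder paths below are made up for illustration):

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    public class Order { public int Id { get; set; } }
    public class Invoice { public int Id { get; set; } }

    // One context per team, each with its own set of entities.
    public class SalesContext : DbContext
    {
        public DbSet<Order> Orders { get; set; }
    }

    public class BillingContext : DbContext
    {
        public DbSet<Invoice> Invoices { get; set; }
    }

    // One migrations configuration per context; separate folders and ContextKeys
    // keep each team's migrations and history entries apart.
    internal sealed class SalesConfiguration : DbMigrationsConfiguration<SalesContext>
    {
        public SalesConfiguration()
        {
            MigrationsDirectory = @"Migrations\Sales";
            ContextKey = "MyApp.SalesContext";
        }
    }

    internal sealed class BillingConfiguration : DbMigrationsConfiguration<BillingContext>
    {
        public BillingConfiguration()
        {
            MigrationsDirectory = @"Migrations\Billing";
            ContextKey = "MyApp.BillingContext";
        }
    }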
I have a relational database on my server that I've used for developing a system. Now I want to make it live and truncate all data from the tables. I've manually deleted data from the tables, and after that I ran the truncate command, but it shows this error:
Cannot truncate table 'dbo.Building' because it is being referenced by a FOREIGN KEY constraint.
Is there any way to empty my database using a single command? I've searched Google, and every result tells me to use the truncate command, but I cannot use it on all the tables because of this error.
I want the data in every table to start again from ID no. 1.
Please give me a guideline to truncate all the data from my database.
Now I want to make it live and truncate all data from the tables
You are approaching this completely wrong. Even if you succeed, you will deploy a system that will be impossible to upgrade. As you continue to develop, you will modify the development database, and when you have to deploy the next version of your application you'll realize you need to modify the production database while keeping all of its data.
Stop the deployment right now and go back to the drawing board to design a proper deployment strategy. I recommend migrations. Another alternative is using diff tools.
Truncating tables is completely irrelevant for what you're actually trying to achieve.
There are two options I can think of.
You need to drop (not just disable) all foreign keys, then run truncate to delete all table data using any method, and finally recreate all the foreign keys (a related, simpler variant is sketched below).
You can also script out only the DDL and deploy the database using that script, instead of handing the database itself over to the deployment team.
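As a sketch of what the first option is trying to achieve (emptying every table and restarting the IDs at 1), here is a simpler variant that only disables the foreign keys, uses DELETE instead of TRUNCATE, and reseeds the identity columns. The connection string is a placeholder and sp_MSforeachtable is an undocumented SQL Server procedure, so treat this as an assumption and test it on a copy of the database first:

    using Microsoft.Data.SqlClient;

    const string connectionString =
        "Server=.;Database=MyDb;Integrated Security=true;TrustServerCertificate=true";

    // Disable all foreign keys, empty every user table, restart identity
    // columns so the next inserted row gets ID 1, then re-enable the keys.
    const string resetSql = @"
    EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
    EXEC sp_MSforeachtable 'DELETE FROM ?';
    EXEC sp_MSforeachtable 'IF OBJECTPROPERTY(OBJECT_ID(''?''), ''TableHasIdentity'') = 1
                                DBCC CHECKIDENT (''?'', RESEED, 0)';
    EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(resetSql, connection))
    {
        connection.Open();
        command.CommandTimeout = 0; // large databases can take a while
        command.ExecuteNonQuery();
    }

Dropping and recreating the keys (as in the first option) is still needed if you insist on TRUNCATE itself, because TRUNCATE TABLE is not allowed on a table referenced by a foreign key even when the constraint is disabled.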
I have recently created a database project in VS2010 for an existing SQL Server 2008 R2 DB. I have updated 1 table out of 11 by adding 3 new columns to the end. I then updated 4 views that referred to that table.
I then tried a Build/Deploy, having it only generate a script.
I have inspected the script and for every single table in the DB, it has generated code that will create a temp version of each table, copy the data from the existing table, drop the original and rename the copy.
I saw the posting on here where it insisted on rebuilding the table for dropped columns, and I tried setting IgnoreColumnOrder, but it didn't make any difference. It didn't seem relevant to my situation anyway, so I wasn't surprised.
I created my DB project by getting the DBA to give me a fully scripted version of Production, built that DB on my PC version of SQL Server and then created my initial project from that. I don't think that would make any difference and I have compared the project definition of the tables to the target Dev DB and they are the same.
I have "Always recreate database" unticked and "Block incremental deployment if data loss might occur" ticked. Don't suppose they have anything to do with my issue?
Any ideas?
I found a backup of the database and, as per Peter's suggestion, ran a Schema Compare. The difference turned out to be that the target DB had PAGE compression on most of the tables, but that was not in the project definition.