ASP.NET Code First automatic database updates

I am creating an application in C# ASP.NET using Code First Entity Framework that will use a different database for each customer (in other words, every customer has its own database, which is generated on first use).
I am trying to figure out a way to update all these databases automatically whenever I apply changes to my objects. In other words, how would I approach a clean, stepwise upgrade system in Code First EF?
Currently I am using the DropCreateDatabaseIfModelChanges initializer to generate a simple database that lets me test my application whenever a schema change occurs. However, this initializer drops the database, which is obviously unacceptable for customer databases.
I have to assume hundreds of customers, so updating all databases by hand is not an option.
I do not mind writing code that copies the data into a new database.
I think the best solution would be a way to back up a database somehow and then reinsert all data into the newly created database. Even better would be a way that automatically updates the schema without dropping the database. However, I have no idea how to approach this. Can anyone point me in the right direction?

The link posted by Joakim was helpful. It requires you to update to EF 4.3.1 (don't forget to update the references in your other projects if you have them), after which you can run the command that enables migrations. To automatically update the schema from code you can use:
// Apply any pending model changes using the migrations Configuration class
Configuration configuration = new Configuration();
DbMigrator migrator = new DbMigrator(configuration);
migrator.Update();
// Disable the database initializer so EF never drops or recreates the database
Database.SetInitializer<DbContext>(null);
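For completeness, the Configuration class used above is the migrations configuration that Enable-Migrations generates. A minimal sketch with automatic migrations switched on might look like this (MyContext is a placeholder for your own context type):

using System.Data.Entity.Migrations;

internal sealed class Configuration : DbMigrationsConfiguration<MyContext>
{
    public Configuration()
    {
        // Let DbMigrator.Update() apply model changes without explicit migration files
        AutomaticMigrationsEnabled = true;

        // Only enable this if you accept the risk of data loss when columns are dropped
        // AutomaticMigrationDataLossAllowed = true;
    }
}

With that in place, calling migrator.Update() at startup brings each customer's database up to the current model the first time that customer uses the application, without dropping any data.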

Related

Copy and Sync Database Changes to Entity Framework Models

We are using Entity Framework Code First, and have our models in C#.
One of our DBAs added an additional column; how do we migrate his change into our .NET Core project? Is there a command line to automatically sync this?
We know the command line below will take a whole database and scaffold it into C#. We are only interested in picking up small, incremental changes.
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
That's all you've got. Using an existing database is an all or nothing affair. Either you manage your entities via code and migrate changes there back to your database, or you scaffold the entities from your database. In other words, a database change means you need to re-scaffold everything. There's no such thing as "code migration".
Alternatively, I suppose you could just manually modify the entity class. For simple changes like adding a new column, that's probably the best approach, as long as you take care to replicate the property exactly as it should be to match the column in the database.
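As a rough sketch of that manual route, the new property just has to map to the column the DBA created; the table, column and property names below are invented for illustration:

using System.ComponentModel.DataAnnotations.Schema;

public class Blog
{
    public int BlogId { get; set; }
    public string Url { get; set; }

    // Hypothetical column added by hand by the DBA; mapping it explicitly means the
    // property name does not have to match the database column name
    [Column("approved_by")]
    public string ApprovedBy { get; set; }
}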
EDIT (based on comments on question)
Okay, so if you're doing code-first, then no one should be manually touching the database ever. Period. You have to just make that a rule. If the DBA wants to make a change, he can communicate that to your team, where you can make the appropriate code change and hand back a SQL script to perform the migration. This is where DevOps comes into play. Your DBA should be part of your team and you all should be making decisions together. If you can't do that, this isn't going to work.
Add column to the model.
Generate migration.
The migration file 20180906120000_add_dba_column.cs will be generated.
Manually add a record for the just-created migration to the __EFMigrationsHistory table (see the sketch after these steps).
Record: MigrationId: 20180906120000_add_dba_column, ProductVersion: <copy from previous migration>
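To make the last two steps concrete: because the DBA has already added the column, the generated Up() method must not try to add it again. A rough sketch of the generated file (the column and table names are invented for illustration), with the two options noted in comments:

using Microsoft.EntityFrameworkCore.Migrations;

public partial class add_dba_column : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Option A: empty this body and apply the migration normally, or
        // Option B: keep the call below and instead insert the row into
        //           __EFMigrationsHistory by hand so EF records the migration
        //           as already applied without executing it.
        migrationBuilder.AddColumn<string>(
            name: "DbaColumn",   // hypothetical column name
            table: "Blogs",      // hypothetical table name
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "DbaColumn", table: "Blogs");
    }
}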
If you want to do it automatically, that means you want to combine two different approaches at the same time.
I think there was a reason why the EF team separated the "EF Code First" and "EF Database First" approaches.
When you choose the EF Code First approach, it means the source of truth is your application code.
If you want to stick with the code-first approach, that means you should train your DBA to write EF migrations.

Proper change-logging impossible with Entity Framework?

I'd like to log all changes made to an SQL Azure database using Entity Framework 4.
However, I have failed to find a proper solution so far.
So far I can track all entities themselves by overriding SaveChanges() and using the ObjectStateManager to retrieve all added, modified and deleted entities. This works fine. Unfortunately, I don't seem to be able to retrieve any useful information out of the relationship entries. Our database model has some many-to-many relationships, where I want to log new/modified/deleted entries too.
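For context, a minimal sketch of the SaveChanges() approach described above, assuming MyEntities is the designer-generated, ObjectContext-derived context (the logging call itself is a placeholder):

using System.Data;
using System.Data.Objects;
using System.Linq;

public partial class MyEntities    // the other partial half is the generated ObjectContext
{
    public override int SaveChanges(SaveOptions options)
    {
        var entries = ObjectStateManager.GetObjectStateEntries(
            EntityState.Added | EntityState.Modified | EntityState.Deleted);

        // Relationship entries (entry.IsRelationship == true) only expose entity keys,
        // which is exactly why many-to-many changes are awkward to log this way.
        foreach (var entry in entries.Where(e => !e.IsRelationship))
        {
            // Placeholder: write entry.Entity, entry.State and, for modified entities,
            // entry.GetModifiedProperties() to your log store.
        }

        return base.SaveChanges(options);
    }
}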
I want to store all changes in an Azure Storage, to be able to follow changes made by a user and perhaps roll back to a previous version of an entity.
Is there any good way to accomplish this?
Edit:
Our scenario is that we're hosting a RESTful WebService that contains all business logic and stores the data in the Azure SQL database. A client must be authenticated as a user with the WebService, and I need to store information about which user changed the data.
See FrameLog, an Entity Framework logging library that I wrote for this purpose. It is open-source, including for commercial use.
Even if you don't want to use the library, you can look at the code to see one way of handling logging relationships. It handles all relationship multiplicities.
Particularly, see the code for the private methods logRelationshipChange, and logForeignKeyChange in the ChangeLogger class.
You can do it with a tracing provider.
You may want to consider just using a database trigger for this. Whenever a value in a table is changed, copy the row to another Archive table. It has worked pretty well for me.

How EF code first modeling will affect already existing data in database

It's clear to me that I can customize the behavior of the process that syncs models and the DB schema. I am using the DropCreateDatabaseIfModelChanges<> class to do so.
Assume that I have a working project and site, and the DB is filling up with data. Everything is working fine.
One day I decide that some functionality needs to be changed. The changes will affect the properties of my models (they can be renamed/deleted/added, some models will be new, some models are deleted).
My question: What will happen with the already existing data on my deployed site when I check in all of my changes?
Will I lose it? If so, how can I avoid that?
Yes, you will lose your data if your model changes and you are using DropCreateDatabaseIfModelChanges<T>.
To avoid this:
Don't use DB initializers in production (except maybe CreateDatabaseIfNotExists<T>). DB initializers are there to smooth the development experience, not for production use.
What you need is the new Migrations feature of Entity Framework 4.3 (currently in Beta 1), which provides automatic and code-based DB schema migration.
Also, you can now set the DB initializer from the *.config file, so you can easily switch between the development-time DropCreateDatabaseIfModelChanges and no initializer at all in your production configuration.
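For illustration, the code-side equivalent of that switch might look like this (MyContext stands in for your own context type; the answer's point is that the same choice can also be made purely from the *.config file):

// Development: drop and recreate the database whenever the model changes
Database.SetInitializer(new DropCreateDatabaseIfModelChanges<MyContext>());

// Production: disable initialization entirely and rely on Migrations instead
Database.SetInitializer<MyContext>(null);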

Database synchronization between a new greenfield project database and old projects database

I am thinking about developing a new greenfield app using DDD/TDD/NHibernate, with a new database schema reflecting the domain, where changes in the DB would need to be synchronized both ways with the old project's database. The requirement is that both projects will run in parallel, and once the new project starts adding more business value than the old one, the old project will be shut down.
One approach I have in mind is to achieve the DB synchronization via DB triggers. Once you insert/update/delete in the new database, the trigger for the table would need to correctly update the old database. The same goes for changes in the old database: its triggers would need to update the new database.
Example:
the old project has one table, Quote, with columns QuoteId and QuoteVersion. The correct domain model is one Quote object with many QuoteVersion objects, so the new database would have two tables, Quote and QuoteVersion. If you change the Quote table in the new DB, the trigger would need to update either all records with that QuoteId in the old DB or just the latest version. Conversely, if you update a Quote record in the old DB, you either update the record in the new DB, or perhaps update it only if the latest version of the Quote in the old DB was changed.
So there would need to be some logic in the triggers, and those SQL statements might be non-trivial. To ensure maintainability, the triggers would need thorough tests (save data in one DB, verify the data in the second DB, for different cases).
The question: do you think this trigger idea for DB synchronization is viable (I'm not sure yet how to ensure one trigger won't fire the other database's trigger)? Has anybody tried it and found out it goes to hell? Do you have a better idea of how to fulfil the requirement of keeping the databases in sync?
This is a non-trivial challenge, and I would not really want to use triggers - you've identified a number of concerns yourself, and I would add to this concerns about performance and availability, and the distinct likelihood of horrible infinite loop bugs - trigger in legacy app inserts record into greenfield app, causes trigger to fire in greenfield app to insert record in legacy app, causes trigger to fire in legacy app...
The cleanest option I've seen is based on a messaging system. Every change in the application fires a message, which is handled by a recipient at the receiving end. The recipient can validate the message, and - ideally - forward it to the "normal" code which handles that particular data item.
For example:
legacy app creates new "quote" record
legacy app sends a message with a representation of the new "quote"
message bus forwards message to greenfield app "newQuoteMessageHandler"
greenfield app "newQuoteMessageHandler" validates data
greenfield "newQuoteMessageHandler" instantiates "quote" domain entity, and populates it with data
greenfield domain entity deals with remaining persistence and associated business logic.
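A minimal sketch of what such a handler could look like on the greenfield side (every type name below is invented for illustration; the repository stands in for whatever NHibernate-backed persistence you end up with):

using System;

public class NewQuoteMessage
{
    public string QuoteId { get; set; }
    public int Version { get; set; }
    public decimal Amount { get; set; }
}

public class Quote
{
    public string QuoteId { get; private set; }
    public Quote(string quoteId) { QuoteId = quoteId; }
    public void AddVersion(int version, decimal amount) { /* domain logic lives here */ }
}

public interface IQuoteRepository { void Save(Quote quote); }

public class NewQuoteMessageHandler
{
    private readonly IQuoteRepository _quotes;

    public NewQuoteMessageHandler(IQuoteRepository quotes)
    {
        _quotes = quotes;
    }

    public void Handle(NewQuoteMessage message)
    {
        // Validate the representation sent by the legacy app
        if (string.IsNullOrEmpty(message.QuoteId))
            throw new ArgumentException("QuoteId is required", "message");

        // Populate the domain entity and hand off to the greenfield app's
        // normal persistence and business logic
        var quote = new Quote(message.QuoteId);
        quote.AddVersion(message.Version, message.Amount);
        _quotes.Save(quote);
    }
}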
Your message handlers should be relatively easy to test - and you can use them to isolate each app from the crazy in the underlying data layer. It also allows you to deal with evolving data schemas in the greenfield app.
Retro-fitting this into the legacy app could be tricky - and may well need to involve triggers to capture data updates, but the logic inside the trigger should be pretty straightforward - "send new message".
Bi-directional sync is hard! You can expect to spend a significant amount of time on getting this up and running, and maintaining it as your greenfield project evolves. If you're working on MS software, it's worth looking at http://msdn.microsoft.com/en-us/sync/bb736753.

how to minimize application downtime when updating database and application ORM

We currently run an ecommerce solution for a leisure and travel company. Every time we have a release, we must bring the ecommerce site down while we update the database schema and the data access code. We are using a custom-built ORM where each data entity is responsible for its own CRUD operations. This is accomplished by dynamically generating the SQL based on attributes on the data entity.
For example, the data entity for an address would be...
[tableName("address")]
public class address : dataEntity
{
    [column("address1")]
    public string address1;

    [column("city")]
    public string city;
}
So, if we add a new column to the database, we must update the schema of the database and also update the data entity.
As you can expect, the business people are not too happy about this outage as it puts a crimp in their cash-flow. The operations people are not happy as they have to deal with a high-pressure time when database and applications are upgraded. The programmers are upset as they are constantly getting in trouble for the legacy system that they inherited.
Do any of you smart people out there have some suggestions?
The first answer is obviously: don't use an ORM. Only application programmers think they're good. Learn SQL like everyone else :)
OK, so back to reality. What's to stop you restricting all schema changes to additions only? Then you can update the DB schema anytime you like, and install the recompiled application at a safe time (6am works best, I find) after the DB is updated. If you must remove things, perform the steps the other way round: install the new app leaving the schema unchanged, and then remove the bits from the schema.
You're always going to have a high-pressure time as you roll out changes, but at least you can manage it better by doing it in two easier-to-understand pieces. Your DBAs will be OK with updating the schema for the existing application.
The downside is that you have to be a lot more organised, but that's not a bad thing when dealing with production servers, and you should already be seriously organised about them anyway.
Supporting this scenario will add significant complexity to your environment and/or process and/or application.
You can run a complex update process where your application code is smart enough to run correctly on both the old schema and the new schema at the same time. Then you can update the application first and the schema second. A third step may be to migrate any data, which again, the application has to be able to work with. In that case, you only need to "tombstone" the application for the time it takes to upgrade the application, which could just be seconds, depending on how many files and machines are involved in the upgrade.
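As a rough illustration of "smart enough to run on both schemas" (the helper, table, column and connection details are hypothetical), the data access layer can probe the schema once and branch its generated SQL accordingly during the rollout window:

using System.Data.SqlClient;

// Hypothetical helper: lets the same build generate SQL with or without the new
// column while the old and new schemas coexist during the rollout window.
static bool ColumnExists(SqlConnection conn, string table, string column)
{
    const string sql =
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS " +
        "WHERE TABLE_NAME = @table AND COLUMN_NAME = @column";

    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@table", table);
        cmd.Parameters.AddWithValue("@column", column);
        return (int)cmd.ExecuteScalar() > 0;
    }
}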
In most cases, it's best to leave the application/environment/process simple and live with the downtime during a slow time of the day/week/month. Pretty much all applications need to be "taken down" from time to time for "regularly scheduled maintenance".
