Yesterday I asked this question about changing the name of the __MigrationHistory table generated by Entity Framework when using a Code First approach. The provided link was helpful in explaining how to do what we want (and by "want" I mean what we're being forced into by our DBAs), but it also left a somewhat non-specific and dire-sounding warning that says,
Words of precaution
Changing the migration history table is powerful but you need to be
careful to not overdo it. EF runtime currently does not check whether
the customized migrations history table is compatible with the
runtime. If it is not your application may break at runtime or behave
in unpredictable ways. This is even more important if you use multiple
contexts per database in which case multiple contexts can use the same
migration history table to store information about migrations.
We tried to use this warning to reason with the DBA team, telling them that we shouldn't mess with things because "here be dragons". Their response was, "It sounds more like the danger is in changing the content or the table structure, not the name. Go ahead and try it and see what happens."
Has anyone here changed the name of the __MigrationHistory table, and what was the result? Is it dangerous?
Changing the name of the migrations history table is possible.
But you have to tell EF this by calling the HasDefaultSchema method with the name of the schema in the OnModelCreating method of your DbContext class:
public partial class CustomerDatabasesModel : DbContext
{
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
modelBuilder.HasDefaultSchema("CustomerDatabases");
// Fluent API configuration
}
}
This will cause EF to create all of your database tables under the "CustomerDatabases" schema.
So in this example "CustomerDatabases" replaces the standard "dbo" schema of your tables, and your migration history table will have the name CustomerDatabases.__MigrationHistory.
So in fact you change the schema name (the first part); the second part, "__MigrationHistory", stays the same.
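If your DBAs also need the table name itself (the "__MigrationHistory" part) to change, EF6 lets you customize the migrations history table through a custom HistoryContext registered in a DbConfiguration. Here is a minimal sketch assuming EF6 and SQL Server; the names MyHistoryContext, MyDbConfiguration, "__EFMigrations" and "admin" are illustrative only:
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Migrations.History;

public class MyHistoryContext : HistoryContext
{
    public MyHistoryContext(DbConnection dbConnection, string defaultSchema)
        : base(dbConnection, defaultSchema) { }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        base.OnModelCreating(modelBuilder);
        // Rename and/or move the history table; these values are examples only
        modelBuilder.Entity<HistoryRow>().ToTable("__EFMigrations", "admin");
    }
}

public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // Register the custom history context for the SQL Server provider
        SetHistoryContext("System.Data.SqlClient",
            (connection, defaultSchema) => new MyHistoryContext(connection, defaultSchema));
    }
}
This is precisely the kind of customization the quoted warning refers to, so the same words of caution apply.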
Usage scenario:
You usually do this if you work with more than one DbContext.
That way you can have more than one migration history table in a single database, one for each context.
Of course you should test this carefully and perform database backups beforehand.
Please check out this answer too:
Entity-Framework: On Database, multiple DbContexts
Related
Is there a way to use multimapping in Dapper in a generic way, without using custom SQL embedded in C# code?
See for example
Correct use of Multimapping in Dapper
Is there a generic way to query the data from two related entities, where the common fields used for the join are determined automatically?
Don't do this. Don't even think this way! Databases are long lasting and normalized. Objects are perishable and frequently denormalized, and transitioning between the two is something to do thoughtfully, when you're writing your SQL. This is really not a step to automate. Long, painful experience has convinced many of us that database abstractions (tables and joins) should not just be sucked into (or generated out of) code. If you're not yet convinced, then use an established ORM.
If, on the other hand, you absolutely want to be in control of your SQL, but it's the "embedding" in string literals in C# that bugs you, then I couldn't agree more. May I suggest QueryFirst, a Visual Studio extension that generates the C# wrapper for your queries. Your SQL stays in a real SQL file, syntax validated, DB references checked, and at each save QueryFirst generates a wrapper class with Execute() methods, and a POCO for the results.
By multi-mapping, I presume you want to fill a graph of nested objects. A nice way to do this is to use one QueryFirst .sql per class in your graph, then in the partial class of the parent, add a List of children. (QueryFirst generated POCOs are split across 2 partial classes, you control one of them, the tool generates the other.)
So, for a graph of Customers and their orders...
In the parent sql
select * from customers where name like #custName
The child sql
select * from orders where customerId = #customerId
In the parent partial class, for eager loading...
public List<Orders> orders;
public void OnLoad()
{
orders = new GetOrders().Execute(customerId); // customerId is a property of the parent POCO
}
or for lazy loading...
private List<Orders> _orders;
public List<Orders> orders
{
get
{
return _orders ?? (_orders = new GetOrders().Execute(customerId));
}
}
5 lines of code, not counting brackets, and you have a nested graph, lazy loaded or eager loaded as you prefer, with the interface discoverable in code (IntelliSense for the input parameter and result). There might be hundreds of columns in those tables whose names you will never need to re-type, and whose data types will flow transparently into your C#.
Clean separation of responsibilities. Total control. Disclaimer: I wrote QueryFirst :-)
Multimapping with Dapper is a method of running a single SQL query that joins several tables and then mapping each part of the returned row to a specific object.
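For illustration, a minimal sketch of what that looks like, assuming an open connection plus Order/Customer POCOs and matching tables (all of these names are assumptions, not taken from the question):
var sql = @"select o.*, c.*
            from Orders o
            join Customers c on c.Id = o.CustomerId";

// Columns before the second "Id" map to Order, the remaining columns map to Customer
var orders = connection.Query<Order, Customer, Order>(
    sql,
    (order, customer) => { order.Customer = customer; return order; },
    splitOn: "Id");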
In the context of this question, multimapping is not even relevant: you're asking for a way to automatically generate a single SQL query from the given objects, with the correct joins created for you, and that is not what multimapping does.
I suspect what you're looking for is something along the lines of the Entity Framework. There are a couple of Dapper extension projects you may want to look into which will generate some of your SQL. See: Dapper.Rainbow VS Dapper.Contrib
I'd like to use SQL's OUTPUT clause to keep a history of the records in my database while I'm using Entity Framework. To achieve this, EF would need to generate something like the following for a DELETE statement.
DELETE FROM table1
OUTPUT deleted.*, 'user name', GETDATE() INTO table1_hist
WHERE field = 1234;
The table table1_hist has the same columns as table1, with the addition of two columns to store the name of the user who performed the action and when it happened. However, EF doesn't seem to support this SQL Server clause, so I'm lost on how to implement it.
I looked at EF's source code, and the DELETE command is created inside an internal static method (GenerateDeleteSql in the System.Data.Entity.SqlServer.SqlGen.DmlSqlGenerator class), so I can't extend the class to add the behavior I want. It looks like I'll have to rewrite the SQL Server provider based on the existing code, but that is something I'd like to avoid...
So, my question is if there's another option to do this (an extension, for example) or do I have to rewrite this provider?
Thank you.
Have you considered one of the following:
Using Stored Procedures to encapsulate your data logic
A delete trigger to capture the data
Change Data Capture (Enterprise edition only)
Not actually deleting the data, but merely setting a flag to mark it as deleted (see the sketch below).
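For that last option, a minimal sketch of how the flag could be set from EF, assuming your entities have a mapped IsDeleted column (the property name and MyDbContext are illustrative, not part of the question):
using System.Data.Entity;
using System.Linq;

public class MyDbContext : DbContext
{
    public override int SaveChanges()
    {
        // Turn every pending hard delete into an update that only sets the flag
        foreach (var entry in ChangeTracker.Entries()
                                           .Where(e => e.State == EntityState.Deleted))
        {
            entry.State = EntityState.Modified;       // keep the row in the table
            entry.CurrentValues["IsDeleted"] = true;  // mark it as deleted instead
        }
        return base.SaveChanges();
    }
}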
I need to store data change histories in a database. For example, at some point a user modifies some property of some record. The expected result is that we can get the change history for a record, like:
Tom changed title to 'Title one'
James changed name to 'New name'
Steve added new_tag 'tag23'
Based on these change histories we can get all versions for some data.
Any good ideas for achieving this? It doesn't have to be a traditional relational database.
These are commonly called audit tables. How I generally manage this is with triggers on the database. For every insert/update on a source table, the trigger copies the data into another table with the same name and _AUDIT appended to it (the naming convention doesn't matter, it's just what I use). Oracle provides something called journal tables; using Oracle Designer (or manually) you can achieve the same thing, and developers often append _JN to the journal/audit table name. It works the same way, with triggers on the source table copying data into the audit table.
EDIT:
I should also note that you can create a new separate schema to manage just your audit tables or you can keep them in your schema with the source tables. I do both, it just depends on the situation.
I wrote an article about various options: http://blog.schauderhaft.de/2009/11/29/versioned-data/
If you are not tied to a relational database, there are things called 'append only' databases (I think), which never change data, but only append new versions. For your case this sounds kind of perfect. Unfortunately I don't know of any implementation.
In Python/Pylons I'm able to issue a setup-app command and it will look at my models and issue the appropriate CREATE TABLE or CREATE INDEX DDL for my particular database.
It seems like this would be a feature in CakePHP, but I'm having trouble finding it.
In fact, I see this in the manual:
"You can create your database tables as you normally would. When you create your Model classes, they'll automatically map to the tables that you've created."
Which leads me to believe it doesn't exist?
No, it's the other way around - you can create models, controllers and views from an existing DB schema. It's more logical to have a DB design schema first.
Check this out
Some of the comments in the accepted answer above led me to create this answer. You can technically create new tables on the fly using the YourModel->query() function. I am currently using this in a Behavior I am writing. This works in CakePHP 2.x, and I'm pretty sure it works in 1.3 as well.
In the setup function for the Behavior I am checking to see if the table already exists. If it doesn't I create it.
$dataSource = ConnectionManager::getDataSource('your DB');
// Only create the table if it doesn't already exist in this datasource
if (!in_array($tableName, $dataSource->listSources())) {
    $this->createYourTableFunction();
}
In createYourTableFunction you create a temporary model to run YourModel->query() against, and just provide it with your SQL. When creating the temporary model, set the table parameter to false so you don't get a missing table error.
$YourModel = new Model(array('table' => false, 'name' => 'YourModel', 'ds' => 'Your DB'));
$YourModel->query('SQL instruction string');
Are there any rapid database prototyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities?
For example, assuming an empty database (pseudo code):
user1 = new User() // Creates the user table with a single id column
user1.firstName = "Allain" // alters the table to have a firstName column as varchar(255)
user2 = new User() // Reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw" // Alters the table to have a last name column
There are logical assumptions that can be made when dynamically creating the schema, and you could always override its choices by using your DB tools to tweak it later.
Also, you could generate your schema by unit testing it this way.
And obviously this is only for prototyping.
Is there anything like this out there?
Google App Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.
Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain, in this simple case the file is named User.groovy.
class User {
String userName
String firstName
String lastName
Date dateCreated
Date lastUpdated
static constraints = {
userName(blank: false, unique: true)
firstName(blank: false)
lastName(blank: false)
}
String toString() {"$lastName, $firstName"}
}
Saving the file alters the schema automatically. Likewise, if you are using scaffolding it is updated. The prototype process becomes run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.
I agree with the NHibernate approach and auto-database-generation. But if you want to avoid writing a configuration file and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class via attributes.
[ActiveRecord]
public class User : ActiveRecordBase<User>
{
[PrimaryKey]
public Int32 UserId { get; set; }
[Property]
public String FirstName { get; set; }
}
There are a variety of constraints you can apply (validation, bounds, etc) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple.
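As an illustration of such a relationship declaration, here is a hedged sketch; the Order class, the "UserId" column and the attribute parameters are assumptions, not taken from the answer above:
[ActiveRecord]
public class Order : ActiveRecordBase<Order>
{
    [PrimaryKey]
    public Int32 OrderId { get; set; }

    // Many orders belong to one user; "UserId" is the assumed foreign-key column
    [BelongsTo("UserId")]
    public User User { get; set; }
}
The inverse side would typically be a collection on User marked with [HasMany].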
So, you're working with code. Declaring usage in code. And when you're done, let ActiveRecord create the database.
ActiveRecordStarter.Initialize();
ActiveRecordStarter.CreateSchema();
Maybe not exactly responding to your general question, but if you use (N)Hibernate then you can automatically generate the database schema from your hbm mapping files.
It's not done directly from your code as you seem to want, but Hibernate schema generation works well for us.
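For reference, a minimal sketch of how that generation is usually triggered with NHibernate's SchemaExport, assuming a hibernate.cfg.xml and the hbm mappings are already in place (none of this comes from the answer above):
// Build the NHibernate configuration from hibernate.cfg.xml and the mapped hbm files
var cfg = new NHibernate.Cfg.Configuration().Configure();

// Write the generated CREATE TABLE / CREATE INDEX DDL to stdout and run it against the configured DB
new NHibernate.Tool.hbm2ddl.SchemaExport(cfg).Create(true, true);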
Do you want the schema, but have it generated, or do you actually want NO schema?
For the former I'd go with NHibernate as #tom-carter said. Have it generate your schema for you, and you are all good (at least until you roll your app out; then look at something like Tarantino and RedGate SQL Diff or whatever it's called to generate update scripts).
If you want the latter... Google App Engine does this, as I discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, though it's a bit of upfront work getting it set up. But once you have it, it's a totally, 100% schema-free database. Well, you have an ID and a Version, but that's it - the rest is up to you. http://incubator.apache.org/couchdb/
But by the sounds of it, (N)Hibernate would suit you best, though I could be wrong.
You could use an object database.