I am working on a Silverlight application and I am using RIA Services and NHibernate.
Currently, I have an entity with a one to many relationship to another entity.
public class Employer {
    [Key]
    public virtual int Id { get; set; }

    public virtual string Name { get; set; }
}

public class Person {
    [Key]
    public virtual int Id { get; set; }

    public virtual string Name { get; set; }

    [Include]
    [Association("PersonCurrentEmployer", "CurrentEmployerId", "Id", IsForeignKey = true)]
    public virtual Employer CurrentEmployer { get; set; }

    public virtual int? CurrentEmployerId { get; set; }
}
The property CurrentEmployerId is set for no insert and no update in the mappings.
On the Silverlight side, I set the CurrentEmployer property of the person to an existing employer and then submit the changes:
personEntity.CurrentEmployer = megaEmployer;
dataContext.SubmitChanges();
On the server side, the person entity's CurrentEmployerId is set to megaEmployer.Id but the CurrentEmployer is null. Because I am using the CurrentEmployer property and not the CurrentEmployerId to save the relationship, the relationship isn't changed.
Is there a way to force RIA to send the CurrentEmployer object with the save or do I have to use the CurrentEmployerId on the server side to load the employer and set it to the CurrentEmployer?
The reason you're not seeing your CurrentEmployer on the client side is because you don't have your association set up correctly.
RIA Services doesn't work with references in the usual way, so referencing your Employer on the client side doesn't work. RIA Services works with entity sets and creates the "references" based on the association attributes. Your employer needs a property with an association back to the Person, as follows.
public class Employer
{
    private Person person;

    [Key]
    public virtual int Id { get; set; }

    public virtual string Name { get; set; }

    public virtual int PersonID { get; set; }

    [Include]
    [Association("PersonCurrentEmployer", "PersonID", "Id", IsForeignKey = false)]
    public virtual Person Person {
        get
        {
            return this.person;
        }
        set
        {
            this.person = value;
            if (value != null)
            {
                this.PersonID = value.Id;
            }
        }
    }
}
Is there a way to force RIA to send the CurrentEmployer object with the save or do I have to use the CurrentEmployerId on the server side to load the employer and set it to the CurrentEmployer?
I'm running into this problem as well. Basically, you either have to use the [Composition] attribute (which I wouldn't recommend), or load the entity from the database, server-side. Composition muddies up the client data model and doesn't take care of all the cases you need to worry about. (There is a lot more on Composition on the RIA forums at forums.silverlight.net.)
[UPDATE] Once you implement a 2nd level cache, the worry of reading supporting entities from the database mostly goes away, as they will be loaded from cache. Also, if you only need a proxy for NHibernate to not complain, then look into ISession.Load (as opposed to Get), which returns an NH proxy without querying the database. (If you try to access another property of the proxy, NH will select the rest; you can find more on this on Ayende's blog.)[/UPDATE]
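To make the proxy idea concrete, here is a minimal sketch of using it when the change set arrives server-side (my own illustration, not the poster's code; it assumes an NHibernate ISession field named session on the domain service and the Person/Employer classes from the question):
// Sketch only: this.session is an assumed NHibernate ISession on the domain service.
public void UpdatePerson(Person person)
{
    if (person.CurrentEmployerId.HasValue)
    {
        // Load returns an uninitialized proxy; no SELECT is issued unless a
        // non-identifier property of the proxy is later accessed.
        person.CurrentEmployer = this.session.Load<Employer>(person.CurrentEmployerId.Value);
    }

    // Or Update/SaveOrUpdate, depending on your cascade setup.
    this.session.Merge(person);
}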
The biggest problem I'm having is getting NHib to actually save and load the relationship. (I'm also using Fluent NHibernate.) The response from the responsible parties has so far been "waah, you can't do that, it looks like RIA wasn't developed with NHib in mind"... which is a crap answer, IMHO. Instead of helping me figure out how to map it, they're telling me I'm doing it wrong for having a foreign key in my entity (NHib shouldn't care that I have my FK in my entity).
I want to share what I did to make this work, because 'official' support for this scenario was ... let's just say unhelpful at best, and downright rude at worst.
Incidentally, you had the same idea I had: making the foreign key not insert/update. BUT, I've also made it Generated.Always(), so it will always read the value back.
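For anyone following along, a mapping in that spirit might look roughly like this in a Fluent NHibernate ClassMap (a sketch using the question's class names, not the actual redacted mapping):
using FluentNHibernate.Mapping;

public class PersonMap : ClassMap<Person>
{
    public PersonMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);

        // The association owns the column...
        References(x => x.CurrentEmployer, "CurrentEmployerId");

        // ...while the raw FK property is read-only and re-read after every write.
        Map(x => x.CurrentEmployerId)
            .Not.Insert()
            .Not.Update()
            .Generated.Always();
    }
}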
Additionally, I override DomainService.Submit() and DomainService.ExecuteChangeSet(). I start an NHibernate transaction in Submit (though I'm not yet sure this does what I expect it to).
Instead of putting my save logic in the InsertSomeEntity() or UpdateSomeEntity() methods, I'm doing it all inside ExecuteChangeSet. This is because of NHibernate and its need to have the entity graph fully bi-directional and hydrated out prior to performing actions on it. This includes loading entities from the database or session when a child item comes across the wire from RIA Services. (I started down the path of writing methods to fetch the various other pieces of the graph as the specialized methods needed them, but I found it easier to do it all in a single method. Moreover, I was running into the problem of RIA wanting me to perform the inserts/updates against the child objects first, which for new items is a problem.)
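Roughly the shape those overrides take (a sketch only, assuming an NHibernate ISession field, this.session, opened per request; this is illustrative, not the poster's actual implementation):
public override bool Submit(ChangeSet changeSet)
{
    using (var tx = this.session.BeginTransaction())
    {
        var result = base.Submit(changeSet);   // this ends up calling ExecuteChangeSet()
        if (result)
            tx.Commit();
        else
            tx.Rollback();
        return result;
    }
}

protected override bool ExecuteChangeSet()
{
    // Walk the whole change set here instead of the per-entity Insert/Update methods,
    // so the graph can be stitched back together (proxies loaded, parents attached)
    // before NHibernate is asked to persist anything.
    foreach (var entry in this.ChangeSet.ChangeSetEntries)
    {
        // hydrate associations, then session.Merge/Save the roots, etc.
    }

    return !this.ChangeSet.HasError;
}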
I want to make a comment about the composition attribute. I still stand by my previous comment about not recommending it for standard child entity collections, HOWEVER, it works GREAT for supporting NHibernate Components, because otherwise RIA will never send back the parent instance (of the composition), which is required for NHibernate to work right.
I didn't provide any code here because I would have to do some heavy redacting, but it's not a problem for me to do if you would like to see it.
Feel free to tell me that this question needs to be moved and I will move it. I just don't know where else to go for help.
My current workflow is:
Create the database first (database Actual)
Run scaffold command which creates my models
Create a Visual Studio Database project
Import the database (database project)
Whenever I need to make a change to the database, I follow the steps below:
Change the database project
Run a Schema Compare
Verify and update the database Actual
Rerun the scaffold command with -Force to rebuild all the models.
What (if any) type of problems am I leaving myself open to down the road?
I am not seeing the value of database migrations as I am updating the database first but using the database project to provide source control and some protection.
I always used to use the graphical database tool, but obviously with Core that is no longer an option.
I have also considered Devart's Entity Developer as an ORM.
Your thoughts and feedback are VERY much appreciated.
So the biggest problem is what happens when I need to make changes to the model.
So something simple like:
public partial class UserInfo
{
    public int Id { get; set; }

    [Required]
    public string FirstName { get; set; }

    public string LastName { get; set; }
    public string UserName { get; set; }
    public string Password { get; set; }
    public DateTime RecordCreated { get; set; }
}
My '[Required]' will obviously be gone after a -Force.
Joe
That is the correct "database first" workflow for EF Core, and you would not use migrations in that scenario. Be sure to place customizations to your entities or DbContext in separate partial class files so they don't get clobbered when you regenerate the entities.
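For the [Required] on FirstName in the question, one hedged option (assuming a reasonably recent EF Core scaffold, which generates an OnModelCreatingPartial hook, and a context class called MyDbContext, a made-up name) is to express the rule in a partial class file that -Force never touches:
using Microsoft.EntityFrameworkCore;

// Lives in its own file, outside the scaffolder's output.
public partial class MyDbContext
{
    // Called at the end of the scaffolded OnModelCreating.
    partial void OnModelCreatingPartial(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UserInfo>()
            .Property(u => u.FirstName)
            .IsRequired();
    }
}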
I always used to use the graphical database tool, but obviously with Core that is no longer an option.
With this workflow you can use any graphical design tool you want for your database schema.
I have a Category class:
public class Category
{
    public int CategoryId { get; set; }
    public string CategoryName { get; set; }
}
I also have a Subcategory class:
public class Subcategory
{
    public int SubcategoryId { get; set; }
    public Category Category { get; set; }
    public string SubcategoryName { get; set; }
}
And a Flavor class:
public class Flavor
{
    public int FlavorId { get; set; }
    public Subcategory Subcategory { get; set; }
    public string FlavorName { get; set; }
}
Then I also have Filling and Frosting classes just like the Flavor class that also have Category and Subcategory navigation properties.
I have a Product class that has a Flavor navigation property.
An OrderItem class represents each row in an order:
public class OrderItem
{
    public int OrderItemId { get; set; }
    public string OrderNo { get; set; }
    public Product Product { get; set; }
    public Frosting Frosting { get; set; }
    public Filling Filling { get; set; }
    public int Quantity { get; set; }
}
I'm having issues when trying to save an OrderItem object. I keep getting DbUpdateException: An error occurred while saving entities that do not expose foreign key properties for their relationships. with the Inner Exception being OptimisticConcurrencyException: Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded.

I've stepped through my code several times and I can't find anything that modifies or deletes any entities loaded from the database. I've been able to save the OrderItem, but it creates duplicate entries of Product, Flavor, Subcategory and Category items in the DB. I changed the EntityState of the OrderItem to Modified, but that throws the above exception.

I thought it might have been the fact that I have Product, Frosting and Filling objects all referencing the same Subcategory and Category objects, so I tried Detaching Frosting and Filling, saving, attaching, changing the OrderItem entity state to Modified and saving again, but that also throws the above exception.
The following statement creates duplicates in the database:
db.OrderItems.Add(orderItem);
Adding any of the following statements after the above line causes db.SaveChanges(); to throw the mentioned exception (with both the Modified and Detached states):
db.Entry(item).State = EntityState.Modified;
db.Entry(item.Product.Flavor.Subcategory.Category).State = EntityState.Modified;
db.Entry(item.Product.Flavor.Subcategory).State = EntityState.Modified;
db.Entry(item.Product.Flavor).State = EntityState.Modified;
db.Entry(item.Product).State = EntityState.Modified;
Can someone please give me some insight? Are my classes badly designed?
The first thing to check would be how the entity relationships are mapped. Generally the navigation properties should be marked as virtual to ensure EF can proxy them. One other optimization: if an entity references a Subcategory then, since subcategories reference a Category, that entity does not need both. You would only need both if subcategories are optional. Having both won't necessarily cause issues, but it can lead to scenarios where a Frosting's Category does not match the Category of the Frosting's Subcategory. (I've seen more than enough bugs like this, depending on whether the code went frosting.CategoryId vs. frosting.Subcategory.CategoryId.) Your Flavor definition seems to only use Subcategory, which is good; it's just something to be cautious of.
The error detail seems to point at EF knowing about the entities but not being told about their relationships. You'll want to ensure that you have mapping details to tell EF how Flavor (and likewise Frosting and Filling) and Subcategory are related. EF can deduce some of these automatically, but my preference is always to be explicit. (I hate surprises!)
public class FlavorConfiguration : EntityTypeConfiguration<Flavor>
{
    public FlavorConfiguration()
    {
        ToTable("Flavors");
        HasKey(x => x.FlavorId);
        Property(x => x.FlavorId)
            .HasDatabaseGeneratedOption(DatabaseGeneratedOption.Identity);
        HasRequired(x => x.Subcategory)
            .WithMany()
            .Map(x => x.MapKey("SubCategoryId"));
    }
}
Given your Flavor entity didn't appear to have a property for the SubCategoryId, it helps to tell EF about it. EF may be able to deduce this, but with IDs and the automatic naming conventions it looks for, I don't bother trying to remember what works automagically.
Now if this is EF Core, HasRequired and .Map() don't exist; the equivalent relationship configuration is:
HasOne(x => x.Subcategory).WithMany().HasForeignKey("SubCategoryId");
which will set up a shadow property for the FK.
If SubCats are optional, then replace HasRequired with HasOptional. The WithMany() just denotes that while a Flavor references a sub category, SubCategory does not maintain a list of flavours.
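For completeness, a minimal EF Core version of the whole configuration might look like this (a sketch based on the question's class names, not drop-in code; use IsRequired(false) where HasOptional would have been used):
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

public class FlavorConfiguration : IEntityTypeConfiguration<Flavor>
{
    public void Configure(EntityTypeBuilder<Flavor> builder)
    {
        builder.ToTable("Flavors");
        builder.HasKey(x => x.FlavorId);
        builder.Property(x => x.FlavorId).ValueGeneratedOnAdd();

        builder.HasOne(x => x.Subcategory)
            .WithMany()
            .HasForeignKey("SubCategoryId")   // shadow FK property
            .IsRequired();                    // IsRequired(false) if subcategories are optional
    }
}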
The next point of caution is passing entities outside of the scope of the DbContext they were loaded in. While EF does support detaching entities from one context and reattaching them to another, I would argue that this practice is almost always far more trouble than it is worth. Mapping entities to POCO ViewModels/DTOs, then loading them on demand again when performing updates, is simpler and less error-prone than attempting to reattach them. Data state may have changed between the time they were initially loaded and when you go to re-attach them, so fail-safe code needs to handle that scenario anyway. It also saves the hassle of messing around with modified state in the entity sets.

While it may seem efficient to not load the entities a second time, by adopting view models you can optimize reads far more effectively by only pulling back and transporting the meaningful data rather than entire entity graphs. (Systems generally read far more than they update.) Even for update-heavy operations you can utilize bounded contexts to represent large tables as smaller, simpler entities to load and update a few key fields more efficiently.
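As a rough illustration of the load-then-copy approach (the DTO, service, and context names below are made up for the example, not taken from the question):
public class OrderItemDto
{
    public int OrderItemId { get; set; }
    public int Quantity { get; set; }
}

public class OrderItemService
{
    public void UpdateOrderItemQuantity(OrderItemDto dto)
    {
        using (var db = new BakeryContext())   // hypothetical DbContext
        {
            // Load the current row so EF tracks it; no detaching/attaching involved.
            var item = db.OrderItems.Find(dto.OrderItemId);
            if (item == null)
                return;   // handle "deleted since the screen was loaded" however you prefer

            item.Quantity = dto.Quantity;   // copy only the fields the screen actually edits
            db.SaveChanges();               // EF builds the UPDATE from tracked changes
        }
    }
}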
I want to setup a small event sourcing lib.
I read a few tutorials online and understood everything so far.
The only problem is that these different tutorials use two different database strategies, without any comment on why they use the one they use.
So, I want to ask for your opinion.
And importantly, why do you prefer the solution you chose?
Solution 1 is the db structure where you create one table for each event type.
Solution 2 is the db structure where you create only one generic table and save the events as serialized strings in one column.
In both cases I'm not sure how they handle event changes, maybe they create a whole new one.
Kind regards
I built my own event sourcing lib and I opted for option 2. Here's why:
You query the event stream by aggregate id, not event type.
Reproducing the events in order would be a pain if they were all in different tables.
It would make upgrading events a bit of a pain.
There is an argument that you could store events in a table per aggregate, but that depends on the requirements of the project.
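To make the single-table shape concrete, here is a rough sketch of the kind of record and replay step option 2 implies (my own illustration; the names are not from the linked posts):
using System;
using System.Collections.Generic;
using System.Linq;

// One generic, append-only table: every event for every aggregate lives here.
public class StoredEvent
{
    public Guid AggregateId { get; set; }
    public int Version { get; set; }        // ordering within the stream
    public string EventType { get; set; }   // e.g. "UserCreatedV1"
    public string Payload { get; set; }     // serialized event body (JSON, etc.)
}

public static class Replay
{
    // Rebuilding state is one query per aggregate, ordered by version,
    // regardless of how many event types exist.
    public static TState Rebuild<TState>(
        IEnumerable<StoredEvent> stream,
        TState initial,
        Func<TState, StoredEvent, TState> apply)
    {
        return stream.OrderBy(e => e.Version).Aggregate(initial, apply);
    }
}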
I do have some posts about how event streams are used that you may find helpful.
6 Code Smells With Your CQRS Events and How to Avoid Them
Aggregate Root – How to Build One for CQRS and Event Sourcing
How to Upgrade CQRS Events Without Busting Your Event Stream
Solution 2 is the db structure where you create only one generic table and save the events as serialized strings in one column.
This is by far the best approach, as replaying events is simpler. Now, my two cents on event sourcing: it is a great pattern, but you should be careful because not everything is as simple as it seems. In a system I was working on, we saved the stream of events per aggregate, but we still had a set of normalized tables, because we just could not accept that in order to get the latest state of an object we would have to run through all the events (snapshots help, but are not a perfect solution). So yes, event sourcing is a fine pattern: it gives you complete versioning of your entities and a full audit log, and it should be used just for that, not as a replacement for a set of normalized tables. But this is just my two cents.
I think the best solution is to go with #2. You can even save your current state together with the related event at the same time, if you use a transactional db like MySQL.
I really don't like or recommend solution #1.
If your concern for #1 is about event versioning/upgrading, then declare a new class for each new change. Don't be too lazy, or obsessed with reuse. Let the subscribers know about changes; give them the event version.
If your concern for #1 is about something like querying/interpreting events, then you can easily push your events to a NoSQL db or event store later, at any time (from the original db).
Also, the pattern I use for my event sourcing lib is something like this:
public interface IUserCreated : IEventModel
{
}

public class UserCreatedV1 : IUserCreated
{
    public string Email { get; set; }
    public string Password { get; set; }
}

public class UserCreatedV2 : IUserCreated
{
    // Fullname added to user creation. Wrt issue: OA-143
    public string Email { get; set; }
    public string Password { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class EventRecord<T> where T : IEventModel
{
    public string SessionId { get; set; }     // Can be set in emitter.
    public string RequestId { get; set; }     // Can be set in emitter.
    public DateTime CreatedDate { get; set; } // Can be set in emitter.
    public string EventName { get; set; }     // Extract from class or interface name.
    public string EventVersion { get; set; }  // Extract from class name.
    public T EventModel { get; set; }         // Can be set in emitter.
}
public interface IEventModel { }
So: make event versioning and upgrading explicit, both in the domain and in the codebase. Implement handling of new events in subscribers before deploying the origin of the new events. And, if it's not required, don't allow external subscribers to consume domain events directly; put an integration layer or something like that in between.
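For instance, a subscriber can stay explicit about versions by upgrading old payloads on the way in (an illustrative sketch building on the classes above, not part of the original lib):
public class UserCreatedHandler
{
    public void Handle(IUserCreated e)
    {
        // Upgrade V1 payloads up front so the rest of the handler only deals with V2.
        var current = e as UserCreatedV2 ?? Upgrade((UserCreatedV1)e);
        Register(current.Email, current.Password, current.FirstName, current.LastName);
    }

    private static UserCreatedV2 Upgrade(UserCreatedV1 v1)
    {
        return new UserCreatedV2
        {
            Email = v1.Email,
            Password = v1.Password,
            FirstName = string.Empty,   // not known for events emitted before OA-143
            LastName = string.Empty
        };
    }

    private static void Register(string email, string password, string firstName, string lastName)
    {
        // application-specific work goes here
    }
}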
I hope my thoughts are useful to you.
I read about an event-sourcing approach that consists of:
having two tables: aggregate and event;
based on your use cases, either:
a. create a record in the aggregate table, generating an ID, version = 0 and an event type, and create an event in the event table;
b. retrieve the events from the aggregate/event tables by ID or event type, apply the business case, then update the aggregate table (version and event type) and create an event in the event table.
Although this approach updates some fields in the aggregate table, it leaves the event table append-only and improves performance, as you always have the latest version of an aggregate in the aggregate table.
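Roughly, the two row shapes that approach describes could look like this (my naming; purely illustrative):
using System;

// Mutable pointer to the latest state of a stream.
public class AggregateRow
{
    public Guid Id { get; set; }
    public int Version { get; set; }          // bumped each time an event is appended
    public string LastEventType { get; set; }
}

// Append-only history; never updated or deleted.
public class EventRow
{
    public Guid AggregateId { get; set; }     // refers to AggregateRow.Id
    public int Version { get; set; }          // position in the stream
    public string EventType { get; set; }
    public string Payload { get; set; }       // serialized event body
    public DateTime CreatedUtc { get; set; }
}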
I would go with #2, and if you really want an efficient way of searching by event type, I would just add an index on that column.
Here are the two strategies involved in this case for accessing the data about a subject:
1) current state and 2) event sequencing.
With current state we process the events but keep only the last state of the subject.
With event sequencing we keep the events and rebuild the current state by processing the events every time we need the state.
Event sequencing is more reliable, as we can track everything that happened to cause the current state, but it's definitely not as efficient. It's common sense to also keep intermediate states (snapshots), not only the last one, to avoid reprocessing all the events all the time. Now we have reliability and performance.
In cryptocurrencies there are event sequencing and local snapshots - the "local" in the name is because blockchains are distributed and the data is replicated.
I have a table Test with a foreign key to itself. In the metadata class I have:
[Include]
Test Test2 { get; set; }
In a service class:
return this.ObjectContext.Test.Include("Test2");
I checked that the data loads correctly from the database, but on the client side I see that no parent has been loaded.
I use a DomainDataSource to load data (Silverlight 4.0).
Has anyone else experienced this strange behavior?
OK, my mistake. The answer is to use the [Include] attribute as usual, but make sure that all participating properties are public in your metadata class.
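In other words, the buddy class ends up looking something like this (a sketch; only the Test2 member comes from the original post):
using System.ComponentModel.DataAnnotations;
using System.ServiceModel.DomainServices.Server;

[MetadataType(typeof(Test.TestMetadata))]
public partial class Test
{
    internal sealed class TestMetadata
    {
        private TestMetadata() { }

        // Must be public, otherwise RIA Services ignores the [Include].
        [Include]
        public Test Test2 { get; set; }
    }
}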
I need to design a domain that has two simple entities:
public class User
{
    public virtual int Id { get; protected set; }
    public virtual string Email { get; protected set; }
    public virtual Country Country { get; protected set; }
    ...
}

public class Country
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; protected set; }
    ...
}
It's all nice and clear in the domain world, but the problem is that User and Country are persisted in two different databases on two different servers (though they are both MSSQL 2005 servers).
So, how should I correctly implement persistence of entities across different SQL servers in NHibernate?
Using IDs instead of objects in references? Yeah, that's simple, but it hits hard on the whole domain thing, making the domain object more like a DTO. And it will require that IUserRepository gets its hands on ICountryRepository to load the User entity.
Linked servers? Hmm... somehow I don't like them (distributed transactions and no XML columns). What should I be aware of when using them, and more importantly, how should I configure NHibernate to work effectively with linked servers?
Maybe some other solution?
I've heard of people using the schema property in a class mapping to contain the linked server name (like otherserver.dbo), but I don't know anyone who hasn't run into one problem or another when doing that.
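For reference, that trick is just the class-level schema setting; in a Fluent NHibernate mapping it would look something like this (a sketch; the linked server/schema name is made up):
using FluentNHibernate.Mapping;

public class CountryMap : ClassMap<Country>
{
    public CountryMap()
    {
        // NHibernate prefixes the table name with this, so the generated SQL
        // targets the linked server: otherserver.dbo.Country
        Schema("otherserver.dbo");
        Table("Country");

        Id(x => x.Id);
        Map(x => x.Name);
    }
}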
There are a few DDD bootstrapping frameworks that allow you to transparently map entities to different databases (resulting in multiple ISessionFactories, which the framework will manage for you). NCommon is one I would recommend. This assumes, however, that Country only exists in one database and User only exists in the other.
As for transactions... well, if you use a TransactionScope and configure MSDTC, that might work. NCommon uses a UnitOfWork API that also wraps TransactionScope.
You would have to change User so that Country is just an ID. Here's why. You'd end up with two session factories, one that has a mapping for Country and the other that has a mapping for User. If you don't make that change, NHibernate will complain that there is no mapping for Country when you save User (since they are stored in two different DBs).
Now, you could instruct NHibernate to ignore the Country property and keep Country so your domain doesn't change. However, the next time you load User from the database, Country will be null.
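A sketch of what the split looks like in practice (the config file names, the CountryId property, and the factory setup are all illustrative, following the suggestion above to hold just the country's ID on User):
using System;
using NHibernate;
using NHibernate.Cfg;

public class UserCountryReader
{
    // One factory per database; each only knows about its own mappings.
    private static readonly ISessionFactory UsersFactory =
        new Configuration().Configure("users.cfg.xml").BuildSessionFactory();
    private static readonly ISessionFactory CountriesFactory =
        new Configuration().Configure("countries.cfg.xml").BuildSessionFactory();

    public void PrintUserWithCountry(int userId)
    {
        using (var users = UsersFactory.OpenSession())
        using (var countries = CountriesFactory.OpenSession())
        {
            var user = users.Get<User>(userId);                    // User is mapped here only
            var country = countries.Get<Country>(user.CountryId);  // Country is mapped here only
            Console.WriteLine("{0} lives in {1}", user.Email, country.Name);
        }
    }
}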
You could use NHibernate.Shards from NHContrib.