I'm using Silverlight + EntityFramework + RIA Services in a business application. The underlying database tables include the Humans table and the HumanAddresses table. Every human can have one or more addresses of different types (e.g. home, job, place of birth, etc). At least one address of type "Home" must always be present.
The UI allows the user to edit, remove and add several addresses for a given human before submitting. I need to perform validation to find out whether these changes violate the aforementioned rule. What is the best way to do this?
I tried using CustomValidationAttribute, but (AFAIK) it only allows entity-level validation, not validation across multiple entities, some of which are to be deleted while others are to be added or modified.
If you need access to other entities then you need to override ValidateEntity in your database context. This is called for every entity that has been changed when you call SaveChanges().
protected override DbEntityValidationResult ValidateEntity(DbEntityEntry entityEntry, IDictionary<object, object> items)
{
    // Start from the results of the built-in (attribute-based) validation.
    DbEntityValidationResult result = base.ValidateEntity(entityEntry, items);

    // Do your validation
    if (invalid)
    {
        result.ValidationErrors.Add(new DbValidationError("Property", "Error Message"));
    }

    return result;
}
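For the specific rule in the question (every Human must keep at least one address of type "Home"), the check inside ValidateEntity could look roughly like the sketch below. The Human/HumanAddress member names (Id, HumanId, AddressType, Addresses) are assumed from the question, not known for certain, and the check only inspects addresses tracked by the current context; addresses that exist in the database but were never loaded would need an extra query.

protected override DbEntityValidationResult ValidateEntity(DbEntityEntry entityEntry, IDictionary<object, object> items)
{
    DbEntityValidationResult result = base.ValidateEntity(entityEntry, items);

    // Cross-entity rule: a Human must keep at least one "Home" address.
    // (Property names here are assumptions based on the question.)
    var human = entityEntry.Entity as Human;
    if (human != null && entityEntry.State != EntityState.Deleted)
    {
        // Look at every HumanAddress the context knows about, skipping the
        // ones that are about to be deleted in this change set.
        bool hasHomeAddress = ChangeTracker.Entries<HumanAddress>()
            .Where(e => e.State != EntityState.Deleted)
            .Select(e => e.Entity)
            .Any(a => a.HumanId == human.Id && a.AddressType == "Home");

        if (!hasHomeAddress)
        {
            result.ValidationErrors.Add(new DbValidationError(
                "Addresses", "A human must have at least one Home address."));
        }
    }

    return result;
}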
Here are the validation options in EF http://msdn.microsoft.com/en-us/data/gg193959.aspx
I need to model a system where clients can apply some configuration to separate entities.
Let me explain with an example:
We have users that have a config tab in their dashboards.
We have a feature to send notifications in their browsers and a feature to send them emails.
We also have a pop-up feature.
The user should be able to modify our default notification message, our default email template, and our default texts or other elements in the email.
For the pop-up, the user should be able to modify its width and height, change the default texts, modify the background color, and change the button location on the pop-up.
When I want to send an email to the user, I should apply these settings to the template and then send the email. Also, when the front end wants to show those pop-ups, it gets these configs from my API and applies them.
These settings will keep growing in the future, so I cannot just define a settings table with a fixed set of fields; I don't think that is a good idea.
What can I do? How to design and model this scenario? What are the best practices?
Can I use a NoSQL like MongoDB instead of a relational database?
Thanks a lot.
PS:
I am using Django to develop this system.
I have built similar sub-systems before, by hand.
I don't know much about Django, but do some research to see if it has any "out of the box" or community developed / open source add-ons that do what you want.
If you have to do it yourself...
A key-value pair is not going to be enough, but it's close. You only need a simple data structure:
ID (how your code recognizes this property), e.g. UserPopupBackgroundColor.
Property name (what the user sees / how they recognize this property in the UI), e.g. "Popup Background Color".
Optional - Data type. This is essential if you want to do any sensible input validation. E.g. pop-up height should probably expect an integer and have sensible min/max values, whereas an email address is totally different.
Optional - some kind of flag to identify valid properties.
That last flag is a bit of an edge case, but it's useful if you use the subsystem to hold more properties than you want users to have access to. E.g. imagine you want to get a list of all properties and display it to the user - are there any 'special' ones you need to filter out that they should not see?
You then need somewhere to put the values, and link them to the user:
Row ID / GUID. You can use a unique constraint across the User and PropertyID if you wanted to instead, but personally I find a unique row ID is a reliable and flexible approach for most scenarios.
UserID.
PropertyID - refers to ID mentioned above.
PropertyValue
Depending on how serious you need to get, you can dump all the values into the one PropertyValue column (assuming you're persisting this in a database) - which means that column needs to be a string, or, you can add a column per data type.
If you want to add a column per data type, don't kill yourself. The most I have ever done is:
PropertyValue_text (text/varchar)
PropertyValue_int (or double)
PropertyValue_DateTime (date/time - surprise!!)
So when I say 'column per data type' I mean per data type your stack needs/wants to handle - not the 'optional' data types you define in the logic - since that data type is partially just about input validation.
Obviously, if you use different logical data types, you can map those to data type columns in the database. The reasons for doing this (using different data types in the database) are:
To reduce the amount of casting you need to do (code to database, and vice versa).
To leverage database-level query features, which can be useful. E.g. find email values and verify them; find expired date values; etc.
It takes a bit of work to build all this, but it's powerful once you're set up, because you can add any number of properties. If you are using the 'full' solution with explicit data types, then adding new logical data types isn't too painful once you already have a few set up.
Before you design and build this, think about future reuse and any way you can package it up for later - or for community use. Remember it impacts all layers (UI, logic and data).
Final tip - when coming up with the property IDs (that the code uses), make them human readable and use some sort of naming convention, so that adding new ones later is easy and follows a predictable path.
Update - Defining Property and PropertyValue in database tables is an obvious way to go. Depending on the situation you can also define Property in code - especially if you don't add new ones or change existing ones very frequently. Another bonus is that if you're in an MVP situation you can use the code effectively as a stub, and build out the database/persistence part for that later.
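To make the shape above concrete, here is a minimal sketch of the property definition and the per-user value storage, written as plain C# classes (matching the other code on this page) with illustrative names of my choosing; the same structure maps directly to Django models or SQL tables.

using System;

// Defines a configurable property (can live in code or in its own table).
public class PropertyDefinition
{
    public string Id { get; set; }            // e.g. "UserPopupBackgroundColor"
    public string DisplayName { get; set; }   // e.g. "Popup Background Color"
    public string DataType { get; set; }      // e.g. "text", "int", "datetime"
    public bool VisibleToUser { get; set; }   // the 'valid/special' flag mentioned above
}

// One row per user per property that the user has overridden.
public class PropertyValue
{
    public long RowId { get; set; }           // unique row id (or a unique constraint on UserId + PropertyId)
    public long UserId { get; set; }
    public string PropertyId { get; set; }    // refers to PropertyDefinition.Id

    // Either a single string column for every value...
    public string Value { get; set; }

    // ...or one column per data type your stack needs to handle:
    public string ValueText { get; set; }
    public int? ValueInt { get; set; }
    public DateTime? ValueDateTime { get; set; }
}

Looking up a setting then becomes "find the value for (UserId, PropertyId), and fall back to the default when no row exists".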
Let's take an example where there are two types of entities loaded: Product and Category, with Product.CategoryId -> Category.Id. We have CRUD operations available on Products (not Categories).
If on another screen Categories are updated (or from another user in the network), we would like to be able to reload the Categories, while preserving the context we currently use, since we could be in the middle of editing data, and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data).
Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways:
1) Keeping Products attached to the context and Categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example when printing data).
2) Detaching and re-attaching all Categories from the current context.
// Note: ToList() is needed so that List<T>.ForEach can be used; ForEach is not defined on IEnumerable<T>.
Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ToList().ForEach(x => x.State = EntityState.Detached);
And then reload the Categories, which will get fresh data. Do you see any problem with this second approach? We understand that this requires all constraints to be put on foreign keys rather than navigation properties, since when detaching all Categories the Product.Category navigation properties would be reset to null as well. There could also be a potential performance problem, which we did not test, since there could be a couple of thousand Products loaded, and all of them would need to resolve the navigation property when reloading.
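A minimal sketch of approach 2, assuming an EF6 DbContext called MyContext with Products and Categories sets (the context and set names are illustrative, not from the post):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public static class CategoryRefresher
{
    public static List<Category> ReloadCategories(MyContext context)
    {
        // Detach every tracked Category so the next query hits the database
        // instead of returning the already-cached (stale) instances.
        foreach (var entry in context.ChangeTracker.Entries()
                                     .Where(e => e.Entity is Category)
                                     .ToList())
        {
            entry.State = EntityState.Detached;
        }

        // Reload fresh data; added, removed and modified rows are all picked up.
        // Product.Category navigation properties are now null/stale, so callers
        // should rely on Product.CategoryId (the FK) and look Categories up by Id.
        return context.Categories.ToList();
    }
}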
Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
Sounds like you want to try
/// <summary>
/// Reloads the entity from the database overwriting any property values with values from the database.
/// The entity will be in the Unchanged state after calling this method.
///
/// </summary>
public void Reload(){
....
}
sample call
public virtual void Reload(TPoco poco) {
    if (poco == null) {
        throw new ArgumentNullException("poco");
    }
    try {
        Context.Entry<TPoco>(poco).Reload();
    }
    catch (Exception) {
        // ... log/handle as needed, then rethrow
        throw;
    }
}
You can use a second level cache implementation, for example:
SECOND LEVEL CACHE FOR EF 6.1
Note: it's no longer a beta version; 1.0.0 was released on 2013/8/27. There are also other implementations, including some for older versions of EF.
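As a rough idea of what wiring that cache up looks like, here is a configuration sketch based on the EFCache project's documented setup; the type names come from that project's README at the time of writing and may have changed since, so treat this as an assumption and check the current docs.

using System.Data.Entity;
using System.Data.Entity.Core.Common;
using EFCache;

public class CachingConfiguration : DbConfiguration
{
    public CachingConfiguration()
    {
        // Cache query results in memory and invalidate them when the relevant
        // tables change, tracked via the transaction interceptor.
        var transactionHandler = new CacheTransactionHandler(new InMemoryCache());
        AddInterceptor(transactionHandler);

        Loaded += (sender, args) => args.ReplaceService<DbProviderServices>(
            (services, _) => new CachingProviderServices(services, transactionHandler));
    }
}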
My first real (not test) NHibernate/Castle.ActiveRecord project is developing quickly.
I have been working with NHibernate/Castle.ActiveRecord for about a month now, but I still don't have a real idea of how to handle sessions in my WindowsForms application.
The common handling methods don't seem to work for me:
SessionPerRequest, SessionPerConversation, etc. only work for web applications.
SessionPerApplication is not recommended / highly dangerous, if I understand correctly.
SessionPerThread is not very helpful, since I either have only one thread (the WindowsForms thread) or a new thread for each button click. The first option would make my application use too much memory and hold old objects in memory. With worker threads for each button click I would lose lazy loading, since my loaded objects would live longer than the thread.
SessionPerPresenter does not work either, because it is common for me to open a "sub-presenter" in a form to let the user search/load/select some referenced object (foreign key), and of course the presenter is destroyed - which means the session is closed - but the object is used in the "super-presenter" to fill the referencing property (foreign key).
I've used Google and Bing for hours and read a lot, but only found one good page about my case: http://msdn.microsoft.com/en-us/magazine/ee819139.aspx . There SessionPerPresenter is used, but a "sub-presenter" is only given the id, not the entire object! And it seems that there are no foreign keys in this example and no scenario in which an object is returned to a "super-presenter".
Questions
Is there any other method of session handling for WindowsForms/desktop applications?
I could add a session property or a session constructor parameter to all of my presenters, but it doesn't feel right to have session handling all over my UI code.
When an exception occurs, NHibernate wants me to kill the session. But what if it is 'only' a business-logic exception and not an NHibernate exception?
Example
I am trying to make an example that covers most of my problem.
// The persistent classes
public class Box
{
    public virtual int BoxId { get; set; }
    public virtual Product Content { get; set; }
    ...
}

public class User
{
    public virtual int UserId { get; set; }
    public virtual IList<Product> AssignedProducts { get; set; }
    ...
}

public class Product
{
    public virtual int ProductId { get; set; }
    public virtual string ProductCode { get; set; }
}
// The presenter-classes
public class ProductSearchPresenter : SearchPresenter<Product> { ... }
public class ProductEditPresenter : EditPresenter<Product> { ... }
public class UserSearchPresenter : SearchPresenter<User> { ... }
public class UserEditPresenter : EditPresenter<User> { ... }
public class BoxSearchPresenter : SearchPresenter<Box> { ... }
public class BoxEditPresenter : EditPresenter<Box> { ... }
// The search presenters allow the user to perform a search with criteria on the class given as the generic argument and to select one of the results
// The edit presenters allow editing a new or loaded (and passed as a parameter) object of the class given as the generic argument
Now I have the following use cases, which can all be performed in the same application at the same time, asynchronously (the user simply switches between the presenters).
1. Using an instance of BoxSearchPresenter to search and select an object.
1.1. Part of this use case is to use an instance of the ProductSearchPresenter to fill a criterion of the BoxSearchPresenter.
1.2. Part of this use case is to use an instance of the BoxEditPresenter to edit and save the selected object of the BoxSearchPresenter instance.
2. Using an instance of UserSearchPresenter to search and select an object.
2.1. Part of this use case is to use an instance of the UserEditPresenter to edit and save the selected object of the UserSearchPresenter.
2.2. Part of this use case is to use a ProductSearchPresenter to search and select objects that will be added to User.AssignedProducts.
3. Using an instance of ProductSearchPresenter to search and select an object.
3.1. Part of this use case is to use an instance of ProductEditPresenter to edit and save a selected object of the ProductSearchPresenter.
It's only a small collection of use cases, but they already show a lot of the problems I have.
Use cases 1 and 2 run at the same time in the same UI thread.
Use cases 1.1 and 2.2 return their selected objects to other presenters that use these objects longer than the presenters that loaded them exist.
Use case 3.1 might alter an object loaded by 2.2/1.1 before 3.1 was started, but if 2.2/1.1 is committed before 3.1 is finished, the object would be saved and it would not be possible to "roll back" 3.1.
Here is a short overview of what I found fits best into our WinForms application architecture (based on MVP).
Every presenter has constructor dependencies on the repositories it needs. For example, if you have an InvoicePresenter, then you have InvoiceRepository as a dependency, but you will probably also have CustomerRepository and many others depending on complexity (CustomerRepository for loading all customers into the customers combobox if you want to change the customer of the invoice, things like that).
Then, every repository has a constructor argument for a UnitOfWork. Either you abstract the session with the UnitOfWork pattern, or you have your repositories depend directly on ISession.
Everything is wired together by an IoC container, where we create presenters based on a "context". This is a very simple concept: the context covers a presenter and all its sub-presenters, which we compose into more complex presenters to reduce complexity (if, for example, you have multiple tabs of options to edit some entity).
So, in practice, this context is form-based 90% of the time, because one form is at least one presenter / view.
So to answer your questions:
Session per presenter and session per conversation (which works with WinForms as well) are the only really usable patterns here (apart from opening and closing sessions all over the place, which is not really a good way to handle it).
This is best solved by making repositories depend on the session, not the presenters. You make presenters depend on repositories and repositories depend on the session, and when you create them all you give them a common session. But, as I state again, this is only practical when done per context: you cannot share a session between a presenter editing invoices and another presenter editing customers, but you can share a session between the main presenter editing an invoice and its invoice-details and invoice-notes sub-presenters.
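Here is a rough sketch of that wiring, assuming NHibernate's ISession/ISessionFactory; the Invoice/Customer types, the repositories and the composition method are illustrative only (in practice the IoC container would do the equivalent of the Create method):

using NHibernate;

public class Invoice { public virtual int Id { get; set; } }
public class Customer { public virtual int Id { get; set; } }

public class InvoiceRepository
{
    private readonly ISession _session;
    public InvoiceRepository(ISession session) { _session = session; }

    public Invoice Get(int id) { return _session.Get<Invoice>(id); }
    public void Save(Invoice invoice) { _session.SaveOrUpdate(invoice); }
}

public class CustomerRepository
{
    private readonly ISession _session;
    public CustomerRepository(ISession session) { _session = session; }

    public Customer Get(int id) { return _session.Get<Customer>(id); }
}

public class InvoicePresenter
{
    private readonly InvoiceRepository _invoices;
    private readonly CustomerRepository _customers;

    // The presenter only knows about repositories; it never touches the session.
    public InvoicePresenter(InvoiceRepository invoices, CustomerRepository customers)
    {
        _invoices = invoices;
        _customers = customers;
    }
}

public static class InvoiceEditingContext
{
    // One "context" (typically one form = one presenter tree) gets one session,
    // shared by all repositories created for that context.
    public static InvoicePresenter Create(ISessionFactory sessionFactory)
    {
        ISession session = sessionFactory.OpenSession();
        return new InvoicePresenter(
            new InvoiceRepository(session),
            new CustomerRepository(session));
    }
}

Disposing the context's session when the form closes then ends that unit of work.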
Please clarify, didn't understand this...
I have a Silverlight RIA app where I share the models and data access between the MVC web app and the Silverlight app using compiler directives, and on the server, to see what context I am running under, I check whether the ChangeSet object is non-null (meaning I am running under RIA rather than MVC). Everything works all right, but I had problems with the default code generated by the domain service methods.
Let's say I had a Person entity, who belonged to certain Groups (Group entity). The Person object has a collection of Groups which I add or remove. After making the changes, the SL app would call the server to persist the changes. What I noticed happening is that the group entity records would be inserted first. That's fine, since I'm modifying an existing person. However, since each Group entity also has a reference to the existing person, calling AddObject would mark the whole graph - including the person I'm trying to modify - as Added. Then, when the Update statement is called, the default generated code would try to Attach the person, which now has a state of Added, to the context, with not-so-hilarious results.
When I make the original call for an entity or set of entities in a query, all of the EntityKeys for the entities are filled in. Once on the client, the EntityKey is filled in for each object. When the entity returns from the client to be updated on the server, the EntityKey is null. I created a new RIA Services project and verified that this is the case. I'm running RIA Services SP1 and I am not using composition. I kind of understand the EntityKey problem - the change tracking is done in two separate contexts. EF doesn't know about the change tracking done on the SL side. However, it IS passing back the object graph, including related entities, so using AddObject is a problem unless I check the database for the existence of an object with the same key first.
I have code that works. I don't know how WELL it works but I'm doing some further testing today to see what's going on. Here it is:
/// <summary>
/// Updates an existing object.
/// </summary>
/// <typeparam name="TBusinessObject"></typeparam>
/// <param name="obj"></param>
protected void Update<TBusinessObject>(TBusinessObject obj) where TBusinessObject : EntityObject
{
    if (this.ChangeSet != null)
    {
        ObjectStateManager objectStateManager = ObjectContext.ObjectStateManager;
        ObjectSet<TBusinessObject> entitySet = GetEntitySet<TBusinessObject>();
        string setName = entitySet.EntitySet.Name;
        EntityKey key = ObjectContext.CreateEntityKey(setName, obj);
        object dbEntity;
        if (ObjectContext.TryGetObjectByKey(key, out dbEntity) && obj.EntityState == System.Data.EntityState.Detached)
        {
            // An object with the same key exists in the DB, and the entity passed
            // is marked as detached.
            // Solution: Mark the object as modified, and any child objects need to
            // be marked as Unchanged as long as there is no DomainOperation.
            ObjectContext.ApplyCurrentValues(setName, obj);
        }
        else if (dbEntity != null)
        {
            // In this case, TryGetObjectByKey said it failed, but the resulting object is
            // filled in, leading me to believe that it did in fact work.
            entitySet.Detach(obj); // Detach the entity
            try
            {
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
            catch (Exception)
            {
                entitySet.Attach(obj); // Re-attach the entity
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
        }
        else
        {
            // Add it..? Update must have been called mistakenly.
            entitySet.AddObject(obj);
        }
    }
    else
        DirectInsertUpdate<TBusinessObject>(obj);
}
Quick walkthrough: If the ChangeSet is null, I'm not under the RIA context, and therefore can call a different method to handle the insert/update and save immediately. That works fine as far as I can tell. For RIA, I generate a key, and see if it exists in the database. If it does and the object I am working with is detached, I apply those values; otherwise, I force detach and apply the values, which works around the added state from any previous Insert calls.
Is there a better way of doing this? I feel like I'm doing way too much work here.
In this kind of a case, where you're adding Group entities to Person.Groups, I would think of just saving the Person and expect RIA to handle the Groups for me.
But let's take a step back, how are you trying to persist your changes? You shouldn't be saving/updating entities one by one. All you have to do is call DomainContext.SubmitChanges and all your changes should be persisted.
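For example, the client-side code would look roughly like this; MyDomainContext, GetPersonByIdQuery and the entity members are illustrative names, not taken from the question:

// Client-side sketch: modify the graph, then submit everything in one change set.
var context = new MyDomainContext();

context.Load(context.GetPersonByIdQuery(personId), loadOp =>
{
    var person = loadOp.Entities.First();
    person.Groups.Add(new Group { Name = "Administrators" });

    // One SubmitChanges call sends the person and its new groups together.
    context.SubmitChanges(submitOp =>
    {
        if (submitOp.HasError)
        {
            // Inspect submitOp.Error / submitOp.EntitiesInError here.
            submitOp.MarkErrorAsHandled();
        }
    }, null);
}, null);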
I work with pretty complicated projects and I seldom ever have to touch add/update code.
This question has been around with no solid answer, so I'll tell you what I did... which is nothing. That's how I handled it in RIA services, using the code above, since I was sharing the RIA client model and the server model.
After working with RIA Services for a year and a half, I'm in the camp that believes RIA Services is good for working with smaller, less complex apps. If you can use [Composition] for your entities, which I couldn't for many of my entities, then you're fine.
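For reference, composition in RIA Services means marking the child association on the server-side entity so that child changes are submitted as part of the parent's change set. A rough sketch (the entity and key names are assumed, not from the question):

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.ServiceModel.DomainServices.Server;

public class Person
{
    [Key]
    public int PersonId { get; set; }

    // [Composition] tells RIA Services that the Groups are owned by this Person,
    // so inserts/updates/deletes of Groups travel with the Person's change set.
    [Include]
    [Composition]
    [Association("Person_Groups", "PersonId", "PersonId")]
    public List<Group> Groups { get; set; }
}

public class Group
{
    [Key]
    public int GroupId { get; set; }

    public int PersonId { get; set; }
}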
RIA Services makes it really quick to throw together small applications where you want to use the entities from EF directly, but if you want to use POCOs or you foresee your application getting complex in the future, I would stick with building POCOs on the service end, passing them through regular WCF, and sharing behavior by making your POCOs partial classes and sharing the behavior code with the client. When I tried to create models that worked the same on the client and the server, I had to write a ridiculous amount of plumbing code to make it work.
It definitely IS possible to do - I've done it - but there are a lot of hoops you must jump through for everything to work well. I never fully took into consideration things like my shared model pre-loading lists for use on the client, which the server didn't need preloaded every time; that slowed down the loading of the web page unnecessarily, and I countered by writing hacky method calls which I then had to adopt on the client. The technique I chose to use definitely had its issues.
I have created a class that inherits from DomainService and have a Silverlight app that uses System.ServiceModel.DomainServices.Client to get a DomainContext. I have also created POCO DataContracts that are used in the DomainService's Query, Update, Insert, and Delete methods, and a ViewModel that executes all the LoadOperations. Now I'm at the part of my app where I want to add new entities to the generated EntitySets, but I'm unsure about what happens when one user creates a new entity and sets the key value while another user creates a similar entity with the same key value.
I have seen in the documentation that an ObjectContext is used, but in my situation I was not able to use the Entity Framework model generator, so I had to create my data contracts by hand.
So I guess my question is, is there any way I can force other silverlight apps to update on database change?
When you make a Save operation to your DomainContext, depending on the load behavior, it will automatically refresh.
TicketContext.Load(TicketContext.GetTicketByIdQuery(ticketId),
    LoadBehavior.RefreshCurrent,
    x =>
    {
        Ticket = x.Entities.First();
        Ticket.Load();
        ((IChangeTracking)TicketContext.EntityContainer).AcceptChanges();
    }, null);
Here I've set the LoadBehavior to RefreshCurrent. When you make a save, RIA will send the entity back across the wire to the client and merge the changes with the entity already cached on your client side context. I don't know if that quite answers your question or not however.