I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:
MainViewModel
    OrderManager
        OrderRepository
            EFContext
        AnotherRepository
            EFContext
    UserManager
        UserRepository
            EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This results in being able to perform a "business transaction" across multiple business layer objects (Managers) that eventually underneath shared the same Entity Framework Context. I would simply Commit() said context at the end for a Unit of Work type scenario.
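For context, the bindings behind that original approach might look roughly like this (a sketch only; the module and the exact registrations are assumptions, not code from the project):

// Hypothetical Ninject module illustrating the InCallScope/InTransientScope setup described above.
public class DataModule : NinjectModule
{
    public override void Load()
    {
        // One EFContext is shared by everything resolved within a single resolution call graph.
        Bind<EFContext>().ToSelf().InCallScope();

        // Managers and repositories are cheap to construct, so they stay transient.
        Bind<OrderManager>().ToSelf().InTransientScope();
        Bind<UserManager>().ToSelf().InTransientScope();
        Bind<OrderRepository>().ToSelf().InTransientScope();
        Bind<AnotherRepository>().ToSelf().InTransientScope();
        Bind<UserRepository>().ToSelf().InTransientScope();
    }
}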
This worked as intended until I realized that I don't want long-living Entity Framework contexts at the View Model level, because of the data integrity issues across multiple operations described HERE. I want something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work that serves as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // Assume that NinjectSingleton is a static reference to the kernel
        // configured with the necessary modules/bindings.
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF Button Command...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    // ...
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web; is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is: no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do.
Yes, you understand it correctly.
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}
I have a Silverlight RIA app where I share the models and data access between the MVC web app and the Silverlight app using compiler directives. On the server, to see which context I am running under, I check whether the ChangeSet object is non-null (meaning I am running under RIA rather than MVC). Everything works all right, but I had problems with the default code generated by the domain service methods.
Let's say I had a Person entity, who belonged to certain Groups (Group entity). The Person object has a collection of Groups that I add to or remove from. After making the changes, the SL app would call the server to persist them. What I noticed is that the Group entity records would be inserted first. That's fine, since I'm modifying an existing Person. However, since each Group entity also has a reference to the existing Person, calling AddObject would mark the whole graph - including the Person I'm trying to modify - as Added. Then, when the Update statement is called, the default generated code would try to Attach the Person, which now has a state of Added, to the context, with not-so-hilarious results.
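To make the failure mode concrete, here is a rough sketch of the state transitions being described (the entity and property names are assumptions based on the description above, not actual project code):

// Hypothetical illustration of how AddObject poisons the graph.
var group = new Group { Name = "New Group" };
group.Person = existingPerson;             // the new child still references the Person being edited

ObjectContext.Groups.AddObject(group);     // AddObject walks the graph and marks every
                                           // untracked related entity as Added,
                                           // including existingPerson

// The default generated Update code then tries to Attach existingPerson,
// which fails because its state is now Added rather than Detached.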
When I make the original call for an entity or set of entities in a query, all of the EntityKeys for the entities are filled in. Once on the client, the EntityKey is filled in for each object. When the entity returns from the client to be updated on the server, the EntityKey is null. I created a new RIA Services project and verified that this is the case. I'm running RIA Services SP1 and I am not using composition. I kind of understand the EntityKey problem - the change tracking is done on two separate contexts, and EF doesn't know about the change tracking done on the SL side. However, it IS passing back the object graph, including related entities, so using AddObject is a problem unless I check the database for the existence of an object with the same key first.
I have code that works. I don't know how WELL it works but I'm doing some further testing today to see what's going on. Here it is:
/// <summary>
/// Updates an existing object.
/// </summary>
/// <typeparam name="TBusinessObject"></typeparam>
/// <param name="obj"></param>
protected void Update<TBusinessObject>(TBusinessObject obj) where TBusinessObject : EntityObject
{
    if (this.ChangeSet != null)
    {
        ObjectStateManager objectStateManager = ObjectContext.ObjectStateManager;
        ObjectSet<TBusinessObject> entitySet = GetEntitySet<TBusinessObject>();
        string setName = entitySet.EntitySet.Name;
        EntityKey key = ObjectContext.CreateEntityKey(setName, obj);
        object dbEntity;
        if (ObjectContext.TryGetObjectByKey(key, out dbEntity) && obj.EntityState == System.Data.EntityState.Detached)
        {
            // An object with the same key exists in the DB, and the entity passed
            // is marked as detached.
            // Solution: Mark the object as modified, and any child objects need to
            // be marked as Unchanged as long as there is no DomainOperation.
            ObjectContext.ApplyCurrentValues(setName, obj);
        }
        else if (dbEntity != null)
        {
            // In this case, TryGetObjectByKey said it failed, but the resulting object is
            // filled in, leading me to believe that it did in fact work.
            entitySet.Detach(obj); // Detach the entity
            try
            {
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
            catch (Exception)
            {
                entitySet.Attach(obj); // Re-attach the entity
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
        }
        else
        {
            // Add it..? Update must have been called mistakenly.
            entitySet.AddObject(obj);
        }
    }
    else
    {
        DirectInsertUpdate<TBusinessObject>(obj);
    }
}
Quick walkthrough: If the ChangeSet is null, I'm not under the RIA context, and therefore can call a different method to handle the insert/update and save immediately. That works fine as far as I can tell. For RIA, I generate a key, and see if it exists in the database. If it does and the object I am working with is detached, I apply those values; otherwise, I force detach and apply the values, which works around the added state from any previous Insert calls.
Is there a better way of doing this? I feel like I'm doing way too much work here.
In this kind of case, where you're adding Group entities to Person.Groups, I would just save the Person and expect RIA to handle the Groups for me.
But let's take a step back: how are you trying to persist your changes? You shouldn't be saving/updating entities one by one. All you have to do is call DomainContext.SubmitChanges, and all your changes will be persisted.
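For reference, here is a bare-bones sketch of that flow on the Silverlight side (PersonDomainContext, GetPersonsQuery, and the entity names are placeholders for whatever your generated domain context actually exposes):

// Hypothetical client-side usage of a generated RIA Services domain context.
var ctx = new PersonDomainContext();
ctx.Load(ctx.GetPersonsQuery(), loadOp =>
{
    var person = loadOp.Entities.First();
    person.Groups.Add(new Group { Name = "Admins" });  // modify the graph on the client

    // One call persists the whole change set; RIA works out the inserts/updates/deletes.
    ctx.SubmitChanges(submitOp =>
    {
        if (submitOp.HasError)
        {
            // surface the error to the user and mark it handled
            submitOp.MarkErrorAsHandled();
        }
    }, null);
}, null);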
I work with pretty complicated projects and I seldom ever have to touch add/update code.
This question has been around with no solid answer, so I'll tell you what I did... which is nothing. That's how I handled it in RIA services, using the code above, since I was sharing the RIA client model and the server model.
After working with RIA Services for a year and a half, I'm in the camp that believes RIA Services is good for smaller, less complex apps. If you can use [Composition] for your entities, which I couldn't for many of mine, then you're fine.
RIA Services makes it really quick to throw together small applications where you want to use the entity from EF directly, but if you want to use POCOs or you foresee your application getting complex in the future, I would stick with building POCOs on the service end, passing them through regular WCF, and sharing behavior by making your POCOs partial classes and sharing the behavior code with the client. When I tried to create models that work the same on the client and the server, I had to write a ridiculous amount of plumbing code to make it work.
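To illustrate the shared-partial-class idea, a minimal sketch (the property names are invented; the shared file can be linked into the client project, or named *.shared.cs if you're still under RIA Services so it gets copied across automatically):

// Person.shared.cs - compiled on both the server and the Silverlight client.
public partial class Person
{
    // Behavior shared with the client without duplicating code.
    public string FullName
    {
        get { return string.Format("{0} {1}", this.FirstName, this.LastName); }
    }
}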
It definitely IS possible to do - I've done it - but there are a lot of hoops you must jump through for everything to work well. I never fully took into consideration things like my shared model pre-loading lists for use on the client; the server didn't need these preloaded every time, which unnecessarily slowed down loading of the web page, and I countered by writing hacky method calls that I then had to adopt on the client. The technique I chose definitely had its issues.
I am working on a Prism desktop application and would like to know the best way to deal with lookup / reference data lists when using a WCF backend. I think this question covers a few areas, and I would appreciate some guidance.
For example, consider a lookup that contains Products (codes and descriptions) which would be used in a lot of different input screens in the system.
Does the viewmodel call the WCF service directly to obtain the data to fill the control?
Would you create a control that solely deals with Products, with its own view model etc., and then use it in every place that needs a product lookup, or would you re-implement, say, a combobox that repopulates the Products ItemsSource in every single form view model that uses it?
Would I create a brand new WCF service called something like LookupDataService and use that to populate my lookup lists? I am concerned I will end up with lots of lookups if I do this.
What other approaches are there for going about this?
I suggest creating your lookup object/component as a proxy for the WCF service. It can work in several ways, but the simplest that comes to mind would be:
Implement a WCF service with methods that provide all Product entities, as well as a single requested one (e.g. by product code)
Implement a component that uses the WCF client to get products; let's call it ProductsProvider
Have your view models take a dependency on ProductsProvider (e.g. via constructor injection)
The key element in this model is ProductsProvider - it works as a kind of cache for Product objects. First, it asks the web service for all products (or some subset, up to your liking) to start with. Then, whenever you need to look up a product, you ask the provider - it's the provider's responsibility to decide how the product should be looked up: maybe it's already in the local list? Maybe it needs to call the web service for an update? Example:
public class ProductsProvider
{
    private readonly IList<Product> products;
    private readonly IProductsService serviceClient;

    public ProductsProvider(IProductsService serviceClient)
    {
        this.serviceClient = serviceClient;
        this.products = serviceClient.GetAllProducts();
    }

    public Product LookUpProduct(string code)
    {
        // 1: check if our local list contains a product with the given code
        //    (assumes Product exposes a Code property)
        var product = this.products.FirstOrDefault(p => p.Code == code);
        if (product != null)
        {
            return product;
        }

        // 2: if it does not, ask the service
        product = this.serviceClient.LookUpProduct(code);
        if (product != null)
        {
            this.products.Add(product);   // cache it for next time
            return product;
        }

        // 3: if the service also doesn't know such a product:
        //    throw, return null, report an error - whatever fits your app
        return null;
    }
}
Now, what this gives you is:
you only need to have one ProductsProvider instance
better flexibility with when and how your service is called
your view models won't have to deal with WCF at all (a small sketch of this follows below)
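A rough sketch of how a view model might consume the provider (the view model and method names are made up for illustration):

public class OrderEntryViewModel
{
    private readonly ProductsProvider productsProvider;

    public OrderEntryViewModel(ProductsProvider productsProvider)
    {
        // The provider is injected; the view model never touches the WCF client.
        this.productsProvider = productsProvider;
    }

    public void OnProductCodeEntered(string code)
    {
        var product = this.productsProvider.LookUpProduct(code);
        // bind 'product' to the view, show an error if it is null, etc.
    }
}

Registering ProductsProvider as a singleton in your container keeps the single cached instance shared across all view models.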
Edit:
As for your second question: a dedicated control may not be needed, but having a view model for the Product entity is definitely a good idea.
I'm working on an out-of-browser Silverlight app that provides some MS Office Communicator 2007 controls. I'm using the Automation SDK. The docs that were installed with the SDK state that there's a MyGroups property on the IMessenger2 interface, which will return the groups that a user has defined, but when I try to use it, I get a NotImplementedException. Here's the code that I'm using:
dynamic communicator = AutomationFactory.CreateObject("Communicator.UIAutomation");
communicator.AutoSignin();

foreach (dynamic g in communicator.MyGroups)
{
    // Do something with the group
}
If I replace MyGroups with MyContacts, I can get the contact list just fine. Do I have to do something different to access properties in the IMessenger2 interface? I've seen a few things on the web that say that MyGroups was deprecated for Windows Messenger, but from the docs, it seems like it should be available for MS Office Communicator.
If I can't use MyGroups, is there another way to get the groups that a user has created?
The problem here is that the MyGroups property is marked as NotScriptable, meaning you can't call it the way you are doing, i.e. via the AutomationFactory. For security reasons, some properties and methods in the Automation API are not scriptable - this is to avoid malicious pages automating Communicator and carrying out certain tasks without you knowing.
It looks like the COM interop in Silverlight is treated in the same way as e.g. creating and calling the API from VBScript, so you won't be able to access any of the non-scriptable properties and methods. See the reference for details of which properties and methods are not scriptable.
I'm guessing this is going to seriously hobble your app. I think what's hurting you is the decision to go with Silverlight OOB. Is there any way you could use WPF (or even winforms) rather than Silverlight? If you did this, you could reference the API directly, and have full access to all properties/methods.
Otherwise, I can't think of too many options. You can't trap the OnContactAddedToGroup event, as this is not scriptable.
It might be possible to wrap the API with a .NET assembly, and expose it via COM, then instantiate it in the same way - but the Not Scriptable might still be respected in that case, so it won't buy you anything. Hard to say without trying it, and still a fairly horrible solution.
Edit: I've just given the wrapper method a try (needed to do something similar as a proof of concept for a customer), and it seems to work. This is the way I did it:
Create a new .NET class library. Define a COM interface:
[ComVisible(true)]
[Guid("8999F93E-52F6-4E29-BA64-0ADC22A1FB11")]
public interface IComm
{
    string GetMyGroups();
}
Define a class that implements that interface (you'll need to reference CommunicatorAPI.dll from the SDK):
[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
[GuidAttribute("C5C5A1A8-9BFB-4CE5-B42C-4E6688F6840B")]
[ProgId("Test.Comm.1")]
public class Comm : IComm
{
    public string GetMyGroups()
    {
        var comm = new CommunicatorAPI.MessengerClass();
        var groups = comm.MyGroups as IMessengerGroups;
        return string.Join(", ", groups.OfType<IMessengerGroup>().Select(g => g.Name).ToArray());
    }
}
Build, and register using RegAsm. Then call it from the OOB Silverlight app:
dynamic communicator = AutomationFactory.CreateObject("Test.Comm.1");
MessageBox.Show(communicator.GetMyGroups());
Note, the same technique also works using the Lync API:
public string GetMyGroups()
{
    var comm = LyncClient.GetClient();
    return string.Join(", ", comm.ContactManager.Groups.Select(g => g.Name).ToArray());
}
Although this works, I can't really say whether it's a good practice, as it's working around a security restriction which was presumably there for a good reason. I guess the worst that could happen is that a malicious web page could potentially use the component, if it knew the ProgId of the control.
Edit: Also, using this method you'd need to be careful about memory leaks, e.g. make sure you're releasing COM objects when you're finished with them - easy enough to do, just needs a little discipline ;o)
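For instance, the GetMyGroups method above could be made a bit more explicit about cleanup (a sketch only, using the same CommunicatorAPI types as before):

public string GetMyGroups()
{
    var comm = new CommunicatorAPI.MessengerClass();
    try
    {
        var groups = comm.MyGroups as IMessengerGroups;
        var names = groups.OfType<IMessengerGroup>().Select(g => g.Name).ToArray();
        System.Runtime.InteropServices.Marshal.ReleaseComObject(groups);
        return string.Join(", ", names);
    }
    finally
    {
        // Release the Communicator RCW deterministically instead of waiting for finalization.
        System.Runtime.InteropServices.Marshal.ReleaseComObject(comm);
    }
}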
I have been using Prism for a while now and enjoy how much easier it is to decouple my modules.
This works especially great for views and view models since you can inject the view models via interfaces and the views via the region manager.
Unfortunately this only works when my views are full-blown user controls, unless I'm missing something here (and I sincerely hope I am).
A lot of times though, I'll create a ViewModel and a matching DataTemplate. These can then be used by other assemblies to compose a view.
My problem is that I see no way of referring to these DataTemplates without referencing the containing assembly, so in my XAML file I write something like:
<ResourceDictionary Source="pack://application:,,/......>
Of course this is not really decoupled, although I try to make sure that I don't refer to the assembly anywhere else in my code.
Another solution I thought of was to put the DataTemplates into the Infrastructure project, but I don't like that too much either, as I want everything that belongs to a module to be contained in it (except the interfaces, of course).
So, does anyone have a good workaround, or did I miss some Prism feature?
I would suggest creating a service that encapsulates adding resource dictionaries to the Application.Resources.MergedDictionaries collection.
// Service interface (defined in the 'infrastructure' project)
public interface IResourceAggregator
{
    void AddResource(Uri resourceUri);
}

// Service implementation (implemented at the application/shell level)
class ResourceAggregator : IResourceAggregator
{
    public void AddResource(Uri resourceUri)
    {
        var resourceDictionary = new ResourceDictionary() { Source = resourceUri };
        var app = Application.Current;
        app.Resources.MergedDictionaries.Add(resourceDictionary);
    }
}
I would expect you would "resolve" this service during module load and use it to "register" the module-local resource dictionaries.
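A sketch of what that could look like from a module's initialization (the module name, assembly name, and resource path are placeholders):

public class OrdersModule : IModule
{
    private readonly IResourceAggregator resources;

    public OrdersModule(IResourceAggregator resources)
    {
        this.resources = resources;
    }

    public void Initialize()
    {
        // Merge this module's DataTemplates into Application.Resources so other
        // assemblies can use them without referencing this assembly.
        this.resources.AddResource(new Uri(
            "pack://application:,,,/OrdersModule;component/Resources/DataTemplates.xaml",
            UriKind.Absolute));
    }
}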
You would need to merge the resources when the module starts. You can read more about this here: http://blogs.southworks.net/jdominguez/2008/09/presentation-model-with-datatemplates-in-compositewpf-prism-sample/
Of course you can further abstract this functionality into a reusable service.
Currently, for ASP.NET work I use a request model where a context is created per request (only when needed) and disposed of at the end of that request. I've found this to be a good balance between the old using-per-query model and keeping a context around forever. The problem is that in WPF, I don't know of anything that could be used like the request model. Right now it looks like the choice is between keeping the same context forever (which can be a nightmare) or going back to the annoying using-per-query model, which is a huge pain. I haven't seen a good answer on this yet.
My first thought was to have an Open and Close (or whatever name) arrangement where the top-level method being called (say an event-handling method like Something_Click) would "open" the context and "close" it at the end. But since nothing in the UI project is aware of the context (all queries are contained in methods on partial classes that "extend" the generated entity classes, effectively creating a pseudo layer between the entities and the UI), this seems like it would make the entity layer dependent on the UI layer.
I'm really at a loss, since I'm not hugely familiar with state programming.
Addition:
I've read up on using threads, but the problem I have with a context just sitting around is error and recovery. Say I have a form that updates user information and there's an error. The user form will now display the changes to the user object in the context, which is good since it makes a better user experience not to have to retype all the changes.
Now what if the user decides to go to another form? Those changes are still in the context. At this point I'm stuck with either an incorrect User object in the context, or I have to get the UI to tell the context to reset that user. I suppose that's not horrible (a reload method on the user class?) but I don't know if that really solves the issue.
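If you do go the "reload method" route mentioned above, one possible sketch using the ObjectContext API (the partial User class is an assumption about your generated entities):

public partial class User
{
    public void Reload(ObjectContext context)
    {
        // Throw away any pending in-memory changes to this entity and
        // re-read its current values from the database.
        context.Refresh(RefreshMode.StoreWins, this);
    }
}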
Have you thought about trying a unit of work? I had a similar issue where I essentially needed to be able to open and close a context without exposing my EF context. I think we're using different architectures (I'm using an IoC container and repository layer), so I have to cut up this code a bit to show it to you. I hope it helps.
First, when it comes to that "Something_Click" method, I'd have code that looked something like:
using (var unitOfWork = container.Resolve<IUnitOfWork>())
{
    // do a bunch of stuff to multiple repositories,
    // all of which will share the same context from the unit of work
    if (isError == false)
        unitOfWork.Commit();
}
In each of my repositories, I'd have to check to see if I was in a unit of work. If I was, I'd use the unit of work's context. If not, I'd have to instantiate my own context. So in each repository, I'd have code that went something like:
if (UnitOfWork.Current != null)
{
    return UnitOfWork.Current.ObjectContext;
}
else
{
    return container.Resolve<Entities>();
}
So what about that UnitOfWork? Not much there. I had to cut out some comments and code, so don't take this class as working completely, but... here you go:
public class UnitOfWork : IUnitOfWork
{
    private static LocalDataStoreSlot slot = Thread.AllocateNamedDataSlot("UnitOfWork");
    private Entities entities;

    public UnitOfWork(Entities entities)
    {
        this.entities = entities;
        Thread.SetData(slot, this);
    }

    public Entities ObjectContext
    {
        get
        {
            return this.entities;
        }
    }

    public static IUnitOfWork Current
    {
        get { return (UnitOfWork)Thread.GetData(slot); }
    }

    public void Commit()
    {
        this.entities.SaveChanges();
    }

    public void Dispose()
    {
        entities.Dispose();
        Thread.SetData(slot, null);
    }
}
It might take some work to factor this into your solution, but this might be an option.