Calling WCF services in MVVM?

I am working on a Prism desktop application and would like to know the best way to deal with lookup/reference data lists when using a WCF backend. I think this question may cover a few areas, and I would appreciate some guidance.
For example, consider a lookup that contains Products (codes and descriptions) which would be used in a lot of different input screens in the system.
Does the viewmodel call the WCF service directly to obtain the data to fill the control?
Would you create a control that solely deals with Products, with its own viewmodel etc., and then use that in every place that needs a product lookup, or would you re-implement, say, a combobox that repopulates the products ItemsSource in every single form view model that uses it?
Would I create a brand new WCF service called something like LookupDataService and use that to populate my lookup lists? I am concerned I will end up with lots of lookups if I do this.
What other approaches are there for going about this?

I suggest creating your lookup object/component as a proxy for the WCF service. It can work in several ways, but the simplest that comes to mind would be:
Implement a WCF service with methods that provide all Product entities, plus a single requested one (e.g. looked up by product code)
Implement a component that uses the WCF client to get products; let's call it ProductsProvider
Your view models take a dependency on ProductsProvider (e.g. via constructor injection)
The key element in this model is ProductsProvider - it works as a kind of cache for Product objects. First, it asks the web service for all products (or some subset, up to your liking) to start with. Then, whenever you need to look up a product, you ask the provider - it is the provider's responsibility to decide how the product should be looked up: maybe it's already in the local list? Maybe it needs to call the web service for an update? Example:
using System.Collections.Generic;
using System.Linq;

public class ProductsProvider
{
    private readonly IList<Product> products;
    private readonly IProductsService serviceClient;

    public ProductsProvider(IProductsService serviceClient)
    {
        this.serviceClient = serviceClient;
        this.products = serviceClient.GetAllProducts();
    }

    public Product LookUpProduct(string code)
    {
        // 1: check if our local list contains a product with the given code
        //    (assumes Product exposes a Code property)
        Product product = this.products.FirstOrDefault(p => p.Code == code);
        if (product != null)
            return product;

        // 2: if it does not, fall back to the service and cache the result
        product = this.serviceClient.LookUpProduct(code);
        if (product != null)
            this.products.Add(product);

        // 3: if the service doesn't know such a product either, this returns null
        //    (alternatively: throw or report an error)
        return product;
    }
}
Now, what this gives you is:
you only need one ProductsProvider instance
better flexibility over when and how your service is called
your view models won't have to deal with WCF at all
Edit: As for your second question: a dedicated control may not be needed, but having a view model for the Product entity is definitely a good idea.
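To illustrate the constructor-injection point, here is a minimal sketch of a form view model consuming the provider; the OrderEntryViewModel name and the AllProducts accessor are assumptions for the example, not part of the code above:

using System.Collections.Generic;

public class OrderEntryViewModel
{
    private readonly ProductsProvider productsProvider;

    public OrderEntryViewModel(ProductsProvider productsProvider)
    {
        // The shared provider is injected; no WCF details leak into the view model.
        this.productsProvider = productsProvider;
    }

    // Bind a ComboBox's ItemsSource to this property in XAML.
    // AllProducts is an assumed accessor over the provider's cached list.
    public IEnumerable<Product> Products
    {
        get { return this.productsProvider.AllProducts; }
    }
}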

Related

Zend Framework 2 tableGateway pattern workflow with other tables

I am just getting to grips with Zend Framework 2 Database theory, having used version 1 for a long time. I am trying to discern the 'right' way of working with many tables when the business logic requires one table object to defer operations to another table object.
It seems like a lengthy and laborious process to instantiate a different table class in the existing gateway class. If I use the same process as the service manager factory, e.g.
$yourData = $myData['thisPart'];
// Reuse the adapter from the gateway we already have
$dbAdapter = $this->_myTableGateway->getAdapter();
// Build a result set prototype for the other table's entity
$resultSetPrototype = new ResultSet();
$resultSetPrototype->setArrayObjectPrototype(new TestObject());
// Manually construct the second table gateway and its table class
$tbl = new TestTable(new TableGateway('tbl_name', $dbAdapter, null, $resultSetPrototype));
$tbl->insertSomeData($yourData);
... then I suppose it will work, but the service manager is not supposed to be available in the table class. I could inject it using the factory definition but that doesn't seem like a great idea.
So I suppose my question is, what is the best way for a class (representing a table and using this pattern) to insert some of its data into another table using a different gateway class. Or is the method above the only/'right' way?
It seems like the best approach for this problem would be to implement ServiceLocatorAwareInterface in each model that needs to interact with other table gateway classes. You can then define the setServiceLocator(ServiceLocatorInterface $sl) method, which is passed a reference to the Service Locator automatically.

WPF MVVM WCF client/server architecture

I want to build a basic WPF/MVVM application which gets its data from a server with WCF and allows the client to display/manipulate (with CRUD operations) this data.
So far, I have thought about something like this for the architecture:
a "global" model layer, which implements validation, search criteria, INotifyPropertyChanged, and the service contracts
some service layers, mainly one for Entity Framework 4, implementing the contracts of the model layer and allowing me to access and manipulate data.
Note that I want to have an offline data source as well, say XML or something else, and thus another service (I plan on using some DI/IoC)
the WCF layer
an extra layer for storing data client-side?
the ViewModel
I'm clear on the Views/ViewModel part, but I have trouble figuring out the relations between the model, WCF, and the viewmodel.
My questions are:
1. How should I handle the model generated by EF? Get rid of it and go for a code-first approach, manually doing the mapping with the database?
2. For the WCF data transport, should I have relational properties in my model, i.e. a Product has a Customer instead of a CustomerId?
3. Should I have an additional layer between the WCF and the ViewModel, for storing and manipulating data, or is it best practice to plug the ViewModel directly into the WCF?
Any other tips for this kind of architecture are welcome...
There are different solutions for the architecture of a 3-tier WPF application, but here is one possibility:
1+2) One solution is to create "intermediate" objects that represent what your client application actually needs.
For instance, if your application needs to display information about a product plus the associated customer name, you could build the following object:
public class MyProduct
{
    // Properties of the product itself
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    ...

    // Properties that come from the Customer entity
    public string CustomerName { get; set; }
}
You can then expose a stateless WCF service whose contract returns your product from an ID:

[ServiceContract]
public interface IProductService // hypothetical contract name
{
    [OperationContract]
    MyProduct GetProductByID(int productID);
}
On the server side of your application (i.e. the implementation of your service), you can return a MyProduct instance built by querying the database through EF (one context per call):
public MyProduct GetProductByID(int productID)
{
    using (DBContext ctx = new DBContext())
    {
        return (from p in ctx.Products
                where p.ID == productID
                select new MyProduct
                {
                    ProductID = p.ID,
                    ProductName = p.Name,
                    CustomerName = p.Customer.Name // Inner join here
                }).FirstOrDefault();
    }
}
3) Adding an additional layer between the WCF services and the ViewModel might be considered over-engineering. IMHO it's OK to call WCF services directly from the ViewModel. The WCF-generated client proxy code plays the actual role of your model (at least one part of your model).
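To illustrate, a hedged sketch of a view model calling the generated proxy directly; the ProductServiceClient name and the event-based async pair are assumptions about what "Add Service Reference" would generate (with asynchronous operations enabled), not code from this answer:

public class ProductViewModel
{
    // Hypothetical client proxy generated by "Add Service Reference"
    private readonly ProductServiceClient client = new ProductServiceClient();

    public MyProduct Product { get; private set; }

    public void Load(int productID)
    {
        // Assumed event-based async pair for the GetProductByID operation
        client.GetProductByIDCompleted += (s, e) => Product = e.Result;
        client.GetProductByIDAsync(productID);
    }
}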
EDIT:
why should MyProduct reference the CustomerName instead of the Customer? In my case, Customer would have many properties I'd work with. Wouldn't this "mapping" be too expensive?
You can use the actual entities. But on the client side, as it's a 3-tier architecture, you have no access to the DB through the navigation properties. If there were a nested Customer property (of type Customer), the client would have access to Product.Customer.Products, which makes no sense, as you can't lazy-load entities this way (no DB context on the client side).
Flattened "intermediate" POCOs are much simpler IMO. There are no performance issues: the mapping is straightforward, and the CPU usage for this particular operation is infinitesimal compared to the DB request time.
First of all, some general information: there is a really good tutorial on MVVM by Jason Dollinger available at Lab49.
Edit: The video covers most of the needs when architecting a WPF application. Dependency injection and the connection to WCF are also covered (not in depth where WCF is concerned, but with a really strong way of coming up with good solutions). The source code he developed is also available here.
In my opinion, everybody who works with MVVM should see it!
=> 1. How should I handle the model generated by EF? Get rid of it and go for a code-first approach, manually doing the mapping with the database?
AutoMapper can help here (see AutoMapper on CodePlex). Your issue seems like a perfect fit for this!
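For instance, a minimal mapping sketch, assuming the EF Product entity and the flattened MyProduct DTO from the other answer, and using AutoMapper's classic static API:

// Configure once at startup: flatten Product (and its Customer) into MyProduct.
Mapper.CreateMap<Product, MyProduct>()
      .ForMember(dest => dest.CustomerName, opt => opt.MapFrom(src => src.Customer.Name));

// Then, wherever entities are loaded:
MyProduct dto = Mapper.Map<Product, MyProduct>(efProduct);

(AutoMapper's flattening convention would actually pick up Customer.Name -> CustomerName by itself; the explicit ForMember just makes the intent visible.)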
=> 2. For the WCF data transport, should I have relational properties in my model, i.e. a Product has a Customer instead of a CustomerId?
Don't mess with the model! A product id is part of an order, and orders have a customer id. Stick to this. In your service layer you will probably end up with ids anyway, since you probably do not alter products or customers here. If you do (and my orders example does not fit in that case), you can transport the dynamic data, not the static.
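In other words, the wire format carries foreign-key ids rather than nested entities. A hypothetical order DTO might look like this (names are illustrative):

public class OrderDto
{
    public int ProductId { get; set; }   // id only - the product itself is static lookup data
    public int CustomerId { get; set; }  // resolve the full Customer server-side when needed
}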
=> 3. Should I have an additional layer between the WCF and the ViewModel, for storing and manipulating data, or is it best practice to plug the ViewModel directly into the WCF?
In most cases, I have a service layer which gets injected into my view model in the constructor. That can be considered another layer, as it handles the WCF client part and the "changed" events of the server side (row changed, new row, row deleted, etc.).
Edit: If you have to dispatch your service layer events, it is much easier to have that small, lightweight layer between WCF and ViewModel. As soon as you have to, you will probably come up with such a layer naturally.
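A minimal sketch of that layering; all names here are hypothetical (Order is an assumed entity type), and the interface stands in for the thin layer wrapping the WCF client:

using System;
using System.Collections.Generic;

// Hypothetical abstraction over the WCF client proxy
public interface IOrderService
{
    IEnumerable<Order> GetOrders();
    event EventHandler OrdersChanged; // raised for row changed / new row / row deleted
}

public class OrdersViewModel
{
    private readonly IOrderService orderService;

    public OrdersViewModel(IOrderService orderService)
    {
        this.orderService = orderService;
        // The view model never touches WCF directly; it only reacts to the layer's events.
        this.orderService.OrdersChanged += (s, e) => Reload();
    }

    private void Reload()
    {
        // e.g. repopulate an ObservableCollection<Order> from orderService.GetOrders()
    }
}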

Ninject ActivationBlock as Unit of Work

I have a WPF application with MVVM. Assuming object composition from the ViewModel down looks as follows:

MainViewModel
    OrderManager
        OrderRepository
            EFContext
        AnotherRepository
            EFContext
    UserManager
        UserRepository
            EFContext
My original approach was to inject dependencies (from the ViewModelLocator) into my View Model using .InCallScope() on the EFContext and .InTransientScope() for everything else. This results in being able to perform a "business transaction" across multiple business-layer objects (Managers) that, underneath, eventually share the same Entity Framework context. I would simply Commit() said context at the end for a Unit of Work type scenario.
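A hedged sketch of those original bindings (InCallScope comes from the Ninject.Extensions.NamedScope extension; the type names are the ones from the tree above):

// In the ViewModelLocator's kernel configuration
kernel.Bind<EFContext>().ToSelf().InCallScope();         // one shared context per resolution chain
kernel.Bind<OrderManager>().ToSelf().InTransientScope();
kernel.Bind<UserManager>().ToSelf().InTransientScope();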
This worked as intended until I realized that I don't want long-living Entity Framework contexts at the View Model level (data integrity issues across multiple operations, described HERE). I want to do something similar to my web projects, where I use .InRequestScope() for my Entity Framework context. In my desktop application I will define a unit of work which will serve as a business transaction, if you will; typically it will wrap everything within a button click or similar event/command. It seems that using Ninject's ActivationBlock can do this for me.
internal static class Global
{
    public static ActivationBlock GetNinjectUoW()
    {
        // assume that NinjectSingleton is a static reference to the kernel
        // configured with the necessary modules/bindings
        return new ActivationBlock(NinjectSingleton.Instance.Kernel);
    }
}
In my code I intend to use it as such:
// Inside a method that is raised by a WPF Button Command ...
using (ActivationBlock uow = Global.GetNinjectUoW())
{
    OrderManager orderManager = uow.Get<OrderManager>();
    UserManager userManager = uow.Get<UserManager>();

    Order order = orderManager.GetById(1);
    userManager.AddOrder(order);
    ....
    userManager.SaveChanges();
}
Questions:
To me this seems to replicate the way I do business on the web, is there anything inherently wrong with this approach that I've missed?
Am I understanding correctly that all .Get<> calls using the activation block will produce "singletons" local to that block? What I mean is no matter how many times I ask for an OrderManager, it'll always give me the same one within the block. If OrderManager and UserManager compose the same repository underneath (say SpecialRepository), both will point to the same instance of the repository, and obviously all repositories underneath share the same instance of the Entity Framework context.
Both questions can be answered with yes:
Yes - this is service location, which you shouldn't do.
Yes, you understand it correctly.
A proper unit-of-work scope, implemented in Ninject.Extensions.UnitOfWork, solves this problem.
Setup:
_kernel.Bind<IService>().To<Service>().InUnitOfWorkScope();
Usage:
using (UnitOfWorkScope.Create())
{
    // resolves, async/await, manual TPL ops, etc.
}
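A brief sketch of the behavior this gives you (kernel resolution is shown here only to demonstrate the scoping; in real code the dependencies would be injected rather than located):

using (UnitOfWorkScope.Create())
{
    var a = _kernel.Get<IService>();
    var b = _kernel.Get<IService>();
    // a and b are the same Service instance within this scope;
    // a new scope gets a fresh instance.
}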

SL RIA app - Insert and Update using standard generated code does not work - is there a better way?

I have a Silverlight RIA app where I share the models and data access between the MVC web app and the Silverlight app using compiler directives; on the server, to see what context I am running under, I would check whether the ChangeSet object was non-null (meaning I was running under RIA rather than MVC). Everything works all right, but I had problems with the default code generated by the domain service methods.
Let's say I had a Person entity, who belonged to certain Groups (Group entity). The Person object has a collection of Groups which I add or remove. After making the changes, the SL app would call the server to persist the changes. What I noticed happening is that the group entity records would be inserted first. That's fine, since I'm modifying an existing person. However, since each Group entity also has a reference to the existing person, calling AddObject would mark the whole graph - including the person I'm trying to modify - as Added. Then, when the Update statement is called, the default generated code would try to Attach the person, which now has a state of Added, to the context, with not-so-hilarious results.
When I make the original call for an entity or set of entities in a query, all of the EntityKeys for the entities are filled in. Once on the client, the EntityKey is filled in for each object. When the entity returns from the client to be updated on the server, the EntityKey is null. I created a new RIA Services project and verified that this is the case. I'm running RIA Services SP1 and I am not using composition. I kind of understand the EntityKey problem - the change tracking is done on two separate contexts. EF doesn't know about the change tracking done on the SL side. However, it IS passing back the object graph, including related entities, so using AddObject is a problem unless I check the database for the existence of an object with the same key first.
I have code that works. I don't know how WELL it works but I'm doing some further testing today to see what's going on. Here it is:
/// <summary>
/// Updates an existing object.
/// </summary>
/// <typeparam name="TBusinessObject"></typeparam>
/// <param name="obj"></param>
protected void Update<TBusinessObject>(TBusinessObject obj) where TBusinessObject : EntityObject
{
    if (this.ChangeSet != null)
    {
        ObjectStateManager objectStateManager = ObjectContext.ObjectStateManager;
        ObjectSet<TBusinessObject> entitySet = GetEntitySet<TBusinessObject>();
        string setName = entitySet.EntitySet.Name;
        EntityKey key = ObjectContext.CreateEntityKey(setName, obj);
        object dbEntity;
        if (ObjectContext.TryGetObjectByKey(key, out dbEntity) && obj.EntityState == System.Data.EntityState.Detached)
        {
            // An object with the same key exists in the DB, and the entity passed
            // is marked as detached.
            // Solution: Mark the object as modified, and any child objects need to
            // be marked as Unchanged as long as there is no DomainOperation.
            ObjectContext.ApplyCurrentValues(setName, obj);
        }
        else if (dbEntity != null)
        {
            // In this case, TryGetObjectByKey said it failed, but the resulting object is
            // filled in, leading me to believe that it did in fact work.
            entitySet.Detach(obj); // Detach the entity
            try
            {
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
            catch (Exception)
            {
                entitySet.Attach(obj); // Re-attach the entity
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in DB
            }
        }
        else
        {
            // Add it..? Update must have been called mistakenly.
            entitySet.AddObject(obj);
        }
    }
    else
    {
        DirectInsertUpdate<TBusinessObject>(obj);
    }
}
Quick walkthrough: If the ChangeSet is null, I'm not under the RIA context, and therefore can call a different method to handle the insert/update and save immediately. That works fine as far as I can tell. For RIA, I generate a key, and see if it exists in the database. If it does and the object I am working with is detached, I apply those values; otherwise, I force detach and apply the values, which works around the added state from any previous Insert calls.
Is there a better way of doing this? I feel like I'm doing way too much work here.
In this kind of case, where you're adding Group entities to Person.Groups, I would think of just saving the Person and expecting RIA to handle the Groups for me.
But let's take a step back, how are you trying to persist your changes? You shouldn't be saving/updating entities one by one. All you have to do is call DomainContext.SubmitChanges and all your changes should be persisted.
I work with pretty complicated projects and I seldom ever have to touch add/update code.
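As an illustration of that batch-submit flow, a hedged client-side sketch; the PersonDomainContext and GetPeopleQuery names are assumptions about what the RIA code generator would produce for a Person domain service:

var ctx = new PersonDomainContext();
ctx.Load(ctx.GetPeopleQuery(), loadOp =>
{
    Person person = loadOp.Entities.First();
    person.Groups.Add(new Group { Name = "Admins" });

    // One call persists the whole change set: the modified Person and the new Group.
    ctx.SubmitChanges(submitOp =>
    {
        if (submitOp.HasError)
        {
            // inspect submitOp.Error / submitOp.EntitiesInError
            submitOp.MarkErrorAsHandled();
        }
    }, null);
}, null);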
This question has been around with no solid answer, so I'll tell you what I did... which is nothing. That's how I handled it in RIA services, using the code above, since I was sharing the RIA client model and the server model.
After working with RIA Services for a year and a half, I'm in the camp that believes RIA Services is good for working with smaller, less complex apps. If you can use [Composition] for your entities, which I couldn't for many of mine, then you're fine.
RIA Services can make it really quick to throw together small applications where you want to use the entities from EF, but if you want to use POCOs, or you foresee your application getting complex in the future, I would stick with building POCOs on the service end and passing those through regular WCF, sharing behavior by making your POCOs partial classes and sharing the behavior code with the client. When I tried to create models that work the same on the client and the server, I had to write a ridiculous amount of plumbing code to make it work.
It definitely IS possible to do - I've done it - but there are a lot of hoops you must jump through for everything to work well, and I never fully took into consideration things like my shared model pre-loading lists for use on the client, whereas the server didn't need these preloaded every time; it actually slowed down the loading of the web page unnecessarily, and I countered by writing hacky method calls which I had to adopt on the client. (Sorry for the run-on.) The technique I chose to use definitely had its issues.
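To make the partial-class sharing concrete, a hypothetical sketch (PersonDto and the file layout are illustrative): the data half and the behavior half are plain files linked into both the server and Silverlight projects.

// PersonDto.cs - the plain data contract passed through regular WCF
public partial class PersonDto
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// PersonDto.Behavior.cs - linked into both client and server projects
public partial class PersonDto
{
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}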

Serializing Entities with RIA Services

I've got a Silverlight application that requires quite a bit of data to operate and it requires it all up-front. It's using RIA Services (and the Entity Framework) to get all that information. It takes 10-15 seconds to get all the data, but the data only changes about once a month.
What I'd like to do is toss that data into Isolated Storage so that the next time they load up the app, I can just grab it, see if its updated, and if not use that data they've already got and save a ton of time sending things over the wire.
The structure of the graph I need to store is (more-or-less) a typical tree structure. A model has components, a component has features, a feature has options. The issue that I'm coming up against is that when I ask to have this root entity (the model) serialized, it's only serializing the top-level object and ignoring all of the "child" objects.
Does anyone know of a convenient way to get it to serialize/deserialize the whole graph?
If RIA Services is the problem, then I might have a hint.
To transfer collections of objects through RIA you need to do a little tweaking of the domain model.
Let's say you have a Receipt with a list of ReceiptEntries. Then you'd do this:
public class Receipt
{
    public Guid Id { get; set; }
    public List<ReceiptEntry> Entries { get; set; }
}

public class ReceiptEntry
{
    public Guid ReceiptId { get; set; }
}
You have to tell RIA how to associate these objects:
public class Receipt
{
    public Guid Id { get; set; }

    // The attributes go on the navigation property
    [Include]
    [Composition]
    [Association("ReceiptEntries", "Id", "ReceiptId")]
    public List<ReceiptEntry> Entries { get; set; }
}
Then it will serialize the list of objects.
I might write weird syntax because I'm used to VB.NET, or have some minor faults in the sample code - I just threw it up. But if the problem is that RIA doesn't send over the objects the way it should, then you should investigate this scenario, if you didn't already.
