MVVM pattern: ViewModel updates after Model server round-trip - WPF

I have stateless services and anemic domain objects on the server side. The model shared between server and client is a POCO DTO. The client should follow MVVM. The model can be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/viewmodel.
My problem is how to propagate changes after a server round-trip in a clean way. Propagating changes from the ViewModel to the DTO is quite easy. For the way back, it would be possible to throw away the old DTO and replace it wholesale with a new one, but that would cause a lot of redrawing for lists/DataTemplates.
I could gather the server-side changes and transmit them to the client side, but the names of the changed fields would be domain/DTO specific, not ViewModel specific, and the mapping seems nontrivial to me. If I did it imperatively after the round-trip, it would break the separation of concerns/modularity of the ViewModels.
I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper, but those solve only very plain use cases. I don't see how such a tool would map/propagate/convert item additions to or removals from a list, or how it would identify instances in collections so it could merge values into existing instances. It should also propagate validation/error info.
Maybe I should implement INotifyPropertyChanged on the DTO and try to replay server-side events on it, and then bind the ViewModel to it? Would binding solve the collection-merge problems cleanly? Is the EventAggregator from Prism useful for that? Is there any event record-replay component?
Is there a better client-side pattern for an architecture with server-side logic?

Typically, I've kept a reference to the DTO in my Model classes. For multiple models, I ensure each model knows how to construct itself from a DTO, as well as how to Save itself using an injectable "saver" or other service-provider object. Carrying around a reference to the DTO makes it pretty easy, when you call Save() on the model, to modify the old DTO according to the Model before passing it back to the service.
Hopefully any "updates" to other objects after a Save() operation could be communicated in other DTOs, which should then be loaded into the appropriate Model classes used by your ViewModel.
The downside to this is that you do indeed have to write the mapping code, but this is usually the easiest part. I am not convinced this is the best way to do things and I would appreciate reading others' responses.
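A minimal sketch of that shape (CustomerDto, ICustomerSaver, and the property names are all hypothetical):

// Hypothetical sketch: a Model that keeps a reference to its DTO and saves
// through an injectable "saver" service.
public class CustomerDto { public string Name { get; set; } }

public interface ICustomerSaver { void Save(CustomerDto dto); }

public class CustomerModel
{
    private readonly CustomerDto _dto;      // carried around for the round-trip
    private readonly ICustomerSaver _saver; // injectable saver / service provider

    public CustomerModel(CustomerDto dto, ICustomerSaver saver)
    {
        _dto = dto;
        _saver = saver;
        Name = dto.Name; // the model constructs itself from the DTO
    }

    public string Name { get; set; }

    public void Save()
    {
        _dto.Name = Name;  // modify the old DTO according to the Model
        _saver.Save(_dto); // pass it back to the service
    }
}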

I used another variation with a lot of success. We had a real-time GUI, so repetitive redraws on the client were not acceptable.
We made our DTOs aware of their property changes, so they emit PropertyChanged events to the ViewModel, effectively replaying server-side events.
The code is straightforward to write, unit test, etc. It becomes easy to browse when the property-change listeners (interfaces implemented by ViewModels) are typed.
This boundary can also be used to switch threads to the GUI worker thread.
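A rough sketch of that arrangement (the DTO and ViewModel names are illustrative; Dispatcher.BeginInvoke does the thread switch):

using System;
using System.ComponentModel;
using System.Windows.Threading;

// Sketch: the DTO replays server-side changes as PropertyChanged events,
// and the ViewModel marshals them onto the GUI thread.
public class CustomerDto : INotifyPropertyChanged
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; OnPropertyChanged("Name"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

public class CustomerViewModel
{
    public CustomerViewModel(CustomerDto dto, Dispatcher dispatcher)
    {
        // the boundary where we hop onto the GUI thread
        dto.PropertyChanged += (s, e) =>
            dispatcher.BeginInvoke((Action)(() =>
            {
                // copy dto.Name into the bound ViewModel property here
            }));
    }
}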

Related

What is the real purpose of ViewModel in MVVM?

I had a talk with my team lead about this topic, and from his point of view we can just use bindings and commanding, omitting the ViewModel, because we can test UI behaviour without a VM using Automation or our own UI-testing mechanisms (based on automated clicks on Views). So, if there are no real benefits, why should I spawn "redundant" entities? Besides, automated integration tests look much more indicative than VM tests. So it seems that we can mix VMs and Models.
Update:
I agree that mixing VMs and Models puts both the data model and the rules for transforming that data for presentation in a View into a single .cs file. But if that is the only benefit, I don't want to create a VM for each View.
So, what pros of the VM do you know?
The VM is the logic behind your UI. Combining the UI code with the logic ends up in a mess, at least in my experience. Your view should define what you see - buttons, menus, etc. Your VM is in charge of the bindings and of handling events caused by the user.
Edit:
Not wanting to create a VM for each view doesn't sound like a software-oriented reason. Creating one leaves your view clean of logic and your model free to be the connecting layer between the core layer and your app layer.
I like the following example of the model and its role (and why it shouldn't be combined with the VM): imagine you're developing some Android app using Google Maps. Google Maps is your core. Then one fine day you really need the option to, say, color parts of the map in pink - bright pink. An email to Google asking for colorPink(Map) will probably get you nowhere. That's where your model steps in: it provides the map wrapper you need to define your pinky method.
Without a separate model, you'd have to go through every VM that uses the map and update each one.
So, the view has a role, the model has a role, the VM is the logic between those.
Edit 2:
There are some cases where there's no need for a model layer - I disagreed at first but was convinced after a discussion: in relatively small applications, the model can end up being a redundant wrapper with no added functionality over the core. In such cases, you can use the core objects directly.
What is he binding to? Whatever he is binding to is effectively a view model, whether you call it that or not. If it also doubles as a data model, then you're violating the Single Responsibility Principle (SRP). Essentially, the problem is that you're lumping together code that serves different parts of the application architecture, which will lead to a convoluted mess.
UI testing is a pain not just because you need to accommodate changes in the View, which can occur many times, but also because the UI tends to interfere with its own testing. Depending on the tests needed, you'll have to keep track of focus, resizing, and possibly other (mouse) events, which makes it extremely hard to build a representative test.
It is much easier to scope the test to the logic in the ViewModel and leave the UI testing to humans. The human testers do not need to test the logic; their only concern should be the UI.
By breaking down a ViewModel into a hierarchy of ViewModels, you might be able to reuse a ViewModel multiple times. There is no rule that says you must have one ViewModel per View. (After a release or two I end up there, but that's beside the point.)
It depends on the nature of your Model - sometimes Models are simple and can serve both purposes, as you suggest. However, depending on the framework, you'll need to dirty up your model with PropertyChanged event code, which can get messy and distracting. In more complex cases, ViewModels serve as a mediator between the view and the model. Suppose you're creating a client app that monitors a remote process or database entries - creating ViewModels for these entities lets you surface your data in a way that is convenient for a UI framework to bind to (but would be silly in a DB framework), encapsulate operations as commands, perform validation, and so on. A sketch follows.
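As a small illustration, here is a trimmed ViewModel that wraps such an entity, raises change notifications, and exposes an operation as a command (all names are invented; the RelayCommand helper is inlined only to keep the sketch self-contained):

using System;
using System.ComponentModel;
using System.Windows.Input;

// Sketch: a ViewModel mediating between a monitored remote process and the view.
public class ProcessViewModel : INotifyPropertyChanged
{
    private string _status;
    public string Status
    {
        get { return _status; }
        private set { _status = value; OnPropertyChanged("Status"); }
    }

    public ICommand RefreshCommand { get; private set; }

    public ProcessViewModel()
    {
        // illustrative only; a real refresh would query the remote process
        RefreshCommand = new RelayCommand(() => Status = "Running");
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

// Tiny ICommand helper, included so the sketch compiles on its own.
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    public RelayCommand(Action execute) { _execute = execute; }
    public bool CanExecute(object parameter) { return true; }
    public void Execute(object parameter) { _execute(); }
    public event EventHandler CanExecuteChanged;
}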

How to make sure all models have access to service agents?

I have a WPF client in MVVM architecture.
The WPF client needs to connect to a WCF service and send operations to it.
This ability needs to be available from different views, thus meaning different models (right?).
Questions:
Is my assumption that the models are the ones that access the WCF service client correct? Meaning, we do not want the view or the view-model to connect to the WCF service, right? Only the models themselves...
How do I make sure that all the models have access to the WCF service's client? Do I use some kind of 'ServiceLocator'? (I have read the term somewhere but do not know exactly what it means. I would be happy if someone who has done this before could shed some light on the topic.)
Personally, I don't think Models should be anything more than plain container objects that hold data. They should not contain data-access code or any other application logic, beyond perhaps basic validation to verify their data integrity.
Your ViewModels should be the ones responsible for talking to the WCF service. Or better yet, make a repository class that contains all your data-access calls and have your ViewModel use that instead.
Don't forget that with MVVM, your ViewModels are your application. They're responsible for everything from application flow, to business logic, to data access (although sometimes these concepts are abstracted out of the VM, such as using a Repository for data access).
Views are just a user-friendly interface that sits on top of the ViewModels to allow users to interact with them, and Models are just objects used to contain data.
The Model represents data, so the ViewModel should be the one aware of services.
Just make IMyWcfService a dependency of every ViewModel. To accomplish that, you can create an abstract ViewModelBase class with a protected constructor that accepts IMyWcfService, so all concrete ViewModels are obligated to provide this service (see the sketch below).
And, as already stated in the comments, try to avoid a service locator; it would mess up the API and unit testing. Just provide all dependencies as constructor arguments. That makes a class's API clearer, so you can see what is required and don't have to worry about run-time errors like "service locator unable to resolve a service".
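Something along these lines (IMyWcfService is the interface from the answer; its member and the concrete ViewModel are invented for the sketch):

using System;

// Sketch: a base class that forces every concrete ViewModel to receive the service.
public interface IMyWcfService
{
    void SendOperation(string operation);
}

public abstract class ViewModelBase
{
    protected IMyWcfService MyWcfService { get; private set; }

    protected ViewModelBase(IMyWcfService myWcfService)
    {
        if (myWcfService == null) throw new ArgumentNullException("myWcfService");
        MyWcfService = myWcfService;
    }
}

public class OrdersViewModel : ViewModelBase
{
    public OrdersViewModel(IMyWcfService myWcfService) : base(myWcfService) { }

    public void Submit()
    {
        MyWcfService.SendOperation("SubmitOrder"); // illustrative call
    }
}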

WPF/Silverlight enterprise application architecture.. what do you do?

I've been wondering for quite some time now what lives out there in the community;
I'm talking about big, business-oriented WPF/Silverlight enterprise applications.
Theoretically, there are different models at play:
Data Model (typically linked to your DB tables; EDMX/NHibernate/... mapped entities)
Business Model (classes containing actual business logic)
Transfer Model (classes (DTOs) exposed to the outside world/client)
View Model (classes to which your actual views bind)
It's crystal clear that this separation has its obvious advantages;
But does it work in real life? Is it a maintenance nightmare?
So what do you do?
In practice, do you use different class models for all of these models?
I've seen a lot of variations on this, for instance:
Data Model = Business Model: the Data Model is implemented code-first (as POCOs) and also used as the business model, with business logic on it as well
Business Model = Transfer Model = View Model: the business model is exposed as such to the client; no mapping to DTOs, etc., takes place. The view binds directly to this Business Model
Silverlight RIA Services, out of the box, with Data Model exposed: Data Model = Business Model = Transfer Model. And sometimes even Transfer Model = View Model.
..
I know that the "it depends" answer applies here;
but on what does it depend, then?
Which approaches have you used, and how do you look back on it?
Thanks for sharing,
Regards,
Koen
Good question. I've never coded anything truly enterprisey so my experience is limited but I'll kick off.
My current (WPF/WCF) project uses Data Model = Business Model = Transfer Model = View Model!
There is no DB backend, so the "data model" is effectively business objects serialised to XML.
I played with DTOs but rapidly found the housekeeping arduous in the extreme, and the ever-present premature optimiser in me disliked the unnecessary copying involved. This may yet come back to bite me (for instance, comms serialisation sometimes has different needs than persistence serialisation), but so far it hasn't been much of a problem.
Both my business objects and view objects required push notification of value changes, so it made sense to implement them using the same method (INotifyPropertyChanged). This has the nice side effect that my business objects can be directly bound to within WPF views, although using MVVM means the ViewModel can easily provide wrappers if needs be.
So far I haven't hit any major snags, and having one set of objects to maintain keeps things nice and simple. I dread to think how big this project would be if I split out all four "models".
I can certainly see the benefits of having separate objects, but to me until it actually becomes a problem it seems wasted effort and complication.
As I said though, this is all fairly small scale, designed to run on a few tens of PCs. I'd be interested to read other responses from genuine enterprise developers.
The separation isn't a nightmare at all; in fact, since using MVVM as a design pattern I hugely support its use. I was recently part of a team where I wrote the UI component of a rather large product using MVVM, which interacted with a server application that handled all the database calls etc., and I can honestly say it was one of the best projects I have worked on.
This project had a
Data Model (basic classes without support for INotifyPropertyChanged),
View Model (supports INPC and all business logic for the associated view),
View (XAML only),
Transfer Model (methods to call the database only)
I have classed the Transfer Model as one thing, but in reality it was built as several layers.
I also had a series of ViewModel classes that were wrappers around Model classes, either to add additional functionality or to change the way the data was presented. These all supported INPC and were the ones my UI bound to.
I found the MVVM approach very helpful, and in all honesty it kept the code simple: each view had a corresponding view model which handled the business logic for that view, and then there were various underlying classes which would be considered the Model.
I think separating out the code like this keeps things easier to understand. Each View Model doesn't run the risk of being cluttered, because it contains only things related to its view; anything common between the ViewModels was handled by inheritance, to cut down on repeated code.
The benefit of this, of course, is that the code becomes instantly more maintainable. Because database calls in my application were made through a service, the inner workings of a service method could be changed; as long as the data returned and the parameters required stayed the same, the UI never needed to know. The same goes for the UI: having a UI with no code-behind means the UI can be adjusted quite easily.
The disadvantage is that, sadly, some things you just have to do in code-behind for whatever reason, unless you really want to stick to MVVM and devise some overcomplicated solution; so in some situations it can be hard or impossible to stick to a true MVVM implementation (in our company we considered that to mean no code-behind).
In conclusion, I think that if you make use of inheritance properly, stick to the design pattern, and enforce coding standards, this approach works very well; if you start to deviate, however, things get messy.
Several layers don't lead to a maintenance nightmare; moreover, the fewer layers you have, the easier they are to maintain. I'll try to explain why.
1) Transfer Model shouldn't be the same as Data Model
For example, you have the following entity in your ADO.NET Entity Data Model:
Customer
{
    int Id
    Region Region
    EntitySet<Order> Orders
}
And you want to return it from a WCF service, so you write the code like this:
dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
And there is the problem: how will consumers of the service be assured that the Region and Orders properties are not null or empty? And if the Order entity has a collection of OrderDetail entities, will they be serialized too? Besides, you can sometimes forget to switch off lazy loading, and then the entire object graph will be serialized.
And some other situations:
you need to combine two entities and return them as a single object.
you want to return only part of an entity, for example all the information from a File table except the FileContent column, which is a binary array.
you want to add or remove some columns from a table but you don't want to expose the new data to existing applications.
So I think you'll agree that auto-generated entities are not suitable for web services.
That's why we should create a transfer model like this:
class CustomerModel
{
    public int Id { get; set; }
    public string Country { get; set; }
    public List<OrderModel> Orders { get; set; }
}
And you can freely change tables in the database without affecting existing consumers of the web service, just as you can change the service models without making changes in the database.
To make the model-transformation process easier, you can use the AutoMapper library, for example:
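Here is an illustrative snippet using AutoMapper's classic static API (the Region.Name source for Country is an assumption about the entities above; newer AutoMapper versions use MapperConfiguration instead of the static Mapper class):

// Configure the maps once at start-up (sketch).
Mapper.CreateMap<Order, OrderModel>();
Mapper.CreateMap<Customer, CustomerModel>()
      .ForMember(m => m.Country, opt => opt.MapFrom(c => c.Region.Name)); // assumes Region.Name holds the country

// Then, in the service method:
Customer customer = dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
CustomerModel model = Mapper.Map<Customer, CustomerModel>(customer);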
2) It is recommended that View Model shouldn't be the same as Transfer Model
Although you can bind a transfer-model object directly to a view, it will work only as a "OneTime" relation: changes to the model will not be reflected in the view, and vice versa.
A view model class in most cases adds the following features to a model class:
Notification about property changes using the INotifyPropertyChanged interface
Notification about collection changes using the ObservableCollection class
Validation of properties
Reaction to events of the view (using commands or the combination of data binding and property setters)
Conversion of properties, similar to {Binding Converter...}, but on the side of view model
And again, sometimes you will need to combine several models to display them in a single view. It would also be better not to depend on the service objects but to define your own properties, so that if the structure of the model changes, the view model can stay the same.
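For example, a cut-down view model wrapping the CustomerModel above might look like this (sketch only):

using System.Collections.ObjectModel;
using System.ComponentModel;

// Sketch: the view model wraps the transfer model and adds change notification.
public class CustomerViewModel : INotifyPropertyChanged
{
    private readonly CustomerModel _model;

    public CustomerViewModel(CustomerModel model)
    {
        _model = model;
        Orders = new ObservableCollection<OrderModel>(model.Orders);
    }

    public string Country
    {
        get { return _model.Country; }
        set
        {
            _model.Country = value;
            OnPropertyChanged("Country"); // the view updates immediately, not "OneTime"
        }
    }

    public ObservableCollection<OrderModel> Orders { get; private set; }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}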
I always use the three layers described above when building applications, and it works fine, so I recommend this approach to everyone.
We use an approach similar to the one Purplegoldfish posted, with a few extra layers. Our application communicates primarily with web services, so our data objects are not bound to any specific database. This means that database schema changes do not necessarily affect the UI.
We have a user-interface layer comprising the following sub-layers:
Data Models: This includes plain data objects that support change notification. These data models are used exclusively on the UI, so we have the flexibility to design them to suit the needs of the UI. Of course, some of these objects are not so plain, as they contain logic that manipulates their state. Also, because we use a lot of data grids, each data model is responsible for providing the list of its properties that can be bound to a grid.
Views: Our XAML definitions of the views. To accommodate some complex requirements, we had to resort to code-behind in certain cases, as sticking to a XAML-only approach was too tedious.
ViewModels: This is where we define the business logic for our views. These also have access to interfaces implemented by entities in our data-access layer, described below.
Module Presenter: This is typically a class that is responsible for initializing a module. Its task also includes registering the views and other entities associated with this module.
Then we have a Data Access layer which contains the following:
Transfer Objects: These are usually data entities exposed by the web services. Most of these are auto-generated.
Data Adapters such as WCF client proxies and proxies to any other remote data source: These proxies typically implement one or more interfaces exposed to the ViewModels and are responsible for making all calls to the remote data source asynchronously, translating all responses to UI-equivalent data models as required. In some cases we use AutoMapper for the translation, but all of this is done exclusively in this layer. A rough sketch follows this list.
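In rough outline, one of those adapters looks like this (the interface, the stand-in for a generated WCF proxy with its event-based async pattern, and the UI data model are all illustrative):

using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: the ViewModels only ever see this interface, never the WCF proxy.
public interface ICustomerSource
{
    void GetCustomersAsync(Action<IList<CustomerDataModel>> onLoaded);
}

public class CustomerDataModel { public string Name { get; set; } }

public class CustomerDto { public string Name { get; set; } }

public class GetCustomersCompletedEventArgs : EventArgs
{
    public IList<CustomerDto> Result { get; set; }
}

// Stand-in for the proxy that "Add Service Reference" would generate.
public class CustomerServiceClient
{
    public event EventHandler<GetCustomersCompletedEventArgs> GetCustomersCompleted;
    public void GetCustomersAsync() { /* the real generated proxy calls the service here */ }
}

public class CustomerServiceAdapter : ICustomerSource
{
    private readonly CustomerServiceClient _proxy = new CustomerServiceClient();

    public void GetCustomersAsync(Action<IList<CustomerDataModel>> onLoaded)
    {
        _proxy.GetCustomersCompleted += (s, e) =>
        {
            // translate transfer objects into UI data models in this layer only
            IList<CustomerDataModel> models = e.Result
                .Select(dto => new CustomerDataModel { Name = dto.Name })
                .ToList();
            onLoaded(models);
        };
        _proxy.GetCustomersAsync();
    }
}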
Our layering approach is a little complex, but so is the application. It has to deal with different types of data sources, including web services, direct database access, and other sources such as OGC web services.

POCOs, DTOs and IDataErrorInfo

I awoke this morning to a problem!
In all of my components, I have a set of Business Rules which are used to validate DTOs before any changes are committed to the repository.
I've been trying to figure out the best way to get validation errors back to the UI and I came across the IDataErrorInfo interface. Fantastic!
However, the implementation of this interface would transform my DTO into a POCO and make it a larger object in terms of memory usage. At the moment, all of the user controls are bound to the current DTO objects.
Would transforming my DTOs into POCOs have a performance impact? Or is there a better way to get validation messages back to the UI?
MVVM - i.e., your DTOs are wrapped in view models, which are bound to your view.
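For instance, the view model can wrap the DTO and carry IDataErrorInfo itself, so the DTO stays a plain transfer object (a sketch; the DTO and the rule are illustrative):

using System.ComponentModel;

public class CustomerDto { public string Name { get; set; } }

// Sketch: validation lives on the view model, not on the DTO.
public class CustomerViewModel : INotifyPropertyChanged, IDataErrorInfo
{
    private readonly CustomerDto _dto;

    public CustomerViewModel(CustomerDto dto) { _dto = dto; }

    public string Name
    {
        get { return _dto.Name; }
        set { _dto.Name = value; OnPropertyChanged("Name"); }
    }

    // IDataErrorInfo: WPF consults these when ValidatesOnDataErrors=True
    public string Error { get { return null; } }

    public string this[string columnName]
    {
        get
        {
            if (columnName == "Name" && string.IsNullOrEmpty(Name))
                return "Name is required."; // a business rule surfaced at the UI boundary
            return null;
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}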
I've been trying to figure out the best way to get validation errors back to the UI and I came across the IDataErrorInfo interface. Fantastic!
Absolutely. But how come you did not know what you were doing in the first place? IDataErrorInfo is fully documented - not something you should be "coming across" (which sounds accidental).
The implementation of this interface would transform my DTO into a POCO and make it a larger object in terms of memory usage. At the moment, all of the user controls are bound to the current DTO objects.
A DTO has absolutely no business knowing about internal errors - it should never, ever HAVE internal errors. See, DTO means "Data Transfer Object", not "Business Object". The DTO is what the business object should generate to send to the data access layer, and the reason there should be no validation on it is that the business object makes sure ONLY VALID OBJECTS CREATE DTOs.
Btw.,
would transform my DTO into a POCO
I have another surprise discovery for you - your DTO already IS a POCO. POCO means "Plain Old CLR Object", and I guess your DTOs are defined as .NET classes, so - guess what, surprise - they already ARE POCOs.
What you mean (again, something to discover) is that it would transform your DTOs into BOs.
Or is there a better way to get validation messages back to the UI?
No, there is not. The best way to send messages up to the UI is through UI-defined interfaces, and IDataErrorInfo is exactly that.
Your problem is that you have a huge confusion about:
how to program a multi-tiered architecture and build a standard data access layer;
the terms you use (see your trouble knowing what a DTO actually is, and what a POCO is);
and thus you mix up your responsibilities.
See the answers at POCO vs DTO for an explanation of what you are actually mixing up.
Back to the drawing board. Ask your lead developer/architect to give you an introduction to multi-tiered architectures, and read the .NET documentation.
The DTO should only deal with data-transfer concerns.

Is this the correct use of WCF and data contracts?

I'm still pretty new to Silverlight, quite new to WCF, and am trying to broaden my horizons into both. I'd like to learn what are considered good practices along the way.
On the client side, I have a Silverlight application. On the server side, I have a database that the Silverlight application will be utilizing. In between the two (okay, it's server-side, but...), I have a WCF service that the client calls upon to get the data from the database.
I have created a class that is marked as a DataContract and is used by the WCF service. That class is an object model populated with data from the database. On the client side, when it requests and receives an instance of this class, it uses the instance data to instantiate and populate a client-defined object that has additional client-defined members.
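For reference, the server-side piece of that arrangement is typically as simple as this (a minimal sketch; the class and member names are invented):

using System.Collections.Generic;
using System.Runtime.Serialization;

// Sketch: the contract class the WCF service serializes to the Silverlight client.
[DataContract]
public class CustomerData
{
    [DataMember]
    public int Id { get; set; }

    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public List<OrderData> Orders { get; set; }
}

[DataContract]
public class OrderData
{
    [DataMember]
    public decimal Total { get; set; }
}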
It's my use of the DataContract that most worries me. Creating an instance of an object to be serialized and sent, only to be pillaged for its data so another object can be created, seems... inefficient. But if it's considered good practice, I can get past that.
I did consider going the route of a web handler (.ashx) and using a proprietary binary format to communicate the data, but I think the WCF route may be more applicable and useful in the future (thinking: job).
I don't see any particular problem with your approach.
In my mind, what you're describing is the transfer of data from service to client as a DTO (data transfer object), which is then used to populate a view model object. It is also quite common for the DTO and view model objects to use different levels of granularity in the data they represent (typically DTOs will be more coarse-grained), and for the view model to contain behaviour that is specific to the UI.
You might want to look at tools and frameworks that help in the mapping between DTOs and view model objects. One of my favourites is AutoMapper.
