I had a talk with my team lead about this topic, and from his point of view we can just use bindings and commanding, omitting the ViewModel, because we can test UI behaviour without a VM using Automation or our own home-grown UI-testing mechanisms (based on automated clicks on Views). So, if there are no real benefits, why should I spawn "redundant" entities? Besides, automated integration tests look much more indicative than VM tests. So it seems that we can mix VMs and Models.
Update:
I agree that mixing VMs and Models puts the data model and the rules for transforming that data for display in a View into a single .cs file. But if that's the only benefit, I don't want to create a VM for each View.
So, what pros of the VM do you know?
The VM is the logic behind your UI. Combining the UI code with that logic ends up in a mess, at least in my experience. Your view should define what you see: buttons, menus, etc. Your VM is in charge of the bindings and of handling events raised by the user.
Edit:
Not wanting to create a VM for each view doesn't sound like a software-oriented reason. Doing so will leave your view clean of logic and your model free to be the connecting layer between the core layer and your app layer.
I like the following example regarding the model and its role (and why it shouldn't be combined with the VM): imagine you're developing an Android app that uses Google Maps. Google Maps is your core. Then one fine day you really need the option to, say, color parts of the map in pink, bright pink. An email to Google asking for colorPink(Map) will probably get you nowhere. That's where your model steps in: it provides the map wrapper you need in order to define your pinky method.
Without a separate model, you'd have to go through every VM that uses the map and update it.
So, the view has a role, the model has a role, the VM is the logic between those.
Edit 2:
There are some cases where there's no need for a model layer. I tended to disagree at first but was convinced after a discussion: in relatively small applications, the model can end up being a redundant wrapper with no added functionality over the core. In such cases, you can use the core objects directly.
What is he binding to? Whatever he is binding to is effectively a view model, whether you call it that or not. If it also doubles as a data model, then you're violating the Single Responsibility Principle (SRP). Essentially, the problem here is that you're lumping together code that services different parts of the application architecture, which will lead to a convoluted mess.
UI testing is a pain, not just because you need to accommodate changes in the View (which can happen many times), but also because the UI tends to interfere with its own testing. Depending on the tests needed, you'll have to keep track of focus, resizing and possibly other (mouse) events, which can make it extremely hard to write a representative test.
It is much easier to scope the tests to the logic in the ViewModel and leave the UI testing to humans. The human testers do not need to test the logic; their only concern should be the UI.
By breaking a ViewModel down into a hierarchy of ViewModels, you might be able to reuse a ViewModel multiple times. There is no rule that says you must have a ViewModel for each View. (After a release or two I end up there anyway, but that's beside the point :) )
It depends on the nature of your Models: sometimes they are simple and can serve both roles, as you suggest. However, depending on the framework, you'll need to dirty up your model with PropertyChanged event code, which can get messy and distracting. In more complex cases, ViewModels serve as a mediator between the view and the model. Suppose you're creating a client app that monitors a remote process or database entries: creating ViewModels for these entities lets you surface your data in a way that is convenient to bind to from a UI framework (but would be silly in a DB framework), encapsulate operations as commands, perform validation, and so on.
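As a rough sketch of that mediation (the Person/PersonViewModel names are hypothetical; the INotifyPropertyChanged plumbing is the standard WPF pattern):

using System.ComponentModel;

// Plain model: no UI concerns, no PropertyChanged noise.
public class Person
{
    public string Name { get; set; }
}

// The view model wraps the model and adds change notification for binding.
public class PersonViewModel : INotifyPropertyChanged
{
    private readonly Person _model;

    public PersonViewModel(Person model)
    {
        _model = model;
    }

    public string Name
    {
        get { return _model.Name; }
        set
        {
            if (_model.Name == value) return;
            _model.Name = value;          // write through to the model
            OnPropertyChanged("Name");    // tell the view
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}

The model stays persistence-friendly, while the VM carries the binding noise.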
I've inherited a functional but messy WPF MVVM application. To my annoyance, there were virtually no unit tests, which makes the adoption of MVVM somewhat pointless. So I decided to add some.
After grabbing the low-hanging fruit, I'm starting to run into trouble. There's a lot of highly interdependent code, especially inside the properties and methods that are used in the Views. It's common, for instance, for one call to set off a chain of "property change" events which in turn set off other calls and so forth.
This makes it very hard to test, because you have to write enormous mocks and set up a large number of properties to test single functions. Of course you only have to do it once per class and then re-use your mock and ViewModel on each test. But it's still a pain, and it strikes me this is the wrong way to go about things. It would be better, surely, to try and break the code down and make it more modular.
I'm not sure how realistic that is in MVVM. And I'm in a vicious circle because without good tests, I'm worried about breaking the build by refactoring to write good tests. The fact it's WPF MVVM is a further concern because nothing tracks the interdependence between View and ViewModel - a careless name change could completely break something.
I'm working in C# with VS2013, and I grabbed the trial version of ReSharper to see if it would help. It's fun to use, but it hasn't helped so far. My experience of unit testing is not vast.
So - how can I approach this sensibly, methodically and safely? And how can I use my existing tools (and look at any other tools) to help?
Start at the heart -- business logic
Your application solves some real-world problems, and this is what the business logic represents. Your view models wrap around those business logic components, even if the components don't exist just yet as separate classes.
As a result, you should treat the view model as a lightweight data preparation/rearrangement object. All the heavy lifting should be done within the business logic objects. The view model should be handed raw, display-ready data.
Modularize
With this important assumption in mind (VM = no BL), move the business logic components to separate, possibly modular projects. Organizing the BL in a modular way will often result in a project structure similar to:
Company.Project.Feature.Contract - interfaces, entities, DTOs relating to feature
Company.Project.Feature - implementation of contract
Company.Project.Feature.Tests - tests for feature implementation
Your goal with #1 and #2 is to reach a state where abandoning WPF and replacing it with a console interface would only require writing the console interface and wiring it to the BL layer. Everything WPF-related (Views, ViewModels) could be shift-deleted, and doing so should not break a thing.
IoC for testability
Get familiar with dependency injection and abstract where needed. Introduce an IoC container. Don't make your code rely on implementations; make it rely on contracts. Whenever a view model takes a dependency on a BL implementation, swap it for a BL abstraction. This is one of the necessary steps to make view models testable.
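As a minimal sketch of that last step (the ICustomerService name and its single method are invented for illustration):

using System.Collections.Generic;

// Contract: lives in Company.Project.Feature.Contract.
public interface ICustomerService
{
    IList<string> GetCustomerNames();
}

// The view model depends on the abstraction, never on the implementation.
public class CustomersViewModel
{
    private readonly ICustomerService _service;

    public CustomersViewModel(ICustomerService service)
    {
        _service = service;
    }

    public IList<string> CustomerNames
    {
        get { return _service.GetCustomerNames(); }
    }
}

In tests you hand the VM a stub or mock; in the application, the IoC container resolves the real implementation.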
This is how I would start. This is by no means an exhaustive list, and I'm convinced you'll run into situations where sticking to it won't be enough. But that's what it is: working with legacy code is not an easy task.
One of my colleagues suggests quite a strange approach where VMs are used like singletons, each attached to several views simultaneously.
I see no pros to such weirdness besides it being a kind of data sharing, as opposed to caching at the data access layer.
I've never seen this in practice, but I don't want to reject the idea just because of that.
P.S. I'm talking about sharing one VM instance, not about applying different views to one VM.
Thanks.
I suppose this approach could be useful if the views were to be kept in sync and made to look identical. Otherwise, I agree that it's confusing.
The only time I've seen singleton view models used in the real world is when you have a view that is also single-instance, meaning that only one copy of the view is allowed to be open at any given time. In that case there is a performance benefit, because the view model doesn't have to be recreated every time the view is re-opened.
I'm not sure I would necessarily call it a "singleton" view model, but I do like to share view model instances between multiple view controls in some cases. For example, this can be quite useful in a master/detail scenario where changes you make to the details may change the way the master part looks visually, say a list/tree view with an editing panel beside it that shows details of the selected item. Sure, you could do this kind of thing by passing messages between two VMs, but that is likely to require more code than simply re-using the VM.
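A rough sketch of that shared-instance idea (all type names hypothetical): the list view and the editing panel bind to the same MasterViewModel, so changes flow between them without any messaging:

using System.Collections.ObjectModel;
using System.ComponentModel;

public class ItemViewModel
{
    // Detail properties of a single item would live here.
    public string Title { get; set; }
}

public class MasterViewModel : INotifyPropertyChanged
{
    private ItemViewModel _selectedItem;

    public MasterViewModel()
    {
        Items = new ObservableCollection<ItemViewModel>();
    }

    // The list/tree view binds to this...
    public ObservableCollection<ItemViewModel> Items { get; private set; }

    // ...and the details panel binds to this. Same VM instance, two views.
    public ItemViewModel SelectedItem
    {
        get { return _selectedItem; }
        set
        {
            _selectedItem = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("SelectedItem"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}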
Where I wouldn't recommend a fully singleton VM is a master/detail scenario where the detail editing is modal, as in a dialog you open to make changes. There you'll want to encapsulate the edit in a separate instance to make cancel support easier. An application-global (e.g. static) singleton implementation just makes that kind of thing much messier.
In theory you must have a View Model for each view, but I think that in some cases it may be useful to use the same view model for different views. For instance, suppose you are displaying a User in different places in the application, and you want that when User.Name changes in one place, it also changes everywhere else the User is shown. For this kind of notification it is better to have only one view model; then, through the INotifyPropertyChanged interface, all views will be notified. I think this is what your colleague may have wanted, but it's also not good to overuse this, because it would increase the complexity of the application and/or may cause some unexpected behavior.
I've been wondering for quite some time now how the community actually handles this;
I'm talking about big, business-oriented WPF/Silverlight enterprise applications.
Theoretically, there are different models at play:
Data Model (typically linked to your DB tables; Edmx/NHibernate/.. mapped entities)
Business Model (classes containing actual business logic)
Transfer Model (DTO classes exposed to the outside world/client)
View Model (classes to which your actual views bind)
It's crystal clear that this separation has its obvious advantages;
But does it work in real life? Is it a maintenance nightmare?
So what do you do?
In practice, do you use different class models for all of these models?
I've seen a lot of variations on this, for instance:
Data Model = Business Model: the data model is implemented code-first (as POCOs) and doubles as the business model, with business logic on it as well
Business Model = Transfer Model = View Model: the business model is exposed as-is to the client; no mapping to DTOs takes place, and the view binds directly to this business model
Silverlight RIA Services out of the box, with the Data Model exposed: Data Model = Business Model = Transfer Model, and sometimes even Transfer Model = View Model
..
I know that the "it depends" answer applies here;
but on what does it depend, then?
Which approaches have you used, and how do you look back on it?
Thanks for sharing,
Regards,
Koen
Good question. I've never coded anything truly enterprisey so my experience is limited but I'll kick off.
My current (WPF/WCF) project uses Data Model = Business Model = Transfer Model = View Model!
There is no DB backend, so the "data model" is effectively business objects serialised to XML.
I played with DTOs but rapidly found the housekeeping arduous in the extreme, and the ever-present premature optimiser in me disliked the unnecessary copying involved. This may yet come back to bite me (for instance, comms serialisation sometimes has different needs than persistence serialisation), but so far it hasn't been much of a problem.
Both my business objects and view objects required push notification of value changes, so it made sense to implement them using the same method (INotifyPropertyChanged). This has the nice side effect that my business objects can be directly bound to within WPF views, although using MVVM means the ViewModel can easily provide wrappers if needs be.
So far I haven't hit any major snags, and having one set of objects to maintain keeps things nice and simple. I dread to think how big this project would be if I split out all four "models".
I can certainly see the benefits of having separate objects, but to me until it actually becomes a problem it seems wasted effort and complication.
As I said, though, this is all fairly small scale, designed to run on a few tens of PCs. I'd be interested to read other responses from genuine enterprise developers.
The separation isn't a nightmare at all; in fact, since using MVVM as a design pattern I hugely support its use. I was recently part of a team where I wrote the UI component of a rather large product using MVVM, which interacted with a server application that handled all the database calls etc., and I can honestly say it was one of the best projects I have worked on.
This project had a
Data Model (basic classes without support for INotifyPropertyChanged),
View Model (supports INPC, plus all business logic for the associated view),
View (XAML only),
Transfer Model (methods to call the database only)
I have classed the Transfer model as one thing but in reality it was built as several layers.
I also had a series of ViewModel classes that were wrappers around Model classes, either to add extra functionality or to change the way the data was presented. These all supported INPC and were the ones my UI bound to.
I found the MVVM approach very helpful, and in all honesty it kept the code simple: each view had a corresponding view model which handled the business logic for that view, and then there were various underlying classes which would be considered the Model.
I think separating out the code like this keeps things easier to understand; each ViewModel runs no risk of becoming cluttered, because it only contains things related to its view, and anything common between the view models was handled by inheritance to cut down on repeated code.
The benefit, of course, is that the code instantly becomes more maintainable. Because database calls in my application went through a service, the inner workings of a service method could be changed freely; as long as the data returned and the parameters required stayed the same, the UI never needed to know about it. The same goes for the UI: having no code-behind means the UI can be adjusted quite easily.
The disadvantage is that, sadly, some things you just have to do in code-behind for whatever reason, unless you really want to stick to MVVM and come up with some overcomplicated workaround, so in some situations it can be hard or impossible to stick to a true MVVM implementation (which in our company we took to mean no code-behind).
In conclusion, I think that if you make proper use of inheritance, stick to the design pattern and enforce coding standards, this approach works very well; if you start to deviate, however, things get messy.
Several layers don't lead to a maintenance nightmare; on the contrary, a proper separation makes the application easier to maintain, and I'll try to explain why.
1) The Transfer Model shouldn't be the same as the Data Model
For example, suppose you have the following entity in your ADO.NET Entity Data Model:
public class Customer
{
    public int Id { get; set; }
    public Region Region { get; set; }
    public EntitySet<Order> Orders { get; set; }
}
And you want to return it from a WCF service, so you write the code like this:
dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
And there is the problem: how will consumers of the service be assured that the Region and Orders properties are not null or empty? And if the Order entity has a collection of OrderDetail entities, will they be serialized too? And sometimes you may forget to switch off lazy loading, and the entire object graph will be serialized.
And some other situations:
you need to combine two entities and return them as a single object.
you want to return only part of an entity, for example all the information from a File table except the FileContent column, which holds a binary array.
you want to add or remove some columns from a table but you don't want to expose the new data to existing applications.
So I think you are convinced that auto generated entities are not suitable for web services.
That's why we should create a transfer model like this:
public class CustomerModel
{
    public int Id { get; set; }
    public string Country { get; set; }
    public List<OrderModel> Orders { get; set; }
}
This way you can freely change tables in the database without affecting existing consumers of the web service, and likewise you can change the service models without making changes to the database.
To make the model transformation process easier, you can use the AutoMapper library.
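With AutoMapper, the transformation boils down to something like this (a sketch against the types above, using the MapperConfiguration API of recent AutoMapper versions; I'm assuming Region has a Name property for the sake of the example):

using AutoMapper;

var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Customer, CustomerModel>()
       // Country isn't auto-mappable from Region, so map it explicitly.
       .ForMember(d => d.Country, o => o.MapFrom(s => s.Region.Name));
    cfg.CreateMap<Order, OrderModel>();
});
IMapper mapper = config.CreateMapper();

// Entity -> transfer model, ready to return from the service.
CustomerModel model = mapper.Map<CustomerModel>(customer);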
2) It is recommended that the View Model shouldn't be the same as the Transfer Model
Although you can bind a transfer model object directly to a view, it will work only as a "OneTime" relation: changes in the model will not be reflected in the view, and vice versa.
A view model class in most cases adds the following features to a model class:
Notification about property changes using the INotifyPropertyChanged interface
Notification about collection changes using the ObservableCollection class
Validation of properties
Reaction to events of the view (using commands or the combination of data binding and property setters)
Conversion of properties, similar to {Binding Converter...}, but on the side of view model
And again, sometimes you will need to combine several models to display them in a single view. And it is better not to depend on the service objects directly but to define your own properties, so that if the structure of a model changes, the view model can stay the same.
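A compact sketch of the first three points above (type names hypothetical; IDataErrorInfo is one common way to surface validation in WPF):

using System.Collections.ObjectModel;
using System.ComponentModel;

public class OrderViewModel { /* wraps an OrderModel */ }

public class CustomerViewModel : INotifyPropertyChanged, IDataErrorInfo
{
    private string _country;

    public CustomerViewModel()
    {
        // Collection changes are pushed to the view automatically.
        Orders = new ObservableCollection<OrderViewModel>();
    }

    public ObservableCollection<OrderViewModel> Orders { get; private set; }

    public string Country
    {
        get { return _country; }
        set { _country = value; OnPropertyChanged("Country"); }
    }

    // Validation surfaced to the view via IDataErrorInfo.
    public string this[string columnName]
    {
        get
        {
            if (columnName == "Country" && string.IsNullOrEmpty(Country))
                return "Country is required.";
            return null;
        }
    }

    public string Error { get { return null; } }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}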
I always use the three layers described above when building applications, and it works fine, so I recommend the same approach to everyone.
We use an approach similar to what Purplegoldfish posted with a few extra layers. Our application communicates primarily with web services so our data objects are not bound to any specific database. This means that database schema changes do not necessarily have to affect the UI.
We have a user interface layer comprising the following sub-layers:
Data Models: This includes plain data objects that support change notification. These data models are used exclusively on the UI, so we have the flexibility to design them to suit its needs. Of course, some of these objects are not so plain, as they contain logic that manipulates their state. Also, because we use a lot of data grids, each data model is responsible for providing the list of its properties that can be bound to a grid.
Views: Our XAML definitions of the views. To accommodate some complex requirements, we had to resort to code-behind in certain cases, as sticking to a XAML-only approach was too tedious.
ViewModels: This is where we define business logic for our views. These guys also have access to interfaces that are implemented by entities in our data access layer described below.
Module Presenter: This is typically a class that is responsible for initializing a module. Its task also includes registering the views and other entities associated with this module.
Then we have a Data Access layer which contains the following:
Transfer Objects: These are usually data entities exposed by the webservices. Most of these are autogenerated.
Data Adapters, such as WCF client proxies and proxies to any other remote data source: these proxies typically implement one or more of the interfaces exposed to the ViewModels and are responsible for making all calls to the remote data source asynchronously, translating all responses into UI-equivalent data models as required. In some cases we use AutoMapper for this translation, but all of it is done exclusively in this layer.
Our layering approach is a little complex, but so is the application. It has to deal with different types of data sources, including web services, direct database access and other kinds of sources such as OGC web services.
So I am currently working on a UI written in WPF. One thing I really like about WPF is the way it leads you to write more decoupled, isolated UI components. One pain point for me in WPF is that it leads you to write more decoupled, isolated UI components that sometimes need to communicate with one another :). This is probably due to my relative lack of UI experience, especially in WPF (I'm not a novice, but most of my work is far more low level than UI design).
Anyway, here is the situation:
At any one time, the central area of the UI displays one of three views implemented as UserControls, let's call them Views A, B, and C.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
I ran into a bug today that had to do with the timing of switching between views. To make it simple, one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed for another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job. Some of you more experience UI / WPF people out there must have come across a similar issue. I have a couple of ideas, but I am hoping someone will present a cleaner approach to me here. I don't like depending upon order of operations (at a high level) for my code to work properly. Thanks in advance for any help you may be able to offer.
I would recommend some kind of parent ViewModel that handles the CurrentView. I wrote an example here a while back if you're interested.
Basically, the parent ViewModel will have a List<ViewModelBase> AvailablePages, a ViewModelBase CurrentPage, and an ICommand ChangePageCommand.
How you choose to display these is up to you. My preferred method is a ContentControl with its Content bound to the CurrentPage, and DataTemplates to determine which View should be displayed based on the ViewModel stored in CurrentPage.
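In code, the parent looks roughly like this (a sketch assuming a ViewModelBase that implements INotifyPropertyChanged with a RaisePropertyChanged helper, and a RelayCommand from your MVVM toolkit of choice; I've used an ObservableCollection rather than a plain List so page additions show up in bindings):

using System.Collections.ObjectModel;
using System.Windows.Input;

public class MainViewModel : ViewModelBase
{
    private ViewModelBase _currentPage;

    public MainViewModel()
    {
        AvailablePages = new ObservableCollection<ViewModelBase>();
        // Switching pages is just setting a property; DataTemplates do the rest.
        ChangePageCommand = new RelayCommand<ViewModelBase>(page => CurrentPage = page);
    }

    public ObservableCollection<ViewModelBase> AvailablePages { get; private set; }

    public ViewModelBase CurrentPage
    {
        get { return _currentPage; }
        set
        {
            _currentPage = value;
            RaisePropertyChanged("CurrentPage");
        }
    }

    public ICommand ChangePageCommand { get; private set; }
}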
Rachel's post sums up my basic approach to this, quite well. However, I would like to add a few things based on your comments which you may want to consider here.
Note that this is all assuming a ViewModel-first approach, as mentioned in comments.
The user will be switching between these views at various times, and there is more than one way to switch views (this works well for the customer, causes some pain in code design currently).
This shouldn't cause pain in the design. The key here is to have a single, consistent way to request a "current ViewModel" change, and the View will follow suit automatically. The actual mechanism used in the View can be anything - changing the VM should be consistent.
Done correctly, there should be little pain in the design, and a lot of flexibility in terms of how the View actually operates.
Right now each view switching mechanism does its own thing to transition to another view. A certain singleton class takes care of storing data and communicating between the views. I don't like this, it's messy, error prone, and the singleton class knows way too much about the details of the UI. I want to eliminate it as much as is possible.
This is where a coordinating ViewModel can really ease things. It does not require a singleton, as it effectively "owns" the individual ViewModels of the views. One fairly simple option is to implement an interface on the ViewModels that includes an event; a ViewModel can raise the event (which I would name based on the intent, not on the "view change"). The coordinating VM subscribes to each child VM and, when the event fires, changes its "CurrentItem" property (for the active VM) to whatever content is appropriate for the request. There are no UI details involved at all.
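A sketch of that event-based wiring (the interface, event and member names are invented for illustration):

using System;
using System.ComponentModel;

public interface IPageViewModel
{
    // Raised when this VM has finished its job and wants the app to move on.
    event EventHandler WorkCompleted;
}

public class CoordinatorViewModel : INotifyPropertyChanged
{
    private object _currentItem;

    // The View's ContentControl binds to this.
    public object CurrentItem
    {
        get { return _currentItem; }
        set
        {
            _currentItem = value;
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("CurrentItem"));
        }
    }

    // The coordinator owns the children; no singleton required.
    public void Register(IPageViewModel child, Func<object> nextViewModel)
    {
        child.WorkCompleted += (s, e) => CurrentItem = nextViewModel();
    }

    public event PropertyChangedEventHandler PropertyChanged;
}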
I ran into a bug today that had to do with the timing of switching between views. To make it simple, one view needs to perform some cleanup when it is unloaded, but that cleanup erases some data that is needed for another view. If the cleanup runs after the other view is loaded, problems ensue. See what I mean? Messy.
This is screaming out for refactoring. A ViewModel should never clean up data it doesn't own. If this is occurring, it means a VM is cleaning up data that really should be managed separately. Again, a coordinating VM could be one way to handle this, though it's very difficult to say without more information.
I don't like depending upon order of operations (at a high level) for my code to work properly.
This is the right way to think here. There should be no dependencies on order within your code if it can be avoided, as it will make life much simpler over time.
I am trying to take a step back and imagine a different way to get these views loaded with the data they need to do their job.
The approach Rachel and I are espousing here is effectively the same approach I used in my series on MVVM to implement the master-detail View. The nice thing here is that the "detail" portion of the View does not always have to be the same type of ViewModel or View - if you use a ContentPresenter bound to a property that's just an Object (or an interface that the VMs share), you can easily switch out the Views with completely different Views merely by changing the property value at runtime.
My suggestion is to have one main view model that coordinates everything (not static, not a singleton) and to use sub view models to move data around. This keeps the decoupling you are looking for, provides testability, and lets you control when the data for each object changes.
I have a WPF application built using the MVVM pattern:
My Models come from LINQ to SQL.
I use the Repository Pattern to abstract away the DataContext.
My ViewModels have a reference to a Model.
Setting a property on the ViewModel causes that value to be written through to the Model.
As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext.
However, in this question I read:
The guidelines from the MSDN documentation on the DataContext class are what I would recommend following:
In general, a DataContext instance is designed to last for one "unit of work" however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations.
How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?
When I write this kind of software, my VMs never hold a reference of any kind to the data model. When you couple them like this, you are pretty much married to a simple two-tier solution, which can be really tough to break out of.
If you disconnect the DataContext entirely from your VM and have them live on their own, your application becomes:
Much more testable -- your VMs can be tested without the datacontext
Potentially stateless at the data layer -- it's easy to change your architecture to adopt a stateless 3-tier based solution.
Able to easily integrate other data sources -- your VMs can elegantly contain aggregates and combinations of other data sources on their own.
So in short, yes, I would recommend decoupling the data access classes from the ViewModel objects throughout the app. It might be a bit more code, but it will pay dividends down the road with the architecture's flexibility.
EDIT: I don't use the change-tracking features of the L2SQL objects once they've crossed a distribution boundary. That turns into a can of worms pretty quickly: you can spend a lot of time on the care and feeding of your data model's object graph inside your ViewModel, which not only adds complexity to the ViewModel but also, at least transitively, couples the ViewModel to the schema of the database. Instead of doing that, I keep the ViewModel pure. When the time comes to persist it, your service layer/repository/whatever does the translation between the ViewModel and the data objects. This seems like a lot of work at first, but for anything other than simple CRUD, the design pays off pretty quickly.
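A sketch of that translation step (all types hypothetical: a pure CustomerViewModel with Id and Name, a MyDataContext, and a Customer L2SQL entity); the point is that the mapping lives in the repository, not in the VM:

using System.Linq;

public class CustomerRepository
{
    public void Save(CustomerViewModel vm)
    {
        // Method-scoped unit of work, per the L2SQL guidelines.
        using (var dc = new MyDataContext())
        {
            var entity = dc.Customers.SingleOrDefault(c => c.Id == vm.Id);
            if (entity == null)
            {
                entity = new Customer();
                dc.Customers.InsertOnSubmit(entity);
            }

            // Translation happens here; the VM never sees the DataContext.
            entity.Name = vm.Name;
            dc.SubmitChanges();
        }
    }
}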
However you manage data inside a program, instances of the L2SQL-generated classes are created like regular objects, without any use of a data context. The DataContext is only responsible for interacting with the database.
Those guidelines probably mean that you can do anything you like with objects of the L2SQL-generated classes, but you shouldn't keep around the object that transfers data between them and the database. You can treat L2SQL classes like any other classes, usable however you want, with the additional capability of being read from and written to a database.
This is a guess: I cannot say what was in the head of the MSDN author of that magic phrase, but this explanation makes sense. Store data in L2SQL-generated class objects for the whole lifetime of the program, and synchronize it with the database through temporary DataContext objects.
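A sketch of that lifecycle (MyDataContext and Customer are hypothetical L2SQL-generated types):

using System.Linq;

// Fetch with a short-lived context; the object outlives it.
Customer customer;
using (var dc = new MyDataContext())
{
    customer = dc.Customers.First(c => c.Id == 42);
}

// The program works with the object for as long as it needs to...
customer.Name = "New name";

// ...and synchronizes later via another temporary context. One simple,
// safe approach is to re-read the row and copy the changed values across.
using (var dc = new MyDataContext())
{
    var fresh = dc.Customers.First(c => c.Id == customer.Id);
    fresh.Name = customer.Name;
    dc.SubmitChanges();
}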