MVVM with XML Model and LinqToXml? - wpf

I've been reading up on the MVVM pattern, and I would like to try it out on a relatively small WPF project. The application will be single-user. Both input and output data will be stored in a "relational" XML file. A schema (XSD file) with Keys and KeyRefs is used to validate the file.
I have also started to get my feet wet with Linq and LinqToXml, and I have written a couple pretty complex queries that actually work (small victories :)).
Now, I'm trying to put it all together, and I'm finding that I'm a little bit confused as to what should go in the Model versus the ViewModel. Here are the options I've been wrestling with so far:
Should I consider the Model the XML file itself and place all the LinqToXml queries in the ViewModel? In other words, not even write a class called Model?
Should I write a Model that is just a simple wrapper for the XML file and XSD Schema Set and performs validation, saves changes, etc.?
Should I put "basic" queries in the Model and "view-specific" queries in the ViewModel? And if so, where should I draw the line between these two categories?
I realize there is not necessarily a "right" answer to this question...I'm just looking for advice and pros/cons, and if anyone is aware of a code example for a similar scenario, that would be great.
Thanks,
-Dan

For a small application, having separate Data Access, Domain Model and Presentation Model layers may seem like overkill, but modeling your application like that will help you decide what goes where. Even if you don't want to decompose your application into three distinct projects/libraries, thinking about where each piece of functionality should go can help you decide.
In that light, pure data access (i.e. loading the XML files, querying and updating them) belongs in the data access layer, since these are technology specific.
If you have any operations that don't relate to your particular data access technology, but could rather be deemed universally applicable within your application domain, these should go into the Domain Model (or what some call the Business Logic).
Any logic whose sole purpose is to provide specific functionality for a particular user interface technology (WPF, in your case) should go into the Presentation Model.
In your case, the XML files and all the LINQ to XML queries belong in the Data Access Layer.
To utilize MVVM, you will need to create a ViewModel for each View that you want to have in your application.
From your question, it is unclear to me whether you have anything that could be considered Domain Model, but stuff like validation is a good candidate. Such functionality should go into the Domain Model. No classes in the Domain Model should be directly bound to a View. Rather, it is the ViewModel's responsibility to translate between the Domain Model and the View.
All the WPF-specific stuff should go in the ViewModel, while the other classes in your application should be unaware of WPF.
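As a rough sketch (the file, schema, element and class names here are made up), the data access piece might look something like this; a ViewModel would then consume it and expose ObservableCollection properties for the view to bind to:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using System.Xml.Schema;

// Data Access Layer: owns the XML file, the schema set and the LINQ to XML queries.
public class XmlCustomerStore
{
    private readonly string _xmlPath;
    private readonly XmlSchemaSet _schemas;
    private readonly XDocument _document;

    public XmlCustomerStore(string xmlPath, string xsdPath)
    {
        _xmlPath = xmlPath;
        _schemas = new XmlSchemaSet();
        _schemas.Add(null, xsdPath);
        _document = XDocument.Load(xmlPath);
    }

    public void Validate()
    {
        // Throws if the document violates the schema (keys, keyrefs, types, ...).
        _document.Validate(_schemas, (sender, args) => { throw args.Exception; });
    }

    public IEnumerable<string> GetCustomerNames()
    {
        return _document.Descendants("Customer")
                        .Select(c => (string)c.Attribute("Name"));
    }

    public void Save()
    {
        _document.Save(_xmlPath);
    }
}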

Scott Hanselman has a podcast that goes over this very topic in detail with Ian Griffiths, an individual who is extremely knowledgeable about WPF and who co-wrote the O'Reilly book "Programming WPF."
Windows Presentation Foundation explained by Ian Griffiths
http://hanselminutes.com/default.aspx?showID=184
The short (woefully incomplete) answer is that the view contains visual objects and minimal logic for manipulating them, while the View Model contains the state of those objects.
But listen to the podcast. Ian says it better than I can.


MVVM and ORM tools

I am confused, please guide me.
Is it mandatory to use an ORM tool (EF or LINQ to SQL) when building an application with the MVVM pattern?
Right now my application returns a DataSet using straight queries like "select * from table".
Can I convert the DataSet/DataTable to a List and then to an ObservableCollection, or do we need to have EF or L2S?
I'm not sure how to kick-start the move to MVVM.
There's no reason you can't build your own Model layer, if that's what you want to do. The nice thing about modern design patterns is that they are generally agnostic toward what you use to fill each part.
I would build specific, separated classes for all your data access code, to keep that first M separate.
An overarching principle of patterns like MVVM and MVC is to separate your various concerns. This helps in so many ways - including, specifically, supporting your ability to use your own data access (Model) while following the general pattern.
Ideally, you would write your code such that if you decided to move to Entity Framework in the future, you could do so without much change in the rest of the code - ideally, without any change at all.
You can write your data access using the Repository pattern, using your custom classes that execute your hand-written SQL and produce classes that your View and ViewModel can deal with. With the Repository being the main place where your other code interacts, if you switch to EF or anything else in the future, you know you don't have to change any of your View or ViewModel code.
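For example, a minimal sketch of such a repository (the Customer class, table and connection string here are placeholders) could look like this; the ViewModel would then wrap the returned list in an ObservableCollection for binding:

using System.Collections.Generic;
using System.Data.SqlClient;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

// The hand-written SQL lives here and nowhere else; the rest of the app sees only Customer objects.
public class SqlCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;

    public SqlCustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IList<Customer> GetAll()
    {
        var customers = new List<Customer>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("select Id, Name from Customer", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    customers.Add(new Customer
                    {
                        Id = reader.GetInt32(0),
                        Name = reader.GetString(1)
                    });
                }
            }
        }
        return customers;
    }
}

// In the ViewModel:
// Customers = new ObservableCollection<Customer>(_repository.GetAll());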

WPF/Silverlight enterprise application architecture.. what do you do?

I've been wondering for quite some time now what lives out there in the community;
I'm talking about big, business-oriented WPF/Silverlight enterprise applications.
Theoretically, there are different models at play:
Data Model (typically linked to your DB tables; EDMX/NHibernate/.. mapped entities)
Business Model (classes containing actual business logic)
Transfer Model (classes (dto's) exposed to the outside world/client)
View Model (classes to which your actual views bind)
It's crystal clear that this separation has its obvious advantages;
But does it work in real life? Is it a maintenance nightmare?
So what do you do?
In practice, do you use different class models for all of these models?
I've seen a lot of variations on this, for instance:
Data Model = Business Model: Data Model implemented code first (as POCO's), and used as business model with business logic on it as well
Business Model = Transfer Model = View Model: the business model is exposed as such to the client; No mapping to DTO's, .. takes place. The view binds directly to this Business Model
Silverlight RIA Services, out of the box, with Data Model exposed: Data Model = Business Model = Transfer Model. And sometimes even Transfer Model = View Model.
..
I know that the "it depends" answer applies here; but on what does it depend, then?
Which approaches have you used, and how do you look back on them?
Thanks for sharing,
Regards,
Koen
Good question. I've never coded anything truly enterprisey so my experience is limited but I'll kick off.
My current (WPF/WCF) project uses Data Model = Business Model = Transfer Model = View Model!
There is no DB backend, so the "data model" is effectively business objects serialised to XML.
I played with DTO's but rapidly found the housekeeping arduous in the extreme, and the ever present premature optimiser in me disliked the unnecessary copying involved. This may yet come back to bite me (for instance comms serialisation sometimes has different needs than persistence serialisation), but so far it's not been much of a problem.
Both my business objects and view objects required push notification of value changes, so it made sense to implement them using the same method (INotifyPropertyChanged). This has the nice side effect that my business objects can be directly bound to within WPF views, although using MVVM means the ViewModel can easily provide wrappers if needs be.
So far I haven't hit any major snags, and having one set of objects to maintain keeps things nice and simple. I dread to think how big this project would be if I split out all four "models".
I can certainly see the benefits of having separate objects, but to me until it actually becomes a problem it seems wasted effort and complication.
As I said though, this is all fairly small scale, designed to run on a few 10's of PCs. I'd be interested to read other responses from genuine enterprise developers.
The separation isn't a nightmare at all; in fact, since using MVVM as a design pattern I hugely support its use. I was recently part of a team where I wrote the UI component of a rather large product using MVVM. It interacted with a server application that handled all the database calls etc., and I can honestly say it was one of the best projects I have worked on.
This project had:
Data Model (basic classes without support for INotifyPropertyChanged),
View Model (supports INPC, all business logic for the associated view),
View (XAML only),
Transfer Model (methods to call the database only).
I have classed the Transfer Model as one thing, but in reality it was built as several layers.
I also had a series of ViewModel classes that were wrappers around Model classes to either add additional functionality or to change the way the data was presented. These all supported INPC and were the ones that my UI bound to.
I found the MVVM approach very helpful, and in all honesty it kept the code simple: each view had a corresponding view model which handled the business logic for that view, and then there were various underlying classes which would be considered the Model.
I think separating out the code like this keeps things easier to understand. Each view model doesn't run the risk of being cluttered, because it only contains things related to its view, and anything we had that was common between the view models was handled by inheritance to cut down on repeated code.
The benefit of this, of course, is that the code becomes instantly more maintainable. Because the calls to the database were handled in my application by a call to a service, the workings of the service method could be changed, and as long as the data returned and the parameters required stayed the same, the UI never needed to know about it. The same goes for the UI: having the UI with no code-behind means the UI can be adjusted quite easily.
The disadvantage is that sadly some things you just have to do in code-behind for whatever reason, unless you really want to stick to MVVM and work out some overcomplicated solution; so in some situations it can be hard or impossible to stick to a true MVVM implementation (which in our company we considered to mean no code-behind).
In conclusion, I think that if you make use of inheritance properly, stick to the design pattern and enforce coding standards, this approach works very well; if you start to deviate, however, things start to get messy.
Having several layers doesn't lead to a maintenance nightmare; if anything, it is collapsing the layers together that makes an application harder to maintain. I'll try to explain why.
1) Transfer Model shouldn't be the same as Data Model
For example, you have the following entity in your ADO.Net Entity Data Model:
Customer
{
    int Id
    Region Region
    EntitySet<Order> Orders
}
And you want to return it from a WCF service, so you write the code like this:
dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
And there is the problem: how will consumers of the service be assured that the Region and Orders properties are not null or empty? And if the Order entity has a collection of OrderDetail entities, will they be serialized too? And sometimes you can forget to switch off lazy loading, and then the entire object graph will be serialized.
And some other situations:
you need to combine two entities and return them as a single object.
you want to return only a part of an entity, for example all information from a File table except the column FileContent of binary array type.
you want to add or remove some columns from a table but you don't want to expose the new data to existing applications.
So I think you are convinced that auto generated entities are not suitable for web services.
That's why we should create a transfer model like this:
class CustomerModel
{
    public int Id { get; set; }
    public string Country { get; set; }
    public List<OrderModel> Orders { get; set; }
}
And you can freely change tables in the database without affecting existing consumers of the web service; likewise, you can change the service models without making changes in the database.
To make the process of models transformation easier, you can use the AutoMapper library.
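As a rough sketch of how that mapping might look (the exact configuration API depends on your AutoMapper version, and the Region.Name property is an assumption), the transformation can be declared once and reused in every service method:

using AutoMapper;

// Declare the entity-to-transfer-model mappings once, e.g. at application startup.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Order, OrderModel>();
    cfg.CreateMap<Customer, CustomerModel>()
       .ForMember(m => m.Country, opt => opt.MapFrom(c => c.Region.Name));
});
var mapper = config.CreateMapper();

// In the service method, load the entity graph and map it to the transfer model.
var entity = dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
CustomerModel model = mapper.Map<CustomerModel>(entity);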
2) It is recommended that View Model shouldn't be the same as Transfer Model
Although you can bind a transfer model object directly to a view, it will only be a "OneTime" relationship: changes to the model will not be reflected in the view, and vice versa.
A view model class in most cases adds the following features to a model class:
Notification about property changes using the INotifyPropertyChanged interface
Notification about collection changes using the ObservableCollection class
Validation of properties
Reaction to events of the view (using commands or the combination of data binding and property setters)
Conversion of properties, similar to {Binding Converter...}, but on the side of view model
And again, sometimes you will need to combine several models to display them in a single view. And it would be better not to be dependent on service objects but rather define own properties so that if the structure of the model is changed the view model will be the same.
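For illustration, here is a minimal sketch of a view model wrapping the CustomerModel transfer object from above, adding change notification and an observable collection for binding (the property names are just examples):

using System.Collections.ObjectModel;
using System.ComponentModel;

public class CustomerViewModel : INotifyPropertyChanged
{
    private readonly CustomerModel _model;

    public CustomerViewModel(CustomerModel model)
    {
        _model = model;
        Orders = new ObservableCollection<OrderModel>(model.Orders);
    }

    // Changes made through the view model are pushed back to the underlying model
    // and reported to the view via INotifyPropertyChanged.
    public string Country
    {
        get { return _model.Country; }
        set
        {
            if (_model.Country == value) return;
            _model.Country = value;
            OnPropertyChanged("Country");
        }
    }

    public ObservableCollection<OrderModel> Orders { get; private set; }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}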
I always use the three layers described above for building applications and it works fine, so I recommend this approach to everyone.
We use an approach similar to what Purplegoldfish posted with a few extra layers. Our application communicates primarily with web services so our data objects are not bound to any specific database. This means that database schema changes do not necessarily have to affect the UI.
We have a user interface layer comprising the following sub-layers:
Data Models: This includes plain data objects that support change notification. These are data models used exclusively in the UI, so we have the flexibility of designing them to suit the needs of the UI. Of course, some of these objects are not plain, as they contain logic that manipulates their state. Also, because we use a lot of data grids, each data model is responsible for providing the list of its properties that can be bound to a grid.
Views: Our XAML definitions of the views. To accommodate for some complex requirements we had to resort to code behind in certain cases as sticking to a XAML only approach was too tedious.
ViewModels: This is where we define business logic for our views. These guys also have access to interfaces that are implemented by entities in our data access layer described below.
Module Presenter: This is typically a class that is responsible for initializing a module. Its task also includes registering the views and other entities associated with this module.
Then we have a Data Access layer which contains the following:
Transfer Objects: These are usually data entities exposed by the webservices. Most of these are autogenerated.
Data Adapters such as WCF client proxies and proxies to any other remote data source: These proxies typically implement one or more interfaces exposed to the ViewModels and are responsible for making all calls to the remote data source asynchronously, translating all responses to UI equivalent data models as required. In some cases we use AutoMapper for translation but all of this is done exclusively in this layer.
Our layering approach is a little complex, but so is the application. It has to deal with different types of data sources, including web services, direct database access and other types of data sources such as OGC web services.

WPF and LINQ/SQL - how and where to keep track of changes?

I have a WPF application built using the MVVM pattern:
My Models come from LINQ to SQL.
I use the Repository Pattern to abstract away the DataContext.
My ViewModels have a reference to a Model.
Setting a property on the ViewModel causes that value to be written through to the Model.
As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext.
However, in this question I read:
The guidelines from the MSDN documentation on the DataContext class are what I would recommend following:
"In general, a DataContext instance is designed to last for one 'unit of work' however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations."
How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?
When I write this kind of software, my VMs never have a reference in any way to the data model. When you couple them like this, you are pretty much married to a simple two-tier solution, which can be really tough to break out of.
If you disconnect the DataContext entirely from your VM and have them live on their own, your application becomes:
Much more testable -- your VMs can be tested without the datacontext
Potentially stateless at the data layer -- it's easy to change your architecture to adopt a stateless 3-tier based solution.
Able to easily integrate other data sources -- your VMs can elegantly contain aggregates and combinations of other data sources on their own.
So in short, yes, I would recommend decoupling the data access classes from the ViewModel objects throughout the app. It might be a bit more code, but it will pay dividends down the road with the architecture's flexibility.
EDIT: I don't use the change tracking features of the L2SQL objects once they've crossed a distribution boundary. That turns into a can of worms pretty quickly -- you can spend a lot of time with the care and feeding of your data model's object graph inside your viewmodel - which adds not only complexity to the ViewModel, but at least transitively couples the ViewModel to the schema of the database. Instead of doing that, I make the ViewModel pure. When the time comes for them to be persisted, your service layer/repository/whatever does the translation between the ViewModel and the data objects. This at first seems like a lot of work, but for anything other than simple CRUD, this design pays off pretty quickly.
However you manage data inside a program, instances of the LINQ to SQL generated classes are created like regular objects, without any use of a data context. The DataContext is only responsible for interacting with the database.
Those guidelines may mean that you can do anything with objects of the L2SQL generated classes, but you shouldn't keep around the object that transfers data between them and the database. You can treat L2SQL classes like any other classes, which you can use as you want, with the additional capability of being read from and written to a database.
This is a guess. I cannot say what was in the head of the MSDN author of that magic phrase, but this explanation makes sense. Store data in the L2SQL generated class objects for the whole lifetime of the program, and synchronize it with the database via temporary DataContext objects.
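As a rough sketch of that usage (MyDataContext and Customer stand in for your generated LINQ to SQL classes):

using System.Collections.Generic;
using System.Linq;

public class CustomerStore
{
    // Each operation gets its own short-lived DataContext ("unit of work").
    public IList<Customer> LoadCustomers()
    {
        using (var dc = new MyDataContext())
        {
            return dc.Customers.ToList();   // the loaded objects outlive the context
        }
    }

    public void AddCustomer(Customer customer)
    {
        using (var dc = new MyDataContext())
        {
            dc.Customers.InsertOnSubmit(customer);
            dc.SubmitChanges();
        }
    }
}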

MVVM with WPF using LINQtoSQL in a DAL along with a BLL

My goal is to have an app that uses WPF and a 3-tier architecture: UI, BLL, and DAL. I'd like to use MVVM, but I'm not sure how that works with a 3-tier architecture or whether it is something entirely different. So with that in mind, I have a few questions:
1) LINQtoSQL: I've read a lot online that says LINQ replaces your DAL, and I've seen many articles that say this is a bad idea. I'm thinking it's a bad idea; however, what do I put in here? What data types am I returning to the BLL? IQueryable? ObservableCollection? I have no clue.
2) The BLL: I'd like to make this a service that runs on a server, so that when I need to make a change I don't need to redeploy the whole app, I just need to restart the service. But, I'm not sure where to start on this.
3) With the BLL, I guess I'm confused on how the data is going through all the layers from the DAL all the way to the Interface.
I've done lots of research online and have got bits and pieces of things, but I haven't seen anyone talk about a WPF application that uses MVVM with LINQ in the DAL (via SQLMetal) and a BLL that's running on a server. Can anyone point me in the right direction, or maybe suggest a book to get?
Mike,
your question is really cool, I like it. Firstly, feel free to experiment a bit - each and every project is different, so there's no single rule which fits everywhere. That's why I would suggest just leaving the DAL to LINQ to SQL; this great tool will handle it, you don't have to worry. Secondly - you mentioned a 3-tier architecture, but why isn't there a place for the Model? Since all models are generated automatically (e.g. by SQLMetal), you don't have to worry about mappings either. So, if you're not bored yet, let me answer all three of your questions:
Skip the DAL and observe your project carefully - if you have a feeling that it's lacking this layer, add it (it will contain the LINQ to SQL queries). And the second part - you can return whatever you wish, but it will be most convenient for you to use IEnumerable<> or IQueryable<> parametrized with your models.
My intuition tells me that you're going to need WCF - in this case you should be able to wrap the whole (yes, that's true) business logic in a nice Contract and implement it however you wish.
This is the easiest one :) Since your BLL layer is actually an implementation of some Contract (Interface), you can design that Interface to provide you with all the data you need. For example:
Contract/Interface:
IEnumerable<User> GetTallUsersOver40();
IEnumerable<User> GetShortUsersOver60();
...
And the path through 'all the layers' you were talking about shrinks to a single LINQ to SQL query execution. If you need more logic, place it in this layer.
I want to use MVVM, what now? The answer is simpler than you think - just prepare your views and view models and simply consume your BLL Contract/Interface implementation.
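For example, a rough sketch of a view model consuming such a contract (here the interface is assumed to be called IUserService, exposed through a WCF client proxy):

using System.Collections.ObjectModel;

public class UsersViewModel
{
    private readonly IUserService _service;   // the BLL contract implementation, e.g. a WCF proxy

    public UsersViewModel(IUserService service)
    {
        _service = service;
        TallUsers = new ObservableCollection<User>(_service.GetTallUsersOver40());
    }

    // The view binds directly to this collection.
    public ObservableCollection<User> TallUsers { get; private set; }
}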
Please ask if you have further questions!
I'll try to provide some insight, though I'm not an expert, I've tackled these issues in the past.
LINQ to SQL is actually pretty good at what it's supposed to do - which is replace your DAL. But don't return an IQueryable upwards to your BLL, as that would enable (or at least hint at the possibility of) the BLL querying your DB directly. You should transfer the data object to the BLL and let it construct a matching business object. Also note: LINQ in and of itself can be used in any layer (and is in fact one of the best features of C#). LINQ to SQL is the mechanism by which LINQ statements get translated to SQL queries.
BLL as a service is a natural choice. Provide an upward interface to the presentation layer (a WCF service is a good option here).
The BLL generates business objects based on data it receives from the DAL. In order to provide for good decoupling of layers, you should use different classes for your DAL and BLL objects. Don't create a dependency between your presentation layer and your data layer.
Great question. I don't think there is any one place that has all the answers. I had very similar questions when we were starting a new project. MVVM is really just a presentation pattern and doesn't care of all the details like you listed. Laurent Bugnion has a good framework that glues everything together as well.
LINQ2SQL is cool but can get cumbersome with the VS08 designers. Take a look at http://plinqo.com/ to use with CodeSmith to generate the DAL, and I think it will even do the BLL with contracts as well. Another generation option is Oleg Sych's T4 templates. One issue we ran into with LINQ2SQL is the singular DataContext. If you don't need to be modular this isn't an issue.
I agree with what the others said about data contracts and look at what Plinqo can generate. It may save you a lot of time.
The data will usually work its way up in objects. Like the others said, make sure you keep a seam between all the layers so you don't have dependencies.
When you get to the MVVM part you will open a whole new can of worms. I don't think there are many or any books out on MVVM yet. It is still a somewhat new fad.
Great question, I'm on the nursery slopes of the WCF/WPF learning curve so I'm in a similar position to you. My 2 cents:
Haven't got into Linq to SQL, I'm old school and used to writing stored procedures and views. I'm currently using these to populate DTO classes - that is, classes with no methods, just properties to represent the data. I know I'm probably behind the curve on this.
Make your BLL a WCF service - put the service contract(s) and data contract(s) in their own assembly, you can then include this in your client, where they become your model, or part of it.
In your client application, include a reference to the assembly containing the service contracts and data contracts. The data contracts then become your model, your ViewModels can wrap these models and expose their properties (implement INotifyPropertyChanged for databinding).
I'm using the O'Reilly books Programming WCF Services, Learning WCF Services and Programming WPF which I'm finding pretty good. I don't know of any books specifically about MVVM but there's loads of stuff on the web.

Do you create classes to handle "entities" for data driven apps?

I'm a newbie and when messing around with creating database applications I always just created my forms and put all the code and bindings in there. Instead of having arrays and lists that held information I made changes to the database directly.
Now that I have evolved a little bit let's say that I sold widgets to customers and kept the sales information in a database. If I were writing a program to access the database wouldn't I want to create a class of type 'Customer' and 'Widget' to work with those entities?
If I'm mistaken then what is the appropriate approach to programming database applications?
Yes.
You want to look into n-tier programming.
Basically, you allow your front end (presentation layer) access only to your class library (business layer). Your class library then accesses your database.
This gives you a less tightly coupled solution, and allows for more maintainable code. Also by introducing tiers you allow for changes to your DB without the need to rewrite code in your frontend, as long as the interface with the Business Layer doesn't need to be changed.
As far as binding is concerned, if you are using Visual Studio Windows Forms (2005 onwards) you should be able to use a BindingSource that you can then bind your controls to. If you are using ASP.NET then your control should bind to a list of objects without any problems.
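For example, a small sketch of Windows Forms binding through a BindingSource (the Customer class and repository call are placeholders):

using System.Windows.Forms;

private void BindCustomers()
{
    // customers comes from the business layer, e.g. a List<Customer>.
    var customers = _customerRepository.GetAll();

    var bindingSource = new BindingSource();
    bindingSource.DataSource = customers;

    // The grid now displays the public properties of Customer.
    dataGridView1.DataSource = bindingSource;
}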
For ASP.Net the ObjectDataSource might be worth looking into. I haven't used it myself, but there's lots of samples on the web. Try here or here.
Yes.
You want to look closely at Object-Relational Mapping.
Your real-world business entities are modeled by objects which map to relational tables.
You do not want your presentation layer to directly depend on your database structure; the problem with that is if your database structure changes at all, your presentation layer must change, and in the long term, that tends to cause problems. Moreover, there are security issues involved with having your presentation layer directly interact with your database.
The rough analogy here is to markets; when you go to the store to buy a loaf of bread, you don't need to know how to grow wheat; all you need to know is that you have money, and they have bread, and that they will exchange a certain amount of bread for a certain amount of money. You don't need to know what time of year to plant the wheat, or how to remove the chaff, or any of that, because the backing layer takes care of that for you. Similarly, the farmer doesn't need to know how to sell bread to a whole bunch of people, or even how to make bread; all he has to do is know how to grow wheat.
Modern design philosophy recommends that you use an intermediate layer to interact between your presentation layer and your database layer; this is where you put your business logic. So, for example, let's say that you're selling widgets on your site. Instead of having your presentation code query the database for widgets and display that, you have a Business Object which handles your widgets. This way, your business object needs to know what your database structure is, but your presentation layer only needs to know how to ask your business object for a list of widgets to display. More importantly, in your business object you can place the rules that are to be invoked when certain things happen. So instead of your presentation layer directly making changes to the database for inventory and orders when an order is made, your business object knows how to make the changes and what tables to modify when your presentation layer requests that a sale occurs.
In this way, you separate the display of your information from the persistence and the logic underlying the website. What is involved is good planning; specifically, you have to figure out what your web site will be doing at any given point, and what that means in terms of what interfaces your business objects will provide. Then you implement your business objects based off of those requirements; those business objects are where you put the knowledge of the database structure and your specific business logic ("when A happens, do B and then C", etc.).
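As a rough illustration (all of the class and method names below are made up), the widget example might look like this:

using System;
using System.Collections.Generic;

// The presentation layer talks only to this class; it never touches the database directly.
public class WidgetService
{
    private readonly IWidgetRepository _repository;   // hypothetical data access interface

    public WidgetService(IWidgetRepository repository)
    {
        _repository = repository;
    }

    public IList<Widget> GetWidgetsForDisplay()
    {
        return _repository.GetAvailableWidgets();
    }

    public void RecordSale(int customerId, int widgetId, int quantity)
    {
        // Business rules live here: check stock, update inventory, create the order.
        var widget = _repository.GetById(widgetId);
        if (widget.UnitsInStock < quantity)
            throw new InvalidOperationException("Not enough widgets in stock.");

        _repository.DecrementStock(widgetId, quantity);
        _repository.CreateOrder(customerId, widgetId, quantity);
    }
}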
This seems like a lot of extra work at the beginning, but it really is worth it.
