I awoke this morning to a problem!
In all of my components, I have a set of Business Rules which are used to validate DTOs before any changes are committed to the repository.
I've been trying to figure out the best way to get validation errors back to the UI and I came across the IDataErrorInfo interface. Fantastic!
However, the implementation of this interface would transform my DTO into a POCO and make it a larger object in terms of memory usage. At the moment, all of the user controls are bound to the current DTO objects.
Would transforming my DTOs into POCOs have a performance impact? Or is there a better way to get validation messages back to the UI?
MVVM, i.e. your DTOs are wrapped in view models, which are bound to your view.
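For illustration, a minimal sketch of that idea, combined with IDataErrorInfo so the view can display validation errors (PersonDto and the Name rule are hypothetical stand-ins for your own types):

using System.ComponentModel;

public class PersonDto
{
    public string Name { get; set; }
}

public class PersonViewModel : IDataErrorInfo
{
    private readonly PersonDto _dto;

    public PersonViewModel(PersonDto dto)
    {
        _dto = dto;
    }

    // the view binds to the wrapper, not to the DTO itself
    public string Name
    {
        get { return _dto.Name; }
        set { _dto.Name = value; }
    }

    // WPF queries this indexer for bindings with ValidatesOnDataErrors=True
    public string this[string propertyName]
    {
        get
        {
            if (propertyName == "Name" && string.IsNullOrEmpty(Name))
                return "Name is required."; // the business rule lives here, not on the DTO
            return null;
        }
    }

    public string Error
    {
        get { return null; } // object-level error, not used in this sketch
    }
}

In XAML you would then bind with something like {Binding Name, ValidatesOnDataErrors=True}, and the DTO itself stays untouched.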
I've been trying to figure out the best way to get validation errors back to the UI and I came across the IDataErrorInfo interface. Fantastic!
Absolutely. How come you do not know what you are doing in the first place? IDataErrorInfo is fully documented - not something you should just "come across" (which sounds accidental).
The implementation of this interface would transform my DTO into a POCO and make it a larger object in terms of memory usage. At the moment, all of the user controls are bound to the current DTO objects.
A DTO has absolutely no business knowing about internal errors - it should never, ever HAVE internal errors. See, DTO is "Data Transfer Object", not "Business Object". The DTO is what the Business Object should generate to send to the DataAccessLayer, and the reason there should not be validation is that the Business Object makes sure ONLY VALID OBJECTS CREATE DTOs.
Btw., regarding "would transform my DTO into a POCO":
I have another surprise discovery for you - your DTO already IS a POCO. POCO means "Plain Old CLR Object", and I guess your DTOs are implemented as .NET classes, so - guess what, surprise - they already ARE POCOs.
What you mean (again, something to discover) is that it would transform your DTOs into BOs (Business Objects).
Or is there a better way to get validation messages back to the UI?
No, there is not. The best way to send messages up to the UI is via UI-defined interfaces, and IDataErrorInfo is exactly that.
Your problem is that you have a huge confusion about:
How to program a multi-tiered architecture and build a standard data access layer.
The meaning of the terms you use (see your problem knowing what a DTO actually is, and what a POCO is).
And thus you mix up your responsibilities.
See the answers at POCO vs DTO for an explanation of what you actually mix up.
Back to the drawing board. Ask your lead developer / architect to give you an introduction to multi-tiered architectures, and read the .NET documentation.
The DTO should only deal with data transfer issues.
I've been wondering for quite some time now what approaches live out there in the community;
I'm talking about big, business-oriented WPF/Silverlight enterprise applications.
Theoretically, there are different models at play:
Data Model (typically linked to your db tables; Edmx/NHibernate/.. mapped entities)
Business Model (classes containing actual business logic)
Transfer Model (classes (dto's) exposed to the outside world/client)
View Model (classes to which your actual views bind)
It's crystal clear that this separation has its obvious advantages;
But does it work in real life? Is it a maintenance nightmare?
So what do you do?
In practice, do you use different class models for all of these models?
I've seen a lot of variations on this, for instance:
Data Model = Business Model: the Data Model implemented code-first (as POCOs), and used as the business model with business logic on it as well
Business Model = Transfer Model = View Model: the business model is exposed as such to the client; no mapping to DTOs, .. takes place. The view binds directly to this Business Model
Silverlight RIA Services, out of the box, with Data Model exposed: Data Model = Business Model = Transfer Model. And sometimes even Transfer Model = View Model.
..
I know that the "it depends" answer applies here;
But on what does it depend then;
Which approaches have you used, and how do you look back on it?
Thanks for sharing,
Regards,
Koen
Good question. I've never coded anything truly enterprisey so my experience is limited but I'll kick off.
My current (WPF/WCF) project uses Data Model = Business Model = Transfer Model = View Model!
There is no DB backend, so the "data model" is effectively business objects serialised to XML.
I played with DTOs but rapidly found the housekeeping arduous in the extreme, and the ever-present premature optimiser in me disliked the unnecessary copying involved. This may yet come back to bite me (for instance, comms serialisation sometimes has different needs than persistence serialisation), but so far it's not been much of a problem.
Both my business objects and view objects required push notification of value changes, so it made sense to implement them using the same method (INotifyPropertyChanged). This has the nice side effect that my business objects can be directly bound to within WPF views, although using MVVM means the ViewModel can easily provide wrappers if needs be.
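For what it's worth, a minimal sketch of what such a directly-bindable business object looks like (the class and property names are made up for illustration):

using System.ComponentModel;

public class Instrument : INotifyPropertyChanged
{
    private double _price;

    public double Price
    {
        get { return _price; }
        set
        {
            if (_price == value) return;
            _price = value;
            OnPropertyChanged("Price"); // push notification to any bound view
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}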
So far I haven't hit any major snags, and having one set of objects to maintain keeps things nice and simple. I dread to think how big this project would be if I split out all four "models".
I can certainly see the benefits of having separate objects, but to me, until it actually becomes a problem, it seems like wasted effort and complication.
As I said though, this is all fairly small scale, designed to run on a few tens of PCs. I'd be interested to read other responses from genuine enterprise developers.
The separation isn't a nightmare at all; in fact, since using MVVM as a design pattern I hugely support its use. I was recently part of a team where I wrote the UI component of a rather large product using MVVM, which interacted with a server application that handled all the database calls etc., and I can honestly say it was one of the best projects I have worked on.
This project had a:
Data Model (basic classes without support for INotifyPropertyChanged),
View Model (supports INPC, all business logic for the associated view),
View (XAML only),
Transfer Model (methods to call the database only)
I have classed the Transfer model as one thing but in reality it was built as several layers.
I also had a series of ViewModel classes that were wrappers around Model classes to either add additional functionality or to change the way the data was presented. These all supported INPC and were the ones that my UI bound to.
I found the MVVM approach very helpful, and in all honesty it kept the code simple: each view had a corresponding view model which handled the business logic for that view, and then there were various underlying classes which would be considered the Model.
I think that separating out the code like this keeps things easier to understand. Each View Model doesn't run the risk of becoming cluttered, because it only contains things related to its view; anything we had that was common between the view models was handled by inheritance, to cut down on repeated code.
The benefit of this, of course, is that the code becomes instantly more maintainable. Because the calls to the database were handled in my application by a call to a service, the workings of the service method could be changed freely; as long as the data returned and the parameters required stay the same, the UI never needs to know about it. The same goes for the UI: having the UI with no code-behind means the UI can be adjusted quite easily.
The disadvantage is that, sadly, some things you just have to do in code-behind for whatever reason, unless you really want to stick to MVVM and work out some overcomplicated solution. So in some situations it can be hard or impossible to stick to a true MVVM implementation (in our company we considered this to mean no code-behind).
In conclusion, I think that if you make use of inheritance properly, stick to the design pattern and enforce coding standards, this approach works very well; if you start to deviate, however, things start to get messy.
Several layers don't lead to a maintenance nightmare; moreover, the fewer layers you have, the easier they are to maintain. And I'll try to explain why.
1) Transfer Model shouldn't be the same as Data Model
For example, you have the following entity in your ADO.NET Entity Data Model:
public class Customer
{
    public int Id { get; set; }
    public Region Region { get; set; }
    public EntitySet<Order> Orders { get; set; }
}
And you want to return it from a WCF service, so you write the code like this:
dc.Customers.Include("Region").Include("Orders").FirstOrDefault();
And there is the problem: how will consumers of the service be assured that the Region and Orders properties are not null or empty? And if the Order entity has a collection of OrderDetail entities, will they be serialized too? And sometimes you can forget to switch off lazy loading, and then the entire object graph will be serialized.
And some other situations:
you need to combine two entities and return them as a single object.
you want to return only a part of an entity, for example all information from a File table except the column FileContent of binary array type.
you want to add or remove some columns from a table but you don't want to expose the new data to existing applications.
So I think you are convinced that auto-generated entities are not suitable for web services.
That's why we should create a transfer model like this:
public class CustomerModel
{
    public int Id { get; set; }
    public string Country { get; set; }
    public List<OrderModel> Orders { get; set; }
}
And now you can freely change tables in the database without affecting existing consumers of the web service, and likewise you can change the service models without making changes in the database.
To make the process of models transformation easier, you can use the AutoMapper library.
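For example, a rough sketch using AutoMapper's classic static API (newer versions configure a MapperConfiguration instance instead; the flattening of Region.Country into Country assumes the Region entity exposes a Country property):

using AutoMapper;

// configure the maps once, at application startup
Mapper.CreateMap<Order, OrderModel>();
Mapper.CreateMap<Customer, CustomerModel>()
    .ForMember(m => m.Country, opt => opt.MapFrom(c => c.Region.Country));

// then, inside the service method
CustomerModel model = Mapper.Map<Customer, CustomerModel>(customer);

The Orders collection is converted automatically because a map from Order to OrderModel is registered.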
2) It is recommended that the View Model not be the same as the Transfer Model
Although you can bind a transfer model object directly to a view, it will work only as a "OneTime" relation: changes to the model will not be reflected in the view, and vice versa.
A view model class in most cases adds the following features to a model class:
Notification about property changes using the INotifyPropertyChanged interface
Notification about collection changes using the ObservableCollection class
Validation of properties
Reaction to events of the view (using commands or the combination of data binding and property setters)
Conversion of properties, similar to {Binding Converter...}, but on the side of view model
And again, sometimes you will need to combine several models to display them in a single view. And it would be better not to depend on the service objects directly, but rather to define your own properties, so that if the structure of the model changes, the view model can stay the same.
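A minimal sketch of such a wrapper, assuming a hypothetical OrderModel with a decimal Total property:

using System.ComponentModel;

public class OrderViewModel : INotifyPropertyChanged
{
    private readonly OrderModel _model;

    public OrderViewModel(OrderModel model)
    {
        _model = model;
    }

    public decimal Total
    {
        get { return _model.Total; }
        set
        {
            if (_model.Total == value) return;
            _model.Total = value;
            OnPropertyChanged("Total");
            OnPropertyChanged("TotalDisplay"); // keep the converted property in sync
        }
    }

    // conversion done on the view model side instead of {Binding Converter...}
    public string TotalDisplay
    {
        get { return _model.Total.ToString("C"); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(name));
    }
}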
I always use the three layers described above for building applications and it works fine, so I recommend the same approach to everyone.
We use an approach similar to what Purplegoldfish posted with a few extra layers. Our application communicates primarily with web services so our data objects are not bound to any specific database. This means that database schema changes do not necessarily have to affect the UI.
We have a user interface layer comprising of the following sub-layers:
Data Models: This includes plain data objects that support change notification. These are data models used exclusively on the UI, so we have the flexibility of designing them to suit the needs of the UI. Of course, some of these objects are not plain, as they contain logic that manipulates their state. Also, because we use a lot of data grids, each data model is responsible for providing the list of its properties that can be bound to a grid.
Views: Our XAML definitions of the views. To accommodate for some complex requirements we had to resort to code behind in certain cases as sticking to a XAML only approach was too tedious.
ViewModels: This is where we define business logic for our views. These guys also have access to interfaces that are implemented by entities in our data access layer described below.
Module Presenter: This is typically a class that is responsible for initializing a module. Its task also includes registering the views and other entities associated with this module.
Then we have a Data Access layer which contains the following:
Transfer Objects: These are usually data entities exposed by the webservices. Most of these are autogenerated.
Data Adapters such as WCF client proxies and proxies to any other remote data source: These proxies typically implement one or more interfaces exposed to the ViewModels and are responsible for making all calls to the remote data source asynchronously, translating all responses to UI equivalent data models as required. In some cases we use AutoMapper for translation but all of this is done exclusively in this layer.
Our layering approach is a little complex, but so is the application. It has to deal with different types of data sources, including web services, direct database access and other kinds of data sources such as OGC web services.
I have stateless services and anemic domain objects on the server side. The model between server and client is POCO DTOs. The client is to be MVVM. The model could be a graph of about 100 instances of 20 different classes. The client editor contains diverse tab pages, all of them live-connected to the model/view model.
My problem is how to propagate changes after a server round-trip in a nice way. It's quite easy to propagate changes from the ViewModel to the DTO. For the way back, it would be possible to throw away the old DTO and replace it wholesale with a new one, but that would cause a lot of redrawing for lists/DataTemplates.
I could gather the server-side changes and transmit them to the client side. But the names of the changed fields would be domain/DTO specific, not ViewModel specific, and the mapping seems nontrivial to me. If I did it imperatively after the round-trip, it would break the SoC/modularity of the view models.
I'm thinking about some kind of mapping rule engine, something like AutoMapper or EmitMapper. But those solve only the very plain use cases. I don't see how such a tool would map/propagate/convert adding items to a list, or removal. How would it identify instances in collections so that it could merge values into existing instances? It should also propagate validation/error info.
Maybe I should implement INotifyPropertyChanged on the DTO and try to replay the server-side events on it? And then bind the ViewModel to it? Would binding solve the problems with collection merges in a nice way? Is the EventAggregator from PRISM useful for that? Is there any event record-replay component?
Is there a better client-side pattern for an architecture with server-side logic?
Typically, I've kept a reference to the DTO in my Model classes. For multiple models, I ensure each model knows how to construct itself from a DTO, as well as how to Save itself using an injectable "saver" or other service provider object. Carrying around a reference to the DTO makes it pretty easy, when you call Save() on the model, to modify the old DTO according to the Model before passing it back to the service.
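A rough sketch of that arrangement (ContactDto, IContactSaver and the single Name property are all hypothetical):

public class ContactDto
{
    public string Name { get; set; }
}

public interface IContactSaver
{
    void Save(ContactDto dto);
}

public class ContactModel
{
    private readonly ContactDto _dto;      // keep the DTO around
    private readonly IContactSaver _saver; // injected "saver" service

    public ContactModel(ContactDto dto, IContactSaver saver)
    {
        _dto = dto;
        _saver = saver;
        Name = dto.Name; // the model constructs itself from the DTO
    }

    public string Name { get; set; }

    public void Save()
    {
        _dto.Name = Name;  // modify the old DTO according to the model
        _saver.Save(_dto); // pass it back to the service
    }
}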
Hopefully any "updates" to other objects after a Save() operation could be communicated in other DTOs, which should then be loaded into the appropriate Model classes used by your ViewModel.
The downside to this is that you do indeed have to write the mapping code, but this is usually the easiest part. I am not convinced this is the best way to do things and I would appreciate reading others' responses.
I used another variation with a lot of success. We had a real-time GUI, so repetitive redraws on the client were not acceptable.
We made our DTOs aware of their property changes, and had them emit PropertyChanged events to the ViewModel, thereby replaying server-side events.
Code is straightforward to write, unit test, etc. It becomes easy to browse when the PropertyChangeListeners (interfaces implemented by ViewModels) are typed.
This boundary can also be used to switch threads to the GUI worker thread.
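A bare-bones sketch of that boundary (the interface and names are invented for illustration; a real implementation would also raise its own PropertyChanged for the bound view):

using System;
using System.Windows.Threading;

// a typed listener instead of string-based PropertyChanged events
public interface ICustomerChangeListener
{
    void NameChanged(string newName);
}

public class CustomerViewModel : ICustomerChangeListener
{
    private readonly Dispatcher _dispatcher;

    public CustomerViewModel(Dispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    public string Name { get; private set; }

    public void NameChanged(string newName)
    {
        // switch to the GUI thread at the boundary, before touching bound state
        _dispatcher.BeginInvoke((Action)(() => Name = newName));
    }
}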
I have a WPF application built using the MVVM pattern:
My Models come from LINQ to SQL.
I use the Repository Pattern to abstract away the DataContext.
My ViewModels have a reference to a Model.
Setting a property on the ViewModel causes that value to be written through to the Model.
As you can see, my data is stored in my Model, and changes are therefore tracked by my DataContext.
However, in this question I read:
The guidelines from the MSDN documentation on the DataContext class are what I would recommend following:

In general, a DataContext instance is designed to last for one "unit of work" however your application defines that term. A DataContext is lightweight and is not expensive to create. A typical LINQ to SQL application creates DataContext instances at method scope or as a member of short-lived classes that represent a logical set of related database operations.
How do you track your changes? In your DataContext? In your ViewModel? Elsewhere?
When I write this kind of software, my VMs never have a reference in any way to the data model. When you couple them like this, you are pretty much married to a simple two-tier solution, which can be really tough to break out of.
If you disconnect the DataContext entirely from your VM and have them live on their own, your application becomes:
Much more testable -- your VMs can be tested without the DataContext
Potentially stateless at the data layer -- it's easy to change your architecture to adopt a stateless 3-tier based solution.
Able to easily integrate other data sources -- your VMs can elegantly contain aggregates and combinations of other data sources on their own.
So in short, yes, I would recommend decoupling the data access classes from the ViewModel objects throughout the app. It might be a bit more code, but it will pay dividends down the road with the architecture's flexibility.
EDIT: I don't use the change tracking features of the L2SQL objects once they've crossed a distribution boundary. That turns into a can of worms pretty quickly -- you can spend a lot of time on the care and feeding of your data model's object graph inside your ViewModel, which not only adds complexity to the ViewModel, but also at least transitively couples it to the schema of the database. Instead of doing that, I make the ViewModel pure. When the time comes for them to be persisted, your service layer/repository/whatever does the translation between the ViewModel and the data objects. This at first seems like a lot of work, but for anything other than simple CRUD, this design pays off pretty quickly.
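As a sketch of what that translation might look like at save time (ShopDataContext, the Customers table and the ViewModel shape are all assumptions):

using System.Linq;

public class CustomerViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerRepository
{
    public void Save(CustomerViewModel vm)
    {
        // a short-lived context: one unit of work, as the MSDN guidance suggests
        using (var dc = new ShopDataContext())
        {
            var entity = dc.Customers.Single(c => c.Id == vm.Id);
            entity.Name = vm.Name; // translate pure ViewModel state onto the data object
            dc.SubmitChanges();    // let this context do the change tracking
        }
    }
}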
However you manage data inside a program, instances of L2SQL-generated classes are created like regular objects, without any use of a data context. The DataContext is only responsible for interacting with a database.
Those guidelines may mean that you can do anything with objects of the L2SQL-generated classes, but that you shouldn't keep around the object that transfers data between them and the database. You can treat L2S classes like any other classes, which you can use as you want, with the additional capability of being read from and written to a database.
This is a guess. I cannot say what was in the head of the MSDN author of that magic phrase, but this explanation makes sense: store data in the L2SQL-generated class objects for the whole program's lifetime, and synchronize it with the database via temporary DataContext objects.
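In code, that reading of the guidelines looks something like this (ShopDataContext and the Customer shape stand in for your generated classes):

// create and use the entity like any regular object; no context involved
var customer = new Customer { Name = "Alice" };

// bring in a DataContext only to synchronize with the database
using (var dc = new ShopDataContext())
{
    dc.Customers.InsertOnSubmit(customer);
    dc.SubmitChanges(); // the context lives only for this unit of work
}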
I had a client ask for advice building a simple WPF LOB application the other day. They basically want a CRUD application for a database, with main purpose being as a WPF training project.
I also showed them Linq To SQL and they were impressed.
I then explained that it's probably not a good idea to use the L2S entities directly from their BLL or UI code. They should consider something like the Repository pattern instead.
At which point I could already feel the over-engineering alarm bells going off in their heads (and also in mine, to some extent). Do they really need all that complexity for a simple CRUD application? (OK, it's effectively functioning as a WPF training project for them, but let's pretend it turns into a "real" app.)
Do you ever think it's acceptable to use L2S entities all through your application?
How difficult is it (from experience) to refactor to use another persistence framework later?
The way I see it, if the UI layer uses the L2S entities as simple POCOs (without touching any L2S-specific methods), then it should be really easy to refactor later on if need be.
They do need a way to centralize the L2S queries, so some sort of way to manage that is needed, even if they do use L2S entities directly. So in a way we are already pushing toward some aspects of DAL/DAO/Repository.
The main problem I can see with the Repository approach is the pain of mapping between L2S entities and some domain model. And is it really worth it? You get quite a lot "for free" on the L2S entities, which I think would be hard to hold on to once you map to another model.
Thoughts?
The only major drawback to using L2S entities all the way through is that your UI needs to know about and bind to the concrete entities. That means your UI knows your data layer. Not usually a good idea. You really want a layered approach for anything that has the potential to be serious.
That said, it's perfectly possible to use the LINQ-to-SQL entities themselves in a layered architecture without knowledge of the data layer: extract interfaces for the entities and bind to them instead.
Keep in mind that all L2S entities are partial classes. Create interfaces that reflect your entities (Refactor => Extract Interface is your friend here) and create partial class definitions of your entities that implement the interfaces. Put the interfaces themselves (and only the interfaces) in a separate DLL that your UI and Business Layer reference. Have your Business Layer and Data Layer accept and emit these interfaces rather than the concrete versions of them. You can even have the interfaces implement INotifyPropertyChanging and INotifyPropertyChanged, since the L2S entities themselves implement those interfaces. And it all works peachy.
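A bare-bones sketch of that setup, using a hypothetical Customer entity:

using System.ComponentModel;

// lives in the separate, interfaces-only assembly
public interface ICustomer : INotifyPropertyChanging, INotifyPropertyChanged
{
    int Id { get; set; }
    string Name { get; set; }
}

// lives in the data layer project, alongside the designer-generated file
public partial class Customer : ICustomer
{
    // intentionally empty: the generated members already satisfy ICustomer
}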
Then, when/if you need a different persistence framework, you have no pain at all in the BL or UI, only in the data layer -- which is where you want it.
Repositories are not a big deal. See here for a pretty good treatment of their use in ASP.NET MVC:
http://nerddinnerbook.s3.amazonaws.com/Part3.htm
Basically, what we did for a project was that we had a Business layer that did all of the "LINQing" to L2S objects... in essence centralizing all querying in one layer via "Manager Objects" (I guess these are somewhat akin to repositories).
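A sketch of such a Manager Object (the context, table and query are illustrative, not our actual code):

using System.Collections.Generic;
using System.Linq;

public class CustomerManager
{
    public List<Customer> GetCustomersInRegion(string regionName)
    {
        // all of the "LINQing" to L2S objects is centralized in this layer
        using (var dc = new ShopDataContext())
        {
            return dc.Customers
                     .Where(c => c.Region.Name == regionName)
                     .ToList();
        }
    }
}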
We did not use DTOs to map to L2S, as we didn't feel it was worth the effort in this project. Part of our logic was that as more and more ORMs support IQueryable and syntax similar to L2S (e.g. Entity Framework), it probably wouldn't be THAT bad to switch to a different ORM, so it's not that bad of a sin, IMO.
Do you ever think it's acceptable to use L2S entities all through your application?
That depends. If you're building a short-lived product, you can get things wired up easily and in quick succession if you use L2S. But if you're building a long-lived product which you might have to maintain for a long duration, then you'd better think about a proper persistence layer.
How difficult is it (from experience) to refactor to use another persistence framework later?
If you use L2S in all your layers, then you can forget about refactoring them to use another persistence framework. This is exactly the advantage of using a framework like NHibernate or Entity Framework in your persistence layer: though it requires a bit of extra effort upfront, it will be easy to maintain in the long run.
It sounds like you should be considering the ViewModel Pattern for WPF
http://msdn.microsoft.com/en-us/magazine/dd419663.aspx
I've been reading up on the MVVM pattern, and I would like to try it out on a relatively small WPF project. The application will be single-user. Both input and output data will be stored in a "relational" XML file. A schema (XSD file) with Keys and KeyRefs is used to validate the file.
I have also started to get my feet wet with LINQ and LINQ to XML, and I have written a couple of pretty complex queries that actually work (small victories :)).
Now, I'm trying to put it all together, and I'm finding that I'm a little bit confused as to what should go in the Model versus the ViewModel. Here are the options I've been wrestling with so far:
Should I consider the Model the XML file itself and place all the LinqToXml queries in the ViewModel? In other words, not even write a class called Model?
Should I write a Model that is just a simple wrapper for the XML file and XSD Schema Set and performs validation, saves changes, etc.?
Should I put "basic" queries in the Model and "view-specific" queries in the ViewModel? And if so, where should I draw the line between these two categories?
I realize there is not necessarily a "right" answer to this question...I'm just looking for advice and pros/cons, and if anyone is aware of a code example for a similar scenario, that would be great.
Thanks,
-Dan
For a small application, having separate Data Access, Domain Model and Presentation Model layers may seem like overkill, but modeling your application like that will help you decide what goes where. Even if you don't want to decompose your application into three distinct projects/libraries, thinking about where each piece of functionality should go can help you decide.
In that light, pure data access (i.e. loading the XML files, querying and updating them) belongs in the data access layer, since these are technology specific.
If you have any operations that don't relate to your particular data access technology, but could rather be deemed universally applicable within your application domain, these should go into the Domain Model (or what some call the Business Logic).
Any logic whose sole purpose is to provide specific functionality for a particular user interface technology (WPF, in your case) should go into the Presentation Model.
In your case, the XML files and all the LINQ to XML queries belong in the Data Access Layer.
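For instance, a small sketch of a data access class that owns the file and its LINQ to XML queries (the element and attribute names are made up):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

public class XmlCustomerStore
{
    private readonly string _path;
    private readonly XDocument _doc;

    public XmlCustomerStore(string path)
    {
        _path = path;
        _doc = XDocument.Load(path); // loading lives here, not in the ViewModel
    }

    public IEnumerable<string> GetCustomerNames()
    {
        return _doc.Descendants("Customer")
                   .Select(c => (string)c.Attribute("Name"));
    }

    public void Save()
    {
        _doc.Save(_path); // persisting changes is also a data access concern
    }
}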
To utilize MVVM, you will need to create a ViewModel for each View that you want to have in your application.
From your question, it is unclear to me whether you have anything that could be considered Domain Model, but stuff like validation is a good candidate. Such functionality should go into the Domain Model. No classes in the Domain Model should be directly bound to a View. Rather, it is the ViewModel's responsibility to translate between the Domain Model and the View.
All the WPF-specific stuff should go in the ViewModel, while the other classes in your application should be unaware of WPF.
Scott Hanselman has a podcast that goes over this very topic in detail with Ian Griffiths, an individual who is extremely knowledgeable about WPF and who co-wrote an O'Reilly book titled "Programming WPF."
Windows Presentation Foundation explained by Ian Griffiths
http://hanselminutes.com/default.aspx?showID=184
The short (woefully incomplete) answer is that the view contains visual objects and minimal logic for manipulating them, while the View Model contains the state of those objects.
But listen to the podcast. Ian says it better than I can.