Using an ObservableCollection in a service - wpf

I have a few projects. One of them will be deployed as a Windows Service. Another is a WPF application. There are also a few other applications that manage some data (from external hardware or the database). The data lists are almost all ObservableCollections.
I've read around a bit, and it seems like ObservableCollection is really only meant for the UI layer (the WPF application, in my case). Is that correct? Would I be better off using events (PropertyChanged) in the service/data-manager layers?

In a service, usually all objects should only live as long as it takes to process the current request, correct? So, normally you wouldn't need change notification at all in the service layer, since the service would perform all changes to the model itself. For the next request, the required objects would be read again from the backing store.
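For what it's worth, here's a minimal sketch of how that split often looks (the type names are made up, not from your projects): the service/data-manager layer returns plain collections, and only the WPF view model wraps them in an ObservableCollection for binding.

    using System.Collections.Generic;
    using System.Collections.ObjectModel;

    // Hypothetical data item, for illustration only.
    public class DeviceReading
    {
        public double Value { get; set; }
    }

    // The service/data-manager layer hands out plain collections and raises
    // no UI-specific change notifications.
    public interface IDeviceService
    {
        IList<DeviceReading> GetReadings();
    }

    // Only the WPF view model wraps the data in an ObservableCollection, so
    // the UI is notified when items are added or removed.
    public class ReadingsViewModel
    {
        public ObservableCollection<DeviceReading> Readings { get; private set; }

        public ReadingsViewModel(IDeviceService service)
        {
            Readings = new ObservableCollection<DeviceReading>(service.GetReadings());
        }
    }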

Related

What is the real purpose of ViewModel in MVVM?

I had a talk with my team lead about this topic, and from his point of view we can just use binding and commanding and omit the ViewModel, because we can test UI behaviour without a VM using Automation or our own home-grown UI-testing mechanisms (based on automated clicks on Views). So, if there are no real benefits, why should I spawn "redundant" entities? Besides, automated integration tests look much more indicative than VM tests. So it seems that we can mix VMs and Models.
update:
I agree that mixing VMs and Models puts both the data model and the rules for transforming that data for display in a View into a single .cs file. But if that is the only benefit, I don't want to create a VM for each View.
So what advantages of a VM do you know of?
The VM is the logic behind your UI. Combining the UI code with the logic ends up in a mess, at least in my experience. Your view should define what you see - buttons, menus, etc. Your VM is in charge of the binding and of handling events caused by the user.
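As a rough illustration (the type and property names are invented), the ViewModel side of that split is usually just a class that exposes bindable properties and raises change notifications, with no knowledge of the controls that display it:

    using System.ComponentModel;

    public class LoginViewModel : INotifyPropertyChanged
    {
        private string _userName;

        // The View binds to this property; the VM neither knows nor cares
        // whether it is shown in a TextBox, a ComboBox, or not at all.
        public string UserName
        {
            get { return _userName; }
            set
            {
                _userName = value;
                OnPropertyChanged("UserName");
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }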
Edit:
Not wanting to create a VM for each view doesn't sound like a software-engineering reason. Creating one will keep your view clean of logic and leave your model free to be the connecting layer between the core layer and your app layer.
I like the following example regarding the model and its role (and why it shouldn't be combined with the VM): imagine you're developing an Android app using Google Maps. Google Maps is your core. Then one fine day you really need the option to, say, color parts of the map in pink, bright pink. An email to Google asking for colorPink(Map) will probably get you nowhere. That's where your model steps in and provides the map wrapper you need to define your pinky method.
Without a separate model, you'd have to go through every VM that uses map and update it.
So, the view has a role, the model has a role, the VM is the logic between those.
Edit 2:
There are some cases where there's no need of a model layer - I tended to disagree at first but was convinced after a discussion: In relatively small applications, the model can end up being a redundant wrapper with no added functionality over the core. In such cases, you can use the core objects directly.
What is he binding to? Whatever he is binding to is effectively a view model, whether you call it that or not. If it also doubles as a data model, then you're violating the Single Responsibility Principle (SRP). Essentially, the problem here is that you're lumping together code that services different parts of the application architecture, which will lead to a convoluted mess.
UI testing is a pain, not just because you need to accommodate changes in the View, which can happen many times, but also because the UI tends to interfere with its own testing. Depending on the tests needed, you'll have to keep track of focus, resizing, and possibly other (mouse) events, which makes it extremely hard to build a representative test.
It is much easier to scope the test to the logic in the ViewModel and leave the UI testing to humans. The human testers do not need to test the logic; their only concern should be the UI.
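As a sketch of what that scoping buys you (SearchViewModel and its members are hypothetical), a ViewModel-level test needs no window, no focus handling, and no automated clicks:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class SearchViewModelTests
    {
        [TestMethod]
        public void Search_WithEmptyQuery_DisablesSearchCommand()
        {
            var vm = new SearchViewModel();   // hypothetical ViewModel under test

            vm.Query = string.Empty;

            Assert.IsFalse(vm.SearchCommand.CanExecute(null));
        }
    }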
By breaking down a ViewModel into a hierarchy of ViewModels you might be able to reuse a ViewModel multiple times. There is no rule that states that you should have a ViewModel for each View. (After a release or two I end up there anyway, but that's beside the point :) )
It depends on the nature of your Model - sometimes models are simple and can serve as both, like you are suggesting. However, depending on the framework, you'll need to dirty up your model with some PropertyChanged event code, which can get messy and distracting. In more complex cases, ViewModels serve as a mediator between the view and the model. Suppose you're creating a client app that monitors a remote process or database entries - creating ViewModels for these entities lets you surface your data in a way that is convenient to bind to in a UI framework (but would be silly in a DB framework), encapsulate operations as commands, perform validation, etc.
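To make "encapsulate operations as commands" concrete, here is a sketch of the kind of ICommand helper a ViewModel typically exposes; this is the common hand-rolled WPF pattern (often called RelayCommand or DelegateCommand), not a framework class:

    using System;
    using System.Windows.Input;

    public class RelayCommand : ICommand
    {
        private readonly Action<object> _execute;
        private readonly Predicate<object> _canExecute;

        public RelayCommand(Action<object> execute, Predicate<object> canExecute)
        {
            if (execute == null) throw new ArgumentNullException("execute");
            _execute = execute;
            _canExecute = canExecute;
        }

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }

        // In WPF the usual trick is to let CommandManager requery CanExecute.
        public event EventHandler CanExecuteChanged
        {
            add { CommandManager.RequerySuggested += value; }
            remove { CommandManager.RequerySuggested -= value; }
        }
    }

The ViewModel then exposes a property such as a hypothetical SearchCommand = new RelayCommand(o => RunSearch(), o => CanRunSearch()), and the View binds a button's Command to it.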

Entity Framework 4 + Silverlight persisting entity graphs

We are currently building our first large application with Silverlight 4 (using PRISM) and Entity Framework 4. Now I have a general question about persisting view model data.
Suppose I have domain objects which translate to EF4 entities with multiple associations (Entity having collections, having collections again etc..). What would be the best way to persist those graphs during / after user actions? Would it be better to write more granular repository methods like "AddEntityToParent" and "RemoveEntityFromParent" or just take all the data from the view and push it to a "SaveLargeParentEntity" Method?
Can I "cache" the view model items for child objects in Silverlight and push it all down to EF4 later or would I have to make a granular update for every single item changed in the user interface? Any good advise? I hope my question was clear enough. Thank you.
You are actually making a choice between basic CRUD operations and working with object graphs. I would choose the second approach because CRUD operations over a web service can be very chatty.
When working with object graphs sent over a web service you have to deal with detached behavior. Detached entities in an object graph cause some trouble when updating relations. The best approach usually is to load the whole graph before the update (to get attached entities) and merge the received graph into the attached one - that will correctly track the changes for you.
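A rough sketch of that "load, then merge" idea against an EF4 ObjectContext (the entity and property names are invented, and concurrency handling plus children deleted on the client are ignored):

    // Server-side: 'detached' is the graph received over the web service.
    public void UpdateOrderGraph(Order detached)
    {
        using (var context = new LegacyEntities())
        {
            // Load the attached graph first so the context can track changes.
            var attached = context.Orders
                .Include("OrderLines")
                .Single(o => o.OrderId == detached.OrderId);

            // Merge scalar values from the detached graph into the attached one.
            context.Orders.ApplyCurrentValues(detached);

            foreach (var line in detached.OrderLines)
            {
                if (attached.OrderLines.Any(l => l.OrderLineId == line.OrderLineId))
                    context.OrderLines.ApplyCurrentValues(line);   // existing child
                else
                    attached.OrderLines.Add(line);                 // child added on the client
            }

            context.SaveChanges();
        }
    }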
But because you are using Silverlight, which is stateful, you can also think about using self-tracking entities (STEs). STEs are able to track changes after they are detached from the EF ObjectContext. So you can return an object graph consisting of STEs from the web service to the Silverlight application, make some changes to the STEs, and send the same object graph back to the web service. Applying the changes from the STEs will handle a lot of the work for you. Be aware that STEs are not the best solution for services that should be exposed to general web applications or non-.NET clients.

Is this the correct use of WCF and data contracts?

I'm still pretty new to Silverlight, quite new to WCF, and am trying to broaden my horizons into both. I'd like to learn what is considered to be good practices while doing so.
On the client side, I have a Silverlight application. On the server side, I have a database that the Silverlight application will be utilizing. In between the two (okay, it's server-side, but...), I have a WCF service that the client calls upon to get the data from the database.
I have created a class that is marked as a DataContract and is used by the WCF service. That class is an object model populated with data from the database. On the client side, when it requests and receives an instance of this class, it uses the instance data to instantiate and populate a client-defined object that has additional client-defined members.
It's my use of the DataContract that most worries me. To create an instance of an object to be serialized and sent, only to be pillaged for its data so another object can be created seems...inefficient. But if it's considered a good practice I can get past that.
I did consider going the route of a web handler (.ashx) and using a proprietary binary standard to communicate the data, but I think going the WCF route may be more applicable and usable in the future (thinking: job).
I don't see any particular problem with your approach.
In my mind what you're describing is the transfer of data from service to client as a DTO (data-transfer object), and then using that DTO to populate a view model object. It is also quite common for the DTO and view model objects to use varying levels of granularity in terms of the data they represent (typically DTOs will be more coarse grained), and the view model will contain behaviour that is specific to the UI.
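For illustration, the service-side DTO is typically nothing more than a plain data contract (the type and members here are made up):

    using System.Runtime.Serialization;

    [DataContract]
    public class CustomerDto
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }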
You might want to look at tools and frameworks that help in the mapping between DTOs and view model objects. One of my favourites is AutoMapper.
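As a sketch of what the mapping then looks like with AutoMapper (CustomerDto and CustomerViewModel are hypothetical types, and this uses AutoMapper's classic static configuration API):

    // Done once at startup: describe how to project the DTO onto the view model.
    Mapper.CreateMap<CustomerDto, CustomerViewModel>();

    // After the WCF call completes, map the received DTO.
    CustomerViewModel vm = Mapper.Map<CustomerDto, CustomerViewModel>(dto);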

How to call operations other than CRUD in RIA Domain Service?

I have some trouble getting my head around how to implement more complex operations in a Domain Service in RIA Services. This is all Silverlight 4, VS 2010 and .Net Framework 4 in Beta 2.
Goal
I wish I could create an operation on my LinqToEntitiesDomainService that would have a signature something like this:
public UnwieldyOperationResult PerformUnwieldyOperation( UnwieldyOperationParameters p );
The idea is that this operation takes a collection of parameters and performs rather complex operations which would update different instances and types of the entities that are otherwise manipulated through the DomainService CRUD functionality.
Problem
The immediate problem I hit is that it does not seem to be allowed to pass the custom type as a parameter to the method, and I suppose something along those lines goes for the return value as well. I want to encapsulate the operation parameters in a DTO for clarity, and this unwieldy operation does not have any corresponding entity in the legacy database that I have wrapped with an Entity Framework 4.0 model, which I am in turn basing the Domain Service on.
Is a domain service supposed to deal with only the types that are entities in the underlying EF model? Is it not designed to expose more complex operations like my UnwieldyOperation?
If so, can I build another service somehow that allows both the operation signature and to manipulate the entity framework?
I have understood that only one Domain Service can handle an entity from the model. This has led me to cram all the CRUD and now also the UnwieldyOperation into one Domain Service, although my first idea was to split the service into smaller parts.
If I could get the operation to work with parameters and a return value in the Domain Service, my next wish would be to have the entities that are already loaded in the domain context at the client refresh themselves.
Is there any efficient mechanism for such a thing?
How would you go about to do that?
What I have so far...
In short, this is what I have so far:
I have wrapped an existing legacy database with an Entity Framework 4.0 model, with as little extra padding/code as possible. This means right-click, add, and generate from database.
I have implemented the simpler CRUD operations in the DomainService and I am using them successfully to display and edit straightforward data. I have some encapsulation of logic through ViewModels in the client, but I expose the Entity classes directly; I think this is unrelated to my problem/question, though.
I have realized that I can't add the UnwieldyOperation in as straightforward a manner as I initially thought... Also, I suspect/hope that I have misunderstood some aspects of the Domain Service mechanism, which has led me to the current situation.
One way to go?
Writing this down in a question like this gives me the idea that perhaps I should go in this direction:
Have the LegacyModelService expose the CRUD operations, as I have already done.
Expose the unwieldy operations in another service. Should I make that a RIA Domain Service or just plain WCF?
Access the Entity Framework model from the new UnwieldyOperationsService and manipulate the data layer there.
Explicitly reload or refresh the client-side domain context for the LegacyModelService to reflect the changes that may have resulted from the UnwieldyOperation. What would be a good way to get this done?
Check out http://msdn.microsoft.com/en-us/library/ee707373%28VS.91%29.aspx for the naming conventions over and above simple CRUD; maybe Invoke or Named Update operations would be suitable?
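As a sketch of the Invoke route (the attribute and base-class names reflect the released WCF RIA Services bits and may differ slightly in the Beta 2 assemblies; the parameter and result types have to be serializable complex types rather than entities):

    [EnableClientAccess]
    public class LegacyModelService : LinqToEntitiesDomainService<LegacyEntities>
    {
        // Invoke operations sit outside the CRUD pipeline, so they can accept
        // and return non-entity types, as long as they can be serialized.
        [Invoke]
        public UnwieldyOperationResult PerformUnwieldyOperation(UnwieldyOperationParameters p)
        {
            var result = new UnwieldyOperationResult();

            // Apply the complex changes directly against this.ObjectContext
            // here, then persist them in one unit of work.
            this.ObjectContext.SaveChanges();

            return result;
        }
    }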

Silverlight pages binding to dataset(design question)

I have a legacy .NET application (now migrated to .NET 2.0).
We need to convert this application to Silverlight.
The problem here is the data layer. All the methods in the data layer return DataSets.
The entire web application uses DataSets for data binding.
Now the questions are:
Can I use the same datasets for silverlight pages also?
Or do I have to create a wrapper around the datalayer?
Or do I have to change the entire datalayer architecture (like returning collections etc)?
Please suggest the best possible way.
Unfortunately, DataSets aren't supported in Silverlight 2 (and as far as I know they aren't coming in Silverlight 3 either).
I'm going to assume that your current data layer has methods like GetTopCustomers that return DataSets, and that the client application can then modify that data and re-submit it to a data layer function like UpdateCustomers that takes a DataSet as a parameter and submits the changes to a database. If that is the case, I think you'll have a tough time writing a wrapper, because you'll be on your own for enforcing referential integrity and tracking changes on the client side. It's certainly possible, but I think it'll be more pain than it's worth. So, in my opinion, creating a wrapper around your data layer would be equivalent to changing the entire data layer architecture to return collections, etc.
Your best bet for a data layer is .NET RIA Services, which ships sometime in the Silverlight 3 timeframe. It's a huge leap over the current technology, ADO.NET Data Services, in that it adds change tracking and a DataSet-like "context" for the client. It also allows direct sharing of code between ASP.NET (or any part of the full .NET Framework) and Silverlight, so your business rules can run on both the client side and the server side. Rewriting your data layer may not sound appealing, but I think it'll spare you much pain, and you'll get a huge return if you choose .NET RIA Services. If that choice doesn't fit, the other options are to use ADO.NET Data Services to ship the data back and forth (combined with a wrapper for your current data layer) or to write your own custom WCF services to provide CRUD operations (again with a wrapper on your current data layer).
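If you do end up going the custom WCF route, the wrapper is basically a thin service that converts the existing DataSet-returning methods into plain collections the Silverlight client can consume; a hypothetical sketch (LegacyDataLayer and the column names are invented):

    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using System.ServiceModel;

    // Simple DTO the Silverlight client can bind to.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        List<Customer> GetTopCustomers();
    }

    public class CustomerService : ICustomerService
    {
        public List<Customer> GetTopCustomers()
        {
            // The existing data layer method that returns a DataSet; only the
            // first table is assumed to matter here.
            DataSet ds = LegacyDataLayer.GetTopCustomers();

            return ds.Tables[0].Rows.Cast<DataRow>()
                .Select(row => new Customer
                {
                    Id = (int)row["CustomerId"],
                    Name = (string)row["Name"]
                })
                .ToList();
        }
    }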
Good luck!
If the goal of your conversion is to create a Silverlight version of your application with the least amount of change to your business logic layer, then a wrapper is your answer.
This is a lot of work in Silverlight V2, as you probably know. If you'd like some detail, here's a blog post. You will end up rolling your own serialize/deserialize/zip/encode layer for transferring the data to your Silverlight app.
Silverlight 3 isn't out yet, but it's close from what the rumors say. And this functionality is present in V3 (from what I hear).
