Unit testing WCF RIA Services - Silverlight

When data from a Get operation on my DomainService is sent to the DomainContext in my Silverlight application, some rows end up not being sent while others are. I check this by setting a breakpoint in the DomainService and a breakpoint in the DomainContext load operation callback. How can I create a unit test to check this?
For example, could I set up some in-memory data for the DomainService and check whether the Silverlight DomainContext receives it?

This is usually caused by a primary key that isn't unique. When RIA Services sends rows to the client it filters the results by the primary key to make sure there are no duplicates. If you have two rows with different data but the same primary key, only one of those rows will make it to the client.
There is a blog series by Kyle McClellan on how to unit test RIA Services, which may be helpful: http://blogs.msdn.com/b/kylemc/archive/2011/08/18/unit-testing-a-wcf-ria-domainservice-part-1-the-idomainservicefactory.aspx
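If you want to verify the duplicate-key theory before wiring up the full client/server test from that series, a minimal server-side sketch could look like this; it tests the DomainService directly rather than the DomainContext round-trip, and the domain service, repository and entity names are illustrative assumptions, not from your project:

using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class MyDomainServiceTests
{
    [TestMethod]
    public void GetCustomers_ReturnsOnlyRowsWithUniqueKeys()
    {
        // Hypothetical domain service whose data access is stubbed with in-memory data.
        var service = new MyDomainService(new InMemoryCustomerRepository(
            new Customer { Id = 1, Name = "A" },
            new Customer { Id = 2, Name = "B" }));

        var result = service.GetCustomers().ToList();

        // All rows should come back, and every primary key should be unique;
        // duplicate keys are the usual reason rows silently disappear on the client.
        Assert.AreEqual(2, result.Count);
        Assert.AreEqual(result.Count, result.Select(c => c.Id).Distinct().Count());
    }
}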

Related

Breeze and Data Access SOA

In evaluating Angular + Breeze: does Breeze support tracking changes to the DTOs consumed from services, so the changes flow back to the backend Entity Framework?
Yes and no. Breeze tracks the changes on the client, and when you call saveChanges(), it sends the changed entities (with information about what properties changed) to the server. What happens on the server is up to you, so you could use the received data to modify the states of entities in an existing EF context, and accumulate change tracking info in EF until you decide to save it to the DB.
The provided EF + WebApi server-side components don't do that, however. They are built to streamline the following use case:
Client performs add/update/delete operations on entities and calls saveChanges().
Server creates a new EF DbContext and applies the changes to it.
Server applies validation rules (in the BeforeSaveEntities method), and rejects the save if they fail.
Server DbContext saves the changes to the database.
In this scenario, there is no long-lived EF DbContext tracking the changes; the change tracking is done on the client, and EF is used to process those changes on the server and save them in the DB.
That probably covers 90% of what most applications require, but there are hooks in place to intercept the save and make server-side changes before it happens, and you can override any parts that don't fit your needs.
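As a rough illustration of that interception point, a BeforeSaveEntities override on the standard Breeze EFContextProvider might look something like this (the Order entity and the validation rule are made-up examples, and the exact namespaces depend on the Breeze version you use):

using System;
using System.Collections.Generic;

public class MyContextProvider : EFContextProvider<MyDbContext>
{
    // Called by Breeze with the incoming change set before anything is saved.
    protected override Dictionary<Type, List<EntityInfo>> BeforeSaveEntities(
        Dictionary<Type, List<EntityInfo>> saveMap)
    {
        List<EntityInfo> orderEntries;
        if (saveMap.TryGetValue(typeof(Order), out orderEntries))
        {
            foreach (var entry in orderEntries)
            {
                var order = (Order)entry.Entity;
                // Throwing here rejects the save and returns an error to the client.
                if (order.Total < 0)
                    throw new InvalidOperationException("Order total cannot be negative.");
            }
        }
        // Returning the (possibly modified) map lets the save continue.
        return saveMap;
    }
}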

Understanding how WCF works

I am using a WCF service between the client-side UI (Silverlight 3.0) and the data layer. We are using NHibernate for database access. So please tell me if my understanding below is correct or not:
The UI calls WCF for a Save method (for example).
The WCF service has a Save method which encapsulates a Save method from the Data Access Object.
The Data Access Object's Save method in turn encapsulates a default Save method of NHibernate, which actually saves some business object(s) into the database.
Also, can someone tell me how we pass objects from WCF to the UI (Silverlight 3.0) layer and vice versa? I have read that we use DTOs for that. But how does a DTO work? Do they correspond to the 'Data Contracts' in WCF? If not, is the DTO declared on both the WCF (server) side and the client side?
No, not quite....
The UI calls the client-side proxy method Save.
The WCF runtime takes that call and all the parameters being passed in and serializes them into a message (typically an XML-serialized message).
The WCF runtime sends the serialized message over some kind of transport medium (whatever it is).
On the server side, the WCF runtime takes the incoming message.
The message is deserialized and the appropriate class and method to handle it are identified.
Typically, a new instance of a service class is instantiated to handle the request.
The WCF runtime unpacks the parameters and calls the appropriate method on the service class.
The same steps - basically in reverse - are done for the response.
Important point: the only thing between the client and the server is a serialized message (which could be sent by e-mail or pigeon courier) - there's no other connection - no "remote object call" or anything like that at all.
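To connect this to the DTO part of the question: the "objects" you pass are just data contracts that the WCF runtime serializes into that message. A minimal sketch of what gets declared on the server (all names here are illustrative):

using System.Runtime.Serialization;
using System.ServiceModel;

// The DTO: a plain data holder that the DataContractSerializer turns into XML.
[DataContract]
public class OrderDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // On the server, the implementation of this would delegate to the data access
    // layer (e.g. NHibernate); the client only ever sees the serialized OrderDto.
    [OperationContract]
    void Save(OrderDto order);
}

When you add a service reference in the Silverlight project, a client-side copy of OrderDto is generated from the service metadata, which is why the DTO effectively exists on both sides.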
marc_s mentions the client-side proxies, which can be generated via service references in your Silverlight project. The generated proxies are decent enough and provide an async model for running requests from the Silverlight side; they will look mostly like remote procedure calls.
Another approach is to use the leaner (but maybe more advanced?) channel factory directly. A simple example of that can be found here. Both methods take care of most of the serialization details for you.
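As a rough sketch of the channel-factory route (the contract and address below are assumptions, and Silverlight requires the Begin/End async pattern on the contract):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginSave(int orderId, AsyncCallback callback, object state);
    void EndSave(IAsyncResult result);
}

public static class OrderServiceCaller
{
    public static void SaveOrder(int orderId)
    {
        // Create a channel directly instead of using a generated proxy class.
        var factory = new ChannelFactory<IOrderServiceAsync>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/OrderService.svc"));
        IOrderServiceAsync channel = factory.CreateChannel();
        channel.BeginSave(orderId, ar => channel.EndSave(ar), null);
    }
}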

RIA Services and NHibernate: problem inserting new entities

I have a combination of RIA Services and NHibernate. NHibernate is configured to use identity on the database side, so new entities are sent with 0 for the id. NHibernate works as it should: it picks up the generated keys from the database and updates the entities.
I have an example with a compositional hierarchy. My entity is complex; it has two collections:
InvestObject
- MaterialItems
- WorkItems
I work with this structure in one unit of work. Getting and showing data in the Silverlight app is no problem, but if I try to add more than one item to the MaterialItems collection on the client side, saving fails with this error:
Submit operation failed. Invalid ChangeSet: Only one entry for a given entity instance can exist in the ChangeSet.
at System.ServiceModel.DomainServices.Server.ChangeSet.ValidateChangeSetEntries(IEnumerable`1 changeSetEntries)
at System.ServiceModel.DomainServices.Server.ChangeSet..ctor(IEnumerable`1 changeSetEntries)
There is a quick fix on the client side: just generate some dummy negative ids for the new material items. This satisfies RIA Services and the save is propagated to the server side, but then NHibernate throws an error, because it expects 0 for all new ids, not an arbitrary value. So this is not OK.
Finally I tricked NHibernate by resetting all the new ids back to 0, but this does not make me happy; it is a messy, ugly solution.
Please help.
It's been a while since I've done this, so the details are hazy, but I think you basically can't use ids that are generated in the DB with RIA Services. We used the HiLo algorithm instead.
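For reference, switching the id generator away from identity might look roughly like this with Fluent NHibernate (the block size and cascade settings are placeholder assumptions; the entity names come from the question):

using FluentNHibernate.Mapping;

public class InvestObjectMap : ClassMap<InvestObject>
{
    public InvestObjectMap()
    {
        // HiLo lets NHibernate hand out keys itself (no identity column),
        // which avoids fighting RIA Services over database-generated ids.
        Id(x => x.Id).GeneratedBy.HiLo("100");

        HasMany(x => x.MaterialItems).Cascade.AllDeleteOrphan();
        HasMany(x => x.WorkItems).Cascade.AllDeleteOrphan();
    }
}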

EF4 + STE: Reattaching via a WCF service? Using a new ObjectContext every time?

I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or collection of entities (using List, for example, and not IQueryable) to the client (in my case Silverlight).
The client can then change or update the entity. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking.
Anyway, to update I then just need to send the entity back to another method on my WCF service that does the update. Should I be creating a new ObjectContext every time? In every method?
If I am creating a new ObjectContext every time in every method on my WCF service, don't I need to re-attach the STE to the ObjectContext?
So basically this alone wouldn't work?
using (var ctx = new MyContext())
{
    ctx.Orders.ApplyChanges(order);
    ctx.SaveChanges();
}
Or should I be creating the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance use the same ObjectContext?
I could create and destroy the WCF service in each method call from the client, hence in effect creating a new ObjectContext each time.
I understand that it isn't a good idea to keep the ObjectContext alive for very long.
You are asking several questions so I will try to answer them separately:
Returning IQueryable:
You can't return IQueryable. IQueryable describes a query that has yet to be executed. When you try to return IQueryable from a service, it is executed during serialization of the service response, which usually causes an exception because the ObjectContext is already closed.
Tracking on the client:
Yes, STEs can track changes on the client, but only if the client uses STEs! The assembly with the STEs should be shared between the service and the client.
Sharing the ObjectContext:
Never share an ObjectContext in a server environment that updates data. Always create a new ObjectContext instance for every call. I described the reasons here.
Attaching STEs:
You don't need to attach the STE; ApplyChanges will do everything for you. Also, if you want to return the order back from your service operation, you should call AcceptChanges on it.
Creating the ObjectContext in the service constructor:
Be aware that WCF has its own rules for working with service instances. These rules are based on the InstanceContextMode and the binding used (and you can implement your own rules by implementing IInstanceProvider). For example, if you use BasicHttpBinding, the default instancing will be PerCall, which means WCF will create a new service instance for each request. But if you use NetTcpBinding instead, the default instancing will be PerSession and WCF will reuse a single service instance for all requests coming from a single client (a single client proxy instance).
Reusing the service proxy on the client:
This also depends on the binding used and the service instancing. When a session-oriented binding is used, the client proxy is tied to a single service instance. Calling methods on that proxy will always execute operations on the same service instance, so the service instance can be stateful (it can contain data shared among calls). This is not generally a good idea, but it is possible. When using a session-oriented connection you have to deal with several problems that can arise (it is more complex). BasicHttpBinding does not allow sessions, so even with a single client proxy, each call is processed by a new service instance.
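Putting the ApplyChanges/AcceptChanges advice together, a per-call update operation might look roughly like this (reusing the MyContext/Orders names from the snippet in the question):

public Order UpdateOrder(Order order)
{
    // New ObjectContext per call, as recommended above.
    using (var ctx = new MyContext())
    {
        ctx.Orders.ApplyChanges(order);  // replays the changes recorded by the STE
        ctx.SaveChanges();
    }
    order.AcceptChanges();               // reset the change tracker before returning it
    return order;
}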
You can attach an entity to a new ObjectContext, see http://msdn.microsoft.com/en-us/library/bb896271.aspx.
But it will then have the state Unchanged.
The way I would do it is:
re-query the database for the information
compare it with the object being sent in
update the entity from the database with the changes
then do a normal SaveChanges
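A rough sketch of those steps with plain (non-STE) POCO entities, reusing the names from the question's snippet ('incoming' stands for the detached object sent from the client):

using (var ctx = new MyContext())
{
    // Re-query the current row so the context is tracking it.
    var existing = ctx.Orders.Single(o => o.Id == incoming.Id);

    // Copy the scalar values of the detached 'incoming' entity onto the tracked one.
    ctx.Orders.ApplyCurrentValues(incoming);

    // Persist the differences as a normal save.
    ctx.SaveChanges();
}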
Edit
The above was for POCO, as pointed out in the comment
For STE, you create a new context each time but use "ApplyChanges", see: http://msdn.microsoft.com/en-us/library/ee789839.aspx

Best practice for using a WCF service from Silverlight?

How would you structure the code for calling a WCF service in a Silverlight application?
Use a WCF service proxy instantiated only once (i.e. a singleton) across the whole Silverlight app?
If so, how did you solve unsubscribing controls from the web-service-call-completed event?
or
Create a WCF service proxy for each web-service call? Where do you close the proxy then?
Here's the application structure I found workable:
The application is split into modules (Prism, but it can be anything) - one module per vertical function.
Every module has its own set of service client classes (generated by slsvcutil).
For every service client partial class I have another generated partial class where, for every service method, I have a version that returns IObservable.
E.g. if my service client has a method GetAllMyDataAsync() and an event GetAllMyDataCompleted, the generated method signature will be IObservable<MyDataDto[]> GetAllMyData(). This method deals with subscribing/unsubscribing to the event, authentication, error handling, and other infrastructure issues.
This way a web-service call becomes simply:
new MyServiceClient().GetAllMyData().Subscribe(DoSomethingWithAllMyData);
With this I can easily join data from multiple requests, e.g. (strictly for demonstration purposes, don't try this in a real app):
var des = from d in new MyServiceClient().GetMyDataItem()
          from e in new MyServiceClient().GetDataItemEnrichment(d)
          select new EnrichedData { Data = d, Enrichment = e };
des.Subscribe(DoSomethingWithEnrichedData);
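For completeness, here is roughly what one of those IObservable wrappers could look like, assuming the usual slsvcutil-style proxy with an Async method and a Completed event, and with the Rx assemblies referenced (the event-args type name is a guess at what the tool generates):

public partial class MyServiceClient
{
    public IObservable<MyDataDto[]> GetAllMyData()
    {
        // Bridge the event-based async pattern of the generated proxy to Rx.
        return Observable.Create<MyDataDto[]>(observer =>
        {
            EventHandler<GetAllMyDataCompletedEventArgs> handler = (s, e) =>
            {
                if (e.Error != null)
                {
                    observer.OnError(e.Error);
                }
                else
                {
                    observer.OnNext(e.Result);
                    observer.OnCompleted();
                }
            };

            GetAllMyDataCompleted += handler;
            GetAllMyDataAsync();

            // Unsubscribe from the Completed event when the subscription is disposed.
            return () => GetAllMyDataCompleted -= handler;
        });
    }
}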
Once the application gets more complex (e.g. data is shared by multiple components, you add messaging that dynamically updates the initially retrieved data, etc.), it's helpful to add another element to the stack: a Model.
So if I have a service MyDataService, I'd have a model class called MyDataServiceModel. It is registered in the container as a singleton and injected into the viewmodels that need it. Viewmodels talk to this class when they need data (so rather than calling MyServiceClient.GetAllMyData, a viewmodel calls MyDataServiceModel.GetAllMyData).
This way viewmodels are completely independent of the WCF stack (easier to mock, easier to test). Additionally, these model classes take care of:
data transformation from/to DTO
enriching and combining data (one model method may join data from more than one request)
handling issues like throttling and out-of-order responses (e.g. a typical scenario: the user selects something in a combobox, which sends a request to the server to retrieve data for that selection; while that request is executing the user makes another change, and the responses come back out of order), etc.
combining data pulled on initial load via WCF with data pushed by the service during the session
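A bare-bones sketch of such a model class (all type and property names are illustrative; registration as a singleton in the container is omitted):

public class MyDataServiceModel
{
    // Viewmodels depend on this class, never on the WCF-generated proxy types.
    public IObservable<MyData[]> GetAllMyData()
    {
        return new MyServiceClient()
            .GetAllMyData()
            .Select(dtos => dtos.Select(MapToModel).ToArray());
    }

    // DTO-to-model transformation lives here, not in the viewmodels.
    private static MyData MapToModel(MyDataDto dto)
    {
        return new MyData { Id = dto.Id, Name = dto.Name };
    }
}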
