How would you structure the code for calling a WCF service in a Silverlight application?
Using a WCF service proxy that is instantiated only once (i.e. a singleton) and shared across the whole SL app?
If so, how did you solve unsubscribing controls from the ws-call-completed event?
or
creating a WCF service proxy for each ws-call? Where do you then close the proxy?
Here's the application structure I found workable:
The application is split into modules (Prism, but it can be anything) - one module per vertical function.
Every module has its own set of service client classes (generated by slsvcutil).
For every service client partial class I have another generated partial class where, for every service method, there is a version that returns IObservable.
E.g. if my service client has a method GetAllMyDataAsync() and an event GetAllMyDataCompleted, the generated method's signature will be IObservable<MyDataDto[]> GetAllMyData(). This method deals with subscribing/unsubscribing to the event, authentication, error handling, and other infrastructure concerns.
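For reference, here is a rough sketch of what one of those generated wrapper methods might look like using the Reactive Extensions (the exact Observable.Create overload and namespaces depend on the Rx version you reference; MyDataDto and the proxy member names are taken from the slsvcutil-generated client described above):

using System;
using System.Reactive.Linq; // namespace varies with the Rx version used

public partial class MyServiceClient
{
    // Wraps the event-based async pair GetAllMyDataAsync/GetAllMyDataCompleted
    // in an IObservable that handles subscribe/unsubscribe for the caller.
    public IObservable<MyDataDto[]> GetAllMyData()
    {
        return Observable.Create<MyDataDto[]>(observer =>
        {
            EventHandler<GetAllMyDataCompletedEventArgs> handler = (s, e) =>
            {
                if (e.Error != null)
                {
                    observer.OnError(e.Error);
                }
                else
                {
                    observer.OnNext(e.Result);
                    observer.OnCompleted();
                }
            };

            GetAllMyDataCompleted += handler; // subscribe to the completed event
            GetAllMyDataAsync();              // start the web-service call

            // detach the handler when the caller disposes its subscription
            return () => GetAllMyDataCompleted -= handler;
        });
    }
}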
This way a web-service call becomes simply:
new MyServiceClient().GetAllMyData().Subscribe(DoSomethingWithAllMyData)
With this I can easily join data from multiple requests, e.g. (strictly for demonstration purposes; don't try this in a real app):
var des = (from d in new MyServiceClient().GetMyDataItem()
           from e in new MyServiceClient().GetDataItemEnrichment(d)
           select new EnrichedData { Data = d, Enrichment = e });

des.Subscribe(DoSomethingWithEnrichedData);
Once the application gets more complex (e.g. data is shared by multiple components, you add messaging that dynamically updates the initially retrieved data, etc.) it's helpful to add another element to the stack - a Model.
So if I have a service MyDataService, I'd have a model class called MyDataServiceModel. It is registered in the container as a singleton and injected into the viewmodels that need it. Viewmodels then talk to this class when they need data (so rather than calling MyServiceClient.GetAllMyData they call MyDataServiceModel.GetAllMyData).
This way viewmodels are completely independent of the WCF stack (easier to mock, easier to test). Additionally, these model classes take care of (a rough sketch follows the list):
data transformation from/to DTO
enriching and combining data (one model method may join data from more than one request)
handling issues like throttling (a typical scenario: the user selects something in a combobox, which sends a request to the server to retrieve data for that selection; while that request is being executed the user makes another change, but the responses come back out of order), etc.
combining data pulled on initial load via WCF with data pushed by the service during the session
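As an illustration only (MyDataServiceModel, MyData and FromDto are made-up names), the model class consumed by the viewmodels might look roughly like this:

using System;
using System.Linq;
using System.Reactive.Linq; // namespace depends on the Rx version

// Registered in the container as a singleton and injected into viewmodels.
public class MyDataServiceModel
{
    public IObservable<MyData[]> GetAllMyData()
    {
        return new MyServiceClient()
            .GetAllMyData() // IObservable<MyDataDto[]> from the generated wrapper
            .Select(dtos => dtos.Select(FromDto).ToArray()); // DTO -> model mapping
    }

    // Transformation from the wire DTO into the richer model type.
    private static MyData FromDto(MyDataDto dto)
    {
        return new MyData { /* copy/enrich fields from the DTO */ };
    }
}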
Related
In evaluating Angular + Breeze, does Breeze support tracking changes to the DTOs consumed from services back to the backend Entity Framework?
Yes and no. Breeze tracks the changes on the client, and when you call saveChanges(), it sends the changed entities (with information about what properties changed) to the server. What happens on the server is up to you, so you could use the received data to modify the states of entities in an existing EF context, and accumulate change tracking info in EF until you decide to save it to the DB.
The provided EF + WebApi server-side components don't do that, however. They are built to streamline the following use case:
Client performs add/update/delete operations on entities and calls saveChanges().
Server creates a new EF DbContext and applies the changes to it.
Server applies validation rules (in the BeforeSaveEntities method), and rejects the save if they fail.
Server DbContext saves the changes to the database.
In this scenario, there is no long-lived EF DbContext tracking the changes; the change tracking is done on the client, and EF is used to process those changes on the server and save them in the DB.
That probably covers 90% of what most applications require, but there are hooks in place to intercept the save and make server-side changes before the save, and you can override any parts that don't fit your needs.
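For example, a rough (unverified) sketch of such a hook using the Breeze EF server components - the provider derives from EFContextProvider and overrides the BeforeSaveEntities method mentioned above; the Order type and the validation rule are invented for illustration, and namespaces vary between Breeze versions:

using System;
using System.Collections.Generic;
using Breeze.ContextProvider;      // or Breeze.WebApi in older versions
using Breeze.ContextProvider.EF6;

public class MyContextProvider : EFContextProvider<MyDbContext>
{
    // Called with the proposed changes before anything is saved; throwing here
    // rejects the save, and the saveMap can also be modified server-side.
    protected override Dictionary<Type, List<EntityInfo>> BeforeSaveEntities(
        Dictionary<Type, List<EntityInfo>> saveMap)
    {
        List<EntityInfo> orders;
        if (saveMap.TryGetValue(typeof(Order), out orders))
        {
            foreach (var info in orders)
            {
                var order = (Order)info.Entity;
                if (string.IsNullOrWhiteSpace(order.Name))
                    throw new InvalidOperationException("Order name is required.");
            }
        }
        return saveMap;
    }
}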
I'm using RIA Services for Silverlight and I'm wondering if there's a way to get the service context an entity is attached to from the entity alone (on the client, i.e. with the RIA Services domain context and entities!). This would help to implement functionality in entities that needs some context (the service context itself being one example) without relying on global (static) storage.
You can create as many custom DomainContext instances as you need. You can cast one to the base class (DomainContext) and store it in global storage for later use. When you use it again, cast it back to the custom DomainContext class, for example: ((CustomDomainContext)instance).SubmitChanges(); or query its entity sets via ((CustomDomainContext)instance).Customers.
I'm not sure if I'm on the right lines, but this is what I'm trying to do: I have a Silverlight application and a WCF service. The Silverlight app "subscribes" to the WCF service using PollingDuplex, and the service can send data to any connected clients, which works.
The service is marked with [ServiceContract(CallbackContract = typeof(IServiceCallback))] and it is single-instanced.
The problem is that another service should be able to call a standard method on this service to pass it data that will get distributed to the connected Silverlight clients, but because of the above settings it requires the caller to use callbacks (and I can't change the other service).
Is there a way to have both types of operations, callback and standard, in the same service, if that makes sense?
Thanks for your time
Yes, it is possible. I guess the CallbackContract parameter will not stop you from using your service as a regular request/response service (though I have not tried it).
But for the same contract you may have to define two endpoints with different bindings: one with PollingDuplexHttpBinding and another with BasicHttpBinding (with Silverlight this is the only other option).
You have to make sure that you call the right operations from the clients using the duplex and basic HTTP bindings.
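A rough, untested sketch of one way to arrange this (all names are placeholders): since a contract that declares a CallbackContract may not be accepted on a non-duplex binding, the plain operation here lives on a second contract implemented by the same single-instance service class, and each contract gets its own endpoint:

using System;
using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract]
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void Receive(string data);
}

[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface IBroadcastService
{
    [OperationContract]
    void Subscribe(); // Silverlight clients register for pushes here
}

[ServiceContract]
public interface IPushService
{
    [OperationContract]
    void PushData(string data); // the other service calls this plain operation
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class BroadcastService : IBroadcastService, IPushService
{
    private readonly List<IClientCallback> _clients = new List<IClientCallback>();

    public void Subscribe()
    {
        _clients.Add(OperationContext.Current.GetCallbackChannel<IClientCallback>());
    }

    public void PushData(string data)
    {
        foreach (var client in _clients)
            client.Receive(data); // fan out to the connected Silverlight clients
    }
}

// Hosting: one endpoint per contract, one duplex binding and one basic binding, e.g.
//   host.AddServiceEndpoint(typeof(IBroadcastService), new PollingDuplexHttpBinding(), "duplex");
//   host.AddServiceEndpoint(typeof(IPushService), new BasicHttpBinding(), "basic");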
I am using a WCF service between the client-side UI (Silverlight 3.0) and the data layer. We are using NHibernate for database access. So please tell me if my understanding below is correct or not:
The UI calls WCF for a Save method (for example).
The WCF service has a Save method in it which actually encapsulates a Save method from the Data Access Object.
The Data Access Object's Save method in turn encapsulates a default Save method of NHibernate which actually saves some business object(s) into the database.
Also, can someone tell me how we pass objects from WCF to the UI (Silverlight 3.0) layer and vice versa? I have read that we use DTOs for that. But how does a DTO work? Do they correspond to the 'Data Contracts' in WCF? If not, is the DTO declared in the WCF (server-side) code and on the client side as well?
No, not quite....
UI calls the client-side proxy method Save
the WCF runtime takes that call and all the parameters being passed in, and serializes them into a message (typically an XML-serialized message)
the WCF runtime sends the serialized message over some kind of transport medium (whatever it is)
on the server side, the WCF runtime takes the incoming message
the message is deserialized, and the appropriate class and method to handle it are identified
typically, a new instance of a service class is instantiated to handle the request
the WCF runtime unpacks the parameters and calls the appropriate method on the service class
the same steps - basically backwards - are done for the response
Important point: the only thing between the client and the server is a serialized message (which could be sent by e-mail or pigeon courier) - there's no other connection - no "remote object call" or anything like that at all.
marc_s mentions the client-side proxies, which can be generated via service references in your Silverlight project. The generated proxies are decent enough and provide an async model for running requests from the Silverlight side; those will look mostly like remote procedure calls.
Another approach is to use the leaner (but maybe more advanced?) channel factory directly. A simple example of that can be found here. Both methods take care of most of the serialization details for you.
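A minimal, unverified sketch of the channel-factory route in Silverlight (the service address, contract and DTO names are placeholders; Silverlight requires the asynchronous Begin/End pattern on the contract):

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetAllMyData(AsyncCallback callback, object state);
    MyDataDto[] EndGetAllMyData(IAsyncResult result);
}

public class MyDataLoader
{
    public void LoadData(Action<MyDataDto[]> onLoaded)
    {
        var factory = new ChannelFactory<IMyService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://example.com/MyService.svc"));

        IMyService channel = factory.CreateChannel();

        channel.BeginGetAllMyData(ar =>
        {
            MyDataDto[] data = channel.EndGetAllMyData(ar);
            // remember to marshal back to the UI thread before touching controls
            onLoaded(data);
        }, null);
    }
}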
I am planning to use WCF (not RIA) in conjunction with Entity Framework 4 and STEs (self-tracking entities). If I understand this correctly, my WCF service should return an entity or a collection of entities (using a List, for example, and not IQueryable) to the client (in my case Silverlight).
The client can then change or update the entity. At this point I believe it is self-tracking? This is where I get a bit confused, as there are a lot of reported problems with STEs not tracking.
Anyway, to update I then just need to send the entity back to my WCF service via another method that does the update. Should I be creating a new ObjectContext every time? In every method?
If I am creating a new ObjectContext every time in every method on my WCF service, then don't I need to re-attach the STE to the ObjectContext?
So basically this alone wouldn't work?
using (var ctx = new MyContext())
{
    ctx.Orders.ApplyChanges(order);
    ctx.SaveChanges();
}
Or should I be creating the ObjectContext once in the constructor of the WCF service, so that the first call and every additional call using the same WCF instance use the same ObjectContext?
I could create and destroy the WCF service in each method call from the client - hence in effect creating a new ObjectContext each time.
I understand that it isn't a good idea to keep the ObjectContext alive for very long.
You are asking several questions so I will try to answer them separately:
Returning IQueryable:
You can't return IQueryable. An IQueryable describes a query which has yet to be executed. When you try to return an IQueryable from a service, it gets executed during serialization of the service response. This usually causes an exception because the ObjectContext is already closed.
Tracking on client:
Yes, STEs can track changes on the client, but only if the client uses STEs! The assembly with the STEs should be shared between the service and the client.
Sharing ObjectContext:
Never share an ObjectContext in a server environment which updates data. Always create a new ObjectContext instance for every call. I described the reasons here.
Attaching STEs:
You don't need to attach the STE; ApplyChanges will do everything for you. Also, if you want to return the order back from your service operation, you should call AcceptChanges on it.
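For illustration only (the service operation, MyContext and Order are placeholder names), a typical STE update operation that also returns the entity might look like this:

public Order UpdateOrder(Order order)
{
    using (var ctx = new MyContext())
    {
        ctx.Orders.ApplyChanges(order); // attaches the STE and replays its tracked changes
        ctx.SaveChanges();
    }

    order.AcceptChanges(); // reset the change tracker before returning it to the client
    return order;
}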
Creating the ObjectContext in the service constructor:
Be aware that WCF has its own rules for how it works with service instances. These rules are based on the InstanceContextMode and the binding used (and you can implement your own rules by implementing IInstanceProvider). For example, if you use BasicHttpBinding, the default instancing is PerCall, which means that WCF creates a new service instance for each request. But if you use NetTcpBinding instead, the default instancing is PerSession and WCF reuses a single service instance for all requests coming from a single client (a single client proxy instance).
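If you want the per-call behavior regardless of the binding's default, you can pin it explicitly (a small sketch; OrderService and IOrderService are placeholder names):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService : IOrderService
{
    // WCF creates a fresh instance of this class for every request, so an
    // ObjectContext created inside each operation (or in the constructor and
    // disposed afterwards) never outlives a single call.
}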
Reusing service proxy on a client:
This also depends on the binding used and the service instancing. When a session-oriented binding is used, the client proxy is tied to a single service instance. Calling methods on that proxy will always execute operations on the same service instance, so the service instance can be stateful (it can contain data shared among calls). This is generally not a good idea, but it is possible. When using a session-oriented connection you have to deal with several problems which can arise (it is more complex). BasicHttpBinding does not allow sessions, so even with a single client proxy, each call is processed by a new service instance.
You can attach an entity to a new ObjectContext; see http://msdn.microsoft.com/en-us/library/bb896271.aspx.
But its state will then be Unchanged.
The way I would do it is (a rough sketch follows the list):
requery the database for the information
compare it with the object being sent in
update the entity from the database with the changes
then do a normal SaveChanges
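A rough sketch of that approach with a plain (non-STE) entity, using made-up Order/OrderId names - requery the row so it is attached, then copy the incoming values onto it:

using (var ctx = new MyContext())
{
    // requery so the current database version is attached to the context
    var existing = ctx.Orders.Single(o => o.OrderId == incoming.OrderId);

    // copy the scalar values from the detached object sent by the client;
    // EF compares them with the attached entity and marks what changed
    ctx.Orders.ApplyCurrentValues(incoming);

    // then a normal save
    ctx.SaveChanges();
}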
Edit
The above was for POCO entities, as pointed out in the comment.
For STEs, you create a new context each time but use ApplyChanges; see http://msdn.microsoft.com/en-us/library/ee789839.aspx