This is probably just a conceptual problem, but I can't seem to find the ideal solution.
I'd like to create a Silverlight client application that uses WCF to control a third-party application via a self-written web service. If there is more than one Silverlight client, all clients should be synchronized, i.e. parameter changes from one client should be propagated to all the others.
I set up a very simple Silverlight GUI that manipulates parameters which are passed to the server (the class implements INotifyPropertyChanged):
public double Height
{
    get { return frameworkElement.Height; }
    set
    {
        if (frameworkElement.Height != value)
        {
            frameworkElement.Height = value;
            OnPropertyChanged("Height", value);
        }
    }
}
OnPropertyChanged is responsible for transferring the data. The WCF service (duplex net.tcp) maintains a list of all clients, and as soon as it receives a data packet (an XElement with a parameter-change description) it forwards that packet to all clients except the one it was received from.
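For illustration, a minimal sketch of what such a duplex contract and forwarding loop could look like (all names here are my own assumptions, not the actual service; a real service would also need locking and removal of dead channels):

using System.Collections.Generic;
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(ISyncCallback))]
public interface ISyncService
{
    [OperationContract(IsOneWay = true)]
    void Register();

    [OperationContract(IsOneWay = true)]
    void SendChange(string changeXml); // the XElement change description, serialized
}

public interface ISyncCallback
{
    [OperationContract(IsOneWay = true)]
    void ReceiveChange(string changeXml);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class SyncService : ISyncService
{
    private readonly List<ISyncCallback> clients = new List<ISyncCallback>();

    public void Register()
    {
        clients.Add(OperationContext.Current.GetCallbackChannel<ISyncCallback>());
    }

    public void SendChange(string changeXml)
    {
        ISyncCallback sender = OperationContext.Current.GetCallbackChannel<ISyncCallback>();
        foreach (ISyncCallback client in clients)
            if (client != sender) // everyone except the originator
                client.ReceiveChange(changeXml);
    }
}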
The client receives the packet, but now I'm not sure what the best way is to set the property internally. If I used "Height" (see above), a new change message would be generated and sent to all other clients, and so on.
Maybe I could use the data field (frameworkElement.Height) itself, or a method - but I'm not sure whether that would cause problems with data binding later on. I also don't want to simply duplicate parts of the property code, to avoid bugs from redundant code.
So what would you recommend?
Thanks!
One common solution here is to use a boolean to track your current state within OnPropertyChanged. It can be set to true when a WCF packet is received, and if it's true, you don't rebroadcast. You then set it to false after setting the property.
When you set the property normally, you'd just leave it false. This will cause it to broadcast normally when set internally, but not when set via the WCF call.
This option works, but it does require care to get right. Since you're putting this logic into a single point, it should be fairly straightforward to get correct.
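A rough sketch of that flag, reusing the property shape from the question (a plain field stands in for frameworkElement.Height, and SendChangeToService stands in for whatever WCF call OnPropertyChanged already makes):

using System.ComponentModel;

public class SyncedElement : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private double height;          // stands in for frameworkElement.Height
    private bool suppressBroadcast; // true while applying a change received over WCF

    public double Height
    {
        get { return height; }
        set
        {
            if (height != value)
            {
                height = value;
                OnPropertyChanged("Height", value);
            }
        }
    }

    // Entry point for change packets arriving from the service.
    public void ApplyRemoteChange(double value)
    {
        suppressBroadcast = true;
        try { Height = value; }              // normal setter: bindings still update
        finally { suppressBroadcast = false; }
    }

    private void OnPropertyChanged(string name, object value)
    {
        var handler = PropertyChanged;       // always notify local bindings
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));

        if (!suppressBroadcast)              // only rebroadcast local edits
            SendChangeToService(name, value);
    }

    private void SendChangeToService(string name, object value)
    {
        // stand-in for the WCF send described in the question
    }
}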
Related
Say there is a requirement that a customer object is loaded from the DB into a Silverlight application, so that the customer details are shown in the UI. We need to detect whether the user changes any data in the UI.
We are listening to property changed notifications from the view model. However, when the notifications are the result of property changes that are part of the loading process, we have to discard them.
class CustomerLoader
{
    Helper helperobj;
    Address addressobj;
    Profile profileobj;

    void LoadFromDb()
    {
        helperobj.Load();
        addressobj.Load();
        profileobj.Load();
        // start listening after the loading has finished
        this.propertychanged += new PropertyChangedEventHandler(handlepropertychanged);
    }
}
The trouble with this is that the inner objects might be calling asynchronous functions which might set properties. So by the time we start listening for property changes, the loading might not have finished.
We need to know when the loading is actually done. As of now, we are asking the developers of the inner object classes to accept a callback parameter, which they should invoke when the function has finished.
Is there any other way to do it?
What you have is nothing but a really classic asynchronous object loading problem.
So yes, the only solution is to ask the developers working on the loading to expose an asynchronous function. Now, you have several options to achieve asynchronicity in Silverlight.
You could either provide a callback as you do now, or use async and await to manage your async tasks, as explained here: http://10rem.net/blog/2012/05/22/using-async-and-await-in-silverlight-5-and-net-4-in-visual-studio-11-with-the-async-targeting-pack
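For instance, if each inner object's Load were wrapped to return a Task (via TaskCompletionSource around the existing callback, as described in the question), the subscription could be deferred until everything has truly finished. A sketch under those assumptions; note the async targeting pack exposes WhenAll as TaskEx.WhenAll, since Silverlight 5's Task type lacks it:

using System.Threading.Tasks;

// Hypothetical wrapper inside each inner object class,
// around the existing callback-based Load.
public Task LoadAsync()
{
    var tcs = new TaskCompletionSource<object>();
    Load(() => tcs.SetResult(null)); // assumes Load accepts a completion callback
    return tcs.Task;
}

async void LoadFromDb()
{
    await TaskEx.WhenAll(helperobj.LoadAsync(),
                         addressobj.LoadAsync(),
                         profileobj.LoadAsync());

    // Loading has genuinely finished; safe to start listening now.
    this.propertychanged += new PropertyChangedEventHandler(handlepropertychanged);
}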
I have a Silverlight RIA app where I share the models and data access between the MVC web app and the Silverlight app using compiler directives. On the server, to see what context I am running under, I check whether the ChangeSet object is non-null (meaning I am running under RIA rather than MVC). Everything works all right, but I had problems with the default code generated for the domain service methods.
Let's say I had a Person entity, who belonged to certain Groups (Group entity). The Person object has a collection of Groups which I add or remove. After making the changes, the SL app would call the server to persist the changes. What I noticed happening is that the group entity records would be inserted first. That's fine, since I'm modifying an existing person. However, since each Group entity also has a reference to the existing person, calling AddObject would mark the whole graph - including the person I'm trying to modify - as Added. Then, when the Update statement is called, the default generated code would try to Attach the person, which now has a state of Added, to the context, with not-so-hilarious results.
When I make the original call for an entity or set of entities in a query, all of the EntityKeys for the entities are filled in. Once on the client, the EntityKey is filled in for each object. When the entity returns from the client to be updated on the server, the EntityKey is null. I created a new RIA Services project and verified that this is the case. I'm running RIA Services SP1 and I am not using composition. I kind of understand the EntityKey problem - the change tracking is done in two separate contexts. EF doesn't know about the change tracking done on the SL side. However, it IS passing back the object graph, including related entities, so using AddObject is a problem unless I check the database for the existence of an object with the same key first.
I have code that works. I don't know how WELL it works but I'm doing some further testing today to see what's going on. Here it is:
/// <summary>
/// Updates an existing object.
/// </summary>
/// <typeparam name="TBusinessObject"></typeparam>
/// <param name="obj"></param>
protected void Update<TBusinessObject>(TBusinessObject obj) where TBusinessObject : EntityObject
{
    if (this.ChangeSet != null)
    {
        ObjectStateManager objectStateManager = ObjectContext.ObjectStateManager;
        ObjectSet<TBusinessObject> entitySet = GetEntitySet<TBusinessObject>();
        string setName = entitySet.EntitySet.Name;
        EntityKey key = ObjectContext.CreateEntityKey(setName, obj);
        object dbEntity;
        if (ObjectContext.TryGetObjectByKey(key, out dbEntity) && obj.EntityState == System.Data.EntityState.Detached)
        {
            // An object with the same key exists in the DB, and the entity passed
            // is marked as detached.
            // Solution: mark the object as modified; any child objects need to
            // be marked as Unchanged as long as there is no domain operation.
            ObjectContext.ApplyCurrentValues(setName, obj);
        }
        else if (dbEntity != null)
        {
            // In this case, TryGetObjectByKey said it failed, but the resulting object is
            // filled in, leading me to believe that it did in fact work.
            entitySet.Detach(obj); // Detach the entity
            try
            {
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes to the entity in the DB
            }
            catch (Exception)
            {
                entitySet.Attach(obj); // Re-attach the entity
                ObjectContext.ApplyCurrentValues(setName, obj); // Apply the changes again
            }
        }
        else
        {
            // Add it..? Update must have been called mistakenly.
            entitySet.AddObject(obj);
        }
    }
    else
    {
        DirectInsertUpdate<TBusinessObject>(obj);
    }
}
Quick walkthrough: If the ChangeSet is null, I'm not under the RIA context, and therefore can call a different method to handle the insert/update and save immediately. That works fine as far as I can tell. For RIA, I generate a key, and see if it exists in the database. If it does and the object I am working with is detached, I apply those values; otherwise, I force detach and apply the values, which works around the added state from any previous Insert calls.
Is there a better way of doing this? I feel like I'm doing way too much work here.
In this kind of a case, where you're adding Group entities to Person.Groups, I would think of just saving the Person and expecting RIA to handle the Groups for me.
But let's take a step back: how are you trying to persist your changes? You shouldn't be saving/updating entities one by one. All you have to do is call DomainContext.SubmitChanges and all your changes should be persisted.
I work with pretty complicated projects and I seldom ever have to touch add/update code.
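For reference, the usual shape on the client is a single unit of work; a sketch with illustrative entity and set names:

// Assumes the person is already loaded into the context.
var person = context.Persons.First(p => p.PersonId == personId);
person.Groups.Add(newGroup);

// One round trip; RIA computes the change set (added Groups,
// modified Person) and replays it against the domain service.
context.SubmitChanges(op =>
{
    if (op.HasError)
    {
        // Inspect op.EntitiesInError here, then mark it handled.
        op.MarkErrorAsHandled();
    }
}, null);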
This question has been around with no solid answer, so I'll tell you what I did... which is nothing. That's how I handled it in RIA services, using the code above, since I was sharing the RIA client model and the server model.
After working with RIA services for a year and a half, I'm in the camp that believes that RIA services is good for working with smaller, less complex apps. If you can use [Composite] for your entities, which I couldn't for many of my entities, then you're fine.
RIA Services can make throwing together small applications where you want to use the entity from EF really quick, but if you want to use POCOs or you foresee your application getting complex in the future, I would stick with building POCOs on the service end, passing those through regular WCF, and sharing behaviors by making your POCOs partial classes and sharing the behavior code with the client. When I tried to create models that work the same on the client and the server, I had to write a ridiculous amount of plumbing code to make it work.
It definitely IS possible to do (I've done it), but there are a lot of hoops you must jump through for everything to work well. I never fully accounted for things like my shared model pre-loading lists for use on the client: the server didn't need these preloaded every time, which slowed down the loading of the web page unnecessarily, and I countered by writing hacky method calls that I then had to adopt on the client. The technique I chose to use definitely had its issues.
I have added properties to a client-side entity that is generated by the RIA Services tooling.
I'm doing this by creating a new file containing a partial class definition.
Through the UI, some changes are made to various properties of an instance of this class. The problem comes when I call the DomainContext's SubmitChanges().
It seems that the changed object is sent to the server (that's good), but then something else must be happening, because my object's client-side properties are being reset.
How should I preserve the local data so that it survives from one SubmitChanges to the next?
This is a known problem with WCF RIA Services. You would get the same problem if you ever try to refresh the entity with a new load. If null is not a valid value for your property, check in the property's setter whether the incoming value is null, and if it is, ignore the set.
If your property is an integer, change it to an int? so that you can get a null back instead of a 0.
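A sketch of that guard in the shared partial class (the entity and property names are just examples; RaisePropertyChanged is the protected helper on the RIA Entity base class):

// Client-only property on the generated entity's partial class.
public partial class Person
{
    private int? localRank;

    public int? LocalRank
    {
        get { return localRank; }
        set
        {
            // SubmitChanges/refresh round-trips hand back null for
            // client-only state; ignore those so local data survives.
            if (value == null)
                return;

            if (localRank != value)
            {
                localRank = value;
                RaisePropertyChanged("LocalRank");
            }
        }
    }
}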
I guess you see this behavior (the reset of the client-side object properties) after the SubmitChanges response. This is normal and I wouldn't change it.
With Fiddler and the WCF binary inspector, take a look at the response: the server updates the state of the client-side objects after the SubmitChanges call. Do the updated objects look empty?
I have to use functionality that lives in another application domain. The result should be displayed in a user control.
I have something like this:
var instance = domain.CreateInstanceFromAndUnwrap(...);
instance.Foo(myWpfUserControl as ICallback);

Foo(ICallback itf)
{
    itf.SetData("...");
}

WpfUserControl.SetData(string data)
{
    if (!Dispatcher.CheckAccess())
        Dispatcher.Invoke(...)
    ...
}
I had to put the [Serializable] attribute on the WpfUserControl class and implement a serialization constructor as well as the ISerializable interface, but now I receive an exception:
The calling thread must be STA, because many UI components require this
that is raised from the UserControl() constructor.
What shall I do to avoid this?
Thank you in advance!
==============================
Solution
As @Al noticed, my user control would have to be serialized when it comes to cross-application-domain calls. Now I pass a proxy that implements the ICallback interface. The proxy is marked with the Serializable attribute.
The proxy implementation should have absolutely no knowledge of the user control, as otherwise there would again be an attempt to deserialize the user control instance. Abstracting the proxy from the user control via an interface didn't help; when I tried to pass an interface (implemented by the user control) to the proxy, the same exception occurred.
Finally, I decoupled the proxy and the user control with a queue/semaphore. The queue is monitored by a worker thread that delegates calls to the user control.
P.S. This queue should inherit from MarshalByRefObject.
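A sketch of that decoupling (ICallback is the interface from the question; everything else, including the use of .NET 4's BlockingCollection, is my own assumption):

using System;
using System.Collections.Concurrent;

public class CallbackProxy : MarshalByRefObject, ICallback
{
    // A reference (not a serialized copy) of this object crosses the
    // AppDomain boundary because it derives from MarshalByRefObject.
    private readonly BlockingCollection<string> queue = new BlockingCollection<string>();

    // Called from the other AppDomain; only the string payload is marshaled.
    public void SetData(string data)
    {
        queue.Add(data);
    }

    // Run this on a worker thread in the UI AppDomain; it forwards each
    // item to the user control, e.g. via Dispatcher.BeginInvoke.
    public void Pump(Action<string> deliverToControl)
    {
        foreach (string item in queue.GetConsumingEnumerable())
            deliverToControl(item);
    }
}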
If the exception is coming from the constructor, it means that you're not creating this control instance on the UI thread. This can be fine, but you have to make sure the thread is an STA thread by calling .SetApartmentState(ApartmentState.STA) on the thread object before the thread is started.
This also means you have to have access to the thread object before it is started, so you can't do this on a thread-pool thread.
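For example, a sketch of spinning up such a thread (the control type and what you do with it are placeholders):

using System.Threading;

Thread thread = new Thread(() =>
{
    var control = new WpfUserControl();        // created on the STA thread
    // ... hand the control whatever work it needs ...
    System.Windows.Threading.Dispatcher.Run(); // keep this thread's dispatcher pumping
});
thread.SetApartmentState(ApartmentState.STA);  // must happen before Start
thread.IsBackground = true;
thread.Start();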
The best way to avoid the problem, though, is probably to create the control on the main UI thread and then assign the Text value using the Dispatcher (or a Task on the UI scheduler). That way you'll also avoid problems if the main thread needs to set, get or bind to the control, as that would cause a cross-thread exception if the control was created on another thread.
I'd advise against serializing the control this way if possible. Doing that will generate a new object that is not attached to any panels or such, and the original control would not be updated. Sadly, you can't inherit from MarshalByRefObject, which would eliminate the serialization since it would only pass a reference to the other domain.
If you can, call Foo separately and then pass the result to SetData in the original AppDomain.
Probably a long question for a simple solution, but here goes...
I have a custom-made Silverlight control for selecting multiple files and sending them to the server. It sends the files to a generic handler (FileReciever.ashx) using the OpenWriteAsync method of a WebClient control.
Basically, the silverlight code does something like this for each file:
WebClient client = new WebClient();
client.OpenWriteCompleted += (sender, e) =>
{
    PushData(data, e.Result);
    e.Result.Close();
    data.Close();
};
client.OpenWriteAsync(handlerUri);
The server side handler simply reads the incoming stream, and then does some more processing with the resulting byte array.
THE PROBLEM is that the client-side OpenWriteCompleted fires as soon as all the data has been sent over the wire. My code will then continue with the next file. What I really want is to wait until the ASHX handler has finished all its processing of that request. How do I do that? Is there any wait mechanism on WebClient? Any callback I can do on the HttpContext in the handler? Should I use some other kind of transfer technique? Please advise!
The same question has been asked in the Silverlight forums. The Microsoft-endorsed answer was that you can't do that with WebClient and OpenWriteAsync. You need to use either UploadStringAsync or an HttpWebRequest.
Hrm, maybe a simple solution could be to tag the URL with a GUID (the GUID being unique per file, or per transfer, whatever makes sense in your situation). Then you can have another simple web service that is capable of checking on the status of the first service, based on that GUID, and have your Silverlight client query this new service for the processing status (by passing it the GUID of the earlier transfer).
I'm assuming that you're concerned that the data being returned from the handler is taking a long time to transfer and the server is not being utilized during that time. There isn't a way to tell when the server is done processing, so I don't think you can do this without changing your architecture.
I would have your handler return only an identifier of some sort (like a GUID or an int) that can be used to retrieve the result of the handler in a later request. So the page would call the handler, the handler would store the result and return the identifier, and the page would then call a second handler to fetch the result of the first call. This would keep your server in use while your data was transferring.
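A rough sketch of that two-request shape on the Silverlight side (the status URL, the "done" convention, and the continuation methods are all made up):

// 1) Upload: tag the request with a client-generated id.
Guid transferId = Guid.NewGuid();
client.OpenWriteAsync(new Uri(handlerUri + "?id=" + transferId));

// 2) Poll a second handler until the server reports that it has
//    finished processing that id.
var statusClient = new WebClient();
statusClient.DownloadStringCompleted += (s, e) =>
{
    if (!e.Cancelled && e.Error == null && e.Result == "done")
        StartNextFile();          // hypothetical continuation
    else
        RetryStatusCheckLater();  // hypothetical, e.g. via a DispatcherTimer
};
statusClient.DownloadStringAsync(new Uri(statusUri + "?id=" + transferId));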
Or you can probably do it with JavaScript (jQuery)... if you don't mind using JavaScript that is.
If the files are not very big, and it is feasible to keep each of them in memory, an ugly yet effective solution is converting them to strings and sending them using the UploadStringAsync method.
Avoid this approach if file size is unbounded, but if you know they will be relatively small, it is a workable option.
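A sketch, assuming the file already fits in a byte array (GetFileBytes and StartNextFile are hypothetical):

// Encode the bytes so they survive the string-based upload; the handler
// would call Convert.FromBase64String on the other end.
byte[] bytes = GetFileBytes();
string payload = Convert.ToBase64String(bytes);

var client = new WebClient();
client.UploadStringCompleted += (s, e) =>
{
    // Unlike OpenWriteCompleted, this fires only after the handler's
    // response comes back, so server-side processing is finished here.
    if (e.Error == null)
        StartNextFile(); // hypothetical continuation
};
client.UploadStringAsync(handlerUri, payload);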