Presume you have an email folder with messages in it. Each message has a Body property that needs to be loaded asynchronously and should notify you when loading completes. How would you approach this?
1 - Message.LoadBody() + event Message.BodyLoadComplete
2 - Message.LoadBody(Action completeDelegate)
For the sake of completeness WPF and Prism are involved.
Thanks!
Edit:
Message will be a UI entity that wraps an IMessage interface (which is not UI-ready, i.e. no INPC). The reason I'm asking is that we have to settle on an interface between the UI and business layers: IMessage. The business layer will be using an IMAP library that already has some async pattern, but we don't want to depend too much on any implementation, which is why I'm trying to figure out the best interface.
If you're using .NET 4, I'd use:
Task<string> LoadBodyAsync()
(You might want to implement this using TaskCompletionSource<TResult>.)
The caller can then add continuations etc however they wish... and most importantly, in the brave new world of .NET 4.5 and C# 5, this will work seamlessly with the async/await features.
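A minimal sketch of what that might look like, assuming the UI-side Message wraps an underlying business-layer object exposing some callback-based load (the _inner field and BeginLoadBody callbacks here are illustrative, not a real library API):

public Task<string> LoadBodyAsync()
{
    var tcs = new TaskCompletionSource<string>();

    // Hypothetical callback pair on the wrapped business-layer object;
    // substitute whatever async pattern your IMAP library actually exposes.
    _inner.BeginLoadBody(
        body => tcs.SetResult(body),        // success: complete the task with the body text
        error => tcs.SetException(error));  // failure: fault the task so the caller sees it

    return tcs.Task;
}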
Of your two options:
Message.LoadBody() // Plus an event, Message.BodyLoadComplete
// or ...
Message.LoadBody(Action completeDelegate)
The event option is more flexible and may sometimes be less painful to use. If you don't care when or if LoadBody completes, then you aren't forced to provide a fake callback. And you can bind the completion event to multiple event handlers, which might sometimes be useful (e.g. if multiple UI controls need to be updated).
Both your solutions are non-typical, though. The typical "old" way to do this is for LoadBody to be split into BeginLoadBody and EndLoadBody, and give the user an IAsyncResult. See this article. The typical "new" way to do it is outlined in Jon Skeet's answer.
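For reference, that APM shape on the interface would look roughly like this (signatures only, no implementation implied):

IAsyncResult BeginLoadBody(AsyncCallback callback, object state);
string EndLoadBody(IAsyncResult asyncResult);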
If you're using Prism you should consider using the EventAggregator and publishing a message to indicate a mail has loaded. This will allow you to easily have multiple loosely-coupled subscribers for this "event".
Using the EventAggregator is an elegant solution for publishing events, and it leads to a much cleaner, more decoupled architecture that is easier to extend. For instance, if you wish to add a new feature to email loading, such as progress indication, you can simply subscribe to the EmailLoaded message and you're done; you don't have to tightly couple your new component to emails via an event or callback, and the two never need to know about each other.
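As a sketch against Prism 4's EventAggregator (the event class and handler names are made up for illustration):

// The event carries the loaded message as its payload.
public class EmailLoadedEvent : CompositePresentationEvent<IMessage> { }

// Publisher, wherever the body finishes loading:
eventAggregator.GetEvent<EmailLoadedEvent>().Publish(message);

// Subscriber, e.g. a progress indicator or another view model:
eventAggregator.GetEvent<EmailLoadedEvent>()
               .Subscribe(OnEmailLoaded, ThreadOption.UIThread);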
Related
An application that I develop was initially built with Flux.
However, over time the application became harder to maintain. There was a very large number of actions. And usually one action is only listened to in one place (store).
Actions make it possible to avoid calling every interested store's handler from one place. So instead of this:
store.handleMyAction('ha')
another.handleMyAction('ha')
yetAnotherStore.handleMyAction('ha')
I can write:
actions.myAction('ha')
But I never use actions that way. I'm almost sure this isn't just an issue with my application.
Every time I call an action, I could have just called store.onSmthHappen instead of action.smthHappen.
Of course there are exceptions, when one action is processed in several places. But when that happens it feels like something went wrong.
What if, instead of calling actions, I call methods on the stores directly? Will my application become less flexible? No! It mostly amounts to a rename (with rare exceptions). But at what cost do the actions come! It becomes much harder to understand what is happening in the application with all these actions. Each time I track the processing of a complex action, I have to find the stores where it is handled, then find the logic in those stores that calls another action, and so on.
Now I come to my solution:
There are controllers that call methods on the stores directly. All the logic for how to handle an action lives in the store. The stores also call the Web API (usually one store corresponds to one Web API). If an event needs to be processed by several stores (usually sequentially), the controller handles this by orchestrating the promises returned from the stores. Commonly used sequences live in the controller's own private methods, and controller methods can use them as simple building blocks, so I never end up duplicating code.
Controller methods do not return anything (one-way flow).
In fact, the controller does not contain the logic for how to process the data; it only specifies where the data goes and in what sequence.
You can see almost the complete picture of the data processing in the store. There is no logic in the stores about how to interact with other stores (with Flux it's like a many-to-many relation, just mediated through actions). Now the store is a highly cohesive module that is responsible only for the logic of its domain model (collection).
The main advantages of Flux are, in my opinion, still here.
As a result, there are stores, which are the single source of truth for the data. Components can subscribe to the stores, and they call the same methods as before, but through the controller instead of through actions. Interaction with React did not change at all.
Also, event processing becomes much more obvious. Now I can just look at the handler in the controller and everything becomes clear, and it's much easier to debug.
The question is:
Why were actions created in Flux? And what advantages do they have that I might have missed?
Actions were introduced to capture a certain interaction on the view or from the server, which can then be dispatched to as many different stores as you like. The developers explained this with the example of Facebook chat.
There is a messageStore and a threadStore. When an action, e.g. messagePost, was emitted, it got dispatched to both stores, each doing different work to update its attributes: the threadStore increased the number of unread messages, and the messageStore added the new message to its message array.
So it's basically channeling one action to perform data changes in more than one store.
I had the same questions and thought process as you, and now I started using Flummox which makes it cleaner to have the Flux architecture.
I define my Actions in the same file where I define my Store, and that's close enough. I can still subscribe to the dispatcher to log events so I can see all actions being called, and I have the option to create multi-store Actions if needed.
It comes with a nice FluxComponent that lets you wrap all store-related code in a single place so its children are stateless components that get updated on store changes, like
<FluxComponent connectToStores={['storeA', 'storeB']}>
<InnerComponent />
</FluxComponent>
Keeping the Flux architecture (it uses Facebook's Flux behind the scenes) will hopefully make it easy to use other technologies like GraphQL.
I have recently looked at Backbone.Marionette.
It mentions the Event Aggregator in a way that seems to be something new.
https://github.com/toekneestuck/edgefonts-preview/blob/master/components/backbone.marionette/docs/marionette.eventaggregator.md
However, I don't really see the added benefit over normal events. Doesn't the following code give you the same thing?
var dispatcher = _.clone(Backbone.Events)
These are almost exactly the same thing (check the code).
The difference is that EventAggregator is a "class" which can be instantiated (whereas Backbone.Events acts more as a mixin).
Being a "class", EventAggregator can be extended:
EventAggregator.extend({ /* your new methods */ });
The difference is really small, but it goes a long way toward reducing the boilerplate needed to create an event hub with custom prototype methods, and toward extending it in a sub-EventAggregator.
Referring URL: http://davidsulc.com/blog/2012/04/22/a-simple-backbone-marionette-tutorial-part-2/
I am new to Backbone and the event aggregator. Can you please explain the reason for using the following lines of code?
this.model.addVote();
MyApp.vent.trigger("rank:down", this.model);
It seems another possibility could be:
this.model.addVote();
this.model.rankDown();
Or the other way:
MyApp.vent.trigger("addVote", this.model);
Kindly explain, thanks.
Running Sample: http://jsfiddle.net/Irfanmunir/966pG/29/
Events in general are useful for decoupling objects from each other, while still allowing them to communicate. The event aggregator pattern (or pub/sub pattern) at an application level allows further decoupling by having a 3rd party in the mix: publisher, aggregator, subscriber. This way, neither the publisher nor subscriber have to know about each other. They each know about the event aggregator only.
I wrote up a small article on this a while back:
http://lostechies.com/derickbailey/2012/04/03/revisiting-the-backbone-event-aggregator-lessons-learned/
In this case, the events are used because the model needs to be manipulated in the context of the collection that it belongs to. Rather than going through the model to get to the collection (which it might not be directly assigned to... models are not required to be part of a collection), it's easier and more flexible to raise this event and have it handled somewhere more appropriate.
I am using WCF services asynchronously in a WPF application, so I have a class that holds all the web service calls. The view models call a method on this class, which in turn calls the web service.
So the view Model code looks like this:
WebServiceAgent.GetProductByID(SelectedProductID, (s, e)=>{States = e.Result;});
And the WebService agent looks like:
public static void GetProductByID(int ProductID, EventHandler<GetProductListCompletedEventArgs> callback)
{
    Client.GetProductByIDCompleted += callback;
    Client.GetProductByIDAsync(ProductID);
}
Is this a good approach? I am using the MVVM Light Toolkit, so the view model is static and stays alive for the lifetime of the application. But each time the view model calls this WebServiceAgent, I think I am registering an event handler, and that handler is never unregistered.
Is this a problem? Let's say the view model makes this call 20-30 times. Am I introducing some kind of memory leak?
Some helpful information, based on mistakes I've made myself:
The Client object seems to be re-used all the time. If you don't unregister event handlers, they will stack up, fire again when future invocations of the same operations finish, and give you unpredictable results.
The States = e.Result statement is executed on the event handler's thread, which is not the UI dispatcher thread. When updating lists or complex properties this will cause problems.
In general not unregistering event handlers when they are invoked is a bad idea as it will indeed cause hard to find memory leaks.
You should probably refactor to create or re-use a clean client, wrap the viewmodel callback inside another callback that will take care of unregistering itself, cleaning up the client, and invoking the viewmodel's callback on the main dispatcher thread.
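Roughly, the wrapped callback could look like this (a sketch only; the operation names come from the question, while ProductServiceClient and the rest are illustrative):

public static void GetProductByID(int productId, EventHandler<GetProductListCompletedEventArgs> callback)
{
    var client = new ProductServiceClient();              // fresh per-call client (hypothetical proxy type)
    EventHandler<GetProductListCompletedEventArgs> handler = null;
    handler = (s, e) =>
    {
        client.GetProductByIDCompleted -= handler;         // unregister ourselves immediately
        // marshal the view model's callback onto the UI dispatcher thread
        Application.Current.Dispatcher.BeginInvoke(new Action(() => callback(s, e)));
        if (e.Error == null) client.Close(); else client.Abort();   // clean up the per-call client
    };
    client.GetProductByIDCompleted += handler;
    client.GetProductByIDAsync(productId);
}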
If you think all this is tedious, check out http://blogs.msdn.com/b/csharpfaq/archive/2010/10/28/async.aspx and http://msdn.microsoft.com/en-us/vstudio/async.aspx. In the next version of C# an async keyword will be introduced to make this all easier. A CTP is available already.
Event handlers are death traps and you will leak them if you do not "unsubscribe" with "-=".
One way to avoid this is to use Rx (Reactive Extensions), which will manage your event subscriptions for you. Take a look at http://msdn.microsoft.com/en-us/data/gg577609 and specifically at creating an Observable using Observable.FromEvent or FromAsync: http://rxwiki.wikidot.com/101samples.
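With the proxy from the question, a one-shot subscription might look roughly like this (a sketch; Take(1) disposes the event subscription after the first result):

Observable.FromEventPattern<GetProductListCompletedEventArgs>(
        h => Client.GetProductByIDCompleted += h,
        h => Client.GetProductByIDCompleted -= h)
    .Take(1)                       // unsubscribes automatically after the first event
    .ObserveOnDispatcher()         // marshal back onto the WPF dispatcher thread
    .Subscribe(e => States = e.EventArgs.Result);

Client.GetProductByIDAsync(SelectedProductID);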
This is unfortunately not a good approach.
I learned this the hard way in Silverlight.
Your WebServiceAgent is probably a long-lived object, whereas the model or view is probably short-lived.
Events hold references, and in this case they give the web service agent and the WCF client a reference to the model. A long-lived object holding a reference to a short-lived object means the short-lived object will not be collected, so you end up with a memory leak.
As Pieter-Bias said, the async functionality will make this easier.
Have you looked at RIA Services? This is the exact problem that RIA Services was designed to solve.
Yes, the event handlers are basically going to cause a leak unless removed. To get the near single-line equivalent of what you're expressing in your code, and to remove handlers, you're going to need an instance of some sort of class that represents the full lifecycle of the call and does some housekeeping.
What I've done is create a Caller<TResult> class that uses an underlying WCF client proxy following this basic pattern:
create a Caller instance around an existing or new client proxy (the proxy's lifecycle is outside the scope of the call to be made, so you can use a new short-lived one or an existing long-lived one).
use one of Caller's various CallAsync<TArg [,...]> overloads to specify the async method to call and the intended callback to call upon completion. This method will choose the async method that also takes a state parameter. The state parameter will be the Caller instance itself.
I say intended because the real handler that will be wired up will do a bit more housekeeping. The real callback is what will be called at the end of the async call, and will
check that ReferenceEquals(e.UserState, this) is true in your real handler
if not true, immediately return (the event was not intended to be the result of this particular call and should be ignored; this is very important if your proxy is long lived)
otherwise, immediately remove the real handler
call your intended, actual callback with e.Result
Modify Caller's real handler as needed to execute the intended callback on the right thread (more important for WPF than Silverlight)
The above implementation should also have separate handlers for cases where e.Error is non-null or e.Cancelled is true. This gives you the advantage of not checking these cases in your intended callback. Perhaps your overloads take in optional handlers for those cases.
At any rate, you end up cleaning up handlers aggressively at the expense of some per-call wiring. It's a bit expensive per-call, but with proper optimization ends up being far less expensive than the over-the-wire WCF call anyway.
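Written against a concrete proxy instead of the generic class, the housekeeping steps above amount to something like this (a sketch; the proxy and callback names are borrowed from the usage example below, and the event-args type name simply follows the generated-proxy convention):

// proxy is a generated AnalyzerServiceClient; listOfStuff is the argument to send
var token = new object();   // Caller<TResult> uses itself as this state token
EventHandler<AnalyzeSomeThingsCompletedEventArgs> handler = null;
handler = (s, e) =>
{
    if (!ReferenceEquals(e.UserState, token)) return;    // not our call: ignore it
    proxy.AnalyzeSomeThingsCompleted -= handler;          // immediately remove the real handler
    if (e.Error != null) { TellUserWhatWentWrong(e.Error); return; }
    if (e.Cancelled) return;
    HandleAnalyzedStuff(e.Result);                        // the intended, actual callback
};
proxy.AnalyzeSomeThingsCompleted += handler;
proxy.AnalyzeSomeThingsAsync(listOfStuff, token);         // the async overload that takes a userState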
Here's an example of a call using the class (you'll note I use method groups in many cases to increase readability, though HandleStuff could have been result => use result). The first method group is important because CallAsync gets the owner of that delegate (i.e. the service instance), which is needed to call the method; alternatively, the service could be passed in as a separate parameter.
Caller<AnalysisResult>.CallAsync(
    // line below could also be longLivedAnalyzer.AnalyzeSomeThingsAsync
    new AnalyzerServiceClient().AnalyzeSomeThingsAsync,
    listOfStuff,
    HandleAnalyzedStuff,
    // optional handlers for error or cancelled would go here
    onFailure: TellUserWhatWentWrong);
Loosely coupled communication between view models is a nice concept.
I have used the Prism EventAggregator as well as the MVVM Light Toolkit's Messenger.
As the project grows, I get a lot of messages going back and forth.
What is the best practice for keeping track of my messages? Naming conventions? Patterns?
etc...
How do you keep track?
I've found that there is a lot of value in providing a "Messages" namespace that contains your strongly typed messages. Keep in mind that well-defined messages will be more like contracts/DTOs - you want to maintain as much decoupling as possible, so dependencies should be kept to a minimum, otherwise the senders and receivers will both rely on common libraries. Sometimes this is necessary due to the nature of the message.
I think you'll also find that many messages follow a particular pattern. Two common message patterns are what I'll call the Action and the Command. An Action is more of a "verb" plus a "subject".
For example, you might have a MessageAction<T> that exposes a T Target, where the action is an enumeration indicating update, select, add, delete, etc. That's common, a generic message can wrap it, and your handlers listen for the closed generic types they are interested in.
The Command is an Action that originates from somewhere and then applies an action to a target. For example, maybe you are adding a role to a user. In that case, your item of interest is the role, your target is the user, and your action is adding it. That can be a CommandAction.
Another common way to organize messages would be to implement a common interface or base class. It then becomes trivial to search for implementors in the project to determine where messages are being used.
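As a sketch (all names here are illustrative, not from any particular library):

public interface IAppMessage { }                  // marker interface: trivial to search for implementors

public enum MessageVerb { Add, Update, Delete, Select }

// Generic "Action" message: a verb applied to a subject of type T.
public class ActionMessage<T> : IAppMessage
{
    public MessageVerb Action { get; set; }
    public T Target { get; set; }
}

// Handlers then listen for the closed generic they care about, e.g. with MVVM Light:
// Messenger.Default.Register<ActionMessage<Customer>>(this, OnCustomerChanged);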
Good question. Here are the solutions I've been using, but there are probably a lot of alternatives, and I haven't found any guidance on this.
One way is to define specific events that extend basic events: the typical example when using Prism is an extension of CompositePresentationEvent.
However, when you have a large number of messages, it's sometimes useful to define what a message actually is. Usually it can be defined by a message header, some message attributes, and the actual content. Then you can put these messages onto your message bus.
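For example (purely an illustrative shape):

public class MessageEnvelope
{
    public string Header { get; set; }                           // identifies the message type or topic
    public IDictionary<string, string> Attributes { get; set; }  // metadata: sender, timestamp, correlation id, ...
    public object Content { get; set; }                          // the actual payload
}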