Does it make sense to store/cache the TaskScheduler returned from TaskScheduler.FromCurrentSynchronizationContext while loading a WPF app and use it everywhere from then on? Are there any drawbacks to this kind of usage?
What I mean by caching is storing a reference to the TaskScheduler in a singleton and making it available to all parts of my app, probably with the help of a DI/IoC container or, worst case, in a bare ol' singleton.
As Drew says, there's no performance benefit to doing this. But there might be other reasons to hold onto a TaskScheduler.
One reason you might want to do it is that by the time you need a TaskScheduler it may be too late to call FromCurrentSynchronizationContext because you may no longer be able to be certain that you are in the right context. (E.g., perhaps you can guarantee that your constructor runs in the right context, but you have no guarantees about when any of the other methods of your class are called.)
Since the only way to obtain a TaskScheduler for a SynchronizationContext is through the FromCurrentSynchronizationContext method, you would need to store a reference to the TaskScheduler itself, rather than just grabbing SynchronizationContext.Current during your constructor. But I'd probably avoid calling this "caching" because that word implies that you're doing it for performance reasons, when in fact you'd be doing it out of necessity.
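To illustrate the point (a minimal sketch; the class and method names here are hypothetical, not from the question): capture the scheduler in the constructor, where you can still guarantee you're on the right context, and use it later from methods that may be called from anywhere.

```csharp
using System.Threading.Tasks;

public class ThumbnailLoader
{
    // Captured in the constructor, which we know runs on the UI thread.
    // By the time Load is called we may be on any thread, so calling
    // FromCurrentSynchronizationContext there would be too late.
    private readonly TaskScheduler _uiScheduler;

    public ThumbnailLoader()
    {
        _uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();
    }

    public void Load(string path)
    {
        Task.Factory.StartNew(() => Decode(path))
            .ContinueWith(t => DisplayOnUi(t.Result), _uiScheduler);
    }

    private static byte[] Decode(string path)
    {
        return new byte[0]; // placeholder for background work
    }

    private void DisplayOnUi(byte[] image)
    {
        // safe to touch controls here: runs via the captured scheduler
    }
}
```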
Another possibility is that you might have code that has no business knowing which particular TaskScheduler it is using, but which still needs one because it fires off new tasks. (If you start new tasks, you're choosing a scheduler even if you don't realise it. If you don't explicitly choose which scheduler to use, you'll get the default one, which isn't always the right thing to do.) I've written code where this is the case: methods that accept a TaskScheduler object as an argument and use that one. So this is another scenario where you might want to keep hold of a reference to a scheduler. (In my case I wanted certain IO operations to happen on a particular thread, so I was using a custom scheduler.)
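The shape of that pattern looks something like this (a sketch; `FileWorker` and its method are invented names): the caller decides which scheduler the work runs on, so the method itself never assumes one.

```csharp
using System.Threading;
using System.Threading.Tasks;

// Hypothetical example: the caller decides where the work runs,
// so this code never assumes a particular scheduler.
public static class FileWorker
{
    public static Task ProcessAsync(string path, TaskScheduler ioScheduler)
    {
        return Task.Factory.StartNew(
            () => { /* read and process the file */ },
            CancellationToken.None,
            TaskCreationOptions.None,
            ioScheduler); // explicit choice instead of TaskScheduler.Default
    }
}
```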
Having said all that, an application-wide singleton doesn't sound like a great idea to me, because it tends to make testing harder. And it also implies that the code grabbing that shared scheduler is making assumptions about which scheduler it should be using, and that might be a code smell.
The underlying implementation of FromCurrentSynchronizationContext just instantiates an instance of an internal class named SynchronizationContextTaskScheduler which is extremely lightweight. All it does is cache the SynchronizationContext it finds when constructed and then the QueueTask implementation simply does a Post to that SynchronizationContext to execute the Task.
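The idea can be sketched in a few lines. This is not the actual BCL source, just a simplified illustration of what such a scheduler has to do:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Simplified sketch of the idea behind the internal
// SynchronizationContextTaskScheduler -- not the actual BCL source.
public sealed class ContextScheduler : TaskScheduler
{
    private readonly SynchronizationContext _context;

    public ContextScheduler()
    {
        // Cache the context found at construction time.
        _context = SynchronizationContext.Current;
    }

    protected override void QueueTask(Task task)
    {
        // Post the task to the captured context for execution.
        _context.Post(_ => TryExecuteTask(task), null);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        // Only run inline if we're already on the right context.
        return SynchronizationContext.Current == _context && TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return null; // debugger support only
    }
}
```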
So, all that said, I would not bother caching these instances at all.
I understand that we encapsulate data to prevent things from being accessed that don't need to be accessed by developers working with my code. However I only program as a hobby and do not release any of code to be used by other people. I still encapsulate, but it mostly just seems to me like I'm just doing it for the sake of good policy and building the habit. So, is there any reason to encapsulate data when I know I am the only one who will be using my code?
Encapsulation is not only about hiding data.
It is also about hiding implementation details.
When those details are hidden, callers are forced to go through the class's defined API, and the class is the only place where its internals can change.
So imagine a situation where you have opened all methods up to any class interested in them, and one of those methods performs some calculation. Then you realize the logic is wrong, or you want to replace it with a more complicated calculation.
In a case like that you may have to change every place across your application that uses it, instead of changing it in just one place: the API you provided.
So don't make everything public; it leads to strong coupling and pain during updates.
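A tiny sketch of the point (the class and its logic are made up): callers go through one public method, so the calculation can change in one place without touching any call site.

```csharp
public class PriceCalculator
{
    // Implementation detail: hidden, so it can change freely.
    private readonly decimal _taxRate = 0.2m;

    // The one public entry point. If the tax logic changes later
    // (tiered rates, rounding rules, ...), only this class changes;
    // none of the call sites do.
    public decimal TotalFor(decimal netPrice)
    {
        return netPrice * (1 + _taxRate);
    }
}
```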
Encapsulation is not only creating "getters" and "setters", but also exposing a sort of API to access the data (if needed).
Encapsulation lets you keep access to the data in one place and lets you manage it in a more "abstract" way, reducing errors and making your code more maintainable.
If your personal projects are simple and small, you can do whatever you feel like in order to produce fast what you need, but bear in mind the consequences ;)
I don't think unnecessary data access can happen only through third-party developers. It can happen through you as well, right? When you allow direct access to data through access rights on variables/properties, whoever is working with them, be it you or someone else, may end up creating bugs by accessing the data directly.
I'm far from new at threading and asynchronous operations, but SL seems more prone to asynchronous issues, particularly ordering of operations, than most frameworks. This is most apparent at startup when you need a few things done (e.g. identity, authorization, cache warming) before others (rendering a UI usable by your audience, presenting user-specific info, etc.).
What specific solution provides the best (or at least a good) strategy for dealing with ordering of operations at startup? To provide context, assume the UI isn't truly usable until the user's role has been determined, and assume several WCF calls need to be made before "general use".
My solutions so far involve selective enablement of UI controls until "ready" but this feels forced and overly sensitive to "readiness" conditions. This also increases coupling, which I'm not fond of (who is?).
One useful aspect of Silverlight startup to remember is that the splash xaml will continue to be displayed until the Application.RootVisual is assigned. In some situations (for example where themes are externally downloaded) it can be better to leave the assignment of the RootVisual until other outstanding async tasks have completed.
Another useful control is the BusyIndicator in the Silverlight Toolkit. Perhaps when you are ready to display some UI but haven't got data to populate the UI you assign the RootVisual using a page that has a BusyIndicator.
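A sketch of the deferred-RootVisual idea (the service proxy and page names are invented; the pattern is the standard Silverlight `Application_Startup` hook): the splash xaml stays on screen until the async startup call completes and RootVisual is finally assigned.

```csharp
// Sketch: defer assigning RootVisual until the startup WCF call finishes.
// The splash xaml keeps showing in the meantime.
// IdentityServiceClient and MainPage are illustrative names.
private void Application_Startup(object sender, StartupEventArgs e)
{
    var identityService = new IdentityServiceClient(); // hypothetical WCF proxy

    identityService.GetCurrentUserCompleted += (s, args) =>
    {
        // Only now do we swap the splash for the real UI.
        this.RootVisual = new MainPage(args.Result);
    };

    identityService.GetCurrentUserAsync();
}
```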
In my opinion:
1st: Render a startup UI that tells the user the application has registered his action and is running (a login window, maybe?).
2nd: Issue the necessary warm-up calls in the background (WCF calls and all that) and let the user know about the progress of the tasks that must finish before the next GUI can be made operable (the main window?).
The order of operations is situation specific, but please be sure to let the user know what is happening if it blocks his input.
Selective enabling of individual controls can look interesting, but the user might go for that function first and find it disabled, so you have to make sure the user knows why, or he will be confused about why it was disabled at startup but works when he comes back ten minutes later. It also depends on the primary function of your program and on what function the disabled control would serve. If the application's main purpose is showing a list of books, it wouldn't be nice to make that list load last.
I'm writing a WPF application using a MVVM pattern and using Prism in selected places for loose coupling, and I'd like to have logging messages shown in a window and written to a file. The subset of messages going each way may not be the same.
I think I should publish a message through the EventAggregator (MS-Prism implementation of observer pattern) and have two objects subscribe: one that updates the LogWindowViewModel and one that logs using the Enterprise Library logger. Is this a good idea or am I duplicating something that's already implemented?
The fact that the log message will be different in each output is the limiting factor.
Extending the block may suffice and defining a CustomTraceListener or ILogFilter may work out for you. This would avoid needing to use the EventAggregator.
It boils down to who has the knowledge of what and where to log. Are the differences driven off values within the logging engine such as severity? Are they instead driven by the consumer of the logging engine and therefore tightly coupled to the class itself? These types of questions will dictate your choice.
Leveraging the extension points in the logging block would be my first choice before having to rely on using the EventAggregator.
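If you do go the EventAggregator route, the shape might look like this (a sketch only: `LogEntry`, the view model, and the wiring method are invented names; `CompositePresentationEvent`, `ThreadOption`, and `Logger.Write` are the standard Prism and Enterprise Library APIs):

```csharp
// One event, two independent subscribers with different concerns.
public class LogEntry
{
    public string Message { get; set; }
    public string Category { get; set; }
}

public class LogMessageEvent : CompositePresentationEvent<LogEntry> { }

public void WireUpLogging(IEventAggregator aggregator)
{
    var evt = aggregator.GetEvent<LogMessageEvent>();

    // Subscriber 1: the log window's view model, marshalled to the UI thread.
    evt.Subscribe(entry => _logWindowViewModel.Append(entry),
                  ThreadOption.UIThread);

    // Subscriber 2: Enterprise Library logging, off the UI thread,
    // with its own filtering configured in the logging block.
    evt.Subscribe(entry => Logger.Write(entry.Message, entry.Category),
                  ThreadOption.BackgroundThread);
}

// Publisher, anywhere in the app:
// aggregator.GetEvent<LogMessageEvent>().Publish(entry);
```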
I think the idea is fine. It doesn't seem like there is much functionality to be duplicated.
I used Common.Logging as the data collector, filter, and distributor for something comparable, and wrote a custom appender for my own processing and UI output.
In my WPF application I need to do an async operation and then update the GUI, and I have to do this many times, at different moments, with different operations. I know two ways to do this: Dispatcher and BackgroundWorker.
Because once I choose it will be hard for me to go back, I ask you: which is better? What are the reasons for choosing one rather than the other?
Thank you!
Pileggi
The main difference between the Dispatcher and other threading methods is that the Dispatcher is not actually multi-threaded. The Dispatcher governs the controls, which need a single thread to function properly; the BeginInvoke method of the Dispatcher queues events for later execution (depending on priority etc.), but still on the same thread.
BackgroundWorker, on the other hand, actually executes the code concurrently, on a separate thread. It is also easier to use than raw threads because it automatically synchronizes (at least I think I remember this correctly) with the main thread of the application, the one responsible for the controls and message queue (the Dispatcher thread in the case of WPF and Silverlight), so there's no need to use Dispatcher.Invoke (or Control.Invoke in WinForms) when updating controls from the background thread, although that may not always be recommended.
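A minimal sketch of that synchronization in action (`FetchReport` and `statusLabel` are placeholders): the completed handler runs back on the UI thread without any Dispatcher call.

```csharp
using System.ComponentModel;

private void StartWork()
{
    var worker = new BackgroundWorker();

    // Runs on a thread-pool thread, off the UI.
    worker.DoWork += (s, e) => e.Result = FetchReport();

    // Raised back on the thread that called RunWorkerAsync (the UI
    // thread here), so no Dispatcher.Invoke is needed for the control.
    worker.RunWorkerCompleted += (s, e) => statusLabel.Content = e.Result;

    worker.RunWorkerAsync();
}
```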
As Reed said, Task Parallel Library is a great alternative option.
Edit: further observations.
As I said above, the Dispatcher isn't really multithreaded; it only gives the illusion of it, because it does run delegates you pass to it at another time. I'd use the Dispatcher only when the code really only deals with the View aspect of an application - i.e. controls, pages, windows, and all that. And of course, its main use is actually triggering actions from other threads to update controls correctly or at the right time (for example, setting focus only after some control has rendered/laid-out itself completely is most easily accomplished using the Dispatcher, because in WPF rendering isn't exactly deterministic).
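The focus-after-layout trick mentioned above looks roughly like this (`searchBox` is an invented control name): by queuing the call at a priority below rendering, it only runs once layout has finished.

```csharp
using System;
using System.Windows.Threading;

// Queue the focus call behind pending layout/render work, so it runs
// only after the control has actually been laid out on screen.
searchBox.Dispatcher.BeginInvoke(
    DispatcherPriority.Loaded,
    new Action(() => searchBox.Focus()));
```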
BackgroundWorker can make multithreaded code a lot simpler than it normally is; it's a simple concept to grasp, and most of all (if it makes sense) you can derive custom workers from it, which can be specialized classes that perform a single task asynchronously, with properties that can function as parameters, progress notification and cancellation etc. I always found BackgroundWorker a huge help (except when I had to derive from it to keep the Culture of the original thread to maintain the localization properly :P)
The most powerful, but also difficult path is to use the lowest level available, System.Threading.Thread; however it's so easy to get things wrong that it's not really recommended. Multithreaded programming is hard, that's a given. However, there's plenty of good information on it if you want to understand all the aspects: this excellent article by our good fellow Jon Skeet immediately jumps to mind (the last page of the article also has a good number of very interesting links).
In .Net 4.0 we have a different option, Task Parallel Library. I haven't worked with it much yet but from what I've seen it's impressive (and PLINQ is simply great). If you have the curiosity and resources to learn it, that's what I'd recommend (it shouldn't take that much to learn after all).
BackgroundWorker is nice if you're doing a single operation, which provides progress notifications and a completion event. However, if you're going to be running the same operation multiple times, or multiple operations, then you'll need more than one BackgroundWorker. In this case, it can get cumbersome.
If you don't need the progress events, then using the ThreadPool and Dispatcher can be simpler - especially if you're going to be doing quite a few different operations.
If C# 4 is an option, however, using the Task Parallel Library is a great option as well. This lets you use continuation tasks set up using the current SynchronizationContext, which provides a much simpler, cleaner model in many cases. For details, see my blog post on the subject.
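The continuation pattern in a nutshell (`LoadData` and `resultsGrid` are placeholder names): the work runs on the thread pool, and the continuation runs back on the UI thread via the captured scheduler.

```csharp
using System.Threading.Tasks;

// Capture the UI scheduler while we're still on the UI thread.
var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

Task.Factory.StartNew(() => LoadData())              // background work
    .ContinueWith(t => resultsGrid.ItemsSource = t.Result,
                  uiScheduler);                      // back on the UI thread
```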
In my WPF app I am using a lot of objects declared as static for caching purposes.
Just wondering if there are any drawbacks.
I almost never use static data, because of the inherent problems that come into play when you add worker threads.
If you only want one instance of something accessible by your objects, then perhaps the Singleton pattern will help. You might want to read this helpful article on Singletons in C#.
There's also a framework available that makes requesting services really easy. You can set up the Framework to give you a new instance of a service, or the same service every time. The problem is that I can't remember what it's called, and would really appreciate it if someone else could comment on this because I'd like to read up on it again. I thought it was Unity or Prism, but I'm not sure. I know the latter framework is for setting up your application with MVVM principles in mind.
One disadvantage is that static members on a class don't have full lazy instantiation. The static constructor runs the first time any member of the class is accessed. This may or may not be a big concern for you.
A much bigger problem, in my opinion, is that statics are not good for unit testing. Say you are trying to write unit tests for another class, that reference those static objects. You have no way of setting up a mock for those objects. You're forced to use the real thing, which may end up forcing you to start up a large part of your system, in which case it's no longer a unit test, but an integration test.
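The contrast looks like this (a sketch; all the type names are invented): the static dependency can't be swapped out in a test, whereas an injected interface can be replaced with a fake.

```csharp
// Hard to unit test: the static dependency is hard-wired, so every
// test exercises the real PriceCache.
public class OrderService
{
    public void Place(Order order)
    {
        var price = PriceCache.Instance.Lookup(order.Sku);
        // ...
    }
}

// Easy to unit test: the dependency comes in through an interface,
// so a test can pass in a fake IPriceCache.
public interface IPriceCache
{
    decimal Lookup(string sku);
}

public class TestableOrderService
{
    private readonly IPriceCache _cache;

    public TestableOrderService(IPriceCache cache)
    {
        _cache = cache;
    }

    public void Place(Order order)
    {
        var price = _cache.Lookup(order.Sku);
        // ...
    }
}
```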
I don't think you need to avoid the static keyword entirely; just be aware of the limitations you're putting on your program by doing so. And using a Singleton is not the only alternative. You may simply just choose to follow the "just create one" policy. :)