Process lots of small tasks and keep the UI responsive - WPF

I have a WPF application that needs to do some processing of many small tasks.
These small tasks are all generated at the same time and added to the Dispatcher Queue with a priority of Normal. At the same time a busy indicator is being displayed. The result is that the busy indicator actually freezes despite the work being broken into tasks.
I tried changing the priority of these tasks to be Background to see if that fixed it, but still the busy indicator froze.
I subscribed to the Dispatcher.Hooks.OperationStarted event to see if any render jobs occurred while my tasks were processing but they didn't.
Any ideas what is going on?
Some technical details:
The tasks are actually just messages coming from an Observable sequence, and they are "queued" into the dispatcher by a call to ReactiveUI's ObserveOn(RxApp.MainThreadScheduler) which should be equivalent to ObserveOn(DispatcherScheduler). The work portion of each of these tasks is the code that is subscribing through the ObserveOn call e.g.
IObservable<TaskMessage> incomingTasks;
incomingTasks.ObserveOn(RxApp.MainThreadScheduler).Subscribe(SomeMethodWhichDoesWork);
In this example, incomingTasks would produce maybe 3000+ messages in quick succession, and the ObserveOn pushes each call to SomeMethodWhichDoesWork onto the Dispatcher queue so that it will be processed later.

The basic problem
The reason you are seeing the busy indicator stall is that your SomeMethodWhichDoesWork is taking too long. While it is running, it prevents any other work from occurring on the Dispatcher.
Input and Render priority operations generated to handle animations are lower than Normal, but higher priority than Background operations. However, operations on the Dispatcher are not interrupted by the enqueuing of higher-priority operations. So a Render operation will have to wait for an already-running operation to finish, even if that running operation is only a Background one.
Caveat regarding observing on the DispatcherScheduler
ObserveOn(DispatcherScheduler) will push everything through at Normal priority by default. More recent versions of Rx have an overload that allows you to specify a priority.
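For example, something along these lines should work, assuming your Rx version exposes the priority-aware overloads (the DispatcherScheduler constructor taking a DispatcherPriority is one such overload - check it exists in the System.Reactive.Windows.Threading build you are using):
// Sketch only: assumes a DispatcherScheduler constructor that accepts a DispatcherPriority.
var backgroundScheduler = new DispatcherScheduler(Dispatcher.CurrentDispatcher, DispatcherPriority.Background);
incomingTasks
    .ObserveOn(backgroundScheduler)          // each item is now queued at Background priority
    .Subscribe(SomeMethodWhichDoesWork);
That keeps your work below Render and Input in the queue, though it still blocks anything at Background or lower while each item runs.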
One point to highlight that's often missed is that items will be queued onto the Dispatcher by the DispatcherScheduler as soon as they arrive NOT one after the other.
So if your 3000 items all turn up fairly close together, you will have 3000 operations at Normal priority backed up on the Dispatcher blocking everything of the same or lower priority until they are done - including Render operations. This is almost certainly what you were seeing - and that means you might still see problems even if you do all but the UI update work on a background thread depending on how heavy your UI updates are.
In addition to this, you should check you aren't running the whole subscription on the UI thread - as Lee says. I usually write my code so that I Subscribe on a background thread rather than use SubscribeOn, although SubscribeOn is perfectly fine too.
Recommendations
Whatever you do, do as much work as possible on a background thread. That point has been done to death on StackOverflow, and elsewhere. Here are some good resources covering this:
MSDN Entry on WPF Threading Model
MSDN Magazine "Build More Responsive Apps With The Dispatcher", by Shawn Wildermuth
If you want to keep the UI responsive in the face of lots of small updates you can either:
Schedule items at a lower priority, which is nice and easy - but not so good if you need a certain priority
Store updates in your own queue, and have each operation you run invoke the next item from your queue as its last step (a rough sketch follows).
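Here's a rough sketch of that second option (the class and member names are made up; SomeMethodWhichDoesWork is the worker from your question):
// Sketch: each Dispatcher operation processes one item and then re-queues itself,
// so Render/Input operations can interleave between items.
class DispatcherWorkQueue
{
    private readonly Queue<TaskMessage> _pending = new Queue<TaskMessage>();
    private readonly Dispatcher _dispatcher = Dispatcher.CurrentDispatcher;
    private bool _draining;

    public void Enqueue(TaskMessage message)
    {
        _pending.Enqueue(message);
        if (_draining) return;
        _draining = true;
        _dispatcher.BeginInvoke(DispatcherPriority.Background, new Action(ProcessNext));
    }

    private void ProcessNext()
    {
        if (_pending.Count == 0) { _draining = false; return; }
        SomeMethodWhichDoesWork(_pending.Dequeue());
        // Re-queue as the last step so higher-priority work gets a chance to run first.
        _dispatcher.BeginInvoke(DispatcherPriority.Background, new Action(ProcessNext));
    }
}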
The bigger picture
It's worth stepping back a bit and looking at the bigger picture as well.
If you separately dump 3000 items into the UI in succession, what's that going to do for the user? At best they are going to be running a monitor with a refresh rate of 100Hz, probably lower. I find that frame rates of 10 per second are more than adequate for most purposes.
Not only that, human beings supposedly can't handle more than 5-9 bits of information in one go - so you might find better ways of aggregating and displaying information than updating so many things at once. For example, make use of master/detail views rather than showing everything on screen at once etc. etc.
Another option is to review how much work your UI update is causing. Some controls (I'm looking at you XamDataGrid) can have very lengthy measure/arrange layout operations. Can you simplify your animations? Use a simpler Visual tree? Think about the popular busy spinner that looks like circling dots - but really it's just changing their color. A great effect that is fairly cheap to achieve. It's worth profiling your application to see where time is going.
I would think about the overall approach front-to-back as well. If you are reasonably certain you are going to get that many items to update at once, why not buffer them up and manage them in chunks? That might have advantages all the way back to the source - which perhaps is on a server somewhere? In any case, Rx has some nice operators, like Buffer, that can turn a stream of individual items into larger lists - and it has overloads that can buffer by time and size together.
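For example, something like this (a sketch; the 200 ms window and the 100-item cap are arbitrary numbers to tune for your app):
// Sketch: batch the stream into lists of up to 100 items or 200 ms, whichever comes first,
// then process each batch as a single Dispatcher operation.
incomingTasks
    .Buffer(TimeSpan.FromMilliseconds(200), 100)
    .Where(batch => batch.Count > 0)
    .ObserveOn(RxApp.MainThreadScheduler)
    .Subscribe(batch =>
    {
        foreach (var message in batch)
            SomeMethodWhichDoesWork(message);
    });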

Have you tried using .SubscribeOn(TaskPoolScheduler.Default) to subscribe on a different thread?

@Pedro Pombeiro has the right answer.
The reason you are seeing the freezes on the UI is that you are queueing the work on the Dispatcher. This means the work will be done on the UI thread. You can think of the Dispatcher as a message pump that is constantly draining messages from each of its priority queues (one queue per priority: SystemIdle, ApplicationIdle, ContextIdle, Background, Input, Loaded, Render, DataBind, Normal, Send).
Putting your work onto a different priority queue does not make it run concurrently, just asynchronously.
To run your work on another thread using Rx, then use SubscribeOn as above. Remember to then schedule any updates to the UI back on to the Dispatcher with ObserveOn.
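Roughly like this (a sketch only - DoHeavyWork and ApplyToViewModel stand in for your own code, and TaskPoolScheduler.Default assumes Rx 2.0 or later):
// Sketch: do the heavy lifting on a pool thread, marshal only the UI update back.
incomingTasks
    .SubscribeOn(TaskPoolScheduler.Default)           // set up the subscription off the UI thread
    .ObserveOn(TaskPoolScheduler.Default)             // make sure the heavy step runs on a pool thread
    .Select(message => DoHeavyWork(message))          // hypothetical CPU-bound step
    .ObserveOn(RxApp.MainThreadScheduler)             // back onto the Dispatcher
    .Subscribe(result => ApplyToViewModel(result));   // hypothetical UI-bound update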

Related

LabVIEW: how to stop a loop inside event structure

I created an event structure for two buttons, start ROI and stop ROI. When the user presses start ROI, it goes to this event and does the following:
check if the camera is open and is in idle
enqueue "none" to the queue to initialize the queue
in the loop, dequeue every iteration to check whether there's an "invoked" message, which is inserted from the callback
if the element is "invoked" then update the region
The problem I am seeing is that when it is in the loop I cannot press the stop ROI or any other buttons. But the ROI keeps updating. I am puzzled why this is happening.
Could you please help me?
Thanks,
Edit events for that case (the one pictured in your screenshot) and make sure the box titled "Lock front panel" is unchecked. This should solve your issue.
As far as I can tell from the code you have shown, your event structure should not be attempting to handle the stop ROI Value Change event. It doesn't need to, because the only place you need to respond to that event is inside your innermost loop and there you are handling the button click by polling the value of its terminal anyway.
However as @Dave_St explains, this will only work if the loop runs regularly, i.e. if the Dequeue Element function either receives data regularly or has a short timeout, because otherwise it will wait for data indefinitely and the loop iteration will not complete until the dequeue has executed. Having an event handler for the button click can't help here because it can't interrupt the program flow - the event structure only waits for an event to happen and then allows the code in the corresponding frame to execute.
More generally though, and looking at your front panel which suggests you are going to want to deal with further controls and events, the problem is that you are trying to do a time-consuming task inside an event structure. That's not how event structures are designed to be used. You should use a design pattern for your app that separates the UI (responding to user input) from the process (acquiring images from a camera) - perhaps a queued message handler would be suitable. This may seem harder to understand at first but it will make your program much easier to develop, extend, and maintain.
You can find more information, examples and templates in your LabVIEW installation and its online help. I do recommend using one of the templates as your starting point if possible because they already implement a lot of common functionality and can save you a lot of redundant effort.

WPF Dispatcher not releasing object for GC

I'm trying to track down some memory leaks in my application and according to the ANTS profiler, many of my objects are being held up by System.Windows.Threading.Dispatcher. My application is basically single threaded and the only explicit calls I make to Dispatcher.Invoke are unrelated to the objects being held up. The objects do seem to all be UserControl children of a FixedDocument subclass of mine, if that means anything to anyone.
What is causing the dispatcher to not release my objects?
There is an operation scheduled on the dispatcher (e.g., with BeginInvoke), and one of the arguments to that operation references your ReportVisualTable, either directly or indirectly.
Looking at the types included in your retention graph, it looks like a DocumentViewer attempted to bring a page into view, but the page hadn't been loaded yet, so the operation was deferred on the dispatcher. The operation was enqueued at priority level Inactive, which means it will just sit in the queue indefinitely, because that priority level is never processed. When the requested page is loaded, the operation's priority is bumped up to Background, but if that never happens, it seems the operation will just stay inactive.
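To illustrate the mechanism with a contrived sketch (this is not the actual DocumentViewer code; page and BringPageIntoView are made up):
// Sketch: an operation queued at Inactive priority is never processed, so the delegate
// and anything it captures (here, 'page' and whatever it references) stay rooted by the Dispatcher.
DispatcherOperation op = Dispatcher.CurrentDispatcher.BeginInvoke(
    DispatcherPriority.Inactive,
    new Action(() => BringPageIntoView(page)));
// Only if something later bumps the priority will the operation run and be released:
// op.Priority = DispatcherPriority.Background;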

Silverlight UI Thread Blocking

Can someone please tell me how Silverlight divides processing between the UI thread and the other "worker" threads?
I have a scenario where I have to update several hundred complex UI objects in the view via a viewmodel. Each item is backed by its own viewmodel.
If each viewmodel had a property, for example, called IsSelected, which changed a background color through behaviours, how should I go about making these changes with minimal UI thread blocking?
If I update my (several hundred) viewmodels, it blocks the UI thread for around 4 seconds. How can I determine what's doing the blocking? Are there more efficient ways to update?
Thanks
There are definitely more efficient ways than doing it in one go.
A non-Silverlight-specific solution would be to space these updates a few milliseconds apart with delayed DispatcherTimer calls, so the UI thread has some "breathing space" to carry on with other work.
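As a rough sketch (ItemViewModel, IsSelected and the chunk size are placeholders for your own types and tuning):
// Sketch: drain pending view-model updates in small chunks on a DispatcherTimer,
// so the UI thread gets breathing space between chunks.
private readonly Queue<ItemViewModel> _pendingUpdates = new Queue<ItemViewModel>();
private readonly DispatcherTimer _updateTimer =
    new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(50) };

// Call once, e.g. from the view-model's constructor.
private void StartDraining()
{
    _updateTimer.Tick += (s, e) =>
    {
        for (int i = 0; i < 25 && _pendingUpdates.Count > 0; i++)   // ~25 items per tick
            _pendingUpdates.Dequeue().IsSelected = true;
        if (_pendingUpdates.Count == 0)
            _updateTimer.Stop();
    };
    _updateTimer.Start();
}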
But you should also give some thought to your architecture, if you're dealing with hundreds of VMs it might worth using lazy loading and updating your screen sequentially, in order of importance for your audience.
See this answer too for more explanation: https://stackoverflow.com/a/1710868/21217

WPF performance problem with CommandManager

We've created a new, quite complex, WPF application from the ground up and have run into a performance problem as the number of commands registered with the CommandManager increases. We're using simple lightweight commands in our MVVM implementation, however the third party controls we're using (Infragistics) do not, and call CommandManager.RegisterClassCommandBinding liberally to add RoutedCommands. The performance problem manifests itself as a perceived sluggishness in the UI when responding to user input, for example tabbing between controls is slow, text input is 'jerky' and popup animation is 'clunky'. When the app is first fired up the UI is snappy. As more screens containing Infragistics grids are opened the performance deteriorates.
Internally, the CommandManager has a private field named _requerySuggestedHandlers, which is a List<WeakReference>. I've used reflection to get a reference to this collection, and I've noticed that when I call .Clear(), the responsiveness of the UI improves back to its initial state. Obviously I don't want to go round clearing collections that I know little about, especially using reflection (!) but I did it to see if it would cure the performance problems, and voila it did.
Normally, this situation would clean itself up after a certain amount of time passes. However, the collection of WeakReferences (_requerySuggestedHandlers) will only get trimmed once a garbage collection is initiated, which is non-deterministic. Because of this, when we close down windows containing grids (Infragistics XamDataGrid), the CanExecute property for 'dead' grid commands continue to be evaluated unnecessarily, long after the window is closed. This also means that if we close down a number of windows, the performance is still sluggish until a garbage collect is initiated. I understand that this can happen on allocation, and I've seen that myself because if I open a further window this causes the initial memory (from the disposed Windows) to be collected and performance returns to normal.
So, given the above, here are my questions:
How, and from where, does CommandManager.InvalidateRequerySuggested() get called? I haven't found any documentation on MSDN that explains this in any great detail. I hooked up to the CommandManager.RequerySuggested event and it looks like it's being called whenever controls lose focus.
Is it possible to suppress CommandManager.InvalidateRequerySuggested() being called in response to user input?
Has anyone else run into this issue, and if so, how have you avoided it?
Thanks!
This sounds like one of the rare cases where deterministically calling GC.Collect() is the right thing to do. The ordinary argument against it is that the garbage collector is smarter than you are. But when you're dealing with WeakReference objects, you enter territory where you may know something that the garbage collector doesn't. Kicking off garbage collection is certainly better than clearing _requerySuggestedHandlers - among other things, it won't do anything to the WeakReference objects that point to controls that are still alive.
I'd choose this over trying to figure out how to suppress RequerySuggested, since that would screw up the behavior of those commands that you still care about.
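A minimal sketch, assuming you hook the Closed event of the grid-heavy windows:
// Sketch: force a collection when a grid-heavy window closes, so the dead WeakReferences
// in _requerySuggestedHandlers get trimmed promptly instead of waiting for the next GC.
gridWindow.Closed += (s, e) =>
{
    GC.Collect();
    GC.WaitForPendingFinalizers();
    GC.Collect();   // second pass collects anything finalization kept alive
};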

Patterns for delegating work to multiple threads

I'm updating a WinForms application that uses a BackgroundWorker to do something useful when a button is pressed.
The trouble is, "something useful" iterates sequentially through a long list of things to do, and can take quite a while to complete.
I'm considering having the button press event create multiple BackgroundWorkers instead of one or having the current BackgroundWorker create additional BackgroundWorkers to do the actual work.
Both approaches seem fairly equivalent to me.
Are there advantages/disadvantages to either one? Is there a better way to do this?
Have you looked at using the background worker with Parallel.For? (see Parallel.For on MSDN)
Managing the multiple workers could be an issue - that's the kind of thing the Parallel extensions do...
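As a rough sketch (workItems, ProcessItem and statusLabel are placeholders for your own list, method and control):
// Sketch: one BackgroundWorker keeps the UI free, and Parallel.ForEach fans the
// long list of work items out across the thread pool.
var worker = new BackgroundWorker();
worker.DoWork += (s, e) =>
{
    Parallel.ForEach(workItems, item => ProcessItem(item));
};
worker.RunWorkerCompleted += (s, e) =>
{
    // RunWorkerCompleted fires back on the UI thread, so it's safe to touch controls here.
    statusLabel.Text = "Done";
};
worker.RunWorkerAsync();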
PK :-)
