Performance issue with "Measure" - wpf

I'm encountering a performance issue in my application. Essentially, I click a button and a list is populated with databound data (virtualized because of the large amount of data), and then I click another button which adds a row to an associated list view. I'm being vague because I think that's enough to illustrate how little is actually going on in the UI.
Here's what I know:
I don't see the issue on my beefy dev computer running Win 7 Pro, nor on XP SP3 machines with decent specs. I only see it on a certain brand of laptops (Lenovo ThinkPads) running Win 7 Enterprise with 4 GB RAM and a Core i5 CPU (much beefier than the XP desktops).
Because of the aforementioned findings, I don't think this is an issue with the code.
I profiled with Microsoft's PerfView tool and noticed what I would assume to be an incredibly large number of calls to UIElement.Measure (not ever invoked directly by our code), something I don't see when I profile on the other machines.
The laptop has a 1360x780 resolution, so I thought that perhaps the small resolution was causing the GPU to unnecessarily render the controls because of some data binding that I might be doing (which might explain the large number of calls to Measure()). I extended the laptop's display to my 24" monitor and didn't see any improvement.
Right now I'm assuming that the issue is with the GPU. I've updated the driver with no improvements.
Even though I don't think it's an issue with code, is there a WPF equivalent to "SuspendLayout()"?
Is there a way to profile GPU performance to see if it is being hammered during certain processes?
(Long shot) Has anyone had similar performance issues that seem to be computer-specific, and any suggestions on how to track them down?
Sorry if this is a vague question. I tried to make it comply with SO's usage reqs. Let me know if you want any more info.
Just as an addendum: The program is using WPF, C# 4.0, the issue seems to be around Telerik controls (though I don't think they're suspect since we use them elsewhere without issue).

Turns out it's caused by a known Microsoft issue. I’d try to explain, but I won’t. Mainly because I can’t.
Article talking about fix (see post by Viðar on 3 August 2010):
Microsoft Hotfix site: http://support.microsoft.com/kb/2484841/en-us
Fix: http://archive.msdn.microsoft.com/KB2484841/Release/ProjectReleases.aspx?ReleaseId=5583

1. Answer
To prevent rampant MeasureOverride calls originating from the WPF ContextLayoutManager:
protected override void OnChildDesiredSizeChanged(UIElement el)
{
    /* base.OnChildDesiredSizeChanged(el); */   // avoid rampant remeasuring
}
2. Relevant citation
UIElement.OnChildDesiredSizeChanged(UIElement) Method ...
The OnChildDesiredSizeChanged(UIElement) method has the default implementation of calling InvalidateMeasure() on itself. A typical implementation would be: do whatever optimization your own element supports, and then typically call base OnChildDesiredSizeChanged(UIElement) from at least one of the code branches...
...the implication (and fact-of-the-matter) being that, for any single parent layout pass originated by any one of its children, the parent's MeasureOverride will be called additionally—and likely extraneously—once for each of its children whose size(s) have changed as well.
3. Discussion
In the case where multiple children change their sizes "at the same time", the parent will typically detect and account for the new overall layout amongst all of its children entirely during just the first of these calls. This is standard practice in WPF, and is encouraged by MeasureOverride(…) deliberately excluding any indication of some specific triggering child. Besides the fact that in the most common cases there is no such child (see above link for details), it makes code for attempting any sort of "partial layout" onerous. But mostly, why would any layout calculation ever want to proceed without first obtaining all the very latest available measurements anyway?
So we see that after a single MeasureOverride call, triggered by whichever child happened to be "first" (it shouldn't matter), the layout of the parent should actually be final regarding all of its latest child size information. But this doesn't mean that any queued OnChildDesiredSizeChanged notifications—for other children whose size(s) had also changed—have gone away. Those calls are still pending with the parent, and unless the virtual base call is explicitly abandoned (as shown in bullet #1), each will generate one additional, now-extraneous MeasureOverride call.
4. Caveat
The code shown here disables all child-initiated measure invalidations, which is appropriate for cases where the parent either willfully forbids such changes, or is inherently already aware of them. This is not uncommon; for example, it includes any parent that always fully determines and enforces the size of its children, or more generally, any parent that only adopts DesiredSize values from its children during its own measure pass. What's important is that the measuring of the parent be sufficiently guaranteed by its own parent only.
The situation where a parent might wish to cancel some of the child-initiated measure notifications while preserving/allowing others will naturally depend on additional particular circumstances. It seems more obscure, so it is not addressed here.
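For concreteness, here is a minimal sketch (all names are illustrative, none come from the original code) of the kind of parent described in the caveat: a panel that measures and stacks its children entirely during its own passes, so child-initiated measure notifications are redundant and can be safely swallowed:
using System;
using System.Windows;
using System.Windows.Controls;

// Sketch: a vertical-stacking panel that fully determines its children's
// layout during its own measure/arrange passes.
public class SimpleStackPanel : Panel
{
    protected override Size MeasureOverride(Size availableSize)
    {
        double width = 0, height = 0;
        foreach (UIElement child in InternalChildren)
        {
            // The parent gathers every child's latest DesiredSize here,
            // in a single pass.
            child.Measure(availableSize);
            width = Math.Max(width, child.DesiredSize.Width);
            height += child.DesiredSize.Height;
        }
        return new Size(width, height);
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        double y = 0;
        foreach (UIElement child in InternalChildren)
        {
            child.Arrange(new Rect(0, y, finalSize.Width, child.DesiredSize.Height));
            y += child.DesiredSize.Height;
        }
        return finalSize;
    }

    // Swallow child-initiated invalidations: the default implementation would
    // call InvalidateMeasure() once per notifying child, causing the extra
    // MeasureOverride calls discussed above.
    protected override void OnChildDesiredSizeChanged(UIElement child)
    {
    }
}
As stated above, whether swallowing every notification is safe depends on the parent's own measure being sufficiently guaranteed by its parent.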

Related

Differences between working with states and windows (time) in Flink streaming

Let's say we want to compute the sum and average of the items, and can work with either states or windows (time).
Example working with windows -
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#example-program
Example working with states -
https://github.com/dataArtisans/flink-training-exercises/blob/master/src/main/java/com/dataartisans/flinktraining/exercises/datastream_java/ride_speed/RideSpeed.java
Can I ask what the reasons would be for choosing one over the other? Can I infer that if the data arrives very irregularly (50% arrives within the defined window length and the other 50% doesn't), the result of the window approach is more biased (because those 50% of events are dropped)?
On the other hand, do we spend more time checking and updating the states when working with states?
First, it depends on your semantics... The two examples use different semantics and are thus not directly comparable. Furthermore, windows work with state internally, too. It is hard to say in general which approach is the better one.
As Flink's window semantics are very rich, I would suggest using windows. If you cannot express your semantics with windows, using state can be a good alternative. Using windows has the additional advantage that state handling (which is hard to get right) is done automatically for you.
The decision is definitely independent of your data arrival rate. Flink does not drop any data. If you work with event time (rather than processing time), your result will be the same regardless of the data arrival rate.

Radio buttons not selecting in old program

I wrote a large, complex C program around 20(!) years ago. As far as I can recall it worked fine at the time in all respects; it was probably running on Windows 95.
Now I need to use it again. Unfortunately the radio buttons in it no longer appear to work properly (the ordinary push buttons are all behaving correctly). As I click on a radio button, I get some feedback that Windows is acknowledging my click, inasmuch as I see a dotted line appear around the button's text and the circle of the button goes grey for as long as I hold the mouse button down, but when I release it I see that the selected button has not changed.
My suspicion is that I was perhaps getting away with some bad practice at the time which worked on Windows 95 but no longer works on newer versions of Windows, but I'm struggling to work out what I did wrong. Any ideas?
EDIT: It's difficult to extract the relevant code because the message handling in this program was a tangled nightmare. Many buttons were created programmatically at runtime, and there were different message loops running when the program was in different modes of operation. The program was a customisable environment for running certain types of experiment; it even had its own built-in interpreted language! So I'm not expecting an answer like "you should have a comma instead of a semicolon at line 47", but perhaps something more like "I observed similar symptoms once in my program and it turned out to be ....." or perhaps "the fact that the dotted rectangle is appearing means that process AAA has happened, but maybe step BBB has gone wrong".
EDIT: I've managed to extract some key code which may contain an error...
char *process_messages_one_at_a_time()
{
    MSG msg;
    int temp;

    /* Peek (without removing) for a message belonging to window 'winh'... */
    temp = PeekMessage(&msg, winh, 0, 0, PM_NOREMOVE);
    if (temp)
    {
        /* ...then retrieve the next message for ANY window on this thread
           (note the NULL handle, unlike the PeekMessage filter above) */
        GetMessage(&msg, NULL, 0, 0);
        if (msg.message == WM_LBUTTONUP)
        {
            mouse_just_released_somewhere = TRUE;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    if (button_command_waiting)
    {
        button_command_waiting = FALSE;
        return (button_command_string);
    }
    else
    {
        return (NULL);
    }
}
There are two simple things to check when using radio buttons. First is to make sure that each has the BS_AUTORADIOBUTTON property set. The second is to make sure that the first button in the tab order and the next control after the set of buttons (typically a group box) have the WS_GROUP property set, while the other buttons have it clear.
A few suggestions:
I'd try using Spy++ to monitor the messages in that dialog box, particularly those to and from the radio button controls. I wonder if you'll see a BM_SETCHECK that your program is sending (i.e., somewhere you're unchecking the button programmatically).
Any chance your code ever checks the Windows version number? I've been burned a few times with an == where I should have used a >= to ensure version-checking compatibility.
Do you sub-class any controls? I don't remember, but it seems to me there were a few ways sub-classing could go wrong (and the effects weren't immediately noticeable until newer versions of Windows rolled in).
Owner-drawing the control? It's really easy for owner-drawing to stop working with newer Windows GUI styles.
Working with old code like that, the memories come back to me in bits and pieces, rather than a flood, so it usually takes some time before it dawns on me what I was doing back then.
If you just want to get the program running to use it, might I suggest "compatibility mode".
http://www.howtogeek.com/howto/windows-vista/using-windows-vista-compatibility-mode/
However, if you expect a longer useful life for the software, you might want to consider rewriting it. Rewriting it is nowhere near the complexity or work of the initial write, because of a few factors:
Developing the requirements of a program is a substantial part of the required work in making a software package (the requirements are already done)
A lot of the code is already written and only parts may need to be slightly refactored in order to be updated
New library components may be more stable alternatives to parts of the existing codebase
You'll learn how to write current applications with current library facilities
You'll have an opportunity to add comments and generally refactor and clean up the code (thus making it more maintainable for the anticipated, extended life)
The codebase will be more maintainable/compatible going forward for additional changes in both requirements and operating systems (both because it's updated and because you've had the opportunity to re-understand the entire codebase)
Hope that helps...

Are there any good software development "patterns" for memory intensive .net programs?

Basically I'm working on a program that processes a lot of large video and image files, and I'm struggling with the memory management side of it because I've never dealt with anything quite like this before.
For instance, it stores all these images in a database, and loads a list of videos, and then you can switch between the videos and view images from the video. Right now, it's keeping all of those images in memory all the time, which is eating up a lot of space. I know I can lazy load the images, but once you've switched back and forth you get all of them stuck in memory.
I want to take advantage of the WPF databinding functionality and MVVM as much as possible, but if I need to look at a different architecture I will.
I'm just looking for general advice, tips, links to articles, or anything that could help.
One of the things you could look at is data virtualization, which is not provided in WPF by default (it provides UI virtualization instead). With data virtualization you can load and bind the data for an item or range of items while they're visible, then unload it when they're not.
Here's a great article that describes a concrete implementation that you may be able to use as-is or adapt:
http://www.codeproject.com/KB/WPF/WpfDataVirtualization.aspx
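If you want a feel for the mechanics before diving into the article, here is a rough, much-simplified C# sketch of the idea (the article's implementation is far more complete, implements IList for WPF binding, and unloads old pages; all names here are illustrative):
using System;
using System.Collections.Generic;

// Rough sketch of data virtualization: expose a full-sized list, but only
// materialize pages of items on demand. PageSize, the loadPage delegate,
// and the lack of page unloading are all simplifications.
public class VirtualList<T>
{
    private const int PageSize = 100;
    private readonly Func<int, int, IList<T>> _loadPage;  // (startIndex, count) -> items
    private readonly Dictionary<int, IList<T>> _pages = new Dictionary<int, IList<T>>();

    public int Count { get; }

    public VirtualList(int count, Func<int, int, IList<T>> loadPage)
    {
        Count = count;
        _loadPage = loadPage;
    }

    public T this[int index]
    {
        get
        {
            int page = index / PageSize;
            if (!_pages.ContainsKey(page))
                _pages[page] = _loadPage(page * PageSize, PageSize);  // fetch on first touch
            return _pages[page][index % PageSize];
        }
    }
}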
It sounds like the main problem you're having is not so much the performance-intensiveness of the application (which things like fixed-size buffers and static allocation will help with) but its overall memory footprint. The way to control that is with virtualization.
Lazy loading gets you halfway there: you don't actually create the object until something needs it. That's fine, but the longer the user works with the application and the more objects he visits in the UI, the more objects get created, and eventually the application runs out of memory.
So you want to throw away objects that the user doesn't need anymore. Figuring out which objects the user doesn't need can be a hard problem, but it can also be as easy as assuming that the user doesn't need the object that he used least recently. You use a least-recently-used (LRU) cache to do this.
This is totally consistent with the MVVM pattern. In your view class, you make the property getter for the object follow this logic (a minimal C# rendering; _image, LoadImage, and _cache are illustrative names):
if (_image == null)        // object hasn't been loaded
    _image = LoadImage();  // load object
_cache.Touch(this);        // add to the LRU cache (whether we loaded it or not)
return _image;
The LRU cache I wrote keeps a simple queue of the objects it contains. When you add an object to the cache, if it's not already in the queue it gets added to the back, and if it is already in the queue it gets moved to the back.
If the queue's at its capacity when you add an object, it pops off whatever is at the front of the queue (which is the one that was used least recently) and raises the DiscardingOldestItem event.
This event is the object's chance to tell anything that holds a reference to it (i.e. the view object that it's a property of) that it needs to be discarded (probably by raising an event of its own). The view object's event handler should first raise the PropertyChanged event. If the property getter gets called when it does this, there's a binding somewhere that's still looking at the property, so it shouldn't be discarded yet. (Also, since the getter was called, the object just got moved to the back of the queue.) Otherwise, it can be thrown away.
(Note that if you have more objects visible in the UI than the cache can hold, this little dance becomes an infinite loop and you'll get a stack overflow.)
A more sophisticated approach would have the LRU cache start discarding old items when the application started running low on memory (it uses a fixed capacity right now). That's a straightforward change, but if you make that change, the scenario described in the previous paragraph is something you need to give more thought to; one very large object could result in the whole UI going kablooey.
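For concreteness, here is a minimal sketch of such a cache, following the queue-based design described above (the DiscardingOldestItem name comes from the text; everything else is illustrative, and the linear Find is fine for small capacities):
using System;
using System.Collections.Generic;

// Minimal LRU cache following the queue-based design described above.
// Assumes a positive capacity.
public class LruCache<T>
{
    private readonly LinkedList<T> _queue = new LinkedList<T>();
    private readonly int _capacity;

    // Raised when the least-recently-used item is about to be evicted.
    public event Action<T> DiscardingOldestItem;

    public LruCache(int capacity) { _capacity = capacity; }

    // Add a new item at the back of the queue, or move an existing one there.
    public void Touch(T item)
    {
        LinkedListNode<T> node = _queue.Find(item);
        if (node != null)
        {
            _queue.Remove(node);
        }
        else if (_queue.Count >= _capacity)
        {
            T oldest = _queue.First.Value;   // front = least recently used
            _queue.RemoveFirst();
            if (DiscardingOldestItem != null)
                DiscardingOldestItem(oldest);
        }
        _queue.AddLast(item);
    }
}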
It seems that to increase raw performance you would actually want to avoid patterns. They have their uses, don't get me wrong, but if you're trying to blast video at the highest performance possible, the last thing you need to do is introduce abstraction layers that are designed to produce higher-quality code, not increase application performance.
This article on InformIT has a lot of good info on the subject, although it is more C and C++. It suggests:
Static Allocation Pattern: Allocates memory up front
Pool Allocation Pattern: Preallocates pools of needed objects
Fixed Sized Buffer Pattern: Allocates memory in same-sized blocks
Smart Pointer Pattern: Makes pointers reliable
Garbage Collection Pattern: Automatically reclaims lost memory
Garbage Compactor Pattern: Automatically defragments and reclaims memory
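As a rough C# sketch of the pool-allocation idea from that list (not from the article; all names are illustrative), reusing fixed-size buffers instead of allocating a fresh one per video frame:
using System.Collections.Concurrent;

// Sketch of the Pool Allocation / Fixed Sized Buffer idea: rent and return
// same-sized byte buffers, keeping pressure off the garbage collector.
public class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize) { _bufferSize = bufferSize; }

    public byte[] Rent()
    {
        byte[] buffer;
        return _pool.TryTake(out buffer) ? buffer : new byte[_bufferSize];
    }

    public void Return(byte[] buffer)
    {
        if (buffer != null && buffer.Length == _bufferSize)
            _pool.Add(buffer);   // only pool correctly sized buffers
    }
}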
"I know I can lazy load the images,
but once you've switched back and
forth you get all of them stuck in
memory."
This is not true, to my understanding. The images can get garbage collected just like anything else, by removing all references to them. Are you sure you don't have a reference to them somewhere? Try a memory profiler like MemProfiler or ANTS to see what's happening.
To those who have found this question looking for general patterns (not WPF-specific) to reduce memory, the famous one (which I have never seen used!) is the Flyweight pattern.
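A minimal C# sketch of the Flyweight idea, for reference (all names illustrative): many lightweight objects share a few heavy instances instead of each carrying its own copy of the data:
using System.Collections.Generic;

// Flyweight sketch: many lightweight Sprite objects share a few heavy
// Texture instances instead of each holding its own copy of the data.
public class Texture
{
    public byte[] Pixels;   // the large, shared (intrinsic) state
}

public static class TextureFactory
{
    private static readonly Dictionary<string, Texture> Shared =
        new Dictionary<string, Texture>();

    public static Texture Get(string name)
    {
        Texture texture;
        if (!Shared.TryGetValue(name, out texture))
            Shared[name] = texture = new Texture();   // load once, share thereafter
        return texture;
    }
}

public class Sprite
{
    public readonly Texture Texture;   // shared flyweight
    public double X, Y;                // cheap per-instance (extrinsic) state

    public Sprite(string textureName) { Texture = TextureFactory.Get(textureName); }
}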

Trying to track down a Silverlight memory leak that only happens in browsers

This is an odd one. I am making an app that is kind of a game, and I wanted to have a shooting starburst effect. I made it one evening and it all worked well, until I noticed that my browser was eating over 300 MB of RAM, gaining about 1 MB every 5 seconds, mainly when the starburst would happen.
Here is an example stripped down to just the starburst:
http://www.sizzln.com/example.htm
First thought: I am not removing the objects, or I still have references somewhere. I am placing each generated star into a Canvas, but I am removing old stars every 3 seconds. I do have a lot of DoubleAnimations as well, but I even have a callback to set everything to null.
Here is the weird part: if I convert it to WPF it doesn't happen, and if I run it inside of Silverlight Spy 3, it doesn't happen. If I take a heap dump using WinDbg and SOS.dll, it reports that it should only be using between 1.8 and 3 MB of RAM.
I have the GC running every 3 seconds to clean up, but it never has any effect. I can see in the heap dump that many objects are now deleted, and I always get back to 1.8 MB or so after a GC, but the memory shown in Task Manager just keeps going up.
I don't know what to do; I think I am carefully removing the objects, unless my heap dump is not being honest.
Are you running Vista or Win7? It sounds like the OS is not reclaiming memory, as it shouldn't unless it needs to.
It may also be that the Silverlight GC doesn't free its buffers, on the assumption that the memory may need to be reallocated soon.
In either case, it doesn't sound like anything to worry about, as long as the profiler says your program only uses 1.8MB after the GC runs.
I just briefly looked over your code. You have a lot of places where you hook into events (+=), but never unhook (-=). These are hard references and therefore won't ever be collected if they are ultimately connected to a root object.
OK, I am going to sorta answer my own question. Silverlight doesn't have the handy "BeginAnimation" method, so I found online a quick way to add an extension to do basically the same thing; it did this by creating a storyboard and starting it.
However, the storyboard just stayed there, and I don't exactly know what it was connected to either. Calling Stop() on it after it finishes fixed my memory issue.
One odd side effect is I have to be careful when I call the stop method, when creating so many storyboards it seemed to get a bit confused and it would cause some of the objects to reappear, even after they were removed from the control.
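For what it's worth, a sketch of that storyboard-based extension with the Stop() fix applied might look something like this (the helper name, signature, and DoubleAnimation parameter are assumptions, not the poster's actual code):
using System.Windows;
using System.Windows.Media.Animation;

public static class AnimationExtensions
{
    // Sketch of a BeginAnimation-style helper for Silverlight: wrap the
    // animation in a Storyboard and start it. The key fix: Stop() the
    // storyboard when it completes, so it releases its hold on the target.
    public static void BeginAnimation(this FrameworkElement target,
                                      string propertyPath,
                                      DoubleAnimation animation)
    {
        Storyboard storyboard = new Storyboard();
        Storyboard.SetTarget(animation, target);
        Storyboard.SetTargetProperty(animation, new PropertyPath(propertyPath));
        storyboard.Children.Add(animation);
        storyboard.Completed += (s, e) => storyboard.Stop();  // releases references
        storyboard.Begin();
    }
}
(Stopping a storyboard removes its animated values and snaps properties back to their base values, which would explain the reappearing objects mentioned above.)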

In WinForms, why can't you update UI controls from other threads?

I'm sure there is a good (or at least decent) reason for this. What is it?
"I think this is a brilliant question - and I think there is need of a better answer. Surely the only reason is that there is something in a framework somewhere that isn't very thread-safe."
That "something" is almost every single instance member on every single control in System.Windows.Forms.
The MSDN documentation for many controls in System.Windows.Forms, if not all of them, say "Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe."
This means that instance members such as TextBox.Text {get; set;} are not safe to call from multiple threads.
Making each of those instance members thread-safe could introduce a lot of overhead that most applications do not need. Instead, the designers of the .NET Framework decided, I think correctly, that the burden of synchronizing access to forms controls from multiple threads should be put on the programmer.
[Edit]
Although this question only asks "why", here is a link to an article that explains "how":
How to: Make Thread-Safe Calls to Windows Forms Controls on MSDN
http://msdn.microsoft.com/en-us/library/ms171728.aspx
Because you can easily end up with a deadlock (among other issues).
For example, your secondary thread could be trying to update the UI control, but the UI thread is waiting for a resource locked by the secondary thread to be released, so both threads end up waiting for each other to finish. As others have commented, this situation is not unique to UI code, but it is particularly common there.
In other languages such as C++ you are free to try and do this (without an exception being thrown as in WinForms), but your application may freeze and stop responding should a deadlock occur.
Incidentally, you can easily tell the UI thread that you want to update a control: just create a delegate, then call the (asynchronous) BeginInvoke method on that control, passing it your delegate. E.g.
myControl.BeginInvoke(new Action(myControl.UpdateFunction));
This is the equivalent of doing a C++/MFC PostMessage from a worker thread.
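Fleshing that out slightly, a common pattern (a sketch; statusLabel and the message are illustrative) checks InvokeRequired and marshals the update when called from a worker thread:
using System;
using System.Windows.Forms;

static class UiThreadHelper
{
    // Sketch: update a control safely from any thread. If we're on a worker
    // thread, BeginInvoke queues (POSTs) the update to the UI thread instead
    // of touching the control directly.
    public static void SetStatus(Label statusLabel, string message)
    {
        if (statusLabel.InvokeRequired)
            statusLabel.BeginInvoke(new Action(() => statusLabel.Text = message));
        else
            statusLabel.Text = message;   // already on the UI thread
    }
}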
Although it sounds reasonable, John's answer isn't correct. In fact, even when using Invoke you're still not safe from deadlock situations. When dealing with events fired on a background thread, using Invoke might even lead to this problem.
The real reason has more to do with race conditions and goes back to ancient Win32 times. I can't explain the details here; the keywords are message pumps, WM_PAINT events, and the subtle differences between "SEND" and "POST".
Further information can be found here, here, and here.
Back in 1.0/1.1 no exception was thrown during debugging; what you got instead was an intermittent run-time hanging scenario. Nice! :)
Therefore with 2.0 they made this scenario throw an exception, and quite rightly so.
The actual reason for this is probably (as Adam Haile states) some kind of concurrency/locking issue.
Note that the normal .NET API (such as TextBox.Text = "Hello";) wraps SEND commands (which require immediate action), which can create issues if performed on a separate thread from the one that actions the update. Using Invoke/BeginInvoke uses a POST instead, which queues the action.
More information on SEND and POST here.
It is so that you don't have two things trying to update the control at the same time. (This could happen if the CPU switches to the other thread in the middle of a write/read)
Same reason you need to use mutexes (or some other synchronization) when accessing shared variables between multiple threads.
Edit:
"In other languages such as C++ you are free to try and do this (without an exception being thrown as in WinForms), but you'll end up learning the hard way!"
Ahh yes... I switch between C/C++ and C#, and therefore was a little more generic than I should've been, sorry... He is correct: you can do this in C/C++, but it will come back to bite you!
There would also be the need to implement synchronization within update functions that are sensitive to being called simultaneously. Doing this for UI elements would be costly at both application and OS levels, and completely redundant for the vast majority of code.
Some APIs provide a way to change the current thread ownership of a system so you can temporarily (or permanently) update systems from other threads without needing to resort to inter-thread communication.
Hmm, I'm not quite sure, but I think that with progress controls like waiting bars and progress bars we can update their values from another thread and everything works fine without any glitches.
