Profiling WPF applications - rundown of all methods

I am having trouble profiling my WPF application.
Here is the situation: every use case follows the same pattern: enter values -> click on "compute values" -> loading... -> display values.
The "Loading..." step consists of two phases:
A pure mathematical phase, which is extremely optimized
A "WPF is drawing your controls" phase, which is... well... long.
What I want to do here is to profile the application to have a TreeView with: function, elapsed time, number of calls.
I usually use the Visual Studio profiler (mostly because my company doesn't want to pay for a good profiler. Ask people to optimize performance, give them no decent profiler, and let's just politely call it nice company policy).
The problem is that this profiler does not drill down into the WPF system functions (draw, MeasureOverride, measureLength...).
I used JetBrains' dotTrace for a while (the 10-day trial... meh), which is truly awesome, since it was able to separate the phases even in the most fine-grained situations (time spent coloring one cell in a DataGrid, time spent calculating one cell's width...).
ANTS doesn't seem to profile WPF internals (it just displays "Managed code"...).
So right now, my Visual Studio profiler stops at a function that defines an X axis for a Visiblox chart. It just tells me that WPF takes around 2.3 seconds to "Define XAxis", while those 2.3 seconds are actually the entire time spent drawing all my grids & graphs.
Do you happen to know of a profiler (or a setting in the VS profiler) that can do the magic?
Thanks a lot!

You can use the Windows SDK WPF Performance Suite

Stuck record here.
You think you want a tree view, with elapsed time, call count, etc.,
but what you need is to optimize the app by finding what you can fix to eliminate wall-clock time being spent unnecessarily.
Here's an example of what works best, in my experience: random pausing, i.e. halting the app under the debugger a few times while it is being slow and reading the call stacks.
It is very easy for something to be taking a large fraction of time, where it is not necessarily localized to particular routines, particular lines of code, or particular paths in the call tree.
That method finds it no matter where it is.
What's more, it doesn't bother measuring the problem beyond the minimum needed to find it.
However, if you really feel the need to buy or install something, it doesn't help you there.

Related

how to find "bottleneck"?

I have a WPF application that performs an operation very slowly the first time; the same operation runs quickly the second time. The operation uses third-party components. It seems to be loading some libraries or something similar. How can I find out what is happening, so I can fix it?
The simplest possible thing you can do is watch the Output window while the app is running under the debugger. A line is written for each assembly that is loaded, so if your theory is correct you will see lots of lines added while the slowness occurs.
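If watching the Output window by eye is too coarse, you can log the same information yourself. Here is a minimal sketch (the class name and message format are illustrative, not from the original post) that timestamps each assembly load so you can see whether the loads line up with the slow first run:

    using System;
    using System.Diagnostics;

    public static class AssemblyLoadTracer
    {
        private static readonly Stopwatch Clock = Stopwatch.StartNew();

        // Call once at startup, before the slow operation runs.
        public static void Start()
        {
            AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
                Trace.WriteLine(string.Format("[{0} ms] Loaded {1}",
                    Clock.ElapsedMilliseconds, args.LoadedAssembly.FullName));
        }
    }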
In my experience this isn't the usual cause of delays such as this.
A far better solution is to get hold of a profiler; there are quite a few out there with trial periods so you can evaluate which one best meets your needs. See ANTS from Redgate or dotTrace from JetBrains. These will let you find out exactly where the delays are occurring.

Visual Studio 2012 in profiling mode gives much better performance

I have come across a strange situation and do not know what or how to look for.
We have a Silverlight project hosted in a web project. The Silverlight project communicates using REST services hosted by the web project.
When we run this in debug mode, everything runs fine, as expected. So I decided to profile it and check where I might be losing performance. Here is the interesting part.
I ran the VS2012 profiler, and it collected all the information about the methods executed, their timings and so on. But this time my project was lightning fast. Queries that normally take about 1 second under debug now took less than 200 ms. One very intensive query that takes about 20 seconds in normal mode took less than 600 ms under profiling.
So what I make of this is that my code and project are capable of running this fast, but for some reason they are not that fast in normal debug scenarios.
Can somebody shed light on what is happening under the hood, and how I can achieve this performance in normal scenarios?
I would also like to mention that I have tried release mode and publishing to IIS, but none of these give performance as good as profiling mode.
What I expected was the opposite: under profiling, performance should be worse than normal, since VS2012 is also collecting other data at the same time.
I am confused. Please help.
Thanks
I know you probably don't need help at this point, but for anyone else who stumbles upon this post, I'll give my two cents.
I had this same problem with an XNA project I'm working on. Debug and Release builds both saw MASSIVE slowdowns in certain situations, pulling me down to less than 1 FPS. I was trying to profile the problem to solve it, but the issue never occurred while profiling.
I finally discovered the slowdowns were caused by a Console.WriteLine() I was calling in the situation. Commenting it out solved the issues on both Debug and Release build. Apparently, Console.WriteLine is just INCREDIBLY slow.
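As a rough illustration of the fix (the FrameLogger class and per-frame message are made up for this sketch, not taken from the original project): delete the chatty per-frame output, or move it behind Debug.WriteLine, which is stripped from Release builds entirely.

    using System.Diagnostics;

    static class FrameLogger
    {
        // Called once per frame by the hypothetical game loop.
        public static void LogFrame(int frame, float fps)
        {
            // Console.WriteLine does synchronous console I/O; calling it every
            // frame can dominate the frame time in Debug and Release alike:
            // Console.WriteLine("Frame {0}: {1:F1} FPS", frame, fps);

            // Debug.WriteLine is marked [Conditional("DEBUG")], so the call is
            // removed from Release builds entirely; in Debug it only goes to
            // trace listeners, not the console.
            Debug.WriteLine(string.Format("Frame {0}: {1:F1} FPS", frame, fps));
        }
    }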

Faster way to create a deep zoom collection

I am creating a Silverlight Pivot collection with 31K items (and images); however, when I use the DeepZoomTools library to create the deep zoom images, it takes hours and hours (and hasn't actually completed even once).
Is there a multi-threaded way or distributed way in which collections could be created?
It is a time-intensive process, to be sure. Do your individual data points change often? What we have found in nearly all of our projects is that the image for an individual item almost never changes. That lets you streamline the process a little bit.
What I do in a case like this is process the entire dataset once, initially. The next time I run the process, I only update the images that have been added or modified. As I said, in almost all of my cases this solved the problem you are running into. In fact, when it works, I plug my card generation into whatever business applications are running and generate/modify a card when data is added or changed in the system. That removes the need for batch processing altogether after your initial build.
If that will not work for you, take a look at the code for PAuthor. It uses DeepZoomTools and does so in a multi-threaded way, so you should be able to find the code you are looking for there: PAuthor - CodePlex.
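Neither approach is spelled out in code above, so here is a hedged sketch that combines the two ideas (skip unchanged items, fan the rest out across cores). The createImage callback, the directory layout and the ".xml" output naming are assumptions, not the DeepZoomTools API itself; substitute whatever per-item call you already use:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    static class DeepZoomBatch
    {
        // createImage is whatever per-item DeepZoomTools call you already make;
        // it is passed in so this sketch does not assume a particular signature.
        public static void Process(string sourceDir, string outputDir,
                                   Action<string, string> createImage)
        {
            string[] sources = Directory.GetFiles(sourceDir, "*.jpg");

            Parallel.ForEach(sources, source =>
            {
                string target = Path.Combine(outputDir,
                    Path.GetFileNameWithoutExtension(source) + ".xml");

                // Incremental pass: skip items whose source image has not
                // changed since the deep zoom image was last generated.
                if (File.Exists(target) &&
                    File.GetLastWriteTimeUtc(target) >= File.GetLastWriteTimeUtc(source))
                    return;

                createImage(source, target);
            });
        }
    }

One caveat: if the DeepZoomTools objects you wrap are not thread-safe, create one per worker rather than sharing a single instance across the parallel loop.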
Let me know if you have more specifics about your needs and we can see if we can come up with something.

How to make a project started without debugging behave like one started in debugging mode?

I'm using managed C++ (Visual Studio 2010) to design a GUI in a Form.h file. The GUI acts as a master querying data streamed from a slave card.
Pressing a button calls a function (in the ApplicationIO.cpp file) which creates two threads using the Win32 API (CreateThread(...)): the first handles the data streaming, and the second parses the data and monitors it on a real-time graph in the GUI.
The project behaves differently in the two modes: when started with debugging it updates GUI controls such as a textbox (using Invoke) and the graph during data streaming, whereas when started without debugging no data appears in the textbox and data is drawn very slowly on the chart.
Has anyone ever addressed a similar problem? Any suggestions, please?
A pretty classic mistake is to use Control::Begin/Invoke() too often. You'll flood the UI thread with delegate invoke requests. UI updates tend to be expensive, and you can easily get into a state where the message loop never gets around to its low-priority duties. Like painting. This happens easily; invoking more than a thousand times per second is the danger zone, depending on how much time is spent by the delegate targets.
You solve this by sending updates at a realistic rate, one that takes advantage of the human eye's limited ability to distinguish them. At 25 updates per second they turn into a blur; updating any faster is just a waste of CPU cycles, and it leaves lots of time for the UI thread to do what it needs to do.
Even that might not be slow enough when the updates are expensive. At that point you'll need to skip updates or throttle the worker thread. Note that Invoke() automatically throttles; BeginInvoke() doesn't.
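The question is managed C++, but the pattern is the same in any .NET UI. Here is a minimal C# sketch of the pull model described above (control names and the fake acquisition loop are illustrative): the worker only records the latest sample, and a UI-thread timer picks it up 25 times per second, so no flood of Invoke requests ever reaches the message loop.

    using System;
    using System.Threading;
    using System.Windows.Forms;

    public class StreamingForm : Form
    {
        private readonly TextBox _latestValueBox = new TextBox();
        private readonly System.Windows.Forms.Timer _uiTimer =
            new System.Windows.Forms.Timer();            // ticks on the UI thread
        private volatile string _latestSample;           // written by the worker, read by the timer

        public StreamingForm()
        {
            Controls.Add(_latestValueBox);

            // Worker thread: only records the most recent sample, never calls Invoke.
            new Thread(() =>
            {
                int i = 0;
                while (true)
                {
                    _latestSample = "sample " + i++;
                    Thread.Sleep(1);                     // stands in for the real acquisition
                }
            }) { IsBackground = true }.Start();

            // UI thread: pull the latest value 25 times per second.
            _uiTimer.Interval = 40;                      // 40 ms = 25 updates per second
            _uiTimer.Tick += (s, e) => _latestValueBox.Text = _latestSample;
            _uiTimer.Start();
        }
    }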

WPF: Improving Performance for Running on Older PCs

So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations.
I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus.
I opened the Task Manager Performance tab and could see that CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another application (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower one. I had very limited time to tinker on the deployment machines, so I didn't do a lot of detailed testing.
The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window.
I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool:
For applications without animation, this value should be near 0.
The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested have column headers in a grid that highlight, and buttons that change color and appearance, when hovered over. Even moving the mouse over blank areas of the windows causes the same frame rate and CPU usage, so it doesn't seem to be related to these minor animations.
(Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question).
I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance.
So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are:
What are some general things to look for (or avoid) in the code to improve performance?
What steps can I take using the WPF performance tool to narrow down the problem?
Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?
Are you applying any BitmapEffects to your UI elements?
They are not handled by the GPU, so the CPU takes care of rendering them. If not used carefully (e.g. an OuterGlowBitmapEffect applied to a large, complex element) they can have a terrible impact on performance.
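If that turns out to be the cause, the usual fix is to move from the legacy BitmapEffect classes to the newer, GPU-accelerated Effect pipeline. A sketch (the helper name and glow parameters are made up for illustration):

    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Effects;

    static class GlowHelper
    {
        // Legacy, software-rendered effect (obsolete since .NET 3.5 SP1); the CPU
        // re-renders the element's bitmap whenever it changes:
        //   element.BitmapEffect = new OuterGlowBitmapEffect { GlowSize = 10 };

        // Hardware-accelerated replacement from the Effect pipeline:
        public static void ApplyGlow(UIElement element)
        {
            element.Effect = new DropShadowEffect
            {
                ShadowDepth = 0,   // a depth of 0 turns the drop shadow into a glow
                BlurRadius = 10,
                Color = Colors.Gold
            };
        }
    }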
Also, you might still want to try profiling your app with a performance profiler, just to check that it isn't your own code causing this.
This is not normal for WPF - I'd suspect one of your developers has written code that runs a timer in the background (or more likely given your description, a mouse move handler) which is affecting the UI in some way.
If you have ANTS performance profiler (it's really nice) I'd run that over your app and reproduce the problem.
Once you've done that, ANTS should tell you fairly quickly what the problem is.
If ANTS doesn't reveal anything at all, and shows you that in fact none of your code is running during this time, then I'd suspect buggy graphics card drivers.
You can test for this by disabling hardware acceleration by setting the following registry key, and trying again:
HKEY_CURRENT_USER\Software\Microsoft\Avalon.Graphics\DisableHWAcceleration to 1
Note: the DisableHWAcceleration value should be a DWORD
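For reference, both switches can be flipped from code: the registry write below targets exactly the key quoted above, and on .NET 4.0 or later there is also a per-process alternative that is handy for a quick A/B test (the class name here is just illustrative).

    using Microsoft.Win32;
    using System.Windows.Interop;
    using System.Windows.Media;

    static class SoftwareRenderingSwitch
    {
        // Machine-wide switch via the registry key mentioned above; it affects
        // every WPF app for the current user, so set it back to 0 afterwards.
        public static void DisableHardwareAccelerationInRegistry()
        {
            Registry.SetValue(@"HKEY_CURRENT_USER\Software\Microsoft\Avalon.Graphics",
                              "DisableHWAcceleration", 1, RegistryValueKind.DWord);
        }

        // Per-process alternative (.NET 4.0+); call it early, e.g. in App.OnStartup.
        public static void ForceSoftwareRenderingForThisProcess()
        {
            RenderOptions.ProcessRenderMode = RenderMode.SoftwareOnly;
        }
    }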
