I have a WPF application in which one operation runs very slowly the first time, but the same operation runs quickly the second time. The operation uses third-party components, so it seems to be loading some libraries or something similar. How can I find out what is happening so I can fix it?
The simplest possible thing you can do is watch the Output window while the application is running under the debugger. It writes a line for each assembly that is loaded, so if your theory is correct you will see lots of lines appear while the slowness occurs.
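If you want to log assembly loads yourself with timestamps, a minimal sketch (assuming a standard WPF App class) is to hook AppDomain.AssemblyLoad as early as possible:

```csharp
using System;
using System.Diagnostics;
using System.Windows;

public partial class App : Application
{
    public App()
    {
        // Log each assembly load with a timestamp so you can see whether the
        // slow first operation really is spending its time loading libraries.
        AppDomain.CurrentDomain.AssemblyLoad += (s, e) =>
            Debug.WriteLine(DateTime.Now.ToString("HH:mm:ss.fff") +
                            "  loaded " + e.LoadedAssembly.FullName);
    }
}
```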
In my experience this isn't the usual cause of delays such as this.
A far better solution is to get hold of a profiler. There are quite a few out there with trial periods so you can evaluate which one best meets your needs; see ANTS from Redgate or dotTrace from JetBrains. These will let you find out exactly where the delays are occurring.
I noticed my app becoming quite slow with increasing amounts of data.
I thought it was because of some filters in ng-repeat that were being triggered too often, but I've already optimized them and there's still no sign of a performance improvement.
Then I thought it was database slowness, but it turns out that wasn't the case either.
I'd like to stop guessing where the slowness is coming from and find the cause systematically.
Does anyone know how to do this?
I just see tons of optimization articles everywhere, but first I want to find out which part of the code actually needs optimizing.
I ran a Chrome performance analysis, and it seems to me that there are some AngularJS functions that take forever, but I don't know how to find out what's causing them.
No idea how this can help me. Looks like there's a function call that takes 4.5 seconds...
I'd like to get rid of it :D :D
I guess I was overcomplicating things...
I found the cause of the slowness simply by commenting everything out and then re-enabling it step by step until the slowness first reappeared :D
I have come across a strange situation and don't know what to look for or how to look for it.
We have a Silverlight project hosted in a web project. The Silverlight project communicates with REST services hosted by the web project.
When we run this in debug mode, everything runs as expected. So I thought I'd profile it and check where I might be losing performance. Here is the interesting part.
I ran the VS2012 profiler, and it collected all the information about executed methods, timings and so on. But this time my project was lightning fast. Queries which take about 1 second to execute under a normal debug run were now taking less than 200 ms. One very intensive query which takes about 20 seconds in normal mode took less than 600 ms under profiling.
What I take from this is that my code and project are capable of running this fast, but for some reason they are not that fast in normal debug scenarios.
Can somebody shed some light on what is happening under the hood and how I can achieve this performance in normal scenarios?
I should also mention that I have tried release mode and publishing to IIS, but neither gives performance as good as profiling mode.
Technically, I expected the opposite: under profiling, performance should be worse than normal, since VS2012 is collecting extra data at the same time.
I am confused. Please help.
Thanks
I know you probably don't need help at this point, but for anyone else who stumbles upon this post, I'll give my two cents.
I had this same problem with an XNA project I'm working on. Debug and Release modes both saw MASSIVE slowdowns in certain situations. It pulled me down to less than 1 FPS. I was trying to profile the problem to solve it, but the issue never occurred during profiling.
I finally discovered the slowdowns were caused by a Console.WriteLine() I was calling in that situation. Commenting it out solved the issue in both Debug and Release builds. Apparently, Console.WriteLine is just INCREDIBLY slow.
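If the output is only needed for diagnostics, one workaround (a sketch, not necessarily how the original project solved it) is to route it through a helper marked with [Conditional("DEBUG")], so the calls vanish entirely from Release builds:

```csharp
using System.Diagnostics;

static class FrameLog
{
    // Calls to this method (including argument evaluation at the call site)
    // are stripped from builds that don't define DEBUG, and Debug.WriteLine
    // goes to the trace listeners / debugger output rather than the console.
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Debug.WriteLine(message);
    }
}
```

Debug output can still be slow with a debugger attached, so for per-frame logging you may want to drop the call altogether, as described above.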
I'm about to deploy my new WPF application, and I've just noticed in Task Manager that it's consuming a lot of memory. So I downloaded a trial of Red Gate ANTS to try to find out what was causing this, and I was shocked to see about 90 MB of unmanaged memory usage. Because ANTS does not support unmanaged memory, I then tried WinDbg, which did not itself point to any high usage. This leads me to believe it must be one of the DLLs I'm loading. I'm using the DevExpress controls in my application.
An interesting detail: when I minimize the application, the memory drops right down from, say, 110 MB to about 6-10 MB.
Should I be concerned / worried?
This is my first WPF application and I'm not totally sure what to expect in terms of memory usage. Is the fact that this memory is given back when the app is minimized a sign that everything is OK?
Any thoughts or ideas on what could be causing this would be most helpful.
I've had good luck with SciTech's .Net Memory Profiler (memprofiler.com) if you want to know specifically what's causing it.
Given the nature of the .NET runtime, if you're running on a machine with plenty of memory available, it will generally try to use it. You should only start worrying if you see performance problems related to it, though it's good to be aware of what is using resources regardless. A probable reason for the drop in memory is that one of the DLLs hooks your main window's events and triggers a garbage collection on minimize.
If you're concerned about the perception of high memory usage, there are tricks you can play to massage the numbers that show up in Task Manager (like P/Invoking SetProcessWorkingSetSize), but that doesn't seem to be what you're really asking about.
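For reference, that trick is a one-line P/Invoke; a minimal sketch (the wrapper class name is just a placeholder):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class WorkingSetTrimmer
{
    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool SetProcessWorkingSetSize(
        IntPtr process, IntPtr minimumWorkingSetSize, IntPtr maximumWorkingSetSize);

    // Passing -1 for both sizes asks Windows to trim the working set, which is
    // roughly what happens automatically when the main window is minimized.
    public static void Trim()
    {
        SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle,
                                 (IntPtr)(-1), (IntPtr)(-1));
    }
}
```

Note that this only trims the working-set figure Task Manager shows; it doesn't reduce the memory the application has actually allocated.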
I am having trouble profiling my WPF application.
Here is the situation: every use case follows the same pattern: enter values -> click on "compute values" -> loading... -> display values.
During the "loading..." step there are two phases:
A pure mathematical phase, which is extremely optimized
A "WPF is drawing your controls" phase, which is... well... long.
What I want to do here is to profile the application to have a TreeView with: function, elapsed time, number of calls.
I usually use the Visual Studio profiler (mostly because my company doesn't want to pay for a good profiler; ask people to optimize performance, don't give them any decent profiler, let's just politely say it's a nice company policy).
The problem is that this profiler does not drill down into the WPF system functions (draw, MeasureOverride, measureLength, ...).
I used JetBrains' dotTrace for a while (the 10-day trial... meh), which is truly awesome, since it was able to really separate the phases even at the finest granularity (time spent colouring one cell in a DataGrid, time spent calculating one cell's width...).
ANTS doesn't seem to profile WPF (it just displays "Managed code"...).
So right now, the Visual Studio profiler stops at a function which defines an X axis for a Visiblox chart. It just tells me that WPF takes around 2.3 seconds to "Define XAxis", whereas those 2.3 seconds are actually the entire time spent drawing all my grids and graphs.
Do you by any chance know a profiler (or a setting in the VS profiler) which can do the magic?
Thanks a lot!
You can use the WPF Performance Suite from the Windows SDK.
Stuck record here.
You think you want a tree view with elapsed time, call count, etc., but what you need is to optimize the app by finding what you can fix to eliminate wall-clock time being spent unnecessarily.
Here's an example of what works best, in my experience.
It is very easy for something to take a large fraction of time without being localized to particular routines, particular lines of code, or particular paths in the call tree.
That method finds it no matter where it is.
What's more, it doesn't bother measuring the problem beyond the minimum needed to find it.
However, if you really feel the need to buy or install something, it doesn't help you there.
Note: This is not for unit testing or integration testing. This is for when the application is running.
I am working on a system which communicates to multiple back end systems, which can be grouped into three types
Relational database
SOAP or WCF service
File system (network share)
Due to the environment this will run in, there are no guarantees that any of those will be available at run time. In fact some of them seem pretty brittle and go down multiple times a day :(
The thinking is to have a small bit of test code which runs before the actual code. If there is a problem, persist the request and poll the target system until it is available. The tests could also be rerun at logical points within the code to check the system is still available. The ultimate goal is a very stable system, regardless of the stability (or lack thereof) of the systems it communicates with.
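To make the idea concrete, here is a rough sketch of what those pre-flight checks might look like for the three backend types (the connection string, URL and share path are placeholders, and a real WCF service would more likely expose its own ping operation instead of a raw HTTP request):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Net;

static class BackendProbe
{
    public static bool DatabaseIsUp(string connectionString)
    {
        try
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();      // fails fast if the database is unreachable
                return true;
            }
        }
        catch (SqlException) { return false; }
    }

    public static bool ServiceIsUp(string url)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "HEAD";    // cheap request against the service endpoint
            request.Timeout = 5000;
            using (request.GetResponse()) { return true; }
        }
        catch (WebException) { return false; }
    }

    public static bool ShareIsUp(string uncPath)
    {
        return Directory.Exists(uncPath);   // network share reachable and visible
    }
}
```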
My questions around this design are:
Are there major issues with it? (small things like the fact it may fail between the test completing and the code running are understandable)
Are there better ways to implement this sort of design?
Would using traditional exception handling and/or transactions be better?
Updates
The system needs to talk to the back end systems in a coordinated way.
The system is very async in nature so using things like queuing technologies is fine.
The system must run even if one or more backend systems are down as others may be up and processing of some information is possible.
You will be needing that traditional exception handling no matter what, since as you point out there's always the chance that things'll fail between your last check and the actual request. So I really think any solution you find should try to interact smoothly with this.
You don't say whether these flaky resources need to interact in some kind of coordinated manner; if they do, you should probably be using a transaction manager of some sort. I don't believe you want to get into the footwork of transaction management in application code for most needs.
I have also seen people use AOP to encapsulate retry logic for calls to back-end systems that fail intermittently (for instance due to time-outs). Used sparingly, this can be a decent solution.
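Even without the AOP machinery, the retry idea can live in a small helper; a sketch, with arbitrary attempt counts and delays:

```csharp
using System;
using System.Threading;

static class Retry
{
    // Run the given call, retrying a few times with a growing delay
    // before giving up and letting the exception propagate.
    public static T Run<T>(Func<T> call, int attempts = 3, int delayMs = 1000)
    {
        for (int i = 1; ; i++)
        {
            try
            {
                return call();
            }
            catch (Exception)
            {
                if (i >= attempts) throw;
                Thread.Sleep(delayMs * i);   // back off a little more each attempt
            }
        }
    }
}
```

Usage would be something like var orders = Retry.Run(() => client.GetOrders());, where GetOrders stands in for whatever flaky call you are wrapping.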
In some cases you can also use message queuing technology to alleviate unstable back-ends. You could for instance commit to a message queue as part of a transaction, and only pop off the queue when successful. But this design is normally only possible when you're able to live with an asynchronous process.
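As an illustration of that queued approach, with MSMQ (just one option) persisting a request inside a transaction looks roughly like this; the queue path is a placeholder and must point at a transactional queue:

```csharp
using System.Messaging;   // requires a reference to System.Messaging.dll

static class RequestStore
{
    // Sketch: commit the request to a local transactional queue; a separate
    // worker pops messages off and retries the flaky backend until it succeeds.
    public static void Persist(string payload)
    {
        using (var queue = new MessageQueue(@".\private$\pendingRequests"))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send(payload, tx);
            tx.Commit();
        }
    }
}
```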
And as always, real stability can only be achieved by attacking the root cause of the problem. I had a 25-year-old bug in a mainframe TCP/IP stack fixed because we were overrunning it, so it is possible.
The Microsoft Smart Client framework provides a ConnectionMonitor class. It should be easy to use or duplicate.
Our approach to this kind of issue was to run a really basic 'sanity tester' before bringing up our main application. This was a thick client, so we could run the test every time the app started. The sanity test would go out and check things like database availability and external network (extranet) access, and it could have been extended to cover web services as well.
If there was a failure, the user was informed and, crucially, an email was also sent to the support/dev team. These emails soon became unwieldy because so many were being generated, but we then set up filters so we knew when something really bad was happening. Overall the approach worked pretty well; our biggest win was being able to tell users that the system was down before they had entered data and got part way through a long-winded process. They absolutely loved it.
At a technical level the sanity tester was written in C#; it used exception handling in a conventional way to detect the problems it was looking for. It became a mini app in its own right, standalone from the main app. If I were doing it again I'd use a logging framework to capture issues, which is more flexible than our hard-coded approach.