I'm going through the pain right now of finding memory leaks in my application using WinDbg. Luckily, I've found a few articles that give a good step-by-step process for doing it. Still, it is a fairly painful process. Does VS2010 have any built-in features that can ease the burden of finding a memory leak in a Silverlight application? Of course, a memory leak in .NET sounds a bit like a misnomer; what I intend to do is find all objects that are still referencing an object that I believe should be garbage collected.
For those that may be interested, here are some good articles on how to get started using WinDbg to find memory leaks in Silverlight:
Finding Memory Leaks In Silverlight With WinDbg
Hunting down memory leaks in Silverlight
Where's your leak at? (Using WinDbg, SOS, and GCRoot to diagnose a .NET memory leak)
A memory leak in .NET applications isn't a misnomer at all. I've had this problem in applications I've worked on, both WinForms and WebForms.
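The most common version in WinForms is an event subscription that never gets removed: the long-lived publisher holds a delegate to every subscriber, so the GC can never reclaim them. Here's a minimal sketch (the type names are made up for illustration):

```csharp
using System;

class Publisher
{
    // A long-lived object (often static or application-scoped).
    public event EventHandler SomethingHappened;
    public void Raise() => SomethingHappened?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    public Subscriber(Publisher publisher)
    {
        // The subscription stores a delegate to this instance inside the
        // publisher; until it is removed, the publisher keeps us reachable.
        publisher.SomethingHappened += OnSomethingHappened;
    }

    void OnSomethingHappened(object sender, EventArgs e) { /* ... */ }
}

class Program
{
    static void Main()
    {
        var publisher = new Publisher();       // lives for the whole app
        for (int i = 0; i < 1000; i++)
            new Subscriber(publisher);         // never unsubscribed

        GC.Collect();                          // reclaims none of the 1000:
                                               // the publisher's delegate list
                                               // still references each one
    }
}
```

Unsubscribing with -= (or using a weak-event pattern) breaks the chain, and this kind of reference chain is exactly what !gcroot in WinDbg, or a profiler's retention graph, will show you.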
WinDbg + SOS.dll is painful compared to ANTS Profiler. Normally I wouldn't tout a product, but if you're working for a company, buying it will save a lot of money: it spares you the time spent hunting memory leaks, and developer time is almost always more expensive than purchasing an application.
I don't think VS 2010 has this built in. I remember there being dedicated memory profilers for finding leaks, such as .NET Memory Profiler (http://memprofiler.com/), and there are other third-party tools as well.
HTH.
I was wondering if there is a way to visualize (inside Visual Studio) how much heap memory I'm using during the execution of my program.
Thanks
Use the Memory Usage Diagnostic Tool in Visual Studio: https://learn.microsoft.com/en-us/visualstudio/profiling/memory-usage?view=vs-2019
The Memory Usage tool lets you take one or more snapshots of the managed and native memory heap to help understand the memory usage impact of object types. You can collect snapshots of .NET, native, or mixed mode (.NET and native) apps.
Here's a great blog article on its usage as well: https://devblogs.microsoft.com/visualstudio/analyze-cpu-memory-while-debugging/
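If you also want a rough programmatic read-out at specific points during execution, to correlate with what the tool shows, GC.GetTotalMemory is the simplest hook. A small sketch; the HeapLog helper and its labels are just illustrative:

```csharp
using System;
using System.Diagnostics;

static class HeapLog
{
    // Logs the approximate managed heap size, before and after forcing
    // a full collection, plus the process working set for comparison.
    public static void Snapshot(string label)
    {
        long heapDirty = GC.GetTotalMemory(forceFullCollection: false);
        long heapClean = GC.GetTotalMemory(forceFullCollection: true);
        long workingSet = Process.GetCurrentProcess().WorkingSet64;
        Console.WriteLine(
            $"{label}: heap={heapDirty / 1024} KB (after GC: {heapClean / 1024} KB), " +
            $"working set={workingSet / 1024} KB");
    }
}
```

Call HeapLog.Snapshot("after load") and similar at the points you care about; the Memory Usage tool remains the better option when you need per-type breakdowns and snapshot diffs.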
We're currently adding some automation testing to our UI framework, and I was wondering if there is any way to perform some kind of memory profiling at the same time.
e.g. rather than having White (our UI automation framework) start the application directly, have it start dotMemory (or another memory diagnostics tool), take a snapshot, and then begin performing the automation tests.
I know this wouldn't track down memory leaks as such, but we could use it as an indicator that memory is spiking somewhere.
If anyone knows of a way to kick this off it would be very helpful, even if we had to use Visual Studio's built-in memory profiler rather than dotMemory.
Not a perfect solution, but I managed to find this while googling around:
https://www.jetbrains.com/help/dotmemory-unit/2.3/Introduction.html
Still getting to grips with making it work, but it seems promising.
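For anyone who finds this later, the piece that makes dotMemory Unit fit this scenario is that the memory assertions live inside an ordinary unit test, and the test body can drive the UI automation itself. A rough sketch using NUnit, where MainWindowViewModel and the scenario method are placeholders for whatever your framework does:

```csharp
using JetBrains.dotMemoryUnit;
using NUnit.Framework;

// Stub standing in for a real type from the application under test.
public class MainWindowViewModel { }

[TestFixture]
public class MemoryTests
{
    [Test]
    [DotMemoryUnit(FailIfRunWithoutSupport = false)]
    public void ViewModels_AreReleased_AfterScenario()
    {
        RunUiScenario(); // placeholder: launch the app, drive the UI, close it

        // Take a snapshot and assert that no instances of the type survived.
        dotMemory.Check(memory =>
            Assert.That(
                memory.GetObjects(where => where.Type.Is<MainWindowViewModel>()).ObjectsCount,
                Is.EqualTo(0)));
    }

    private static void RunUiScenario()
    {
        // White (or whichever automation framework) calls go here.
    }
}
```

Note that the test has to be launched with dotMemory Unit support (the standalone dotMemoryUnit runner wrapping your usual test runner, or ReSharper/Rider); with FailIfRunWithoutSupport = false the check is simply skipped in a plain run.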
We have a C application which was not written by me. We need to measure its performance in terms of CPU and memory usage.
I have never done performance testing before, so I am not aware of tools which can be used to get CPU and memory consumption details.
I tried searching SO and Google, but I am not sure what to use or how to go about it.
It would be of great help if I could get some guidance here.
EDIT:
I am not looking for profilers, which as I understand it tell you about the performance of code blocks. I just want to monitor the resources consumed by the application. We are not going to improve the code; this is just for comparison with other products.
It's the kind of thing Task Manager shows in Windows for each process. Just that is what I want.
I found a few tools like nmon, munin, collectd, and collectl, but I am still confused about how to use them. I am trying to understand them, but any help is appreciated.
Thanks
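If Task-Manager-style numbers are all you need on Windows, a small poller is enough. Here's a sketch in C# using System.Diagnostics.Process; the process name "myapp" is a placeholder for whatever your application's executable is called:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ResourceMonitor
{
    static void Main(string[] args)
    {
        // Process to watch; "myapp" is a placeholder name.
        var target = args.Length > 0 ? args[0] : "myapp";
        var procs = Process.GetProcessesByName(target);
        if (procs.Length == 0) { Console.WriteLine("Process not found."); return; }

        var proc = procs[0];
        var lastCpu = proc.TotalProcessorTime;
        var lastTime = DateTime.UtcNow;

        while (!proc.HasExited)
        {
            Thread.Sleep(1000);
            proc.Refresh(); // re-read the OS counters for this process

            var now = DateTime.UtcNow;
            var cpu = proc.TotalProcessorTime;
            // CPU% = processor time consumed / wall time elapsed, per core
            double cpuPercent = (cpu - lastCpu).TotalMilliseconds
                                / (now - lastTime).TotalMilliseconds
                                / Environment.ProcessorCount * 100;

            Console.WriteLine(
                $"CPU: {cpuPercent,5:F1}%  working set: {proc.WorkingSet64 / (1024 * 1024)} MB  " +
                $"private: {proc.PrivateMemorySize64 / (1024 * 1024)} MB");

            lastCpu = cpu;
            lastTime = now;
        }
    }
}
```

On Linux the same per-process numbers come from /proc/<pid>/status and /proc/<pid>/stat, which is where tools like collectl and nmon read them.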
What are some good resources for looking at the pros/cons of different ways of implementing heap allocators? Resources touching on efficiency (fragmentation, throughput, etc.) are preferred. I am NOT looking for simple code repositories.
edit:
I'm not really interested in the philosophical grounding of this wiki. As such, I don't really want to get into 'why' I'm interested in this. Regardless of the underlying intentions/problems/etc, this information exists, so if you know of any good resources, please link to them here!
This is a very old problem, and to get a comprehensive view you will have to dig through the research literature. (I'm not aware of a good textbook treatment.)
A few places to start:
Doug Lea's description of his memory allocator
The Art of Computer Programming, Volume 1 by Don Knuth
Quick fit: an efficient algorithm for heap storage allocation by Weinstock and Wulf
This is one topic worth spending a day in the library for. Yes, a big building full of paper; the problem is that old.
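As a taste of what's in the Weinstock and Wulf paper: the core of quick fit is to keep a separate free list for each popular block size, trading some fragmentation (freed blocks are not coalesced right away) for near-constant-time allocation. The sketch below is my own toy illustration of that idea, not the paper's algorithm, and it hands out abstract offsets rather than real memory:

```csharp
using System.Collections.Generic;

// Toy sketch of the "quick fit" idea: one free list per popular block size,
// so most allocations and frees are O(1) with no searching or splitting.
// Coalescing and the general-purpose fallback allocator are omitted.
class QuickFitSketch
{
    private const int MaxQuickSize = 256; // larger requests would go to a general allocator
    private readonly Dictionary<int, Stack<int>> _freeLists = new Dictionary<int, Stack<int>>();
    private int _tail; // next unused offset in the arena ("tail" allocation)

    public int Allocate(int size)
    {
        // Exact-size hit on a quick list: constant time, no splitting.
        if (size <= MaxQuickSize && _freeLists.TryGetValue(size, out var list) && list.Count > 0)
            return list.Pop();

        // Miss: carve a fresh block off the arena tail.
        int offset = _tail;
        _tail += size;
        return offset;
    }

    public void Free(int offset, int size)
    {
        if (size > MaxQuickSize) return; // a real allocator would return this to the general pool

        // Deferred coalescing: just park the block for reuse at the same size.
        if (!_freeLists.TryGetValue(size, out var list))
            _freeLists[size] = list = new Stack<int>();
        list.Push(offset);
    }
}
```

The trade-off the literature keeps coming back to is visible right here: the quick lists make the common case fast, but blocks parked on them cannot be coalesced with their neighbors, which is one source of fragmentation.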
I'm writing a database-style thing in C (i.e. it will store and operate on about 500,000 records). I'm going to be running it in a memory-constrained environment (VPS) so I don't want memory usage to balloon. I'm not going to be handling huge amounts of data - perhaps up to 200MB in total, but I want the memory footprint to remain in the region of 30MB (pulling these numbers out of the air).
My instinct is to do my own page handling (real databases do this), but I have received advice saying that I should just allocate it all and let the OS do the VM paging for me. My numbers will never rise above this order of magnitude. Which is the better choice in this case?
Assuming the second choice, at what point would it be sensible for a program to do its own paging? Obviously RDBMSes that can handle gigabytes must do this, but there must be a point along the scale at which the question is worth asking.
Thanks!
Use malloc until it's running. Then, and only then, start profiling. If you run into the same performance issues as the proprietary and mainstream "real databases", you will naturally begin to perform cache/page/alignment optimizations. These things can easily be slotted in after you have a working database, and they are orthogonal to getting it working in the first place.
The database management systems that perform their own paging also benefit from the investment of huge research efforts to make sure their paging algorithms function well under varying system and load conditions. Unless you have a similar set of resources at your disposal I'd recommend against taking that approach.
The OS paging system you have at your disposal has already benefited from the tuning efforts of many people.
There are, however, some things you can do to tune your OS for database-style access (large sequential I/O operations) as opposed to the typical desktop tuning (a mix of sequential and random I/O).
In short, if you are a one man team or a small team, you probably should make use of existing tools rather than trying to roll your own in that particular area.