I have a double free that I'm trying to hunt down. It was introduced in an edge case long enough ago that I can't easily bisect to find the offending change, so the next best approach is to debug it. I tried to find documentation indicating whether Valgrind's gdbserver can be configured to break on a violation. That would let me understand the context of the second free (hopefully the invalid free is the second one).
Valgrind activates its embedded gdbserver by default, which allows GDB to connect to it at any moment.
If you want the Valgrind gdbserver to stop and wait for a connection from GDB when an error is detected, use the option --vgdb-error=<number>.
With --vgdb-error=1, Valgrind will stop at the first error detected and at every subsequent error.
See http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.gdbserver for more details
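As a concrete illustration, here is a minimal sketch of the whole workflow; the file and program names are invented for the example. First, a trivially reproducible double free:

/* double_free.c -- stand-in for the real edge case.
   Build with: gcc -g -O0 double_free.c -o double_free */
#include <stdlib.h>

int main(void)
{
    char *p = malloc(16);
    free(p);
    free(p);   /* the invalid (second) free that Memcheck reports */
    return 0;
}

Run valgrind --vgdb=yes --vgdb-error=1 ./double_free; Valgrind halts at the first error and prints the exact command to paste into GDB. In another terminal, start gdb ./double_free, issue target remote | vgdb, then bt - the backtrace shows the context of the second free, which is what you wanted to inspect.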
I have a C CLI program that crashes and generates this error in Windows 7:
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
First, I read somewhere that it could be caused by assert statements triggering, so as a first measure I replaced them with if statements to catch and log any potential failed asserts. Second, I sprayed the code with printf statements to see where the program exits. Third, I made especially sure that the code doesn't exit anywhere without first logging the exit. The program is threaded, so there are quite a few things going on, but nothing too complex.
Now the problem is that the second time I got the error, it showed that the program exited outside of my printf statements, so I can't tell where it exited.
So two questions:
I suspect I would need to use a proper debugger to see more details regarding the exit; if so, which one?
Are there any other gotchas regarding this sort of error besides the assert statements? I find quite a few C++ blog entries regarding this error, but not too many C ones.
I am using Visual C++ 2008 Express Edition. Also, I am invoking the program in CMD.exe.
First of all, you removed calls to assert, which are typically meant to help track down cases where the assumptions the programmer makes don't hold? Really? Uhm...
Second of all, are you familiar with the debugger at all? Visual C++ should include an integrated debugger that, when your program runs in debug mode, can not only show you where your process exits from, but also show you exactly where your program crashes, how it got to that point, and what the values of variables were at the time of the crash. Imagine that!
This article mostly talks about C# but the principles are the same.
The message you are getting is from the VC runtime. It happens when an exception is thrown and not caught anywhere.
Compile your program with a debug configuration (that should be the default) and run it in the debugger; when you hit an unhandled exception, the debugger will break. Under the "Debug" menu you will find an "Exceptions" item, which will help you fine-tune how the debugger responds to exceptions.
Note that in the context of C++ and Windows, 'exception' can mean one of several things: there are Win32 exceptions, C++ exceptions, and Windows Structured Exception Handling (SEH).
Assertions are for failure conditions that you never expect to happen but can't prove can't happen. You should always be surprised by an assertion firing. Many (most? all?) implementations of assert() are compiled in only for debug configurations, not release configurations.
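To make that concrete, here is a minimal sketch showing why removing the asserts was counterproductive: the standard assert() macro compiles away entirely when NDEBUG is defined, which is what release configurations typically do.

/* Build with: gcc -g demo.c          -> assert is active, aborts on failure
   Build with: gcc -DNDEBUG demo.c    -> assert expands to nothing */
#include <assert.h>
#include <stdlib.h>

int main(void)
{
    char *p = malloc(32);
    assert(p != NULL);   /* documents and enforces the assumption in debug builds */
    free(p);
    return 0;
}

So in a debug build a failed assertion stops you right at the broken assumption, instead of letting the program stagger on and crash somewhere unrelated.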
I'm new to embedded programming but I have to debug a quite complex application running on an embedded platform. I use GDB through a JTAG interface.
My program crashes at some point in an unexpected way; I suppose this happens due to some memory-related issue. Does GDB allow me to inspect the memory after the system has crashed and become completely unresponsive?
It depends on your setup a bit. In particular, since you're using JTAG, you may be able to set your debugger up to halt the processor when it detects an exception (for example, illegally accessing protected memory and so forth). If not, you can replace your exception handlers with infinite loops, as sketched below. Then you can manually unwind the exception to see what the processor was doing that caused the crash. Normally, you'll still have access to memory in that situation, and you can either use GDB to look around directly or just dump everything to a file so you can look around later.
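As an example of the infinite-loop trick, here is a sketch assuming an ARM Cortex-M style target where the hard-fault vector is named HardFault_Handler (adjust the name to your platform's vector table):

/* Park the CPU in the fault handler so the JTAG debugger can halt it
   and inspect registers, the stack, and memory at the point of the fault. */
void HardFault_Handler(void)
{
    for (;;) {
        /* spin forever; attach GDB over JTAG and examine state here */
    }
}

Once the core is stuck in this loop, halt it from GDB and inspect the stacked exception frame (or simply walk up the stack) to find the faulting instruction.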
It depends on what has crashed. If the system is only unresponsive (in some infinite loop, deadlock or similar), then it will normally respond to GDB and you will be able to see a backtrace (call stack), etc.
If the system/bus/CPU has actually crashed (at a lower level), then it probably will not respond. In this case you can try setting breakpoints at suspicious places/variables and observing what happens. A simulator (ISS or RTL, if applicable) could also come in handy to compare behavior with the hardware.
I got this problem on different C projects when using gdb.
If I run my program without it, it crashes consistently at a given event, probably because of an invalid read of memory. I try debugging it with gdb, but when I do so, the crash never seems to occur!
Any idea why this could happen?
I'm using mingw toolchain on Windows.
Yes, it sounds like a race condition, heap corruption, or something else that is typically responsible for Heisenbugs. The problem is that your code is likely incorrect somewhere, but the debugger has to keep working even when the debugged application does funny things, so problems tend to disappear under the debugger. And race conditions often won't appear in the first place, because some debuggers can only handle one thread at a time, and all debuggers uniformly make the code run slower, which may already make race conditions go away.
Try Valgrind on the application. Since you are using MinGW, chances are that your application will compile in an environment where Valgrind can run (even though it doesn't run directly on Windows). I've been using Valgrind for about three years now and it has solved a lot of mysteries quickly. Whenever I get a crash report for the code I work on (which runs on AIX, Solaris, the BSDs, Linux, and Windows), the first thing I do is a test run of the code under Valgrind on x64 and x86 Linux.
Valgrind, and in your particular case its default tool Memcheck, emulates your code as it runs. Whenever you allocate memory, it marks all bytes of that memory as "tainted" (uninitialized) until you actually initialize them explicitly. The tainted status of memory bytes is inherited when you memcpy uninitialized memory, and it leads to a report from Valgrind as soon as an uninitialized byte is used to make a decision (if, for, while, ...). Memcheck also keeps track of orphaned memory blocks and reports leaks at the end of the run. And that's not all: more tools are part of the Valgrind family and test various aspects of your code, including race conditions between threads (Helgrind, DRD).
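Here is a minimal sketch of the kind of code Memcheck flags:

#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);   /* Memcheck marks *p as uninitialized */
    int positive = (*p > 0);      /* reported: "Conditional jump or move
                                     depends on uninitialised value(s)" */
    free(p);
    return positive;
}

Compile with -g and run it under Valgrind; the report points at the exact line where the uninitialized value influenced a branch.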
Assuming Linux now: make sure that you have all the debug symbols of your supporting libraries installed. Usually those come in the *-debug version of packages or in *-devel. Also, make sure to turn off optimization in your code and include debug symbols. For GCC that's -ggdb -g3 -O0.
Another hint: I've had it that pointer aliasing caused some grief. Although Valgrind was able to help me track it down, I actually had to take the last step and verify the generated code in its disassembly. It turned out that at -O3 the GCC optimizer got ahead of itself and turned a loop copying bytes into a sequence of instructions copying 8 bytes at once, but it assumed alignment, and that assumption was wrong. Ever since, we've resorted to building at -O2 - which, as you will see in this Gentoo Wiki article, is not the worst idea. To quote the relevant part:
-O3: This is the highest level of optimization possible, and also the riskiest. It will take a longer time to compile your code with this option, and in fact it should not be used system-wide with gcc 4.x. The behavior of gcc has changed significantly since version 3.x. In 3.x, -O3 has been shown to lead to marginally faster execution times over -O2, but this is no longer the case with gcc 4.x. Compiling all your packages with -O3 will result in larger binaries that require more memory, and will significantly increase the odds of compilation failure or unexpected program behavior (including errors). The downsides outweigh the benefits; remember the principle of diminishing returns. Using -O3 is not recommended for gcc 4.x.
Since you are using GCC in MinGW, I reckon this could well apply to your case as well.
Any idea why this could happen?
There are several usual reasons:
Your application has multiple threads, has a race condition, and running under GDB affects timing in such a way that the crash no longer happens
Your application has a bug that is affected by memory layout (often reading of uninitialized memory), and the layout changes when running under GDB.
One way to approach this is to let the application trap whatever unhandled exception it is being killed by, print a message, and spin forever. Once in that state, you should be able to attach GDB to the process, and debug from there.
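Here is a minimal sketch of that trap-and-spin approach, assuming a POSIX-style environment (MinGW's runtime also lets you install a SIGSEGV handler via signal(), though the details differ):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void trap_and_spin(int sig)
{
    /* fprintf is not async-signal-safe, but as a debugging aid it's acceptable */
    fprintf(stderr, "fatal signal %d in pid %d; attach with: gdb -p %d\n",
            sig, (int)getpid(), (int)getpid());
    for (;;)
        pause();   /* park the process until a debugger attaches */
}

int main(void)
{
    signal(SIGSEGV, trap_and_spin);
    /* ... rest of the application ... */
    return 0;
}

Once the message appears, attach with gdb -p <pid> and issue bt; the faulting frame sits a few frames below the signal handler.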
Although it's a bit late, one can read this question's answer to set up the system to catch a core dump without using gdb. The core file can then be loaded using
gdb <path_to_core_file> <path_to_executable_file>
and then issue
thread apply all bt
in gdb.
This will show stack traces for all threads that were running when the application crashed, and one may be able to locate the last function and the corresponding thread that caused the illegal access.
Your application is probably receiving signals, and gdb might not pass them on, depending on its configuration. You can check this with the info signals or info handle command. It might also help to post a stack trace of the crashed process. The crashed process should generate a core file (if that hasn't been disabled), which can be analyzed with gdb.
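For instance, from within a gdb session you can inspect and adjust signal handling like this (SIGSEGV is just an example):

(gdb) info signals
(gdb) handle SIGSEGV stop print pass

The handle line tells gdb to stop and announce the signal, and still pass it on to the program afterwards.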
I developed a command-line (non GUI) C program on Linux using QT Creator, which internally uses gdb as its debugger. When I debugged the program on Windows using Visual Studio, it reported that it was writing outside the bounds of allocated memory (although it did not report the violation at the exact time it occurred, so it was still hard to track down). I eventually managed to find a place in the code where a malloc call was allocating too little memory and that solved the problem.
However, it bothers me that this problem was never detected on the Linux side. Are there any switches or something that would enable this detection feature on Linux?
There are many in-code memory validators that work both for Windows and Linux. Check Wikipedia for their list. However, most Linux users use Valgrind as the ultimate tool for memory debugging.
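For example, Memcheck catches exactly the kind of under-allocation described above, and it reports the violation at the moment it happens. A hypothetical reduction of the bug:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *src = "hello";
    char *dst = malloc(strlen(src));   /* bug: no room for the terminating NUL */
    strcpy(dst, src);                  /* writes one byte past the block */
    free(dst);
    return 0;
}

Compile with gcc -g and run the binary under valgrind; Memcheck prints an "Invalid write of size 1" pointing at the strcpy line.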
Hi, I have recently joined a Linux project written in C.
The app has several processes that share a block of shared memory. After the app runs for several hours, a process collapses without leaving any footprints, so it's very difficult to know what the problem was or where to start reviewing the code.
It could be a memory overrun or a misused pointer, but I don't know exactly.
Do you have any tools or methods to detect the problem?
It will be much appreciated if this gets resolved. Thanks for your advice.
Before you start the program, enable core dumps:
ulimit -c unlimited
(and make sure the working directory of the process is writeable by the process)
After the process crashes, it should leave behind a core file, which you can then examine with gdb:
gdb /some/bin/executable core
Alternatively, you can run the process under gdb when you start it - gdb will wake up when the process crashes.
You could also run gdb with gdb-many-windows if you are using emacs, which gives you better debugging options and lets you examine things like the stack, etc. This is much like the Visual Studio IDE.
Here is a useful link
http://emacs-fu.blogspot.com/2009/02/fancy-debugging-with-gdb.html
Valgrind is where you need to go next. Chances are that you have a memory misuse problem which is benign -- until it isn't. Run the program under Valgrind and see what it says.
I agree with bmargulies -- Valgrind is absolutely the best tool out there to automatically detect incorrect memory usage. Almost all Linux distributions should have it, so just emerge valgrind or apt-get install valgrind or whatever your distro uses.
However, Valgrind is hardly the least cryptic thing in existence, and it usually only helps you tell where the program eventually ended up accessing memory incorrectly -- if you stored an incorrect array index in a variable and then accessed it later, then you will still have to figure that out. Especially when paired with a powerful debugger like GDB, however (the backtrace or bt command is your friend), Valgrind is an incredibly useful tool.
Just remember to compile with the -g flag (if you are using GCC, at least), or Valgrind and GDB will not be able to tell you where in the source the memory abuse occurred.
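In practice that means building and running like this (program name invented for the example):

gcc -g -O0 -o myprog myprog.c
valgrind --leak-check=full ./myprog

With -g in place, both Valgrind's report and GDB's bt output resolve to file names and line numbers instead of bare addresses.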