I have an executable I'm running on a remote linux machine.
When I run the executable normally (./execute arg_one), the program crashes in the middle of a while loop with "Error in `./execute': malloc(): memory corruption (fast)".
However, when I run the program under the simplest valgrind (valgrind ./execute arg_one), the program doesn't crash, runs all the way through main, and actually produces the correct output.
Why would this be the case??
Sometimes your program crashes when run natively, but because valgrind executes it in its own, much slower environment, it may run all the way to the end and even produce the correct output. That doesn't mean your program is correct. You should check the errors and contexts valgrind reports and fix them if you want your program to work reliably.
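For example, a fairly standard Memcheck invocation (the flags are documented Valgrind options; ./execute arg_one is the program from the question):
$ valgrind --leak-check=full --track-origins=yes ./execute arg_one
--track-origins=yes makes Memcheck report where an uninitialised value originally came from, which is often the quickest route to the actual bug.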
Related
I wrote an infinite loop like the following
for (size_t i = words - 1; i >= 0; --i) {
    ... data[i] ...
}
and it would also access out-of-bounds memory. The executable crashes with a core dump. Using gdb with the core dump shows me it crashes when i is a huge number.
However, lldb can successfully run the same executable without any crash... Did LLDB 'interpret' the code and fix the issue for me?
No, that would be a horrible thing for a debugger to do! Debuggers in general, and lldb in particular, try hard to run your program as closely as possible to how it runs normally. There are a couple of intrusive jobs the debugger has to do when running "flat out" - e.g. it has to pause when there are shared library loads to read in the new libraries, and the debugger gets first crack at signals sent to the program. So particularly in multi-threaded programs the debugger might perturb timings. But it should never change the instruction flow of code in your program.
If you can make up a test case showing a crash when run in command line but not in lldb, please file a bug with the lldb bug tracker:
https://bugs.llvm.org
and include the example.
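As an aside, the loop in the question is the classic unsigned-underflow bug, not anything lldb did: size_t is unsigned, so i >= 0 is always true, and --i at i == 0 wraps around to SIZE_MAX, which then indexes out of bounds. One conventional fix, as a sketch keeping the question's names:
for (size_t i = words; i-- > 0; ) {
    /* ... data[i] ...  i runs from words-1 down to 0, no wraparound */
}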
How can I debug a C application that does not crash when attached with gdb and run inside of gdb?
It crashes consistently when run standalone - even the same debug build!
A few of us are getting this error with a C program written for BSD/Linux, and we are compiling on macOS with OpenSSL.
app(37457,0x7000017c7000) malloc: *** mach_vm_map(size=13835058055282167808) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
ERROR: malloc(buf->length + 1) failed!
I know, not helpful.
Recompiling the application with -g -rdynamic gives the same error. OK, so now we know it isn't a release-build issue, as it continues to fail.
It works when running within a gdb debugging session though!!
$ sudo gdb app
(gdb) b malloc_error_break
Function "malloc_error_break" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
Breakpoint 1 (malloc_error_break) pending.
(gdb) run -threads 8
Starting program: ~/code/app/app -threads 8
[New Thread 0x1903 of process 45436]
warning: unhandled dyld version (15)
And it runs for hours. Ctrl-C out of gdb, run ./app -threads 8 standalone, and it crashes after a second or two (a few million iterations).
Obviously there's an issue within one of the threads. But those workers for the threads are pretty big (a few hundred lines of code). Nothing stands out.
Note that the threads iterate through their loops about 20 million times per second.
macOS 10.12.3
Homebrew w/GNU gcc and openssl (linking to crypto)
P.S. I'm not too familiar with C, especially any type of debugging. Be kind and expressive/verbose in answers. :)
One debugging technique that is sometimes overlooked is to include debug prints in the code. Of course it has its disadvantages, but it also has advantages. A thing you must keep in mind, though, in the face of abnormal termination is to make sure the printouts actually get printed. Often it's enough to print to stderr (but if that doesn't do the trick, you may need to fflush the stream explicitly).
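As a sketch of what that can look like (the TRACE name is made up for illustration; the ##__VA_ARGS__ form is a GNU/Clang extension):
#include <stdio.h>

/* Print to stderr and flush immediately, so the output survives
   an abnormal termination before stdio buffers are drained. */
#define TRACE(fmt, ...) \
    do { \
        fprintf(stderr, "%s:%d: " fmt "\n", __FILE__, __LINE__, ##__VA_ARGS__); \
        fflush(stderr); \
    } while (0)

int main(void)
{
    TRACE("about to enter the suspect loop");
    return 0;
}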
Another trick is to stop the program before the error occurs. This requires you to know when the program is about to crash, preferably as close as possible. You do this by using raise:
raise(SIGSTOP);
This does not terminate the program, it just suspends execution. Now you can attach with gdb using the command gdb <program-name> <pid> (use ps to find the pid of the process). Now in gdb you have to tell it to ignore SIGSTOP:
> handle SIGSTOP ignore
Then you can set break-points. You can also step out of the raise function using the finish command (may have to be issued multiple times to return to your code).
This technique gives the program normal behaviour up to the point where you decide to stop it; hopefully the final part, running under gdb, won't alter the behaviour enough to hide the bug.
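Put together, the stop-and-attach pattern might look like this (a sketch; the fprintf is only there so you know when to attach):
#include <signal.h>
#include <stdio.h>

int main(void)
{
    /* ... runs normally up to just before the suspect code ... */
    fprintf(stderr, "stopping for attach; find the pid with ps\n");
    fflush(stderr);
    raise(SIGSTOP);  /* suspends here; attach with: gdb <program-name> <pid> */
    /* ... the suspect code executes once gdb continues the process ... */
    return 0;
}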
A third option is to use valgrind. Normally when you see these kinds of errors, there are problems involved that valgrind will pick up: out-of-range accesses and uses of uninitialised variables.
Many memory managers initialise memory to a known bad value to expose problems like this (e.g. Microsoft's debug CRT uses a range of values: 0xCD means uninitialised, 0xDD means already freed, etc.).
After each use of malloc, try memset'ing the memory to 0xCD (or some other constant value). This will let you identify uninitialised memory more easily with the debugger. Don't use 0x00, as this is a 'normal' value and will be harder to spot if it's wrong (it will also probably 'fix' your problem).
Something like:
void *memory = malloc(sizeof(my_object));
memset(memory, 0xCD, sizeof(my_object));
If you know the size of the blocks, you could do something similar before free (this is sometimes harder unless you know the size of your objects, or track it in some way):
memset(memory, 0xDD, sizeof(my_object));
free(memory);
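If you want this behaviour applied consistently, one approach is to route allocations through small poisoning wrappers (a sketch; debug_malloc and debug_free are hypothetical names):
#include <stdlib.h>
#include <string.h>

/* Mimic the debug-CRT convention described above: 0xCD marks freshly
   allocated (uninitialised) memory, 0xDD marks memory about to be freed. */
static void *debug_malloc(size_t size)
{
    void *p = malloc(size);
    if (p)
        memset(p, 0xCD, size);  /* "uninitialised" marker */
    return p;
}

static void debug_free(void *p, size_t size)
{
    if (p)
        memset(p, 0xDD, size);  /* "already freed" marker */
    free(p);
}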
I am using valgrind to find memory leaks in my program, but it is taking a very long time to run. When I run the program without valgrind it takes seconds. What is the problem, and what should I look for in the code?
There is no problem as far as I can see, unless you can verify an infinite loop or some other run-time error. Valgrind basically acts like a virtual machine or virtual execution environment, running the program while watching all variables, memory allocations, etc., and therefore will run quite a bit slower than native code (an order of magnitude or more is common). You'd get the same effect if you ran your program inside a debugger like gdb and set it to watch every writable memory location.
Updated:
now with valgrind --tool=memcheck --track-origins=yes --leak-check=full ./prog it runs correctly, but without valgrind it still goes wrong - how does that happen?
I'm doing a project on Linux, which stores lots of data in memory, and I need to know which data block is changed in order to find out the problem in my program.
Updated: This is a multithreaded program, and the writes and reads are done by different threads created via system calls.
The code is like this
for (j = 0; j < save_size; j++) {
    e->blkmap_mem[blk_offset + save_offset + j] = get_mfs_hash_block();
    memcpy(e->blkmap_mem[blk_offset + save_offset + j]->data, (char *)buff + j * 4096, 4096);
    e->blkmap_mem[save_offset + j]->data = (char *)(buff + j * 4096);
    e->blkmap_mem[blk_offset + save_offset + j]->size = 4096;
    e->blkmap_addr[blk_offset + save_offset + j] = 1;
}
And I want to know if e->blkmap_mem[blk_offset+save_offset+j]->data is changed in somewhere else.
I know awatch exp in gdb could check if the value changes, but there are too many here, is there some way to trace them all, I mean they may be nearly 6,000.
Thanks, guys.
Reverse debugging has a great use case here, assuming you have some way to detect the corruption once it's happened (a seg fault will do fine).
Once you've detected the corruption in a debugging session, you put a watch point on the corrupted variable, and then run the program backwards until the variable was written to.
Here's a step-by-step guide:
1. Compile the program with debugging symbols as usual and load it into gdb.
2. Start the program using start. This puts a breakpoint at the very beginning of main and runs the program until it hits it.
3. Now put a breakpoint somewhere the memory corruption is detected. (You don't need to do this if you're detecting the corruption with a seg fault.)
4. Type record to start recording program execution. This is why we called start first - you can't record when there's no process running.
5. continue to set the program running again. While recording, the program will run very slowly. It may tell you the record buffer is full - if this happens, tell it to wrap around.
6. When your corruption is detected by your breakpoint or the seg fault, the program will stop. Now put a watch on whatever the corrupted variable is.
7. reverse-continue to run the program backwards until the corrupted variable is written to.
8. When the watchpoint hits, you've found your corruption.
Note that it's not always the first or only corruption of that variable. But you can always keep running backwards until you run out of reverse execution history - and now you've got something to fix.
There's a useful tutorial here, which also discusses how to control the size of the record buffer, in case that becomes an issue for you.
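To make the flow concrete, a session might look roughly like this (a sketch; ./prog and corrupted_var are stand-ins for your program and variable):
$ gdb ./prog
(gdb) start
(gdb) record
(gdb) continue
Program received signal SIGSEGV, Segmentation fault.
(gdb) watch -l corrupted_var
(gdb) reverse-continue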
I have a program which produces a fatal error with a testcase, and I can locate the problem by reading the log and the stack trace of the fatal error - it turns out that there is a read operation through a null pointer.
But when I try to attach gdb to it and set a breakpoint around the suspicious code, the null pointer just cannot be observed! The program works smoothly without any error.
This is a single-process, single-thread program, I didn't experience this kind of thing before. Can anyone give me some comments? Thanks.
Appended: I also tried calling the pause() syscall before the fatal-triggering code, expecting to make the program sleep before the fatal point and then attach gdb to it on the fly; sadly, no fatal error occurred.
It's only guesswork without looking at the code, but debuggers sometimes do this:
They initialize certain stuff for you
The timing of the operations is changed
I don't have a quote on GDB, but I do have one on valgrind (granted, the two do wildly different things):
"My program crashes normally, but doesn't under Valgrind, or vice versa. What's happening?
When a program runs under Valgrind, its environment is slightly different to when it runs natively. For example, the memory layout is different, and the way that threads are scheduled is different. Most of the time this doesn't make any difference, but it can, particularly if your program is buggy."
The same would go for GDB. So the true problem is likely in your program.
There can be several things happening. The timing of the application can change: if it's a multi-threaded application, it is possible that you, for example, first set the ready flag and then copy the data into the buffer; without a debugger attached, the other thread might access the buffer before it is filled or before some pointer is set.
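A contrived sketch of that kind of ordering bug (all names hypothetical; note that volatile is not an adequate synchronisation mechanism in real code):
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char buffer[64];
static volatile int ready = 0;

static void *writer(void *arg)
{
    (void)arg;
    ready = 1;                     /* bug: flag published first... */
    memcpy(buffer, "payload", 8);  /* ...data written second */
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    while (!ready)
        ;                          /* spin until the flag flips */
    printf("read: %s\n", buffer);  /* may observe a half-filled buffer */
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
A debugger perturbs thread timings enough that the reader often loses the race, hiding the bug.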
It could also be that the application has some anti-debug functionality, so that the offending piece of code is never touched when running inside a debugger.
One way to analyze it is with a core dump, which you can enable with ulimit -c unlimited. Then start the application, and when the core is dumped you can load it into gdb with gdb ./application ./core. You can find a useful write-up here: http://www.ffnn.nl/pages/articles/linux/gdb-gnu-debugger-intro.php
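In practice that looks something like this (a sketch; the core file's name and location vary with the system's core-pattern settings):
$ ulimit -c unlimited
$ ./application
Segmentation fault (core dumped)
$ gdb ./application ./core
(gdb) bt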
If it is an invalid read through a pointer, then unpredictable behaviour is possible. Since you already know what is causing the fault, you should get rid of it as soon as possible. In general, expect the unexpected when dealing with faulty pointer operations.