Core dumped after program exits - c

I have a quite intriguing issue with my program (compiled with gcc 4.6.4 on Ubuntu 12.04). When I build the executable dynamically, the program runs flawlessly. But when I build it statically (with the -static flag), it gives me a 'core dumped' after exiting (e.g. after 'return 0' in main). Unfortunately, the whole program is too big to post here. What are the possibilities?

In addition to the two possibilities in johnnycrash's answer:
Some function with __attribute__((destructor)) is called and dumps core (see the sketch after this list).
The memory heap is corrupted (check with valgrind).
Some function registered with atexit(3) is crashing.
Some library/function is linked "twice".
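A minimal sketch of the destructor/atexit cases (the function names are made up for illustration); whether it actually dumps core depends on the allocator and on how the program was built, which is consistent with the dynamic build appearing to work:
#include <stdlib.h>
#include <string.h>

static char *buffer;                      /* freed in main, but touched again later */

static void cleanup(void)                 /* runs after main returns, via atexit() */
{
    strcpy(buffer, "bye");                /* use-after-free: may dump core at exit */
}

__attribute__((destructor))
static void on_unload(void)               /* runs during process teardown */
{
    buffer[0] = '\0';                     /* same problem, different exit hook */
}

int main(void)
{
    buffer = malloc(16);
    if (buffer == NULL)
        return 1;
    atexit(cleanup);
    free(buffer);                         /* main itself finishes "successfully"... */
    return 0;                             /* ...and the crash happens afterwards */
}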

1) You have a thread still executing.
2) You are overwriting memory and you get lucky with the dynamic libraries.

Related

How to debug an experimental toolchain producing malformed executables

I am working on cross compiling an experimental GNU-free Linux toolchain using clang (instead of gcc), compiler-rt (instead of libgcc), libunwind (available at http://llvm.org/git/libunwind.git) (instead of libgcc_s), lld (instead of GNU ld), libcxx (instead of libstdc++), libcxxabi (instead of not sure, I'm unclear on the GNU distinction between libstdc++ and its ABI) and musl (instead of glibc).
Using a musl-based gcc cross compiler and a few patches, I've managed to successfully compile all of the above and to compile and link a simple hello world C program with it. Something seems to have gone wrong, however, as running the hello world program results in a segmentation fault:
$ ./hello
Segmentation fault
$
Normally I would simply debug it with gdb, but herein lies the problem:
$ gdb ./hello
Reading symbols from ./hello...Dwarf Error: Could not find abbrev number 5 in CU at offset 0x52 [in module /home/main/code/main/asm/hello]
(no debugging symbols found)...done.
(gdb) start
Temporary breakpoint 1 at 0x206
Starting program: /hello
During startup program terminated with signal SIGSEGV, Segmentation fault.
(gdb)
I can't seem to step through the program in any way, I'm guessing because the error is occurring somewhere in early C runtime startup. I can't even step through the assembly using layout asm and stepi, so I really don't know how to find out where exactly the error is occurring (in order to debug my toolchain).
I have confirmed that the problem resides with lld by using GNU binutils ld to successfully link the hello world object (statically) against the cross-compiled libraries and object files, which results in a functional hello world program. Since lld links successfully, however, I can't pinpoint where the failure is occurring.
Note: I compiled hello as a static executable and used the -v gcc/clang option to verify that all the correct libraries and object files were linked in.
Note: the online GDB documentation has the following to say about the above error:
On Unix systems, by default, if a shell is available on your target, gdb uses it to start your program. Arguments of the run command are passed to the shell, which does variable substitution, expands wildcard characters and performs redirection of I/O. In some circumstances, it may be useful to disable such use of a shell, for example, when debugging the shell itself or diagnosing startup failures such as:
(gdb) run
Starting program: ./a.out
During startup program terminated with signal SIGSEGV, Segmentation fault.
which indicates the shell or the wrapper specified with ‘exec-wrapper’ crashed, not your program.
I don't think this is true, considering what I'm working with and that the problem doesn't happen when I use GNU ld, and because the suggested solution (set startup-with-shell off) doesn't work.
Cross-compiling means that the compilation is done on a host machine and the output is a binary intended to run on a target machine, so the compiled binary is not necessarily compatible with your host CPU. Instead, if your target supports it, you could run the binary there and use the debugger from your toolchain to connect to the running binary remotely, if that is supported. Alternatively, a debugger may also be available on the target itself, and you can debug the binary directly in place.
To get a better feel for this, try running the file command on the compiled binary and on some other binaries from your host to see the possible differences.

Warning for not using free() after malloc

Are there any safeguards built into GCC that check for memory leaks? If so how can I use them? When I compile with "gcc -Wall -o run run.c", the compiler does not seem to care if any allocated heap-space is being freed at the end of the code. I could not find any simple fixes for this on Google.
Thanks much for your time.
EDIT:
Google searches did point to Valgrind among other tools, but I was curious as to why the compiler can't deal with this issue. As a newbie, it seemed a simple enough task to check whether every "malloc" has a "free" associated with it.
There are two ways to analyze code for problems: static analysis and run-time analysis. Static analysis reads the code; this is what compilers do really well. Run-time analysis happens when the code is linked against another set of libraries that watch what the code actually does as it runs. Finding memory leaks is difficult for static analysis, because whether a given allocation is ever freed can depend on what happens at run time, but it is straightforward for a run-time analysis package.
Other run-time analyses include things like code coverage (do all parts of your code run?), which gcov measures, just as valgrind and Electric Fence look for memory problems such as leaks.
So, no, there are no really good compiler safeguards for testing memory leaks.
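As a hedged illustration of why this is hard for static analysis: in the sketch below, whether the allocation leaks depends on the arguments the program happens to be run with, which the compiler cannot know from the source alone; a run-time tool such as valgrind simply observes whether the free actually happened.
#include <stdlib.h>

int main(int argc, char **argv)
{
    char *buf = malloc(64);
    if (buf == NULL)
        return 1;

    if (argc > 1)                /* freed only on some runs, depending on input */
        free(buf);

    return 0;                    /* leaks only when run with no arguments */
}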
There is the -fsanitize=leak GCC flag.
It intercepts malloc/calloc/free so that allocated and freed blocks of memory can be tracked.
If your program is compiled with this flag, it prints information about detected leaks to the terminal after execution.
You can read about it in the GCC documentation.
Also, I have never used it myself, so this answer is based entirely on the GCC manual.
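A minimal sketch of how it might be used; the exact report format depends on your GCC and sanitizer runtime versions, so treat the comments as an assumption rather than guaranteed output:
/* Build with something like: gcc -fsanitize=leak -g leak.c -o leak
   ("leak.c" is just an example file name).                          */
#include <stdlib.h>

int main(void)
{
    char *lost = malloc(128);    /* never freed */
    (void)lost;
    return 0;                    /* LeakSanitizer should report ~128 leaked bytes at exit */
}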

MALLOCDEBUG showing random output when using xlc_r

I have a program, compiled using xlc_r, that spawns multiple threads, and I am trying to trace it to see if there are any memory leaks. I've gone through this article detailing how to use the MALLOCDEBUG feature built into AIX, but after running format_mallocdebug_op.sh, it shows memory leaks all over the place for random pthread and file functions such as pthread_attr_init, _pth_init, fopen, fwrite, etc.
I then made a smaller test program that purposefully doesn't free a char * and compiled it with xlc_r, and almost exactly the same output appeared. I then compiled the test program again, but with xlc, and it worked correctly, showing the one char * memory leak and nothing else. It seems that the MALLOCDEBUG feature doesn't work well with applications compiled for multi-threading. Is there a setting to tell it to be aware of this?

Minimum 504KB Memory Usage

On doing some experiments whilst learning C, I've come across something odd. This is my program:
int main(void) {sleep(5);}
When it is compiled, the file size of the executable is 8496 bytes (in comparison to the 26 byte source!) This is understandable as sleep is called and the instructions for calling that are written in the executable. Another point to make is that without the sleep, the executable becomes 4312 bytes.
int main(void) {}
My main question is what happens when the first program is run. I'm using clang to compile and Mac OS X to run it. The result (according to Activity Monitor) is that the program uses 504KB of "real memory". Why is it so big when the program is just 4KB? I am assuming that the executable is loaded into memory but I haven't done anything apart from a sleep call. Why does my program need 500KB to sleep for five seconds?
By the way, the reason I'm using sleep is to be able to catch the amount of memory being used using Activity Monitor in the first place.
I ask simply out of curiosity, cheers!
When you compile a C program, it is linked into an executable. Even though your program is very small, it links against the C runtime, which includes some additional code. There may be error handling, that error handling may write to the console, and that code may pull in sprintf, all of which adds to the footprint of your application. You can ask the linker to produce a map of the code in your executable to see what is actually included.
Also, an executable file contains more than machine code. There will be various tables for data and dynamic linking which will increase the size of the executable and there may also be some wasted space because the various parts are stored in blocks.
The C runtime will initialize before main is called, and this results both in some code being loaded (e.g. by dynamically linking to various operating system features) and in memory being allocated for a heap, a stack for each thread, and probably also some static data. Not all of this may show up as "real memory" - the default stack size on OS X appears to be 8 MB, and your application is still using much less than that.
In this case I suppose the size difference you've observed is significantly caused by dynamic linking.
Linkers usually don't place common code into the executable binary; instead, they record a reference to it, and the code is loaded when the binary is loaded. That common code is stored in files called shared objects (SO) or dynamically linked libraries (DLL).
[pengyu#GLaDOS temp]$ cat test.c
int main(void) {sleep(5);}
[pengyu#GLaDOS temp]$ gcc test.c
[pengyu#GLaDOS temp]$ du -h --apparent-size a.out
6.6K a.out
[pengyu#GLaDOS temp]$ gcc test.c -static
[pengyu#GLaDOS temp]$ du -h --apparent-size a.out
807K a.out
Also, here I'm listing what is in the memory of a process:
There are the necessary dynamic libraries to be loaded in:
Here ldd lists the dynamic libraries that will be loaded when the binary is invoked. These libraries are mapped into the process via the mmap system call.
[pengyu#GLaDOS temp]$ cat test.c
int main(void) {sleep(5);}
[pengyu#GLaDOS temp]$ gcc test.c
[pengyu#GLaDOS temp]$ ldd ./a.out
linux-vdso.so.1 (0x00007fff576df000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f547a212000)
/lib64/ld-linux-x86-64.so.2 (0x00007f547a5bd000)
There are sections like .data and .text that are allocated for the data and code from your binary file.
This part exists in the binary executable, so its size should be no larger than the file itself. The contents are copied in when the executable is loaded.
There are sections like .bss, and also the stack, which are allocated for dynamic use during execution of the program.
This part does not exist in the binary executable, so its size can be quite large without being affected by the size of the file itself.
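A small sketch of that difference, assuming a typical ELF toolchain (section placement can vary between compilers and platforms): both arrays below take up memory at run time, but only the initialized one has to be stored in the executable file.
static int initialized[1024] = {1};  /* .data: contents stored in the file and copied in at load */
static int zeroed[1024];             /* .bss: only the size is recorded; the loader supplies zeroed pages */

int main(void)
{
    return initialized[0] + zeroed[0];
}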

How do I know which illegal address the program accessed when a segmentation fault happens

Plus, the program runs on an ARM device running Linux, and I can print out stack info and register values in the SIGSEGV handler I install.
The problem is that I can't add the -g option to the build, since the bug may not reproduce due to the performance downgrade.
Compiling with the -g option to gcc does not cause a "performance downgrade". All it does is cause debugging symbols to be included; it does not affect the optimisation or code generation.
If you install your SIGSEGV handler using the sa_sigaction member of the sigaction struct passed to sigaction(), then the si_addr member of the siginfo_t structure passed to your handler contains the faulting address.
I tend to use valgrind which indicates leaks and memory access faults.
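A minimal sketch of registering such a handler (the handler name is made up; fprintf is not async-signal-safe, but it is a common pragmatic choice in a handler that is about to terminate the process):
#define _POSIX_C_SOURCE 200809L   /* for sa_sigaction/SA_SIGINFO under strict -std modes */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void segv_handler(int signum, siginfo_t *info, void *context)
{
    (void)signum; (void)context;
    fprintf(stderr, "SIGSEGV at address %p\n", info->si_addr);
    _exit(EXIT_FAILURE);
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;            /* request the siginfo_t argument */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *p = NULL;
    *p = 42;                             /* deliberate fault to exercise the handler */
    return 0;
}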
This seems to work
http://tlug.up.ac.za/wiki/index.php/Obtaining_a_stack_trace_in_C_upon_SIGSEGV
static void signal_segv(int signum, siginfo_t *info, void *ptr) {
    /* info->si_addr is the illegal address */
}
If you are worried about using -g on the binary that you load on the device, you may be able to run gdbserver on the ARM device with a stripped version of the executable and run arm-gdb on your development machine with the unstripped version. The stripped and unstripped versions need to match up for this to work, so do this:
# You may add your own optimization flags
arm-gcc -g program.c -o program.debug
arm-strip --strip-debug program.debug -o program
# or
arm-strip --strip-unneeded program.debug -o program
You'll need to read the gdb and gdbserver documentation to figure out how to use them. It's not that difficult, but it isn't as polished as it could be. Mainly it's very easy to accidentally tell gdb to do something that it ends up thinking you meant to do locally, so it will switch out of remote debugging mode.
You may also want to use the backtrace() function if it is available; it provides the call stack at the time of the crash. This can be used to dump the stack when a C program gets a segmentation fault, bus error, or other memory violation error, much as happens automatically in a high-level programming language.
backtrace() is available both on Linux and Mac OS X
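A hedged sketch of using backtrace() from a handler (backtrace_symbols_fd is used because it avoids calling malloc inside the handler; on Linux, linking with -rdynamic typically makes the function names in the output readable):
#include <execinfo.h>
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void dump_stack(int signum)
{
    void *frames[64];
    int count = backtrace(frames, 64);              /* capture return addresses */
    backtrace_symbols_fd(frames, count, STDERR_FILENO);
    (void)signum;
    _exit(EXIT_FAILURE);
}

int main(void)
{
    signal(SIGSEGV, dump_stack);
    signal(SIGBUS, dump_stack);

    volatile int *p = NULL;
    *p = 1;                                         /* force a crash to show the trace */
    return 0;
}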
If the -g option makes the error disappear, then knowing where it crashes is unlikely to be useful anyway. It's probably writing to an uninitialized pointer in function A, and then function B tries to legitimately use that memory, and dies. Memory errors are a pain.
