I am working in C/C++ on UNIX and have often seen core files. Many times these core files are difficult to debug to find the actual cause of the crash or segmentation fault. Could you please suggest an efficient debugger?
For segmentation faults, memory leaks, uninitialized data and such, running your program through valgrind is always a good idea. If you are especially interested in memory leaks, the option "--leak-check=full" pays off.
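For example, assuming your executable is called ./program:
valgrind --leak-check=full ./program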
And yes, learn gdb. It takes a little time, but it's worth it.
I think most C compilers on most flavors of *nix support -g to include debugging symbols within the object files, so if you do:
cc -g -c file1.c
cc -g -c file2.c
cc -g file1.o file2.o -o program
./program
Then if the program crashes when you run it, it should produce a core file that is easier to debug. The first two lines just compile the source files (producing .o files), the third line tells the compiler to call the linker to link the object files into an executable (passing -g here may not actually do anything if the linker does not have to do anything special to produce an executable with debugging symbols, but it should not hurt anything), and the last line runs the program. Make sure that you do not tell the compiler to optimize when you are trying to debug (unless you find that the errors only appear when optimizations are turned on), because optimizations typically make the code more difficult to follow.
I don't know what platform you are on or what tools you have available (or even which C compiler you are using), so it is difficult to give more specific advice. You should read the man page (manual) for your compiler. From the command line type:
man cc
That should bring up a manual page that tells you a lot about the compiler on your system. It may tell you how to make the compiler produce more warning messages, which could help you find your errors before even running your programs. (Note that some warnings are only produced when certain optimizations are turned on, so even though you probably won't want to debug the optimized program, you may want to compile it with optimizations and extra warnings enabled just to see if they tell you anything.)
Your Unix system probably has some type of debugger installed. Most Linux machines set up for C development have gdb installed. gdb can be used to run your program in debug mode or to analyze a core file. If you have gdb you can:
gdb ./program
it will start up ready to run your program. If you do:
gdb ./program ./core
it will behave similarly, except that it will be as though you were debugging and your program had just crashed. From this state the quickest and most helpful thing you can do is:
(gdb) bt
Here (gdb) is the prompt and bt is the command that produces a back-trace, that is, a call stack: it shows what function the program was in when the failure happened, what function called that function, and so on up to the first function. This can be confusing because it will often show library functions as the most recently called, but that usually means you have passed in some bad data somewhere along the way, and that is what is causing the problem.
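To make that concrete, here is a minimal sketch you can practice on (crash.c is just an illustrative name):
#include <string.h>
int main(void)
{
    char *p = 0;        /* p points nowhere */
    strcpy(p, "boom");  /* writing through a null pointer segfaults */
    return 0;
}
Compile it with cc -g crash.c -o crash, run ulimit -c unlimited so the shell allows core files, run ./crash, and then open the resulting core file (its exact name depends on your system) with gdb ./crash ./core. The bt command should point straight at the strcpy call inside main.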
gdb is a large and complex program so if it is on your system you should take the time to read up on it.
If it is not on your system then you should find out what similar tools are. Some of the graphical debuggers (either within an IDE or not) act as front ends to command line debuggers and some even support several different command line debuggers, so if you are able to use one of the graphical debuggers you may not actually have to worry about what actual back end command line debugger is being used.
Use gdb. It is the de facto standard Unix C/C++ debugger, and as of version 7.0 it has reversible debugging features (you can go backwards in time). These reasons alone make it at least worth checking out.
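A minimal sketch of the reversible-debugging workflow, assuming your executable is ./program:
gdb ./program
(gdb) start
(gdb) record
(gdb) continue
(gdb) reverse-step
Here record turns on execution recording; once the program stops (at a breakpoint or a crash), reverse-step, reverse-next and reverse-continue let you walk backwards through what it just did.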
I really like Totalview. The parallel debugging features are what make me like it as much as I do.
Generally, gdb is an excellent debugger (though it takes a bit to learn). There are also various frontends, some with a GUI, such as DDD or cgdb.
If you explain where specifically you are having trouble, we may be able to better recommend which debugger will help you most.
As suggested above, gdb is an excellent debugger. But debugging larger projects with gdb in a Linux terminal is a little more complex, for the simple reason that it is a purely command-line interface. So I would suggest KDevelop, which internally drives gdb through a graphical interface. This debugging tool helped me a lot in debugging my big projects very easily. Let me know if you need any help using this tool.
I am working on an IoT Linux device. There is a segmentation fault when running my application. I need some methods to solve this problem.
Methods that I have tried:
1. coredump
ulimit -c unlimited; ulimit -f unlimited;
A core file is created, but maybe the chip's memory is not enough, so the core is always truncated. I cannot use gdb to get the backtrace.
2. dmesg | grep segfault
This Linux system does not record the crash in dmesg.
3. /var/log/messages
This Linux system does not record the crash in /var/log/messages.
Do you have any suggestions for solving the segmentation fault? Thank you very much.
You can use a tool like Valgrind. It helped us a lot when we were trying to find data that was written out of bounds of an array. It is good for checking memory leaks, out-of-bounds accesses and segmentation faults. We actually went on to check all of our C/C++ programs with it and found a lot of undetected bugs.
Note: Don't forget to compile your program with debug information (e.g. '-g' switch of gcc compiler) to get more human readable messages in Valgrind. Check this quick start guide.
Coredumps can be large, but in my experience they contain huge chunks of zeros, so they can be easily compressed. Using the /proc/sys/kernel/core_pattern file, you can get the kernel to pipe the dump through gzip, so that it takes less space ( compressing the core file during core generation ).
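A minimal sketch of that setup (the script path and output directory are only examples). Put something like this in /usr/local/bin/compress-core.sh and make it executable; the kernel feeds the core dump to it on standard input:
#!/bin/sh
# $1 is expanded from %e.%p below: executable name and PID.
exec gzip -1 > /var/crash/core."$1".gz
Then, as root:
echo '|/usr/local/bin/compress-core.sh %e.%p' > /proc/sys/kernel/core_pattern
Note that the kernel runs the handler directly, without a shell, so any redirection has to live inside the script itself.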
Another option would be to try the -fsanitize family of gcc options, more specifically -fsanitize=address and -fsanitize=undefined. If you do that, your application will print a lot of useful information when it crashes (often including the actual file and line number where the crash occurred). Oh, and don't forget to copy the corresponding shared libraries to your target, otherwise the dynamic linker will throw an error when you try to run the instrumented application.
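For example (a sketch; app.c stands in for your sources):
gcc -g -fsanitize=address,undefined -o app app.c
The -g flag is not required by the sanitizers, but it lets their reports show file names and line numbers instead of raw addresses.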
TL;DR: The program works on one system. On my home system it hangs before reaching the main entry point (checked with a debugger).
I have a local repository of code for a group project, synced up to the latest version that I have tested on several other machines (including Linux, Windows and Macs) that I am sure will work without giving any nasty errors. It's a fairly straightforward program, the main technical challenge we've dealt with has been linking in SDL2, which has been fairly easy.
The program compiles perfectly on my local machine and throws no warnings or errors even with as many of the warning flags turned on as possible. I've made sure that I'm not accidentally using .o files from a different system and I've cleaned then recompiled the code several times. I've reverted my local repository back to older code from the project that I'm also sure previously compiled and ran on my local machine.
I've made sure that the linking in of external libraries in gcc is working correctly, and there are no warnings or errors coming from that. The final linking together of the .o files is also error and warning free.
I've re-installed all the relevant .DLLs (in this case, the only DLL is the SDL one) for the program I'm working on, both in the logical system locations and in the working directory of the compiled executables. I've made sure that the .DLLs are the correct bit versions, and even tried the wrong bit versions in case I was accidentally building a version that needed the other bit versions.
I've re-installed my compiler (which I'm getting from Msys2 at the moment), and I've tried using the compiler that comes from Mingw64 instead.
None of these things have made any difference. When I go to run the executable, nothing will happen. No process is created and any terminal window which I'm running it from will just hang until I force close it (it won't respond to ctrl-c).
If I try running it in debug mode, gdb will be able to open the executable, and give me all the information, but once I try to run it, it will hang just like the terminal windows. Even if I try to break it at the entry point, the program seems to never get to the entry point, because it still hangs.
This problem started completely randomly. I had been working on the code before I went to university that day and had left it in a compilable state; I came home, hit make, and it wouldn't run. This was even before I had pulled the changes we had committed and pushed that day, which is why I'm completely lost as to why this will compile so happily yet absolutely refuse to run under any conditions.
If the debugger doesn't reach the main entry point, then the most likely culprit I can think of is the static initialization order fiasco, which is consistent with the behavior you describe: works on one system, fails on another.
Without seeing any code, we are just throwing arrows in the dark.
C vs C++:
Static initialization fiasco applies to C++ only, but keep in mind that libraries, even libraries linked to a C program can contain C++ code (not necessarily exposed as an interface).
Be sure to check Is there any way a C/C++ program can crash before main()?
@JohnBollinger had an excellent comment: check whether a simple program (compiled with the same compiler and flags) runs. We sometimes get so absorbed in where we think the problem is that we can easily miss things like this.
Another thing you can do is to use ldd to see if the correct libraries are linked to your program.
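For example:
ldd ./program
lists every shared library the dynamic linker will load for the program, along with the path each one resolved to, so you can spot a library being picked up from the wrong location.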
Many times we don't get a correct or complete stack dump during a crash. My question is: in what cases can this happen?
It is probably because the function call stack got corrupted. But how does such corruption happen?
My second question is: how do we debug such an issue, and what approach can we take to find the root cause of the crash?
I understand my questions may not have exact answers, but I would like to know your thoughts.
Thank You...
It is operating system and platform (i.e. processor) specific.
The best way is to use a debugger to find such issues (perhaps a remote one; learn about gdbserver).
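A quick sketch of remote debugging (the IP address and port are placeholders): on the target device run
gdbserver :2345 ./app
and on your development machine run gdb ./app and then, at the (gdb) prompt, target remote 192.168.0.10:2345. You then debug as usual while the program actually executes on the device.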
I would suggest debugging most of your code on a desktop Linux system (because there you have lots of useful tools: valgrind, gcc -fsanitize=address, gdb, etc.)
Of course, the call stack can be corrupted to the point of being unusable. Try to memset the stack segment, then return from the function doing that (no matter what tool or trick you would use, the stack is then desperately corrupted on most platforms)!
You might be interested in the GNU glibc backtrace function, GCC's __builtin_return_address, and libbacktrace by Ian Taylor, which ships with GCC
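As a hedged illustration of the glibc backtrace API (glibc-specific; you may need to link with -rdynamic to get function names rather than raw addresses):
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>
void show_stack(void)
{
    void *frames[32];
    int n = backtrace(frames, 32);               /* collect return addresses */
    char **names = backtrace_symbols(frames, n); /* resolve them to strings */
    for (int i = 0; i < n; i++)
        printf("%s\n", names[i]);
    free(names);                                 /* backtrace_symbols mallocs */
}
Calling such a function from a SIGSEGV handler is a common, if technically fragile, way to log where a crash happened.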
You might also enable core dumps and analyze them post mortem (perhaps using a cross-debugger). See core(5), proc(5), setrlimit(2)
I see many tutorials on gdb asking to use the -g option while compiling a C program. I fail to understand what the -g option actually does.
It makes the compiler add debug information to the resulting binaries. This information allows a debugger to associate the instructions in the code with source code files and line numbers. Having debug symbols makes certain kinds of debugging (like stepping through code) much easier, and in some cases possible at all.
The -g option actually has a few tunable parameters; check the manual. Also, it's most useful if you don't optimize the code, so use -O0 or -Og (in newer versions), since optimizations break the connection between instructions and source code. (Most importantly, you must not omit frame pointers from function calls; omitting them is a popular optimization, but it basically ruins the ability to walk up the call stack.)
The debug symbols themselves are written in a standardized language (I think it's DWARF2), and there are libraries for reading that. A program could even read its own debug symbols at runtime, for instance.
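You can inspect this information with standard binutils tools; for example, given a binary built with -g:
readelf --debug-dump=info ./program | less
dumps the DWARF debugging entries (compilation units, functions, variables, their types and source locations) that a debugger consumes.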
Debug symbols (as well as other kinds of symbols, like function names) can be removed from a binary later on with the strip command. However, since you'll usually combine debug symbols with unoptimized builds, there's not much point in that; rather, you'd build a release binary with different optimizations and without symbols from the start.
Other compilers such as MSVC don't include debug information in the binary itself, but rather store it in a separate file and/or a "symbol server" -- so if the home user's application crashes and you get the core dump, you can pull up the symbols from your server and get a readable stack trace. GCC might add a feature like that in the future; I've seen some discussions about it.
I just used gprof to analyze my program. I wanted to see what functions were consuming the most CPU time. However, now I would like to analyze my program in a different way: I want to see which LINES of the code consume the most CPU time. At first I read that gprof could do that, but I couldn't find the right option for it.
Now I have found gcov. However, the third-party program I am trying to execute has no "./configure" script, so I could not apply "./configure --enable-gcov".
My question is simple. Does anyone know how to get execution time for each line of code for my program?
(I prefer suggestions with gprof, because I found its output to be very easy to read and understand.)
I think oprofile is what you are looking for. It does statistical sampling and gives you an approximate indication of how much time is spent executing each line of code, both at the C level of abstraction and at the assembly level.
As well as simply profiling the relative number of cycles spent at each line, you can also instrument for other events like cache misses and pipeline stalls.
Best of all: you don't need to do special builds for profiling, all you need to do is enable debug symbols.
Here is a good introduction to oprofile: http://people.redhat.com/wcohen/Oprofile.pdf
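With a reasonably recent OProfile the workflow looks roughly like this, assuming a binary built with -g:
operf ./program
opannotate --source ./program
operf records the samples, and opannotate --source prints your source annotated with the sample counts per line, which is exactly the per-line view you are asking about.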
If your program isn't taking too long to execute, Valgrind/Callgrind + KCacheGrind, together with compiling with debugging turned on (-g), is one of the best ways to tell where a program is spending time while running in user mode.
valgrind --tool=callgrind ./program
kcachegrind callgrind.out.12345
The program should have a stable IPC (instructions per clock) in the parts that you want to optimize.
A drawback is that Valgrind cannot be used to measure I/O latency or to profile kernel space. Also, its usability with programming languages whose toolchain is incompatible with the C/C++ toolchain is limited.
In case Callgrind's instrumentation of the whole program takes too much time to execute, there are macros CALLGRIND_START_INSTRUMENTATION and CALLGRIND_STOP_INSTRUMENTATION.
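A sketch of how those macros are used (hot_loop is just a stand-in for the code you care about):
#include <valgrind/callgrind.h>
void hot_loop(void)
{
    CALLGRIND_START_INSTRUMENTATION;  /* begin instrumenting here */
    /* ... the code you actually want profiled ... */
    CALLGRIND_STOP_INSTRUMENTATION;   /* and stop here */
}
Run the program with valgrind --tool=callgrind --instr-atstart=no ./program so that instrumentation stays off until the first macro is hit; only the bracketed region then pays the full instrumentation cost.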
In some cases, Valgrind requires libraries with debug information (such as /usr/lib/debug/lib/libc-2.14.1.so.debug), so you may want to install Linux packages providing the debug info files or to recompile libraries with debugging turned on.
oprofile is probably, as suggested by Anthony Blake, the best answer.
However, a trick to force a compiler or a compiler flag (such as -pg for gprof profiling) when compiling autoconf-ed software could be
CC='gcc -pg' ./configure
or
CFLAGS='-pg' ./configure
This is also useful for some newer modes of compilation. For instance, gcc 4.6 provides link time optimization with the -flto flag passed at compilation and at linking; to enable it, I often do
CC='gcc-4.6 -flto' ./configure
For a program that is not autoconf-ed but still built with a reasonable Makefile, you might edit that Makefile or try
make CC='gcc -pg'
or
make CC='gcc -flto'
It usually (but not always) works.