Does debug information get stripped from a library on an optimized build?

I am using GCC's C compiler for ARM, and I've compiled Newlib with it. I went into the makefile for Newlib and saw that the library gets compiled with -g -O2.
When I compile my code and link against Newlib's standard C library, does this debug information get stripped?

You can use -g and -O2 together. The compiler will optimize the code and still keep the debugging information. Of course, in some places you will not get information for a symbol, because optimization removed it and it is no longer present in the binary.
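As a rough illustration (file name and session are hypothetical, and the exact behavior depends on your GCC and GDB versions; you may instead get "No symbol in current context"):
$ cat > demo.c <<'EOF'
#include <stdio.h>

int main(void)
{
    int scratch = 42;   /* dead value; -O2 will likely fold it away */
    printf("%d\n", scratch + 1);
    return 0;
}
EOF
$ gcc -g -O2 -o demo demo.c
$ gdb -q ./demo
(gdb) break main
(gdb) run
(gdb) print scratch
$1 = <optimized out>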

From the GCC options summary:
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.
There are multiple flags and options that will make debugging difficult or impossible, for example:
-fomit-frame-pointer: ... It also makes debugging impossible on some machines.
-fsplit-wide-types: ... This normally generates better code for those types, but may make debugging more difficult.
-fweb: ... It can, however, make debugging impossible, since variables no longer stay in a “home register”.
The first two are enabled for -O2.
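You can verify what a given level enables on your own GCC with -Q --help=optimizers (the exact output format varies between versions):
$ gcc -Q -O2 --help=optimizers | grep -E 'omit-frame-pointer|split-wide-types|web'
  -fomit-frame-pointer        [enabled]
  -fsplit-wide-types          [enabled]
  -fweb                       [disabled]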
If you want a usable debugging experience while still optimizing, the following option can be used:
-Og
Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.
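Continuing the hypothetical demo.c above, rebuilding with -Og usually keeps locals inspectable:
$ gcc -g -Og -o demo demo.c
$ gdb -q ./demo
(gdb) break main
(gdb) run
(gdb) next
(gdb) print scratch
$1 = 42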

Related

Can I have optimization on, but not suffer from reordered statements?

My debug CFLAGS were always -g -O0, the latter mainly to stop the debugger from jumping to unexpected lines. Nowadays more and more programs refuse to compile with -O0; besides, -D_FORTIFY_SOURCE requires the optimizer.
Is it possible to compile with -O but still have predictable behavior in the debugger?
If you're using GCC 4.8 or above, try using -g -Og. As explained in the release notes:
A new general optimization level, -Og, has been introduced. It addresses the need for fast compilation and a superior debugging experience while providing a reasonable level of run-time performance. Overall experience for development should be better than the default optimization level -O0.

Compiling package with profiling enabled using GNU make

I am trying to compile a big package (Heasoft) with code profiling enabled. The package uses makefiles for its build setup.
I am enabling profiling support with GCC's -pg flag through CFLAGS:
$ CFLAGS="-pg" make
The compilation runs until the following error about incompatible flags is raised:
gcc: error: -pg and -fomit-frame-pointer are incompatible
How am I supposed to deal with this? (Since I am only interested in profiling some of the tools the package provides, I ask the more specific questions below, hoping they can be answered more easily.)
What is this "omit frame pointer", and is it really needed?
Can I tell GNU make to avoid such conflicts by ignoring the command-line flags (in this case -pg)?
Can I tell GNU make which code (tools) I want to enable the profiling (-pg) flag for?
Thanks.
"omit frame pointer" does exactly that - it instructs the compiler to not save the frame pointer into a CPU register if the compiler detects that the function does not need it (e.g. because it has no arguments and so does not affect the stack). This saves a little time on each call. Also it frees a register that can then be used for further optimizations.
On the other hand, tracking the function becomes impossible, so to enable debugging or profiling you need to restore it explicitly.
To do this, add -fno-omit-frame-pointer to the debug flags.
You may also have to disable all code optimizations with the -O0 flag.
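A sketch of the complete workflow (tool and file names are illustrative). Note that passing CFLAGS as an argument to make, rather than in the environment, overrides any CFLAGS assignments inside the makefiles themselves:
$ make CFLAGS="-pg -fno-omit-frame-pointer -O0"
$ ./sometool input.dat            # running the instrumented binary writes gmon.out
$ gprof ./sometool gmon.out > profile.txt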

What does -g option do in gcc

I see many tutorials on gdb asking to use the -g option while compiling a C program. I fail to understand what the -g option actually does.
It makes the compiler add debug information to the resulting binaries. This information allows a debugger to associate the instructions in the code with source files and line numbers. Debug symbols make certain kinds of debugging (like stepping through code) much easier, or possible at all.
The -g option actually has a few tunable parameters; check the manual. Also, it's most useful if you don't optimize the code, so use -O0 or -Og (in newer versions): optimizations break the connection between instructions and source code. (Most importantly, you must not omit frame pointers from function calls, which is a popular optimization that basically ruins the ability to walk up the call stack.)
The debug symbols themselves are written in a standardized format (I think it's DWARF 2), and there are libraries for reading it. A program could even read its own debug symbols at runtime, for instance.
Debug symbols (as well as other kinds of symbols, like function names) can be removed from a binary later on with the strip command. However, since you'll usually combine debug symbols with unoptimized builds, there's not much point in that; rather, you'd build a release binary with different optimizations and without symbols from the start.
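For illustration (binary name hypothetical; the exact file output varies between systems), you can inspect the raw DWARF data with readelf and watch the symbols go away after stripping:
$ gcc -g -o prog prog.c
$ readelf --debug-dump=info prog | head    # raw DWARF records
$ file prog
prog: ELF 64-bit LSB executable, ..., not stripped
$ strip prog
$ file prog
prog: ELF 64-bit LSB executable, ..., stripped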
Other compilers such as MSVC don't include debug information in the binary itself, but rather store it in a separate file and/or a "symbol server" -- so if the home user's application crashes and you get the core dump, you can pull up the symbols from your server and get a readable stack trace. GCC might add a feature like that in the future; I've seen some discussions about it.
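That said, GNU binutils can already get you part of the way there: objcopy can detach the debug information into a separate file and leave behind a debug link that GDB follows automatically:
$ objcopy --only-keep-debug prog prog.debug
$ strip --strip-debug prog
$ objcopy --add-gnu-debuglink=prog.debug prog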

Faster code with another compiler

I'm using the standard gcc compiler for math software development in C. I don't know much about compilers or compiler options, and I was just wondering: is it possible to make faster executables by using another compiler or choosing better options? The default Makefile sets the options -ffast-math and -O3, and I think both of them have some impact on the overall calculation time. My software uses memory quite extensively, so I imagine some options related to memory management might do the trick?
Any ideas?
Before experimenting with different compilers or random, arbitrary micro-optimisations, you really need to get a decent profiler and profile your code to find out exactly what the performance bottlenecks are. The actual picture may be very different from what you imagine it to be. Once you have a profile you can then start to consider what might be useful optimisations. E.g. changing compiler won't help you if you are limited by memory bandwidth.
Here are some tips about gcc performance:
Do benchmarks with -Os, -O2 and -O3. Sometimes -O2 will be faster than -O3 because it generates shorter code. Since you said that you use a lot of memory, try -Os too and take measurements.
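A quick, hypothetical way to compare the levels (use your real program and a representative input, and repeat the runs to smooth out noise):
$ for opt in -Os -O2 -O3; do gcc $opt -ffast-math -o bench bench.c; echo "$opt:"; time ./bench; done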
Also check out the -march=native option (it is considered safe to use if the executables will run on computers with processors similar to the build machine's). Sometimes it can have a considerable impact on performance. If you want to see the list of options GCC uses with native, here's how to do it:
Create an empty C file called test.c and compile it:
$ touch test.c
$ gcc -march=native -fverbose-asm -S test.c
$ cat test.s
Credit for this goes to Gentoo forums users.
It should print out a list of all the optimizations GCC used. Note that if you're using a Core i7, GCC 4.5 will detect it as Atom, so you'll need to set -march and -mtune manually.
Also read this document; it will help you (still, in my experience on Gentoo, -march=native works better): http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
You could also try the new options from the late 4.4 and early 4.5 releases, such as -flto and -fwhole-program. These should help with performance, but my system was unstable when I experimented with them. In any case, read this document too; it will help you understand some of GCC's optimization options: http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
If you are running Linux on x86, then typically the Intel or PGI compilers will give you significantly faster executables.
The downsides are that there are more knobs to tune and that they come with a hefty price tag!
If you have specific hardware you can target your code for, the hardware vendor often releases paid-for compilers optimized for that hardware.
For example:
xlc for AIX
CC for Solaris
These compilers will generally produce better code optimization-wise.
Since you say your program is memory-heavy, you could try using a different malloc implementation than the one in your platform's standard library.
For example, you could try jemalloc (http://www.canonware.com/jemalloc/).
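On Linux you can often test an alternative allocator without recompiling at all, by preloading it (the library path is an assumption; it varies by distribution):
$ LD_PRELOAD=/usr/lib/libjemalloc.so ./myprog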
Keep in mind that most improvements to be had by changing compilers or settings will only get you proportional speedups, whereas by adjusting algorithms you can sometimes get improvements in the O() complexity of your program. Be sure to exhaust that avenue before you put too much work into tweaking settings.

C/GCC Warnings - Write once and test everywhere?

I'm writing a command-line program in ANSI C to parse a Quake 2 map file and report how many entities and textures are being used. My development machine is a MacBook. I'm testing on OS X Snow Leopard (32-bit), Windows XP (32-bit) and Vista (64-bit), and Ubuntu 9.10 (32-bit).
The code is flawless on OS X with GCC 4.2. The other platforms, not so flawless.
Visual Studio 2005 complained about an array declaration in the middle of the main() block (size info for the array isn't available until then), insisting it be declared at the top. I fixed that by declaring a pointer at the top and writing a function to create the array.
Dev-C++ and GCC (3.4) on Windows have no complaints.
Cygwin and GCC (4.4) on Windows complained that an array subscript has type char. I added (int) casts to fix that.
Ubuntu and GCC (4.4) complain about ignoring the return value of fread, although I read elsewhere that this might be a bug in the way Ubuntu packaged GCC. In the context I'm using fread, this one seems safe to ignore. The warning only appears with the -O3 flag.
Except for Visual Studio 2005, all the compilers I tested with are some version of GCC. Chasing down all these errors and warnings is a serious pain in the butt. Until now, I've been using the following flags in my Makefile:
debug: -pedantic -Wall
release: -O3
Is there a set of GCC flags that I should be using to catch all the errors on my primary development machine? Or is write once and test everywhere a fact of life?
Irritatingly enough, the C dialect in Visual Studio (even in the beta of Visual Studio 2010!) is old and doesn't support all C99 features; being able to mix declarations and executable statements is probably the most irritating omission of all. The least evil option might be to compile as C++ on this platform instead, even if that requires some rearranging of your code to make it valid as both C and C++.
Other than that, as Ken's comment said, "write-once-test-everywhere is a fact of life". Lint can be a good help (as Chris wrote), but to find all the incompatibilities in syntax and semantics you really need to compile and test your program on several compilers and systems. Doing this can actually help find errors and problems in your code, even if you don't intend to actually run the program on more than one system.
Get yourself a copy of Lint. Lint is a static analysis tool that will pretty much cover the full range of compiler errors and warnings, and then some. As someone who frequently writes code targeting different platforms and compilers, I find that ensuring the code passes through Lint is a pretty good barometer for getting it to run across all compilers.
The best set of gcc flags to approximate lint is something like:
-ansi -pedantic -W -Wall -Wundef -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wcast-qual -Wwrite-strings -Weffc++
I also often use -Wno-long-long if I am going to be writing 64-bit code, because many compilers complain that long long is not a C++ type.
Add -ansi to the mix. The canonical set is -ansi -pedantic -Wall.
That said, you will probably find other quirks with MSVC, because it's a different compiler altogether, with its own warnings - so you might have to tweak its options too.
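For example, a stricter MSVC invocation of that era might look like this (file name illustrative): /W4 raises the warning level and /Za disables Microsoft language extensions for stricter ANSI conformance:
cl /W4 /Za map_parse.c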
You can save a lot of time by using -Wall and Lint, because they will help you understand your code better. Rework your code to minimize the safe-to-ignore warnings; you will be less likely to agonize over a hard-to-reproduce run-time failure, and whoever maintains your code will find it easier to make changes.
When you are using Visual Studio, explore the compiler options for Lint-like tools. I forget where they are, and they slow down your build, but they are helpful.
