GCC warnings not detected in CMake debug builds, but detected in release ones

I have a cmake/gcc project in which I have enabled several warnings and the flag -Werror.
I have noticed that some warnings are detected when I configure with -DCMAKE_BUILD_TYPE=Release, but not when I omit that flag. For example, one of these warnings is:
error: ‘var_name’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
I have read here (Set CFLAGS and CXXFLAGS options using CMake) that there are several CMAKE_C_FLAGS variables for different build types, for instance CMAKE_C_FLAGS_RELEASE.
I have tried to set those variables for both release and debug builds, but then neither build type detects the warnings I'm expecting.
What am I missing?

CMake's default/"debug" build profile disables optimization entirely, which prevents the compiler passes that perform the transformations and static analysis needed to detect an uninitialized use from ever running. While this improves the debugging experience to some extent (single-stepping follows source lines closely), as you've found it hides warnings, and it also tends to hide the consequences of undefined behavior in your code.
Traditionally, "disable optimizations entirely for non-release builds" has not been common practice among Unix-oriented developers. It's a carry-over from the MSVC world that reflects CMake's origins and user base.
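As a concrete illustration (a minimal sketch; the function name and flags are just an example), gcc stays silent about this code at -O0 but typically reports -Wmaybe-uninitialized at -O1 and above:

```c
/* 'y' is assigned on only one branch; gcc's -Wmaybe-uninitialized depends
   on optimization passes, so "gcc -O2 -Wall" typically warns here while
   "gcc -O0 -Wall" stays silent. */
int scale(int x)
{
    int y;
    if (x > 0)
        y = x * 2;
    return y;  /* read uninitialized when x <= 0 */
}
```

Adding -Og (or at least -O1) to CMAKE_C_FLAGS_DEBUG is one way to get these diagnostics back in debug builds.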

Related

Does debug information get stripped from library on optimized build?

I am using GCC's C compiler for ARM, and I compiled Newlib with it. I went into Newlib's makefile and saw that the library gets compiled with -g -O2.
When compiling my code and linking against Newlib's standard C library does this debug information get stripped?
You can use -g and -O2 together. The compiler will optimize the code and keep the debugging information. Of course, in some places you will not get information for a symbol that optimization has removed entirely.
From the GCC options summary:
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.
There are multiple flags and options that will make debugging impossible or difficult. e.g.
-fomit-frame-pointer .... It also makes debugging impossible on some machines.
-fsplit-wide-types.... This normally generates better code for those types, but may make debugging more difficult.
-fweb - ... It can, however, make debugging impossible, since variables no longer stay in a “home register”.
The first two are enabled for -O2.
If you want debugging information to be preserved, the following option can be used.
-Og
Optimize debugging experience. -Og enables optimizations that do not interfere with debugging. It should be the optimization level of choice for the standard edit-compile-debug cycle, offering a reasonable level of optimization while maintaining fast compilation and a good debugging experience.
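To see the difference in practice, here is a hypothetical function whose temporary tends to be folded away at -O2 (gdb then shows it as <optimized out>) but stays inspectable at -Og:

```c
/* Debug build:    gcc -g -Og demo.c
   Release build:  gcc -g -O2 demo.c
   Under -O2 the temporary 'sum_sq' is usually folded into the return and
   gdb may print it as <optimized out>; under -Og it remains inspectable. */
int hypot_sq(int a, int b)
{
    int sum_sq = a * a + b * b;  /* candidate for folding at -O2 */
    return sum_sq;
}
```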

Compiling package with profiling enabled using GNU make

I am trying to compile a big package (Heasoft) with code profiling enabled. The package uses makefiles for setup.
I am enabling compilation with support for profiling with the (gcc) "-pg" flag through CFLAGS:
$ CFLAGS="-pg" make
The compilation runs until the following error about incompatible flags is raised:
gcc: error: -pg and -fomit-frame-pointer are incompatible
How am I supposed to deal with it? (Given that I am interested in profiling just some of the tools the package provides, I ask the more specific questions below, hoping they can be more easily answered.)
What is this "omit-frame-pointer" and is it really needed?
Can I say to GNU make to avoid such conflicts, ignoring the command-line (in this case "-pg") flags?
Can I tell GNU make which code (tools) I want to enable the profiling ("-pg") flag?
Thanks.
"omit frame pointer" does exactly that: it instructs the compiler not to set up and save the frame pointer when it detects that a function doesn't need one. This saves a little time on each call, and it also frees a register that can then be used for further optimizations.
On the other hand, walking the call stack becomes impossible, so to enable debugging or profiling you need to restore the frame pointer explicitly.
To do this, add -fno-omit-frame-pointer to the debug flags.
You may also have to disable all code optimizations with the -O0 flag.
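Putting the pieces together, a minimal sketch (file and function names are made up) that profiles cleanly when both flags are passed through CFLAGS:

```c
/* Compile:  gcc -pg -fno-omit-frame-pointer -O0 demo.c -o demo
   Then run ./demo and inspect the profile with:  gprof demo gmon.out */
long busy_work(long n)
{
    long acc = 0;
    long i;
    for (i = 0; i < n; i++)
        acc += i % 7;  /* gives the profiler something to attribute */
    return acc;
}
```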

What does -g option do in gcc

I see many tutorials on gdb asking to use the -g option while compiling a C program. I fail to understand what the -g option actually does.
It makes the compiler add debug information to the resulting binaries. This information allows a debugger to associate the instructions in the code with source code files and line numbers. Having debug symbols makes certain kinds of debugging (like stepping through code) much easier, and some possible at all.
The -g option actually has a few tunable parameters, check the manual. Also, it's most useful if you don't optimize the code, so use -O0 or -Og (in newer versions) - optimizations break the connection between instructions and source code. (Most importantly you have to not omit frame pointers from function calls, which is a popular optimization but basically completely ruins the ability to walk up the call stack.)
The debug symbols themselves are written in a standardized language (I think it's DWARF2), and there are libraries for reading that. A program could even read its own debug symbols at runtime, for instance.
Debug symbols (as well as other kinds of symbols like function names) can be removed from a binary later on with the strip command. However, since you'll usually combine debug symbols with unoptimized builds, there's not much point in that; rather, you'd build a release binary with different optimizations and without symbols from the start.
Other compilers such as MSVC don't include debug information in the binary itself, but rather store it in a separate file and/or a "symbol server" -- so if the home user's application crashes and you get the core dump, you can pull up the symbols from your server and get a readable stack trace. GCC might add a feature like that in the future; I've seen some discussions about it.
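For a hands-on check, a toy program (names illustrative; the commands assume a typical GNU/Linux toolchain) shows the effect of -g and strip:

```c
/* gcc -g -O0 fact.c -o fact   -> binary carrying DWARF debug info
   strip fact                  -> same code, debug symbols removed
   ("file fact" reports "with debug_info" only before stripping.) */
unsigned long fact(unsigned n)
{
    return n < 2 ? 1UL : n * fact(n - 1);  /* easy to single-step in gdb */
}
```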

Problems with Code::Blocks builds

I'm using Code::Blocks to develop a C application in Windows. When I build the application it builds and runs as expected. When my colleague builds the application it builds but doesn't behave correctly when it runs. The exe files created are also different sizes. The Code::Blocks project is stored in subversion and is as far as we can tell the same for both of us.
Can anyone suggest what could be causing the differences?
To summarize:
identical project file and sources checked out from svn
identical libraries as far as you can tell
presumably same compiler used
differently sized executables
different program behaviour
A different compiler, even a different version, could explain differently sized programs but not different behaviour, unless there is a serious compiler bug, which is extremely unlikely. You can ask the compiler for its version to be 100% sure, however (if you use the GNU compiler, try e.g. gcc --version).
Different versions of static libraries could similarly explain different program sizes, but should normally not result in incorrect behaviour. To turn the "as far as I can tell" into certainty, you and your colleague should either both do a clean install of these libs (from the same agreed version), or compare checksums (such as md5sum).
If you can safely rule out the possibility of malware on your colleague's computer (which could very well explain both symptoms!), the likely reason for the combination of "differently sized program and different/wrong behaviour" is that one of you has some optimization options enabled or compiles against a different language standard, and at the same time the source contains code that is not allowed by the standard (but tolerated, maybe with a warning) or triggers undefined behaviour.
Always compile at least with -Wall -Wtraditional and fix anything the compiler complains about, even if you think it's silly. Always. This prevents 99% of all "behaves strangely sometimes" kind of errors. Really, compiler warnings are not an annoyance, they are a help.
Note that having different build options in two different places can happen even if it's not immediately obvious. First, the project file could have been locally changed and not committed; as long as no conflicting version is committed on the other side, you will never see so much as a warning from Subversion. Second, your colleague could have set some options globally in the compiler preferences, which are applied to every project.
To rule out the possibility of different build settings, save the build log on each machine to a text file, and diff them.
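One classic example of the kind of standard-violating code meant above is type-punning through pointers: it often compiles with at most a warning, yet an optimizing build may legally behave differently from an -O0 build. The names below are illustrative, and the expected bit pattern assumes IEEE-754 floats; bits_ok shows the well-defined memcpy alternative:

```c
#include <string.h>

/* Undefined behaviour: type-punning through an incompatible pointer.
   A strict-aliasing optimizer at -O2 may legally produce different
   results than an -O0 build of the same code. */
unsigned bits_ub(float f)
{
    return *(unsigned *)&f;
}

/* Well-defined replacement: copy the object representation. */
unsigned bits_ok(float f)
{
    unsigned u;
    memcpy(&u, &f, sizeof u);
    return u;
}
```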

C/GCC Warnings - Write once and test everywhere?

I'm writing a command line program in ANSI C to parse a Quake 2 map file to report how many entities and textures are being used. My development machine is MacBook. I'm testing on OS X Snow Leopard (32-bit), Windows XP (32-bit) and Vista (64-bit), and Ubuntu 9.10 (32-bit).
The code compiles flawlessly on OS X with GCC 4.2. The other platforms, not so much.
Visual Studio 2005 complained about an array declaration in the middle of the main() block (the array's size isn't known until that point) that it insisted should be declared at the top. I fixed that by declaring a pointer at the top and writing a function to create the array.
Dev-C++ and GCC (3.4) on Windows had no complaints.
Cygwin and GCC (4.4) on Windows complained that an "array subscript has type char". I added (int) casts to fix that.
Ubuntu and GCC (4.4) complains about ignoring the return value of fread, although I read elsewhere that it might be a bug in the way Ubuntu packages GCC. In the context I'm using fread, this one seems safe to ignore; the warning only appears with the -O3 flag.
Except for Visual Studio 2005, all the compilers I tested with are some version of GCC. Chasing down all these errors and warnings is a serious pain in the butt. Until now, I've been using the following flags in my Makefile:
debug: -pedantic -Wall
release: -O3
Is there a set of GCC flags that I should be using to catch all the errors on my primary development machine? Or is write once and test everywhere a fact of life?
Irritatingly enough, the C dialect in Visual Studio (even in the Visual Studio 2010 beta!) is old and doesn't support all C99 features; being able to mix declarations and executable statements is probably the most irritating omission. The least evil option might be to compile as C++ on this platform, even if it requires some re-arranging of your code to make it valid as both C++ and C.
Other than that, as Ken's comment said, "write-once-test-everywhere is a fact of life". Lint can be a good help (as Chris wrote), but to find all the incompatibilities in syntax and semantics you really need to compile and test your program on several compilers and systems. Doing this can actually help find errors and problems in your code, even if you don't intend to actually run the program on more than one system.
Get yourself a copy of Lint. Lint is a static analysis tool that will pretty much cover the full range of compiler errors and warnings, and then some. As someone who frequently writes code targeting different platforms and compilers, I find that ensuring the code passes through Lint is a pretty good barometer for getting it to run across all compilers.
The best set of gcc flags to approximate lint is something like:
-ansi -pedantic -W -Wall -Wundef -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wcast-qual -Wwrite-strings -Weffc++
I also often use -Wno-long-long if I am going to be writing 64-bit code, because many compilers complain that long long is not a C++ type.
Add -ansi to the mix. The canonical set is -ansi -pedantic -Wall.
That said, you will probably find other quirks with MSVC, because it's a different compiler altogether, with its own warnings - so you might have to tweak its options too.
You can save much time by using -Wall and Lint, because they will help you understand your code better. Rework your code to minimize the safe-to-ignore warnings. You will be less likely to agonize over a hard to reproduce run-time failure. Also, whoever maintains your code will find it easier to make changes.
When you are using Visual Studio, explore the compiler options for Lint-like tools. I forget where they are, and they slow down your build, but they are helpful.
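For reference, a small sketch (illustrative, not from the original code) of fixes for two of the warnings mentioned above: converting char through unsigned char before using it as a subscript or ctype argument, and checking fread's return value instead of discarding it:

```c
#include <ctype.h>
#include <stdio.h>

/* Fix for "array subscript has type char": plain char may be signed, so
   convert through unsigned char before indexing or calling ctype. */
size_t count_spaces(const char *s)
{
    size_t n = 0;
    for (; *s; s++)
        if (isspace((unsigned char)*s))
            n++;
    return n;
}

/* Fix for "ignoring return value of fread": actually check how much
   was read instead of discarding the result. */
size_t read_some(void)
{
    char buf[64];
    size_t got;
    FILE *fp = tmpfile();
    if (!fp)
        return 0;
    fputs("hello world\n", fp);
    rewind(fp);
    got = fread(buf, 1, sizeof buf, fp);
    fclose(fp);
    return got;  /* 12 bytes were written above */
}
```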
