GCC -g vs -g3 GDB Flag: What is the Difference?

When compiling C source code with either gcc or Clang, I always use the -g flag to generate debugging information for gdb.
gcc -g -o helloworld helloworld.c
I noticed that some people recommend -g3 instead. What is the difference between the -g and -g3 flags? Also, is there a difference between -g and -ggdb?

From the docs:
-g
Produce debugging information in the operating system's native format (stabs, COFF, XCOFF, or DWARF 2). GDB can work with this
debugging information. On most systems that use stabs format, -g
enables use of extra debugging information that only GDB can use; this
extra information makes debugging work better in GDB but probably
makes other debuggers crash or refuse to read the program. If you want
to control for certain whether to generate the extra information, use
-gstabs+, -gstabs, -gxcoff+, -gxcoff, or -gvms (see below).
...
-ggdb
Produce debugging information for use by GDB. This means to use the most expressive format available (DWARF 2, stabs, or the native
format if neither of those are supported), including GDB extensions if
at all possible.
-glevel, -ggdblevel, -gstabslevel, -gcofflevel, -gxcofflevel, -gvmslevel
Request debugging information and also use level to specify how much information. The default level is 2. Level 0 produces no
debug information at all. Thus, -g0 negates -g.
....
Level 3 includes extra information, such as all the macro definitions
present in the program. Some debuggers support macro expansion when
you use -g3.

tl;dr: To answer your specific question, -g3 "includes extra information such as macro definitions... Some debuggers support macro expansion when you use -g3", while -g does not include this extra information.
The broader answer is that gcc supports four levels of debug information, from -g0 (debug information disabled) through -g3 (maximum debug information).
Specifying -g is equivalent to -g2. Curiously, the gcc docs say little about what information -g/-g2 includes or excludes:
Request debugging information and also use level to specify how much information. The default level is 2.
Level 0 produces no debug information at all. Thus, -g0 negates -g.
Level 1 produces minimal information, enough for making backtraces in parts of the program that you don't plan to debug. This includes descriptions of functions and external variables, and line number tables, but no information about local variables.
Level 3 includes extra information, such as all the macro definitions present in the program. Some debuggers support macro expansion when you use -g3.
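
As a concrete illustration of what that extra macro information buys you, here is a minimal example of my own (the file name macro_demo.c and the SQUARE macro are mine, not from the question):

/* macro_demo.c -- hypothetical demo file (mine, not from the question).
 *
 * Compile it twice and compare what gdb can do:
 *   gcc -g  -o macro_demo macro_demo.c    (macro definitions are NOT recorded)
 *   gcc -g3 -o macro_demo macro_demo.c    (macro definitions ARE recorded)
 *
 * Then, inside gdb, after "break main" and "run":
 *   (gdb) info macro SQUARE     <- only useful with the -g3 build
 *   (gdb) print SQUARE(4)       <- macro expansion also needs the -g3 build
 */
#include <stdio.h>

#define SQUARE(x) ((x) * (x))

int main(void) {
    printf("%d\n", SQUARE(4));
    return 0;
}

With the plain -g build, gdb will report that it has no macro information for that code; with the -g3 build it can show you the definition of SQUARE and expand the macro in expressions.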

Related

Is -g the same as -g2 for gcc and clang?

GCC's documentation of the debug options is not that comprehensive, so I tried compiling a binary with the different options -g, -g1, -g2, and -g3 and got the following results.
When compiling with -g and -g2, the binary is the same size: 13 KB.
When compiling with -g1, the binary ends up at 9.3 KB.
When compiling with -g3, the binary is 73 KB.
So is -g equivalent to -g2? But the level 2 is not even explained in the documentation. Here is what the docs say (no level 2):
Level 0 produces no debug information at all. Thus, -g0 negates -g.
Level 1 produces minimal information, enough for making backtraces in
parts of the program that you don’t plan to debug. This includes
descriptions of functions and external variables, and line number
tables, but no information about local variables.
Level 3 includes extra information, such as all the macro definitions
present in the program. Some debuggers support macro expansion when
you use -g3.
Or am I missing something?
So is -g equivalent to -g2?
Yes.
But the level 2 is not even explained in the documentation. [...] Or am I missing something?
You are missing something. You have overlooked the sentence immediately preceding your quotation:
The default level is 2.
This means that -g2 means the same thing as -g. (And -ggdb2 means the same thing as -ggdb, etc.) This serves in part as a reference for each of the -g*2 options to the docs of the corresponding unnumbered -g* options, where you will find the relevant documentation. In particular, the documentation for -g2 is the documentation for -g, which appears first in the section.

Is it possible to pass GCC arguments directly from C source code?

I want to be able to pass arguments to GCC from my C source code, something like this...
// pass the "-ggdb" argument to GCC (I know this won't work!)
#define GCC_DEBUG_ARG -ggdb
int main(void) {
    return 0;
}
With this code I'd like to simply run gcc myfile.c which would really run gcc myfile.c -ggdb (as the "-ggdb" argument has been picked up from the C source code).
I'm not interested in using make with the CFLAGS environment variable; I just want to know if it's possible to embed GCC options within C source code.
What you want to do is not possible in general.
However, recent GCC (e.g. GCC 8 at the end of 2018) accepts many options, and some of them can be set from the source code through function attributes or function-specific pragmas (these do not accept -g, but they do accept optimization settings such as -O2); see the sketch below.
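Here is a minimal sketch of my own (not from the answer) of those two per-function mechanisms; note that they control optimization options, not the debug level:

/* pragma_demo.c -- hypothetical example file (mine). */
#include <stdio.h>

/* Function-specific pragma (GCC extension): functions defined after this
 * point are compiled as if -O2 had been passed on the command line. */
#pragma GCC optimize ("O2")

/* Function attribute (GCC extension): this one function is compiled as if
 * -O3 had been requested, overriding the pragma above. */
__attribute__((optimize("O3")))
static int square(int x) {
    return x * x;
}

int main(void) {
    printf("%d\n", square(7));
    return 0;
}

The attribute overrides the pragma for that one function; neither mechanism gives you a way to request -g from within the source.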
Also, you can simply use -g in every compilation: with GCC it can be combined with optimization flags such as -O2, so runtime performance won't suffer (of course, -g does increase compile time and the size of the produced object file). Notice that (on Linux) the DWARF debug information is visible in the generated assembler file: for example, compile your foo.c with gcc -Wall -g -O -S -fverbose-asm foo.c, look into the generated foo.s, and repeat after removing the -g.
I'd like to simply run gcc myfile.c
That is a very bad habit. You should run gcc -Wall -Wextra -g myfile.c -o myprog to get all warnings (you really want them) and debug info in your executable myprog. Read How to debug small programs before continuing coding your program.
I'm not interested in using make with the CFLAGS environment variable
But you really should. Using make or some other build automation tool (e.g. ninja, omake, rake, etc.) is, in practice, the conventional and usual way of using GCC.
Alternatively, on Linux, write a tiny shell script doing the compilation (this is particularly worthwhile if your program is a single source file; for anything bigger, you really should use some build automation tool). Finally, if you use emacs as your source code editor, you could add a few lines of comments (like at the end of my manydl.c example) specifying Emacs file variables that tune the compilation (done from emacs).
If these conventions surprise you, read about the Unix philosophy, then study, for inspiration, the source code of some existing free software (e.g. on GitHub, GitLab, or in your favorite Linux distribution).
Finally, GCC itself is a free software project (but a huge one, with more than five million lines of mostly C++ source code). So you can improve it the way you desire (if you follow its GPLv3+ license), after having studied its source code somehow. That would take you several months (or years) of work, because GCC is very complex to understand.
See also this answer to a related question.
You might also (but I recommend not to, because it is very confusing) play tricks with your PATH variable and place some directory, e.g. $HOME/bin/, ahead of /usr/bin/ (which contains /usr/bin/gcc), with your own shell script named gcc in it; but don't do that, you'll only confuse yourself. Instead, write some "generic" mygcc shell script which runs /usr/bin/gcc and adds the appropriate flags (I believe it is not worth the effort).

How can there be such a big memory difference with the -static compilation command? (C)

I am working on a task for university. There is a website that checks my memory usage, and it compiles the .c files with:
/usr/bin/gcc -DEVAL -std=c11 -O2 -pipe -static -s -o program programname.c -lm
and it says my program exceeds the memory limit of 4 MiB, which is a lot, I think. I was told this command makes it use more memory than the standard compilation I use on my PC, like this:
gcc myprog.c -o myprog
I launched the executable created by this one compilation with:
/usr/bin/time -v ./myprog
and under "maximum resident set size" it says 1708 kilobytes, which should be 1,6 Mibs. So how can it be that for the university checker my program goes over 4 Mibs? I have eliminated all the possible mallocs i have, I just left the essential ones but it still says it goes over the limit, what else should I improve? I'm almost thinking the wesite has an error or something...
From GNU GCC Manual, Page 197:
-static On systems that support dynamic linking, this overrides ‘-pie’ and prevents linking with the shared libraries. On other systems, this
option has no effect.
If you don't know about the pie flag quoted here, have a look at this section:
-pie Produce a dynamically linked position independent executable on targets that support it. For predictable results, you must also
specify the same set of options used for compilation (‘-fpie’,
‘-fPIE’, or model suboptions) when you specify this linker option.
To answer your question: yes, it is possible that this overhead is generated by the -static flag, because in that case the program cannot take advantage of dynamic linking: the standard library's code is merged into the executable you've produced instead of being shared at run time.
As was suggested in the comments, you should compile your code with the same flags as the website to get an idea of the real overhead of your program (be sure that your gcc version is the same as the website's), and you should also do some common manual optimizations such as constant folding, function inlining, etc. A good reference for these optimizations could be this one
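
If you want to check the peak memory usage from inside the program itself, here is a minimal sketch of mine (not part of the original answer): getrusage reports the same "maximum resident set size" figure that /usr/bin/time -v prints.

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage usage;

    /* ... the real program would do its work here ... */

    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        /* On Linux, ru_maxrss is expressed in kilobytes. */
        fprintf(stderr, "maximum resident set size: %ld KB\n", usage.ru_maxrss);
    }
    return 0;
}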

Code::Blocks + MinGW: minimize the size of a static library

I've tried passing -ffunction-sections -fdata-sections for the compiler, but that doesn't seem to have the desired effect. As far as I understand, I also have to pass -Wl,--gc-sections to the linker, but I'm not linking the files at this point. I just want to have a .a library file as small as possible, with minimal redundant code/data.
The compiler performs optimization based on the knowledge it has of the program. Optimization levels -O2 and above, in particular, enable unit-at-a-time mode, which allows the compiler to consider information gained from later functions in the file when compiling a function. Compiling multiple files at once to a single output file in unit-at-a-time mode allows the compiler to use information gained from all of the files when compiling each of them.
Not all optimizations are controlled directly by a flag.
-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file.
Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing so. When you specify these options, the assembler and linker will create larger object and executable files and will also be slower. You will not be able to use gprof on all systems if you specify this option and you may have problems with debugging if you specify both this option and -g.
You can use the following link for more details:
http://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Optimize-Options.html
The following will reduce the size of your compiled objects (and thus the static library)
-Os -g0 -fvisibility=hidden -fomit-frame-pointer -mno-accumulate-outgoing-args -finline-small-functions -fno-unwind-tables -fno-asynchronous-unwind-tables -s
The following will increase the size of objects (though they may make the binary smaller)
-ffunction-sections -fdata-sections -flto -g -g1 -g2 -g3
The only way to really make a static library smaller is to remove the unneeded code by compiling the static library with -ffunction-sections -fdata-sections and linking the final product with -Wl,--gc-sections,--print-gc-sections to find out what parts are cruft. Then go back and remove those functions/variables (this also works for making smaller shared libraries; see http://libraryopt.sf.net)
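To make that workflow concrete, here is a minimal sketch of my own (the file names libdemo.c and app.c and the library name are hypothetical): compile the library with per-function sections, then let the final link discard the sections nothing references.

/* libdemo.c -- with -ffunction-sections, each function below lands in its
 * own .text.<name> section, so the final link can drop unused ones:
 *
 *   gcc -Os -ffunction-sections -fdata-sections -c libdemo.c
 *   ar rcs libdemo.a libdemo.o
 *   gcc -o app app.c libdemo.a -Wl,--gc-sections,--print-gc-sections
 *
 * If app.c only calls used_helper(), --print-gc-sections reports that the
 * section holding unused_helper() was removed from the final executable.
 */
int used_helper(int x)   { return x + 1; }
int unused_helper(int x) { return x * 2; }

Note that --gc-sections acts at the final link, not on the archive itself: the .a file still contains every section; the options simply let the linker drop the ones the final program never uses.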
Compare the size of the library with -g in the compiler flags versus without it. I can easily imagine you would double the size of a static library if it includes debug information. If that is what you saw, chances are you have already stripped the library of debug symbols and hence cannot significantly shrink the library file further. This could explain your memory of cutting the size in half at some point.

Using gcc with -g flag before running gdb

without -g flag:
(gdb) break main
Breakpoint 1 at 0x8048274
with -g flag:
(gdb) break main
Breakpoint 1 at 0x8048277: file example.c, line 31.
I vaguely know that the -g option stores the symbol table information.
What does the -g option exactly do?
Is there any way I can look at this symbol table?
-g (for gcc) stores debugging information in the output files so that debuggers can pick it up and present more useful information during the debugging process. Exactly what gets stored can depend a great deal on the environment you're running in.
One way to look at what this consists of is to use objdump with the --debugging option (or its equivalent short form -g which matches gcc).
The -g command line option asks the compiler to emit additional debugging information; on Linux, the format is DWARF 2, but other platforms may have different defaults -- stabs was more common, once upon a time.
readelf --debug-dump can be used to dump the debugging information itself if you're curious in what it adds -- you can see the entire program source in the .debug_info section, for example.
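
For a quick look yourself, here is a tiny file of my own (inspect_demo.c is a hypothetical name) together with the commands mentioned above:

/* inspect_demo.c
 *
 *   gcc -g -c inspect_demo.c
 *   objdump --debugging inspect_demo.o         (human-readable dump)
 *   readelf --debug-dump=info inspect_demo.o   (raw DWARF .debug_info section)
 *
 * With -g, both dumps show the source file name, the type and location of
 * 'answer', and the line-number tables; rebuild without -g and they come
 * back essentially empty.
 */
int answer = 42;

int main(void) {
    return answer;
}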
