Best practices regarding warning outputs when compiling in gcc? [closed] - c

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
I'm learning C and am compiling all my programs with gcc in the Windows Subsystem for Linux.
I learned that I can pass gcc various flags; some basic ones are -o or -lm.
I discovered the -Wall flag, and it has output some warnings in my shell that I then fix.
Right now my usual compilation line looks like cc -Wall -lm -o file.exe file.c.
I recently learned that there are a lot of other flags, some regarding warnings; one is -w, which is supposed to show even more warnings than -Wall, so my questions are:
1. Should I always specify -w? Or are there any drawbacks, or maybe even incorrectly issued warnings?
2. Also, what are the best practices when compiling a program, i.e., what options/flags do you always turn on?

The more professional you become, the more warnings you strive to enable. My current favorite set is -Wall -Wextra -pedantic, which is a good middle ground.
If you think you have received a false warning, think again: the compiler is almost always right. Until you become intimately familiar with the C standard, you had better ask, for example here on SO.
Some think that -Werror has some value, and for their development process they are probably right: without it, GCC exits with status 0 when it only issues warnings, so a build tool such as make will not stop the build on them.
All other flags depend on the purpose of the command. Some examples:
-o is great to define the name of the output file.
-O… sets the optimization level; some years ago -O3 could cause problems, but today it should be fine.
-s strips the debug information from the output; good for the release version.
-g includes debug information to the output; there are variants depending on your goal.
-Wl,--wrap… is really sophisticated for special debugging and/or testing; read its documentation.

Another good practice is the liberal use of -Wfatal-errors.
GCC compiler errors can sometimes run several pages long (especially if you ever have to compile C++ code). There are several reasons why compiler errors can be so long; a typical example is a missing bracket near the top of the file, which makes the compiler report errors for many lines in the remainder of the file.
Seasoned developers know to direct their attention to the top-most error first, and fixing that often resolves all the following pages of errors. To support this working practice, you can add -Wfatal-errors to tell gcc that you only want to see the first error and that it should stop compiling once one is detected. Thus you never get the daunting pages of output to scroll through.
Best practice is then to switch in and out of this -Wfatal-errors mode, or just leave it on most of the time, turning it off occasionally when you would like to see more errors for a particular problem.
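A quick way to see the effect (file and identifier names are invented for the demo): a file with two independent errors reports both by default, but only the first with -Wfatal-errors:

```shell
cat > two_errors.c <<'EOF'
int f(void) { return undeclared_a; }
int g(void) { return undeclared_b; }
EOF

gcc -c two_errors.c                  # reports both undeclared_a and undeclared_b
gcc -Wfatal-errors -c two_errors.c   # stops after the undeclared_a error
```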

Should I always specify -w?
-w suppresses all warning messages. You should not use -w.
If you meant -W: it is an older spelling of -Wextra and has the same meaning.
Also, rules are not absolute; there are project-specific needs and recommendations. If I write a short program, I sometimes use something like gcc -ansi -xc - <<<'a;b;main() { printf("%d\n", a * b); }' because I'm in a hurry. But usually, if not always, you should enable as many warnings as you can.
Or are there any drawbacks, or maybe even incorrectly issued warnings?
The drawback of enabling too many warnings is clutter in the compiler output: irrelevant warnings hide the important ones. The warnings I do not like:
unused functions -Wunused-function
unused function parameters -Wunused-parameter
Although the code may be perfectly fine, a ton of "unused function" warnings may be issued for functions that are used in, say, a different project configuration selected by different macro definitions.
what are the best practices when compiling a program, i.e., what options/flags do you always turn on?
I recommend for warnings: -Wall -Wextra -Wwrite-strings -Wno-unused-function -Wno-unused-parameter
I also recommend following flags to write safe code and protect against runtime mistakes: -D_FORTIFY_SOURCE=2 -fsanitize=address -fsanitize=undefined -fsanitize=leak -fsanitize=pointer-subtract -fsanitize=pointer-compare -fno-omit-frame-pointer -fstack-protector-all -fstack-clash-protection -fcf-protection
Fun fact: -Wwrite-strings is not enabled with -Wall -Wextra and looks like a valuable warning to me.
See also Red Hat's recommended compiler and linker flags for GCC.
With GCC 10, the static analyzer options (-fanalyzer) are also worth a look.

Related

compile C code with no flag other than -o

I'm doing an assignment and the instructions say: "Your code should compile correctly (no warnings and errors) with no flags other than -o. You should compile your code to a debug-able executable without producing any warnings or error messages. (Note that -O and -o are different flags.)"
I'm confused about what "no flags other than -o" means. As far as I know, "a flag is a general tool for rudimentary signalling and identification, especially in environments where communication is similarly challenging", so does the requirement mean that we can't use any for loop?
No, the requirement is referring to command-line flags, i.e. options to the compiler. E.g., assuming gcc: gcc -o foo foo.c.
However, since the program is meant to be debuggable, the requirements are contradictory because creating a debuggable executable requires (for gcc) the -g flag.
On many compilers, you can control the warning level with flags. The requirement here is to not use those flags, yet raise no warning.
Put differently, you are simply asked to write neat, clean C code that uses no compiler-specific extensions and no semi-valid constructs (code that does not conform to the standard but is accepted, with warnings, by the compiler).

about the flags used in compilation

I ran into this while studying a line of code in cmake for building a library:
-Wall -Wfloat-equal -o3 -fPIC
What do these compiler flags mean and how do they work? Why do they need to be inserted?
So
-Wall
Enables not quite all, but an awful lot of, compiler warning messages. It should be used because it helps you write better code: you'll know when something looks wrong.
-Wfloat-equal
Warns if floating-point numbers are used in equality comparisons. Comparing floats for equality is risky business, because a value like 1.0 isn't necessarily stored exactly. Note that -Wall does not enable this warning, which is why it is specified separately here.
-o3
As written, lowercase -o3 actually tells the compiler to name its output file 3; it is almost certainly a typo for -O3, optimization level 3, i.e. the highest standard optimization level.
-fPIC
Generates position-independent code. This is a bit more complicated (it has been asked about before), but it is needed when the code is to be included in a shared library.

pthread_cleanup_push and O2 CFLAGS

I get a warning when compiling a piece of code that uses pthread_cleanup_push/pop with -O2 in CFLAGS. Just removing -O2 from the Makefile makes it compile without issue.
Is it forbidden to use gcc optimization with these pthread macros? I was not able to find anything about it in the man pages or documentation. By the way, is there any alternative for cleaning up at the end of a thread? Also, it works perfectly with the ARM gcc, but not with x86 gcc.
The warning:
x/x.c:1292:2: warning: variable ‘__cancel_routine’ might be clobbered by ‘longjmp’ or ‘vfork’ [-Wclobbered]
pthread_cleanup_push(x_cleanup, &fd);
My current CFLAGS options:
-W -Wall -Wformat -Wformat-security -Wextra -Wno-unused-result
-Wextra -Wno-long-long -Wno-variadic-macros -Wno-missing-field-initializers
-std=gnu99 -O2
This issue has been reported several times in the GCC tracker (see here). I believe the warning points to a real issue in pthread.h (see my comment): __cancel_routine is not marked volatile, so its value is indeed indeterminate after the return via longjmp, which may have arbitrary consequences.
Is the only solution to remove -Werror until there is a fix?
I'd rather go with -Wno-clobbered, to keep other warnings enabled.
Or roll back to a previous version of gcc on x86?
You'd have to roll back to pre-2014 times, which is quite a change... I think that if the code works for you, just disable -Wclobbered (with a descriptive comment).
But I did want to be sure that it was not a bigger issue that could cause unexpected behavior in my code, or bugs.
The glibc code looks really suspicious. I'd wait for comments from the GCC devs and, if there are none, report this to the glibc developers.

Curious result from the gcc linker behaviour around -ffast-math

I've noticed an interesting phenomenon in which flags passed to the compiler and linker affect the running code in ways I cannot understand.
I have a library that presents different implementations of the same algorithm in order to test the run speed of those different implementations.
Initially, I tested the situation with a pair of identical implementations to check that the correct thing happened (both ran at roughly the same speed). I began by compiling the objects (one per implementation) with the following compiler flags:
-g -funroll-loops -flto -Ofast -Werror
and then during linking passed gcc the following flags:
-Ofast -flto=4 -fuse-linker-plugin
This gave a library that ran blazingly fast, but curiously was reliably and repeatably ~7% faster for the first object that was included in the arguments during linking (so either implementation was faster if it was linked first).
so with:
gcc -o libfoo.so -O3 -ffast-math -flto=4 -fuse-linker-plugin -shared support_obj.os obj1.os obj2.os -lm
vs
gcc -o libfoo.so -O3 -ffast-math -flto=4 -fuse-linker-plugin -shared support_obj.os obj2.os obj1.os -lm
the first case had the implementation in obj1 running faster than the implementation in obj2. In the second case, the converse was true. To be clear, the code is identical in both cases except for the function entry name.
Now I removed this strange link-argument-order difference (and actually sped it up a bit) by removing the -Ofast flag during linking.
I can replicate mostly the same situation by changing -Ofast to -O3 -ffast-math, but in that case I need to supply -ffast-math during linking, which leads again to the strange ordering speed difference. I'm not sure why the speed-up is maintained for -Ofast but not for -ffast-math when -ffast-math is not passed during linking, but I can accept it might be down to the link time optimisation passing the relevant info in one case but not the other. This doesn't explain the speed disparity though.
Removing -ffast-math means it runs ~8 times slower.
Is anybody able to shed some light on what might be causing this effect? I'm really keen to know what is going on so that I don't accidentally trigger it down the line.
The run speed test is performed in python using a wrapper around the library and timeit, and I'm fairly sure this is doing the right thing (I can twiddle orders and things to show the python side effects are negligible).
I also tested the library for correctness of output, so I can be reasonably confident of that too.
Too long for a comment, so posted as an answer:
Because of the risk of obtaining incorrect results in math operations, I would suggest not using it.
Using -ffast-math and/or -Ofast can lead to incorrect results, as these excerpts from the GCC manual explain:
option: -ffast-math. Sets the options:
-fno-math-errno,
-funsafe-math-optimizations,
-ffinite-math-only,
-fno-rounding-math,
-fno-signaling-nans and
-fcx-limited-range.
This option causes the preprocessor macro __FAST_MATH__ to be defined.
This option is not turned on by any -O option besides -Ofast since it can result in incorrect output for programs that depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
option: -Ofast
Disregard strict standards compliance. -Ofast enables all -O3 optimizations. It also enables optimizations that are not valid for all standard-compliant programs. It turns on -ffast-math and the Fortran-specific -fno-protect-parens and -fstack-arrays.

how to write a makefile in linux for a c program

I have written a C program for a doubly linked list in Linux. The program is named program2.c.
I compiled it using "cc program2.c -o out2".
It compiled and also executed fine.
I also tried writing a makefile. My makefile includes:
all: doublelinkedlist
doublelinkedlist: program2.c
	gcc -Wall -Werror -O2 -o $# $<
clean:
	\rm -fr doublelinkedlist
When I run make, it gives me errors.
Can anyone please help me write a makefile?
When using the makefile, you also started using the -Wall -Werror flags. This is a very good thing.
Now the compiler looks for more suspicious things in your program and refuses to compile if it finds any. This can be a great help in catching bugs.
However, these warnings mean your program doesn't compile, and you'll need to fix them by changing the code until the compiler is satisfied that all is OK (as far as the compiler can check, of course; the code can still contain bugs).
Common issues are mixing different types and not paying attention to the const keyword. But for help with specific warnings, you'll need to show the warnings and the code. Or better: search for each of them on Stack Overflow, and I'm sure you'll find good answers.
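For reference, a cleaned-up sketch of the makefile from the question might look like this. Note the automatic variable $@ (the target name; the $# in the question is presumably a typo for it), $< (the first prerequisite), and that each command line must begin with a tab character:

```make
CC     = gcc
CFLAGS = -Wall -Werror -O2

all: doublelinkedlist

doublelinkedlist: program2.c
	$(CC) $(CFLAGS) -o $@ $<

clean:
	rm -f doublelinkedlist
```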
