Using compiler builtins without linking the C standard library - C

I've seen this question, whose answers conclude that builtin math functions (like __builtin_sin, __builtin_fmod, etc.) can be substituted for functions from the C standard library.
I wrote the following program:
float fmod_test(float arg1, float arg2) {
    return __builtin_fmod(arg1, arg2);
}
void _start() {}
And compiled it as follows:
gcc -nostdlib test.c -o test
Unfortunately, I received the following error:
/tmp/ccuHpvCP.o: In function `fmod_test':
test.c:(.text+0x1d): undefined reference to `fmod'
collect2: error: ld returned 1 exit status
It seems that __builtin_fmod uses fmod in the background and needs to link to it, rather than producing an inline version as might be expected of a "built in" function.
Is there any way of using these builtin functions without linking to external libraries?

The answer to this question depends on exactly which C compiler you are using. You appear to be using GCC; the answer for that compiler is no.
These functions are "built in" in the sense that GCC knows their names and can optimize away some calls to them, for instance fmod(7.0, 2.0) may well be evaluated at compile time. But GCC does not provide runtime definitions of these functions. It relies on the C library, which is a separate project, to provide them.

As the gcc manual says:
Many of these functions are only optimized in certain cases; if they are not optimized in a particular case, a call to the library function is emitted.
So there is no guaranteed way to avoid the possibility of a call to the library function.
However, you can experiment with how you call the function and with your optimization options, in hopes of finding a combination that does get inlined. In particular, with floating-point builtins, gcc will usually only inline them if -ffast-math is in effect, because its inline code may not attain as much precision or handle all the corner cases (NaN, infinity, denormals, setting errno, etc.) that the carefully written library function would. That's the case here, and indeed if you enable -ffast-math you do get inlined code: see on godbolt. (It will look better if you turn on optimization.)
Of course, if you later change your compiler options, or call the function in a different way, or switch to a different compiler version, the compiler might again emit a library call. You'll know if this happens because your program won't link, so at least it won't break silently, and you can then try to readjust your code and/or compilation options, or if necessary, write or import your own version of the function.
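For example, here is a minimal sketch of the program from the question recompiled with optimization and -ffast-math (exact results depend on your GCC version and target; on x86-64 this combination is enough to get the builtin expanded inline):

/* test.c */
float fmod_test(float arg1, float arg2) {
    return __builtin_fmod(arg1, arg2);  /* expanded inline under -ffast-math */
}

void _start(void) {}

Compiled with

gcc -nostdlib -O2 -ffast-math test.c -o test

the object file no longer contains an undefined reference to fmod, and the link succeeds.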

Related

Resolve undefined reference by stripping unused code

Assume we have the following C code:
void undefined_reference(void);

void bad(void) {
    undefined_reference();
}

int main(void) {}
In function bad we run into the linker error undefined reference to 'undefined_reference', as expected. This function is not actually used anywhere in the code, though, and as such, for the execution of the program, this undefined reference doesn't matter.
Is it possible to compile this code successfully, such that bad simply gets removed as it is never called (similar to tree-shaking in JavaScript)?
This function is not actually used anywhere in the code!
You know that, I know that, but the compiler doesn't. It deals with one translation unit at a time. It cannot divine out that there are no other translation units.
But main doesn't call anything, so there cannot be other translation units!
There can be code that runs before and after main (in an implementation-defined manner).
OK what about the linker? It sees the whole program!
Not really. Code can be loaded dynamically at run time (also by code that the linker cannot see).
So neither the compiler nor the linker even tries to find unused functions by default.
On some systems it is possible to instruct the compiler and the linker to try and garbage-collect unused code (and assume a whole-program view when doing so), but this is not usually the default mode of operation.
With gcc and gnu ld, you can use these options:
gcc -ffunction-sections -Wl,--gc-sections main.c -o main
Other systems may have different ways of doing this.
Many compilers (for example gcc) will compile and link it correctly if you enable optimizations and make the function bad static (otherwise it has external linkage):
https://godbolt.org/z/KrvfrYYdn
Another way is to add a stub version of this function (and a pragma or attribute that emits a warning), as sketched below.
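A hedged sketch of that approach; using GCC's warning attribute rather than a pragma, and the message text, are my own choices:

/* stub definition so the program links; warn if anyone actually calls it */
__attribute__((warning("undefined_reference() is only a stub")))
void undefined_reference(void);

void undefined_reference(void) {
    /* intentionally empty: never meant to run */
}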

Why would gcc change the order of functions in a binary?

There are many questions about forcing the order of functions in a binary to match the order of the source file.
For example, this post, that post and others.
I can't understand why gcc would want to change their order in the first place.
What could be gained from that?
Moreover, why is the default value of -ftoplevel-reorder true?
GCC can change the order of functions because the C standard (e.g. n1570 or newer) allows it to do that.
There is no obligation for GCC to compile a C function into a single function in the sense of the ELF format; see elf(5) on Linux.
In practice (with optimizations enabled: try compiling foo.c with gcc -Wall -fverbose-asm -O3 foo.c, then look into the emitted foo.s assembler file), the GCC compiler builds intermediate representations like GIMPLE. A great many optimizations transform GIMPLE into better GIMPLE.
Once the GIMPLE representation is "good enough", the compiler transforms it into RTL.
On Linux systems, you could use dladdr(3) to find the nearest ELF function to a given address. You can also use backtrace(3) to inspect your call stack at runtime.
GCC can even remove functions entirely, in particular static functions whose calls would be inline expanded (even without any inline keyword).
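A minimal sketch of that removal (my own example): compile the file below with gcc -O2 -c helper.c and inspect the result with nm; the static helper is inlined into its caller, and no standalone symbol for it remains in the object file.

/* helper.c */
static int twice(int x) {
    return 2 * x;       /* inlined into call_twice, then discarded */
}

int call_twice(int y) {
    return twice(y);
}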
I tend to believe that if you compile and link your entire program with gcc -O3 -flto -fwhole-program, some non-static but unused functions can be removed too.
And you can always write your own GCC plugin to change the order of functions.
If you want to guess how GCC works: download and study its source code (since it is free software) and compile it on your machine, invoke it with GCC developer options, ask questions on GCC mailing lists...
See also the bismon static source code analyzer (some work in progress which could interest you), and the DECODER project. You can contact me by email about both. You could also contribute to RefPerSys and use it to generate GCC plugins (in C++ form).
What could be gained from that?
Optimization. If the compiler thinks some code is likely to be used a lot, it may put that code in a different region than code which is not expected to execute often (or which is an error path, where performance is not as important). And code which is likely to execute after, or temporally near, some other code should be placed nearby, so it is more likely to be in cache when needed.
__attribute__((hot)) and __attribute__((cold)) exist for some of the same reasons.
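For instance, here is a minimal sketch of those attributes (the function bodies are placeholders of my own):

__attribute__((hot)) int fast_path(int x) {
    return x + 1;       /* expected to run often: grouped with other hot code */
}

__attribute__((cold)) int error_path(int err) {
    return -err;        /* rarely taken: may be placed away from the hot region */
}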
Why is the default value of -ftoplevel-reorder true?
Because 99% of developers are not bothered by this default, and it makes programs faster. The 1% of developers who need to care about ordering use the attributes, profile-guided optimization, or other features which are likely to conflict with -fno-toplevel-reorder anyway.

Linking libraries built with different preprocessor flags or C standards

Scenario 1:
I want to link a new library (libA) into my program. libA was built using gcc with the -std=gnu99 flag, while the current libraries of my program were built without that option (let's assume gcc uses -std=gnu89 by default).
Scenario 2:
libB was built with preprocessor flags like "-D_XOPEN_SOURCE -D_XOPEN_SOURCE_EXTENDED" to enable XPG4 features, e.g. the msg_control member of struct msghdr. libC, meanwhile, was built without those preprocessor flags, and is then linked against libB.
Is it wrong to link libraries built with different preprocessor flags or C standards?
My concern is mainly about structure definitions mismatch.
Thanks.
Scenario 1 is completely safe for you. The -std= option in GCC checks the code for compatibility with a standard, but has nothing to do with the ABI, so you may freely combine precompiled code built with different -std options.
Scenario 2 may be unsafe. I will put here just one simple example, real cases may be much more tricky.
Consider, that you have some function, like:
#ifdef MYDEF
int foo(int x) { ... }
#else
int foo(float x) { ... }
#endif
Now you compile a.o with -DMYDEF and b.o without it, and a function bar in a.o calls foo in b.o. You link them together and everything seems fine. Then everything fails at runtime, and you may have a very hard time debugging why you are passing an int from one module while the callee expects a float.
Some more tricky cases may include conditionally defined structure fields, calling conventions, global variable sizes.
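For instance, a hypothetical sketch of the structure-field case (the struct and macro names are made up): two libraries compiled with and without -DWITH_TIMESTAMP silently disagree about the layout and size of the same structure.

struct packet {
    int id;
#ifdef WITH_TIMESTAMP   /* defined in one library's build, not the other's */
    long timestamp;     /* shifts the offset of payload and changes sizeof(struct packet) */
#endif
    char payload[32];
};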
P.S. This assumes all your sources are written in the same language, varying only -std options and macro definitions. Combining C and C++ code can be really tricky sometimes; I agree with Mikhail on that.
The few times I have encountered a structure definition mismatch were when combining C and C++ code; in those cases there was a clear warning that something terrible was happening.
Something like
/usr/lib/gcc/i586-suse-linux/4.3/../../../../i586-suse-linux/bin/ld: Warning: size of symbol `tree' changed from 324 in /tmp/ccvx8fpJ.o to 328 in gpu.o
See that question.

Generating link-time error for deprecated functions

Is there a way with gcc and GNU binutils to mark some functions such that they will generate an error at link-time if used? My situation is that I have some library functions which I am not removing for the sake of compatibility with existing binaries, but I want to ensure that no newly-compiled binary tries to make use of the functions. I can't just use compile-time gcc attributes because the offending code is ignoring my headers and detecting the presence of the functions with a configure script and prototyping them itself. My goal is to generate a link-time error for the bad configure scripts so that they stop detecting the existence of the functions.
Edit: An idea: would using assembly to specify the wrong .type for the entry points be compatible with the dynamic linker, but generate link errors when trying to link new programs?
FreeBSD 9.x does something very close to what you want with the ttyslot() function. This function is meaningless with utmpx. The trick is that there are only non-default versions of this symbol. Therefore, ld will not find it, but rtld will find the versioned definition when an old binary is run. I don't know what happens if an old binary has an unversioned reference, but it is probably sensible if there is only one definition.
For example,
__asm__(".symver hidden_badfunc, badfunc#MYLIB_1.0");
Normally, there would also be a default version, like
__asm__(".symver new_badfunc, badfunc##MYLIB_1.1");
or via a Solaris-compatible version script, but the trick is not to add one.
Typically, the asm directive is wrapped into a macro.
The trick depends on the GNU extensions to define symbol versions with the .symver assembler directive, so it will probably only work on Linux and FreeBSD. The Solaris-compatible version scripts can only express one definition per symbol.
More information: .symver directive in info gas, Ulrich Drepper's "How to write shared libraries", the commit that deprecated ttyslot() at http://gitorious.org/freebsd/freebsd/commit/3f59ed0d571ac62355fc2bde3edbfe9a4e722845
One idea could be to generate a stub library that has these symbols, but with unexpected properties:
perhaps create objects that have the names of the functions, so the linker in the configuration phase might complain that the symbols are not compatible;
create functions that have a dependency "dont_use_symbol_XXX" that is never resolved (see the sketch after this list);
or fake a .a file with a global index that has your functions, but where the .o members in the archive have the wrong format.
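A hypothetical sketch of the second idea (the names follow the badfunc example used earlier): any new program linked against this stub fails with an undefined reference, while old binaries that never pull in the stub are unaffected.

/* badfunc still exists, but drags in a symbol that is defined nowhere */
extern void dont_use_symbol_badfunc(void);  /* deliberately left undefined */

void badfunc(void) {
    dont_use_symbol_badfunc();              /* linking a new program now fails */
}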
The best way to generate a link-time error for deprecated functions that you do not want people to use is to make sure the deprecated functions are not present in the libraries - which makes them one stage beyond 'deprecated'.
Maybe you can provide an auxiliary library with the deprecated function in it; the reprobates who won't pay attention can link with the auxiliary library, but people in the mainstream won't use it and therefore won't use the functions. However, this still takes the functions beyond the 'deprecated' stage.
Getting a link-time warning is tricky. Clearly, GCC does that for some functions (mktemp() et al), and Apple has GCC warn if you run a program that uses gets(). I don't know what they do to make that happen.
In the light of the comments, I think you need to head the problem off at compile time, rather than waiting until link time or run time.
The GCC attributes include (from the GCC 4.4.1 manual):
error ("message")
If this attribute is used on a function declaration and a call to such a function is
not eliminated through dead code elimination or other optimizations, an error
which will include message will be diagnosed. This is useful for compile time
checking, especially together with __builtin_constant_p and inline functions
where checking the inline function arguments is not possible through extern
char [(condition) ? 1 : -1]; tricks. While it is possible to leave the function
undefined and thus invoke a link failure, when using this attribute the problem
will be diagnosed earlier and with exact location of the call even in presence of
inline functions or when not emitting debugging information.
warning ("message")
If this attribute is used on a function declaration and a call to such a function is
not eliminated through dead code elimination or other optimizations, a warning
which will include message will be diagnosed. This is useful for compile time
checking, especially together with __builtin_constant_p and inline functions.
While it is possible to define the function with a message in .gnu.warning*
section, when using this attribute the problem will be diagnosed earlier and
with exact location of the call even in presence of inline functions or when not
emitting debugging information.
If the configuration programs ignore the errors, they're simply broken. This means that new code could not be compiled using the functions, but the existing code can continue to use the deprecated functions in the libraries (up until it needs to be recompiled).
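A minimal sketch of the error attribute in use (the function name and message are hypothetical):

/* the deprecated entry point: any surviving call is a compile-time error */
__attribute__((error("old_api() is deprecated; use its replacement instead")))
void old_api(void);

int main(void) {
    old_api();  /* error: call to 'old_api' declared with attribute error */
    return 0;
}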

Why do you have to link the math library in C?

If I include <stdlib.h> or <stdio.h> in a C program, I don't have to link anything extra when compiling, but if I include <math.h> I do have to link the math library, using -lm with GCC, for example:
gcc test.c -o test -lm
What is the reason for this? Why do I have to explicitly link the math library, but not the other libraries?
The functions in stdlib.h and stdio.h have implementations in libc.so (or libc.a for static linking), which is linked into your executable by default (as if -lc were specified). GCC can be instructed to avoid this automatic link with the -nostdlib or -nodefaultlibs options.
The math functions in math.h have implementations in libm.so (or libm.a for static linking), and libm is not linked in by default. There are historical reasons for this libm/libc split, none of them very convincing.
Interestingly, the C++ runtime libstdc++ requires libm, so if you compile a C++ program with GCC (g++), you will automatically get libm linked in.
Remember that C is an old language and that FPUs are a relatively recent phenomenon. I first saw C on 8-bit processors where it was a lot of work to do even 32-bit integer arithmetic. Many of these implementations didn't even have a floating point math library available!
Even on the first 68000 machines (Mac, Atari ST, Amiga), floating point coprocessors were often expensive add-ons.
To do all that floating point math, you needed a pretty sizable library. And the math was going to be slow. So you rarely used floats. You tried to do everything with integers or scaled integers. When you had to include math.h, you gritted your teeth. Often, you'd write your own approximations and lookup tables to avoid it.
Trade-offs existed for a long time. Sometimes there were competing math packages called "fastmath" or such. What's the best solution for math? Really accurate but slow stuff? Inaccurate but fast? Big tables for trig functions? It wasn't until coprocessors were guaranteed to be in the computer that most implementations became obvious. I imagine that there's some programmer out there somewhere right now, working on an embedded chip, trying to decide whether to bring in the math library to handle some math problem.
That's why math wasn't standard. Many or maybe most programs didn't use a single float. If FPUs had always been around and floats and doubles were always cheap to operate on, no doubt there would have been a "stdmath".
Because of ridiculous historical practice that nobody is willing to fix. Consolidating all of the functions required by C and POSIX into a single library file would not only avoid this question getting asked over and over, but would also save a significant amount of time and memory when dynamic linking, since each .so file linked requires the filesystem operations to locate and find it, and a few pages for its static variables, relocations, etc.
An implementation where all functions are in one library and the -lm, -lpthread, -lrt, etc. options are all no-ops (or link to empty .a files) is perfectly POSIX conformant and certainly preferable.
Note: I'm talking about POSIX because C itself does not specify anything about how the compiler is invoked. Thus you can just treat gcc -std=c99 -lm as the implementation-specific way the compiler must be invoked for conformant behavior.
Because time() and some other functions are defined in the C library (libc) itself, and GCC always links to libc unless you use the -ffreestanding compile option. However, math functions live in libm, which is not implicitly linked by gcc.
An explanation is given here:
So if your program is using math functions and including math.h, then you need to explicitly link the math library by passing the -lm flag. The reason for this particular separation is that mathematicians are very picky about the way their math is being computed and they may want to use their own implementation of the math functions instead of the standard implementation. If the math functions were lumped into libc.a it wouldn't be possible to do that.
[Edit]
I'm not sure I agree with this, though. If you have a library which provides, say, sqrt(), and you pass it before the standard library, a Unix linker will take your version, right?
There's a thorough discussion of linking to external libraries in An Introduction to GCC - Linking with external libraries. If a library is a member of the standard libraries (like stdio), then you don't need to specify to the compiler (really the linker) to link them.
After reading some of the other answers and comments, I think the libc.a reference and the libm reference that it links to both have a lot to say about why the two are separate.
Note that many of the functions in libm.a (the math library) are defined in math.h but are not present in libc.a. Some are, which may get confusing, but the rule of thumb is this: the C library contains those functions that ANSI dictates must exist, so that you don't need the -lm if you only use ANSI functions. In contrast, libm.a contains more functions and supports additional functionality such as the matherr call-back and compliance to several alternative standards of behavior in case of FP errors. See the section on libm for more details.
As ephemient said, the C library libc is linked by default and this library contains the implementations of stdlib.h, stdio.h and several other standard header files. Just to add to it, according to "An Introduction to GCC" the linker command for a basic "Hello World" program in C is as below:
ld -dynamic-linker /lib/ld-linux.so.2 /usr/lib/crt1.o
/usr/lib/crti.o /usr/lib/gcc-lib/i686/3.3.1/crtbegin.o
-L/usr/lib/gcc-lib/i686/3.3.1 hello.o -lgcc -lgcc_eh -lc
-lgcc -lgcc_eh /usr/lib/gcc-lib/i686/3.3.1/crtend.o /usr/lib/crtn.o
Notice the option -lc in the third line that links the C library.
If I include stdlib.h or stdio.h, I don't have to link anything, but I do have to link the math library when I use math.h:
stdlib.h, stdio.h are the header files. You include them for your convenience. They only forecast what symbols will become available if you link in the proper library. The implementations are in the library files, that's where the functions really live.
Including math.h is only the first step to gaining access to all the math functions.
Also, you don't have to link against libm if you don't use its functions, even if you do a #include <math.h>, which only informs the compiler about the symbols.
stdlib.h, stdio.h refer to functions available in libc, which happens to be always linked in so that the user doesn't have to do it himself.
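A minimal sketch of that distinction (my own example): the declaration below is essentially what <math.h> supplies, while the definition lives in libm and is only found at link time via -lm.

/* no #include <math.h>: declare the function ourselves */
double sqrt(double x);               /* declaration: what the header provides */

int main(int argc, char **argv) {
    (void)argv;
    return (int)sqrt((double)argc);  /* definition resolved from libm via -lm */
}

This builds with gcc test.c -o test -lm; without -lm the link typically fails with an undefined reference to sqrt.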
It's a bug. You shouldn't have to explicitly specify -lm any more. Perhaps if enough people complain about it, it will be fixed. (I don't seriously believe this, as the maintainers who are perpetuating the distinction are evidently very stubborn, but I can hope.)
I think it's kind of arbitrary. You have to draw a line somewhere (which libraries are default and which need to be specified).
It gives you the opportunity to replace it with a different one that has the same functions, but I don't think it's very common to do so.
I think GCC does this to maintain backwards compatibility with the original cc executable. My guess for why cc does this is because of build time -- cc was written for machines with far less power than we have now. A lot of programs don't have any floating-point math, and they probably took every library that wasn't commonly used out of the default. I'm guessing that the build time of the Unix OS and the tools that go along with it were the driving force.
I would guess that it is a way to make applications which don't use it at all perform slightly better. Here's my thinking on this.
x86 OSes (and I imagine others) need to store FPU state on context switch. However, most OSes only bother to save/restore this state after the app attempts to use the FPU for the first time.
In addition to this, there is probably some basic code in the math library which will set the FPU to a sane base state when the library is loaded.
So, if you don't link in any math code at all, none of this will happen, therefore the OS doesn't have to save/restore any FPU state at all, making context switches slightly more efficient.
Just a guess though.
The same basic premise still applies to non-FPU cases (the premise being that it was to make apps which didn't make use of libm perform slightly better).
For example, consider a soft-FPU, which was likely in the early days of C. Having libm separate could prevent a lot of large (and, when used, slow) code from being linked in unnecessarily.
In addition, if there is only static linking available, then a similar argument applies that it would keep executable sizes and compile times down.
stdio is part of the standard C library which, by default, GCC will link against.
The math function implementations are in a separate libm file that is not linked to by default, so you have to specify it with -lm. By the way, there is no inherent relation between those header files and library files.
Functions declared in headers like stdio.h and stdlib.h have their implementations in libc.so (or libc.a); libc is linked in automatically at build time and its code ends up in the executable.
But math.h has its implementations in libm.so (or libm.a), which is separate from libc.so. It does not get linked by default, and you have to link it manually when compiling your program with GCC, using the -lm flag.
The GNU GCC team designed it to be separate from the C library: functions declared in the other header files get linked by default, but those in math.h do not.
Here, read item number 14.3 (you can read it all if you wish):
Reason why math.h needs to be linked
Also look at this article: Why do we have to link math.h in GCC?
Have a look at the usage:
Using the library
Note that -lm may not always need to be specified even if you use some C math functions.
For example, the following simple program:
#include <stdio.h>
#include <math.h>

int main() {
    printf("output: %f\n", sqrt(2.0));
    return 0;
}
can be compiled and run successfully with the following command:
gcc test.c -o test
It was tested on GCC 7.5.0 (on Ubuntu 16.04) and GCC 4.8.0 (on CentOS 7).
The post here gives some explanations:
The math functions you call are implemented by compiler built-in functions
See also:
Other Built-in Functions Provided by GCC
How to get the gcc compiler to not optimize a standard library function call like printf?
