Check Source Code for the use of double-precision floating point operations - linker

I'm looking for an easy way to keep our project from using double-precision floating-point operations. Currently I'm doing it the hard way: using a regex to filter for double (long double, etc.) variables and constants in our project. But the flow of variables/constants/defines through the whole software is hard to follow.
Our MCU supports single-precision floating-point operations in hardware, but not double precision. We want to strictly prohibit the unintended use of double-precision floating-point operations by notifying the user (e.g. through a failed build). Unfortunately, we need to include prebuilt SW modules, which rules out compiler flags (there is one that automatically converts double to float, etc.). Does anyone have a general way of checking for double-precision operations, e.g. in the linker?
Thanks a lot!

If double-precision operations are implemented via FP emulation, i.e. through calls to library functions, you can check whether those functions are present in the final linked binary. For GNU targets you can use e.g. objdump for that:
# TODO: add all functions from libgcc in grep
$ arm-none-eabi-objdump -d a.out | grep -E '\<bl .*(add|sub|mul)df3'
or readelf:
# TODO: add all functions from libgcc in grep
$ arm-none-eabi-readelf -sW a.out | grep -E '(add|sub|mul)df3'
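A common way such calls sneak in is an unsuffixed floating-point constant, which forces an expression to be evaluated in double. Below is a minimal C sketch of the pattern the grep above is meant to catch; the helper names in the comments (__aeabi_dmul, __muldf3) are target-dependent examples, not a definitive list.
/* On a single-precision-only MCU, scale() is evaluated in double because
   2.5 has no 'f' suffix: x is promoted to double, multiplied via a
   soft-double helper (e.g. __aeabi_dmul or __muldf3, depending on the
   target), and converted back to float. */
float scale(float x)
{
    return x * 2.5;    /* double-precision multiply sneaks in */
}

float scale_ok(float x)
{
    return x * 2.5f;   /* stays in single precision, runs on the FPU */
}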

Related

Why do I get different output on linux and OS/X

I was trying a very simple printf test:
printf "%.16f\n" 5.10
On Linux I got this output: 5.100000000000000000, which is what I expected.
But the very same test on OS/X produces this: 5.0999999999999996
Why does printf produce different output?
EDIT: This is not C code; printf here is also a command-line utility, handy for scripts and tests.
The equivalent C program below produces 5.100000000000000000:
#include <stdio.h>
int main() {
    printf("%.16f\n", 5.10);
    return 0;
}
EDIT 2: The plot thickens... to make this more interesting for Linux users: if I run this command as nobody, I get the same behavior as on OS/X:
chqrlie$ printf "%.18f\n" 5.10
5.100000000000000000
chqrlie$ su nobody -c 'printf "%.18f\n" 5.10'
5.099999999999999645
The GNU implementation and the macOS (FreeBSD) implementation of printf are different programs. Both aim to be compatible with the POSIX standard.
POSIX leaves the representation of floating-point numbers open to the implementation of printf. The reasoning is that all arithmetic in the shell is integer arithmetic anyway:
The floating-point formatting conversion specifications of printf() are not required because all arithmetic in the shell is integer arithmetic. The awk utility performs floating-point calculations and provides its own printf function. The bc utility can perform arbitrary-precision floating-point arithmetic, but does not provide extensive formatting capabilities. (This printf utility cannot really be used to format bc output; it does not support arbitrary precision.) Implementations are encouraged to support the floating-point conversions as an extension.
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/printf.html
PS:
5.1 is not a floating-point number in bash; bash does not support floating-point numbers.
5.1 is a string, interpreted by printf depending on the locale(!)
theymann@theymann-laptop:~/src/sre/inventory-schema$ LANG=en_US.UTF8 printf "%.16f\n" 5.10
5.1000000000000000
theymann@theymann-laptop:~/src/sre/inventory-schema$ LANG=de_DE.UTF8 printf "%.16f\n" 5.10
bash: printf: 5.10: Ungültige Zahl. # << German: invalid number
0,0000000000000000
Note: In Germany we use , as the decimal separator.
The difference in the output between a normal user and nobody must be due to the shell that is used. Some shells, like busybox, come with their own implementation of printf. BTW, I'm extremely surprised that nobody is allowed to execute commands on your system!
This is because floating-point types mostly can't represent decimal values exactly. For example, feeding 5.1 to an online IEEE 754 converter shows that it is not exactly representable in this format; the nearest double is slightly less than 5.1.
printf (or whatever tool) is then free to format/print any value that it thinks suitable for the user.
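If you want to see the exact double that 5.1 becomes, a small C check makes the rounding visible (the values in the comments are what a typical IEEE 754 double implementation produces):
/* Print the double nearest to 5.1 in hex-float and with extra decimal
   digits, showing that 5.1 is not exactly representable. */
#include <stdio.h>

int main(void)
{
    double x = 5.1;
    printf("%a\n", x);     /* exact bits, e.g. 0x1.4666666666666p+2 */
    printf("%.20f\n", x);  /* approximately 5.09999999999999964473  */
    return 0;
}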

Can I use C code with floating point arithmetics in Linux Kernel code if I use "-msoft-float" compiler flag?

This is not a duplicate question!
I know why the FPU module shouldn't be used in the kernel.
But what if kernel source code containing floating-point operations is compiled with the "-msoft-float" gcc compiler flag?
The information about this flag from "man gcc":
-msoft-float
Use (do not use) the hardware floating-point instructions and registers for floating-point operations. When -msoft-float is specified, functions in libgcc.a will be used to perform floating-point operations. When -mhard-float is specified, the compiler generates IEEE floating-point instructions. This is the default.
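To illustrate what the quoted text means in practice, here is a small sketch; the helper name __adddf3 is just one target-dependent example of a libgcc routine, and whether such routines are available to kernel code is a separate question.
/* Under -msoft-float, an ordinary floating-point expression is compiled
   into a call to a libgcc helper instead of an FPU instruction. */
double add(double a, double b)
{
    return a + b;   /* roughly: return __adddf3(a, b); with -msoft-float */
}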

Why are abs() and fabs() defined in two different headers in C

The standard library function abs() is declared in stdlib.h, while fabs() is in math.h.
Why do they reside in different headers?
math.h first appears in 7th Research Unix. It is hard to tell how it got there. For example, [1] claims that bits of the C library were merged from PWB/Unix, which included troff and the C compiler pcc, but I cannot prove it.
Another interesting piece of information is library manual from V7 Unix:
intro.3:
(3) These functions, together with those of section 2 and those marked (3S), constitute library libc, which is automatically loaded by the C compiler cc(1) and the Fortran compiler f77(1). The link editor ld(1) searches this library under the `-lc' option. Declarations for some of these functions may be obtained from include files indicated on the appropriate pages.
<...>
(3M) These functions constitute the math library, libm. They are automatically loaded as needed by the Fortran compiler f77(1). The link editor searches this library under the `-lm' option. Declarations for these functions may be obtained from the include file <math.h>.
If you look into the V7 command makefiles, only a few C programs are linked with the -lm flag. So my conclusion is speculative:
libm.a (and math.h) was primarily needed for Fortran programs, so it was separated into its own library to reduce the binary footprint (note that it was linked statically).
Not many machines had floating-point support. For example, you would need to buy an optional FPP for the PDP-11 [2]; there is also the libfpsim simulation library in Unix to mitigate that, so floating point was hardly used in early C programs.
1. A History of UNIX before Berkeley: UNIX Evolution: 1975-1984
2. PDP-11 architecture
Most operators like + - / * are also math operators, yet these are readily available. When programming you use so much math that developers started to differentiate between the math needed for everyday tasks and the more specialized math you only use some of the time. abs is one of those functions that is used all the time, for example in pointer arithmetic when you just want to know the difference between two addresses to calculate the size of a memory block, and are not interested in which one is higher in memory and which is lower.
So to sum up: abs is used often because it calculates the difference of two integers (the difference between two pointers, for instance, is also an integer), so it lives in stdlib.h. fabs, however, is not something you will need much unless you are doing math-specific work, so it lives in math.h.
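The split is still visible when linking on many Unix systems: abs() comes from libc via <stdlib.h>, while fabs() is declared in <math.h> and may live in the separate math library, so the program using it is linked with -lm. A small example:
#include <stdlib.h>   /* abs                           */
#include <math.h>     /* fabs                          */
#include <stdio.h>

int main(void)
{
    int    di = abs(3 - 7);        /* integer absolute value        */
    double df = fabs(3.0 - 7.5);   /* floating-point absolute value */
    printf("%d %f\n", di, df);     /* build with: cc example.c -lm  */
    return 0;
}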

perf report showing "__libm_pow_l9"

I am using perf to profile my program, which involves loads of calls to exp() and pow(). The code was compiled with:
icc -g -fno-omit-frame-pointer test.c
and profiled with:
perf record -g ./a.out
which is followed by:
perf report -g 'graph,0.5,caller'
and perf's report shows that the two functions __libm_exp_l9() and __libm_pow_l9() are consuming a considerable amount of computational power.
So I am wondering: are they just aliases of exp() and pow(), respectively? Or any suggestions on how to read the report here?
Thanks.
They are not aliases but internal implementations of the functions. Mathematical libraries usually have several versions of a function depending on the processor, instruction set, or arguments used.
There is nothing to worry about. exp and pow are functions that are more complex than a single instruction (usually) and therefore take some time. Unfortunately I didn't find any reference for them (the Intel math library is probably not open source), but it is common practice to use internal, namespaced names for functions.
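For reference, here is a hypothetical minimal driver (not the asker's test.c) that exercises exp() and pow() heavily; built with the icc command above and profiled with the perf commands above, the time spent in these calls is attributed to whatever internal entry points the math library dispatches to (such as __libm_exp_l9 and __libm_pow_l9 on that system).
#include <math.h>
#include <stdio.h>

int main(void)
{
    double acc = 0.0;
    for (long i = 1; i <= 50000000; i++) {
        double x = (double)i / 50000000.0;
        acc += exp(x) + pow(x, 1.5);   /* the hot calls in the profile */
    }
    printf("%f\n", acc);               /* keep the result live         */
    return 0;
}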

Enabling strict floating point mode in GCC

I haven't yet created a program to see whether GCC will need it passed, but when I do, I'd like to know how to enable a strict floating-point mode that gives reproducible results between runs and computers. Thanks.
Compiling with -msse2 on an Intel/AMD processor that supports it will get you almost there. Do not let any library put the FPU in FTZ/DAZ mode, and you will be mostly set (processor bugs notwithstanding).
For other architectures, the answer would be different. Those architectures that do not offer any convenient way to get exact IEEE 754 semantics (for instance, pre-SSE2 IA32 CPUs) would require the use of a floating-point emulation library to get the result you want, at a very high performance penalty.
If your target architecture supports the fmadd (multiplication and addition without intermediate rounding) instruction, make sure your compiler does not use it when you have explicit multiplications and additions in the source code. GCC is not supposed to do this unless you use the -ffast-math option.
If you use -ffloat-store and always store intermediate values to variables or apply (explicit) casts to the desired type/precision, you should be at least 90% of the way to your goal, and maybe more. I'd welcome comments on whether there are cases this approach still misses. Note that I claim this works even without any SSE options.
You can also use GCC's option -mpc64 on i386/ia32 targets to force double-precision computation even on the x87 FPU; see the GCC manual.
You can also modify the x87 FPU behavior at runtime; see Deterministic cross-platform floating point arithmetics and also An Introduction to GCC.
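To see why the x87-related options matter, here is a sketch (assuming an x86 target and the flags discussed above) of a computation whose printed result can depend on whether intermediates are kept in 80-bit x87 registers or evaluated strictly as IEEE 754 double:
#include <stdio.h>

int main(void)
{
    volatile double v = 1e308;   /* volatile: prevent constant folding      */
    double x = v * 10.0;         /* overflows double, but fits x87 extended */
    double y = x / 10.0;
    /* With -msse2 -mfpmath=sse this reliably prints inf; with x87 code
       generation and optimization enabled, the intermediate may stay in a
       register and the output can come back as 1e+308 instead.             */
    printf("%g\n", y);
    return 0;
}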

Resources