Overreaching GCC sanitizer - C

I have this little snippet here:
static int32_t s_pow(int32_t base, int32_t exponent)
{
    int32_t result = 1;

    while (exponent != 0) {
        if ((exponent % 2) == 1) {
            result *= base;
        }
        exponent /= 2;
        base *= base; /* <- this line */
    }
    return result;
}
Small and neat and just the one I need.
But GCC 7.5.0 (and other newer versions that support the checks) complained:
gcc -I./ -Wall -Wsign-compare -Wextra -Wshadow -fsanitize=undefined \
    -fno-sanitize-recover=all -fno-sanitize=float-divide-by-zero \
    -Wdeclaration-after-statement -Wbad-function-cast -Wcast-align -Wstrict-prototypes \
    -Wpointer-arith -Wsystem-headers -O3 -funroll-loops -fomit-frame-pointer -flto -m64 \
    test.o -o test
test.c:25:12: runtime error: signed integer overflow: 65536 * 65536 cannot be represented in type 'int'
Yes, GCC, that is correct: 65536 * 65536 is a bit much for a 31-bit data type, even in a 64-bit environment, admitted. But the input comes from a small table and is carefully chosen such that the result cannot exceed 2^20, and therefore neither can any of the intermediate results. I cannot change the sanitizer options, only my code. Yes, I checked all possible results.
Any ideas? Or is it my code?

The message runtime error: signed integer overflow means the overflow happened at run time, i.e. some input actually led to UB. You say that the result cannot exceed 2^20; I take that to mean there is a case where the base *= base step executes after you have already accumulated the final result and no longer care about base.
If I understand you correctly, you can just change the code to something like this:
It fixes, for example, the s_pow(60000, 1) call, and it won't affect performance, because this optimization is quite easy for the compiler.
static int32_t s_pow(int32_t base, int32_t exponent) {
    int32_t result = 1;

    while (exponent != 0) {
        if ((exponent % 2) == 1) {
            result *= base;
        }
        exponent /= 2;
        if (exponent == 0) {
            break;
        }
        base *= base; /* <- this line */
    }
    return result;
}
Basically, it is important to understand what you want here: if you just want to get rid of the sanitizer error, this is one way to do it. If the error still appears, you are probably passing numbers that really are too big.
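As a quick check, here is a minimal, self-contained sketch (the name s_pow_fixed and the test call are mine, chosen to match the s_pow(60000, 1) example above) showing that with the extra exponent == 0 check the final base *= base is skipped, so the sanitizer no longer reports an overflow:

#include <stdint.h>
#include <stdio.h>

static int32_t s_pow_fixed(int32_t base, int32_t exponent)
{
    int32_t result = 1;

    while (exponent != 0) {
        if ((exponent % 2) == 1) {
            result *= base;
        }
        exponent /= 2;
        if (exponent == 0) {
            break; /* skip the final, unneeded squaring */
        }
        base *= base;
    }
    return result;
}

int main(void)
{
    /* With the original loop, s_pow(60000, 1) would still evaluate
     * 60000 * 60000 and trip -fsanitize=undefined, even though that
     * product never contributes to the result. */
    printf("%d\n", (int)s_pow_fixed(60000, 1));
    return 0;
}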

Related

Check for infinity while using -Wfloat-equal?

My pet project uses -nostdlib, so I can't call any C library code; I can, however, include the headers for constants.
Below is my code. If I compile it with gcc -Wfloat-equal -nostdlib -O2 -c a.c, gcc (and clang) gives me the warning below. I was curious whether there's a way to do this check without triggering the warning. I suspect I can't, unless I call code I don't compile myself.
warning: comparing floating-point with ‘==’ or ‘!=’ is unsafe [-Wfloat-equal]
#include <math.h>

int IsInfinity(double d) {
    return d != (double)INFINITY && (float)d == INFINITY;
}
You can use return d < INFINITY && (float) d >= INFINITY;.
Another option is return isfinite(d) && isinf((float) d); if you want to classify negative values symmetrically. (These are macros defined in math.h.)
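Since the project is built with -nostdlib, here is a minimal sketch of the comparison-based variant; it needs only the INFINITY macro (no library calls) and compiles cleanly under -Wfloat-equal because it avoids == and != on floating-point values:

#include <math.h> /* only for the INFINITY macro */

/* True when d compares below +infinity as a double, but is at or above
 * +infinity once converted to float (i.e. a finite value too large for
 * float's range). */
int IsInfinity(double d) {
    return d < INFINITY && (float)d >= INFINITY;
}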
The code looks like it is trying to test whether d is in the finite double range and, as a float, is more than FLT_MAX.
// OP's
int IsInfinity(double d) {
    return d != (double)INFINITY && (float)d == INFINITY;
}
Regarding "if there's a way to check without triggering this warning":
Alternate:
Check via the max values:
#include <float.h> /* FLT_MAX, DBL_MAX */

int IsInfinity_alt(double d) {
    if (d < 0) d = -d; // OP cannot use fabs().
    return ((float)d > FLT_MAX && d <= DBL_MAX);
}
Had the code used the line below instead, some double values just bigger than FLT_MAX would be classified differently than above, because converting such a value to float may round it down to FLT_MAX rather than up to infinity. It is possible that this behavior is what the OP prefers.
return (d > FLT_MAX && d <= DBL_MAX);

Is this clang optimization a bug?

I ran into an interesting issue when compiling some code with -O3 using clang on OSX High Sierra. The code is this:
#include <stdint.h>
#include <limits.h> /* for CHAR_BIT */
#include <stdio.h>  /* for printf() */
#include <stddef.h> /* for size_t */

uint64_t get_morton_code(uint16_t x, uint16_t y, uint16_t z)
{
    /* Returns the number formed by interleaving the bits in x, y, and z, also
     * known as the morton code.
     *
     * See https://graphics.stanford.edu/~seander/bithacks.html#InterleaveTableObvious
     */
    size_t i;
    uint64_t a = 0;
    for (i = 0; i < sizeof(x)*CHAR_BIT; i++) {
        a |= (x & 1U << i) << (2*i) | (y & 1U << i) << (2*i + 1)
           | (z & 1U << i) << (2*i + 2);
    }
    return a;
}

int main(int argc, char **argv)
{
    printf("get_morton_code(99,159,46) = %llu\n", get_morton_code(99,159,46));
    return 0;
}
When compiling this with cc -O1 -o test_morton_code test_morton_code.c I get the following output:
get_morton_code(99,159,46) = 4631995
which is correct. However, when compiling with cc -O3 -o test_morton_code test_morton_code.c:
get_morton_code(99,159,46) = 4294967295
which is wrong.
What is also odd is that this bug appears in my code when switching from -O2 to -O3 whereas in the minimal working example above it appears when going from -O1 to -O2.
Is this a bug in the compiler optimization or am I doing something stupid that's only appearing when the compiler is optimizing more aggressively?
I'm using the following version of clang:
snotdaqs-iMac:snoFitter snoperator$ cc --version
Apple LLVM version 9.1.0 (clang-902.0.39.1)
Target: x86_64-apple-darwin17.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
UndefinedBehaviorSanitizer is really helpful in catching such mistakes:
$ clang -fsanitize=undefined -O3 o3.c
$ ./a.out
o3.c:19:2: runtime error: shift exponent 32 is too large for 32-bit type 'unsigned int'
get_morton_code(99,159,46) = 4294967295
A possible fix would be replacing the 1Us with 1ULL; an unsigned long long is at least 64 bits wide and can be shifted that far.
When i is 15 in the loop, 2*i+2 is 32, and you are shifting an unsigned int by the number of bits in an unsigned int, which is undefined.
You apparently intend to work in a 64-bit field, so cast the left side of the shift to uint64_t.
A proper printf format for uint64_t is "get_morton_code(99,159,46) = %" PRIu64 "\n"; PRIu64 is defined in the <inttypes.h> header.
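Putting those suggestions together, a minimal sketch of the corrected function (widening the shifted operand to 64 bits and printing with PRIu64; this is an illustration assembled from the advice above, not the poster's final code) could look like:

#include <stdint.h>
#include <limits.h>   /* CHAR_BIT */
#include <stdio.h>
#include <inttypes.h> /* PRIu64 */

uint64_t get_morton_code(uint16_t x, uint16_t y, uint16_t z)
{
    uint64_t a = 0;
    size_t i;
    for (i = 0; i < sizeof(x) * CHAR_BIT; i++) {
        /* 1ULL makes each masked bit an unsigned long long (at least 64 bits),
         * so shift counts up to 2*15 + 2 = 32 stay well defined. */
        a |= (x & 1ULL << i) << (2*i)
           | (y & 1ULL << i) << (2*i + 1)
           | (z & 1ULL << i) << (2*i + 2);
    }
    return a;
}

int main(void)
{
    printf("get_morton_code(99,159,46) = %" PRIu64 "\n", get_morton_code(99, 159, 46));
    return 0;
}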

Why does clang produce wrong results for my C code compiled with -O1 but not with -O0?

For input 0xffffffff, the following C code works fine with no optimization, but produces wrong results when compiled with -O1. Other compilation options are -g -m32 -Wall. The code was tested with clang-900.0.39.2 on macOS 10.13.2.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    char *endp;
    int x = (int)strtoll(argv[1], &endp, 0);
    int mask1 = 0x55555555;
    int mask2 = 0x33333333;
    int count = (x & mask1) + ((x >> 1) & mask1);
    int v1 = count >> 2;
    printf("v1 = %#010x\n", v1);
    int v2 = v1 & mask2;
    printf("v2 = %#010x\n", v2);
    return 0;
}
Input: 0xffffffff
Outputs with -O0: (expected)
v1 = 0xeaaaaaaa
v2 = 0x22222222
Outputs with -O1: (wrong)
v1 = 0x2aaaaaaa
v2 = 0x02222222
Below are disassembled instructions for the line "int v1 = count >> 2;" with -O0 and -O1.
With -O0:
sarl $0x2, %esi
With -O1:
shrl $0x2, %esi
Below are disassembled instructions for the line "int v2 = v1 & mask2;" with -O0 and -O1.
With -O0:
andl -0x24(%ebp), %esi //-0x24(%ebp) stores 0x33333333
With -O1:
andl $0x13333333, %esi //why does the optimization changes 0x33333333 to 0x13333333?
In addition, if x is set to 0xffffffff locally instead of getting its value from arguments, the code will work as expected even with -O1.
P.S.: The code is an experimental piece based on my solution to the Data Lab from the CS:APP course at CMU. The lab asks the student to implement a function that counts the number of 1 bits in an int variable without using any type other than int.
As several commenters have pointed out, right-shifting signed values is not well defined.
I changed the declaration and initialization of x to
unsigned int x = (unsigned int)strtoll(argv[1], &endp, 0);
and got consistent results under -O0 and -O1. (But before making that change, I was able to reproduce your result under clang under MacOS.)
As you have discovered, you invoke implementation-defined behavior in your attempt to store 0xffffffff (4294967295) in int x (where INT_MAX is 0x7fffffff, or 2147483647); see C11 Standard §6.3.1.3 (draft n1570), Signed and unsigned integers. Whenever you use strtoll (or strtoull; the single-'l' versions strtol/strtoul would be fine as well) and want to store the value in an int, you must check the result against INT_MAX before making the assignment with a cast (or, if using exact-width types, against INT32_MAX, or UINT32_MAX for unsigned).
Further, in circumstances such as this where bit operations are involved, you can remove uncertainty and ensure portability by using the exact-width types provided in stdint.h and the associated format specifiers provided in inttypes.h. Here there is no need for a signed int at all; it would make more sense to handle all values as unsigned (or uint32_t).
For example, the following provides a default value for the input to avoid undefined behavior if the program is executed without an argument (you can also simply test argc), replaces strtoll with strtoul, validates that the input fits in the target variable before the assignment (handling the error if it does not), and then uses the unambiguous exact-width types, e.g.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

int main (int argc, char *argv[]) {

    uint64_t tmp = argc > 1 ? strtoul (argv[1], NULL, 0) : 0xffffffff;

    if (tmp > UINT32_MAX) {
        fprintf (stderr, "input exceeds UINT32_MAX.\n");
        return 1;
    }

    uint32_t x = (uint32_t)tmp,
             mask1 = 0x55555555,
             mask2 = 0x33333333,
             count = (x & mask1) + ((x >> 1) & mask1),
             v1 = count >> 2,
             v2 = v1 & mask2;

    printf("v1 = 0x%" PRIx32 "\n", v1);
    printf("v2 = 0x%" PRIx32 "\n", v2);

    return 0;
}
Example Use/Output
$ ./bin/masktst
v1 = 0x2aaaaaaa
v2 = 0x22222222
Compiled with
$ gcc -Wall -Wextra -pedantic -std=gnu11 -Ofast -o bin/masktst masktst.c
Look things over and let me know if you have further questions.
This statement:
int x = (int)strtoll(argv[1], &endp, 0);
results in an out-of-range conversion to a signed int, which is implementation-defined behavior.
(On my system, the result is -1431655766.)
The resulting values tend to go downhill from there:
the variable v1 receives -357913942,
the variable v2 receives 572662306.
Also note that the %x format specifier only works correctly with unsigned variables.
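A minimal sketch of the same computation done entirely on unsigned values, following that advice (variable names kept from the question); note that, like the answer above, the unsigned version prints v1 = 0x2aaaaaaa rather than the sign-extended 0xeaaaaaaa:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    if (argc < 2) return 1;
    char *endp;
    /* strtoul + unsigned keeps both the conversion and the right shift
     * well defined for inputs up to 0xffffffff. */
    unsigned int x = (unsigned int)strtoul(argv[1], &endp, 0);
    unsigned int mask1 = 0x55555555u;
    unsigned int mask2 = 0x33333333u;
    unsigned int count = (x & mask1) + ((x >> 1) & mask1);
    unsigned int v1 = count >> 2;   /* logical shift: no sign bits shifted in */
    printf("v1 = %#010x\n", v1);
    unsigned int v2 = v1 & mask2;
    printf("v2 = %#010x\n", v2);
    return 0;
}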

-m32 and -m64 compiler options produce different output

Sample Code
#include "stdio.h"
#include <stdint.h>
int main()
{
double d1 = 210.01;
uint32_t m = 1000;
uint32_t v1 = (uint32_t) (d1 * m);
printf("%d",v1);
return 0;
}
Output
1. When compiling with -m32 option (i.e gcc -g3 -m32 test.c)
/test 174 # ./a.out
210009
2. When compiling with -m64 option (i.e gcc -g3 -m64 test.c)
test 176 # ./a.out
210010
Why do I get a difference?
My understanding "was", m would be promoted to double and multiplication would be cast downward to unit32_t. Moreover, since we are using stdint type integer, we would be further removing ambiguity related to architecture etc etc.
I know something is fishy here, but not able to pin it down.
Update:
Just to clarify (for one of the comment), the above behavior is seen for both gcc and g++.
I can confirm the results on my gcc (Ubuntu 5.2.1-22ubuntu2). What seems to happen is that the 32-bit unoptimized code uses the 387 FPU with the FMUL opcode, whereas the 64-bit code uses the SSE MULSD opcode (just execute gcc -S test.c with different parameters and look at the assembler output). And as is well known, the 387 FPU that executes the FMUL works with more precision than a 64-bit double (its registers are 80 bits wide), so it rounds differently here. The reason, of course, is that the exact value of the 64-bit IEEE double 210.01 is not that, but
210.009999999999990905052982270717620849609375
and when you multiply by 1000, you're not actually just shifting the decimal point - after all, there is no decimal point but a binary point in the floating-point value - so the value must be rounded. With 64-bit doubles it is rounded up (to exactly 210010); in the 80-bit 387 FPU registers the calculation is more precise, and it ends up being rounded down.
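A small sketch that makes the rounding visible (assuming an SSE2 build such as -m64, so the product is evaluated in plain binary64):

#include <stdio.h>

int main(void)
{
    double d1 = 210.01;

    /* The nearest double to 210.01 is slightly below it... */
    printf("d1        = %.30f\n", d1);

    /* ...but d1 * 1000.0, rounded once to binary64, lands exactly on 210010,
     * so the truncating cast yields 210010.  With 80-bit x87 intermediates
     * the product stays just below 210010 and truncates to 210009 instead. */
    printf("d1 * 1000 = %.30f\n", d1 * 1000.0);
    return 0;
}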
After reading about this a bit more, I believe the result generated by gcc on 32-bit arch is not standard conforming. Thus if you force the standard to C99 or C11 with -std=c99, -std=c11, you will get the correct result
% gcc -m32 -std=c11 test.c; ./a.out
210010
If you do not want to force C99 or C11 standard, you could also use the -fexcess-precision=standard switch.
However, the fun does not stop here.
% gcc -m32 test.c; ./a.out
210009
% gcc -m32 -O3 test.c; ./a.out
210010
So you get the "correct" result if you compile with -O3; this is of course because at -O3 the compiler constant-folds the calculation itself, using proper 64-bit double arithmetic rather than the 387.
To confirm that extra precision affects it, you can use a long double:
#include "stdio.h"
#include <stdint.h>
int main()
{
long double d1 = 210.01; // double constant to long double!
uint32_t m = 1000;
uint32_t v1 = (uint32_t) (d1 * m);
printf("%d",v1);
return 0;
}
Now even -m64 rounds it to 210009.
% gcc -m64 test.c; ./a.out
210009

Is there a document describing how Clang handles excess floating-point precision?

It is nearly impossible(*) to provide strict IEEE 754 semantics at reasonable cost when the only floating-point instructions one is allowed to use are the 387 ones. It is particularly hard when one wishes to keep the FPU working on the full 64-bit significand so that the long double type is available for extended precision. The usual “solution” is to do intermediate computations at the only available precision, and to convert to the lower precision at more or less well-defined occasions.
Recent versions of GCC handle excess precision in intermediate computations according to the interpretation laid out by Joseph S. Myers in a 2008 post to the GCC mailing list. This description makes a program compiled with gcc -std=c99 -mno-sse2 -mfpmath=387 completely predictable, to the last bit, as far as I understand. And if by chance it doesn't, it is a bug and it will be fixed: Joseph S. Myers' stated intention in his post is to make it predictable.
Is it documented how Clang handles excess precision (say when the option -mno-sse2 is used), and where?
(*) EDIT: this is an exaggeration. It is slightly annoying but not that difficult to emulate binary64 when one is allowed to configure the x87 FPU to use a 53-bit significand.
Following a comment by R.. below, here is the log of a short interaction of mine with the most recent version of Clang I have:
Hexa:~ $ clang -v
Apple clang version 4.1 (tags/Apple/clang-421.11.66) (based on LLVM 3.1svn)
Target: x86_64-apple-darwin12.4.0
Thread model: posix
Hexa:~ $ cat fem.c
#include <stdio.h>
#include <math.h>
#include <float.h>
#include <fenv.h>
double x;
double y = 2.0;
double z = 1.0;
int main(){
  x = y + z;
  printf("%d\n", (int) FLT_EVAL_METHOD);
}
Hexa:~ $ clang -std=c99 -mno-sse2 fem.c
Hexa:~ $ ./a.out
0
Hexa:~ $ clang -std=c99 -mno-sse2 -S fem.c
Hexa:~ $ cat fem.s
…
movl $0, %esi
fldl _y(%rip)
fldl _z(%rip)
faddp %st(1)
movq _x@GOTPCREL(%rip), %rax
fstpl (%rax)
…
This does not answer the originally posed question, but if you are a programmer working with similar issues, this answer might help you.
I really don't see where the perceived difficulty is. Providing strict IEEE-754 binary64 semantics while being limited to 80387 floating-point math, and retaining 80-bit long double computation, seems to be just a matter of following the well-specified C99 casting rules, with both GCC-4.6.3 and clang-3.0 (based on LLVM 3.0).
Edited to add: Yet, Pascal Cuoq is correct: neither gcc-4.6.3 nor clang-llvm-3.0 actually enforces those rules correctly for '387 floating-point math. Given the proper compiler options, the rules are correctly applied to expressions evaluated at compile time, but not to run-time expressions. There are workarounds, listed after the break below.
I do molecular dynamics simulation code, and am very familiar with the repeatability/predictability requirements and also with the desire to retain maximum precision available when possible, so I do claim I know what I am talking about here. This answer should show that the tools exist and are simple to use; the problems arise from not being aware of or not using those tools.
(A favorite example of mine is the Kahan summation algorithm. With C99 and proper casting (adding casts to e.g. the Wikipedia example code), no tricks or extra temporary variables are needed at all, and the implementation works regardless of compiler optimization level, including at -O3 and -Ofast.)
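For instance, here is a minimal sketch of Kahan summation written in that style (plain C99, my own illustration rather than the answer's code); the explicit (double) casts are the point, since C99 requires a cast or assignment to drop any excess precision the x87 may be carrying:

#include <stddef.h>

/* Compensated (Kahan) summation.  Each intermediate the algorithm relies on
 * being an exact binary64 value is forced there with a cast or assignment. */
double kahan_sum(const double *x, size_t n)
{
    double sum = 0.0;
    double c = 0.0;                          /* running compensation (lost low-order bits) */
    for (size_t i = 0; i < n; i++) {
        double y = (double)(x[i] - c);       /* corrected next term */
        double t = (double)(sum + y);        /* new sum, rounded to binary64 */
        c = (double)((double)(t - sum) - y); /* what was lost in (sum + y) */
        sum = t;
    }
    return sum;
}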
C99 explicitly states (in 5.2.4.2.2) that casting and assignment both remove all extra range and precision. This means that you can use long double arithmetic by declaring the temporary variables used during the computation as long double, also casting your input variables to that type; whenever an IEEE-754 binary64 value is needed, just cast to double.
On '387, the cast generates a store and a load with both of the above compilers; this does correctly round the 80-bit value to IEEE-754 binary64. The cost is very reasonable in my opinion. The exact time taken depends on the architecture and the surrounding code; usually the cast can be interleaved with other code to bring the cost down to negligible levels. When MMX, SSE or AVX are available, their registers are separate from the 80-bit 80387 registers, and the cast is usually done by moving the value to an MMX/SSE/AVX register.
(I prefer production code to use a specific floating-point type, say tempdouble or such, for temporary variables, so that it can be defined to either double or long double depending on architecture and speed/precision tradeoffs desired.)
In a nutshell:
Don't assume (expression) is of double precision just because all the variables and literal constants are. Write it as (double)(expression) if you want the result at double precision.
This applies to compound expressions, too, and may sometimes lead to unwieldy expressions with many levels of casts.
If you have expr1 and expr2 that you wish to compute at 80-bit precision, but also need the product of each rounded to 64-bit first, use
long double expr1;
long double expr2;
double product = (double)(expr1) * (double)(expr2);
Note: product is computed as the product of two 64-bit values; it is not computed at 80-bit precision and then rounded down. Calculating the product at 80-bit precision and then rounding down would be
double other = expr1 * expr2;
or, adding descriptive casts that tell you exactly what is happening,
double other = (double)((long double)(expr1) * (long double)(expr2));
It should be obvious that product and other often differ.
The C99 casting rules are just another tool you must learn to wield, if you do work with mixed 32-bit/64-bit/80-bit/128-bit floating point values. Really, you encounter the exact same issues if you mix binary32 and binary64 floats (float and double on most architectures)!
Perhaps rewriting Pascal Cuoq's exploration code, to correctly apply casting rules, makes this clearer?
#include <stdio.h>

#define TEST(eq) printf("%-56s%s\n", "" # eq ":", (eq) ? "true" : "false")

int main(void)
{
    double d = 1.0 / 10.0;
    long double ld = 1.0L / 10.0L;

    printf("sizeof (double) = %d\n", (int)sizeof (double));
    printf("sizeof (long double) == %d\n", (int)sizeof (long double));

    printf("\nExpect true:\n");
    TEST(d == (double)(0.1));
    TEST(ld == (long double)(0.1L));
    TEST(d == (double)(1.0 / 10.0));
    TEST(ld == (long double)(1.0L / 10.0L));
    TEST(d == (double)(ld));
    TEST((double)(1.0L/10.0L) == (double)(0.1));
    TEST((long double)(1.0L/10.0L) == (long double)(0.1L));

    printf("\nExpect false:\n");
    TEST(d == ld);
    TEST((long double)(d) == ld);
    TEST(d == 0.1L);
    TEST(ld == 0.1);
    TEST(d == (long double)(1.0L / 10.0L));
    TEST(ld == (double)(1.0L / 10.0));

    return 0;
}
The output, with both GCC and clang, is
sizeof (double) = 8
sizeof (long double) == 12
Expect true:
d == (double)(0.1): true
ld == (long double)(0.1L): true
d == (double)(1.0 / 10.0): true
ld == (long double)(1.0L / 10.0L): true
d == (double)(ld): true
(double)(1.0L/10.0L) == (double)(0.1): true
(long double)(1.0L/10.0L) == (long double)(0.1L): true
Expect false:
d == ld: false
(long double)(d) == ld: false
d == 0.1L: false
ld == 0.1: false
d == (long double)(1.0L / 10.0L): false
ld == (double)(1.0L / 10.0): false
except that recent versions of GCC promote the right hand side of ld == 0.1 to long double first (i.e. to ld == 0.1L), yielding true, and that with SSE/AVX, long double is 128-bit.
For the pure '387 tests, I used
gcc -W -Wall -m32 -mfpmath=387 -mno-sse ... test.c -o test
clang -W -Wall -m32 -mfpmath=387 -mno-sse ... test.c -o test
with various optimization flag combinations as ..., including -fomit-frame-pointer, -O0, -O1, -O2, -O3, and -Os.
Using any other flags or C99 compilers should lead to the same results, except for long double size (and ld == 1.0 for current GCC versions). If you encounter any differences, I'd be very grateful to hear about them; I may need to warn my users of such compilers (compiler versions). Note that Microsoft does not support C99, so they are completely uninteresting to me.
Pascal Cuoq does bring up an interesting problem in the comment chain below, which I didn't immediately recognize.
When evaluating an expression, both GCC and clang with -mfpmath=387 specify that all expressions are evaluated at 80-bit precision. This leads, for example, to
7491907632491941888 = 0x1.9fe2693112e14p+62 = 110011111111000100110100100110001000100101110000101000000000000
5698883734965350400 = 0x1.3c5a02407b71cp+62 = 100111100010110100000001001000000011110110111000111000000000000
7491907632491941888 * 5698883734965350400 = 42695510550671093541385598890357555200 = 100000000111101101101100110001101000010100100001011110111111111111110011000111000001011101010101100011000000000000000000000000
yielding incorrect results, because that string of ones in the middle of the binary result is just at the difference between 53- and 64-bit mantissas (64 and 80-bit floating point numbers, respectively). So, while the expected result is
42695510550671088819251326462451515392 = 0x1.00f6d98d0a42fp+125 = 100000000111101101101100110001101000010100100001011110000000000000000000000000000000000000000000000000000000000000000000000000
the result obtained with just -std=c99 -m32 -mno-sse -mfpmath=387 is
42695510550671098263984292201741942784 = 0x1.00f6d98d0a43p+125 = 100000000111101101101100110001101000010100100001100000000000000000000000000000000000000000000000000000000000000000000000000000
In theory, you should be able to tell gcc and clang to enforce the correct C99 rounding rules by using options
-std=c99 -m32 -mno-sse -mfpmath=387 -ffloat-store -fexcess-precision=standard
However, this only affects expressions the compiler optimizes, and does not seem to fix the 387 handling at all. If you use e.g. clang -O1 -std=c99 -m32 -mno-sse -mfpmath=387 -ffloat-store -fexcess-precision=standard test.c -o test && ./test with test.c being Pascal Cuoq's example program, you will get the correct result per IEEE-754 rules -- but only because the compiler optimizes away the expression, not using the 387 at all.
Simply put, instead of computing
(double)d1 * (double)d2
both gcc and clang actually tell the '387 to compute
(double)((long double)d1 * (long double)d2)
I believe this is indeed a compiler bug affecting both gcc-4.6.3 and clang-llvm-3.0, and an easily reproduced one. (Pascal Cuoq points out that FLT_EVAL_METHOD=2 means operations on double-precision arguments are always done at extended precision, but I cannot see any sane reason -- aside from having to rewrite parts of libm on '387 -- to do that in C99, considering that IEEE-754 rules are achievable by the hardware. After all, the correct operation is easily achievable by the compiler, by modifying the '387 control word to match the precision of the expression. And, given that the compiler options that should force this behaviour -- -std=c99 -ffloat-store -fexcess-precision=standard -- make no sense if FLT_EVAL_METHOD=2 behaviour is actually desired, there are no backwards-compatibility issues, either.) It is important to note that, given the proper compiler flags, expressions evaluated at compile time do get evaluated correctly, and that only expressions evaluated at run time get incorrect results.
The simplest workaround, and the portable one, is to use fesetround(FE_TOWARDZERO) (from fenv.h) to round all results towards zero.
In some cases, rounding towards zero may help with predictability and pathological cases. In particular, for intervals like x = [0,1), rounding towards zero means the upper limit is never reached through rounding; important if you evaluate e.g. piecewise splines.
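A minimal sketch of that portable workaround (fenv.h only; my own illustration, and note that fesetround() changes the rounding direction for the whole thread, so it is typically set once near the start of the computation):

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    /* Strictly conforming code would also use "#pragma STDC FENV_ACCESS ON"
     * here; not all compilers implement that pragma. */
    fesetround(FE_TOWARDZERO);   /* round every result towards zero from now on */

    volatile double x = 1.0;     /* volatile so the division happens at run time */
    double third = x / 3.0;      /* truncated, never rounded up */
    printf("%.20f\n", third);

    fesetround(FE_TONEAREST);    /* restore the default if later code expects it */
    return 0;
}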
For the other rounding modes, you need to control the 387 hardware directly.
You can use either __FPU_SETCW() from #include <fpu_control.h>, or open-code it. For example, precision.c:
#include <stdlib.h>
#include <stdio.h>
#include <limits.h>

#define FP387_NEAREST   0x0000
#define FP387_ZERO      0x0C00
#define FP387_UP        0x0800
#define FP387_DOWN      0x0400

#define FP387_SINGLE    0x0000
#define FP387_DOUBLE    0x0200
#define FP387_EXTENDED  0x0300

static inline void fp387(const unsigned short control)
{
    unsigned short cw = (control & 0x0F00) | 0x007f;
    __asm__ volatile ("fldcw %0" : : "m" (*&cw));
}

const char *bits(const double value)
{
    const unsigned char *const data = (const unsigned char *)&value;
    static char buffer[CHAR_BIT * sizeof value + 1];
    char *p = buffer;
    size_t i = CHAR_BIT * sizeof value;
    while (i-->0)
        *(p++) = '0' + !!(data[i / CHAR_BIT] & (1U << (i % CHAR_BIT)));
    *p = '\0';
    return (const char *)buffer;
}

int main(int argc, char *argv[])
{
    double d1, d2;
    char dummy;

    if (argc != 3) {
        fprintf(stderr, "\nUsage: %s 7491907632491941888 5698883734965350400\n\n", argv[0]);
        return EXIT_FAILURE;
    }

    if (sscanf(argv[1], " %lf %c", &d1, &dummy) != 1) {
        fprintf(stderr, "%s: Not a number.\n", argv[1]);
        return EXIT_FAILURE;
    }
    if (sscanf(argv[2], " %lf %c", &d2, &dummy) != 1) {
        fprintf(stderr, "%s: Not a number.\n", argv[2]);
        return EXIT_FAILURE;
    }

    printf("%s:\td1 = %.0f\n\t %s in binary\n", argv[1], d1, bits(d1));
    printf("%s:\td2 = %.0f\n\t %s in binary\n", argv[2], d2, bits(d2));

    printf("\nDefaults:\n");
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nExtended precision, rounding to nearest integer:\n");
    fp387(FP387_EXTENDED | FP387_NEAREST);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nDouble precision, rounding to nearest integer:\n");
    fp387(FP387_DOUBLE | FP387_NEAREST);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nExtended precision, rounding to zero:\n");
    fp387(FP387_EXTENDED | FP387_ZERO);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    printf("\nDouble precision, rounding to zero:\n");
    fp387(FP387_DOUBLE | FP387_ZERO);
    printf("Product = %.0f\n\t %s in binary\n", d1 * d2, bits(d1 * d2));

    return 0;
}
Using clang-llvm-3.0 to compile and run, I get the correct results,
clang -std=c99 -m32 -mno-sse -mfpmath=387 -O3 -W -Wall precision.c -o precision
./precision 7491907632491941888 5698883734965350400
7491907632491941888: d1 = 7491907632491941888
0100001111011001111111100010011010010011000100010010111000010100 in binary
5698883734965350400: d2 = 5698883734965350400
0100001111010011110001011010000000100100000001111011011100011100 in binary
Defaults:
Product = 42695510550671098263984292201741942784
0100011111000000000011110110110110011000110100001010010000110000 in binary
Extended precision, rounding to nearest integer:
Product = 42695510550671098263984292201741942784
0100011111000000000011110110110110011000110100001010010000110000 in binary
Double precision, rounding to nearest integer:
Product = 42695510550671088819251326462451515392
0100011111000000000011110110110110011000110100001010010000101111 in binary
Extended precision, rounding to zero:
Product = 42695510550671088819251326462451515392
0100011111000000000011110110110110011000110100001010010000101111 in binary
Double precision, rounding to zero:
Product = 42695510550671088819251326462451515392
0100011111000000000011110110110110011000110100001010010000101111 in binary
In other words, you can work around the compiler issues by using fp387() to set the precision and rounding mode.
The downside is that some math libraries (libm.a, libm.so) may be written with the assumption that intermediate results are always computed at 80-bit precision. At least the GNU C library fpu_control.h on x86_64 has the comment "libm requires extended precision". Fortunately, you can take the '387 implementations from e.g. GNU C library, and implement them in a header file or write a known-to-work libm, if you need the math.h functionality; in fact, I think I might be able to help there.
For the record, below is what I found by experimentation. The following program shows various behaviors when compiled with Clang:
#include <stdio.h>

int r1, r2, r3, r4, r5, r6, r7;

double ten = 10.0;

int main(int c, char **v)
{
    r1 = 0.1 == (1.0 / ten);
    r2 = 0.1 == (1.0 / 10.0);
    r3 = 0.1 == (double) (1.0 / ten);
    r4 = 0.1 == (double) (1.0 / 10.0);
    ten = 10.0;
    r5 = 0.1 == (1.0 / ten);
    r6 = 0.1 == (double) (1.0 / ten);
    r7 = ((double) 0.1) == (1.0 / 10.0);
    printf("r1=%d r2=%d r3=%d r4=%d r5=%d r6=%d r7=%d\n", r1, r2, r3, r4, r5, r6, r7);
}
The results vary with the optimization level:
$ clang -v
Apple LLVM version 4.2 (clang-425.0.24) (based on LLVM 3.2svn)
$ clang -mno-sse2 -std=c99 t.c && ./a.out
r1=0 r2=1 r3=0 r4=1 r5=1 r6=0 r7=1
$ clang -mno-sse2 -std=c99 -O2 t.c && ./a.out
r1=0 r2=1 r3=0 r4=1 r5=1 r6=1 r7=1
The cast (double) that differentiates r5 and r6 at -O2 has no effect at -O0 and for variables r3 and r4. The result r1 is different from r5 at all optimization levels, whereas r6 only differs from r3 at -O2.
