So basically, for my project, there is a restriction that it must not use the -O3 flag (we must only use -O2). The reasoning for this is that the -O3 flag apparently introduces "hardware accelerated instructions".
The gcc version is 5.4 and the manual page for this version's optimization flags is: this
I want to include as many of -O3's flags as possible.
The list of flags introduced by -O3 is:
-finline-functions, -funswitch-loops, -fpredictive-commoning, -fgcse-after-reload, -ftree-loop-vectorize, -ftree-loop-distribute-patterns, -ftree-slp-vectorize, -fvect-cost-model, -ftree-partial-pre and -fipa-cp-clone
So I am planning to use -O2 and manually include as many of the above flags as possible.
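For concreteness, the command I am planning would look something like this (a sketch; main.c is a placeholder):
gcc -O2 -finline-functions -funswitch-loops -fpredictive-commoning -fgcse-after-reload -ftree-loop-vectorize -ftree-loop-distribute-patterns -ftree-slp-vectorize -fvect-cost-model -ftree-partial-pre -fipa-cp-clone -c main.c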
Which of the flags above enable "hardware accelerated instructions" optimizations? How can I tell from a flag's description whether it enables such optimizations? What constitutes a "hardware accelerated instruction" in the first place?
The set of instructions used is controlled by -march, not by -O3. It is true that -O3 may make more use of SIMD instructions for vectorization, but -O3 does not specifically add or remove instructions from consideration during code generation.
If you want to compile your code with the simplest instructions only, choose the simplest -march for your platform. For example, -march=core2 would be a conservative choice for x86-64, as it refers to the Intel Core 2 processor family, which is quite old.
Still, Core 2 supports MMX and SSE to SSE3 and SSSE3. To disable those, add:
-mno-mmx -mno-sse -mno-sse2 -mno-sse3 -mno-ssse3
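Putting those together, a conservative command line might look like this (a sketch; main.c is a placeholder, and the -mno-* list matches the flags above):
gcc -O2 -march=core2 -mno-mmx -mno-sse -mno-sse2 -mno-sse3 -mno-ssse3 -c main.c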
I feel like John has already answered this question, so I will just try to provide some examples.
Consider the following minimal program:
#include <cstring>

void copy(long *dst, const long *src)
{
    std::memcpy(dst, src, sizeof(long) * 4);
}
Compiled with GCC 7.2 (g++ -O2) on x86_64, it gives the following output:
copy(long*, long const*):
movdqu (%rsi), %xmm0
movups %xmm0, (%rdi)
movdqu 16(%rsi), %xmm0
movups %xmm0, 16(%rdi)
ret
Compiled with GCC 7.2 (g++ -O2 -mno-sse) on x86_64, it gives the following output:
copy(long*, long const*):
movq (%rsi), %rax
movq %rax, (%rdi)
movq 8(%rsi), %rax
movq %rax, 8(%rdi)
movq 16(%rsi), %rax
movq %rax, 16(%rdi)
movq 24(%rsi), %rax
movq %rax, 24(%rdi)
ret
As you can see, GCC is able to generate SSE instructions even at the -O2 level. A separate flag is required to suppress generation of those instructions.
At the same time, GCC 5.4 generates the same code with and without the -mno-sse flag, and it does the same at the -O3 optimization level.
So your goal is a bit misleading here. Tweaking the set of optimization flags might suppress generation of SSE and similar instructions in some cases, but this is not guaranteed, as the optimization level is only indirectly related to which instructions are generated. If you really want to suppress those, you can use the -mno-sse flag, but this will probably put you at a disadvantage. Just stick with -O2 - this way everyone will be on equal terms.
I used https://godbolt.org/ to demonstrate this.
I would like to add a couple of points.
Your TA said:
Using -O3 enables hardware accelerated instructions such as SSE
I think that's not really correct. The -f options seem to be intended to be machine-independent. A compiler parses the source code and transforms it into, generally, something called an intermediate representation (IR). After that, the compiler optimizes the IR itself, so-called machine-independently. Then the assembly code is generated from the optimized IR, and when the assembly is generated, a different set of optimizations is applied. Examples of IRs are LLVM IR, Sun IR, GCC's GIMPLE tree and/or RTL, etc. I believe all modern compilers have IR(s).
GCC's -f options are, I believe, basically intended to be machine-independent, while -m options are for machine-dependent optimizations. -O2 or -O3 determines a set of -f options, which are ideally machine-independent. Using fancy instructions is up to the machine-dependent parts of a compiler.
In reality, the border line between the machine-dependent and machine-independent worlds might not be crystal clear.
Vectorization, turned on by -O3, would be an example of an optimization in a gray area. Some machines do not support SIMD. Some do, but the vector size, for example, differs from architecture to architecture. Here is my example code:
// code.c
long long int inner_product(int* v0, int* v1, int sz) {
    long long int res = 0;
    for (int i = 0; i < sz; i++) {
        res += (v0[i] * v1[i]);
    }
    return res;
}
If it is compiled as follows:
$ gcc -O3 -S -march=core2 code.c -o code.s -fdump-tree-all
GCC vectorizes it, but the vector size is 2 long long ints or 4 ints:
$ cat code.*.optimized | grep 'vector('
vector(2) long long int vect_res_16.21;
vector(2) long long int vect_res_16.19;
vector(2) long long int vect__8.18;
vector(4) int vect__7.17;
vector(4) int vect__6.16;
vector(4) int * vectp_v1.15;
vector(4) int vect__4.13;
vector(4) int * vectp_v0.12;
On the other hand, if it is compiled with -march=skylake-avx512, whose vector size is 4 times bigger than core2's, the outcome would be:
$ gcc -O3 -S -march=skylake-avx512 code.c -o code.s -fdump-tree-all && cat code.*.optimized | grep 'vector('
vector(8) long long int vect_res_16.21;
vector(8) long long int vect_res_16.19;
vector(8) long long int vect__8.18;
vector(16) int vect__7.17;
vector(16) int vect__6.16;
vector(16) int * vectp_v1.15;
vector(16) int vect__4.13;
vector(16) int * vectp_v0.12;
Note that code generation has not even started at this point; the IR already looks different depending on the -march value. I guess vectorization is not the only optimization that shows this kind of behavior.
Nonetheless, I do not think it is fair to call such optimizations machine-dependent. A typical machine-dependent optimization is instruction scheduling, which has almost nothing to do with intermediate representations; it is about machine instructions and the micro-architecture. For those machine-independent optimizations in a gray area, a compiler could conceptually generate unified vectorized IR (say, making the vector size always 4) and let the code generator deal with it (each code generator might need to merge or split vector operands). In implementation, however, giving an appropriate size to a vector operand is regarded as easier at the IR level than at the code-generation level. So, I guess people gave up living in the world of "Ideas." Nonetheless, I believe such optimizations can still fairly be called machine-independent.
You might need to ask what kind of fancy instructions the TA does not want to see. If it is only vector instructions, all vector-related flags should be disabled, but you could still use -O3. My example code can be used to check whether there are any SIMD instructions left:
$ gcc -O3 -S code.c
$ cat code.s | egrep xmm
If the output is empty, you are good.
Related
I have some code that rotates my data. I know GAS syntax has a single assembly instruction that can rotate an entire byte. However, when I try to follow any of the advice on Best practices for circular shift (rotate) operations in C++, my C code compiles into at least 5 instructions, which use up three registers-- even when compiling with -O3. Maybe those are best practices in C++, and not in C?
In either case, how can I force C to use the ROR x86 instruction to rotate my data?
The precise line of code which is not getting compiled to the rotate instruction is:
value = (((y & mask) << 1 ) | (y >> (size-1))) //rotate y right 1
^ (((z & mask) << n ) | (z >> (size-n))) // rotate z left by n
// size can be 64 or 32, depending on whether we are rotating a long or an int, and
// mask would be 0xff or 0xffffffff, accordingly
I do not mind using __asm__ __volatile__ to do this rotate, if that's what I must do. But I don't know how to do so correctly.
Your macro compiles to a single ror instruction for me... specifically, I compiled this test file:
#define ROR(x,y) ((unsigned)(x) >> (y) | (unsigned)(x) << 32 - (y))
unsigned ror(unsigned x, unsigned y)
{
return ROR(x, y);
}
as C, using gcc 6, with -O2 -S, and this is the assembly I got:
.file "test.c"
.text
.p2align 4,,15
.globl ror
.type ror, @function
ror:
.LFB0:
.cfi_startproc
movl %edi, %eax
movl %esi, %ecx
rorl %cl, %eax
ret
.cfi_endproc
.LFE0:
.size ror, .-ror
.ident "GCC: (Debian 6.4.0-1) 6.4.0 20170704"
.section .note.GNU-stack,"",@progbits
Please try to do the same, and report the assembly you get. If your test program is substantially different from mine, please tell us how it differs. If you are using a different compiler or a different version of GCC please tell us exactly which one.
Incidentally, I get the same assembly output when I compile the code in the accepted answer for "Best practices for circular shift (rotate) operations in C++", as C.
How old is your compiler? As I noted in the linked question, the UB-safe variable-count rotate idiom (with extra & masking of the count) confuses old compilers, like gcc before 4.9. Since you're not masking the shift count, it should be recognized with even older gcc.
Your big expression may be confusing the compiler. Write an inline function for rotate and call it, like this:
value = rotr32(y & mask, 1) ^ rotr32(z & mask, n);
Much more readable, and it may help stop the compiler from trying to do things in the wrong order and breaking the idiom before recognizing it as a rotate.
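A minimal sketch of such a helper (the name rotr32 matches the call above; this is the UB-safe masked idiom discussed in the linked answer):

#include <stdint.h>

// Rotate right: the masks keep both shift counts in [0, 31],
// so even n == 0 avoids the undefined shift-by-32.
static inline uint32_t rotr32(uint32_t x, unsigned n)
{
    return (x >> (n & 31)) | (x << (-n & 31));
}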
Maybe those are best practices in C++, and not in C?
My answer on the linked question clearly says that it's the best practice for C as well as C++. They are different languages, but they overlap completely for this, according to my testing.
Here's a version of the Godbolt link using -xc to compile as C, not C++. I had a couple of C++isms in the link in the original question, for experimenting with integer types for the rotate count.
Like the original linked from the best-practices answer, it has a version that uses x86 intrinsics if available. clang doesn't seem to provide any in x86intrin.h, but other compilers have _rotl / _rotr for 32-bit rotates, with other sizes available.
Actually, I talked about rotate intrinsics at length in the answer on the best-practices question, not just in the godbolt link. Did you even read the answer there, apart from the code block? (If you did, your question doesn't reflect it.)
Using intrinsics, or the idiom in your own inline function, is much better than using inline asm. Asm defeats constant-propagation, among other things. Also, compilers can use BMI2 rorx dst, src, imm8 to copy-and-rotate with one instruction, if you compile with -march=haswell or -mbmi2. It's a lot harder to write an inline-asm rotate that can use rorx for immediate-count rotates but ror r32, cl for variable-count rotates. You could try with __builtin_constant_p(), but clang evaluates that before inlining, so it's basically useless for meta-programming-style choice of which code to use. It works with gcc, though. But it's still much better not to use inline asm unless you've exhausted all other avenues (like asking on SO) to avoid it. https://gcc.gnu.org/wiki/DontUseInlineAsm
Fun fact: the rotate functions in gcc's x86intrin.h are just pure C using the rotate idiom that gcc recognizes. Except for 16-bit rotates, where they use __builtin_ia32_rolhi.
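For illustration, the intrinsics route might look like this (a sketch for gcc; as noted above, clang may not provide these):

#include <x86intrin.h>

unsigned rotr7(unsigned x)
{
    return _rotr(x, 7);  // in gcc's headers this expands to the pure-C rotate idiom
}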
You might need to be a bit more specific about what integral type/width you're rotating, and whether you have a fixed or variable rotation. ror{b,w,l,q} (8, 16, 32, 64-bit) has forms for a count of 1, an imm8, or the %cl register. As an example:
#include <stddef.h>
#include <stdint.h>

static inline uint32_t rotate_right (uint32_t u, size_t r)
{
    /* rotate u right by r bits; the count is passed in %cl */
    __asm__ ("rorl %%cl, %0" : "+r" (u) : "c" (r));
    return u;
}
I haven't tested this; it's just off the top of my head. And I'm sure multiple-constraint syntax could be used to optimize cases where a constant r value is used, so that %ecx/%rcx is left alone.
If you're using a recent version of gcc or clang (or even icc), the intrinsics header <x86intrin.h> may provide __ror{b|w|d|q} intrinsics. I haven't tried them.
Best Way:
#define rotr32(x, n) (((x) >> (n)) | ((x) << (32 - (n))))
#define rotr64(x, n) (((x) >> (n)) | ((x) << (64 - (n))))
More generic:
#define rotr(x, n) (((x) >> (n)) | ((x) << ((sizeof(x) << 3) - (n))))
And it compiles (in GCC) to exactly the same code as the asm versions below.
For 64 bit:
__asm__ __volatile__("rorq %b1, %0" : "=g" (u64) : "Jc" (cShift), "0" (u64));
or
static inline uint64_t CC_ROR64(uint64_t word, int i)
{
__asm__("rorq %%cl,%0"
:"=r" (word)
:"0" (word),"c" (i));
return word;
}
Well, this is obviously a beginner's question, but this is my first attempt at making an operating system in C (actually, I'm almost entirely new to C; I'm used to asm), so why exactly is this not valid? As far as I know, a pointer in C is just a uint16_t used to point to a certain area in memory, right (or is it a uint32_t, and that's why it's not working)?
I've made the following kernel (I've already made a bootloader, all in assembly, to load the resulting KERNEL.BIN file):
kernel.c
void printf(char *str)
{
__asm__(
"mov si, %0\n"
"pusha\n"
"mov ah, 0x0E\n"
".repeat:\n"
"lodsb\n"
"cmp al, 0\n"
"je .done\n"
"int 0x10\n"
"jmp .repeat\n"
".done:\n"
"popa\n"
:
: "r" (str)
);
return;
}
int main()
{
char *msg = "Hello, world!";
printf(msg);
__asm__("jmp $");
return 0;
}
I've used the following command to compile kernel.c:
gcc kernel.c -ffreestanding -m32 -std=c99 -g -O0 -masm=intel -o kernel.bin
which returns the following error:
kernel.c:3: Error: operand type mismatch for 'mov'
What exactly might be the cause of this error?
As Michael Petch already explained, use inline assembly only for the absolute minimum of code that cannot be done in C. For the code that does remain inline assembly, you have to be extremely careful to set the constraints and clobber list right.
Always let GCC do the job of passing values in the right registers, and just specify which registers the values should be in.
For your problem, you probably want to do something like this:
#include <stdint.h>
void print( const char *str )
{
for ( ; *str; str++) {
__asm__ __volatile__("int $0x10" : : "a" ((int16_t)((0x0E << 8) + *str)), "b" ((int16_t)0) : );
}
}
EDIT: Your assembly has the problem that you try to pass a pointer in a 16-bit register. That cannot work for 32-bit code, as the pointer size is also 32 bits.
If you do want to generate 16-bit real-mode code, there is the -m16 option. But that does not make GCC a true 16-bit compiler; it has its limitations. Essentially, it issues a .code16gcc directive in the code.
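For example (a sketch using the question's file), you can see the directive in the generated assembly:
gcc -m16 -ffreestanding -S kernel.c
grep code16 kernel.s    # prints the .code16gcc directive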
You can't simply use 16-bit assembly instructions on 32-bit pointers and expect it to work. si is the lower 16 bits of the esi register (which is 32 bits).
gcc -m32 and -m16 both use 32-bit pointers. -m16 just uses address-size and operand-size prefixes to do mostly the same thing as normal -m32 mode, but running in real mode.
If you try to use 16-bit addressing in a 32-bit application, you'll drop the high part of your pointers and simply end up somewhere else.
Try reading a book on Intel 32-bit addressing modes and protected mode, and you'll see that many things are different in that mode.
(And if you try to switch to 64-bit mode, you'll see that everything changes again.)
A bootloader is something different: normally, CPU reset forces the CPU to begin in 16-bit real mode. That is completely different from 32-bit protected mode, switching to which is one of the first things an operating system does. Bootloaders work in 16-bit mode, and there pointers are 16 bits wide (well, no - 20 bits wide, once the proper segment register is combined with the address).
I had been struggling for weeks with a poorly performing translator I had written.
On the following simple benchmark:
#include <stdio.h>

int main()
{
    int x;
    char buf[2048];
    FILE *test = fopen("test.out", "wb");
    setvbuf(test, buf, _IOFBF, sizeof buf);
    for (x = 0; x < 1024*1024; x++)
        fprintf(test, "%04d", x);
    fclose(test);
    return 0;
}
we see the following result
bash-3.1$ gcc -O2 -static test.c -o test
bash-3.1$ time ./test
real 0m0.334s
user 0m0.015s
sys 0m0.016s
As you can see, the moment the "-std=c99" flag is added in, performance comes crashing down:
bash-3.1$ gcc -O2 -static -std=c99 test.c -o test
bash-3.1$ time ./test
real 0m2.477s
user 0m0.015s
sys 0m0.000s
The compiler I'm using is gcc 4.6.2 mingw32.
The file generated is about 12M, so this is a difference of about 21 MB/s between the two.
Running diff shows that the generated files are identical.
I assumed this had something to do with file locking in fprintf, of which the program makes heavy use, but I haven't been able to find a way to switch that off in the C99 version.
I tried flockfile on the stream I use at the beginning of the program, and a corresponding funlockfile at the end, but was greeted with compiler errors about implicit declarations, and linker errors claiming undefined references to those functions.
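For reference, the attempt looked roughly like this (a sketch; flockfile/funlockfile are POSIX functions, and MinGW's runtime apparently does not provide them, hence both kinds of errors):

FILE *test = fopen("test.out", "wb");
flockfile(test);            /* take the stream lock once, up front */
/* ... the fprintf loop ... */
funlockfile(test);          /* release the lock before closing */
fclose(test);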
Could there be another explanation for this problem, and more importantly, is there any way to use C99 on windows without paying such an enormous performance price?
Edit:
After looking at the code generated by these options, it looks like, in the slow version, MinGW sticks in the following:
_fprintf:
LFB0:
.cfi_startproc
subl $28, %esp
.cfi_def_cfa_offset 32
leal 40(%esp), %eax
movl %eax, 8(%esp)
movl 36(%esp), %eax
movl %eax, 4(%esp)
movl 32(%esp), %eax
movl %eax, (%esp)
call ___mingw_vfprintf
addl $28, %esp
.cfi_def_cfa_offset 4
ret
.cfi_endproc
In the fast version, this simply does not exist; otherwise, both are exactly the same. __mingw_vfprintf seems to be the slowpoke here, but I have no idea what behavior it needs to emulate that makes it so slow.
After some digging in the source code, I have found why the MinGW function is so terribly slow:
At the beginning of a [v,f,s]printf in MinGW, there is some innocent-looking initialization code:
__pformat_t stream = {
dest, /* output goes to here */
flags &= PFORMAT_TO_FILE | PFORMAT_NOLIMIT, /* only these valid initially */
PFORMAT_IGNORE, /* no field width yet */
PFORMAT_IGNORE, /* nor any precision spec */
PFORMAT_RPINIT, /* radix point uninitialised */
(wchar_t)(0), /* leave it unspecified */
0, /* zero output char count */
max, /* establish output limit */
PFORMAT_MINEXP /* exponent chars preferred */
};
However, PFORMAT_MINEXP is not what it appears to be:
#ifdef _WIN32
# define PFORMAT_MINEXP __pformat_exponent_digits()
# ifndef _TWO_DIGIT_EXPONENT
# define _get_output_format() 0
# define _TWO_DIGIT_EXPONENT 1
# endif
static __inline__ __attribute__((__always_inline__))
int __pformat_exponent_digits( void )
{
char *exponent_digits = getenv( "PRINTF_EXPONENT_DIGITS" );
return ((exponent_digits != NULL) && ((unsigned)(*exponent_digits - '0') < 3))
|| (_get_output_format() & _TWO_DIGIT_EXPONENT)
? 2
: 3
;
}
This winds up getting called every time I want to print, and getenv on Windows must not be very quick. Replacing that define with a 2 brings the runtime back to where it should be.
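In other words, the experimental one-line patch in MinGW's pformat code amounts to this (a diagnostic hack, not a proper fix):

# define PFORMAT_MINEXP 2  /* was: __pformat_exponent_digits() */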
So, the answer comes down to this: when using -std=c99 or any ANSI-compliant mode, MinGW swaps the CRT runtime's functions for its own. Normally, this wouldn't be an issue, but the MinGW lib had a bug which slowed its formatting functions down far beyond anything imaginable.
Using -std=c99 disables all GNU extensions.
With GNU extensions and optimization, a call like fprintf(test, "B") is probably replaced by fputc('B', test).
Note this answer is obsolete, see https://stackoverflow.com/a/13973562/611560 and https://stackoverflow.com/a/13973933/611560
After some consideration of your assembler, it looks like the slow version is using the *printf() implementation of MinGW, based undoubtedly in the GCC one, while the fast version is using the Microsoft implementation from msvcrt.dll.
Now, the MS one is notable for lacking a lot of features that the GCC one does implement. Some of these are GNU extensions, but some others are there for C99 conformance. And since you are using -std=c99, you are requesting that conformance.
But why so slow? Well, one factor is simplicity: the MS version is far simpler, so it is expected to run faster even in trivial cases. Another factor is that you are running under Windows, so it is to be expected that the MS version is more efficient than one ported over from the Unix world.
Does it explain a factor of x10? Probably not...
Another thing you can try: replace fprintf() with sprintf(), printing into a memory buffer without touching the file at all. Then you can try doing fwrite() without printf-ing. That way you can tell whether the loss is in the formatting of the data or in the writing to the FILE.
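A sketch of that experiment, based on the benchmark above (the buffer size is illustrative):

#include <stdio.h>

int main()
{
    int x, len;
    char buf[16];
    FILE *test = fopen("test.out", "wb");
    for (x = 0; x < 1024*1024; x++) {
        len = sprintf(buf, "%04d", x);  /* format into memory only */
        fwrite(buf, 1, len, test);      /* then write the raw bytes */
    }
    fclose(test);
    return 0;
}

Timing this with and without -std=c99 should tell you whether the slowdown lives in the formatting or in the FILE output path.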
Since MinGW32 3.15, compliant printf functions are available to use instead of those found in Microsoft C runtime (CRT).
The new printf functions are used when compiling in strict ANSI, POSIX and/or C99 modes.
For more information see the mingw32 changelog
You can use __msvcrt_fprintf() to use the fast (non-compliant) function.
I want to write a small low-level program. For some parts of it I will need to use assembly language, but the rest of the code will be written in C/C++.
So, if I use GCC to mix C/C++ with assembly code, do I need to use AT&T syntax, or can I use Intel syntax? Or how do you mix C/C++ and asm (Intel syntax) in some other way?
I realize that maybe I don't have a choice and must use AT&T syntax, but I want to be sure.
And if there turns out to be no choice, where can I find full/official documentation about the AT&T syntax?
Thanks!
If you are using separate assembly files, gas has a directive to support Intel syntax:
.intel_syntax noprefix # not recommended for inline asm
which uses Intel syntax and doesn't need the % prefix before register names.
(You can also run as with -msyntax=intel -mnaked-reg to have that as the default instead of att, in case you don't want to put .intel_syntax noprefix at the top of your files.)
Inline asm: compile with -masm=intel
For inline assembly, you can compile your C/C++ sources with gcc -masm=intel (See How to set gcc to use intel syntax permanently? for details.) The compiler's own asm output (which the inline asm is inserted into) will use Intel syntax, and it will substitute operands into asm template strings using Intel syntax like [rdi + 8] instead of 8(%rdi).
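For instance, with -masm=intel an asm template is written directly in Intel syntax (a minimal sketch; compile with gcc -masm=intel):

unsigned add_one(unsigned x)
{
    // Intel operand order: destination first, no $ immediates or % register prefixes
    __asm__("add %0, 1" : "+r"(x));
    return x;
}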
This works with GCC itself and ICC, but for clang only clang 14 and later.
(Not released yet, but the patch is in current trunk.)
Using .intel_syntax noprefix at the start of inline asm, and switching back with .att_syntax can work, but will break if you use any m constraints. The memory reference will still be generated in AT&T syntax. It happens to work for registers because GAS accepts %eax as a register name even in intel-noprefix mode.
Using .att_syntax at the end of an asm() statement will also break compilation with -masm=intel; in that case GCC's own asm after (and before) your template will be in Intel syntax. (Clang doesn't have that "problem"; each asm template string is local, unlike GCC where the template string truly becomes part of the text file that GCC sends to as to be assembled separately.)
Related:
GCC manual: asm dialect alternatives: writing an asm statement with {att | intel} in the template so it works when compiled with -masm=att or -masm=intel. See an example using lock cmpxchg, and the small sketch after this list.
https://stackoverflow.com/tags/inline-assembly/info for more about inline assembly in general; it's important to make sure you're accurately describing your asm to the compiler, so it knows what registers and memory are read / written.
AT&T syntax: https://stackoverflow.com/tags/att/info
Intel syntax: https://stackoverflow.com/tags/intel-syntax/info
The x86 tag wiki has links to manuals, optimization guides, and tutorials.
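As a small illustration of the {att | intel} dialect alternatives mentioned above (a sketch, not the lock cmpxchg example from the manual):

// GCC keeps the branch of each { | } group matching the active dialect:
// "addl $1, %0" under -masm=att, "add %0, 1" under -masm=intel.
unsigned inc(unsigned x)
{
    __asm__("add{l $1, %0| %0, 1}" : "+r"(x));
    return x;
}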
You can use inline assembly with -masm=intel as ninjalj wrote, but it may cause errors when you include C/C++ headers that themselves use inline assembly. This is code to reproduce the errors on Cygwin.
sample.cpp:
#include <cstdint>
#include <iostream>
#include <boost/thread/future.hpp>
int main(int argc, char* argv[]) {
using Value = uint32_t;
Value value = 0;
asm volatile (
"mov %0, 1\n\t" // Intel syntax
// "movl $1, %0\n\t" // AT&T syntax
:"=r"(value)::);
auto expr = [](void) -> Value { return 20; };
boost::unique_future<Value> func { boost::async(boost::launch::async, expr) };
std::cout << (value + func.get());
return 0;
}
When I built this code, I got the error messages below.
g++ -E -std=c++11 -Wall -o sample.s sample.cpp
g++ -std=c++11 -Wall -masm=intel -o sample sample.cpp -lboost_system -lboost_thread
/tmp/ccuw1Qz5.s: Assembler messages:
/tmp/ccuw1Qz5.s:1022: Error: operand size mismatch for `xadd'
/tmp/ccuw1Qz5.s:1049: Error: no such instruction: `incl DWORD PTR [rax]'
/tmp/ccuw1Qz5.s:1075: Error: no such instruction: `movl DWORD PTR [rcx],%eax'
/tmp/ccuw1Qz5.s:1079: Error: no such instruction: `movl %eax,edx'
/tmp/ccuw1Qz5.s:1080: Error: no such instruction: `incl edx'
/tmp/ccuw1Qz5.s:1082: Error: no such instruction: `cmpxchgl edx,DWORD PTR [rcx]'
To avoid these errors, you need to separate the inline assembly (the upper half of the code) from the C/C++ code that requires boost::future and the like (the lower half). The -masm=intel option is then applied only to the .cpp files that contain Intel-syntax inline assembly, not to the other .cpp files.
sample.hpp:
#include <cstdint>
using Value = uint32_t;
extern Value GetValue(void);
sample1.cpp: compile with -masm=intel
#include <iostream>
#include "sample.hpp"
int main(int argc, char* argv[]) {
Value value = 0;
asm volatile (
"mov %0, 1\n\t" // Intel syntax
:"=r"(value)::);
std::cout << (value + GetValue());
return 0;
}
sample2.cpp: compile without -masm=intel
#include <boost/thread/future.hpp>
#include "sample.hpp"
Value GetValue(void) {
auto expr = [](void) -> Value { return 20; };
boost::unique_future<Value> func { boost::async(boost::launch::async, expr) };
return func.get();
}
When the program shown below is run, it produces correct output:
j= 0 9007199616606190.000000 = x
k= 0 9007199616606190.000000 = [x]
r= 31443101 0.000000 = m*(x-[x])
But when the commented-out line (i.e. //if (argc>1) r = atol(argv[1]);) is uncommented, it produces:
j= 20000 9007199616606190.000000 = x
k= 17285 9007199616606190.000000 = [x]
r= 31443101 0.000000 = m*(x-[x])
even though that line should have no effect, since argc>1 is false. Has anybody got a plausible explanation for this problem? Is it reproducible on any other systems?
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main(int argc, char *argv[]) {
int j, k, m=10000;
double r=31443101, jroot=sqrt(83), x;
//if (argc>1) r = atol(argv[1]);
x = r * r * jroot;
j = m*(x-floor(x));
k = floor(m*(x-floor(x)));
printf ("j= %9d %24.6f = x\n", j, x);
printf ("k= %9d %24.6f = [x]\n", k, floor(x));
printf ("r= %9.0f %24.6f = m*(x-[x]) \n", r, m*(x-floor(x)));
return 0;
}
Note, test system = AMD Athlon 64 5200+ system with Linux 2.6.35.14-96.fc14.i686 (i.e., booted to run a 32-bit OS on 64-bit HW) with gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4)
Update -- A few hours ago I posted a comment saying that the code generated with and without the if statement differed only in stack offsets and some skipped code. I now find that comment was not entirely correct: it is true for non-optimized code, but not for the -O3 code I executed.
Effect of optimization switch on problem:
-O0 : Both program versions run ok
-O2 or -O3 : Version with comment has error as above, where j=20000 and k=17285
-O1 : Version with comment has j=20000 (an error) and k=0 (OK)
Anyhow, looking at -O3 -S code listings, the two cases differ mostly in skipped if code and stack offsets up to the line before call floor, at which point the with-if code has one more fstpl than the without-if code:
... ;; code without comment:
fmul %st, %st(1)
fxch %st(1)
fstpl (%esp)
fxch %st(1)
fstpl 48(%esp)
fstpl 32(%esp)
call floor
movl $.LC2, (%esp)
fnstcw 86(%esp)
movzwl 86(%esp), %eax
...
... ;; versus code with comment:
fmul %st, %st(1)
fxch %st(1)
fstpl (%esp)
fxch %st(1)
fstpl 48(%esp)
fstpl 32(%esp)
fstpl 64(%esp)
call floor
movl $.LC3, (%esp)
fnstcw 102(%esp)
movzwl 102(%esp), %eax
...
I haven't figured out the reason for the difference.
Not duplicated on my system, Win7 running CygWin with gcc 4.3.4. Both with and without the if statement, the value of j is set to zero, not 20K.
My only suggestion would be to use gcc -S to get a look at the assembler output. That should hopefully tell you what's going wrong.
Specifically, generate the assembler output to two separate files, one each for the working and non-working variants, then vgrep them (eyeball them side by side) to try to ascertain the difference.
This is a serious failure in your environment, by the way. With m being 10000, j = m*(x - floor(x)) = 20000 means that x - floor(x) must be equal to 2. I can't for the life of me think of any real number where that would be the case :-)
I think there are two reasons why that line could have an effect:
Without that line, the values of all of these variables can be (and, IMHO, most likely are) determined at compile-time; with that line, the computations have to be performed at run-time. But obviously, the compiler's precomputed values are supposed to be the same as values computed at run-time, and I'm inclined to discount this as the actual reason for the different observed behavior. (It would certainly show up as a huge difference in the assembler output, though!)
On many machines, floating-point arithmetic is performed using more bits in intermediate values than can actually be stored in a double-precision floating-point number. Your second version, by creating two different code-paths to set x, basically restricts x to what can be stored in a double-precision floating-point number, whereas your first version can allow the initially-calculated value for x to still be available as an intermediate value, with extra bits, when computing subsequent values. (This could be the case whether all of these values are computed at compile-time or at run-time.)
The reason that uncommenting that line might affect the result is that, without it, the compiler can see that r and jroot cannot change after initialisation, so it can calculate x at compile time rather than at runtime. When the line is uncommented, r might change, so the calculation of x must be deferred to runtime, which can result in it being done with a different precision (particularly if 387 floating-point math is being used).
You can try using -mfpmath=sse -march=native to use the SSE unit for floating point calculations, which doesn't exhibit excess precision; or you can try using the -ffloat-store switch.
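For example, with the test program above (a sketch; -lm for floor/sqrt):

gcc -O3 -mfpmath=sse -march=native test.c -o test -lm
gcc -O3 -ffloat-store test.c -o test -lm

The first keeps double arithmetic in 64-bit SSE registers; the second prevents floating-point variables from being kept in the 387's wider 80-bit registers.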
Your subtraction x - floor(x) exhibits catastrophic cancellation - this is the root cause of the problem and something to be avoided ;).
EDITED:
I also do not see a difference when I compile your code on my computer using -O0, -O1, -O2 or -O3.
AMD Phenom Quad 64 bit.
gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
I also tried clang (LLVM) from release 3.0, with and without the if - same results.
I agree that the compiler can pre-compute everything without that if line; you would definitely see that in the assembly output, though.
Floating point and C can be nasty; there is a lot to know to get it to really work. Forcing the int-to-double conversions is good for accuracy, but int to/from float conversion is where FPUs tend to have their bugs (I think I saw that mentioned with TestFloat or somewhere like that). Also, even if the FPU is good, the C library built into the compiler and the C library compiled into or used by your program can and will differ, and both have been known to have problems. You might try running TestFloat on your system to see if your FPU is good. Between the famous Pentium floating-point bug and the Pentium IV days, most processors had floating-point bugs; the Pentium III I had was solid, but the Pentium IV I had would fail. I rarely use floating point anymore, so I don't bother to test my systems.
Playing with optimization did change your results, according to your edit, so this is most likely a gcc problem or a combination of your code and gcc (and not a hardware FPU problem). Then try a different version of gcc on the same computer - 4.4.x instead of 4.5.x, for example.