Using gcc 4.7:
$ gcc --version
gcc (GCC) 4.7.0 20120505 (prerelease)
Code listing (test.c):
#include <stdint.h>

struct test {
    int before;
    char start[0];
    unsigned int v1;
    unsigned int v2;
    unsigned int v3;
    char end[0];
    int after;
};

int main(int argc, char **argv)
{
    int x, y;
    x = ((uintptr_t)(&((struct test*)0)->end)) - ((uintptr_t)(&((struct test*)0)->start));
    y = ((&((struct test*)0)->end)) - ((&((struct test*)0)->start));
    return x + y;
}
Compile & execute
$ gcc -Wall -o test test.c && ./test
Floating point exception
The SIGFPE is caused by the second assignment (y = ...): in the assembly listing there is a division generated for that line. Why? Note that the only difference between the x = and y = lines is the cast to (uintptr_t).
Disregarding the undefined behaviour due to the constraint violations involved, what gcc does here is take the byte difference between the two char[0] pointers, &(((struct test*)0)->start) and &(((struct test*)0)->end), and divide it by the size of a char[0], which of course is 0, so you get a division by 0.
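If the goal is simply the distance in bytes between the two zero-length array markers, one well-defined alternative is offsetof() from <stddef.h>. A minimal sketch (the value 12 assumes a typical ABI with 4-byte int and unsigned int):

#include <stddef.h>
#include <stdio.h>

struct test {
    int before;
    char start[0];
    unsigned int v1;
    unsigned int v2;
    unsigned int v3;
    char end[0];
    int after;
};

int main(void)
{
    /* offsetof() avoids both the null-pointer tricks and the pointer
     * subtraction on char[0] elements, so no division by zero occurs. */
    size_t span = offsetof(struct test, end) - offsetof(struct test, start);
    printf("%zu\n", span); /* typically prints 12 */
    return 0;
}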
I ran into an interesting issue when compiling some code with -O3 using clang on OSX High Sierra. The code is this:
#include <stdint.h>
#include <limits.h> /* for CHAR_BIT */
#include <stdio.h> /* for printf() */
#include <stddef.h> /* for size_t */
uint64_t get_morton_code(uint16_t x, uint16_t y, uint16_t z)
{
    /* Returns the number formed by interleaving the bits in x, y, and z, also
     * known as the morton code.
     *
     * See https://graphics.stanford.edu/~seander/bithacks.html#InterleaveTableObvious
     */
    size_t i;
    uint64_t a = 0;
    for (i = 0; i < sizeof(x)*CHAR_BIT; i++) {
        a |= (x & 1U << i) << (2*i) | (y & 1U << i) << (2*i + 1) | (z & 1U << i) << (2*i + 2);
    }
    return a;
}
int main(int argc, char **argv)
{
    printf("get_morton_code(99,159,46) = %llu\n", get_morton_code(99,159,46));
    return 0;
}
When compiling this with cc -O1 -o test_morton_code test_morton_code.c I get the following output:
get_morton_code(99,159,46) = 4631995
which is correct. However, when compiling with cc -O3 -o test_morton_code test_morton_code.c:
get_morton_code(99,159,46) = 4294967295
which is wrong.
What is also odd is that this bug appears in my code when switching from -O2 to -O3 whereas in the minimal working example above it appears when going from -O1 to -O2.
Is this a bug in the compiler optimization or am I doing something stupid that's only appearing when the compiler is optimizing more aggressively?
I'm using the following version of clang:
snotdaqs-iMac:snoFitter snoperator$ cc --version
Apple LLVM version 9.1.0 (clang-902.0.39.1)
Target: x86_64-apple-darwin17.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
UndefinedBehaviorSanitizer is really helpful in catching such mistakes:
$ clang -fsanitize=undefined -O3 o3.c
$ ./a.out
o3.c:19:2: runtime error: shift exponent 32 is too large for 32-bit type 'unsigned int'
get_morton_code(99,159,46) = 4294967295
A possible fix would be replacing the 1Us with 1ULL; an unsigned long long is at least 64 bits wide and can be shifted that far.
When i is 15 in the loop, 2*i+2 is 32, and you are shifting an unsigned int by the number of bits in an unsigned int, which is undefined.
You apparently intend to work in a 64-bit field, so cast the left side of the shift to uint64_t.
A proper printf format for uint64_t is "get_morton_code(99,159,46) = %" PRIu64 "\n"; PRIu64 is defined in the <inttypes.h> header.
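Putting both suggestions together, a sketch of a corrected program might look like the following (using 1ULL so every shifted term is 64 bits wide, and PRIu64 for printing); with the inputs above it should print 4631995:

#include <stdint.h>
#include <inttypes.h>
#include <limits.h>
#include <stdio.h>

uint64_t get_morton_code(uint16_t x, uint16_t y, uint16_t z)
{
    /* Interleave the bits of x, y and z; 1ULL keeps every shift in a
     * 64-bit type, so shift counts up to 32 are well defined. */
    size_t i;
    uint64_t a = 0;
    for (i = 0; i < sizeof(x) * CHAR_BIT; i++) {
        a |= (x & 1ULL << i) << (2*i)
           | (y & 1ULL << i) << (2*i + 1)
           | (z & 1ULL << i) << (2*i + 2);
    }
    return a;
}

int main(void)
{
    printf("get_morton_code(99,159,46) = %" PRIu64 "\n",
           get_morton_code(99, 159, 46));
    return 0;
}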
I encountered a weird situation where performing pointer arithmetic involving dynamically linked symbols leads to incorrect results. I'm unsure whether I'm simply missing some linker parameters or whether it's a linker bug. Can someone explain what's wrong in the following example?
Consider the following code (lib.c) of a simple shared library:
#include <inttypes.h>
#include <stdio.h>

uintptr_t getmask()
{
    return 0xffffffff;
}

int fn1()
{
    return 42;
}

void fn2()
{
    uintptr_t mask;
    uintptr_t p;
    mask = getmask();
    p = (uintptr_t)fn1 & mask;
    printf("mask: %08x\n", mask);
    printf("fn1: %p\n", fn1);
    printf("p: %08x\n", p);
}
The operation in question is the bitwise AND between the address of fn1 and
the variable mask. The application (app.c) just calls fn2, like this:
extern int fn2();

int main()
{
    fn2();
    return 0;
}
It leads to the following output ...
mask: ffffffff
fn1: 0x2aab43c0
p: 000003c0
... which is obviously incorrect, because the same result is expected for fn1
and p. The code runs on an AVR32 architecture and is compiled as follows:
$ avr32-linux-uclibc-gcc -Os -Wextra -Wall -c -o lib.o lib.c
$ avr32-linux-uclibc-gcc -Os -Wextra -Wall -shared -o libfoo.so lib.o
$ avr32-linux-uclibc-gcc -Os -Wextra -Wall -o app app.c -L. -lfoo
The compiler decides that the optimal solution is to load the variable mask into the 32-bit register r7 and to split the &-operation into two assembler operations with immediate operands.
$ avr32-linux-uclibc-objdump -d libfoo.so
000003ce <fn1>:
3ce: 32 ac mov r12,42
3d0: 5e fc retal r12
000003d2 <fn2>:
...
3f0: e4 17 00 00 andh r7,0x0
3f4: e0 17 03 ce andl r7,0x3ce
I assume the immediate operands of the and instructions are not relocated to the load address of fn1 when the shared library is loaded into the application's address space:
Is this behaviour intentional?
How can I investigate whether the problem occurs when linking the shared library or when loading the executable?
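One way to check that assumption, assuming the AVR32 binutils accept the usual GNU objdump options, is to list the dynamic relocation entries of the shared object and see whether any of them refer to the andh/andl immediates in fn2:

$ avr32-linux-uclibc-objdump -R libfoo.so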
Background: This is not an academic question. OpenSSL and LibreSSL use similar code, so changing the C source is not an option. The code runs well on other architectures, and there is certainly a reason, even if it is not apparent, for doing bitwise operations on function pointers.
After correcting all the 'sloppiness' in the code, the result is:
#include <inttypes.h>
#include <stdio.h>

int fn1( void );
void fn2( void );
uintptr_t getmask( void );

int main( void )
{
    fn2();
    return 0;
}

uintptr_t getmask()
{
    return 0xffffffff;
}

int fn1()
{
    return 42;
}

void fn2()
{
    uintptr_t mask;
    uintptr_t p;
    mask = getmask();
    p = (uintptr_t)fn1 & mask;
    printf("mask: %08x\n", (unsigned int)mask);
    printf("fn1: %p\n", fn1);
    printf("p: %08x\n", (unsigned int)p);
}
and the output (on my 64-bit Linux computer) is:
mask: ffffffff
fn1: 0x4007c1
p: 004007c1
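As an aside, <inttypes.h> also provides format macros that print a uintptr_t without the casts used above; a minimal sketch (an alternative, not what the corrected code does):

#include <inttypes.h>
#include <stdio.h>

/* Print a uintptr_t in hex without narrowing it to unsigned int. */
void print_mask(uintptr_t mask)
{
    printf("mask: %08" PRIxPTR "\n", mask);
}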
I have a problem with my function waveres. It is supposed to return an amplitude as a float, but it does not; it returns a random high number. I think the declaration in my header file is not "seen" by the main function. The other functions work, so I did not include them. While the waveres function runs, it prints correct values of amp.
Header file
#include <stdio.h>
#include <math.h>
#include <time.h> /* for the random function */
#include <stdlib.h>
void faseforskyvning(float epsi[]);
float waveres(float S[],float w[],float *x, float *t, float epsi[]);
void lespar(float S[], float w[]);
Main program
#include "sim.h"
main()
{
    float epsi[9], t = 1.0, x = 1.0;
    float S[9], w[9];
    float amp;
    faseforskyvning(epsi);
    lespar(S,w);
    amp=waveres(S,w,&x,&t,epsi);
    printf("%f\n", amp);
}
waveres:
#include "sim.h"
float waveres(float S[],float w[],float *x, float *t, float epsi[])
{
    float amp = 0, k;
    int i;
    for(i=0;i<10;i++)
    {
        k = pow(w[i],2)/9.81;
        amp = amp + sqrt(2*S[i]*0.2)*cos(w[i]*(*t)+k*(*x)+epsi[i]);
        printf("%f\n",amp);
    }
    return(amp);
}
Sample output, where the last two numbers are supposed to be the same:
0.000000
0.261871
3.750682
3.784552
3.741382
3.532950
3.759173
3.734213
3.418669
3.237864
1078933760.000000
A source of the error might be that I am compiling wrong. Here is the output from the compiler:
make
gcc -c -o test.o test.c
gcc -c -o faseforskyvning.o faseforskyvning.c
gcc -c -o waveres.o waveres.c
gcc -c -o lespar.o lespar.c
gcc test.o faseforskyvning.o waveres.o lespar.o -o test -lm -E
gcc: warning: test.o: linker input file unused because linking not done
gcc: warning: faseforskyvning.o: linker input file unused because linking not done
gcc: warning: waveres.o: linker input file unused because linking not done
gcc: warning: lespar.o: linker input file unused because linking not done
You have undefined behavior: you iterate until 10
for(i=0;i<10;i++)
but your arrays have size 9, which means the biggest valid index is 8:
float epsi[9], t = 1.0, x = 1.0;
float S[9], w[9];
You need to change your loop to
for(i=0;i<9;i++)
Also, your arrays are not initialized, which likewise provokes undefined behavior when their indeterminate values are read. For example,
float w[9]={0};
initializes all elements of array w with 0
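For illustration, here is a self-contained sketch with the loop bound fixed and the arrays given placeholder values; the real program would still fill S, w and epsi through lespar() and faseforskyvning() instead:

#include <stdio.h>
#include <math.h>

#define N 9   /* all arrays in the program have 9 elements */

float waveres(float S[], float w[], float *x, float *t, float epsi[])
{
    float amp = 0, k;
    int i;
    for (i = 0; i < N; i++) {  /* stay within the bounds of the arrays */
        k = pow(w[i], 2) / 9.81;
        amp = amp + sqrt(2*S[i]*0.2) * cos(w[i]*(*t) + k*(*x) + epsi[i]);
    }
    return amp;
}

int main(void)
{
    /* placeholder data instead of lespar()/faseforskyvning() */
    float S[N]    = {1, 1, 1, 1, 1, 1, 1, 1, 1};
    float w[N]    = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    float epsi[N] = {0};
    float t = 1.0f, x = 1.0f;

    printf("%f\n", waveres(S, w, &x, &t, epsi));
    return 0;
}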
My code tries to find the entropy of a signal (stored in 'data' and 'interframe'; in the full code these would contain the signal, here I've just put in some random values). When I compile with 'gcc temp.c' it compiles and runs fine.
Output:
entropy: 40.174477
features: 0022FD06
features[0]: 40
entropy: 40
But when I compile with 'gcc -mstackrealign -msse -Os -ftree-vectorize temp.c', it compiles but fails to execute beyond line 48. It needs all four flags in order to fail; with any three of them it runs fine.
The code probably looks weird; I've chopped just the failing bits out of a much bigger program. I have only the foggiest idea of what the compiler flags do. Someone else put them in (there are usually more of them, but I worked out that these were the problematic ones).
All help much appreciated!
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <math.h>
static void calc_entropy(volatile int16_t *features, const int16_t* data,
                         const int16_t* interframe, int frame_length);

int main()
{
    int frame_length = 128;
    int16_t data[128] = {1, 2, 3, 4};
    int16_t interframe[128] = {1, 1, 1};
    int16_t a = 0;
    int16_t* features = &a;
    calc_entropy(features, data, interframe, frame_length);
    features += 1;
    fprintf(stderr, "\nentropy: %d", a);
    return 0;
}

static void calc_entropy(volatile int16_t *features, const int16_t* data,
                         const int16_t* interframe, int frame_length)
{
    float histo[65536] = {0};
    float* histo_zero = histo + 32768;
    volatile float entropy = 0.0f;
    int i;
    for(i=0; i<frame_length; i++){
        histo_zero[data[i]]++;
        histo_zero[interframe[i]]++;
    }
    for(i=-32768; i < 32768; i++){
        if(histo_zero[i])
            entropy -= histo_zero[i]*logf(histo_zero[i]/(float)(frame_length*2));
    }
    fprintf(stderr, "\nentropy: %f", entropy);
    fprintf(stderr, "\nfeatures: %p", features);
    features[0] = entropy; // execution fails here
    fprintf(stderr, "\nfeatures[0]: %d", features[0]);
}
Edit: I'm using gcc 4.5.2, on x86. Also, if I compile and run it in VirtualBox running Ubuntu (gcc -lm -mstackrealign -msse -Os -ftree-vectorize temp.c), it executes correctly.
Edit2: I get
entropy: 40.174477
features: 00000000
and then a message from Windows telling me that the program has stopped running.
Edit3: In the five months since I originally posted the question I've updated to gcc 4.7.0, and the code now runs fine. I went back to gcc 4.5.2, and it failed. Still don't know why!
ottavio@magritte:/tmp$ gcc x.c -o x -lm -mstackrealign -msse -Os -ftree-vectorize
ottavio@magritte:/tmp$ ./x
entropy: 40.174477
features: 0x7fff5fe151ce
features[0]: 40
entropy: 40
ottavio@magritte:/tmp$ gcc x.c -o x -lm
ottavio@magritte:/tmp$ ./x
entropy: 40.174477
features: 0x7fffd7eff73e
features[0]: 40
entropy: 40
ottavio@magritte:/tmp$
So, what's wrong with it? gcc 4.6.1 and x86_64 architecture.
It seems to be running here as well, and the only thing I see that might be funky is that you are taking a 32-bit float (entropy) and storing it into a 16-bit integer (features[0]):
features[0] = entropy; // execution fails here
which of course will truncate it.
It shouldn't matter, but for the heck of it, see if it makes any difference if you change your int16_t values to int32_t values.
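To see that narrowing in isolation, a tiny stand-alone sketch (not part of the original program):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    float entropy = 40.174477f;
    int16_t feature = entropy;  /* float-to-integer conversion drops the fraction */
    printf("%f -> %d\n", entropy, feature);  /* the stored integer is 40 */
    return 0;
}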
Given the following program:
/* Find the sum of all the multiples of 3 or 5 below 1000. */
#include <stdio.h>
unsigned long int method_one(const unsigned long int n);
int
main(int argc, char *argv[])
{
    unsigned long int sum = method_one(1000000000);
    if (sum != 0) {
        printf("Sum: %lu\n", sum);
    } else {
        printf("Error: Unsigned Integer Wrapping.\n");
    }
    return 0;
}

unsigned long int
method_one(const unsigned long int n)
{
    unsigned long int i;
    unsigned long int sum = 0;
    for (i=1; i!=n; ++i) {
        if (!(i % 3) || !(i % 5)) {
            unsigned long int tmp_sum = sum;
            sum += i;
            if (sum < tmp_sum)
                return 0;
        }
    }
    return sum;
}
On a Mac OS X system (Xcode 3.2.3), if I compile with cc using the -std=c99 flag, everything seems just right:
nietzsche:problem_1 robert$ cc -std=c99 problem_1.c -o problem_1
nietzsche:problem_1 robert$ ./problem_1
Sum: 233333333166666668
However, if I use c99 to compile it this is what happens:
nietzsche:problem_1 robert$ c99 problem_1.c -o problem_1
nietzsche:problem_1 robert$ ./problem_1
Error: Unsigned Integer Wrapping.
Can you please explain this behavior?
c99 is a wrapper around gcc. It exists because POSIX requires it. c99 generates a 32-bit (i386) binary by default.
cc is a symlink to gcc, so it takes whatever default configuration gcc has. gcc produces a binary for the native architecture by default, which is x86_64 here.
unsigned long is 32 bits wide on i386 on OS X, and 64 bits wide on x86_64. Therefore the c99 build hits the "Unsigned Integer Wrapping" case, while the cc -std=c99 build does not.
You can force c99 to generate a 64-bit binary on OS X with the -W 64 flag:
c99 -W 64 problem_1.c -o problem_1
(Note: by gcc I mean the actual gcc binary like i686-apple-darwin10-gcc-4.2.1.)
Under Mac OS X, cc is a symlink to gcc (which defaults to 64-bit), and c99 is not (it defaults to 32-bit).
/usr/bin/cc -> gcc-4.2
and they use different default sizes for some data types.
/** sizeof.c
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    printf("sizeof(unsigned long int)==%d\n", (int)sizeof(unsigned long int));
    return EXIT_SUCCESS;
}
cc -std=c99 sizeof.c
./a.out
sizeof(unsigned long int)==8
c99 sizeof.c
./a.out
sizeof(unsigned long int)==4
Quite simply, you are overflowing (aka wrapping) your integer variable when using the c99 compiler.
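A width-independent variant (not part of the original answers) sidesteps the difference by using a fixed 64-bit type from <stdint.h>; it should print the same sum regardless of whether the binary is built as 32-bit or 64-bit:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint64_t sum = 0;
    uint64_t i;
    for (i = 1; i != 1000000000; ++i) {
        if (!(i % 3) || !(i % 5))
            sum += i;   /* cannot wrap: the true sum fits easily in 64 bits */
    }
    printf("Sum: %" PRIu64 "\n", sum);  /* matches the 64-bit result above */
    return 0;
}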