Can someone explain this behaviour?
test.c:
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%f, %f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
$ gcc test.c
$ ./a.out
6012, 6012
6012.000000, 6013.000000
I checked the assembly and both arguments to the first printf are emitted as the constant 6012, so it seems to be a compile-time bug.
Run
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.20f %.20f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
and it should be clearer. The value of the second expression (after floating-point division, which is not exact) is ~6012.9999999999991, so when you truncate it with (int), gcc is smart enough to put in 6012 at compile time.
When you print the values as floating point, printf's default precision for %f is six digits after the decimal point, which means the second one prints as 6013.000000.
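To see truncation and display rounding side by side, here is a minimal sketch (the literal 6012.9999999999991 simply stands in for the actual quotient):

#include <stdio.h>

int main(void)
{
    double v = 6012.9999999999991;
    printf("%d\n", (int) v);  /* the cast truncates toward zero: prints 6012 */
    printf("%f\n", v);        /* %f rounds for display: prints 6013.000000 */
    return 0;
}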
printf() rounds floating-point numbers when you print them. If you add more precision you can see what's happening:
$ cat gccfloat.c
#include <stdio.h>

int main(void)
{
    printf("%d, %d\n", (int) (300.6000/0.05000), (int) (300.65000/0.05000));
    printf("%.15f, %.15f\n", (300.6000/0.05000), (300.65000/0.05000));
    return 0;
}
$ ./gccfloat
6012, 6012
6012.000000000000000, 6012.999999999999091
Sounds like a rounding error. 300.65000/0.05000 is being calculated (in floating point) as something like 6012.99999999. When cast to an int, it gets truncated to 6012. All of this is precalculated by the compiler's optimizations, so the final binary just contains the value 6012, which is what you're seeing.
The reason you don't see the same in your second statement is that printf rounds the value for display, rather than truncating it the way a cast to int does. (See @John Kugelman's answer.)
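If you actually want round-to-nearest behaviour when converting to int, round explicitly before casting; a minimal sketch (link with -lm on some systems):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double q = 300.65000/0.05000;    /* ~6012.9999999999991 */
    printf("%d\n", (int) q);         /* truncated: 6012 */
    printf("%d\n", (int) round(q));  /* rounded to nearest: 6013 */
    return 0;
}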
#include <stdio.h>
#include <math.h>

int main(void)
{
    float a = 16777215;
    int b = pow(2, 26);
    float c = 22345678;
    printf("%f\n", a);
    printf("%f\n", b);
    puts("---------------");
    printf("%f\n", c);
    printf("%f\n", b);
    return 0;
}
output:
16777215.000000
16777215.000000
---------------
22345678.000000
22345678.000000
Why does the output of the earlier printf influence the output of the subsequent one?
b isn't a float. Use %d (or its synonym %i) to print an int.
#include <stdio.h>
#include <math.h>

int main(void)
{
    float a = 16777215;
    int b = pow(2, 26);
    float c = 22345678;
    printf("%f\n", a);
    printf("%i\n", b);
    puts("---------------");
    printf("%f\n", c);
    printf("%i\n", b);
    return 0;
}
You need to match the type or strange things can happen:
printf("%d\n", b);
Compiling the original code with clang gives helpful warnings:
pow.c:10:20: warning: format specifies type 'double' but the argument has type 'int' [-Wformat]
    printf("%f\n", b);
            ~~     ^
            %d
pow.c:13:20: warning: format specifies type 'double' but the argument has type 'int' [-Wformat]
    printf("%f\n", b);
            ~~     ^
            %d
Varargs functions like printf can't express their type requirements in their prototypes, so it's up to compilers like clang to go the extra mile and show hints like this. You need to be extra careful when calling functions of that sort, and be sure you're doing it precisely as documented.
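As an aside, if you write your own printf-style wrappers, GCC and clang can extend the same checking to them through the (non-standard) format attribute. A minimal sketch with a hypothetical log_msg wrapper:

#include <stdio.h>
#include <stdarg.h>

/* format(printf, 1, 2): argument 1 is the format string,
   and the varargs to check against it start at argument 2 */
__attribute__((format(printf, 1, 2)))
static void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
}

int main(void)
{
    int b = 42;
    log_msg("%f\n", b);  /* warns just like printf("%f\n", b) does */
    return 0;
}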
As to how this ended up happening, it's not clear, but it doesn't have to be. Undefined behaviour is just that: Anything can happen. It could print the same thing. It could work. It could crash. There doesn't have to be an explanation.
Undefined behavior: you are trying to print an integer with %f.
Try
printf("%d\n", b);
To answer your specific question, my guess is that, at the assembly level, the int is passed in an integer register, while %f makes printf read a double from a floating-point register that was never written for this call, so it picks up a stale value left over from the previous call. That would explain why the previous float's value is printed again.
My colleague and I are studying for a test where we have to analyze C code. Looking through the tests from previous years, we saw the following code, which we don't really understand:
#include <stdio.h>

#define SUM(a,b) a + b
#define HALF(a) a / 2

int main(int argc, char *argv[])
{
    int big = 6;
    float small = 3.0;
    printf("The average is %d\n", HALF(SUM(big, small)));
    return 0;
}
This code prints 0, which we don't understand at all... Can you explain this to us?
Thanks so much in advance!
The compiler's warnings (format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘double’) give more than enough information. You need to correct your format specifier: it should be %lf (or %f) instead of %d, since you are trying to print a double value.
printf("The average is %lf\n", HALF(SUM(big, small)));
printf will treat the memory you pass it however you tell it to. Here, it treats the memory that represents the float as an int. Because the two types are stored differently, you get what is essentially an arbitrary number; it need not always be 0.
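To make "stored differently" concrete, here is a small sketch that reinterprets a float's bytes as an unsigned int (assuming 32-bit IEEE-754 floats; the actual varargs mechanics are messier, but the point stands that the bit patterns differ):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 4.5f;
    unsigned int bits;
    memcpy(&bits, &f, sizeof bits);            /* copy raw bytes, no conversion */
    printf("as a float: %f\n", f);             /* 4.500000 */
    printf("same bits as an int: %u\n", bits); /* 1083179008 (0x40900000) */
    return 0;
}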
To get correct output:
- Add parentheses in the macro.
- Use the correct format specifier (%f).
Corrected Code
#include <stdio.h>

#define SUM(a, b) (a + b)
#define HALF(a) a / 2

int main(void) {
    int big = 6;
    float small = 3.0;
    printf("The average is %f\n", HALF(SUM(big, small)));
    return 0;
}
Output
The average is 4.500000
If you don't add the parentheses, the output will be 7.500000 due to operator precedence.
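Here is a sketch showing both expansions side by side:

#include <stdio.h>

#define SUM_NOPARENS(a, b) a + b
#define SUM_PARENS(a, b) (a + b)
#define HALF(a) a / 2

int main(void)
{
    /* expands to 6 + 3.0 / 2   -> prints 7.500000 */
    printf("%f\n", HALF(SUM_NOPARENS(6, 3.0)));
    /* expands to (6 + 3.0) / 2 -> prints 4.500000 */
    printf("%f\n", HALF(SUM_PARENS(6, 3.0)));
    return 0;
}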
In case you need integer output, cast to int before printing.
printf("The average is %d\n", (int)HALF(SUM(big, small)));
Output
The average is 4
Stack Overflow,
I'm trying to write a (very) simple program that will be used to show how machine precision and flops affect functions around their roots. My code is as follows:
#include <stdio.h>
#include <math.h>

int main(){
    const float x = 2.2;
    float sum = 0.0;
    sum = pow(x,9) - 18*pow(x,8) + 144*pow(x,7) - 672*pow(x,6) + 2016*pow(x,5) -
          4032*pow(x,4) + 5376*pow(x,3) - 4608*pow(x,2) + 2304*x - 512;
    printf("sum = %d", sum);
    printf("\n----------\n");
    printf("x = %d", x);
    return 0;
}
But I keep getting that sum is equal to zero. At first I thought that maybe my machine wasn't respecting the level of precision, but after printing x I discovered that its value changes each time I run the program and is always huge (abs(x) > 1e6).
I have it declared as a constant, so I'm even more confused as to what's going on...
FYI I'm compiling with gcc -lm
printf("sum = %d", sum);
sum is a float, not an int. You should use %f instead of %d. Same here:
printf("x = %d", x);
Reading about printf() format specifiers may be a good idea.
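As a sketch of the fix, only the two printf calls need to change. Incidentally, the polynomial is (x - 2)^9, so the exact value at x = 2.2 would be 0.2^9 = 5.12e-07; %e shows such a small number far more readably than %f:

printf("sum = %e", sum);   /* something near 5.12e-07, plus rounding noise */
printf("\n----------\n");
printf("x = %f", x);       /* 2.200000 */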
When I execute this code, it prints 1610612736:
void main(){
    float a=3.3f;
    int b=2;
    printf("%d",a*b);
}
Why, and how do I fix it?
Edit: it's not even a matter of int versus float; if I replace int b=2; with float b=2.0f, it returns the same silly result.
The result of multiplying a float and an int is a float. Besides that, it will get promoted to double when passed to printf. You need a %a, %e, %f or %g format; the %d format is used to print int types.
Editorial note: the return value of main should be int. Here's a fixed program:
#include <stdio.h>

int main(void)
{
    float a = 3.3f;
    int b = 2;

    printf("%a\n", a * b);
    printf("%e\n", a * b);
    printf("%f\n", a * b);
    printf("%g\n", a * b);
    return 0;
}
and its output:
$ ./example
0x1.a66666p+2
6.600000e+00
6.600000
6.6
Alternatively, you could also do
printf("%d\n", (int)(a*b));
and this would print the result you're (kind of) expecting.
You should always make the argument types match the format string, casting explicitly where needed; otherwise you could see some weird values printed.
I wish to generate random numbers between 0 and 1. (Obviously, this has application elsewhere.)
My test code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    double uR;
    srand(1);
    for (int i = 0; i < 5; i++) {
        uR = rand()/(RAND_MAX+1.000);
        printf("%d \n", uR);
    }
}
And here's the output after the code is compiled with GCC:
gcc -ansi -std=c99 -o rand randtest.c
./rand
0
-251658240
910163968
352321536
-528482304
Upon inspection, it turns out that casting the integer RAND_MAX to a double has the effect of changing its value from 2147483647 to -4194304. This occurs regardless of the method used to change RAND_MAX to type double; so far, I've tried (double)RAND_MAX and double max = RAND_MAX as well.
Why does the number's value change? How can I stop that from happening?
You can't print a double with %d. If you use %f, it works just fine.
You are printing a double value as a decimal integer, which is what's causing the confusion. Use %.6f or something similar.
You are passing a double (uR) to printf when %d makes it expect a signed int. You should either cast it or print it with %f:
printf("%d \n", (int)uR);