The output of the code snippet:
printf("%f", 9/5);
on my Linux machine with gcc 4.6.3 (gcc myprog.c followed by ./a.out) is:
-0.000000
while on codepad.org it is:
2.168831
Why the difference?
I have referred to the links Why cast is needed in printf? and Implicit conversion in C?, but couldn't make use of them.
Info regarding codepad execution:
C: gcc 4.1.2
flags: -O -fmessage-length=0 -fno-merge-constants -fstrict-aliasing -fstack-protector-all
EDIT:
More: running the following (in the same program) on codepad:
printf("%f\n", 99/5);
printf("%f\n", 18/5);
printf("%f\n", 2/3);
printf("%f\n%f\n%f", 2, 3, 4);
printf("%f\n", 2);
the output is
2.168831
2.168831
2.168831
0.000000
0.000000
-0.001246
-0.0018760.000000
The first three outputs are the same garbage value (but not the last one). I am wondering why.
9/5 is evaluated as an int, which leads to undefined behavior when you use the format specifier %f (which expects a double).
To print the correct result, do this:
printf("%f", 9.0/5); // forces the argument to be a double rather than an int
or
printf("%f\n", (double)9/5); // casts the argument to a double; this is more helpful
                             // when you have a variable
I am not sure how codepad.org compiles their code, but for gcc the arguments to printf must match the format specifiers.
Consider the following to see this more carefully:
#include <stdio.h>
int main(void) {
    printf("%f %d %f\n", 9/5, 9/5, 9.0/5);
    printf("%f\n", 1); // as suggested by devnull - again, since the argument is not
                       // a double, the output is undefined
    return 0;
}
Output:
$ ./test
0.000000 1 1.800000
0.000000
It's undefined behavior; anything at all could be printed. The type of 9/5 is int, while %f requires that a double be passed (if you pass a float, it will be converted to double thanks to the automatic promotion of arguments in variadic function calls, so that's fine too).
Since you're not giving it a double, anything can happen.
In practice, assuming sizeof(int)==4 and sizeof(double)==8, your printf call will read 4 bytes of random garbage. If that happens to be all zeros, you'll print zero; if it's not, you'll print random stuff.
Codepad is wrong.
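To see the default argument promotion mentioned above in action, here is a minimal sketch; the float argument is promoted to double, so %f is correct in both calls:
#include <stdio.h>

int main(void)
{
    float f = 1.8f;
    printf("%f\n", f);       /* f is promoted to double: prints 1.800000 */
    printf("%f\n", 9.0 / 5); /* division done in double: prints 1.800000 */
    return 0;
}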
Every number is an int unless stated otherwise - that is the key to the solution.
In C/C++, 9/5 is an int and equals 1.
That is why your:
printf("%f\n", 9 / 5);
is the same as:
printf("%f\n", 1);
But why does it print 0.0, you ask? Here is why.
When printf is given the %f specifier it expects a floating-point value (a double, in fact; the walkthrough below uses the 32-bit float layout for simplicity).
Now the disassembly of your code may look like this:
printf("%f\n", 9 / 5);
push 1
call dword ptr [__imp__printf (0B3740Ch)]
push 1 puts the (usually 32-bit) value 0x1 on the stack
And now the most important part: how the 32-bit value 1 looks in binary:
(MSB) 00000000 00000000 00000000 00000001 (LSB)
and what this pattern means if treated as a 32-bit floating point value (according to this):
sign (bit #31) = 0
exponent (next 8 bits, i.e. #30..#23) = all 0 too
fraction (the remaining bits) contains 1 (dec)
In theory the fraction contributes a tiny amount, but exponent == 0x0 is a special case (read the link): the value is denormalized and so close to zero that %f prints it as 0.0.
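To check this bit-level story without reading assembly, here is a minimal sketch (assuming a 32-bit int and IEEE-754 float) that reinterprets the integer 1 as a float:
#include <stdio.h>
#include <string.h>

int main(void)
{
    int i = 1;
    float f;
    /* Reinterpret the bit pattern 0x00000001 as a float.
       With a zero exponent field it is a denormal, about 1.4e-45. */
    memcpy(&f, &i, sizeof f);
    printf("%g\n", f); /* prints something like 1.4013e-45 */
    printf("%f\n", f); /* with six decimals this shows as 0.000000 */
    return 0;
}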
I have a program in C which is as follows:
#include <stdio.h>
int main() {
    int sum = 17, count = 5;
    double mean;
    printf("Value of mean (without casting): %f\n", sum/count);
    mean = (double) sum / count;
    printf("Value of mean (with casting): %f\n", mean);
    return 0;
}
For the above program, I'm getting the following output:
Value of mean (without casting): 0.000000
Value of mean (with casting): 3.400000
I don't understand why I'm getting 0.000000 before performing the typecast, even though I expect sum/count to return a decimal (float) value, so I believe both values should come out the same. Any help would be highly appreciated. Thanks!
That's the result of using an improper format specifier to display a computed value with printf(). Notice your code:
printf("Value of mean (without casting): %f\n", sum / count);
Here you're computing the division of sum by count, which evaluates to the integer 3 (because sum and count are both of integer type, the decimals are truncated).
OTOH, if you enable compiler warnings with the -Wformat flag, you'll get a warning:
main.cpp:8:46: warning: format '%f' expects argument of type 'double', but argument 2 has type 'int' [-Wformat=]
8 | printf("Value of mean (without casting): %f\n", (sum / count));
| ~^ ~~~~~~~~~~~~~
| | |
| double int
| %d
By using the correct format specifier, which is %d for integers, the error no longer happens. Or, if you'd rather not change the format specifier, change the expression instead to:
((float)sum / count)
which will solve your problem as well.
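For completeness, a minimal sketch of both fixes side by side, using the values from the question:
#include <stdio.h>

int main(void)
{
    int sum = 17, count = 5;
    printf("%d\n", sum / count);         /* integer division: prints 3 */
    printf("%f\n", (double)sum / count); /* real division: prints 3.400000 */
    return 0;
}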
printf("Value of mean (without casting): %f\n", sum/count);
You are pushing an integer expression (sum/count) onto the stack, but telling printf to pop it off as a double (%f) and interpret the bits as such.
There are two problems with this. First, the integer expression likely pushes 4 bytes (sizeof(int)) onto the stack, but printf pops 8 bytes (sizeof(double)) as a result of being passed %f: undefined behavior. Second, even if the sizes matched up, the bits of a floating point value are in a completely different order and arrangement than an integer's, so the value would be evaluated differently and garbage would get printed.
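A minimal sketch of the size mismatch described above (typical values; the sizes are implementation-defined):
#include <stdio.h>

int main(void)
{
    printf("sizeof(int)    = %zu\n", sizeof(int));    /* commonly 4 */
    printf("sizeof(double) = %zu\n", sizeof(double)); /* commonly 8 */
    return 0;
}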
I am trying to learn C and am very confused already.
#include <stdio.h>
int main(void)
{
    int a = 50000;
    float b = 'a';
    printf("b = %f\n", 'a');
    printf("a = %f\n", a);
    return 0;
}
The above code produces a different output each time with gcc. Why?
You pass an int value ('a') for the %f format, which expects a float or a double. This is undefined behavior and can produce different output on every execution of the same program. The second printf has the same problem: %f expects a float or double, but you pass an int value.
Here is a corrected version:
#include <stdio.h>
int main(void) {
    int a = 50000;
    float b = 'a';
    printf("a = %d\n", a);
    printf("b = %f\n", b);
    printf("'a' = %d\n", 'a');
    return 0;
}
Output:
a = 50000
b = 97.000000
'a' = 97
Compiling with more warnings enabled (command line arguments -Wall -W -Wextra) lets the compiler perform more consistency checks and complain about potential programming errors. It would have detected the errors in the posted code.
Indeed clang still complains about the above correction:
clang -O2 -std=c11 -Weverything fmt.c -o fmt
fmt.c:8:24: warning: implicit conversion increases floating-point precision: 'float' to 'double' [-Wdouble-promotion]
printf("b = %f\n", b);
~~~~~~ ^
1 warning generated.
b is promoted to double when passed to printf(). The double type has more precision than the float type, which might produce misleading output if more decimals are requested than the original type can provide.
It is advisable to always use double for floating point calculations and to reserve the float type for very specific cases where it is better suited, such as computer graphics APIs, some binary interchange formats...
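A minimal sketch of the precision difference (the exact digits shown depend on the implementation):
#include <stdio.h>

int main(void)
{
    float  f = 0.1f;
    double d = 0.1;
    /* Requesting more digits than float can hold exposes the noise. */
    printf("float : %.17f\n", f); /* e.g. 0.10000000149011612 */
    printf("double: %.17f\n", d); /* e.g. 0.10000000000000001 */
    return 0;
}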
From an implementation standpoint, passing floating point numbers (which is what %f expects) through variable argument lists (which is what ... means in the printf prototype) and passing integers (which is what 'a' is, specifically of type int) may use different registers and memory layouts.
This is usually defined by the ABI calling conventions. Specifically, on x86_64, printf will read %xmm0 (which holds an uninitialized value), whereas gcc passes the int argument in an integer register.
See more in the System V Application Binary Interface AMD64 Architecture Processor Supplement, p. 56.
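To illustrate why the callee must be told each argument's type, here is a minimal sketch of a printf-like variadic function (show is a hypothetical helper, not a real API):
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: va_arg(ap, double) fetches from a different
   location (e.g. %xmm0 on x86_64) than va_arg(ap, int) (an integer
   register), so the kind flag must match what was actually passed. */
static void show(int kind, ...)
{
    va_list ap;
    va_start(ap, kind);
    if (kind == 'd')
        printf("double: %f\n", va_arg(ap, double));
    else
        printf("int: %d\n", va_arg(ap, int));
    va_end(ap);
}

int main(void)
{
    show('d', 1.5); /* 1.5 travels in a vector register */
    show('i', 42);  /* 42 travels in an integer register */
    return 0;
}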
You should note that C is a very low-level language which leaves a lot of confusing cases (including integer overflows and underflows, uninitialized variables, and buffer overruns) to the implementation. That allows maximum performance (by avoiding lots of runtime checks), but leads to errors such as this one.
The %f specifier needs to be matched by a floating-point argument (float or double); you're giving it ints instead. Not matching the type is undefined behaviour, meaning it could do anything, including printing different results every time.
Hope this helped, good luck :)
Edit: Thanks chqrlie for the clarifications!
Unpredictable behavior.
When you print a value using a mismatched format specifier, the program gives unpredictable output. The behavior of using %f as the format specifier for a char or an int is undefined.
Use the correct format specifier in your program according to the data type:
printf("%d\n", 'a'); // print ASCII value
printf("%d\n", a);
First of all, gcc should give a warning for the above code because of the mismatch in the format specifiers.
Mismatched formatting (in printf() as well as scanf()) gives unpredictable behavior in C. This is in spite of the fact that gcc would otherwise take care of type conversions like int to float implicitly.
Here are two nice references.
https://stackoverflow.com/a/1057173/4954434
https://stackoverflow.com/a/12830110/4954434
The following will work as expected.
#include <stdio.h>
int main(void)
{
    int a = 50000;
    float b = 'a';
    printf("b = %f\n", (float)'a');
    printf("a = %f\n", (float)a);
    return 0;
}
The main difficulty you are having is with the data types you are using.
When you create a variable you tell the compiler how much memory to reserve for it: char has a size of 1 byte, int is typically 4 bytes, and float is typically 4 bytes (32 bits). It's important to match the format specifier to the data type so you don't get unpredictable results.
char a = 'a';
printf("%c",a);
int b = 50000;
printf("%d",b);
float c = 5.7;
printf("%f", c);
For more information: C-Variables
#include <stdio.h>
#define square(x) x*x
void main()
{
    int i;
    i = 8 / square(4);
    printf("%d %d", i, 8/square(4));
}
gives output: 8 8
but if I write the code below:
#include <stdio.h>
#define square(x) x*x
void main()
{
    float i;
    i = 8 / square(4);
    printf("%f %f", i, 8/square(4));
}
it gives output: 8.000000 0.000000
Why is that? Please explain.
The problems are not just with the format specifier but also with the way you have defined your macro. It should be:
#define square(x) ((x)*(x))
Also, macros are not type safe. Now if you cast your results you will see what is happening: with the corrected macro, the square of 4 is 16, and 8/16 is 0.5, which gets truncated as an int and hence becomes 0. For proper values, this is how you should typecast:
printf("%d %d", (int)i, (int)(8/square(4)));
printf("\n%f %f", (float)i, (float)8/((float)square(4)));
Sample Output:
0 0
0.000000 0.500000
First of all, correct this:
#define square(x) x*x
to
#define square(x) ((x)*(x))
for correct results after macro replacement.
Now, in your first program, as others explained, you are using the wrong format specifier %f to print an integer (8/square(4) will evaluate to an integer), which is undefined behavior.
In the second program, the int result of 8/square(4) is converted to float because you store it in float i. Therefore you get 8.000000 on the first printing. On the second printing, the result is wrong for the same reason as above.
The first is easy to understand, so I will focus on the second only. You use %f for the second parameter, which requires a floating-point number, while the C compiler takes 8/square(4) as an integer. This mismatch corrupts your result.
8/square(4) results in an int, and trying to print an integer using %f is undefined behavior, so there is no use debugging the value you got in the second case.
If you are using the gcc compiler, the command cc -E filename.c (which prints the preprocessed source) may clarify your doubts.
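For illustration, here is roughly what the relevant lines look like after macro replacement (a sketch; the exact -E output varies by compiler):
i = 8 / 4*4;               /* parsed as (8/4)*4 == 8, not 8/(4*4) */
printf("%d %d", i, 8/4*4); /* the same expansion inside the printf */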
It is because you gave float as the datatype in the second program. 8/square(4) gives an integer result, and hence your output is wrong: you used %f to print an integer.
That is simple: %f means the number's type is double, printed with the default precision.
void main()
{
    float x = 8.2;
    int r = 6;
    printf("%f", r/4);
}
It clearly looks odd that I am not explicitly typecasting r (of int type) to float in the printf call. However, if I change the sequence of the declarations and declare r first and then x, I get different results (in this case a garbage value), even though x is not used anywhere in the program. These are the things I meant to be wrong; I want to keep them the way they are. When I execute the first piece of code I get 157286.375011 as the result (a garbage value).
void main()
{
    int r = 6;
    float x = 8.2;
    printf("%f", r/4);
}
and if I execute the code above I get 0.000000 as the result. I know the results can go wrong because I am using %f in the printf when it should have been %d; the results may be wrong. But my question is: why do the results change when I change the sequence of the variable definitions? Shouldn't it be the same, whether right or wrong?
Why is this happening?
printf does not have any type checking. It relies on you to do that checking yourself, verifying that all of the types match the formatting specifiers.
If you don't do that, you enter into the realm of undefined behavior, where anything can happen. The printf function is trying to interpret the specified value in terms of the format specifier you used. And if they don't match, boom.
It's nonsense to specify %f for an int, but you already knew that...
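Although printf itself cannot check types at run time, gcc and clang can check the call at compile time. A minimal sketch of the same mechanism applied to a custom wrapper (using a gcc/clang extension; log_msg is a hypothetical helper):
#include <stdarg.h>
#include <stdio.h>

/* Asks the compiler to check the arguments against the format string,
   just as it does for printf itself. */
__attribute__((format(printf, 1, 2)))
static void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    vprintf(fmt, ap);
    va_end(ap);
}

int main(void)
{
    log_msg("%d\n", 42);     /* OK */
    /* log_msg("%f\n", 42);     would trigger a -Wformat warning */
    return 0;
}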
The %f conversion specifier takes a double argument, but you are passing an int argument. Passing an int argument to the %f conversion specifier is undefined behavior.
In this expression:
r / 4
both operands are of type int and the result is also of type int.
Here is what you want:
printf ("%f", r / 4.0);
When printf grabs the optional arguments (i.e. the arguments after the char * that tells it what to print), it has to get them off the stack. A double is usually 64 bits (8 bytes) whereas an int is 32 bits (4 bytes).
Moreover, floating point numbers have an odd internal structure as compared to integers.
Since you're passing an int in place of a double, printf is trying to get 8 bytes off the stack instead of four, and it's trying to interpret the bytes of an int as the bytes of a double.
So not only are you getting 4 bytes of memory containing who knows what, but you're also interpreting that memory (4 bytes of int and 4 bytes of random stuff from who knows where) as if it were a double.
So yeah, weird things are going to happen. When you re-compile (or even re-run) a program that wantonly picks things out of memory it hasn't allocated or stored to, you're going to get unpredictable and wildly-changing values.
Don't do it.
#include <stdio.h>
main()
{
    float x = 2;
    float y = 4;
    printf("\n%d\n%f", x/y, x/y);
    printf("\n%f\n%d", x/y, x/y);
}
Output:
0
0.000000
0.500000
0
compiled with gcc 4.4.3
The program exited with error code 12
As noted in other answers, this is because of the mismatch between the format string and the type of the argument.
I'll guess that you're using x86 here (based on the observed results).
The arguments are passed on the stack, and x/y, although of type float, will be passed as a double to a varargs function (due to type "promotion" rules).
An int is a 32-bit value, and a double is a 64-bit value.
In both cases you are passing x/y (= 0.5) twice. The representation of this value, as a 64-bit double, is 0x3fe0000000000000. As a pair of 32-bit words, it's stored as 0x00000000 (least significant 32 bits) followed by 0x3fe00000 (most significant 32-bits). So the arguments on the stack, as seen by printf(), look like this:
0x3fe00000
0x00000000
0x3fe00000
0x00000000 <-- stack pointer
In the first of your two cases, the %d causes the first 32-bit value, 0x00000000, to be popped and printed. The %f pops the next two 32-bit values, 0x3fe00000 (least significant 32 bits of the 64-bit double), followed by 0x00000000 (most significant). The resulting 64-bit value of 0x000000003fe00000, interpreted as a double, is a very small number. (If you change the %f in the format string to %g you'll see that it's almost 0, but not quite).
In the second case, the %f correctly pops the first double, and the %d pops the 0x00000000 half of the second double, so it appears to work.
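To verify the misread value without relying on printf's undefined behavior, here is a minimal sketch (assuming the 64-bit IEEE-754 pattern the answer derives) that reinterprets the word pair 0x000000003fe00000:
#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    /* The two 32-bit words %f consumed in the first case, glued back
       together: 0x3fe00000 as the low word, 0x00000000 as the high. */
    uint64_t bits = 0x000000003fe00000ULL;
    double d;
    memcpy(&d, &bits, sizeof d);
    printf("%g\n", d); /* about 5.3e-315: almost 0, but not quite */
    printf("%f\n", d); /* shows as 0.000000 with six decimals */
    return 0;
}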
When you write %d in the printf format string, you must pass an int value as the corresponding argument. Otherwise the behavior is undefined, meaning that your computer may crash or aliens might knock at your door. The same goes for %f and double.
Yes. Arguments are read from the vararg list to printf in the same order that format specifiers are read.
Both printf statements are invalid because you're using a format specifier that expects an int, but you're giving it a double (the float x/y is promoted to double).
What you are doing is undefined behaviour. What you are seeing is coincidental; printf could write anything.
You must match the exact type when giving printf arguments. You can e.g. cast:
printf("\n%d\n%f", (int)(x/y), x/y);
printf("\n%f\n%d", x/y, (int)(x/y));
This result is not surprising; in the first %d you passed a double where an integer was expected.
http://en.wikipedia.org/wiki/Format_string_attack
Something related to my question; it supports Matthew's answer.