Different output every time in C? - c

I am trying to learn C and am very confused already.
#include <stdio.h>

int main(void)
{
    int a = 50000;
    float b = 'a';
    printf("b = %f\n", 'a');
    printf("a = %f\n", a);
    return 0;
}
The above code produces a different output each time with gcc. Why?

You pass an int value ('a') for a %f format expecting a float or a double. This is undefined behavior, which can result in different output for every execution of the same program. The second printf has the same problem: %f expects a float or double but you pass an int value.
Here is a corrected version:
#include <stdio.h>

int main(void) {
    int a = 50000;
    float b = 'a';
    printf("a = %d\n", a);
    printf("b = %f\n", b);
    printf("'a' = %d\n", 'a');
    return 0;
}
Output:
a = 50000
b = 97.000000
'a' = 97
Compiling with more warnings enabled, via the command-line arguments -Wall -W -Wextra, lets the compiler perform more consistency checks and complain about potential programming errors. It would have detected the errors in the posted code.
Indeed clang still complains about the above correction:
clang -O2 -std=c11 -Weverything fmt.c -o fmt
fmt.c:8:24: warning: implicit conversion increases floating-point precision: 'float' to 'double' [-Wdouble-promotion]
printf("b = %f\n", b);
~~~~~~ ^
1 warning generated.
b is promoted to double when passed to printf(). The double type has more precision than the float type, which might output misleading values if more decimals are requested than the original type afforded.
It is advisable to always use double for floating point calculations and reserve the float type for very specific cases where it is better suited, such as computer graphics APIs, some binary interchange formats...
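To illustrate the precision point above, here is a small sketch (the printed digits assume IEEE 754 single and double precision, which is typical but not guaranteed):

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;  /* nearest float  to 0.1 */
    double d = 0.1;   /* nearest double to 0.1 */

    /* Requesting more decimals than float can represent exposes the
       rounding error; double carries roughly 15-17 significant digits,
       float only about 6-9. */
    printf("float : %.17f\n", f);   /* e.g. 0.10000000149011612 */
    printf("double: %.17f\n", d);   /* e.g. 0.10000000000000001 */
    return 0;
}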

From an implementation standpoint, floating-point numbers (which is what %f expects) and integers (which is what 'a' is, specifically of type int) passed through a variable argument list (that is what ... means in the printf prototype) may use different registers and memory layout.
This is defined by the ABI calling convention. Specifically, on x86-64, printf will read %xmm0 for the %f conversion (finding an uninitialized value there), while gcc placed the int value of 'a' in the integer register %rsi.
See more in the System V Application Binary Interface AMD64 Architecture Processor Supplement, p. 56.
Note that C is a very low-level language that puts many hazardous cases (including integer overflow and underflow, uninitialized variables, and buffer overruns) on the shoulders of the programmer. That allows maximum performance (by avoiding lots of runtime checks), but leaves the door open to errors such as this one.
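A minimal sketch of the fix from this angle: casting the argument to double makes gcc populate %xmm0, the register printf actually reads for %f (assuming the x86-64 System V convention described above).

#include <stdio.h>

int main(void)
{
    /* With the cast, gcc passes the value in %xmm0 on x86-64,
       which is exactly where printf looks for a %f argument. */
    printf("b = %f\n", (double)'a');   /* prints b = 97.000000 */
    return 0;
}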

The %f specifier needs to be matched by a floating-point argument (a float, which is promoted to double, or a double), but you're giving it ints instead. Not matching the type is undefined behaviour, meaning it could do anything, including printing different results every time.
Hope this helped, good luck :)
Edit: Thanks chqrlie for the clarifications!

Unpredictable behavior.
When you try to print a value using a mismatched format specifier, you get unpredictable output. The behavior of using %f as the format specifier for a char or an int is undefined.
Use the correct format specifier for each data type in your program:
printf("%d\n", 'a'); // print ASCII value
printf("%d\n", a);

First of all, gcc should give a warning for the above code, because of the mismatch between the format specifiers and the arguments.
Mismatched formatting (in printf() as well as scanf()) gives unpredictable behavior in C, because variadic arguments only undergo the default argument promotions; the compiler does not convert them (e.g. int to float) to match the format string.
Here are two nice references.
https://stackoverflow.com/a/1057173/4954434
https://stackoverflow.com/a/12830110/4954434
The following will work as expected.
#include <stdio.h>

int main(void)
{
    int a = 50000;
    float b = 'a';

    printf("b = %f\n", (float)'a');
    printf("a = %f\n", (float)a);
    return 0;
}

The main difficulty you are having is with the data types you are using.
When you declare a variable, you tell the compiler how much memory to reserve for it: char has a size of 1 byte, int is typically 4 bytes, and float is typically 4 bytes (32 bits). It's important to match the format specifier to the data type so you don't get unpredictable results.
char a = 'a';
printf("%c\n", a);

int b = 50000;
printf("%d\n", b);

float c = 5.7f;
printf("%f\n", c);
For more information: C-Variables

Related

The problem with the printf function "outputting a float with %d" in C

I am a newbie to the C language. When I was learning floating point numbers today, I found the following problems.
float TEST = 3.0f;
printf("%x\n", TEST);
printf("%d\n", TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. Can anyone explain, or provide keywords for me to study? Thank you.
-----------------------------------Edit-----------------------------------
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x and %d in printf with a float is undefined behavior. If there is no floating-point register on our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
float test = 3.14f;
unsigned char *p = (unsigned char *)&test;
size_t i;

printf("bytes in %f:", test);
for (i = 0; i < sizeof(test); i++)
    printf(" %02x", p[i]);
printf("\n");
There are some issues here with byte ordering ("endianness"), but this should get you started.
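If you need to know the byte order before interpreting that dump, here is a small, hedged check (it inspects which byte of an int holds the least significant bits):

#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char *p = (unsigned char *)&x;

    /* On a little-endian machine the least significant byte is stored
       first, so p[0] is 1; on a big-endian machine p[0] is 0. */
    printf("%s-endian\n", p[0] ? "little" : "big");
    return 0;
}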
To print the hex representation of the float (i.e. how it is represented in memory):
float TEST = 3.0f;
int y = 0;

memcpy(&y, &TEST, sizeof(y));  /* memcpy needs <string.h> */
printf("%x\n", y);
printf("%d\n", y);
or
union {
    float TEST;
    int y;
} uf = { .y = 0 };

uf.TEST = 3.0f;
printf("\n%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if the sizes are not equal, the integer needs to be zeroed first, as done above).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3
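If you want the compiler to enforce that size assumption instead of trusting it, here is a sketch using a C11 static assertion (static_assert is the <assert.h> macro for _Static_assert):

#include <assert.h>  /* static_assert macro (C11) */

/* Fail the build instead of reading garbage if the assumption breaks. */
static_assert(sizeof(float) <= sizeof(int),
              "float does not fit in an int on this platform");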

Meaning of "%lf" place holder

Here is my small program, where I intentionally put the placeholder %lf in the second printf. Why does the second printf produce the same result as the first one (both print 1.3)?
#include <stdio.h>

int main()
{
    double f = 1.3;
    long l = 1024L;

    printf("f = %lf", f);
    printf("l = %lf", l);
    return 0;
}
It's Undefined behaviour if printf() has format specifier mismatch. %lf expects a double but you are passing a long int.
C11, 7.21.6.1 The fprintf function
9 If a conversion specification is invalid, the behavior is undefined.282) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
That said, what probably happens is that when you call printf() the first time, the value of f is passed in a floating point register or at a location in stack for double. The next time you call printf(), it reads from the same location due to the format specifier %lf. As opposed to reading from where the value of l is stored. If you swap the order of printf() calls, you would probably observe a different output. But this is all platform specific. Once your program invokes undefined behaviour, anything can happen. Basically, you can't expect it to do anything sensible and there is absolutely no guarantee about its behaviour.
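To make that experiment concrete, here is a sketch with the calls swapped (what it prints, if anything, is platform-specific; the first call is still undefined behavior):

#include <stdio.h>

int main(void)
{
    double f = 1.3;
    long l = 1024L;

    /* Undefined behavior: nothing has been placed yet in the register
       printf reads for %lf, so the output (if any) is garbage. */
    printf("l = %lf\n", l);
    printf("f = %lf\n", f);
    return 0;
}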
Here if you change your code to this:
#include <stdio.h>

int main()
{
    double f = 1.3;
    long l = 1024L;

    printf("f = %lf", f);
    printf("l = %lf", (float)l);
    return 0;
}
you will see that the output is different. When you pass a long to be printed as a double, you should expect undefined behavior.
You have a specifier mismatch. The value
long l = 1024L;
is interpreted as a double, and this happens to come out as approximately 1.3 (at least on your PC and mine; this might differ on other architectures, depending on how large a long and a double are and how they are represented internally).
As for the meaning of the %lf placeholder, you can see in the printf documentation that %f means: decimal floating point. The l length modifier has no influence on the %f specifier.
Conclusion: %lf = %f = decimal floating point
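A quick sketch confirming that both spellings format a double identically:

#include <stdio.h>

int main(void)
{
    double d = 1.3;

    printf("%f\n", d);   /* 1.300000 */
    printf("%lf\n", d);  /* 1.300000: the 'l' is a no-op with %f in printf */
    return 0;
}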

strange output while printing float as integer and integer as float in C [duplicate]

This question already has an answer here:
Why are the int and float passed in printf going to the wrong positions in the format string?
The following code does not show the expected output, which would be garbage values; strangely, the values appear swapped:
#include <stdio.h>

int main()
{
    float f = 4.6;
    int d = 7;

    printf("%d %f\n", f, d);
    return 0;
}
output:
7 4.600000
Let's reduce this a bit:
float f = 4.6;
printf("%d\n", f);
That's undefined behavior. The correct format specifier must be given an argument of the correct type.
Undefined behavior can cause any outcome, including this odd outcome that you are seeing.
Further thoughts:
Now, you might be asking why a compiler would even produce this code. So let's look at the x86-64 assembly for 2 codes:
#include <stdio.h>

int main() {
    float f = 4.6;
    int d = 7;
    printf("%d %f\n", d, f);
    return 0;
}

#include <stdio.h>

int main() {
    float f = 4.6;
    int d = 7;
    printf("%f %d\n", f, d);
    return 0;
}
Other than the format string, these two programs produce identical assembly. This is likely because the calling convention places floating-point values in different registers than integers, or passes them on the stack (or any number of other rules that handle floats and integers differently).
This should make it clearer why the code you posted is still producing something useful, even though the code is just broken.
The argument corresponding to %d must be an int, and the argument corresponding to %f must be a double. Arguments to variadic functions undergo some standard conversions (so float will be converted to double automatically), but they're not automatically converted to the appropriate types for their corresponding printf format specifiers.
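To see what the default argument promotions mean in practice, here is a sketch with a hypothetical variadic helper (first_double is not a library function): inside a variadic function, a float argument must be read back as double with va_arg.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: returns the first variadic argument, which must
   be read as double because float is promoted when passed through `...`. */
static double first_double(int count, ...)
{
    va_list ap;
    double d;

    va_start(ap, count);
    d = va_arg(ap, double);  /* va_arg(ap, float) would be undefined */
    va_end(ap);
    return d;
}

int main(void)
{
    float f = 4.6f;

    printf("%f\n", first_double(1, f));  /* f arrives as a double */
    return 0;
}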
Not really hard to understand: the float value is passed in floating-point registers, while the int value is passed in integer registers or on the conventional parameter stack. So when the values are referenced, they are fetched from different places, and it magically works, even though it shouldn't (and won't, on a different box).
For example, gcc 4.7.2 for amd64 does this, because integer and floating-point arguments are passed in different registers. This effectively reorders the arguments.
From "System V Application Binary Interface. AMD64 Architecture Processor Supplement. Draft Version 0.99.6" (floating-point numbers have class SSE):
If the class is INTEGER, the next available register of the sequence %rdi, %rsi, %rdx, %rcx, %r8 and %r9 is used.
If the class is SSE, the next available vector register is used; the registers are taken in the order from %xmm0 to %xmm7.
Of course you should not do this; enable warnings to catch it during compilation.

Unexpected output printing a float cast as an int [duplicate]

This question already has answers here:
float to int unexpected behaviour
#include <stdio.h>

int main()
{
    float a = 12.5;

    printf("%d\n", a);
    printf("%d\n", *(int *)&a);
    return 0;
}
Additionally, how do you interpret the expression *(int *)&a?
It takes the address of a float, casts it to an integer pointer and then dereferences that as an integer. Totally wrong.
There are at least two things wrong here:
Nobody says the pointers for an int and a float need to be the same size
The representation for a float looks nothing like the representation for a signed int
So the output to the second printf (if it doesn't happen to crash since it's undefined behavior, as per the first point) would likely be some strange, huge number.
The author of this code is trying to take the bits of a float and reinterpret them as an int, but *(int *)&a invokes undefined behavior (it violates the aliasing rules), and modern compilers likely will not do what the author intended. Passing an argument of the wrong type to printf is even worse undefined behavior; it definitely will not work on modern architectures like x86-64. Instead you could use:
#include <stdio.h>
#include <string.h>

int main()
{
    float a = 12.5;
    int b;

    memcpy(&b, &a, sizeof b);
    printf("%d\n", b);
    return 0;
}
to get the desired effect.
My compiler flags this and issues a stern warning.
$ make foolish
cc foolish.c -o foolish
foolish.c: In function ‘main’:
foolish.c:5: warning: format ‘%d’ expects type ‘int’, but argument 2 has type ‘double’
$ ./foolish
1606416928
1095237632
$
What is the reason for wanting to do this?
If you make that line printf("%08X\n", *(unsigned int *)&a); then it prints the binary representation of the floating point number. See here for details.
The sizes sizeof(float) and sizeof(double) tend to be some multiple of at least sizeof(short), and often of sizeof(int) (to meet different needs, even the IEEE floating-point standards allow multiple formats).
Assuming the previously mentioned undefined pointer behaviors do not kill the program, in most environments this will just take the bits of one of those ints and print it out in isolation. The internals of floats assure that this will be vastly different from the original floating point value.
printf( "%d sizeof(short)\n", int(sizeof(short)) );
printf( "%d sizeof(int)\n", int(sizeof(int)) );
printf( "%d sizeof(float)\n", int(sizeof(float)) );
printf( "%d sizeof(double)\n", int(sizeof(double)) );
printf( "%d sizeof(long double)\n", int(sizeof(long double)) );

How is conversion of float/double to int handled in printf?

Consider this program
#include <stdio.h>

int main()
{
    float f = 11.22;
    double d = 44.55;
    int i, j;

    i = f; // convert float to int
    j = d; // convert double to int
    printf("i = %d, j = %d, f = %d, d = %d", i, j, f, d);
    // This prints the following:
    // i = 11, j = 44, f = -536870912, d = 1076261027
    return 0;
}
Can someone explain why the casting from double/float to int works correctly in the first case, and does not work when done in printf?
This program was compiled on gcc-4.1.2 on 32-bit linux machine.
EDIT:
Zach's answer seems logical, i.e. use of format specifiers to figure out what to pop off the stack. However then consider this follow up question:
int main()
{
char c = 'd'; // sizeof c is 1, however sizeof character literal
// 'd' is equal to sizeof(int) in ANSI C
printf("lit = %c, lit = %d , c = %c, c = %d", 'd', 'd', c, c);
//this prints: lit = d, lit = 100 , c = d, c = 100
//how does printf here pop off the right number of bytes even when
//the size represented by format specifiers doesn't actually match
//the size of the passed arguments(char(1 byte) & char_literal(4 bytes))
return 0;
}
How does this work?
The printf function uses the format specifiers to figure out what to pop off the stack. So when it sees %d, it pops off 4 bytes and interprets them as an int, which is wrong (the binary representation of (float)3.0 is not the same as (int)3).
You'll need to either use the %f format specifiers or cast the arguments to int. If you're using a new enough version of gcc, then turning on stronger warnings catches this sort of error:
$ gcc -Wall -Werror test.c
cc1: warnings being treated as errors
test.c: In function ‘main’:
test.c:10: error: implicit declaration of function ‘printf’
test.c:10: error: incompatible implicit declaration of built-in function ‘printf’
test.c:10: error: format ‘%d’ expects type ‘int’, but argument 4 has type ‘double’
test.c:10: error: format ‘%d’ expects type ‘int’, but argument 5 has type ‘double’
Response to the edited part of the question:
C's integer promotion rules say that all types smaller than int get promoted to int when passed as a vararg. So in your case, the 'd' is getting promoted to an int, then printf is popping off an int and casting to a char. The best reference I could find for this behavior was this blog entry.
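A minimal sketch of that promotion in action:

#include <stdio.h>

int main(void)
{
    char c = 'd';

    /* c occupies 1 byte, but as a variadic argument it is promoted
       to int, so %d correctly reads a full int and prints 100. */
    printf("%d\n", c);
    return 0;
}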
There's no such thing as "casting to int in printf". printf does not do and cannot do any casting. Inconsistent format specifier leads to undefined behavior.
In practice printf simply receives the raw data and reinterprets it as the type implied by the format specifier. If you pass it a double value and specify an int format specifier (like %d), printf will take that double value and blindly reinterpret it as an int. The results will be completely unpredictable (which is why doing this formally causes undefined behavior in C).
Jack's answer explains how to fix your problem. I'm going to explain why you're getting your unexpected results. Your code is equivalent to:
float f = 11.22;
double d = 44.55;
int i, j, k, l;

i = (int)f;
j = (int)d;
k = *(int *)&f; // reinterpret the bits of the float as an int
l = *(int *)&d; // reinterpret the first bytes of the double as an int
printf("i = %d, j = %d, f = %d, d = %d", i, j, k, l);
The reason is that f and d are passed to printf as values, and then these values are interpreted as ints. This doesn't change the binary value, so the number displayed is the binary representation of a float or a double. The actual cast from float to int is much more complex in the generated assembly.
Because you are not using the float format specifier, try:
printf("i = %d, j = %d, f = %f, d = %f", i, j, f, d);
Otherwise, if you want four ints, you have to cast the arguments before passing them to printf:
printf("i = %d, j = %d, f = %d, d = %d", i, j, (int)f, (int)d);
The reason your follow-up code works is that the character constant is promoted to an int before it is pushed onto the stack. So printf pops off 4 bytes both for %c and for %d. In fact, character constants are of type int, not type char. C is strange that way.
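You can verify both claims with sizeof; a small sketch (the value 4 assumes a typical 32-bit int):

#include <stdio.h>

int main(void)
{
    /* In C (unlike C++), a character constant has type int. */
    printf("sizeof 'd'   = %zu\n", sizeof 'd');    /* typically 4 */
    printf("sizeof(char) = %zu\n", sizeof(char));  /* always 1 */
    return 0;
}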
printf uses variable length argument lists, which means you need to provide the type information. You're providing the wrong information, so it gets confused. Jack provides the practical solution.
It's worth noting that printf, being a function with a variable-length argument list, never receives a float; float arguments are "old school" promoted to doubles.
A recent standard draft introduces the "old school" default promotions first (n1570, 6.5.2.2/6):
If the expression that denotes the called function has a type that does not include a prototype, the integer promotions are performed on each argument, and arguments that have type float are promoted to double. These are called the default argument promotions.
Then it discusses variable argument lists (6.5.2.2/7):
The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments.
The consequence for printf is that it is impossible to "print" a genuine float. A float expression is always promoted to double, which is an 8 byte value for IEEE 754 implementations. This promotion occurs on the calling side; printf will already have an 8 byte argument on the stack when its execution starts.
If we store the float value 11.22f in a double (which is exactly what the default argument promotion does) and inspect its contents, with my x86_64-pc-cygwin gcc I see the byte sequence 000000e0a3702640.
That explains the int value printed by printf: an int on this target has 4 bytes, so only the first four bytes 000000e0 are read, again in little endian, i.e. as 0xe0000000, which is -536870912 in decimal.
If we reverse all 8 bytes (the Intel processor stores doubles in little endian, too), we get 402670a3e0000000. We can check the value this byte sequence represents in IEEE format on this web site; it is close to 1.122E1, i.e. 11.22, the expected result.
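A sketch that reproduces this inspection portably with memcpy (the byte sequence in the comment is what a little-endian x86-64 machine produces; other platforms may differ):

#include <stdio.h>
#include <string.h>

int main(void)
{
    float f = 11.22f;
    double d = f;  /* the same promotion the caller of printf performs */
    unsigned char bytes[sizeof d];
    size_t i;

    memcpy(bytes, &d, sizeof d);
    /* On little-endian x86-64 this prints 000000e0a3702640. */
    for (i = 0; i < sizeof d; i++)
        printf("%02x", bytes[i]);
    printf("\n");
    return 0;
}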
