Puzzled about printf output - c

While using printf with %d as the format specifier and passing a floating-point argument, e.g. 2.345, it prints 1546188227. I understand that this may be due to the bits of the floating-point representation being reinterpreted as a plain decimal integer. But when we print 2.000 with %d as the format specifier, why does it print 0?
Please help.

Format specifier %d can only be used with values of type int (and compatible types). Trying to use %d with float or any other types produces undefined behavior. That's the only explanation that truly applies here. From the language point of view the output you see is essentially random. And you are not guaranteed to get any output at all.
If you are still interested in investigating the specific reason for the output you see (however little sense it makes), you'll have to perform a platform-specific investigation, because the actual behavior depends critically on various implementation details. And you are not even mentioning your platform in your post.
As a side note, it is impossible to pass float values as variadic arguments to variadic functions: float values in such cases are always converted to double and passed as double. So in your case it is double values you are attempting to print. The behavior is still undefined, though.
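Most modern compilers will flag the mismatch if you enable warnings. A minimal sketch (the warning text in the comment is typical gcc output; the exact wording varies by compiler):

#include <stdio.h>

int main(void)
{
    /* The float literal 2.345f is promoted to double when passed to the
       variadic printf, but %d still expects an int. */
    printf("%d\n", 2.345f);
    return 0;
}

/* Compiling with, e.g., gcc -Wall -Wformat test.c typically reports:
 *   warning: format '%d' expects argument of type 'int',
 *   but argument 2 has type 'double' [-Wformat=]
 */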

Go to an online IEEE 754 converter, enter 2.345, and click "rounded". Observe the 64-bit hex value: 4002C28F5C28F5C3. Observe that 1546188227 is 0x5C28F5C3, i.e. the low 32 bits of that value.
Now repeat for 2.000. Observe that the 64-bit hex value is 4000000000000000: the low 32 bits are all zero, which is why 0 is printed.
P.S. When you say that you give a float argument, what you apparently mean is that you give a double argument.
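You can reproduce the converter's result in code. A minimal sketch, assuming IEEE 754 doubles and a little-endian layout (as on typical x86 machines):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    double d = 2.345;
    uint64_t bits;
    uint32_t low;

    memcpy(&bits, &d, sizeof bits); /* full 64-bit representation */
    memcpy(&low, &d, sizeof low);   /* first 4 bytes = low half on little-endian */

    printf("%016llX\n", (unsigned long long)bits); /* 4002C28F5C28F5C3 */
    printf("%u\n", (unsigned)low);                 /* 1546188227 */

    return 0;
}

With d = 2.000 the representation is 4000000000000000, so the low half prints as 0.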

Here is what the ISO/IEC 9899:1999 standard, §7.19.6, states:
If a conversion specification is invalid, the behavior is undefined.239) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.

If you're trying to make it print integer values, cast the floats to ints in the call:
printf("ints: %d %d", (int) 2.345, (int) 2.000);
Otherwise, use the floating point format identifier:
printf("floats: %f %f", 2.345, 2.000);

When you use printf with the wrong format specifier for the corresponding argument, the result is undefined behavior. It could be anything and may differ from one implementation to another. Only the correct use of format specifiers has defined behavior.

First a small nitpick: The literal 2.345 is actually of the type double, not float, and besides, even a float, such as the literal 2.345f, would be converted to double when used as an argument to a function that takes a variable number of arguments, such as printf.
But what happens here is that the (typically) 64 bits of the double value are sent to printf, which then interprets (typically) 32 of those bits as an integer value. It just happens that those 32 bits of 2.000 were all zero.
According to the standard, this is what is called undefined behavior: The compiler is allowed to do anything at all.

Related

why printf behaves differently when we try to print float as a hexadecimal? [duplicate]

Adding an Integer to a float not working as expected

I know the following will not print 2.9 or 3. I can correct it, but I really want to understand what is happening internally that makes it print:
858993459
Where does this number come from?
I am running this under 32-bit Windows:
#include <stdio.h>

int main()
{
    double f = 1.9;
    int t = 1;
    printf("%d\n", t + f);
    return 0;
}
Update
Simply accepting that this is "undefined behavior" was not enough for me, so I investigated further. This answer explained exactly what I wanted to understand.
As others have already mentioned, this is undefined behaviour. But by taking some assumptions about the machine architecture, one can answer why it is printing these numbers:
Using the IEEE 754 64-bit format to represent double values, the value of 2.9 is stored as 0x4007333333333333.
On a little-endian machine, the %d specifier will read the lower 4 bytes of that value, which are 0x33333333, which equals 858993459.
You're using the wrong format specifier; use %f instead.
As per the implicit type promotion rules, in t+f, t will be promoted to double. You're then trying to print a double value using %d, which expects an int.
Note: using the wrong format specifier makes the behaviour undefined.
Related reading: the C99 standard, §7.19.6.1 paragraph 9:
If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
What happened is that you tried to print a double using %d, so printf interpreted the bits as an integer. To understand why this particular value was printed, you have to understand how a double is stored: C uses the IEEE 754 standard.
When you add an int and a double, C keeps the result as a double so that no part of the value is lost. %d then interprets part of that double's representation as an integer, and since the formats are different you see garbage.
You should use %f instead.
You are trying to add an integer and a double, so by the type promotion rules the integer is promoted to double before the addition happens. You are then trying to print the double result using the %d format specifier, which leads to undefined behavior.
Use %f instead.
PS: Using the wrong format specifier to print a value leads to undefined behavior, and that is what you are seeing here.
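For reference, a minimal corrected version of the program:

#include <stdio.h>

int main(void)
{
    double f = 1.9;
    int t = 1;

    printf("%f\n", t + f);        /* prints 2.900000 */
    printf("%d\n", (int)(t + f)); /* truncates to 2 */

    return 0;
}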
%d does not convert the double to an int; printf simply reinterprets part of the double's representation as an integer:
double f = 1.9;
printf("%d\n", f);
Outputs:
1717986918
Computers store ints and doubles in different ways; you can see how here:
http://en.wikipedia.org/wiki/Double-precision_floating-point_format

why printf behaves differently when we try to print character as a float and as a hexadecimal?

I tried to print a character as a float in printf and got the output 0. What is the reason for this?
Also:
char c='z';
printf("%f %X",c,c);
is giving some weird output for the hexadecimal, while the output is correct when I do this:
printf("%X",c);
Why is it so?
The printf() function is a variadic function, which means that you can pass a variable number of arguments of unspecified types to it. This also means that the compiler doesn't know what type of arguments the function expects, and so it cannot convert the arguments to the correct types. (Modern compilers can warn you if you get the arguments wrong to printf, if you invoke it with enough warning flags.)
For historical reasons, you cannot pass an integer argument of smaller rank than int, or a floating type of smaller rank than double, to a variadic function. A float will be converted to double and a char will be converted to int (or unsigned int on bizarre implementations) through a process called the default argument promotions.
When printf parses its parameters (arguments are passed to a function, parameters are what the function receives), it retrieves them using whatever method is appropriate for the type specified by the format string. The "%f" specifier expects a double. The "%X" specifier expects an unsigned int.
If you pass an int and printf tries to retrieve a double, you invoke undefined behaviour.
If you pass an int and printf tries to retrieve an unsigned int, you invoke undefined behaviour.
Undefined behaviour may include (but is not limited to) printing strange values, crashing your program or (the most insidious of them all) doing exactly what you expect.
Source: n1570 (The final public draft of the current C standard)
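To see the default argument promotions in action, here is a minimal sketch of a variadic function (the function name and setup are hypothetical; what matters is that va_arg retrieves each value with its promoted type):

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper: receives one char and one float variadically. */
static void show(const char *tag, ...)
{
    va_list ap;
    va_start(ap, tag);

    int c = va_arg(ap, int);       /* the char arrives promoted to int */
    double d = va_arg(ap, double); /* the float arrives promoted to double */

    printf("%s: %c %f\n", tag, c, d);
    va_end(ap);
}

int main(void)
{
    char c = 'z';
    float f = 1.5f;
    show("promoted", c, f); /* prints: promoted: z 1.500000 */
    return 0;
}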
You need to use a cast operator like this:
char c = 'z';
printf("%f %X", (float)c, c);
or
printf("%f %X", (double)c, c);
In Xcode, if I do not do this, I get the warning:
Format specifies type 'double' but the argument has type 'char'
and the output is 0.000000.
I tried to print a character as a float in printf and got the output 0. What is the reason for this?
The question is, what value did you expect to see? Why would you expect something other than 0?
The short answer to your question is that the behavior of printf is undefined if the type of the argument doesn't match the conversion specifier. The %f conversion specifier expects its corresponding argument to have type double; if it isn't, all bets are off, and the exact output will vary.
To understand the floating point issue, consider reading: http://en.wikipedia.org/wiki/IEEE_floating_point
As for the hexadecimal, let me guess: the output was something like 99?
This is because of encodings. The machine has to represent information in some format, and usually that format entails either giving meanings to certain bits in a number, or having a table mapping symbols to numbers, or both.
Floating-point values are typically represented as a (sign, mantissa, exponent) triplet packed into a 32- or 64-bit value; characters are typically represented in an encoding such as ASCII, which establishes which number corresponds to each character you type.
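When the specifiers do match, the promotions work in your favor: the char is promoted to int, which is exactly what %d (and, after a cast, %X) can read. A minimal sketch:

#include <stdio.h>

int main(void)
{
    char c = 'z';

    /* 'z' is 122 in ASCII; the char is promoted to int when passed. */
    printf("%d\n", c);           /* prints: 122 */
    printf("%X\n", (unsigned)c); /* prints: 7A */

    return 0;
}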
Because printf, like any function that works with varargs, e.g. int foobar(const char *fmt, ...), tries to interpret its parameters as certain types.
If you say "%f" and then pass c (a char), printf will try to read a double.
You can read more here: va_arg (even though the reference is for C++, it still applies).

Why does %d show two different values for *b and *c in the code [b and c point to the same address]

Consider the code below:
{
    float a = 7.999, *b, *c;
    b = &a;
    c = b;
    printf("%d-b\n%d-c\n%d-a\n", *b, *c, a);
}
OUTPUT:
-536870912-b
1075838713-c
-536870912-a
I know we are not allowed to use %d instead of %f, but why do *b and *c give two different values?
Both have the same address; can someone explain?
I want to know the logic behind it.
Here is a simplified example of your ordeal:
#include <stdio.h>

int main() {
    float a = 7.999, b = 7.999;
    printf("%d-a\n%d-b\n", a, b);
}
What's happening is that a and b are converted to doubles (8 bytes each) for the call to printf (since it is variadic). Inside the printf function, the given data, 16 bytes, is printed as two 4-byte ints. So you're printing the first and second half of one of the given double's data as ints.
Try changing the printf call to this and you'll see both halves of both doubles as ints:
printf("%d-a1\n%d-a2\n%d-b1\n%d-b2\n",a,b);
I should add that as far as the standard is concerned, this is simply "undefined behavior", so the above interpretation (tested on gcc) would apply only to certain implementations.
There can be any number of reasons.
The most obvious -- your platform passes some integers in integer registers and some floating point numbers in floating point registers, causing printf to look in registers that have never been set to any particular value.
Another possibility -- the variables are different sizes. So printf is looking in data that wasn't set or was set as part of some other operation.
Because printf takes its parameters through ..., type agreement is essential to ensure the implementation of the function is even looking in the right places for its parameters.
You would have to have deep knowledge of your platform and/or dig into the generated assembly code to know for sure.
Using the wrong conversion specification invokes undefined behavior. You may get an expected or an unexpected value; your program may crash, may give different results on different compilers, or may do anything else.
C11: 7.21.6 Formatted input/output functions:
If a conversion specification is invalid, the behavior is undefined.282) If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.
// Bad
float a = 7.999, *b, *c;
b = &a;
c = b;
printf("%d-b\n%d-c\n%d-a\n", *b, *c, a);

// Good
float a = 7.999, *b, *c;
b = &a;
c = b;
printf("%f-b\n%f-c\n%f-a\n", *b, *c, a);
Using the integer conversion specifier "%d" where the floating-point specifier "%f" is required is what alk's answer elliptically describes as "undefined behavior".
You need to use the correct format specifier.

Is %d a cast in C?

int a;
printf("%d\n", a);
I wonder if %d is a cast?
In any case it won't be a cast but a reinterpretation (like taking a value's address, casting it to a pointer of a different type, and then reading the contents as the new type).
Example:
printf("%d\n", 1.5);
won't print the integer 1, but (typically) an integer built from the bits of the IEEE 754 representation of 1.5. If you want a cast, you must explicitly put (int) in front of the value.
No, it is part of the format specifier string for printf() function's first argument; the format string. It will print out a decimal representation of that int you passed as the second argument.
It is not. It is just a "hint" telling the printf() function to treat the argument a as an int.
No, it's not a cast, but I suggest you take a look at the source for printf() to understand this. There's nothing special about printf() -- it's just a varargs function like any other. It's one of the first functions you learn in C, usually well before you learn varargs, so it often sticks out in people's minds as special when it's really not. A quick study of the source will probably be enlightening.
When you pass a format string to printf(), you're telling the function what to expect in its argument list (generally on the stack), but that might not agree with what you actually put there. With %d, you're telling printf() to take the next integer-sized chunk of bytes off the argument list and format those bytes as if they represent a signed decimal number. So when printf() parses the format string and encounters a %d, it will probably do something like:
int num = va_arg(args, int);
And then format and output the bytes in "num" as if they were an integer, regardless of what kind of argument you actually passed. If you put a float in the arguments where printf() is told to expect an integer, the output will be a decimal representation of the IEEE floating point bytes -- probably not what you intended, and not what a true cast would have done.
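To make that concrete, here is a stripped-down toy formatter that handles only %d (a sketch, not the real library code):

#include <stdarg.h>
#include <stdio.h>

/* Toy printf: understands %d and ordinary characters, nothing else. */
static void tiny_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);

    for (; *fmt; fmt++) {
        if (fmt[0] == '%' && fmt[1] == 'd') {
            /* No cast, no conversion: just pull sizeof(int) worth of
               argument data and format those bytes as a signed decimal. */
            int num = va_arg(ap, int);
            printf("%d", num);
            fmt++; /* skip the 'd' */
        } else {
            putchar(*fmt);
        }
    }
    va_end(ap);
}

int main(void)
{
    tiny_printf("value: %d\n", 42); /* prints: value: 42 */
    return 0;
}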
No, it is a format specifier. It has semantic meaning only to the formatted I/O functions and is not part of the C language itself; you could equally write your own function that interprets it differently.
All it does is specify the 'human readable' representation in which to present an int value; there is no type conversion or translation.
