The problem with using printf to "output a float with %d" in C

I am a newbie to C. While learning about floating-point numbers today, I ran into the following problem.
float TEST= 3.0f;
printf("%x\n",TEST);
printf("%d\n",TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. Could anyone explain this, or provide keywords for me to study? Thank you.
-----------------------------------Edit-----------------------------------
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x or %d on a float in printf is undefined behavior. If there is no floating-point register on our device, how does it work? Is this described in the C99 specification?

Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
#include <stdio.h>

int main(void)
{
    float test = 3.14f;
    unsigned char *p = (unsigned char *)&test;
    size_t i;
    printf("bytes in %f:", test);
    for (i = 0; i < sizeof(test); i++)
        printf(" %02x", p[i]);
    printf("\n");
}
There are some issues here with byte ordering ("endianness"), but this should get you started.
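For instance, a minimal sketch for checking which byte order your machine uses (it assumes nothing beyond standard C):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int one = 1;
    unsigned char first;
    memcpy(&first, &one, 1);   /* look at the first byte of the integer */
    if (first == 1)
        printf("little-endian: least significant byte first\n");
    else
        printf("big-endian: most significant byte first\n");
}

On a little-endian machine (most desktops), the byte dump above prints the float's bytes in least-significant-first order.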

To print the hex representation of the float (i.e. how it is represented in memory):
#include <stdio.h>
#include <string.h>

int main(void)
{
    float TEST = 3.0f;
    int y = 0;
    memcpy(&y, &TEST, sizeof(y));
    printf("%x\n", y);
    printf("%d\n", y);
}
or
#include <stdio.h>

int main(void)
{
    union
    {
        float TEST;
        int y;
    } uf = { .y = 0 };

    uf.TEST = 3.0f;
    printf("\n%x\n", (unsigned)uf.y);
    printf("%d\n", uf.y);
}
Both examples assume sizeof(float) <= sizeof(int) (if they are not equal, the integer needs to be zeroed first, which is why it starts at 0).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3
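A slightly more defensive sketch of the same memcpy idea, assuming a C11 compiler (for _Static_assert) and the C99 fixed-width types, fails at compile time if float is not 32 bits:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    float TEST = 3.0f;
    uint32_t y;
    /* refuse to compile if float is not exactly 32 bits wide */
    _Static_assert(sizeof(float) == sizeof(uint32_t), "float must be 32 bits");
    memcpy(&y, &TEST, sizeof y);      /* copy the bit pattern */
    printf("%08" PRIx32 "\n", y);     /* 40400000 */
    printf("%" PRIu32 "\n", y);       /* 1077936128 */
}

Because the two objects have the same size, the size-mismatch caveat above disappears.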

Pointer not giving expected output in c

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. Is there anything wrong with my code? It threw a few warnings because of the incompatible pointer assignment.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}
Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order is implementation-dependent, even when you know that both double and long long int have the same number of bits. On a Mac, PC or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
Almost every implementation this century uses the IEEE 754 format for hardware floating point; the 64-bit IEEE format is what double almost always is.
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/
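One caveat on the pointer cast: *(long long *)&d formally violates the strict-aliasing rule, so a memcpy into a fixed-width integer is a safer spelling of the same idea. A minimal sketch, assuming a 64-bit double and C99's <stdint.h>:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    double d = 3.141592653589793238;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* well-defined, unlike the cast */
    printf("d = %g = 0x%016" PRIX64 "\n", d, bits);
}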

Inconsistent results while printing float as integer [duplicate]

Possible Duplicate:
print the float value in integer in C language
I am trying out a rather simple piece of code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4 or 1.21, it prints out a garbage value. And it's not only 1.5: for 1.25, 1.75, 1.3125 (in other words, decimal numbers which can be converted exactly into binary form), it prints 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't follow it. Can anybody explain why this is happening? What has endianness got to do with it?
You're not casting the float; printf is just interpreting it as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>

int main()
{
    double a = 1.5;
    unsigned char *p = (unsigned char *)&a;  /* cast needed: types differ */
    int i;
    for (i = 0; i < sizeof(double); i++) {
        printf("%.2x", p[i]);
    }
    printf("\n");
    return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.41 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the 4 least-significant bytes are all zero, and some other value when trying with 1.41.
That being said, it is undefined behaviour and you should avoid it; besides, you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes are reversed because this was compiled on a little-endian machine, which means the least significant byte comes first.
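To get the conventional most-significant-first rendering on such a machine, you can simply walk the bytes in reverse; a minimal sketch (it assumes a little-endian machine, as above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    double a = 1.5;
    unsigned char p[sizeof a];
    size_t i;
    memcpy(p, &a, sizeof a);
    /* last byte down to first: most significant byte printed first
       on a little-endian machine */
    for (i = sizeof a; i-- > 0; )
        printf("%.2x", p[i]);
    printf("\n");   /* 3ff8000000000000 for 1.5 */
}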
You're not taking the default argument promotions into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your double variable as an integer type, which leads to undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch of arguments in printf is undefined behaviour.
Either cast a to int or use %f.
Use it this way:
printf("%d",(int)a);
or
printf("%f",a);
d stands for decimal: whenever you use "%d", C prints the number as a decimal integer, whatever type a really is (float, double, integer, char, ...). If a is an integer type (int, long), there is no problem. If a is a char, then since char is an integer type, C prints the char's value (its ASCII code).
But the problem appears when a is a floating-point type (float or double), because C stores floating-point values in a special way, not as a plain binary integer. So you get a strange result.
Why this strange result?
A short explanation: in the computer, a real number is represented by two parts, an exponent and a mantissa. If you tell C this is a real number, it knows which bits are the exponent and which are the mantissa. But if you say "hey, this is an integer", it makes no distinction between the exponent part and the mantissa part, hence the strange result.
If you want to understand exactly which integer it will print (and you can work it out), you can visit this link: represent FLOAT number in memory in C
If you don't want this strange result, you can cast the float to an int, which prints the integer part of the float number:
float a = 1.5;
printf("%d",(int)a);
Hope this helps :)
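If you want to see the exponent and mantissa parts directly, here is a minimal sketch that pulls the three fields out of a float, assuming a 32-bit IEEE 754 representation:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    float a = 1.5f;
    uint32_t bits;
    unsigned sign, exponent, mantissa;
    memcpy(&bits, &a, sizeof bits);   /* assumes 32-bit float */
    sign     = bits >> 31;            /* 1 sign bit */
    exponent = (bits >> 23) & 0xFF;   /* 8 exponent bits, biased by 127 */
    mantissa = bits & 0x7FFFFF;       /* 23 fraction bits */
    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06x\n",
           sign, exponent, (int)exponent - 127, mantissa);
}

For 1.5f this prints sign=0 exponent=127 (unbiased 0) mantissa=0x400000.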

Convert int to double

I ran this simple program, but when I convert from int to double, the result is zero. The sqrt of the zeros then displays negative values. This is an example from an online tutorial so I'm not sure why this is happening. I tried in Windows and Unix.
/* Hello World program */
#include <stdio.h>
#include <math.h>

main()
{
    int i;
    printf("\t Number \t\t Square Root of Number\n\n");
    for (i = 0; i <= 360; ++i)
        printf("\t %d \t\t\t %d \n", i, sqrt((double) i));
}
Maybe this?
int number;
double dblNumber = (double)number;
The problem is incorrect use of printf format - use %g/%f instead of %d
BTW, if you are wondering what your code did, here is an abridged explanation that may help your understanding:
The printf routine treated the floating-point result of sqrt as an integer. Signed and unsigned integers have underlying bit representations (put simply, the way they are 'encoded' in memory, registers etc.). By specifying a format to printf, you tell it how to decipher the bit pattern in a specific memory area or register (depending on the calling conventions etc.). For example:
unsigned int myInt = 0xFFFFFFFF;
printf( "as signed=[%i] as unsigned=[%u]\n", myInt, myInt );
gives: "as signed=[-1] as unsigned=[4294967295]"
One bit pattern is used, but treated as signed first and unsigned later. The same applies to your code: you've told printf to treat the bit pattern that 'encodes' the floating-point result of sqrt as an integer. See this:
float myFloat = 8.0;
printf( "%08X\n", *((unsigned int*)&myFloat) );
prints: "41000000"
According to the single-precision floating-point encoding format,
8.0 is simply (-1)^0 * (1 + fraction=0) * 2^(exp=130-127) = 1 * 2^3 = 8.0, but its bit pattern printed as an int looks like just 41000000 (hex, of course).
sqrt() returns a value of type double. You cannot print such a value with the conversion specifier "%d".
Try one of these two alternatives
printf("\t %d \t\t\t %f \n",i, sqrt(i)); /* use "%f" */
printf("\t %d \t\t\t %d \n",i, (int)sqrt(i)); /* cast to int */
The i argument to sqrt() is converted to double implicitly, as long as there is a prototype in scope. Since you included the proper header, there is no need for an explicit conversion.
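Putting that together, a corrected version of the program from the question (a sketch using the %f variant):

#include <stdio.h>
#include <math.h>

int main(void)
{
    int i;
    printf("\t Number \t\t Square Root of Number\n\n");
    for (i = 0; i <= 360; ++i)
        printf("\t %d \t\t\t %f \n", i, sqrt(i));   /* %f matches the double */
    return 0;
}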

Handling numbers in C

I couldn't understand how numbers are handled in C. Could anyone point me to a good tutorial?
#include <stdio.h>
main()
{
    printf("%f", 16.0/3.0);
}
This code gave: 5.333333
But
#include <stdio.h>
main()
{
    printf("%d", 16.0/3.0);
}
Gave some garbage value: 1431655765
Then
#include <stdio.h>
main()
{
    int num;
    num = 16.0/3.0;
    printf("%d", num);
}
Gives: 5
Then
#include <stdio.h>
main()
{
    float num;
    num = 16/3;
    printf("%f", num);
}
Gives: 5.000000
printf is declared as
int printf(const char *format, ...);
the first arg (format) is a string, and the rest can be anything. How the rest of the arguments are used depends on the format specifiers in format. If you have:
printf("%d%c", x, y);
x will be treated as int, y will be treated as char.
So,
printf("%f",16.0/3.0);
is OK, since you ask for a double (%f) and pass a double (16.0/3.0).
printf("%d",16.0/3.0);
you ask for an int (%d) but pass a double (double and int have different internal representations), so the bit representation of 16.0/3.0 (a double) happens to correspond to the bit representation of 1431655765 (an int).
int num;
num=16.0/3.0;
the compiler knows that you are assigning to an int, and converts the value for you. Note that this is different from the previous case.
OK, the first one gives the correct value, as expected.
In the second, you pass a floating-point value while printf treats it as an int (that's what "%d" is for, displaying int data; it is a little complicated to explain exactly why "%d" behaves as it does when passed a float, and since you're just starting I wouldn't worry about it), so printf reads the bits wrong and gives you a weird value (not a garbage value, though).
In the third, 16.0/3.0 is converted to an int when assigned to the int variable, which results in 5: converting a floating-point value to an int strips the decimals, with no rounding.
In the fourth, the right-hand side (16/3) is treated as an int because you don't have the .0 at the end. It is evaluated as integer division, and then 5 is assigned to float num, which explains the output.
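A small sketch of that truncation behaviour: converting a floating-point value to int drops the fractional part, truncating toward zero, and integer division does the same.

#include <stdio.h>

int main(void)
{
    printf("%d\n", (int)5.9);    /* 5: truncated, not rounded */
    printf("%d\n", (int)-5.9);   /* -5: truncation is toward zero */
    printf("%d\n", 16/3);        /* 5: integer division truncates too */
    return 0;
}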
It is because the format strings you are choosing do not match the arguments you are passing. I suggest looking at the documentation on printf. If you have "%d", it expects an integer value; how that value is stored is likely machine-dependent. If you have "%f", it expects a floating-point number, also in a likely machine-dependent representation. If you do:
printf( "%f", <<integer>> );
the printf procedure will look for a floating-point number where you have given an integer, but it doesn't know it's an integer; it just looks at the appropriate number of bytes and assumes that you have put the correct thing there.
16.0/3.0 is a double
int num = 16.0/3.0 is a double converted to an int
16/3 is an int (integer division)
float num = 16/3 is an int converted to a float
(see the sketch below, which runs all four with matching format specifiers)
You can search the web for printf documentation. One page is at http://linux.die.net/man/3/printf
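A minimal sketch of the four cases from the question, each printed with a matching specifier:

#include <stdio.h>

int main(void)
{
    int   i = 16.0/3.0;   /* double converted to int: 5 */
    float f = 16/3;       /* integer division, then converted to float: 5.0 */
    printf("%f\n", 16.0/3.0);   /* 5.333333 */
    printf("%d\n", i);          /* 5 */
    printf("%f\n", f);          /* 5.000000 (float promoted to double) */
    return 0;
}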
You can understand numbers in C by using the concept of implicit type conversion.
During the evaluation of any expression, C adheres to very strict type-conversion rules, and the result of an expression depends on those rules.
If the operands are of different types, the 'lower' type is automatically converted into the 'higher' type before the operation proceeds, and the result is of the higher type:
1: all short and char operands are automatically converted to int; then
2: if one of the operands is an int and the other is a float, the int is converted into a float, because float is higher than int.
If you want more information about implicit conversion, refer to the book Programming in ANSI C by E Balagurusamy.
printf formats a bit of memory into a human readable string. If you specify that the bit of memory should be considered a floating point number, you'll get the correct representation of a floating point number; however, if you specify that the bit of memory should be considered an integer and it is a floating point number, you'll get garbage.
printf("%d",16.0/3.0);
The result of 16.0/3.0 is 5.333333..., which is a double, not a float. Its IEEE 754 double-precision bit pattern is
0 | 10000000001 | 0101010101010101010101010101010101010101010101010101
(0x4015555555555555 in hex). When %d makes printf read 32 bits of it, it picks up the low-order half, 01010101 01010101 01010101 01010101, which read as a 32-bit integer is 1431655765.
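A sketch that verifies this, assuming a 64-bit IEEE 754 double:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    double d = 16.0/3.0;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);            /* assumes 64-bit double */
    printf("0x%016" PRIX64 "\n", bits);        /* 0x4015555555555555 */
    printf("%d\n", (int)(bits & 0xFFFFFFFF));  /* low half: 1431655765 */
    return 0;
}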
num=16.0/3.0;
is equivalent to num = (int)(16.0/3.0). This converts the double value (5.33333...) to an integer (5).
printf("%f",num);
is the same as printf("%f", (double)num), because a float argument to a variadic function is promoted to double.

Weird result printing pointers as float in C

I know this is wrong and gcc will give you a warning about it, but why does it work (i.e. the numbers are printed correctly, with some rounding difference)?
#include <stdio.h>
#include <stdlib.h>

int main() {
    float *f = (float *) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", *f);
    printf("%f\n", f);   /* wrong on purpose: passes the pointer to %f */
    return 0;
}
Edit:
Yes, I'm using gcc with a 32-bit machine. I was curious to see what results I'd get with other compilers.
I meddled with things a little more following Christoph's suggestion:
#include <stdio.h>
#include <stdlib.h>

int main() {
    float *f = (float *) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", f);  // this
    printf("%f\n", *f);
    printf("%f\n", f);  // that
    return 0;
}
This results in the first printf printing a value different from the last printf, despite the two calls being identical.
Reorder the printf() statements and you'll see it won't work any longer, so GCC definitely doesn't fix anything behind your back.
As to why it works at all: because of the default argument promotions for variable arguments, you'll actually pass a double with your first call. As pointers on your system seem to be 32-bit, the second call only overwrites the lower half of the 64-bit floating-point value.
In regards to your modified example:
the first call will print a double-precision value whose higher 32 bits are garbage and whose lower 32 bits are the bit value of the pointer f
the second call prints the value of *f promoted to double precision
the third call prints a double-precision value whose higher 32 bits come from (double)*f (as those bits still remain on the stack from the last call); as in the first case, the lower bits will again come from the pointer f
The numbers aren't printed correctly for me.
Output:
123.456001
0.000000
I'm using VC++ 2009.
printf has no knowledge of the actual argument types. It just analyzes the format string and interprets the data on the stack accordingly.
By coincidence (more or less =)), a pointer to float has the same size as a float (32 bits) on your platform, so the stack stays balanced after removing this argument from it.
On other platforms or with other data types this may not work.
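For completeness, the well-defined way to print both values from the question is %f for the float and %p for the pointer; a minimal sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    float *f = malloc(sizeof *f);
    if (f == NULL)
        return 1;
    *f = 123.456f;
    printf("%f\n", *f);          /* the value, promoted to double */
    printf("%p\n", (void *)f);   /* the pointer itself */
    free(f);
    return 0;
}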
