Using an incorrect format specifier in printf() - C

I am trying to understand the following problem:
printf("%d", 1.0f); // Output is 0
I really do not know why this happens. The number 1.0 (32-bit IEEE 754) has the following binary representation:
00111111 10000000 00000000 00000000
If we reinterpret that bit pattern as an integer, we get:
1 065 353 216
Also, sizeof(int) == sizeof(float) == 4 bytes.
I know that a float argument in C is normally promoted to double, but I use the f suffix for a float constant.
I tried different values and worked out the binary by hand, but I cannot explain the output. It is driving me crazy.
I want to see 1 065 353 216 in my console.

When you use an incorrect format specifier with printf, you invoke undefined behavior, meaning you can't reliably predict what will happen.
That said, on common calling conventions floating-point arguments are passed in floating-point registers, while integer arguments are passed in general-purpose registers or on the stack. So printf looks for an int in a place your float was never stored, and the value you see is whatever happened to be sitting there.
As an example, if I put that line by itself in a main function, it prints a different value every time I run it.
If you want to print the representation of a float, you can use a union (note that the matching specifier for unsigned int is %u):
union {
    float f;
    unsigned int i;
} u;
u.f = 1.0f;
printf("%u\n", u.i);

Related

From which endpoint and how does C read the variables?

I was messing around with pointers in C and was trying to read values from the same address using different types of pointers. First I created a double variable and assigned the number 26 to it:
double g = 26;
Then I assigned g's address to a void pointer: void *vptr = &g;. After that, I tried to read the value at the address of g as a float by type-casting:
float *atr = (float*) (vptr);. When I tried to print the value of *atr it gave me 0.000000. Then I used a pointer to char, since chars are 1 byte, and tried to look at those 8 bytes one by one:
char *t;
t = (char*) vptr;
for (int i = 0; i < 8; i++){
printf("%x t[%d]: %d\n",t+i , i, t[i]);
}
it gave me this output
ffffcbe9 t[1]: 0
ffffcbea t[2]: 0
ffffcbeb t[3]: 0
ffffcbec t[4]: 0
ffffcbed t[5]: 0
ffffcbee t[6]: 58
ffffcbef t[7]: 64
Then I checked the binary representation of g, which is 01000000 00111010 00000000 00000000 00000000 00000000 00000000 00000000, using an online converter.
When I convert each byte to decimal individually, the first byte is 64 and the second is 58.
So it was basically reversed. Then I tried to read it as a float again, but this time I shifted the address:
atr = (float*) (vptr + 4);. I didn't know how many bytes void-pointer arithmetic would shift by, but I discovered that it shifts by one, just like char pointers.
This time I printed it with printf("%f\n", *atr); and it gave me 2.906250.
When I checked its binary representation it was 01000000 00111010 00000000 00000000, which is the first half of the variable g. So I am confused about how C reads values from addresses, since it looks like C reads the values from the right end, and when I add positive numbers to addresses it shifts towards the left end.
The order in which the bytes of a scalar object are stored in C is implementation-defined, per C 2018 6.2.6.1 2. (Array elements are of course stored in ascending order by index, and members of structures are in order of declaration, possibly with padding between them.)
The behavior of using *atr after float *atr = (float*) (vptr); is not defined by the C standard, due to the aliasing rules in C 2018 6.5 7. It is defined to examine the bytes through a char lvalue, as you did with t[i], although which bytes are which is implementation-defined, per above.
A proper way to reinterpret some bytes of a double as a float is to copy them in byte-by-byte, which you can do with manual code using a char * or simply float f; memcpy(&f, &g, sizeof f);. (memcpy is specified to work as if by copying bytes, per C 2018 7.24.2.1 2.) This will of course only reinterpret the low-addressed bytes of the double as a float, which has two problems:
The low-address bytes may not be the ones that contain the most significant bytes of the double.
float and double commonly use different formats, and the difference is not simply that float has fewer bits in the significand (the fraction portion). The exponent field is also a different width and has a different encoding bias. So reinterpreting the double this way generally will not give you a float that has about the same value as a double.
I didn't know how many bytes it would shift but coincidentally i discovered that it shifts by one just like char pointers.
Supporting arithmetic on void * is a GCC extension that is, as far as I know, unnecessary. When offsets are added to or subtracted from a void *, GCC does the arithmetic as if it were a char *. The extension appears unnecessary because one can get the same arithmetic simply by using a char * instead of a void *, so it provides no new capability.

Dereferencing int but casting to a float prints nothing in C

C noob here trying to follow along with some online lectures. In the professor's example he shows us that we can read the data stored in an int as a float by doing the following: *(float*)&i. I tried doing this with the following code, but nothing seems to happen. I am testing it here: http://ideone.com/ExmXSW
#include <stdio.h>
int main(void) {
    // your code goes here
    int i = 37;
    printf("%f", *(float*)&i);
    return 0;
}
This causes undefined behaviour:
Executing *(float *)&i violates the strict aliasing rule, because it reads an int object through an lvalue of type float.
(The format specifier itself is not the problem here: %f expects a double, and a float argument is promoted to double in a variadic call.)
When code causes undefined behaviour, anything may happen. A lecture advising you to do this is a rubbish lecture, unless it is specifically showing this as an example of what NOT to do. It is incorrect to say "we can read the data stored in an int as a float" by this method.
NB. ideone.com is not great for testing because it suppresses a whole lot of compiler error messages, so you may think your code is correct when it in fact is not.
What the professor may have wanted to teach you is that if you store an integer in a memory location (32 bits on most machines), you can read it back as a float (again, 32 bits on most machines), but you will get a different value. This is because an integer is stored as a simple binary number: for example, 0x00000001 equals integer 1, 0x00000002 equals integer 2, and so on.
The float representation in binary, however, is quite different. It looks like this:
bit:  31   30......23   22.....................0
       S   EEEEEEEE     MMMMMMMMMMMMMMMMMMMMMMM
where S is the sign, E is the exponent and M is the mantissa.
Here is a bit of code that I was working on to help you understand this:
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    void *x = malloc(sizeof(int));  /* one block of memory */
    int *y = x;                     /* viewed as an int    */
    float *z = x;                   /* viewed as a float   */
    *y = 955555555;
    printf("%f", *z);  /* note: reading *z after writing *y violates strict aliasing */
    free(x);
    return 0;
}
What I have done in this code is allocate some memory and let variable y interpret it as an integer and variable z interpret it as floating point. Now you can change *y and see that *z has a totally different value. In this case the output of the program is 0.000117.
You can also change *z and see the same happen to *y, because both of them point to the same memory location but interpret it as different types.
You are actually using a format code that matches the argument: despite the strict-aliasing violation, printf("%f", *(float*)&i); will on most compilers print the reinterpreted value. What you must avoid is mismatching the specifier and the argument. In modern x64 calling conventions, the first few arguments are passed in registers, but on x86-64 it's a completely different set of registers for integer vs. floating-point values, so printing this expression with %i would make printf look at a completely different register holding an effectively random value (it's deterministic only in the sense that you could examine the assembly to figure out what it will be, but not something you could guess from looking at the source code).
There is absolutely NO correlation between the bits you would use to store 37 as an int and what those bits mean when interpreted as a float.[1] Why? Integers (presuming 32-bit for the sake of argument) are stored in memory as a simple binary representation of the number in question (subject to the endianness of the current machine), while 32-bit floating-point numbers are stored in IEEE-754 single-precision format, comprising a single sign bit, an 8-bit exponent (in excess-127 notation) and a 23-bit mantissa (in a normalized "hidden-bit" format).
The only thing the integer and the floating-point number have in common is the fact that they both occupy 32 bits of memory. That is the only reason you can, despite violating every tenet of strict aliasing, cast a pointer to the integer's address and attempt to interpret it as a float.
Let's, for the sake of argument, look at what you are doing. When you store 37 as an integer, the 32-bit value in memory is:
00000000-00000000-00000000-00100101
When you interpret those 32 bits of memory by casting to float, you are attempting to interpret them as an IEEE-754 single-precision floating-point value:
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 1
|-|- - - - - - - -|- - - - - - - - - - - - - - - - - - - - - - -|
 s       exp                     mantissa
when in reality, if you were looking at 37.0 in IEEE-754 single-precision floating point format, you would have the following in memory:
0 1 0 0 0 0 1 0 0 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
|-|- - - - - - - -|- - - - - - - - - - - - - - - - - - - - - - -|
 s       exp                     mantissa
If you compare what you are trying to interpret as a float with what you should be looking at for a float, you should notice that your cast results in a floating-point representation with a 0 value in the 8-bit exponent field and a nonsensically small 23-bit mantissa. Integer 37 interpreted as a float results in a value so small it is virtually non-printable, regardless of how many significant digits you specify in the format.
The bottom line is that there is no relation between an integer value in memory and the floating-point number created from those same bits (aside from a few computations not relevant here). An integer is an integer and a float is a float. The only thing their memory storage has in common is that on some machines they both occupy 32 bits of memory. You can abuse a cast and attempt to use the same bits as a floating-point value, but be aware there is little to nothing in the way of correlation between the two interpretations.
footnotes:
[1] There are limited determinations, such as finding the next higher or lower valid floating-point value, that can be drawn by interpreting the memory occupied by a floating-point value as an integer.
You wrote that "nothing happens", but the code actually works as expected (with some assumptions about the architecture, such as that ints and floats are both 32-bit). At least it prints something, but the result is probably not what you expected.
On Ideone.com the printed output is 0.000000. The reason for this is that the integer value 37 interpreted as float value is 5.1848e-44. When using the "%f" format specifier in printf, this extremely small number will be rounded to zero. If you change the format string to "%e", the output would be 5.184804e-44. Or if you would change the value of i to 1078530010, the output would be 3.141593, for example.
(NB: Note that the value is actually first converted from float to double, and the double is passed to printf(). The "%f" format specifier also expects a double, not a float, so that works out well.)
There's certainly truth in many of the already posted answers. The code indeed violates the strict aliasing rule, and in general the results are undefined. This is partly because data types can differ between CPU architectures (different sizes, different endianness, etc.). Also, the compiler is allowed to make certain assumptions and try to optimize your code, causing the compiled executable to behave differently than intended.
In practice, the intended behavior can be "forced" by using constructs such as volatile pointers, restricting the compiler's ability to optimize the code. Newer versions of the C and C++ standards have more advanced constructs for this. However, generally speaking, for a given target architecture whose data sizes and formats you know, the code you posted can appear to work.
However, there is a better solution, and I'm surprised nobody has mentioned it yet. The only (somewhat) portable way to do this without breaking the strict aliasing rule is to use a union. See the following example (and demo at ideone.com):
#include <stdio.h>
int main(void) {
    union {
        int i;
        float f;
    } test;
    test.i = 1078530010;
    printf("%f", test.f);
    return 0;
}
This code also prints 3.141593, as expected.

How does ' %f ' work in C?

Hey, I need to know how %f works; that is, how
printf("%f", number);
extracts a floating-point number from the series of bits in number.
Consider the code:
#include <stdio.h>
int main(void)
{
    int i = 1;
    printf("\nd %d\nf %f", i, i);
    return 0;
}
Output is :
d 1
f -0.000000
So ultimately it doesn't depend on the variable i, but just on the usage of %d, %f (or whatever). I just need to know how %f extracts the float number corresponding to the series of bits in i.
To all those who misunderstood my question: I know that %f can't be used with an integer and would load garbage values if the size of int were smaller than that of float. In my case the sizes of int and float are both 4 bytes.
Let me be clear: if the value of i is 1, then the corresponding binary value of i will be:
0000 0000 0000 0000 0000 0000 0000 0001 [32 bits]
How would %f extract -0.000000, as in this case, from this series of bits? (How does it know where to put the decimal point, etc.? I can't work it out from IEEE 754.)
[Please do correct me if I am wrong in my explanation or assumption.]
It's undefined behavior to use "%f" with an int, so the answer to your question is: you don't need to know, and you shouldn't do it.
The output depends on the format specifier ("%f" instead of "%d") rather than on the type of the argument i, because variadic functions (like printf() or scanf()) have no way of knowing the types of the arguments in their variable part.
As others have said, giving mismatched "%" specifier and arguments is undefined behavior, and, according to the C standard, anything can happen.
What does happen, in this case, on most modern computers, is this:
printf looks at the place in memory where the data should have been, interprets whatever data it finds there as a floating-point number, and prints that number.
Since printf is a function that can take a variable number of arguments, all floats are converted to doubles before being sent to the function, so printf expects to find a double, which (on normal modern computers) is 64 bits. But you send an int, which is only 32 bits, so printf will look at the 32 bits from the int, and 32 more bits of garbage that just happened to be there. When you tried this, it seems that the combination was a bit pattern corresponding to the double floating-point value -0.0.
Well.
It's easy to see how an integer can be packed into bytes, but how do you represent decimals?
The simplest technique is fixed point: of the n bits, the first m are before the point and the rest after. This is not a very good representation, however. Bits are wasted on some numbers, and it has uniform precision, while in real life, most desired decimals are between 0 and 1.
Enter floating point. The IEEE 754 spec defines a way of interpreting bits that has, since then, been almost universally accepted. It has very high near-zero precision, is compact, expandable and allows for very large numbers as well.
The linked articles are a good read.
You can output a floating-point number (float x;) manually by treating the value as a "black box" and extracting the digits one-by-one.
First, check if x < 0. If so, output a minus-sign - and negate the number. Now we know that it is positive.
Next, output the integer portion. Assign the floating-point number to an integer variable, which will truncate it, i.e. int integer = x;. Then determine how many digits there are using the base-10 logarithm log10(). (Note that log10(0) is undefined, so you'll have to handle zero as a special case.) Then iterate from the highest digit index down to 0, each time dividing by 10^digit_index to move the desired digit into the units position, and take the remainder modulo 10. Note that pow() returns a double and the % operator doesn't accept floating-point operands, so the quotient must be brought back to an integer:
for (i = digits; i >= 0; i--)
    dig = (integer / (int)pow(10, i)) % 10;
Then output the decimal point.
For the fractional part, subtract the integer from the original (absolute-value, remember) floating-point number, and output each digit in a similar way, but this time multiplying by 10 at each step. You won't be able to predict the number of significant fractional digits this way, so just use a fixed precision (a constant number of fractional digits).
I have C code to fill a string with the representation of a floating-point number here, although I make no claims as to its readability.
IEEE formats store the number as a normalized binary fraction. It's similar to scientific notation, like 3.57×10² instead of 357.0. So it is stored as an exponent-mantissa pair. Being "normalized" means there's actually an implicit additional 1 bit at the front of the mantissa that is not stored. Hopefully that's enough to help you understand a more detailed description of the format from elsewhere.
Remember, we're in binary, so there's no "decimal point". And with the exponent-mantissa notation, there isn't even a binary point in the format. It's implicitly represented in the exponent.
On the tangentially-related issue of passing floats to printf, remember that this is a variadic function. So it does not declare types of arguments that it receives, and all arguments passed undergo automatic conversions. So, float will automatically promote to double. So what you're doing is (substituting hex for brevity), passing 2 64-bit values:
double f, double f
0xabcdefgh 0xijklmnop 0xabcdefgh 0xijklmnop
Then you tell printf to interpret this sequence of words as an int followed by a double. So the 32-bit int seen by printf is only the first half of the floating-point number, and the floating-point number seen by printf has its words reversed. The fourth word is never used.
To get the integer representation, you can use type-punning with a pointer (beware that this itself violates the strict aliasing rule; copying the bytes with memcpy is the well-defined alternative):
printf("%d %f\n", *(int *)&f, f);
Which reads (from right to left): take the address of the float, treat it as a pointer-to-int, follow the pointer.

Inconsistent results while printing float as integer [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
print the float value in integer in C language
I am trying out a rather simple code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4, 1.21, etc., it prints a garbage value. Not only for 1.5: for 1.25, 1.75, 1.3125 (in other words, decimal numbers which can be exactly represented in binary), it prints 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't understand it. Can anybody explain why this is happening? What has endianness got to do with it?
You're not casting the float; printf is just interpreting it as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>
int main(void)
{
    double a = 1.5;
    unsigned char *p = (unsigned char *)&a;  /* byte-wise access is permitted */
    int i;
    for (i = 0; i < (int)sizeof(double); i++) {
        printf("%.2x", *(p + i));
    }
    printf("\n");
    return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.41 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the 4 least-significant bytes of 1.5 are all zero, and it prints some other value for 1.41.
That being said, this is undefined behaviour and you should avoid it; you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes appear reversed because this was compiled on a little-endian machine, which means the least significant byte comes first in memory.
You are not taking argument promotion into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your promoted double as an integer type, which leads to undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch between printf's format specifier and its argument is undefined behaviour.
Either typecast a or use %f:
printf("%d",(int)a);
or
printf("%f",a);
d stands for "decimal": whenever you use "%d", C prints the number as a decimal integer. So if a has an integer type (int, long), there is no problem. If a is a char: since char is an integer type, C will print the character's value in ASCII.
The problem appears when a has a floating type (float or double), because C encodes floating-point values in a special way, not as a plain binary integer. So you get a strange result.
Why this strange result?
A short explanation: in the computer, a real number is represented by two parts, an exponent and a mantissa. If you tell C "this is a real number", C knows which bits are the exponent and which are the mantissa. But if you say "hey, this is an integer", it makes no distinction between the exponent part and the mantissa part, hence the strange result.
If you want to understand exactly which integer it will print (and of course you can work that out), you can visit this link: represent FLOAT number in memory in C
If you don't want this strange result, you can cast the float to int, so it will print the integer part of the float number.
float a = 1.5;
printf("%d", (int)a);
Hope this helps :)

A variable is declared as float f = 5.2, but %d is used when printing; instead of 5 it prints a garbage value

/* Compiled using the GCC compiler on CentOS 5 */
#include <stdio.h>
int main(void)
{
float f = 5.2;
printf("f = %d\n",f);
return 0;
}
/* Output is not 5; it prints some garbage value */
Why is the output not 5? What is the in-memory representation of float values?
"%d" prints a decimal integer. This means printf interprets what gets passed as an integer, not a float. It does no smart conversion, and this is undefined behaviour.
The in-memory representation of a float is implementation-specific. Most implementations use IEEE 754, but this is not guaranteed at all.
For the record, using "-Wall -Wextra" with gcc would have flagged this mistake with a warning. If you want to print it as an integer you must cast it:
printf("f = %d\n",(int)f);
Your code is not printing 5 because you're not giving it 5; you're giving it 5.2. 5 is an integer value and 5.2 is a floating-point value. The first is typically encoded using two's complement while the second is typically encoded using IEEE-754 floating point. (Other encodings are possible and even occasionally in use, but the two you're most likely to encounter are the two I've mentioned.)
If you tell the computer you're giving it an integer (%d) and then proceed to give it a floating-point value (5.2), getting garbage is what you should expect. It takes the bits of the IEEE floating-point representation and reads them as if they were an integer. (It's the old formula: garbage in, garbage out.) If you stop lying to the computer, you'll get better results.
The conversion you want in your printf call is %f instead of %d. Using it means you're no longer lying to the computer about the type of the data being passed in. That being said, to head off your inevitable next question, be sure to read this explanation of floating point so you understand why your floating-point numbers aren't what you think they are.
