Weird result printing pointers as float in C

I know this is wrong and gcc will give you a warning about it, but why does it work (i.e. the numbers are printed correctly, with some rounding difference)?
#include <stdio.h>
#include <stdlib.h>

int main() {
    float *f = (float*) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", *f);
    printf("%f\n", f);   /* wrong: passes a pointer where %f expects a double */
    return 0;
}
Edit:
Yes, I'm using gcc on a 32-bit machine. I was curious to see what results I'd get with other compilers.
I meddled with things a little more following Christoph's suggestion:
#include <stdio.h>
#include <stdlib.h>

int main() {
    float *f = (float*) malloc(sizeof(float));
    *f = 123.456;
    printf("%f\n", f);   // this
    printf("%f\n", *f);
    printf("%f\n", f);   // that
    return 0;
}
This results in the first printf printing a different value from the last printf, despite the two statements being identical.

Reorder the printf() statements and you'll see it won't work any longer, so GCC definitely doesn't fix anything behind your back.
As to why it works at all: because of the default argument promotions for variadic arguments, your first call actually passes a double. As pointers on your system seem to be 32-bit, the second call only overwrites the lower half of the 64-bit floating-point value.
In regards to your modified example:
- the first call prints a double-precision value whose upper 32 bits are garbage and whose lower 32 bits are the bit pattern of the pointer f
- the second call prints the value of *f promoted to double precision
- the third call prints a double-precision value whose upper 32 bits come from (double)*f (as these bits still remain on the stack from the previous call); as in the first case, the lower bits again come from the pointer f
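To make that concrete, here is a minimal sketch (an illustration, not the actual stack mechanics, which you cannot portably reproduce) that overwrites the low 32 bits of a double with a made-up 32-bit pointer value, assuming a little-endian machine:
#include <stdio.h>
#include <string.h>

int main(void) {
    double d = 123.456;                  /* the value left over from a previous %f call */
    unsigned int fake_ptr = 0x0804a008;  /* hypothetical 32-bit pointer value */

    /* On a little-endian machine the first four bytes hold the least
       significant mantissa bits, so clobbering them only nudges the value. */
    memcpy(&d, &fake_ptr, sizeof fake_ptr);
    printf("%f\n", d);  /* prints approximately 123.456, with a small error */
    return 0;
}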

The numbers aren't printed correctly for me.
Output:
123.456001
0.000000
I'm using VC++ 2009.

printf has no knowledge of the actual argument types. It just analyzes the format string and interprets the data on the stack accordingly.
By coincidence (more or less =)) a pointer to float has the same size as a float (32 bits) on your platform, so the stack stays balanced after this argument is removed from it.
On other platforms or with other data types this may not work.
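You can check the sizes involved on your own platform; a small sketch (note that what %f actually reads is a double, because of the promotion mentioned above):
#include <stdio.h>

int main(void) {
    printf("sizeof(float)   = %zu\n", sizeof(float));
    printf("sizeof(double)  = %zu\n", sizeof(double));   /* what %f actually reads */
    printf("sizeof(float *) = %zu\n", sizeof(float *));
    return 0;
}
On a typical x86-64 ABI the coincidence breaks down even when the sizes line up, because floating-point variadic arguments are passed in SSE registers while pointers go in integer registers.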

Related

The problem with using the printf function to "output float with %d" in C

I am a newbie to the C language. When I was learning about floating-point numbers today, I ran into the following problem.
float TEST = 3.0f;
printf("%x\n", TEST);
printf("%d\n", TEST);
first output:
9c9e82a0
-1667333472
second output:
61ea32a0
1642738336
As shown above, each execution outputs different results. I have read a lot about the IEEE 754 format and still don't understand the reason. I would like to ask if anyone can explain this or provide keywords for me to study, thank you.
Edit:
Thank you for your replies. I know how to print the IEEE 754 bit pattern. However, as Nate Eldredge and chux-Reinstate Monica said, using %x and %d in printf like this is undefined behavior. If there is no floating-point register in our device, how does it work? Is this described in the C99 specification?
Most of the time, when you call a function with the "wrong kind" (wrong type) of argument, an automatic conversion happens. For example, if you write
#include <stdio.h>
#include <math.h>
printf("%f\n", sqrt(144));
this works just fine. The compiler knows (from the function prototype in <math.h>) that the sqrt function expects an argument of type double. You passed it the int value 144, so the compiler automatically converted that int to double before passing it to sqrt.
But this is not true for the printf function. printf accepts arguments of many different types, and as long as each argument is right for the particular % format specifier it goes with in the format string, it's fine. So if you write
double f = 3.14;
printf("%f\n", f);
it works. But if you write
printf("%d\n", f); /* WRONG */
it doesn't work. %d expects an int, but you passed a double. In this case (because printf is special), there's no good way for the compiler to insert an automatic conversion. So, instead, it just fails to work.
And when it "fails", it really fails. You don't even necessarily get anything "reasonable", like an integer representing the bit pattern of the IEEE-754 floating-point number you thought you passed. If you want to inspect the bit pattern of a float or double, you'll have to do that another way.
If what you really wanted to do was to see the bits and bytes making up a float, here's a completely different way:
#include <stdio.h>

int main(void) {
    float test = 3.14f;
    unsigned char *p = (unsigned char *)&test;
    size_t i;
    printf("bytes in %f:", test);
    for (i = 0; i < sizeof(test); i++)
        printf(" %02x", p[i]);
    printf("\n");
    return 0;
}
There are some issues here with byte ordering ("endianness"), but this should get you started.
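If you want to check the byte ordering on your machine, a value with recognizable bytes makes it visible; a minimal sketch along the same lines:
#include <stdio.h>

int main(void) {
    unsigned int x = 0x01020304;
    unsigned char *p = (unsigned char *)&x;
    size_t i;

    /* little-endian prints 04 03 02 01, big-endian prints 01 02 03 04 */
    for (i = 0; i < sizeof x; i++)
        printf(" %02x", p[i]);
    printf("\n");
    return 0;
}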
To print the hex representation of the float (i.e. how it is represented in memory):
float TEST = 3.0f;
int y = 0;
memcpy(&y, &TEST, sizeof(y));  /* needs <string.h> */
printf("%x\n", y);
printf("%d\n", y);
or
union
{
    float TEST;
    int y;
} uf = { .y = 0 };
uf.TEST = 3.0f;
printf("\n%x\n", (unsigned)uf.y);
printf("%d\n", uf.y);
Both examples assume sizeof(float) <= sizeof(int) (if they are not equal, the integer needs to be zeroed first).
And the result (same for both):
40400000
1077936128
As you can see, it is completely different from yours.
https://godbolt.org/z/Kr61x6Kv3

Pointer not giving expected output in C

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. And is there anything wrong with my code? It threw a few warnings because of incompatible pointer assignments.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}
Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order is implementation-dependent, even when you know that double and long long int have the same number of bits. On a Mac, PC or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
Almost every implementation uses IEEE 754 format for hardware floating point this century.
64-bit IEEE format (aka double)
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/
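As a side note to this answer: if you'd rather avoid the pointer cast, which formally violates the strict aliasing rule (see a later answer below), the same dump can be done with memcpy; a sketch assuming double is 64 bits:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = 3.141592653589793238;
    uint64_t bits;

    memcpy(&bits, &d, sizeof bits);  /* well-defined way to grab the bit pattern */
    printf("d = %g = 0x%016llX\n", d, (unsigned long long)bits);
    return 0;
}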

Copying data from float to an int, without the data being cast

In the following code I try to copy data from a float f to an int i, bit for bit, without any conversion of the data. I cast the address of f to (int*), and dereference this address when I assign it to i. The thinking is that if the program sees &f as an int pointer, it won't do a type conversion when (int*) &f is dereferenced and assigned to i. But this isn't working and I do not understand why.
#include <stdio.h>

int main(void) {
    float f = 3.0;
    int i;
    /* Check to make sure int and float have the same size on my machine */
    printf("sizeof(float)=%zu\n", sizeof(float));  /* prints "sizeof(float)=4" */
    printf("sizeof(int)=%zu\n", sizeof(int));      /* prints "sizeof(int)=4" */
    /* Verify that &f and (int*) &f have the same value */
    printf("&f = %p\n", (void *)&f);               /* prints &f = 0x7ffc0670dff8 */
    printf("(int*) &f = %p\n", (void *)(int *)&f); /* prints (int*) &f = 0x7ffc0670dff8 */
    i = *((int *)&f);
    printf("%f\n", i);  /* prints 0.000000 (I would have expected 3.000000) */
    return 0;
}
By assigning via typecasting you are copying the raw 4 bytes of data from one variable to another. The problem is that a 3 in a 32-bit floating point variable isn't stored like a 3 in an integer variable.
For example, a 3 in 32-bit float format is stored as 0x40400000. Reading that bit pattern as an integer produces 1077936128.
See https://en.wikipedia.org/wiki/IEEE_floating_point or http://www.madirish.net/240 or https://users.cs.duke.edu/~raw/cps104/TWFNotes/floating.html
Accessing the same object through pointers of unrelated types violates the strict aliasing rule. You want to avoid this, as in complicated code it can cause the compiler to produce binaries that don't do what you want. It's also undefined behaviour.
The correct way to change the type of a bit pattern in C between int and float is to avoid pointers completely, and use a union:
union int_float { int i; float f; };
int ival = (union int_float){ .f = 4.5 }.i;
float fval = (union int_float){ .i = 45 }.f;
Results may vary slightly. Be sure to check that the sizes of your floating type and your integer type are identical, or data will be lost/garbage data generated.
Note that it is possible to produce values this way that are not valid elements of the destination type (this is more or less what you're seeing above, when non-zero integer values get printed as floating-point zeroes despite not having all-zero bit patterns), which could lead to undefined or unexpected behaviour further down the line. Be sure to verify the output.
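A small sketch of the size check this answer recommends, using a C11 compile-time assertion (assuming your compiler supports _Static_assert):
#include <stdio.h>

union int_float { int i; float f; };

/* refuses to compile if the two types don't have the same size */
_Static_assert(sizeof(int) == sizeof(float),
               "int and float must have the same size for this punning");

int main(void) {
    int ival = (union int_float){ .f = 4.5f }.i;
    printf("bits of 4.5f as int: 0x%08x\n", (unsigned)ival);
    return 0;
}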
After the copy you obtain an integer stored in the variable i.
So if you want to print such a value you have to use:
printf("%d\n", i); /* prints 1077936128 */
Now printf will interpret the memory bits correctly and print the expected value.
This is not a cast but a bit copy, like you said.
Remember that *((int *)&f) dereferences that pointer as an int value. It won't do what you expect on machines where int and float have different sizes.
A different way to copy, if the sizes of int and float are identical, is:
memcpy(&i, &f, sizeof f);
After copying the four bytes contained in the memory area, if we print the content as a sequence of four decimal byte values we can see the redistribution that has taken place:
for the floating-point value:
3 160 0 3
for the integer:
0 204 204 0
This means that the four bytes are managed differently by the computer according to the type of representation: int or float.
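To reproduce this kind of byte dump yourself, here is a minimal sketch; the exact bytes you see depend on the stored value and on your machine's endianness:
#include <stdio.h>

static void dump_bytes(const void *p, size_t n) {
    const unsigned char *b = p;
    size_t i;
    for (i = 0; i < n; i++)
        printf("%u ", b[i]);
    printf("\n");
}

int main(void) {
    float f = 3.0f;
    int j = 3;

    printf("float 3.0f bytes: ");  /* 0 0 64 64 on a little-endian x86 machine */
    dump_bytes(&f, sizeof f);
    printf("int   3    bytes: ");  /* 3 0 0 0 on the same machine */
    dump_bytes(&j, sizeof j);
    return 0;
}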

Inconsistent results while printing float as integer [duplicate]

Possible Duplicate:
print the float value in integer in C language
I am trying out a rather simple code like this:
float a = 1.5;
printf("%d",a);
It prints out 0. However, for other values, like 1.4, 1.21, etc., it is printing out a garbage value. Not only for 1.5: for 1.25, 1.5, 1.75, 1.3125 (in other words, decimal numbers which can be perfectly converted into binary form), it is printing 0. What is the reason behind this? I found a similar post here, and the first answer looks like an awesome answer, but I couldn't discern it. Can anybody explain why this is happening? What has endianness got to do with it?
You're not casting the float; printf is just interpreting its representation as an integer, which is why you're getting seemingly garbage values.
Edit:
Check this example C code, which shows how a double is stored in memory:
#include <stdio.h>

int main()
{
    double a = 1.5;
    unsigned char *p = (unsigned char *)&a;  /* cast needed: &a is a double * */
    int i;
    for (i = 0; i < sizeof(double); i++) {
        printf("%.2x", *(p + i));
    }
    printf("\n");
    return 0;
}
If you run that with 1.5 it prints
000000000000f83f
If you try it with 1.41 it prints
b81e85eb51b8f63f
So when printf interprets 1.5 as an int, it prints zero because the four least-significant bytes are all zero, and some other value when trying with 1.41.
That being said, it is undefined behaviour and you should avoid it; you won't always get the same result, since it depends on the machine and on how the arguments are passed.
Note: the bytes are reversed because this was compiled on a little-endian machine, which means the least significant byte comes first.
You're not taking the argument promotions into account. Because printf is a variadic function, the arguments are promoted:
C11 (n1570), § 6.5.2.2 Function calls
arguments that have type float are promoted to double.
So printf tries to interpret your double variable as an integer type, which leads to undefined behavior. Just add a cast:
double a = 1.5;
printf("%d", (int)a);
A mismatch between arguments and conversion specifiers in printf is undefined behaviour.
Either typecast a or use %f:
printf("%d",(int)a);
or
printf("%f",a);
d stands for decimal. Whatever a is (float/double/integer/char), when you use "%d", C will print that argument as a decimal integer. So if a has an integer type (int, long), there is no problem. If a is a char, then because char is an integer type, C will print the char's ASCII value.
But the problem appears when a has a floating type (float or double): floating-point values are stored in a special way, not as plain integers, so you get a strange result.
Why this strange result?
A short explanation: in a computer, a real number is represented by two parts: an exponent and a mantissa. If you tell C it is a real number, C knows which bits are the exponent and which are the mantissa. But if you say "hey, this is an integer", no distinction is made between the exponent part and the mantissa part -> strange result.
If you want to understand exactly which integer it will print (and you can indeed predict it), you can visit this link: represent FLOAT number in memory in C
If you don't want this strange result, you can cast the float to int, so it will print the integer part of the float number.
float a = 1.5;
printf("%d",(int)a);
Hope this helps :)
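If you want to see the exponent and mantissa this answer mentions, you can pull them out of the bit pattern directly; a sketch assuming a 32-bit IEEE 754 float:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float a = 1.5f;
    uint32_t bits;

    memcpy(&bits, &a, sizeof bits);           /* grab the raw bit pattern */
    unsigned sign     = bits >> 31;           /* 1 bit */
    unsigned exponent = (bits >> 23) & 0xFF;  /* 8 bits, biased by 127 */
    unsigned mantissa = bits & 0x7FFFFF;      /* 23 bits */

    printf("%f = sign %u, exponent %u (unbiased %d), mantissa 0x%06x\n",
           a, sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}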

What's the difference between the type casts in these two pieces of C code?

code one is:
int a = 0x42500000;
float *f = (float *)&a;
printf("%f", *f); //output 52.00000
code two is:
int a = 0x42500000;
float f = (float)a;
printf("%f", f); //output 0.00000
Why does code two output 0.00000? Can anyone tell me why?
The first snippet interprets the contents of a's memory location as if it were a float, without any value conversion. Unless you really know what you are doing, you don't want to do that; it's almost always a mistake.
The second snippet casts the value of a to float, which should give you the same value as the int. It really does do that: your code gives me 1112539136.000000. What compiler are you using that gives you 0 there?
The first cast tells the compiler to assume that the location where a is stored holds a float, and to treat it accordingly.
The second tells the compiler to convert the value of a to a float.
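The difference can be shown side by side without the aliasing cast, using memcpy for the reinterpretation; a sketch assuming int and float are both 32 bits:
#include <stdio.h>
#include <string.h>

int main(void) {
    int a = 0x42500000;
    float reinterpreted;
    float converted = (float)a;            /* same value, new bit pattern */

    memcpy(&reinterpreted, &a, sizeof a);  /* same bit pattern, read as float */

    printf("reinterpreted: %f\n", reinterpreted);  /* 52.000000 */
    printf("converted:     %f\n", converted);      /* 1112539136.000000 */
    return 0;
}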
