What does *((int*)&f) do in C?

I found a way to convert a float to binary in C through this answer: Convert float to binary in C, but I'm not sure what the code used in the answer, *((int*)&f), actually does to convert the number. What does it do exactly?

It invokes undefined behavior, meaning your program is invalid if it's reachable.
What someone intended for it to do is to reinterpret the bits of a float as an int, assuming int is 32-bit and probably also that float is IEEE single.
There are two correct ways to do this (with int replaced by uint32_t to drop the first, unnecessary assumption):
(union { float f; uint32_t i; }){f}.i
uint32_t i; memcpy(&i,&f,sizeof i);
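For completeness, here is a minimal, self-contained sketch of both approaches (assuming, as above, a 32-bit IEEE-754 float):
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = 2.5f;

    /* 1. Type punning through a union compound literal (valid in C99 and later). */
    uint32_t a = (union { float f; uint32_t i; }){f}.i;

    /* 2. Copying the object representation with memcpy. */
    uint32_t b;
    memcpy(&b, &f, sizeof b);

    printf("%08" PRIX32 " %08" PRIX32 "\n", a, b);  /* both print 40200000 on an IEEE platform */
    return 0;
}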

Related

Passing float value from C program to assembler level program using only integer registers?

For my class we are writing a simple asm program (with C and AT&T x86-64) that prints all the bits of an integer or float. I have the integer part working fine. For the float part my professor has instructed us to pass the float value only using integer registers. Not too sure why we're not allowed to use float registers. Regardless, does anyone have ideas on how to go about this?
my professor has instructed us to pass the float value only using integer registers.
A simple approach is to copy the float into an integer using memcpy()
float f = ...;
assert(sizeof f == sizeof(uint32_t));
uint32_t u;
memcpy(&u, &f, sizeof u);
foo(u);
Another is to use a union, perhaps via a compound literal:
#include <assert.h>
#include <stdint.h>

void foo(uint32_t);   /* consumer that receives the raw bits */

int main(void) {
    float f = 2.39f;  /* some value to pass along */
    assert(sizeof f == sizeof(uint32_t));
    //  v----------- compound literal -----------v
    foo((union { float f; uint32_t u; }) { .f = f }.u);
    //  ^------------- union object -------------^
}
Both require that the integer type used and the float are the same size.
Other issues include ensuring the two types have the same endianness, though in practice the byte orders of the float and the integer will almost always match.
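As an illustrative round trip under the same size assumption (the function name here is hypothetical, not part of the assignment's interface), the receiver can memcpy the bits back into a float:
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical receiver: gets the float's raw bits through an integer parameter. */
static void print_float_bits(uint32_t u) {
    float f;
    memcpy(&f, &u, sizeof f);   /* recover the original float value */
    printf("value %g, bits %08" PRIX32 "\n", f, u);
}

int main(void) {
    float f = 1.5f;
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* pack the float's bits into an integer */
    print_float_bits(u);
    return 0;
}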

Pointer not giving expected output in C

Why doesn't the double variable show a garbage value?
I know I am playing with pointers, but I meant to. And is there anything wrong with my code? It threw a few warnings because of incompatible pointer assignments.
#include "stdio.h"
double y= 0;
double *dP = &y;
int *iP = dP;
void main()
{
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10lf %#10lf \n",y,*dP,*iP,*(iP+1));
scanf("%lf %d %d",&y,iP,iP+1);
printf("%10#x %#10x %#10x %#10x \n",&y,dP,iP,iP+1);
printf("%#10lf %#10lf %#10d %#10d \n",y,*dP,*iP,*(iP+1));
}
Welcome to Stack Overflow. It's not very clear what you're trying to do with this code, but the first thing I'll say is that it does exactly what it says it does. It tries to format data with the wrong format string. The result is garbage, but that doesn't necessarily mean it will look like garbage.
If part of the idea is to print out the internal bit pattern of a double in hexadecimal, you can do that, but the code will be implementation-dependent. The following should work on just about any modern 32- or 64-bit desktop implementation that uses 64 bits for both the double and long long int types:
double d = 3.141592653589793238;
printf("d = %g = 0x%016llX\n", d, *(long long*)&d);
The %g specification is a quick way to print out a double in (usually) easily readable form. The %llX format prints an unsigned long long int in hexadecimal. The byte order is implementation-dependent, even if you know that both double and long long int have the same number of bits. On a Mac, PC, or other Intel/AMD architecture machine, you'll get the display in most-significant-digit-first order.
The *(long long *)&d expression (reading from right to left) will take the address of d, convert that double* pointer to a long long * pointer, then dereference that pointer to get a long long value to format.
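If you want to avoid the aliasing concerns of that cast, a memcpy into a fixed-width integer does the same job; a minimal sketch, assuming a 64-bit double:
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    double d = 3.141592653589793238;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* copy the object representation of d */
    printf("d = %g = 0x%016" PRIX64 "\n", d, bits);
    return 0;
}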
Almost every implementation uses the IEEE 754 format for hardware floating point this century; see the 64-bit IEEE format (aka double) for the layout.
You can find out more about printf formatting at:
http://www.cplusplus.com/reference/cstdio/printf/

Why do I get this output with a C union?

#include <stdio.h>

union U {
    struct {
        int x;
        int y;
    };
    float xy;
};

int main() {
    union U u;
    u.x = 99;
    printf("xy %f\n", u.xy); // output " 0 "
    return 0;
}
I have figured out that it has something to do with how float is stored and read internally. Can someone explain it to me exactly?
Converting comments into an answer.
Printing with %f is not very useful; you should consider %g or %e. With %f, if the value is very small, it will be printed as 0.000000 even when it is not zero. (For example, any value smaller than 0.0000005 will be printed as 0.000000.) You need to read about IEEE 754 at Wikipedia, for example, to find out about how such values are represented.
For example, on a Mac running macOS Sierra 10.12.5 using GCC 7.2.0, printing with:
printf("xy %22.16g\n", u.xy);
produces:
xy 1.387285479681569e-43
The range of normal numbers in a 4-byte float is roughly 10⁻³⁸ to 10⁺³⁸, so a value of 1.387…E-43 from a float is a subnormal value (though well within the range of 8-byte double values). Remember that float values passed to printf() are promoted to double automatically because of the 'default argument promotions'; printf() never actually receives a float value.
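That value is easy to reproduce by hand: a subnormal float equals its 23-bit fraction times 2⁻¹⁴⁹, so a raw bit pattern of 99 decodes to 99 × 2⁻¹⁴⁹ ≈ 1.387 × 10⁻⁴³. A quick sketch to confirm:
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Value of a subnormal float whose raw bits are 99: 99 * 2^-149. */
    printf("%22.16g\n", ldexp(99.0, -149));   /* prints 1.387285479681569e-43 */
    return 0;
}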
The way a float is represented affects the result. See How to represent FLOAT number in memory in C and https://softwareengineering.stackexchange.com/questions/215065/can-anyone-explain-representation-of-float-in-memory for how a float is laid out in memory. You also did not initialize the structure member y; it can hold any value, and it may or may not be used. In your case, the resulting value is very small and you are not printing it to full precision. To see the value of the float xy you need to print more digits. As suggested in a comment here, if I use the statement below in Code::Blocks 16.1 on Windows (which ships the MinGW compiler), I get a value different from 0.000000.
printf("xy %.80f\n",u.xy);
gives me
xy 0.00000000000000000000000000000000000000000013872854796815689002144922874570169700
Yes, you're right, it has to do with the binary representation of a float value, which is defined in the standard document IEEE 754. See this great article by Steve Hollasch for an easy explanation. A float is a 32-bit value, and so is an int, so in your union U, xy overlays the x member of the embedded struct. When you set x to 99, the bits 1100011 (the binary representation of 99) are reinterpreted in xy as the fraction (mantissa) of a float. As others have pointed out, this is a very small value, which may be printed as 0, depending on the printf format specifier.
Guessing from the naming of your union members (x, y, xy), I think you wanted to declare something different, e.g.:
union U
{
    struct
    {
        short x;
        short y;
    };
    float xy;
};
Or:
union U
{
    struct
    {
        int x;
        int y;
    };
    double xy;
};
In those declarations, both the x and y members are mapped onto the xy member.
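As a sketch of what that overlay means in practice (type punning through the union, assuming a C11 compiler for the anonymous struct and a little-endian machine for the exact values shown):
#include <stdio.h>

union U {
    struct {
        short x;
        short y;
    };
    float xy;
};

int main(void) {
    union U u;
    u.xy = 1.0f;   /* IEEE-754 bit pattern 0x3F800000 */
    /* On a little-endian machine this prints x=0x0000 y=0x3F80. */
    printf("x=0x%04hX y=0x%04hX\n", (unsigned short)u.x, (unsigned short)u.y);
    return 0;
}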
The reason is that the value is very close to zero, so the default 6 digits of precision isn't enough to display anything.
Try:
union { int i; float f; } u = {.i= 99};
printf("f %g\n", u.f);

Why does type casting to float from int print "0.0000"?

Below is some code that I wrote to understand typecasting, but I do not understand why the value of the float or double is being printed as "0.000000" even if I type cast from an array of integers or try to read it through a union.
#include <stdio.h>

union test1
{
    int x;
    float y;
    double z;
};

int main()
{
    int x[2] = {200, 300};
    float *f;
    f = (float*)(x);
    printf("%f\n", *f);   /* (1) why is this value 0.0? */
    *f = (float)(x[0]);   /* (2) this works as expected and prints 200.00000 */
    printf("%f\n", *f);
    union test1 u;
    u.x = 200;
    /* (3) this line gives a compilation error, why? */
    //printf ("sizeof(test1) = %d \n", sizeof(test1));
    /* (4) this line also prints the value of float and double as 0.0000, why? */
    printf ("sizeof(test1) = %lu u.x:%d u.y:%f u.z:%lf \n", sizeof(u), u.x, u.y, u.z);
    return 0;
}
First and foremost, int and float are not compatible types.
In your code, by saying
f=(float*)(x);
you're breaking the strict aliasing rule. Any further usage invokes undefined behavior.
To put it in simple words, you cannot just take a pointer to an int, cast it to a float * and dereference that float * to read a float value (or vice versa). To quote the wikipedia article,
[..] pointer arguments in a function are assumed to not alias if they point to fundamentally different types, [...]
For a much detailed description, please see the already linked FAQ answer.
printf("%f\n",*f); /*(1)why is this value 0.0*/
You are taking the address of an int and treating it like it contains a float. That is undefined behavior.
printf("%f\n",*f); /*(1)why is this value 0.0*/
This is effectively undefined behavior, but saying so doesn't explain why you got 0.0 in your particular case. The result could have been anything because it is undefined behavior, but with a given compiler on a given machine the behavior, while not specified, still produces something concrete.
You are reading, as a float, memory that contains the encoding of an int. It is highly probable that the encoding of 200 as an int, read as a float, is a very small value (a denormalized, or subnormal, float) that is printed as 0.0. On my machine, denormalized float numbers are printed as 0.0 with printf's %f (I don't know whether that is standard printf behavior).
For float encoding, read IEEE 754.
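For reference, here is the same bit pattern inspected in a well-defined way (memcpy instead of the aliasing cast, assuming int and float are both 4 bytes): %f still rounds it to 0.000000, while %g reveals the tiny subnormal value.
#include <stdio.h>
#include <string.h>

int main(void) {
    int x = 200;
    float f;
    memcpy(&f, &x, sizeof f);   /* reuse the int's bits as a float, without UB */
    printf("%f\n", f);          /* 0.000000 : too small for six decimal places */
    printf("%g\n", f);          /* ~2.8026e-43 : a subnormal float */
    return 0;
}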

Getting the IEEE Single-precision bits for a float

I need to write an IEEE single-precision floating point number to a 32-bit hardware register at a particular address. To do that, I need to convert a variable of type float to an unsigned integer. I can get the integer representation like this:
float a = 2.39;
unsigned int *target;
printf("a = %f\n",a);
target = &a;
printf("target = %08X\n",*target);
which returns:
a = 2.390000
target = 4018F5C3
All good. However this causes a compiler warning "cast.c:12: warning: assignment from incompatible pointer type"
Is there any other way to do this which doesn't generate the warning? This is for specific hardware, I don't need to handle different endianness etc and I don't want to loop through each char for performance reasons as some other questions tend to suggest. It seems like you might be able to use reinterpret_cast in C++ but I am using C.
You can use type punning with a union,
union {
    float f;
    uint32_t u;
} un;

un.f = your_float;
uint32_t target = un.u;
to get the bits. Or you can use memcpy.
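For the hardware-register use case specifically, the memcpy route also avoids the warning; a sketch, with REG_ADDR standing in for whatever your register address actually is (hypothetical here):
#include <stdint.h>
#include <string.h>

#define REG_ADDR 0x40001000u   /* hypothetical register address, replace with yours */

void write_float_reg(float a) {
    uint32_t bits;
    memcpy(&bits, &a, sizeof bits);          /* grab the IEEE-754 bits of the float */
    *(volatile uint32_t *)REG_ADDR = bits;   /* write them to the 32-bit register */
}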
You could create a union type that contains a float and an unsigned int, store a value into the float member, then read it out of the int member, like so:
union reg_val
{
    float f_val;
    unsigned int i_val;
} myRegister;

myRegister.f_val = 2.39;
printf("target = %08X", myRegister.i_val);
If you're simply trying to display the float's underlying bits as an integer, as it's stored in memory, then try using a union:
union {
    float a;
    unsigned int target;
} u;
Store the float value:
u.a = 2.39;
Print both float and integer values:
printf ("a = %f\n", u.a);
printf ("target = %08X\n", u.target); /* you could use %u to show decimal */
No compiler warnings. I use GNU compiler (gcc) on Linux.
Notice that target is not a pointer; this is the beauty (and hideousness) of unions. ;-)
EDIT: The union solution works everywhere I have tried it, but somewhere on SO I had been pointed at standards showing it didn't have to work. See the link below in the comments to find a LOT more info on this (thank you, Daniel!). Supposed to work or not, I would use it with care; I imagine endianness, etc. gets involved as well (doubles broken into bytes, and so on).
Another solution is a dummy asm function. For example, on ARM:
.globl echo
echo:
    bx lr
and on the C side:
unsigned int echo ( float );
...
unsigned int ra; float f;
f = 1.0;
ra = echo(f);
Some disassembly is required, and this needs to be on a system that doesn't have an FPU and/or uses GPRs for carrying floats around.
memcpy, as already mentioned, is the cleanest, most reliable, and most portable solution (be aware of endianness).
