Displaying floating point variable as a hex integer screws up neighbouring integer - c

I have this simple program:
#include <stdio.h>

int main(void)
{
    unsigned int a = 0x120;
    float b = 1.2;
    printf("%X %X\n", b, a);
    return 0;
}
I expected the output to be
some-value 120 (some-value will depend on the bit pattern of `float b`)
But I see
40000000 3FF33333
Why is the value of a getting screwed up? %X treats its argument as an unsigned int, so it should have retrieved 4 bytes from the stack and printed the value of b, and then fetched the next 4 bytes and printed the value of a, which is 0x120.

Firstly, it's undefined behaviour to pass printf arguments that don't match the format specifiers.
Secondly, the float is promoted to double when passed to printf, so it's eight bytes instead of four. Which bytes get interpreted as the two unsigned values expected by the format depends on the order in which the arguments are pushed. On a little-endian 32-bit x86, for example, 1.2f promoted to double is 0x3FF3333340000000, and the two %X conversions pick up its low word (40000000) and high word (3FF33333); a is never read at all.

If you want to see the bits of a stored float, use a union:
float b = 1.2;
union {
    float f;
    unsigned int i;   /* unsigned to match the %x conversion */
} u;
u.f = b;
printf("%x\n", u.i);
results (32-bit x86):
3f99999a
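If you also need the original two-value printf to be well-defined, a minimal sketch (assuming <stdint.h> and <inttypes.h> are available) is to memcpy the float into a fixed-width integer first and use matching specifiers:

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    unsigned int a = 0x120;
    float b = 1.2f;

    uint32_t fbits;
    memcpy(&fbits, &b, sizeof fbits);      /* copy the bit pattern, no conversion */

    printf("%" PRIX32 " %X\n", fbits, a);  /* prints: 3F99999A 120 */
    return 0;
}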


Why is the length of address of variable only 6 digits in hex code although my machine is 64 bit?

When I printed the address of a variable in C to standard output in hexadecimal format, I was surprised to get a 6-digit answer. Since I work on a 64-bit machine, I expected the address to be 16 digits long, since 2^64 = 16^16, but instead the address was just 6 digits long.
Code for reference:
#include <stdio.h>

int square();
int x;

int main() {
    scanf("%d", &x);
    printf("%d\n", square());
    printf("\naddress of x is %x", &x);
    return 0;
}

int square() {
    return x * x;
}
The output was:
address of x is 407970
First of all, printf("\naddress of x is %x", &x); is incorrect: %x expects an unsigned int, not a pointer.
The correct format specifier to print an address is %p:
printf("\naddress of x is %p", &x);
// To be more precise, cast to void*; see @Gerhardh's comment below
printf("\naddress of x is %p", (void*)&x);
Why is the length of address of variable only 6 digits in hex code although my machine is 64 bit?
unsigned long x = 10;
printf("%lu", x); //prints 10
x is 10 in the above example. Does that mean x is now only 8 bits wide? No. Similarly, there is a range of addresses starting from 0, and an address doesn't always need all 64 bits (16 hex digits) to be represented; the leading zeros simply aren't printed. You can think of your address as 0x0000000000407970.
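If you do want the full-width, 16-digit form, here is a minimal sketch (assuming <inttypes.h> and uintptr_t are available) that zero-pads the address:

#include <stdio.h>
#include <inttypes.h>

int x;

int main() {
    /* convert through void * and uintptr_t, then force a 16-digit field */
    printf("address of x is 0x%016" PRIxPTR "\n", (uintptr_t)(void *)&x);
    return 0;
}

This would print something like 0x0000000000407970.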
Side note: the following will have undefined behaviour if the value of x is large, because signed integer overflow is undefined in C. Just enter 123123 as input and see what happens:
int square() {
    return x * x;
}
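A sketch of one way to avoid that overflow: widen before multiplying (using long long is my suggestion, not part of the question):

long long square_wide(int v)
{
    /* 64-bit multiplication: cannot overflow for any 32-bit int input */
    return (long long)v * v;
}

The caller would then print the result with %lld instead of %d.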

Error on casting unsigned int to float

For the following program:
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
Strangely, the output is
d = 4294965248.000000
When I change the magic number 3 in the expression for d to 3.0, I get the correct result:
d = 2000.000000
If I change the type of a, b, and c to int, I also get the correct result.
I guess this error comes from the conversion from unsigned int to float, but I do not know the details of how the strange result is produced.
Note that you are applying unary minus to an unsigned int before the assignment to float. If you run the code below, you will most likely get 4294965296:
#include <stdio.h>

int main()
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    printf("%u", -((a*b)*(c/3)));
    return 0;
}
The -2000 to the right of your equals sign is set up as a signed
integer (probably 32 bits in size) and will have the hexadecimal value
0xFFFFF830. The compiler generates code to move this signed integer
into your unsigned integer x which is also a 32 bit entity. The
compiler assumes you only have a positive value to the right of the
equals sign so it simply moves all 32 bits into x. x now has the
value 0xFFFFF830 which is 4294965296 if interpreted as a positive
number. But the printf format of %d says the 32 bits are to be
interpreted as a signed integer so you get -2000. If you had used
%u it would have printed as 4294965296.
#include <stdio.h>

int main()
{
    float d = 4294965296.0f;   /* 2^32 - 2000, not exactly representable in float */
    printf("d = %f\n\n", d);
    return 0;
}
When you convert 4294965296 to float, the number is too long to fit into the significand, so some precision is lost. Because of that loss, you get 4294965248.000000, as I did.
The IEEE-754 floating-point standard is a standard for representing
and manipulating floating-point quantities that is followed by all
modern computer systems.
bit  31  30......23  22....................0
      S  EEEEEEEE    MMMMMMMMMMMMMMMMMMMMMMM
The bit numbers are counted from the least-significant bit. The first
bit is the sign (0 for positive, 1 for negative). The following
8 bits are the exponent in excess-127 binary notation; this
means that the binary pattern 01111111 = 127 represents an exponent
of 0, 10000000 = 128 represents 1, 01111110 = 126 represents
-1, and so forth. The mantissa fits in the remaining 23 bits, with
its leading 1 stripped off as described above. Source
As you can see, converting 4294965296 to float drops the low bits 00110000, because the float's significand keeps only the top 24 bits:
1111 1111 1111 1111 1111 1000 0011 0000 <-- 4294965296
1111 1111 1111 1111 1111 1000 0000 0000 <-- 4294965248
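You can observe the 256-wide spacing of floats in this range directly; a small sketch (nextafterf is standard, from <math.h>):

#include <stdio.h>
#include <math.h>

int main(void)
{
    float d = 4294965296.0f;                  /* rounds to 4294965248 */
    printf("%.1f\n", d);                      /* 4294965248.0 */
    printf("%.1f\n", nextafterf(d, 0.0f));    /* 4294964992.0, exactly 256 lower */
    return 0;
}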
This is because you use - on an unsigned int. On an unsigned type, - does not simply invert the bits; it computes the two's complement, i.e. the value modulo 2^32. Let's print some unsigned integers:
printf("Positive: %u\n", 2000);
printf("Negative: %u\n", -2000);
// Output:
// Positive: 2000
// Negative: 4294965296
Let's print the hex values:
printf("Positive: %x\n", 2000);
printf("Negative: %x\n", -2000);
// Output
// Positive: 7d0
// Negative: fffff830
As you can see, fffff830 is the two's complement of 7d0 (all bits inverted, plus one). So the problem comes from using - on an unsigned int, not from casting unsigned int to float.
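To see that it is two's complement rather than plain inversion, compare ~ and unary - (a quick sketch):

#include <stdio.h>

int main(void)
{
    printf("~2000u = %X\n", ~2000u);   /* FFFFF82F: bits inverted */
    printf("-2000u = %X\n", -2000u);   /* FFFFF830: bits inverted, plus one */
    return 0;
}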
As others have said, the issue is that you are trying to negate an unsigned number. Most of the solutions already given have you cast to float so that the arithmetic is done on floating-point types. An alternative is to cast the result of your arithmetic to int and then negate; that way, the arithmetic operations are done on integral types, which may or may not be preferable depending on your actual use case:
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    unsigned int b = 20;
    unsigned int c = 30;
    float d = -(int)((a*b)*(c/3));
    printf("d = %f\n", d);
    return 0;
}
Your whole calculation is done unsigned, so it is the same as
float d = -(2000u);
-2000 in a 32-bit unsigned int is 4294965296 (that is, 2^32 - 2000).
This gets written into your float d, but since a float cannot store this exact number, it is saved as 4294965248.
As a rule of thumb, you can say that float has a precision of about 7 significant decimal digits.
What is calculated is 2^32 - 2000, and then floating-point precision does the rest.
If you instead use 3.0 this changes the types in your calculation as follows
float d = -((a*b)*(c/3.0));
float d = -((unsigned*unsigned)*(unsigned/double));
float d = -((unsigned)*(double));
float d = -(double);
leaving you with the correct negative value.
You need to cast the operands to float: change
float d = -((a*b)*(c/3));
to
float d = -(((float)a*(float)b)*((float)c/3.0));
-((a*b)*(c/3)); is performed entirely in unsigned integer arithmetic, including the unary negation. Unary negation is well-defined for an unsigned type: mathematically the result is reduced modulo 2^N, where N is the number of bits in unsigned int, giving 2^32 - 2000 = 4294965296 here. When you assign that large number to the float, you lose precision: a float's significand holds only 24 bits, so values between 2^31 and 2^32 are rounded to a multiple of 2^8 = 256, and the nearest such value is 4294965248.
If you change 3 to 3.0, then c / 3.0 has type double, and the result of a * b is therefore converted to double before being multiplied. This double is then assigned to a float, with the precision loss already discussed.

Could copy unsigned int bit values as float but float value not returned to the caller function correctly

In the code below, I have the correct bits (it was originally a bits<float> type in a C++ program, but I just used uint32 in this C program). I want to use the bits as an IEEE 754 float value. Assigning just float_var = int_val won't do it, because that converts the value to float rather than reinterpreting the bits. I want to use the bit values themselves as a floating-point value.
uint32 bits = mantissa_table[offset_table[value>>10]+(value&0x3FF)] + exponent_table[value>>10];
ab_printf("bits = %x\n", bits);
float out;
//memcpy(&out, &bits, sizeof(float)); // original
char *outp = (char *)&out;
char *bitsp = (char *)&bits;
outp[0] = bitsp[0];
outp[1] = bitsp[1];
outp[2] = bitsp[2];
outp[3] = bitsp[3];
ab_printf("out = %x\n", out); // note: %x with a float argument is itself undefined behaviour
return out;
Part of the program's run result:
ff = 3.140000
hh = 4248
bits = 40490000
out = 40092000
There must be something basic I don't know.
For your information, the run above converts float 3.14 to half precision and back to single precision, and I printed the intermediate values. 0x4248 is 3.140625 in half precision, and the bits 0x40490000 are also 3.140625 in single precision, so I just need to return that as a float.
ADD: After reading the comments and answers, I did some experiments and found that the single-precision float value looks correct inside the function (whether type punning through a pointer or through a union), but when it is returned to the calling function it is not printed correctly. Methods 0 through 3 all fail. Making the function inline or not makes no difference. There may be another fault in our system (an embedded, bare-metal target), but I hope somebody can tell me what might be wrong here. (I am using part of a C++ program in a C program.) (ldexp and ldexpf didn't work.)
== half.h ==
typedef unsigned short uint16;
typedef unsigned short half;
extern uint16 float2half_impl(float value);
extern float half2float_impl(half value);
== test4.c ==
#include "half.h"

int main()
{
    float vflt = 3.14;
    half vhlf;
    float vflt2;
    ab_printf("vflt = %f\n", vflt);
    vhlf = float2half_impl(vflt);
    ab_printf("vhlf = %x\n", *(unsigned short *)&vhlf);
    vflt2 = half2float_impl(vhlf);   /* was: float vflt2 = ..., which redeclares vflt2 */
    ab_printf("received : vflt2 = %f\n", vflt2);
    return 0;
}
== half.c ==
#include "half.h"
#include <string.h>   /* for memcpy (METHOD 0) */
....
inline float half2float_impl(uint16 value)
{
    //typedef bits<float>::type uint32;
    typedef unsigned int uint32;
    static const uint32 mantissa_table[2048] = {
    ....
    uint32 bits = mantissa_table[offset_table[value>>10]+(value&0x3FF)] + exponent_table[value>>10];
    ab_printf("bits = %x\n", bits);
    float out;
#define METHOD 3
#if METHOD == 0
    memcpy(&out, &bits, sizeof(float));
    return out;
#elif METHOD == 1
#warning METHOD 1
    ab_printf("xx = %f\n", *(float *)&bits); // prints 3.140625
    return bits;
#elif METHOD == 2 // prints the float ok, but the returned float prints wrong
#warning METHOD 2
    union {
        unsigned int ui;
        float xx;
    } aa;
    aa.ui = bits;
    ab_printf("xx = %f\n", aa.xx); // prints 3.140625
    return (float)aa.xx; // but the returned value prints wrong
#elif METHOD == 3 // prints the float ok, but the returned float prints wrong
#warning METHOD 3
    ab_printf("xx = %f\n", *(float *)&bits); // prints 3.140625
    return *(float *)&bits; // but the returned value prints wrong
#else
#warning returning 0
    return 0;
#endif
}
How about using a union?
union uint32_float_union
{
    uint32_t i;
    float f;
};
Then you can do something like
union uint32_float_union int_to_float;
int_to_float.i = bits;
printf("float value = %f\n", int_to_float.f);
Using unions for type punning is explicitly allowed by the C specification.
The memcpy way you have commented out should work too; copying through memcpy does not break strict aliasing, which is exactly why it is the usual idiom. You could also use a byte buffer as an intermediate:
char buffer[sizeof(float)];
memcpy(buffer, &bits, sizeof(float));
float value;
memcpy(&value, buffer, sizeof(float));
Of course, all this requires that the value in bits actually corresponds to a valid float value (including correct endianness).
This:
out = *(float *)&bits;
Allows you to read bits as a float without any explicit or implicit conversion, by reinterpreting the pointer. (Strictly speaking, this cast itself violates the strict aliasing rule, but it is a widespread idiom.)
Notice, however, that endianness might trip you up here, just as it would with memcpy(); so if memcpy() worked for you, this method should too, but keep in mind that the byte order can change from architecture to architecture.
If you can be sure that the value bits of a uint32_t contain exactly the bit pattern of an IEEE 754 binary32, you can "construct" your float number with the ldexp() function, without requiring that your uint32_t contain no padding or that your float actually conform to IEEE 754; in other words, quite portably.
Here's a little example... note it doesn't support subnormal numbers, NaN and inf; adding them is some work but can be done:
#include <stdint.h>
#include <math.h>

// read an IEEE 754 binary32 representation into a float
float toFloat(uint32_t bits)
{
    // subtract the exponent bias (0x7f) plus the number of fraction bits (0x17)
    int16_t exp = (bits >> 23 & 0xff) - 0x96;
    int32_t sig = (bits & UINT32_C(0x7fffff)) | UINT32_C(0x800000);
    if (bits & UINT32_C(0x80000000)) sig *= -1;
    return ldexp(sig, exp);
}
(you could do something similar to create a float from an uint16_t containing a half precision representation, just adapt the constants for selecting the correct bits)
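A quick sanity check of toFloat (a hypothetical main, reusing the bit pattern from the question above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* assumes toFloat from the snippet above is in scope */
    printf("%f\n", toFloat(UINT32_C(0x40490000)));   /* prints 3.140625 */
    return 0;
}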

Convert ieee 754 float to hex with c - printf

Ideally, the following code would take a float in IEEE 754 representation and convert it into hexadecimal:
void convert() // gets the float input from the user and turns it into hexadecimal
{
    float f;
    printf("Enter float: ");
    scanf("%f", &f);
    printf("hex is %x", f);
}
I'm not too sure what's going wrong. It's converting the number into a hexadecimal number, but a very wrong one.
123.1443 gives 40000000
43.3 gives 60000000
8 gives 0
so it's doing something, I'm just not too sure what.
Help would be appreciated
When you pass a float as an argument to a variadic function (like printf()), it is promoted to a double, which is twice as large as a float (at least on most platforms).
One way to get around this would be to cast the float to an unsigned int when passing it as an argument to printf():
printf("hex is %x", *(unsigned int*)&f);
This is also more correct, since printf() uses the format specifiers to determine how large each argument is.
Technically, this solution violates the strict aliasing rule. You can get around this by copying the bytes of the float into an unsigned int and then passing that to printf():
unsigned int ui;
memcpy(&ui, &f, sizeof (ui));
printf("hex is %x", ui);
Both of these solutions are based on the assumption that sizeof(int) == sizeof(float), which is the case on many 32-bit systems, but isn't necessarily the case.
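If you want that size assumption checked at compile time rather than silently relied upon, C11's _Static_assert can express it (a sketch):

/* fails to compile on platforms where the sizes differ */
_Static_assert(sizeof(unsigned int) == sizeof(float),
               "this code assumes unsigned int and float have the same size");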
When supported, use %a to print floating point in a standard hexadecimal format; it has been part of standard printf since C99, though documentation for it is sparse.
Otherwise you must pull the bits of the floating point value into an integer type of known size. If you know, for example, that both float and int are 32 bits, you can do a quick cast:
printf( "%08X" , *(unsigned int*)&aFloat );
If you want to be less dependent on size, you can use a union:
union {
    float f;
    //char c[16]; // make this large enough for any floating point value
    char c[sizeof(float)]; // Edit: changed to this
} u;
u.f = aFloat;
for (int i = 0; i < sizeof(float); ++i)
    printf("%02X", u.c[i] & 0x00FF);
The order of the loop would depend on the architecture endianness. This example is big endian.
Either way, the floating point format may not be portable to other architectures. The %a option is intended to be.
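For example, a short sketch of %a output (the exact digits depend on the value; shown here for 123.1443f):

#include <stdio.h>

int main(void)
{
    float f = 123.1443f;
    printf("%a\n", f);   /* prints something like 0x1.ec93c4p+6 */
    return 0;
}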
HEX to Float
I spent quite a long time trying to figure out how to convert a HEX input, received over a serial connection and formatted as an IEEE 754 float, into a float. Now I've got it. Just wanted to share in case it helps somebody else.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
    uint16_t tab_reg[64];                   // input values received from the serial connection
    union IntFloat { int32_t i; float f; }; // combined datatype for HEX to FLOAT conversion
    union IntFloat val;
    int i;
    int rs;
    char buff[50];                          // buffer for the string

    i = 0;
    //tab_reg[i] = 0x508C;                  // to test the code without a data stream,
    //tab_reg[i+1] = 0x4369;                // you may uncomment these two lines.
    printf("Raw1: %X\n", tab_reg[i]);       // print raw input values for debugging
    printf("Raw2: %X\n", tab_reg[i+1]);     // print raw input values for debugging
    rs = sprintf(buff, "0X%X%X", tab_reg[i+1], tab_reg[i]); // swap the words: the response has the opposite endianness
    printf("HEX: %s", buff);                // show the word-swapped string
    val.i = atof(buff);                     // strtod/atof accept a hex significand, so this parses the
                                            // bit pattern as a number; storing it in int32_t keeps the bits
    printf("\nFloat: %f\n", val.f);         // show the value as a float
    return 0;
}
Output:
Raw1: 508C
Raw2: 436A
HEX: 0X436A508C
Float: 234.314636
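A more direct variant that skips the string round-trip; a sketch where words_to_float is a hypothetical helper, and the word order may need swapping for your device:

#include <stdint.h>
#include <string.h>

float words_to_float(uint16_t lo, uint16_t hi)
{
    uint32_t bits = ((uint32_t)hi << 16) | lo;   /* combine the two 16-bit registers */
    float f;
    memcpy(&f, &bits, sizeof f);                 /* reinterpret the bits as a float */
    return f;
}

With the values above, words_to_float(0x508C, 0x436A) yields 234.314636.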
This approach has always worked pretty well for me:
union converter {
    float f_val;
    unsigned int u_val;
};

union converter a;
a.f_val = 123.1443f;
printf("my hex value %x \n", a.u_val);
Stupidly simple example:
#include <stdlib.h>

unsigned char* floatToHex(float val) {
    unsigned char* hexVals = malloc(sizeof(float));
    hexVals[0] = ((unsigned char*)&val)[0];
    hexVals[1] = ((unsigned char*)&val)[1];
    hexVals[2] = ((unsigned char*)&val)[2];
    hexVals[3] = ((unsigned char*)&val)[3];
    return hexVals;
}
Pretty obvious solution when I figured it out. No bit masking, memcpy, or other tricks necessary.
In the above example, it was for a specific purpose and I knew floats were 32 bits. A better solution if you're unsure of the system:
unsigned char* floatToHex(float val) {
    unsigned char* hexVals = malloc(sizeof(float));
    for (int i = 0; i < sizeof(float); i++) {
        hexVals[i] = ((unsigned char*)&val)[i];
    }
    return hexVals;
}
How about this?
#include <stdio.h>

int main(void) {
    float f = 28834.38282;
    char *x = (char *)&f;
    printf("%f = ", f);
    for (int i = 0; i < sizeof(float); i++) {
        printf("%02X ", *x++ & 0x0000FF);
    }
    printf("\n");
    return 0;
}
https://github.com/aliemresk/ConvertD2H/blob/master/main.c
Convert Hex to Double
Convert Double to Hex
This code works with the IEEE 754 floating-point format.
What finally worked for me (convoluted as it seems):
#include <stdio.h>

int main(int argc, char** argv)
{
    float flt = 1234.56789;
    FILE *fout = fopen("outFileName.txt", "w");
    // unsigned int matches %08x; the original unsigned long can be 8 bytes on 64-bit systems
    fprintf(fout, "%08x\n", *((unsigned int *)&flt));
    /* or */
    printf("%08x\n", *((unsigned int *)&flt));
    fclose(fout);
    return 0;
}

convert int to float to hex

Using scanf, for each number typed in, I would like my program to print out two lines, for example:
byte order: little-endian
> 2
2 0x00000002
2.00 0x40000000
> -2
-2 0xFFFFFFFE
-2.00 0xC0000000
I can get it to print the 2 in hex, but I also need a float, and of course I can't scanf as one when I need to scan as an int. If I cast to float when I printf, I get a zero. If I scan in as a float, I get the correct output. I have tried converting the int to a float, but it still comes out as zero.
Here is my output so far:
Int - float - hex
byte order: little-endian
>2
2 0x000002
2.00 00000000
It looks like I am converting to a float fine, so why won't it print as hex? If I scan in as a float, I get the correct hex representation, like the first example. This should be something simple. Keep in mind that I do need to scan in as a decimal, and I am running this in Cygwin.
Here is what I have so far:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int HexNumber;
    float convert;

    printf("Int - float - hex\n");

    int a = 0x12345678;
    unsigned char *c = (unsigned char*)(&a);
    if (*c == 0x78)
    {
        printf("\nbyte order: little-endian\n");
    }
    else
    {
        printf("\nbyte order: big-endian\n");
    }

    printf("\n>");
    scanf("%d", &HexNumber);
    printf("\n%10d ", HexNumber);
    printf("%#08x", HexNumber);

    convert = (float)HexNumber; // converts, but prints a zero
    printf("\n%10.2f ", convert);
    printf("%#08x", convert);   // prints zeros
    return 0;
}
try this:
int i = 2;
float f = (float)i;
printf("%#08X", *( (int*) &f ));
[EDIT]
@Corey:
let's parse it from inside out:
&f          = the address of f, say address 0x5ca1ab1e
(int*)&f    = interpret the address 0x5ca1ab1e as an integer pointer
*((int*)&f) = get the integer at address 0x5ca1ab1e
The following is more concise, but it's hard to remember C's operator precedence and associativity (I prefer the extra clarity that some added parentheses and whitespace provide):
printf("%#08X", *(int*)&f);
printf("%#08x", convert); // prints zeros
This line is not going to work, because you are telling printf that you are passing an int (by using %x) when in fact you are passing it a float.
What is your intention with this line? To show the binary representation of the floating point number in hex? If so, you may want to try something like this:
printf("%lx\n", *(unsigned long *)(&convert));
This line takes the address of convert (&convert), which is a pointer to float, and casts it to a pointer to unsigned long (note: the type you cast to may need to differ depending on the sizes of float and long on your system). The final * dereferences that pointer, and the resulting unsigned long is passed to printf.
Given an int x, converting to float, then printing out the bytes of that float in hex could be done something like this:
void show_as_float(int x) {
    float xx = x;
    // Edit: note that %f actually prints the value as a double.
    printf("%f\t", xx);
    unsigned char *ptr = (unsigned char *)&xx;
    for (int i = 0; i < sizeof(float); i++)
        printf("%2.2x", ptr[i]);
}
The standards (C++ and C99) give "special dispensation" for unsigned char, so it's safe to use them to view the bytes of any object. C89/90 didn't guarantee that, but it was reasonably portable nonetheless.
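That dispensation makes a generic byte dumper safe; a sketch (dump_bytes is my own name, not from the answers above):

#include <stdio.h>

/* print the bytes of any object, in storage order */
static void dump_bytes(const void *p, size_t n)
{
    const unsigned char *b = p;
    for (size_t i = 0; i < n; i++)
        printf("%02x", b[i]);
    printf("\n");
}

Usage: double d = 1.2; dump_bytes(&d, sizeof d); prints 333333333333f33f on a little-endian machine.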
