Get float in HEX format - c

I have a string char str[9]. It contains a number in HEX format; the number is a float, with a sign. How can I write it to a variable of type float?
For example:
char str[9] = "41700000\0";
I need to get this value from it: 15.0

You can pun the data:
Using memcpy
unsigned int u = 0x41700000;
float f;
memcpy(&f, &u, sizeof(f));
printf("%f\n", f);
Using union (IMO legal, many people have opposite opinion)
union un
{
float f;
unsigned u;
};
void foo(unsigned x)
{
union un a = {.u = x};
printf("%f\n", a.f);
}
I assume float and unsigned have the same size.
Of course you will have to convert the string from your question to an unsigned value first, but that is relatively easy (strtoul with base 16, or sscanf with %x; note that atoi cannot parse hex).
PS BTW many compilers will generate exactly the same code for both (without the memcpy call): https://godbolt.org/z/VaCcxS

This is a quick and dirty way to get your 15.0 back from the 4-byte hexadecimal representation. There is too much wrong with it to even start talking about it, though, and the right thing to do would be to ask yourself "why do I need this to begin with, and what could I do better instead".
float hexStrToFloat(const char* str)
{
union { unsigned int i; float f; } tmp;
sscanf_s(str, "%x", &tmp.i); // sscanf_s is Microsoft-specific; use sscanf elsewhere
return tmp.f;
}
Footnote: assumes unsigned int and float are both 32 bits.

Related

Float to Binary in C

I am asked to convert a float number into a 32-bit unsigned integer. I then have to check whether all the bits are zero, but I am having trouble with this. Sorry, I am new to C.
This is what I am doing
float number = 12.5;
// copying number into a 32-bit unsigned int
unsigned int val_number = *((unsigned int*) &number);
At this point I'm very confused about how to check whether all the bits are zero.
I think I need to loop through all the bits, but I don't know how to do that.
To copy the bytes of a 32-bit float to an integer, best to copy to an integer type that is certainly 32 bits; unsigned may be narrower than, the same as, or wider than 32 bits.
#include <stdint.h> // uint32_t
#include <string.h> // memcpy
float number = 12.5;
uint32_t val_number32; // 32-bit type
memcpy(&val_number32, &number, sizeof val_number32);
Avoid the cast-and-assign. It leads to aliasing problems with modern compilers, as @Andrew notes.
"... need cast the addresses of a and b to type (unsigned int *) and then dereference the addresses" reflects a risky programming technique.
To test if the bits of the unsigned integer are all zero, simply test with the constant 0.
int bit_all_zero = (val_number32 == 0);
An alternative is to use a union to access the bytes from 2 different encodings.
union {
float val_f;
uint32_t val_u;
} x = { .val_f = 12.5f };
int bit_all_zero = (x.val_u == 0);
Checking if all the bits are zero is equivalent to checking if the unsigned integer is zero.
So it would be int is_zero = (val_number == 0);
(Comparing the original float instead would also report zero for negative zero, whose sign bit is set.)

Compile time/macro swap of endianess of float in c99

I have some floats (IEEE-754) that I want to initialize. The floats are fetched by another device (automagically) which runs big-endian while I am on little-endian, and I can't change that.
Normally I would just swap with some built-in function, but those are all run-time functions. I'd prefer not to need an init() function just to swap endianness, and it would be great if I could use it for const initializations too.
Something that result in this would be perfect:
#define COMPILE_TIME_SWAPPED_FLOAT(x) (...)
const float a = COMPILE_TIME_SWAPPED_FLOAT(1.0f);
Any great ideas?
OP has another problem with using the "reversed" float as a float.
A local variable of type float cannot reliably hold the byte-reversed bit pattern. Many values in reverse byte order correspond to Not-a-Number (NaN) in the local float, and the assignment may not be stable (bit-pattern preserving). It could be:
// not a good plan
float f1 = some_particular_not_a_number_bit_pattern;
float f2 = some_other_particular_not_a_number_bit_pattern;
Instead, the local "reversed-endian" float should just be a uint32_t, a 4-byte structure/union, or a 4-byte array, initialized in some way from a float.
// Demo of why a reversed `float` can fail:
// the NaN bit that selects signaling vs. quiet NaN isn't necessarily preserved.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
int main(void) {
for (;;) {
union {
int32_t i32;
uint32_t u32;
float f;
} x,y;
x.i32 = rand();
y.f = x.f;
if (x.u32 ^ y.u32) {
// If bit pattern preserved, this should never print
// v-------+---- biased exponent max (NaN)
// |-------|v--- signaling/quiet bit
// On my machine output is always x1111111 1?xxxxxx xxxxxxxx xxxxxxxx
printf("%08x\n", (unsigned) x.u32);
printf("%08x\n\n", (unsigned) y.u32);
}
}
}
Output
7f8181b1
7fc181b1
...
The below uses a compound literal to meet OP's goal. First initialize a union's float member with the desired float. Then extract it byte-by-byte from its uint8_t member (per desired endian) to initialize a new compound literal's uint8_t array member. Then extract the uint32_t. Works for local variables.
#include <stdint.h>
#include <stdio.h>
typedef uint32_t float_reversed;
typedef union {
uint8_t u8[4];
float_reversed u32;
float f;
} endian_f;
#define ENDIAN_FN(_f,_n) ( (endian_f){.f=(_f)}.u8[(_n)] )
#define ENDIAN_F(_f) ((endian_f){ \
ENDIAN_FN(_f,3), ENDIAN_FN(_f,2), \
ENDIAN_FN(_f,1), ENDIAN_FN(_f,0)}.u32)
void print_hexf(void *f) {
for (size_t i=0; i<sizeof(float); i++) {
printf("%02X", ((unsigned char *)f)[i]);
}
puts("");
}
int main(void) {
float f1 = 1.0f;
print_hexf(&f1);
float_reversed f1r = ENDIAN_F(f1);
print_hexf(&f1r);
float_reversed f2r = ENDIAN_F(1.0);
print_hexf(&f2r);
}
Output
0000803F
3F800000
3F800000
I'd say having the preprocessor swap the bytes of some non-byte variable isn't possible.
The preprocessor knows nothing about data types and their byte-level representation.
If the endian reversal code is available to be inlined then any half decent optimizing compiler will work out the reversed value at compile time.
Taking the reversal code from https://stackoverflow.com/a/2782742/2348315 :
inline float ReverseFloat( const float inFloat )
{
float retVal;
char *floatToConvert = ( char* ) & inFloat;
char *returnFloat = ( char* ) & retVal;
// swap the bytes into a temporary buffer
returnFloat[0] = floatToConvert[3];
returnFloat[1] = floatToConvert[2];
returnFloat[2] = floatToConvert[1];
returnFloat[3] = floatToConvert[0];
return retVal;
}
And using it in a way that lets the compiler see all the details:
float reversed10(){
const float reversed = ReverseFloat(10.0f);
return reversed;
}
Compiles to:
reversed10():
vmovss xmm0, DWORD PTR .LC0[rip]
ret
.LC0:
.long 8257
with GCC 7.1 with -O2 enabled.
You can try other compilers over here:
https://godbolt.org/g/rFmJGP

Convert one 32bit float number into two 16bit uint number and then convert back to that 32bit float again

I am working on transferring a 32-bit float from one platform to another, but only 16-bit unsigned int members may be passed to the transfer register. I am thinking that I can therefore split the 32-bit float into two 16-bit halves and then convert back to 32 bits on the other side.
(I am using the C language.)
Like:
float A = 3.14
uint16_t B = A & 0xffff;
uint16_t C = A & 0xffff0000;
float D = C<<16 & B;
Obviously this is not correct, as the float data is converted to an unsigned int when it is assigned. So how is this usually done? There should be some fairly mature methods for doing similar things.
Thanks
You can use a union for this, e.g.:
typedef union {
float f;
uint16_t a[2];
} U;
U u;
u.f = 3.14f;
printf("%g -> %#x %#x\n", u.f, u.a[0], u.a[1]);
LIVE DEMO
Note: strictly speaking this is undefined behaviour in C++, though C permits reading a different union member than the one last written; either way it's such a widely used technique that it is unlikely to fail. Alternatively you can take a safer, but potentially somewhat less efficient, approach and just use memcpy, like this:
float f = 3.14f;
uint16_t a[2];
memcpy(a, &f, sizeof(a));
printf("%g -> %#x %#x\n", f, a[0], a[1]);
LIVE DEMO

Convert ieee 754 float to hex with c - printf

Ideally the following code would take a float in IEEE 754 representation and convert it into hexadecimal
void convert() //gets the float input from user and turns it into hexadecimal
{
float f;
printf("Enter float: ");
scanf("%f", &f);
printf("hex is %x", f);
}
I'm not too sure what's going wrong. It's converting the number into a hexadecimal number, but a very wrong one.
123.1443 gives 40000000
43.3 gives 60000000
8 gives 0
so it's doing something, I'm just not too sure what.
Help would be appreciated
When you pass a float as an argument to a variadic function (like printf()), it is promoted to a double, which is twice as large as a float (at least on most platforms).
One way to get around this would be to cast the float to an unsigned int when passing it as an argument to printf():
printf("hex is %x", *(unsigned int*)&f);
This is also more correct, since printf() uses the format specifiers to determine how large each argument is.
Technically, this solution violates the strict aliasing rule. You can get around this by copying the bytes of the float into an unsigned int and then passing that to printf():
unsigned int ui;
memcpy(&ui, &f, sizeof (ui));
printf("hex is %x", ui);
Both of these solutions are based on the assumption that sizeof(int) == sizeof(float), which is the case on many 32-bit systems, but isn't necessarily the case.
When supported, use %a to convert floating point to a standard hexadecimal format. Here is the only document I could find that listed the %a option.
Otherwise you must pull the bits of the floating point value into an integer type of known size. If you know, for example, that both float and int are 32 bits, you can do a quick cast:
printf( "%08X" , *(unsigned int*)&aFloat );
If you want to be less dependent on size, you can use a union:
union {
float f;
//char c[16]; // make this large enough for any floating point value
char c[sizeof(float)]; // Edit: changed to this
} u;
u.f = aFloat;
for ( i = 0 ; i < sizeof(float) ; ++i ) printf( "%02X" , u.c[i] & 0x00FF );
The order of the loop would depend on the architecture endianness. This example is big endian.
Either way, the floating point format may not be portable to other architectures. The %a option is intended to be.
HEX to Float
I spent quite a long time trying to figure out how to convert HEX input from a serial connection, formatted as an IEEE 754 float, into a float. Now I've got it. Just wanted to share in case it helps somebody else.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
int main(int argc, char *argv[])
{
uint16_t tab_reg[64]; //input values received from the serial connection
union IntFloat { int32_t i; float f; }; //Declare combined datatype for HEX to FLOAT conversion
union IntFloat val;
int i;
int rs; //return value of sprintf
char buff[50]; //Declare buffer for string
i=0;
//tab_reg[i]=0x508C; //to test the code without a data stream,
//tab_reg[i+1]=0x4369; //you may uncomment these two lines.
printf("Raw1: %X\n",tab_reg[i]); //Print raw input values for debug
printf("Raw2: %X\n",tab_reg[i+1]); //Print raw input values for debug
rs = sprintf(buff,"0X%X%X", tab_reg[i+1], tab_reg[i]); //I need to swap the words, as the response is with the opposite endianness.
printf("HEX: %s",buff); //Show the word-swapped string
val.i = atof(buff); //Convert string to a number; relies on C99 atof/strtod accepting hexadecimal floating constants
printf("\nFloat: %f\n", val.f); //show the value in float
}
Output:
Raw1: 508C
Raw2: 436A
HEX: 0X436A508C
Float: 234.314636
This approach has always worked pretty well for me:
union converter{
float f_val;
unsigned int u_val;
};
union converter a;
a.f_val = 123.1443f;
printf("my hex value %x \n", a.u_val);
Stupidly simple example:
unsigned char* floatToHex(float val){
unsigned char* hexVals = malloc(sizeof(float)); // needs <stdlib.h>; caller must free
hexVals[0] = ((unsigned char*)&val)[0];
hexVals[1] = ((unsigned char*)&val)[1];
hexVals[2] = ((unsigned char*)&val)[2];
hexVals[3] = ((unsigned char*)&val)[3];
return hexVals;
}
Pretty obvious solution when I figured it out. No bit masking, memcpy, or other tricks necessary.
In the above example, it was for a specific purpose and I knew floats were 32 bits. A better solution if you're unsure of the system:
unsigned char* floatToHex(float val){
unsigned char* hexVals = malloc(sizeof(float));
for(int i = 0; i < sizeof(float); i++){
hexVals[i] = ((unsigned char*)&val)[i];
}
return hexVals;
}
How about this?
#include <stdio.h>
int main(void){
float f = 28834.38282;
char *x = (char *)&f;
printf("%f = ", f);
for(size_t i=0; i<sizeof(float); i++){
printf("%02X ", *x++ & 0x0000FF);
}
printf("\n");
}
https://github.com/aliemresk/ConvertD2H/blob/master/main.c
Convert Hex to Double
Convert Double to Hex
This code works with the IEEE 754 floating-point format.
What finally worked for me (convoluted as it seems):
#include <stdio.h>
int main(int argc, char** argv)
{
float flt = 1234.56789;
FILE *fout;
fout = fopen("outFileName.txt","w");
fprintf(fout, "%08x\n", *((unsigned int *)&flt));
/* or */
printf("%08x\n", *((unsigned int *)&flt));
fclose(fout);
}

convert int to float to hex

Using scanf, for each number typed in I would like my program to
print out two lines, for example:
byte order: little-endian
> 2
2 0x00000002
2.00 0x40000000
> -2
-2 0xFFFFFFFE
-2.00 0xC0000000
I can get it to print the 2 in hex,
but I also need the float, and of course I can't scanf as one
when I need to scan as an int.
If I cast to a float, I get a zero when I try to printf. If I scan in as a float
I get the correct output. I have tried to convert the int to a
float but it still comes out as zero.
here is my output so far
Int - float - hex
byte order: little-endian
>2
2 0x000002
2.00 00000000
It looks like I am converting to a float fine,
so why won't it print as hex?
If I scan in as a float I get the correct hex representation, as in the first example.
This should be something simple; I do need to scan in as a decimal.
Keep in mind
I am running this in Cygwin.
Here is what I have so far:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[])
{
int HexNumber;
float convert;
printf("Int - float - hex\n");
int a = 0x12345678;
unsigned char *c = (unsigned char*)(&a);
if (*c == 0x78)
{
printf("\nbyte order: little-endian\n");
}
else
{
printf("\nbyte order: big-endian\n");
}
printf("\n>");
scanf("%d", &HexNumber);
printf("\n%10d ",HexNumber);
printf("%#08x",HexNumber);
convert = (float)HexNumber; // converts but prints a zero
printf("\n%10.2f ", convert);
printf("%#08x", convert); // prints zeros
return 0;
}
try this:
int i = 2;
float f = (float)i;
printf("%#08X", *( (int*) &f ));
[EDIT]
@Corey:
let's parse it from the inside out:
&f = the address of f, say 0x5ca1ab1e
(int*)&f = interpret the address 0x5ca1ab1e as an integer pointer
*((int*)&f) = get the integer at address 0x5ca1ab1e
The following is more concise, but it's hard to remember the C language's operator associativity and precedence (I prefer the extra clarity that some added parentheses and whitespace provide):
printf("%#08X", *(int*)&f);
printf("%#08x", convert); // prints zeros
This line is not going to work because you are telling printf that you are passing an int (by using %x) when in fact you are passing a float.
What is your intention with this line? To show the binary representation of the floating point number in hex? If so, you may want to try something like this:
printf("%lx\n", *(unsigned long *)(&convert));
What this line does is take the address of convert (&convert), which is a pointer to float, and cast it to a pointer to unsigned long (note: the type you cast to here may differ depending on the sizes of float and long on your system). The final * dereferences that pointer, yielding the unsigned long that is passed to printf.
Given an int x, converting to float, then printing out the bytes of that float in hex could be done something like this:
void show_as_float(int x) {
float xx = x;
//Edit: note that this really prints the value as a double.
printf("%f\t", xx);
unsigned char *ptr = (unsigned char *)&xx;
for (size_t i = 0; i < sizeof(float); i++)
printf("%2.2x", ptr[i]);
}
The standards (C++ and C99) give "special dispensation" for unsigned char, so it's safe to use them to view the bytes of any object. C89/90 didn't guarantee that, but it was reasonably portable nonetheless.
