I have some IEEE-754 floats that I want to initialize. The floats are fetched by another device (automagically) which runs big endian, while I am running little endian, and I can't change that.
Normally I would just swap with some built-in function, but those are all run-time functions. I'd prefer not to have an init() function just to swap endianness, and it would be great if I could use it for const initializations too.
Something that result in this would be perfect:
#define COMPILE_TIME_SWAPPED_FLOAT(x) (...)
const float a = COMPILE_TIME_SWAPPED_FLOAT(1.0f);
Any great ideas?
Compile-time/macro swap of endianness of a float in C99
OP has another problem with using the "reversed" float as a float.
A local variable of type float holding the reversed-endian bit pattern cannot reliably be initialized as a float. Many values, once their bytes are reversed, correspond to a Not-a-Number (NaN) in the local float, and the assignment need not be stable (bit-pattern preserving). It could be:
// not a good plan
float f1 = some_particular_not_a_number_bit_pattern;
float f2 = some_other_particular_not_a_number_bit_pattern;
Instead, the local "reversed" endian float should just be a uint32_t, a 4-byte structure/union, or a 4-byte array, initialized in some way from a float.
// Demo of why a reversed `float` can fail.
// The NaN bit that distinguishes signaling from quiet NaN is not certainly preserved.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    for (;;) {
        union {
            int32_t i32;
            uint32_t u32;
            float f;
        } x, y;
        x.i32 = rand();
        y.f = x.f;
        if (x.u32 ^ y.u32) {
            // If the bit pattern were preserved, this would never print.
            // v-------+---- biased exponent max (NaN)
            // |-------|v--- signaling/quiet bit
            // On my machine the output is always x1111111 1?xxxxxx xxxxxxxx xxxxxxxx
            printf("%08x\n", (unsigned) x.u32);
            printf("%08x\n\n", (unsigned) y.u32);
        }
    }
}
Output
7f8181b1
7fc181b1
...
The below uses compound literals to meet OP's goal: first initialize a union's float member with the desired float, then extract it byte by byte from its uint8_t member (in the desired endian order) to initialize a new compound literal's uint8_t array member, and finally read out the uint32_t. This works for local variables.
#include <stdint.h>
#include <stdio.h>
typedef uint32_t float_reversed;
typedef union {
uint8_t u8[4];
float_reversed u32;
float f;
} endian_f;
#define ENDIAN_FN(_f,_n) ( (endian_f){.f=(_f)}.u8[(_n)] )
#define ENDIAN_F(_f) ((endian_f){ \
ENDIAN_FN(_f,3), ENDIAN_FN(_f,2), \
ENDIAN_FN(_f,1), ENDIAN_FN(_f,0)}.u32)
void print_hexf(const void *f) {
    for (size_t i = 0; i < sizeof(float); i++) {
        printf("%02X", ((const unsigned char *)f)[i]);
    }
    puts("");
}
int main(void) {
float f1 = 1.0f;
print_hexf(&f1);
float_reversed f1r = ENDIAN_F(f1);
print_hexf(&f1r);
float_reversed f2r = ENDIAN_F(1.0);
print_hexf(&f2r);
}
Output
0000803F
3F800000
3F800000
I'd say having the preprocessor swap the bytes of a non-byte variable isn't possible: the preprocessor knows nothing about data types and their byte-level representation.
If the endian reversal code is available to be inlined then any half decent optimizing compiler will work out the reversed value at compile time.
Taking the reversal code from https://stackoverflow.com/a/2782742/2348315 :
inline float ReverseFloat( const float inFloat )
{
float retVal;
char *floatToConvert = ( char* ) & inFloat;
char *returnFloat = ( char* ) & retVal;
// swap the bytes into a temporary buffer
returnFloat[0] = floatToConvert[3];
returnFloat[1] = floatToConvert[2];
returnFloat[2] = floatToConvert[1];
returnFloat[3] = floatToConvert[0];
return retVal;
}
And using it in a way that lets the compiler see all the details:
float reversed10(){
const float reversed = ReverseFloat(10.0f);
return reversed;
}
Compiles to:
reversed10():
vmovss xmm0, DWORD PTR .LC0[rip]
ret
.LC0:
.long 8257
with GCC 7.1 with -O2 enabled.
You can try other compilers over here:
https://godbolt.org/g/rFmJGP
Related
I am asked to convert a float number into a 32-bit unsigned integer. I then have to check if all the bits are zero, but I am having trouble with this. Sorry, I am new to C.
This is what I am doing
float number = 12.5;
// copying number into a 32-bit unsigned int
unsigned int val_number = *((unsigned int*) &number);
At this point I'm very confused on how to check if all bits are zero.
I think I need to loop through all the bits but I don't know how to do that.
To copy the bytes of a 32-bit float to an integer, it is best to copy to an integer type that is certainly 32 bits wide; plain unsigned may be narrower than, the same as, or wider than 32 bits.
#include <stdint.h>
#include <string.h>
float number = 12.5;
uint32_t val_number32; // 32-bit type
memcpy(&val_number32, &number, sizeof val_number32);
Avoid the cast-and-assign: it leads to aliasing problems with modern compilers, as @Andrew notes.
"... need cast the addresses of a and b to type (unsigned int *) and then dereference the addresses" reflects a risky programming technique.
To test if the bits of the unsigned integer are all zero, simply test with the constant 0.
int bit_all_zero = (val_number32 == 0);
An alternative is to use a union to access the bytes from 2 different encodings.
union {
float val_f;
uint32_t val_u;
} x = { .val_f = 12.5f };
int bit_all_zero = (x.val_u == 0);
Checking if all the bits are zero is equivalent to checking if the integer copy is zero. (Comparing the float itself to 0 is almost the same, except that negative zero compares equal to 0 even though its sign bit is set.)
So it would be int is_zero = (val_number == 0);
I have the string char str[8] containing a number in hex format. It encodes a float, sign included. How can I write it to a variable of type float?
For example:
char str[9] = "41700000";
From this I need to get the value 15.0.
You can pun the data:
Using memcpy
unsigned int u = 0x41700000;
float f;
memcpy(&f, &u, sizeof(f));
printf("%f\n", f);
Using a union (legal in C, where the standard explicitly permits type punning through unions; many people consider it illegitimate, particularly in C++)
union un
{
float f;
unsigned u;
};
void foo(unsigned x)
{
union un a = {.u = x};
printf("%f\n", a.f);
}
I assume float and unsigned have the same size.
Of course you will have to convert the string from your question to the unsigned value first, but that is relatively easy (sscanf with %x, strtoul with base 16, ...)
PS BTW many compilers will generate exactly the same code for both (without the memcpy call) https://godbolt.org/z/VaCcxS
This is a quick and dirty way to get your 15.0 back from the 4-byte hexadecimal representation. There is too much wrong with it to even start talking about it, though; the right thing to do would be to ask yourself "why do I need this to begin with, and what could I do better instead?"
float hexStrToFloat(const char* str)
{
    union { unsigned int i; float f; } tmp;
    sscanf(str, "%x", &tmp.i); // sscanf_s on MSVC
    return tmp.f;
}
Footnote: assumes unsigned int and float are both 32 bits.
In the below code, I have bits correct (it was originally bits<float> type in C++ program, but I just used uint32 in this C program.). I want to use the bits as the ieee754 float value. Assigning just float_var = int_val won't do it because it interprets the value and casts to float. I want to just use the bit values as floating point values.
uint32 bits = mantissa_table[offset_table[value>>10]+(value&0x3FF)] + exponent_table[value>>10];
ab_printf("bits = %x\n", bits);
float out;
//memcpy(&out, &bits, sizeof(float)); // original
char *outp = (char *)&out;
char *bitsp = (char *)&bits;
outp[0] = bitsp[0];
outp[1] = bitsp[1];
outp[2] = bitsp[2];
outp[3] = bitsp[3];
ab_printf("out = %x\n", out);
return out;
Part of the program's output:
ff = 3.140000
hh = 4248
bits = 40490000
out = 40092000
There must be something basic I don't know.
For your information, above run is turning float 3.14 to half-precision and back to single precision and I printed the intermediate values. 0x4248 is in half-precision 3.140625 and bits 0x40490000 is in single-precision also 3.140625, so I just need to return it as float.
ADD: After reading the comments and answers, I did some experiments and found that the single-precision value is seen correctly inside the function (whether via pointer type punning or via a union), but when it is returned to the calling function it does not print correctly. Methods 0 through 3 all fail, and inline or not makes no difference. There may be another fault in our system (an embedded, bare-metal target), but I hope somebody can tell me what might be wrong here. (I am using part of a C++ program in a C program.) (ldexp and ldexpf didn't work.)
== half.h ==
typedef unsigned short uint16;
typedef unsigned short half;
extern uint16 float2half_impl(float value);
extern float half2float_impl(half value);
== test4.c ==
#include "half.h"
int main()
{
float vflt = 3.14;
half vhlf;
float vflt2;
ab_printf("vflt = %f\n", vflt);
vhlf = float2half_impl(vflt);
ab_printf("vhlf = %x\n", *(unsigned short *)&vhlf);
vflt2 = half2float_impl(vhlf);
ab_printf("received : vflt2 = %f\n", vflt2);
}
== half.c ==
#include "half.h"
....
inline float half2float_impl(uint16 value)
{
//typedef bits<float>::type uint32;
typedef unsigned int uint32;
static const uint32 mantissa_table[2048] = {
....
uint32 bits = mantissa_table[offset_table[value>>10]+(value&0x3FF)] + exponent_table[value>>10];
ab_printf("bits = %x\n", bits);
float out;
#define METHOD 3
#if METHOD == 0
memcpy(&out, &bits, sizeof(float));
return out;
#elif METHOD == 1
#warning METHOD 1
ab_printf("xx = %f\n", *(float *)&bits); // prints 3.140625
return bits;
#elif METHOD == 2 // prints float ok but return value float prints wrong
#warning METHOD 2
union {
unsigned int ui;
float xx;
} aa;
aa.ui = bits;
ab_printf("xx = %f\n", aa.xx); // prints 3.140625
return (float)aa.xx; // but return values prints wrong
#elif METHOD == 3 // prints float ok but return value float prints wrong
#warning METHOD 3
ab_printf("xx = %f\n", *(float *)&bits); // prints 3.140625
return *(float *)&bits; // but return values prints wrong
#else
#warning returning 0
return 0;
#endif
}
How about using a union?
union uint32_float_union
{
uint32_t i;
float f;
};
Then you can do something like
union uint32_float_union int_to_float;
int_to_float.i = bits;
printf("float value = %f\n", int_to_float.f);
Using unions for type punning is explicitly allowed by the C specification.
The memcpy way you have commented out should work too; despite appearances it does not break strict aliasing, since memcpy copies bytes rather than accessing the object through an incompatible lvalue. If you prefer, you could also go through an explicit byte buffer:
char buffer[sizeof(float)];
memcpy(buffer, &bits, sizeof(float));
float value;
memcpy(&value, buffer, sizeof(float));
Of course, all this requires that the value in bits actually corresponds to a valid float value (including correct endianness).
This:
out = *(float *)&bits;
lets you read bits as a float without any explicit or implicit conversion, by using pointer magic. Strictly speaking this cast violates the aliasing rules, so memcpy() is the safer spelling. Notice also that endianness might get you a bit screwed doing this (just like memcpy() would, so if that worked for you, this method should too, but keep in mind that this can change from architecture to architecture).
If you can be sure that the value bits of an uint32_t contain exactly the bit pattern of a IEEE754 binary32, you can "construct" your float number without requiring your uint32_t not to contain padding or your float actually conforming to IEEE754 (IOW, quite portably), by using the ldexp() function.
Here's a little example. Note it doesn't support subnormal numbers, NaN, and inf; adding them is some work but can be done:
#include <stdint.h>
#include <math.h>
// read IEEE754 binary32 representation in a float
float toFloat(uint32_t bits)
{
int16_t exp = (bits >> 23 & 0xff) - 0x96;
// subtracts exponent bias (0x7f) and number of fraction bits (0x17)
int32_t sig = (bits & UINT32_C(0x7fffff)) | UINT32_C(0x800000);
if (bits & UINT32_C(0x80000000)) sig *= -1;
return ldexp(sig, exp);
}
(you could do something similar to create a float from an uint16_t containing a half precision representation, just adapt the constants for selecting the correct bits)
In C, I have a struct with a member "frequency", which is a long unsigned int. The hardware this code is running on is 32 bits.
The struct has its value set at instantiation.
struct config_list configuration = {
...
.frequency = (long unsigned int)5250000000u,
...
}
In my code, I pass this struct, via pointer, to a subroutine. Inside this subroutine, I don't seem to be able to get the right value, so to check, I put in this:
printf("Config frequency: %lu\n", ptr->frequency);
printf("Dereferenced: %lu\n", (*ptr).frequency);
As those are the two ways I believe you would be able to access the struct data from the pointer.
However, in both cases the output I see is 955,032,704. This I recognize as just the first 32 bits of the frequency I set. My question is, why is my long unsigned int being cut to 32 bits? Isn't a "long" supposed to extend the range to 64 bits?
5250000000 is 0x1 38EC A480... it just peeks into the 33rd bit.
I would recommend that you use the following:
#include <stdio.h>
#include <stdint.h> /* gives us uint64_t and UINT64_C() */
#include <inttypes.h> /* gives us PRIu64 */
int main(void) {
uint64_t d = UINT64_C(5250000000);
printf("%"PRIu64"\n", d);
return 0;
}
uint64_t is guaranteed to be a 64-bit unsigned, on any system.
UINT64_C will append the correct suffix (typically UL or ULL).
PRIu64 will specify the correct format string for a 64-bit unsigned.
On my 64-bit system, it looks like this after the pre-processor (gcc -E):
int main(void) {
uint64_t d = 5250000000UL;
printf("%""l" "u""\n", d);
return 0;
}
And for a 32-bit build, it looks like this (gcc -E -m32):
int main(void) {
uint64_t d = 5250000000ULL;
printf("%""ll" "u""\n", d);
return 0;
}
I have a float in my program that changes after each iteration, namely G[0]. I want to use the last bit of the mantissa as a "random" bit that I will use to pick which path to take in my computation. However, the calculation I'm using now always prints out 0, while it should be roughly 50/50; what's my error? The function signature defines float* G.
unsigned rand_bit = *((unsigned *)&G[0])>>31;
printf("%i", rand_bit);
First of all, *((unsigned *)&G[0]) causes undefined behaviour by violating the strict aliasing rule. In Standard C it is not permitted to access memory of one type by using a different type, except for a handful of special cases.
You can fix this either by disabling strict aliasing in your compiler, or using a union or memcpy.
(Also your code is relying on unsigned being the same size as float, which is not true in general).
But supposing you did fix those issues, your code is testing the most-significant bit. In the IEEE 32-bit floating point format, that bit is the sign bit. So it will read 0 for positive numbers and 1 for negative numbers.
The last bit of the mantissa would be the least significant bit after reinterpreting the memory as integer.
Corrected code could look like:
unsigned u;
assert( sizeof u == sizeof *G );
memcpy(&u, G, sizeof u);
printf("%u", u & 1);
NB. I would be hesitant about assuming this bit will be "random", if you want a random distribution of bits there are much better options.
Although your actual problem got solved, I want to propose using a union instead; it is a bit cleaner, but I do not know which is faster (if you are using the GPU, I think I can safely assume you want it fast). Endianness is also a concern; I was not able to find much information in that direction regarding GPUs, so here are some lines you might want to use.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
int last_bit(float f)
{
// Endianness testing should be done at compile-time, such that a simple macro
// would suffice. Your compiler/libc might already offer one for that
#ifdef USE_TYPE_PUNNING
// So called "type punning" is frowned upon by some
uint32_t ui = 0x76543210;
unsigned char *c = (unsigned char *) &ui;
#else
union {
uint32_t ui;
uint8_t uc[4];
} end = {0x76543210};
// only to save some code branching, end.uc can be used directly, of course
unsigned char *c = (unsigned char *) end.uc;
#endif
int rand_bit;
union {
float fl;
uint32_t ui;
} w;
w.fl = f;
#ifdef DEBUG
printf("%x\n", w.ui);
#endif
// Little endian
if (*c == 0x10) {
rand_bit = w.ui & 0x1;
}
// Big endian
else if (*c == 0x76) {
rand_bit = w.ui & 0x1;
}
// Too old/new
else {
fprintf(stderr, "unknown endianness\n");
return -1;
}
return rand_bit;
}
int main(int argc, char **argv)
{
float f;
// all checks omitted!
if (argc >= 2) {
f = atof(argv[1]);
} else {
// 0x40e00000 even
//f = 7;
// 0x3f8ccccd odd
f = 1.1;
}
printf("last bit of mantissa = %d\n", last_bit(f));
exit(EXIT_SUCCESS);
}