I'm changing a uint32_t to a float without changing the actual bits.
Just to be sure: I don't want to cast it. So float f = (float) i is the exact opposite of what I want to do, because it changes the bits.
I'm going to use this to convert my (pseudo) random numbers to float without doing unneeded math.
What I'm currently doing and what is already working is this:
float random_float( uint64_t seed ) {
// Generate random and change bit format to ieee
uint32_t asInt = (random_int( seed ) & 0x7FFFFF) | (0x7E000000>>1);
// Make it a float
return *(float*)(void*)&asInt; // <-- pretty ugly and needs a variable
}
The Question: Now I'd like to get rid of the asInt variable, and I'd like to know if there is a better / not so ugly way than taking the address of this variable, casting it twice and dereferencing it again?
You could try a union - as long as you make sure the two types have identical sizes in memory:
union convertor {
    uint32_t asInt;
    float asFloat;
};
Then you can assign your integer to asInt and read the same bits back through asFloat (or the other way around if you want to). I use it a lot when I need to do bitwise operations on one hand and still get at the uint32_t representation of the number on the other hand.
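For example, the function from the question could use it like this (just a sketch, reusing random_int from the question):

#include <stdint.h>

float random_float( uint64_t seed ) {
    union convertor cvt;
    cvt.asInt = (random_int( seed ) & 0x7FFFFF) | (0x7E000000>>1);
    return cvt.asFloat; // same bits as before, now read as a float
}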
[EDIT]
Like many of the commenters rightfully state, you must take into consideration special values like NAN, +INF, -INF, +0, -0.
So you seem to want to generate floating point numbers between 0.5 and 1.0 judging from your code.
Assuming that your microcontroller has a standard C library with floating point support, you can do this in a fully standards-compliant way without actually involving any floating point operations. All you need is the ldexpf function, which itself doesn't actually do any floating point math.
This would look something like this:
return ldexpf((1 << 23) + random_thing_smaller_than_23_bits(), -24);
The trick here is that we happen to know that IEEE754 binary32 floating point numbers have integer precision between 2^23 and 2^24 (I could be off-by-one here, double check please, I'm translating this from some work I've done on doubles). So the compiler should know how to convert that number to a float trivially. Then ldexp multiplies that number by 2^-24 by just changing the bits in the exponent. No actual floating point operations involved and no undefined behavior, the code is fully portable to any standard C implementation with IEEE754 numbers. Double check the generated code, but a good compiler and c library should not use any floating point instructions here.
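Put together, the whole function might look something like this (a sketch only; random_int is the generator from the question, and the constant 23 reflects the binary32 mantissa width):

#include <math.h>
#include <stdint.h>

float random_float( uint64_t seed ) {
    // 23 random bits on top of 2^23: the sum lies in [2^23, 2^24),
    // where every integer is exactly representable in a float.
    uint32_t bits = (1u << 23) + (random_int( seed ) & 0x7FFFFF);
    // ldexpf scales by 2^-24 purely via the exponent field -> [0.5, 1.0)
    return ldexpf((float)bits, -24);
}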
If you want to see some experiments I've done around generating random floating point numbers, you can peek at this GitHub repo. It's all about doubles, but it should be trivially translatable to floats.
Reinterpreting the binary representation of an int to a float would result in major problems:
There are a lot of undefined codes in the binary representation of a float.
Other codes represent special conditions, like NAN, +INF, -INF, +0, -0 (sic!), etc.
Also, if that is a random value, even after catching all non-value representations, the result would have a very bad random distribution.
If you are working on an MCU without an FPU, you had better think about avoiding float altogether. An alternative might be fractional or scaled integers. There are many implementations of algorithms which use float but can easily be converted to fixed point types with acceptable loss of precision (or even none at all). Some might even yield more precision than float (note that single precision float has only 23 bits of mantissa, while an int32 has 31 bits plus 1 sign bit for either; the same goes for a fractional or fixed scaled int).
Note that the Embedded C extensions (ISO/IEC TR 18037) add optional support for _Fract types. You might want to research that.
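As a rough illustration of the scaled-integer idea (the Q16.16 format and names here are just an example, not something from the question):

#include <stdint.h>

typedef int32_t q16_16;  // 16 integer bits, 16 fraction bits

static q16_16 q_mul(q16_16 a, q16_16 b) {
    // Widen, multiply, rescale; assumes arithmetic right shift for negative values.
    return (q16_16)(((int64_t)a * b) >> 16);
}

// Example: 1.5 * 2.25 == 3.375  ->  q_mul(0x18000, 0x24000) == 0x36000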
Edit:
According to your comments, you seem to want to convert the int to a float in the range [0, 1). For that, you can assemble the float using bit operations on a uint32_t (e.g. the original value). You just need to follow the IEEE format (presuming your toolchain complies with the C standard); see Wikipedia.
The result (still uint32_t) can then be reinterpreted by a union or pointer as described by others already. Pack that into a system-dependent, well-commented library and tuck it away. Do not forget to check endianness and alignment (likely both the same for float and uint32_t, but important for the bit-ops).
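One common way to do that assembly (a sketch, and not the only possible layout): put 23 random bits below the exponent of 1.0f, giving a value in [1.0, 2.0), then subtract 1.0f to land in [0, 1). It costs one float subtraction:

#include <stdint.h>

// Assumes IEEE754 binary32 and matching endianness/alignment for float and uint32_t.
float bits_to_unit_float(uint32_t r) {
    union { uint32_t u; float f; } cvt;
    cvt.u = 0x3F800000u | (r & 0x7FFFFFu); // exponent field of 1.0f plus 23 random mantissa bits
    return cvt.f - 1.0f;                   // [1.0, 2.0) -> [0.0, 1.0)
}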
Related
I have a variable x of type float, and I need its fractional part. I know I can get it with
x - floorf(x), or
fmodf(x, 1.0f)
My questions: Is one of these always preferable to the other? Are they effectively the same? Is there a third alternative I might consider?
Notes:
If the answer depends on the processor I'm using, let's make it x86_64, and if you can elaborate about other processors that would be nice.
Please make sure to address the behavior for negative values of x. I don't mind one behavior or the other, but I need to know which it is.
Is there a third alternative I might consider?
There's the dedicated function for it. modff exists to decompose a number into its integral and fractional parts.
float modff( float arg, float* iptr );
Decomposes given floating point value arg into integral and fractional
parts, each having the same type and sign as arg. The integral part
(in floating-point format) is stored in the object pointed to by iptr.
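A minimal usage sketch:

#include <math.h>
#include <stdio.h>

int main(void) {
    float ipart;
    float frac = modff(-2.25f, &ipart); // ipart = -2.0, frac = -0.25 (same sign as the argument)
    printf("%g %g\n", ipart, frac);
}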
I'd say that x - floorf(x) is pretty good (exact), except in corner cases:
it has the wrong sign bit for negative zero or any other negative whole float (we might expect the fraction part to carry the same sign bit);
it does not work that well with inf.
modff does respect the -0.0 sign bit for both the integral and fractional parts, and answers +/-0.0 for the fraction part of +/-inf - at least if the implementation supports the IEC 60559 standard (IEEE 754).
A rationale for inf could be: since every float greater than 2^precision has a zero fraction part, the same should hold for an infinite float too.
That's minor, but nonetheless different.
EDIT Err, of course, as pointed out by @StoryTeller-UnslanderMonica, the most obvious flaw of x - floorf(x) is the case of a negative floating point value with a fraction part: applied to -2.25, it returns +0.75, for example, which is not what we expect...
Since the c99 tag is used, x - truncf(x) would be more correct, but it still suffers from the minor problems I initially focused on.
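To make the differences concrete, a quick demo on a negative input (expected output per IEEE 754 shown in the comments):

#include <math.h>
#include <stdio.h>

int main(void) {
    float x = -2.25f, ip;
    printf("x - floorf(x)  = %g\n", x - floorf(x));  //  0.75
    printf("fmodf(x, 1.0f) = %g\n", fmodf(x, 1.0f)); // -0.25
    printf("x - truncf(x)  = %g\n", x - truncf(x));  // -0.25
    printf("modff(x, &ip)  = %g\n", modff(x, &ip));  // -0.25
}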
The use case:
I have some large data arrays containing floating point constants.
The file defining those arrays is generated, and the template can easily be adapted.
I would like to test how reduced precision influences the results, both in terms of quality and in the compressibility of the binary.
Since I do not want to change other source code than the generated file, I am looking for a way to reduce the precision of the constants.
I would like to limit the mantissa to a fixed number of bits (set the lower ones to 0). But since floating point literals are written in decimal, it is difficult to specify the numbers in a way that guarantees the lower mantissa bits of the binary representation are all zero.
The best case would be something like:
#define FP_REDUCE(x) /* some macro */
static const float32_t veryLargeArray[] = {
FP_REDUCE(23.423f), FP_REDUCE(0.000023f), FP_REDUCE(290.2342f),
// ...
};
#undef FP_REDUCE
This should be done at compile time and it should be platform independent.
The following uses the Veltkamp-Dekker splitting algorithm to remove n bits (with rounding) from x, where p = 2^n (for example, to remove eight bits, use 0x1p8f for the second argument). The casts to float32_t coerce the results to that type, as the C standard otherwise permits implementations to use more precision within expressions. (Double-rounding could produce incorrect results in theory, but this will not occur when float32_t is the IEEE basic 32-bit binary format and the C implementation computes this expression in that format or the 64-bit format or wider, as the former is the desired format and the latter is wide enough to represent intermediate results exactly.)
IEEE-754 binary floating-point is assumed, with round-to-nearest. Overflow occurs if x•(p+1) rounds to infinity.
#define RemoveBits(x, p) (float32_t) (((float32_t) ((x) * ((p)+1))) - (float32_t) (((float32_t) ((x) * ((p)+1))) - (x)))
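Plugged into the FP_REDUCE shape from the question, it might be used like this (illustrative only; 0x1p8f removes the eight lowest mantissa bits, and float32_t is the question's typedef):

#define FP_REDUCE(x) RemoveBits((x), 0x1p8f)

static const float32_t veryLargeArray[] = {
    FP_REDUCE(23.423f), FP_REDUCE(0.000023f), FP_REDUCE(290.2342f),
    // ...
};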
What you're asking for can be done with varying degrees of partial portability, but not absolute unless you want to run the source file through your own preprocessing tool at build time to reduce the precision. If that's an option for you, it's probably your best one.
Short of that, I'm going to assume at least that your floating point types are base 2 and obey Annex F/IEEE semantics. This should be a reasonable assumption, but the latter is false with gcc on platforms (including 32-bit x86) with extended-precision under the default standards-conformance profile; you need -std=cNN or -fexcess-precision=standard to fix it.
One approach is to add and subtract a power of two chosen to cause rounding to the desired precision:
#define FP_REDUCE(x,p) ((x)+(p)-(p))
Unfortunately, this works in absolute precisions, not relative, and requires knowing the right value p for the particular x, which is going to be equal to the value of the leading base-2 place of x, times 2 raised to the power of FLT_MANT_DIG minus the bits of precision you want. This cannot be evaluated as a constant expression for use as an initializer, but you can write it in terms of FLT_EPSILON and, if you can assume C99+, a preprocessor-token-pasting to form a hex float literal, yielding the correct value for this factor. But you still need to know the power of two for the leading digit of x; I don't see any way to extract that as a constant expression.
Edit: I believe this is fixable, so as not to need an absolute precision but rather automatically scale to the value, but it depends on correctness of a work in progress. See Is there a correct constant-expression, in terms of a float, for its msb?. If that works I will later integrate the result with this answer.
Another approach I like, if your compiler supports compound literals in static initializers and if you can assume IEEE type representations, is using a union and masking off bits:
#include <stdint.h>
union fr { float x; uint32_t r; };
#define FP_REDUCE(v) ((union fr){.r=(union fr){v}.r & (0xffffffffu<<n)}.x)
where n is the number of bits you want to drop. This will round towards zero rather than to nearest; if you want to make it round to nearest, it should be possible by adding an appropriate constant to the low bits before masking, but you have to take care about what happens when the addition overflows into the exponent bits.
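A self-contained sketch of how that might be used, with n fixed at 8 (type punning through the union is valid C; as noted above, using the macro in a static initializer additionally needs compiler support for compound literals there):

#include <stdint.h>
#include <stdio.h>

union fr { float x; uint32_t r; };

enum { n = 8 };  // number of low mantissa bits to drop
#define FP_REDUCE(v) ((union fr){.r = (union fr){v}.r & (0xffffffffu << n)}.x)

int main(void) {
    printf("%.10g -> %.10g\n", 23.423f, FP_REDUCE(23.423f)); // low 8 bits cleared, rounded toward zero
}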
I have a counter field from one TCP protocol which keeps track of samples sent out. It is defined as float64. I need to translate this protocol to a different one, in which the counter is defined as uint64.
Can I safely store the float64 value in the uint64 without losing any precision on the whole number part of the data? Assuming the fractional part can be ignored.
EDIT: Sorry if I didn't describe it clearly. There is no code. I'm just looking at two different protocol documentations to see if one can be translated to the other.
Please treat float64 as double, the documentation isn't well written and is pretty old.
Many thanks.
I am assuming you are asking about 64 bit floating point values such as IEEE Binary64 as documented in https://en.wikipedia.org/wiki/Double-precision_floating-point_format .
Converting a double represented as a 64 bit IEEE floating point value to a uint64_t will not lose any precision on the integral part of the value, as long as the value itself is non-negative and less than 2^64. But if the value is larger than 2^53, the representation as a double does not allow complete precision, so whatever computation led to the value probably was inaccurate anyway.
Note that the reverse is false: a 64-bit double has less integer precision than a uint64_t, so close but different large values will convert to the same double value.
Note that a counter implemented as a 64 bit double is intrinsically limited by the precision of the floating point type. Incrementing a value larger than 2^53 by one is likely to have no effect at all. Using a floating point type to implement a packet counter seems a very bad idea. Using a uint64_t counter directly seems a safer bet. You only have to worry about wrap around at 2^64, a condition you can check for in the unlikely case where you would actually expect to count that far.
If you cannot change the format, verify that the floating point value is within range for the conversion and store an appropriate value if it is not:
#include <stdint.h>
...
double v = get_64bit_value();
uint64_t result;
if (v < 0) {
    result = 0;
} else if (v >= (double)UINT64_MAX) {
    // (double)UINT64_MAX typically rounds up to 2^64, so this catches every out-of-range value
    result = UINT64_MAX;
} else {
    result = (uint64_t)v;
}
Yes, precision can be lost: negative numbers cannot be converted properly to uint64 (as this type is unsigned), nor can numbers greater than 2^64-1. In all other cases, the conversion is exact (provided you regard the float64 value as exact and rounding is done correctly).
I have a fx1.15 notation. The underlying integer value is 63183 (register value).
Now, according to Wikipedia, the complete length is 15 bits. The value does not fit inside, right?
So assuming it is a fx1.16 value, how do I convert it to a human readable value?
To convert a fixed-point value into something human-readable, do a floating-point divide by 2 raised to the number of fractional bits. For example, if there are 15 fractional bits, 2^15 = 32768, so you would use something like this:
int x = <fixed-point-value-in-1.15-format>;
printf("x = %g\n", x / 32768.0);
Now converting fixed-point numbers to floating-point and invoking printf() are expensive operations, and they usually destroy any performance gained by using fixed-point. I presume you are only doing this for diagnostic purposes.
Also, note that if your platform is doing fixed-point because floating-point operations are forbidden or not available, then you'll have to do something different, along the lines of manually doing the decimal conversion. Model the integer as the underlying floating-point value multiplied by 32768 and go from there. There's some useful fixed-point code here.
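If floating point really is unavailable, the diagnostic print can be done with integer math only. A rough sketch for an unsigned value with 15 fractional bits (names are just for the example; adjust the shift and mask for a 16-bit fraction):

#include <stdint.h>
#include <stdio.h>

// Print a raw x.15 fixed-point value without using floating point.
static void print_q15(uint32_t raw) {
    uint32_t ipart = raw >> 15;     // integer part
    uint32_t frac  = raw & 0x7FFF;  // 15 fractional bits
    // Scale the fraction to 5 decimal digits: frac / 32768 * 100000, rounded.
    // (Rounding can carry into the integer part for values just below a whole
    // number; that case is ignored in this sketch.)
    uint32_t digits = (uint32_t)(((uint64_t)frac * 100000 + 16384) / 32768);
    printf("%u.%05u\n", ipart, digits);
}

// print_q15(63183) prints 1.92819 (63183 / 32768 ~ 1.92819)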
p.s. I'm not sure you're still interested in this answer, ashirk, (I wrote it more for others), but if you are, welcome to Stack Overflow!
Can I compare a floating-point number to an integer?
Will the float compare to integers in code?
float f; // f has a saved predetermined floating-point value to it
if (f >=100){__asm__reset...etc}
Also, could I...
float f;
int x = 100;
x+=f;
I have to use the floating point value f received from an attitude reference system to adjust a position value x that controls a PWM signal to correct for attitude.
The first one will work fine. 100 will be converted to a float, and IEEE 754 can represent all integers exactly as floats, up to about 2^23.
The second one will also work, but the result of the addition will be converted back into an integer on assignment, so you'll lose the fractional part (that's unavoidable if you're turning floats into integers).
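A small illustration of both cases (the values here are just made up for the example):

#include <stdio.h>

int main(void) {
    float f = 100.7f;
    if (f >= 100)       // 100 is converted to 100.0f before the comparison
        printf("at least 100\n");

    int x = 100;
    x += f;             // addition happens in float (~200.7f), then truncates on assignment
    printf("%d\n", x);  // prints 200
}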
Since you've identified yourself as unfamiliar with the subtleties of floating point numbers, I'll refer you to this fine paper by David Goldberg: What Every Computer Scientist Should Know About Floating-Point Arithmetic (reprint at Sun).
After you've been scared by that, the reality is that most of the time floating point is a huge boon to getting calculations done. And modern compilers and languages (including C) handle conversions sensibly so that you don't have to worry about them. Unless you do.
The points raised about precision are certainly valid. An IEEE float effectively has only 24 bits of precision, which is less than a 32-bit integer. Use of double for intermediate calculations will push all rounding and precision loss out to the conversion back to float or int.
Mixed-mode arithmetic (arithmetic between operands of different types and/or sizes) is legal but fragile. The C standard defines rules for type promotion in order to convert the operands to a common representation. Automatic type promotion allows the compiler to do something sensible for mixed-mode operations, but "sensible" does not necessarily mean "correct."
To really know whether or not the behavior is correct you must first understand the rules for promotion and then understand the representation of the data types. In very general terms:
shorter types are converted to longer types (float to double, short to int, etc.)
integer types are converted to floating-point types
signed/unsigned conversions favor avoiding data loss (whether signed is converted to unsigned or vice-versa depends on the size of the respective types)
Whether code like x > y (where x and y have different types) is right or wrong depends on the values that x and y can take. In my experience it's common practice to prohibit (via the coding standard) implicit type conversions. The programmer must consider the context and explicitly perform any type conversions necessary.
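As a concrete example of why "sensible" is not necessarily "correct": an int too large for a float's 24-bit significand compares equal to a neighbouring value after promotion (a small demo, not from the original answer):

#include <stdio.h>

int main(void) {
    int   i = 16777217;     // 2^24 + 1, not exactly representable as a float
    float f = 16777216.0f;  // 2^24
    if (i == f)             // i is converted to float and rounds to 16777216.0f
        printf("equal, even though the values differ\n");
}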
Can you compare a float and an integer? Sure. But the problem you will run into is precision. On most C/C++ implementations, float and int have the same size (4 bytes) and wildly different precision levels. Neither type can hold all values of the other type. Since neither type can be converted to the other without loss of precision, and the two cannot be compared natively, comparing them directly will lose precision in some scenarios.
What you can do to avoid precision loss is to convert both types to a type which has enough precision to represent all values of float and int. On most systems, double will do just that. So the following usually does a non-lossy comparison
float f = getSomeFloat();
int i = getSomeInt();
if ( (double)i == (double)f ) {
...
}
The LHS defines the precision of the result, so if your LHS is int and your RHS is float, this results in loss of precision.
Also take a look at the floating-point related C FAQ.
Yes, you can compare them, you can do math on them without terribly much regard for which is which, in most cases. But only most. The big bugaboo is that you can check for f<i etc. but should not check for f==i. An integer and a float that 'should' be identical in value are not necessarily identical.
Yeah, it'll work fine. Specifically, the int will be converted to float for the purposes of the comparison. In the second one the result gets converted back to int (you may want a cast to make that explicit), but it should be fine otherwise.
Yes, and sometimes it'll do exactly what you expect.
As the others have pointed out, comparing, e.g., 1.0 == 1 will work out, because the integer 1 is converted to double (not float) before the comparison.
However, other comparisons may not.
About that: the literal 1.0 is of type double, so the comparison is made in double by the type promotion rules mentioned before. 1.f or 1.0f is of type float, and the comparison would then have been made in float. It would have worked as well since, as we said, the first 2^23 integers are representable in a float.