Converting double to float without relying on the FPU rounding mode (C)

Does anyone have handy the snippets of code to convert an IEEE 754 double to the immediately inferior (resp. superior) float, without changing or assuming anything about the FPU's current rounding mode?
Note: this constraint probably implies not using the FPU at all. I expect the simplest way to do it under these conditions is to read the bits of the double into a 64-bit integer and to work with that.
You can assume the endianness of your choice for simplicity, and that the double in question is available through the d field of the union below:
union double_bits
{
    int64_t i; /* a 64-bit integer type; plain long may be only 32 bits */
    double d;
};
I would try to do it myself but I am certain I would introduce hard-to-notice bugs for denormalized or negative numbers.

I think the following works, but I will state my assumptions first:
- floating-point numbers are stored in IEEE-754 format on your implementation,
- no overflow occurs,
- you have nextafterf() available (it's specified in C99).
Also, most likely, this method is not very efficient.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(int argc, char *argv[])
{
    /* Change to non-zero for superior, otherwise inferior */
    int superior = 0;

    /* double value to convert */
    double d = 0.1;
    float f;
    double tmp = d;

    if (argc > 1)
        d = strtod(argv[1], NULL);

    /* First, get an approximation of the double value */
    f = d;

    /* Now, convert that back to double */
    tmp = f;

    /* Print the numbers. %a is C99 */
    printf("Double: %.20f (%a)\n", d, d);
    printf("Float: %.20f (%a)\n", f, f);
    printf("tmp: %.20f (%a)\n", tmp, tmp);

    if (superior) {
        /* If we wanted superior, and got a smaller value,
           get the next value */
        if (tmp < d)
            f = nextafterf(f, INFINITY);
    } else {
        if (tmp > d)
            f = nextafterf(f, -INFINITY);
    }

    printf("converted: %.20f (%a)\n", f, f);
    return 0;
}
On my machine, it prints:
Double: 0.10000000000000000555 (0x1.999999999999ap-4)
Float: 0.10000000149011611938 (0x1.99999ap-4)
tmp: 0.10000000149011611938 (0x1.99999ap-4)
converted: 0.09999999403953552246 (0x1.999998p-4)
The idea is that I am converting the double value to a float value; this could be less than or greater than the double value depending upon the rounding mode. When converted back to double, we can check whether it is smaller or greater than the original value. Then, if the float is not on the required side of the original, we move to the next representable float in the original number's direction.
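For reference, a minimal sketch wrapping that idea into directed helpers (the names are mine; same assumptions as above, i.e. finite values and no overflow):

#include <math.h>

/* Largest float <= d, regardless of the FPU rounding mode: any IEEE
   conversion lands on one of the two floats neighboring d (or on d
   itself), so a single nextafterf correction suffices. */
static float float_inferior(double d)
{
    float f = (float)d;
    return ((double)f <= d) ? f : nextafterf(f, -INFINITY);
}

/* Smallest float >= d. */
static float float_superior(double d)
{
    float f = (float)d;
    return ((double)f >= d) ? f : nextafterf(f, INFINITY);
}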

To do this job more accurately than just recombining the mantissa and exponent bits, check this out:
http://www.mathworks.com/matlabcentral/fileexchange/23173

I posted code to do this here: https://stackoverflow.com/q/19644895/364818 and copied it below for your convenience.
#include <string.h>

// d points to an IEEE binary64 double, on a target where double is not
// natively supported. Note: this truncates the extra mantissa bits (rounds
// toward zero) and does not handle NaN, infinity, subnormals, or exponents
// outside float's range.
static float ConvertDoubleToFloat(void* d)
{
    unsigned long long x;
    float f; // assumed to be IEEE binary32
    unsigned long long sign;
    unsigned long long exponent;
    unsigned long long mantissa;

    memcpy(&x, d, 8);

    // IEEE binary64 format (unsupported)
    sign = (x >> 63) & 1;                     // 1 bit
    exponent = (x >> 52) & 0x7FF;             // 11 bits
    mantissa = x & 0x000FFFFFFFFFFFFFULL;     // 52 bits
    exponent -= 1023;                         // remove the binary64 bias

    // IEEE binary32 format (supported)
    exponent += 127;                          // rebase
    exponent &= 0xFF;
    mantissa >>= (52 - 23);                   // keep the top 23 mantissa bits
    x = mantissa | (exponent << 23) | (sign << 31);
    memcpy(&f, &x, 4);
    return f;
}
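A quick usage sketch (my own, not from the linked post):

#include <stdio.h>

int main(void)
{
    double d = 0.1;
    float f = ConvertDoubleToFloat(&d);
    printf("%a\n", f); /* 0x1.999998p-4: truncated, one ULP below round-to-nearest */
    return 0;
}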

Related

How can I obtain a float value from a double, with mantissa?

I'm sorry if I can't explain this correctly, but my English is not very good.
Well, the question is: I have a double variable, and I cast it to float, because I need to send exactly 4 bytes, not 8. This doesn't work for me, so I decided to calculate the value directly from the IEEE 754 standard.
I have this code:
union DoubleNumberIEEE754 {
    struct {
        uint64_t mantissa : 52;
        uint64_t exponent : 11;
        uint64_t sign : 1;
    } raw;
    double d;
    char c[8];
} dnumber;

floatval = (pow((-1), dnumber.raw.sign)
            * (1 + dnumber.raw.mantissa)
            * pow(2, (dnumber.raw.exponent - 1023)));

With this code, I can't obtain the correct value.
I am looking at the Linux headers to see the correct order of the components, but I don't know if this code is correct.
I am skeptical that the double-to-float conversion is broken, but, assuming it is:
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Create a mask of n low bits, for n from 0 to 63.
#define Mask(n) (((uint64_t) 1 << (n)) - 1)

/* This routine converts double values to float values:

   float and double must be IEEE-754 binary32 and binary64, respectively.

   The payloads of NaNs are not preserved, and only a quiet NaN is
   returned.

   The double is rounded to the nearest value representable in float, with
   ties rounded to the float with the even low bit in the significand.

   We assume a standard C conversion from double to float is broken for
   unknown reasons but that a conversion from a representable uint32_t to a
   float works.
*/
static float ConvertDoubleToFloat(double x)
{
    // Copy the double into a uint64_t so we can access its representation.
    uint64_t u;
    memcpy(&u, &x, sizeof u);

    // Extract the fields from the representation of a double.
    int SignCode = u >> 63;
    int ExponentCode = u >> 52 & Mask(11);
    uint64_t SignificandCode = u & Mask(52);

    /* Convert the fields to their represented values.

       The sign code merely encodes - or +.

       The exponent code is biased by 1023 from the actual exponent.

       The significand code represents the portion of the significand
       after the radix point. However, since there is some problem
       converting double to float, we will maintain it with an integer
       type, scaled by 2**52 from its represented value.

       The exponent code also tells us the portion of the significand
       before the radix point -- 1 if the exponent is non-zero, 0 if the
       exponent is zero. We include that in the significand, scaled by
       2**52.
    */
    float Sign = SignCode ? -1 : +1;
    int Exponent = ExponentCode - 1023;
    uint64_t ScaledSignificand =
        (ExponentCode ? ((uint64_t) 1 << 52) : 0) + SignificandCode;

    // Handle NaNs and infinities.
    if (ExponentCode == Mask(11))
        return Sign * (SignificandCode == 0 ? INFINITY : NAN);

    /* Round the significand:

       If Exponent < -150, all bits of the significand are below 1/2 ULP
       of the least positive float, so they round to zero.

       If -150 <= Exponent < -126, only the bits of the significand
       corresponding to exponent -149 and above remain in the significand,
       so we shift accordingly and round the residue.

       Otherwise, the top 24 bits of the significand remain in the
       significand (except when there is overflow to infinity), so we
       shift accordingly and round the residue.

       Note that the scaling of the new significand is 2**23 instead of
       2**52, since we are shifting it into place for the float format.
    */
    uint32_t NewScaledSignificand;
    if (Exponent < -150)
        NewScaledSignificand = 0;
    else
    {
        unsigned Shift = 53 - (Exponent < -126 ? Exponent - -150 : 24);
        NewScaledSignificand = ScaledSignificand >> Shift;

        // Clamp the exponent for subnormals.
        if (Exponent < -126)
            Exponent = -126;

        // Examine the residue being lost and round accordingly.
        uint64_t Residue =
            ScaledSignificand - ((uint64_t) NewScaledSignificand << Shift);
        uint64_t Half = (uint64_t) 1 << (Shift - 1);

        // If the residue is greater than 1/2 ULP, round up (in magnitude).
        if (Half < Residue)
            NewScaledSignificand += 1;

        /* If the residue is 1/2 ULP, round 0.1 to 0 and 1.1 to 10.0 (these
           numerals are binary with "." marking the ULP position).
        */
        else if (Half == Residue)
            NewScaledSignificand += NewScaledSignificand & 1;

        /* Otherwise, the residue is less than 1/2, and we have already
           rounded down, in the shift.
        */
    }

    // Combine the components, including removing the significand scaling.
    return Sign * ldexpf(NewScaledSignificand, Exponent - 23);
}
static void TestOneSign(double x)
{
    float Expected = x;
    float Observed = ConvertDoubleToFloat(x);
    if (Observed != Expected && !(isnan(Observed) && isnan(Expected)))
    {
        printf("Error, %a -> %a, but expected %a.\n",
            x, Observed, Expected);
        exit(EXIT_FAILURE);
    }
}

static void Test(double x)
{
    TestOneSign(+x);
    TestOneSign(-x);
}

int main(void)
{
    for (int e = -1024; e < 1024; ++e)
    {
        Test(ldexp(0x1.0p0, e));
        Test(ldexp(0x1.4p0, e));
        Test(ldexp(0x1.8p0, e));
        Test(ldexp(0x1.cp0, e));
        Test(ldexp(0x1.5555540p0, e));
        Test(ldexp(0x1.5555548p0, e));
        Test(ldexp(0x1.5555550p0, e));
        Test(ldexp(0x1.5555558p0, e));
        Test(ldexp(0x1.5555560p0, e));
        Test(ldexp(0x1.5555568p0, e));
        Test(ldexp(0x1.5555570p0, e));
        Test(ldexp(0x1.5555578p0, e));
    }
    Test(3.14);
    Test(0);
    Test(INFINITY);
    Test(NAN);
    Test(1/3.);
    Test(0x1p128);
    Test(0x1p128 - 0x1p104);
    Test(0x1p128 - 0x.9p104);
    Test(0x1p128 - 0x.8p104);
    Test(0x1p128 - 0x.7p104);
}

Turn float to IEEE 754, extract exponent, and add 1 to exponent

Problem
I need to multiply a number without using the * or + operators or other libraries, only binary logic.
To multiply a number by two using the IEEE norm, you add one to the exponent. For example:
12 = 0 10000010 100000(...)
So the exponent is: 10000010 (130)
If I want to multiply it by 2, I just add 1 to it and it becomes 10000011 (131).
Question
If I get a float, how do I turn it into binary, then into the IEEE norm? Example:
8.0 is 1000.0 in binary; in the IEEE norm I need it to have only one digit to the left of the point, so 1.000 * 2^3. Then how do I add one to the exponent so I multiply it by 2?
1. I need to get a float, e.g. 6.5
2. Turn it to binary: 110.1
3. Then to IEEE 754: 0 10000001 101000(...)
4. Extract the exponent: 10000001
5. Add one to it: 10000010
6. Return it to IEEE 754: 0 10000010 101000(...)
7. Then back to float: 13
Given that the C implementation is known to use IEEE-754 basic 32-bit binary floating-point for its float type, the following code shows how to take apart the bits that represent a float, adjust the exponent, and reassemble the bits. Only simple multiplications involving normal numbers are handled.
#include <assert.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 6.125;

    // Copy the bits that represent the float f into a 32-bit integer.
    uint32_t u;
    assert(sizeof f == sizeof u);
    memcpy(&u, &f, sizeof u);

    // Extract the sign, exponent, and significand fields.
    uint32_t sign = u >> 31;
    uint32_t exponent = (u >> 23) & 0xff;
    uint32_t significand = u & 0x7fffff;

    // Assert the exponent field is in the normal range and will remain so.
    assert(0 < exponent && exponent < 254);

    // Increment the exponent.
    ++exponent;

    // Reassemble the bits and copy them back into f.
    u = sign << 31 | exponent << 23 | significand;
    memcpy(&f, &u, sizeof f);

    // Display the result.
    printf("%g\n", f); // prints 12.25
}
Maybe not exactly what you are looking for, but C has a library function ldexp which does exactly what you need:
double x = 6.5;
x = ldexp(x, 1); // now x is 13
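For float operands there is the float counterpart, ldexpf (also C99):

float y = 6.5f;
y = ldexpf(y, 1); // now y is 13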
Maybe unions are the tool you need.

#include <iostream>

union fb {
    float f;
    struct b_s {
        // Note: bit-field layout is implementation-defined; this order and
        // the 23-bit mantissa match little-endian GCC/Clang targets.
        unsigned int mant : 23;
        unsigned int exp : 8;
        unsigned int sign : 1;
    } b;
};

fb num;

int main() {
    num.f = 3.1415;
    num.b.exp++;
    std::cout << num.f << std::endl;
    return 0;
}

Getting the mantissa (of a float) of either an unsigned int or a float (C)

So, I am trying to program a function which prints a given float number (n) in its (mantissa * 2^exponent) format. I was able to get the sign and the exponent, but not the mantissa (whichever the number is, the mantissa is always equal to 0.000000). What I have is:
unsigned int num = *(unsigned*)&n;
unsigned int m = num & 0x007fffff;
mantissa = *(float*)&m;
Any ideas of what the problem might be?
The C library includes a function that does this exact task, frexp:
int expon;
float mant = frexpf(n, &expon);
printf("%g = %g * 2^%d\n", n, mant, expon);
Another way to do it is with log2f and exp2f:
if (n == 0) {
    mant = 0;
    expon = 0;
} else {
    expon = floorf(log2f(fabsf(n)));
    mant = n * exp2f(-expon);
}
These two techniques are likely to give different results for the same input. For instance, on my computer the frexpf technique describes 4 as 0.5 × 2^3 but the log2f technique describes 4 as 1 × 2^2. Both are correct, mathematically speaking. Also, frexp will give you the exact bits of the mantissa, whereas log2f and exp2f will probably round off the last bit or two.
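For illustration, a small self-contained program showing the two techniques side by side (my own example; exp2f and log2f are C99):

#include <math.h>
#include <stdio.h>

int main(void)
{
    float n = 4.0f;

    int e1;
    float m1 = frexpf(n, &e1);              /* mantissa in [0.5, 1) */

    int e2 = (int)floorf(log2f(fabsf(n)));
    float m2 = n * exp2f((float)-e2);       /* mantissa in [1, 2) */

    /* prints: 4 = 0.5 * 2^3 = 1 * 2^2 */
    printf("%g = %g * 2^%d = %g * 2^%d\n", n, m1, e1, m2, e2);
    return 0;
}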
You should know that *(unsigned *)&n and *(float *)&m are "type punning" that violates the strict-aliasing rule and has undefined behavior. If you want to get the integer with the same bit representation as a float, or vice versa, use a union:
union { uint32_t i; float f; } u;
u.f = n;
num = u.i;
(Note: This use of unions is well-defined in C since roughly 2003, but, due to the C++ committee's long-standing habit of not paying sufficient attention to changes going into C, it is not officially well-defined in C++.)
You should also know IEEE floating-point numbers use "biased" exponents. When you initialize a float variable's mantissa field but leave its exponent field at zero, that gives you the representation of a number with a large negative exponent: in other words, a number so small that printf("%f", n) will print it as zero. Whenever printf("%f", variable) prints zero, change %f to %g or %a and rerun the program before assuming that variable actually is zero.
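A minimal demonstration of that pitfall (my own example, assuming 32-bit IEEE float):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* Mantissa bits all set, exponent field left at zero: a subnormal. */
    uint32_t bits = 0x007fffff;
    float f;
    memcpy(&f, &bits, sizeof f);
    printf("%f\n", f); /* prints 0.000000 */
    printf("%g\n", f); /* prints something like 1.17549e-38 */
    printf("%a\n", f); /* prints a nonzero hex float, e.g. 0x0.fffffep-126 */
    return 0;
}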
You are stripping off the bits of the exponent, leaving 0. An exponent field of 0 is special: it means the number is denormalized, and quite small, at the very bottom of the range of representable numbers. I think you'd find, if you looked closely, that your result isn't exactly zero, just so small that you have trouble telling the difference.
To get a reasonable number for the mantissa, you need to put an appropriate exponent back in. If you want a mantissa in the range of 1.0 to 2.0, you need an exponent of 0, but adding the bias means you really need an exponent of 127.
unsigned int m = (num & 0x007fffff) | (127 << 23);
mantissa = *(float*)&m;
If you'd rather have a fully integer mantissa you need an exponent of 23, biased it becomes 150.
unsigned int m = (num & 0x007fffff) | ((23+127) << 23);
mantissa = *(float*)&m;
In addition to zwol's remarks: if you want to do it yourself, you have to acquire some knowledge about the innards of an IEEE-754 float. Once you have done so, you can write something like

#include <stdlib.h>
#include <stdio.h>
#include <math.h> // for testing only

typedef union {
    float value;
    unsigned int bits; // assuming 32-bit unsigned ints (better: uint32_t)
} ieee_754_float;

// clang -g3 -O3 -W -Wall -Wextra -Wpedantic -Weverything -std=c11 -o testthewest testthewest.c -lm
int main(int argc, char **argv)
{
    unsigned int m, num;
    int exp; // the exponent can be negative
    float n, mantissa;
    ieee_754_float uf;

    // neither checks nor balances included!
    if (argc == 2) {
        n = atof(argv[1]);
    } else {
        exit(EXIT_FAILURE);
    }

    uf.value = n;
    num = uf.bits;

    m = num & 0x807fffff;    // keep sign bit and mantissa (i.e.: get rid of the exponent)
    num = num & 0x7fffffff;  // full number without sign bit
    exp = (num >> 23) - 126; // extract exponent and subtract bias (126, not 127,
                             // because the mantissa below is scaled to [0.5, 1))
    m |= 0x3f000000;         // set the exponent field to 126 so m reads as [0.5, 1)
    uf.bits = m;
    mantissa = uf.value;

    printf("n = %g, mantissa = %g, exp = %d, check %g\n",
           n, mantissa, exp, mantissa * powf(2, exp));
    exit(EXIT_SUCCESS);
}
Note: the code above is of the quick&dirty(tm) species and is not meant for production. It also lacks handling for subnormal (denormal) numbers, a thing you must add. Hint: multiply the mantissa by a large power of two (e.g.: 2^25 or in that ballpark) and adjust the exponent accordingly (if you took the value from my example, subtract 25).
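A minimal sketch of that hint (the variable extra is mine), to run right after num = uf.bits in the code above:

// Subnormal input: exponent field 0, mantissa non-zero. Scaling by 2^25
// is exact (a power of two) and makes the value normal; compensate when
// computing the exponent.
int extra = 0;
if (((num >> 23) & 0xff) == 0 && (num & 0x7fffff) != 0) {
    uf.value *= 33554432.0f; // 2^25
    num = uf.bits;
    extra = 25;
}
// ... later: exp = ((num & 0x7fffffff) >> 23) - 126 - extra;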

function to convert int to float (huge integers)

This is a university question. Just to make sure :-) We need to implement (float)x.
I have the following code, which must convert the integer x to its floating-point binary representation, stored in an unsigned integer.
unsigned float_i2f(int x) {
    if (!x) return x;

    /* get sign of x */
    int sign = (x >> 31) & 0x1;

    /* absolute value of x */
    int a = sign ? ~x + 1 : x;

    /* calculate exponent */
    int e = 0;
    int t = a;
    while (t != 1) {
        /* divide by two until t is 1 */
        t >>= 1;
        e++;
    }

    /* calculate mantissa */
    int m = a << (32 - e);
    /* logical right shift */
    m = (m >> 9) & ~(((0x1 << 31) >> 9 << 1));

    /* add bias for 32bit float */
    e += 127;

    int res = sign << 31;
    res |= (e << 23);
    res |= m;
    /* lots of printf */
    return res;
}
One problem I encounter now is that my code fails when the integers are too big. I have this control procedure implemented:
float f = (float)x;
unsigned int r;
memcpy(&r, &f, sizeof(unsigned int));
This of course always produces the correct output.
Now when I do some test runs, these are my outputs (GOAL is what it needs to be, result is what I got):
:!make && ./btest -f float_i2f -1 0x80004999
make: Nothing to be done for `all'.
Score Rating Errors Function
x: [-2147464807] 10000000000000000100100110011001
sign: 1
expone: 01001110100000000000000000000000
mantis: 00000000011111111111111101101100
result: 11001110111111111111111101101100
GOAL: 11001110111111111111111101101101
So in this case, a 1 is added as the LSB.
Next case:
:!make && ./btest -f float_i2f -1 0x80000001
make: Nothing to be done for `all'.
Score Rating Errors Function
x: [-2147483647] 10000000000000000000000000000001
sign: 1
expone: 01001110100000000000000000000000
mantis: 00000000011111111111111111111111
result: 11001110111111111111111111111111
GOAL: 11001111000000000000000000000000
Here 1 is added to the exponent while the mantissa wraps around to all zeros.
I spent hours looking this up on the internet and in my books, but I can't find any reference to this problem. I guess it has something to do with the fact that the mantissa is only 23 bits. But how do I have to handle it then?
EDIT: THIS PART IS OBSOLETE THANKS TO THE COMMENTS BELOW. int l must be unsigned int l.
int x = 2147483647;
float f = (float)x;
int l = f;
printf("l: %d\n", l);
then l becomes -2147483648.
How can this happen? Is C doing the cast wrong?
Hope someone can help me here!
EDIT 2:
My updated code is now this:
unsigned float_i2f(int x) {
    if (x == 0) return 0;

    /* get sign of x */
    int sign = (x >> 31) & 0x1;

    /* absolute value of x */
    int a = sign ? ~x + 1 : x;

    /* calculate exponent */
    int e = 158;
    int t = a;
    while (!((t >> 31) & 0x1)) {
        /* shift left until the MSB is set */
        t <<= 1;
        e--;
    }

    /* calculate mantissa */
    int m = (t >> 8) & ~(((0x1 << 31) >> 8 << 1));
    m &= 0x7fffff;

    int res = sign << 31;
    res |= (e << 23);
    res |= m;
    return res;
}
I also figured out that the code works for all integers in the range (-2^24, 2^24). Everything above/below sometimes works, but mostly doesn't.
Something is missing, but I really have no idea what. Can anyone help me?
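For what it's worth, the GOAL outputs above follow round-to-nearest-even, which this code never performs: the 8 low bits of t are simply dropped. A minimal sketch of the missing rounding step, using the variable names from the updated code (r and um are names I introduce here; assumes 32-bit int):

/* Round to nearest, ties to even: inspect the 8 bits that are about
   to be shifted out of t before forming the 23-bit mantissa. */
unsigned um = ((unsigned)t >> 8) & 0x7fffff; /* the 23 mantissa bits */
unsigned r = (unsigned)t & 0xff;             /* the discarded residue */
if (r > 0x80 || (r == 0x80 && (um & 1)))     /* above half, or tie with odd LSB */
    um += 1;
if (um > 0x7fffff) {                         /* carry out of the mantissa */
    um &= 0x7fffff;                          /* mantissa becomes 0 */
    e += 1;                                  /* bump the exponent */
}
m = um;

This matches both failing cases above: the first rounds the last bit up, the second carries all the way into the exponent.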
The answer printed is correct in the sense that it depends entirely on the underlying representation of the numbers being cast. If you understand the binary representation of the number, the result is not surprising.
An implicit conversion is associated with the assignment operator (ref. C99 Standard 6.5.16), and the C99 Standard goes on to say:
6.3.1.4 Real floating and integer
When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e., the value is truncated toward zero). If the value of the integral part cannot be represented by the integer type, the behavior is undefined.
Your earlier example illustrates exactly this undefined behavior: (float)2147483647 rounds up to 2147483648.0, and the integral part of that value cannot be represented in an int.
The asserts in the following snippet ought to prevent any undefined behavior from occurring.
#include <assert.h>
#include <limits.h>
#include <math.h>

unsigned int convertFloatingPoint(double v) {
    double d;
    assert(isfinite(v));
    d = trunc(v);
    assert((d >= 0.0) && (d <= (double)UINT_MAX));
    return (unsigned int)d;
}
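For example, with inputs of my choosing (inside main, with <stdio.h>):

printf("%u\n", convertFloatingPoint(3.99));         /* prints 3: truncated toward zero */
printf("%u\n", convertFloatingPoint(4294967295.0)); /* prints 4294967295 (UINT_MAX) */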
Another way of doing the same thing: create a union containing a 32-bit integer and a float. The int and float are now just different ways of looking at the same bit of memory:

union {
    int myInt;
    float myFloat;
} my_union;

my_union.myInt = 0xBFFFF2E5;
printf("float is %f\n", my_union.myFloat);

which prints:

float is -1.999600
You are telling the compiler to take the number you have (a large integer) and make it into a float, not to interpret the number AS a float. To do that, you need to tell the compiler to read the number from that address in a different form, so this:

myFloat = *(float *)&myInt;

That means, if we take it apart, starting from the right:

&myInt - the location in memory that holds your integer.
(float *) - really, I want the compiler to use this as a pointer to float, not whatever the compiler thinks it may be.
* - read from the address of whatever is to the right.
myFloat = - set this variable to whatever is to the right.

So, you are telling the compiler: in the location of (myInt), there is a floating point number, now put that float into myFloat.
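As noted earlier in this thread, that cast is type punning and formally undefined behavior; a sketch of the portable alternative using memcpy (same example value as above):

#include <string.h>

unsigned int myInt = 0xBFFFF2E5; /* the same bit pattern as above */
float myFloat;
memcpy(&myFloat, &myInt, sizeof myFloat); /* myFloat is now about -1.9996 */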

Convert 64-bit double precision floating point data to Uint32 on a C compiler that doesn't support double precision

I need to decode a timestamp encoded as an IEEE double (from iOS NSTimeInterval) and stored in a 8 byte array, so that I can use ctime to print out the timestamp in a human readable format. This is trivial on most systems, but not on mine.
Example: on iOS side
uint8_t data[8];
double x = 3.14;
memcpy(data,&x,8);
I'm running on an MSP430F5438A and I need to use the COFF ABI to link with 3rd-party libraries. TI's Code Composer does not support 64-bit floating-point types (IEEE double) if you use the COFF ABI; it treats double the same as float (i.e. single precision).
These don't work:

uint8_t data[8];
double x;
memcpy(&x, data, 8);

or

x = *(double*)data;

or

union {
    uint8_t data[8];
    double d;
} x;
memcpy(x.data, data, 8);

I just get gibberish, because double is only 4 bytes using Code Composer. I need a way to directly convert the uint8_t[8] data (which is a legitimate IEEE double-precision value) into an integer value.
This will convert an integral double to an exact uint32_t (as long as there is no overflow; it works up to 2^32 - 1). It will not work if the double is NaN or Inf, but that is unlikely.
static unsigned long ConvertDoubleToULong(void* d)
{
    unsigned long long x;
    unsigned long long sign;
    long exponent;
    unsigned long long mantissa;

    memcpy(&x, d, 8);

    // IEEE binary64 format (unsupported)
    sign = (x >> 63) & 1;                    // 1 bit
    exponent = (x >> 52) & 0x7FF;            // 11 bits
    mantissa = x & 0x000FFFFFFFFFFFFFULL;    // 52 bits
    exponent -= 1023;                        // remove the bias
    mantissa |= 0x0010000000000000ULL;       // add the implicit leading 1

    // The mantissa is scaled by 2^52; shift it so the value is an integer.
    int rshift = 52 - exponent;
    if (rshift > 52) {          // magnitude below 1: truncates to 0
        x = 0;
    } else if (rshift >= 0) {
        x = mantissa >> rshift;
    } else {                    // too large to represent: clamp
        x = 0x7FFFFFFF;
    }

    if (sign == 0) {
        return x;
    } else {
        return -x;
    }
}
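A usage sketch for the original timestamp problem (my own; 978307200 is the offset in seconds between the Unix epoch and the 2001-01-01 reference date that NSTimeInterval timestamps commonly use — skip it if your value is already Unix time):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static void PrintTimestamp(uint8_t data[8])
{
    /* Rebase from the 2001-01-01 reference date to the Unix epoch. */
    time_t t = (time_t)ConvertDoubleToULong(data) + 978307200;
    printf("%s", ctime(&t));
}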
