For an embedded software application on a microcontroller I have to write some kind of customised division operator for float. The approach I want to go with is briefly described below.
In fact, I am not sure whether this approach is efficient enough, in terms of execution time, for an embedded application with high performance requirements.
Does anybody have experience with different approaches to handling float division by zero that are efficient, or can be optimised, for embedded applications?
#include <stdint.h>

typedef union MyFloatType_
{
    uint32_t Ulong;   /* fixed-width type: unsigned long may be 64 bits on some targets */
    float    Float;
} MyFloatType;

float div_float(float a, float b)
{
    float numerator = a;
    MyFloatType denominator;
    denominator.Float = b;
    float result = 0.0f;

    if ((denominator.Ulong & 0x7fffffffUL) != 0UL)  /* ignores the sign bit: false only for +/-0.0f */
    {
        result = numerator / denominator.Float;
    }
    else
    {
        /* handle division by zero, for example: */
        result = 0.0f;
    }
    return result;
}
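For what it's worth, the same all-bits-but-sign test can be written with memcpy instead of a union (a sketch, assuming float is IEEE-754 single precision; union type-punning is valid in C, but this form is also well-defined in C++, and compilers typically optimize the copy away):

    #include <stdint.h>
    #include <string.h>

    static int is_float_zero(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret the bits of f */
        return (bits & 0x7fffffffu) == 0; /* true only for +0.0f and -0.0f */
    }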
In most applications, there will be some value below which a floating-point number "might as well be" zero. For example, consider something like:
float intensity(float dx, float dy)
{
    float result = 1/(dx*dx + dy*dy);
    if (result > 65535.0f) result = 65535.0f;
    return result;
}
If the divisor is less than 1/65535.0f, then in cases where the distance isn't exactly zero the function should return 65535.0f regardless of the actual value of the divisor, and such behavior would probably be useful even if it is zero. Thus, the function could be rewritten as:
float intensity(float dx, float dy)
{
    float distSq = dx*dx + dy*dy;
    if (distSq <= (1.0f/65535.0f))
        return 65535.0f;
    else
        return 1/distSq;
}
Note that the corner-case handling of this sort of code may be very slightly imperfect. While not a problem for 65535.0f in particular, there may be cases where distSq is precisely equal to the reciprocal of the maximum value, but the reciprocal of that is less than the maximum value. For example, if the maximum value was 46470.0f and distSq was 0.0000215238924f, the correct result would be 46459.9961f but the function would return 46470.0f. Such issues are unlikely to pose problems in practice, but one should be aware of them. Note that if the comparison had used less-than rather than less-than-or-equal, and the maximum had been 46590.0f, a distSq value of 0.0000214642932f would yield a result of 46589.0039, which exceeds the maximum.
Incidentally, on many systems, the cost of computing an approximate reciprocal of the divisor and multiplying by the dividend may be much cheaper than the cost of performing a floating-point division. Such an approach may be useful in the many situations where its precision would be adequate.
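To illustrate the idea, here is a sketch of reciprocal-multiply division using the classic Newton-Raphson refinement. The seed 48/17 - 32/17*m is the textbook choice for m in [0.5, 1); the use of frexpf/ldexpf, the three refinement steps, and the assumption that b is a nonzero, finite, normal float are all mine, and whether this actually beats a hardware divide is entirely target-specific:

    #include <math.h>

    float approx_div(float a, float b)
    {
        int e;
        float m = fabsf(frexpf(b, &e));             /* |b| = m * 2^e, m in [0.5, 1) */
        float r = 48.0f/17.0f - (32.0f/17.0f) * m;  /* seed: max relative error ~1/17 */
        r = r * (2.0f - m * r);                     /* each Newton step squares the error */
        r = r * (2.0f - m * r);
        r = r * (2.0f - m * r);                     /* ~single precision after 3 steps */
        return a * copysignf(ldexpf(r, -e), b);     /* 1/b = (1/m) * 2^-e, with b's sign */
    }

The result may differ from a true division in the last couple of ulps, which is the precision trade-off the paragraph above alludes to.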
It's not faster but potentially much slower: the float has to be moved to an integer register (possibly by writing it to memory and reading it back) before the integer operation can be performed. Just compare with == 0.0f instead; a plain floating-point comparison is the most efficient way.
If you want it to be "high performance" then try the following:
float div_float(float a, float b)
{
b = b == 0.0f ? 1.0f : b;
return a / b;
}
This can be optimized to a few simple instructions without branching, and is much faster overall, as branching kills performance. This approach is used in graphics drivers: if the user is supplying garbage data that will eventually result in division by zero, the results are already invalid, but throwing a floating-point exception is not desired.
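If the original "return 0 on division by zero" semantics are wanted instead, a variant in the same branch-free style might look like this (a sketch; the two selects can still compile to conditional moves):

    float div_float_zero(float a, float b)
    {
        float safe = (b == 0.0f) ? 1.0f : b;  /* never actually divide by zero */
        float r = a / safe;
        return (b == 0.0f) ? 0.0f : r;        /* select the documented result */
    }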
I was trying to write a program to calculate the value of x^n using a while loop:
#include <stdio.h>
#include <math.h>
int main()
{
float x = 3, power = 1, copyx;
int n = 22, copyn;
copyx = x;
copyn = n;
while (n)
{
if ((n % 2) == 1)
{
power = power * x;
}
n = n / 2;
x *= x;
}
printf("%g^%d = %f\n", copyx, copyn, power);
printf("%g^%d = %f\n", copyx, copyn, pow(copyx, copyn));
return 0;
}
Up to n = 15, my function and the pow function (from math.h) give the same value; but when n exceeds 15, they start giving different answers.
I cannot understand why there is a difference in the answers. Have I written the function in the wrong way, or is it something else?
You are mixing up two different types of floating-point data. The pow function uses the double type but your loop uses the float type (which has less precision).
You can make the results coincide by either using the double type for your x, power and copyx variables, or by calling the powf function (which uses the float type) instead of pow.
The latter adjustment (using powf) gives the following output (clang-cl compiler, Windows 10, 64-bit):
3^22 = 31381059584.000000
3^22 = 31381059584.000000
And, changing the first line of your main to double x = 3, power = 1, copyx; gives the following:
3^22 = 31381059609.000000
3^22 = 31381059609.000000
Note that, with larger and larger values of n, you are increasingly likely to get divergence between the results of your loop and the value calculated using the pow or powf library functions. On my platform, the double version gives the same results, right up to the point where the value overflows the range and becomes Infinity. However, the float version starts to diverge around n = 55:
3^55 = 174449198498104595772866560.000000
3^55 = 174449216944848669482418176.000000
When I run your code I get this:
3^22 = 31381059584.000000
3^22 = 31381059609.000000
This would be because pow returns a double but your code uses float. When I changed to powf I got identical results:
3^22 = 31381059584.000000
3^22 = 31381059584.000000
So simply use double everywhere if you need high resolution results.
Floating point math is imprecise (and float is worse than double, having even fewer bits to store the data in; using double might delay the imprecision longer). The pow function (usually) uses an exponentiation algorithm that minimizes precision loss, and/or delegates to a chip-level instruction that may do stuff more efficiently, more precisely, or both. There could be more than one implementation of pow too, depending on whether you tell the compiler to use strictly conformant floating point math, the fastest possible, the hardware instruction, etc.
Your code is fine (though using double would get more precise results), but matching the improved precision of math.h's pow is non-trivial; by the time you've done so, you'll have reinvented it. That's why you use the library function.
That said, for logically integer math as you're using here, precision loss from your algorithm likely doesn't matter, it's purely the float vs. double issue where you lose precision from the type itself. As a rule, default to using double, and only switch to float if you're 100% sure you don't need the precision and can't afford the extra memory/computation cost of double.
Precision
float x = 3, power = 1; ... power = power * x forms a float product.
pow(x, y) forms a double result and good implementations internally use even wider math.
OP's loop method incurs rounded results after the 15th iteration. These roundings slowly compound the inaccuracy of the final result.
3^16 is a 26-bit odd number.
float encodes all odd numbers exactly up to typically 2^24. Larger representable values are all even and have only 24 significant binary digits.
double encodes all odd numbers exactly up to typically 2^53.
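A minimal demonstration of that 2^24 boundary:

    #include <stdio.h>

    int main(void)
    {
        float f = 16777216.0f;        /* 2^24 */
        printf("%.1f\n", f);          /* 16777216.0 */
        printf("%.1f\n", f + 1.0f);   /* still 16777216.0: 16777217 has no float encoding */
        return 0;
    }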
To do a fair comparison, use:
double objects and pow() or
float objects and powf().
For large powers, the pow(f)() functions are all but certain to provide better answers than a loop, as such functions often use extended precision internally and well-managed rounding, unlike the loop approach.
I am new to C, and my task is to create a function
f(x) = sqrt[(x^2)+1]-1
that can handle very large numbers and very small numbers. I am submitting my script on an online interface that checks my answers.
For very large numbers I simplify the expression to:
f(x) = x-1
By just using the highest power. This was the correct answer.
The same logic does not work for smaller numbers. For small numbers (on the order of 1e-7), they are very quickly truncated to zero, even before they are squared. I suspect that this has to do with floating point precision in C. In my textbook, it says that the float type has smallest possible value of 1.17549e-38, with 6 digit precision. So although 1e-7 is much larger than 1.17e-38, it has a higher precision, and is therefore rounded to zero. This is my guess, correct me if I'm wrong.
As a solution, I am thinking that I should convert x to a long double when x < 1e-6. However when I do this, I still get the same error. Any ideas? Let me know if I can clarify. Code below:
#include <math.h>
#include <stdio.h>
double feval(double x) {
/* Insert your code here */
if (x > 1e299)
{
return x-1;
}
if (x < 1e-6)
{
long double g;
g = x;
printf("x = %Lf\n", g);
long double a;
a = pow(x,2);
printf("x squared = %Lf\n", a);
return sqrt(g*g+1.)- 1.;
}
else
{
printf("x = %f\n", x);
printf("Used third \n");
return sqrt(pow(x,2)+1.)-1;
}
}
int main(void)
{
double x;
printf("Input: ");
scanf("%lf", &x);
double b;
b = feval(x);
printf("%f\n", b);
return 0;
}
For small inputs, you're getting truncation error when you compute 1 + x^2. If x = 1e-7f, x*x will happily fit into a 32-bit floating-point number (with a little error, since 1e-7 has no exact floating-point representation), but x*x will be so much smaller than 1 that floating-point precision will not be sufficient to represent 1 + x*x.
It would be more appropriate to do a Taylor expansion of sqrt(1+x^2), which to lowest order would be
sqrt(1+x^2) = 1 + 0.5*x^2 + O(x^4)
Then, you could write your result as
sqrt(1+x^2)-1 = 0.5*x^2 + O(x^4),
avoiding the scenario where you add a very small number to 1.
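As a sketch (the 1e-4 threshold is an ad-hoc choice of mine; being lowest-order only, roughly half of double's digits survive near the threshold, whereas the algebraic rewrites in the other answers keep full precision):

    #include <math.h>

    double feval_taylor(double x)
    {
        if (fabs(x) < 1e-4)
            return 0.5 * x * x;             /* sqrt(1+x^2)-1 = x^2/2 + O(x^4) */
        return sqrt(x * x + 1.0) - 1.0;     /* safe once x*x is not tiny vs. 1 */
    }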
As a side note, you should not use pow for integer powers. For x^2, you should just do x*x. Arbitrary integer powers are a little trickier to do efficiently; the GNU scientific library for example has a function for efficiently computing arbitrary integer powers.
There are two issues here when implementing this naively: overflow or underflow in the intermediate computation of x * x, and subtractive cancellation in the final subtraction of 1. The second issue is an accuracy issue.
ISO C has a standard math function hypot(x, y) that performs the computation sqrt(x*x + y*y) accurately while avoiding underflow and overflow in the intermediate computation. A common approach to fixing issues with subtractive cancellation is to transform the computation algebraically so that it turns into multiplications and/or divisions.
Combining these two fixes leads to the following implementation for float argument. It has an error of less than 3 ulps across all possible inputs according to my testing.
/* Compute sqrt(x*x+1)-1 accurately and without spurious overflow or underflow */
float func (float x)
{
return (x / (1.0f + hypotf (x, 1.0f))) * x;
}
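For instance, a quick check at both extremes (test values chosen arbitrarily):

    #include <stdio.h>
    #include <math.h>

    float func(float x)  /* as above */
    {
        return (x / (1.0f + hypotf(x, 1.0f))) * x;
    }

    int main(void)
    {
        printf("%g\n", func(1e-20f));  /* ~5e-41; the naive formula returns 0 */
        printf("%g\n", func(1e25f));   /* ~1e25; naively, x*x would overflow */
        return 0;
    }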
A trick that is often useful in these cases is based on the identity
(a+1)*(a-1) = a*a-1
In this case
sqrt(x*x+1)-1 = (sqrt(x*x+1)-1) * (sqrt(x*x+1)+1) / (sqrt(x*x+1)+1)
              = (x*x+1-1) / (sqrt(x*x+1)+1)
              = x*x / (sqrt(x*x+1)+1)
The last formula can be used as an implementation. For very small x, sqrt(x*x+1)+1 will be close to 2 (for small enough x it will be exactly 2), but we don't lose precision in evaluating it.
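As a minimal implementation of that last formula (note that x*x can still overflow for huge x; the hypot-based version above avoids that as well):

    #include <math.h>

    double f(double x)
    {
        return x * x / (sqrt(x * x + 1.0) + 1.0);  /* no cancelling subtraction */
    }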
The problem isn't with running into the minimum value, but with the precision.
As you said yourself, float on your machine has about 7 digits of precision. So let's take x = 1e-7, so that x^2 = 1e-14. That's still well within the range of float, no problems there. But now add 1. The exact answer would be 1.00000000000001. But if we only have 7 digits of precision, this gets rounded to 1.0000000, i.e. exactly 1. So you end up computing sqrt(1.0)-1 which is exactly 0.
One approach would be to use the linear approximation of sqrt around x=1 that sqrt(x) ~ 1+0.5*(x-1). That would lead to the approximation f(x) ~ 0.5*x^2.
I am interested in writing a fast C program that flips the exponent of a double. For instance, this program should convert 1e300 to 1e-300. I guess the best way would be some bit operations, but I lack enough knowledge to fulfill that. Any good idea?
Assuming you mean to negate the decimal exponent, the power of ten exponent in scientific notation:
#include <math.h>
double negate_decimal_exponent(const double value)
{
if (value != 0.0) {
const double p = pow(10.0, -floor(log10(fabs(value))));
return (value * p) * p;
} else
return value;
}
Above, floor(log10(fabs(value))) is the base 10 logarithm of the absolute value of value, rounded down. Essentially, it is the power of ten exponent in value using the scientific notation. If we negate it, and raise ten to that power, we have the inverse of that power of ten.
We can't first calculate the square of p, because it might underflow when value is very large in magnitude, or overflow when value is very small in magnitude. Instead, we multiply value by p, so that the product is near unity in magnitude (that is, its decimal exponent is zero), and then multiply that by p again, to essentially negate the decimal exponent.
Because the base-ten logarithm of zero is undefined, we need to deal with that case separately. (I initially missed this corner case; thanks to chux for pointing it out.)
Here is an example program to demonstrate:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
double negate_decimal_exponent(const double value)
{
if (value != 0.0) {
const double p = pow(10.0, -floor(log10(fabs(value))));
return (value * p) * p;
} else
return value;
}
#define TEST(val) printf("negate_decimal_exponent(%.16g) = %.16g\n", val, negate_decimal_exponent(val))
int main(void)
{
TEST(1.0e300);
TEST(1.1e300);
TEST(-1.0e300);
TEST(-0.8e150);
TEST(0.35e-25);
TEST(9.83e-200);
TEST(23.4728395e-220);
TEST(0.0);
TEST(-0.0);
return EXIT_SUCCESS;
}
which, when compiled (remember to link with the math library, -lm) and run, outputs (on my machine; should output the same on all machines using IEEE-754 Binary64 for doubles):
negate_decimal_exponent(1e+300) = 1e-300
negate_decimal_exponent(1.1e+300) = 1.1e-300
negate_decimal_exponent(-1e+300) = -1e-300
negate_decimal_exponent(-8e+149) = -8e-149
negate_decimal_exponent(3.5e-26) = 3.5e+26
negate_decimal_exponent(9.83e-200) = 9.83e+200
negate_decimal_exponent(2.34728395e-219) = 2.34728395e+219
negate_decimal_exponent(0) = 0
negate_decimal_exponent(-0) = -0
Are there faster methods to do this?
Sure. Construct a look-up table of powers of ten, and use a binary search to find the largest entry that does not exceed value in magnitude. Have a second look-up table hold the corresponding multiplier 10^-e; multiplying value by it twice negates the decimal power of ten. Two multiplications are needed because a single factor 10^(-2e) does not have the necessary range. For a look-up table with a thousand exponents (which covers IEEE-754 doubles, but one should check at compile time that it covers DBL_MAX), that is about ten comparisons and two multiplications on floating-point values, so it would be quite fast.
A portable program could calculate the necessary tables at run time, too.
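A hypothetical sketch of that table approach (the table bounds, names, and run-time pow initialisation are my choices; subnormal inputs, and results that would leave double's range, are not handled):

    #include <math.h>
    #include <float.h>

    #define E_MIN DBL_MIN_10_EXP              /* typically -307 */
    #define E_MAX DBL_MAX_10_EXP              /* typically  308 */
    #define N_EXP (E_MAX - E_MIN + 1)

    static double pow10_tab[N_EXP];           /* pow10_tab[i] = 10^(E_MIN+i)  */
    static double inv10_tab[N_EXP];           /* inv10_tab[i] = 10^-(E_MIN+i) */

    static void build_tables(void)
    {
        for (int i = 0; i < N_EXP; i++) {
            pow10_tab[i] = pow(10.0, E_MIN + i);
            inv10_tab[i] = pow(10.0, -(E_MIN + i));
        }
    }

    static double negate_decimal_exponent_lut(double value)
    {
        if (value == 0.0)
            return value;
        const double m = fabs(value);
        int lo = 0, hi = N_EXP - 1;           /* binary search: ~10 steps */
        while (lo < hi) {                     /* largest i with tab[i] <= m */
            const int mid = (lo + hi + 1) / 2;
            if (pow10_tab[mid] <= m)
                lo = mid;
            else
                hi = mid - 1;
        }
        /* multiply twice by 10^-e; a single 10^(-2e) could be out of range */
        return (value * inv10_tab[lo]) * inv10_tab[lo];
    }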
Strange output when I use float instead of double
#include <stdio.h>
int main(void)
{
double p,p1,cost,cost1=30;
for (p = 0.1; p < 10;p=p+0.1)
{
cost = 30-6*p+p*p;
if (cost<cost1)
{
cost1=cost;
p1=p;
}
else
{
break;
}
printf("%lf\t%lf\n",p,cost);
}
printf("%lf\t%lf\n",p1,cost1);
}
This gives the expected output, with the minimum at p = 3.
But when I use float, the output is a little weird.
#include <stdio.h>
int main(void)
{
float p,p1,cost,cost1=40;
for (p = 0.1; p < 10;p=p+0.1)
{
cost = 30-6*p+p*p;
if (cost<cost1)
{
cost1=cost;
p1=p;
}
else
{
break;
}
printf("%f\t%f\n",p,cost);
}
printf("%f\t%f\n",p1,cost1);
}
Why is the increment of p in the second case going weird after 2.7?
This is happening because the float and double data types store numbers in base 2. Most base-10 fractions can't be stored exactly, and rounding errors add up much more quickly when using floats. Outside of embedded applications with limited memory, it's generally better, or at least easier, to use doubles for this reason.
To see this happening for double types, consider the output of this code:
#include <stdio.h>
int main(void)
{
double d = 0.0;
for (int i = 0; i < 100000000; i++)
d += 0.1;
printf("%f\n", d);
return 0;
}
On my computer, it outputs 9999999.981129. So after 100 million iterations, rounding error made a difference of 0.018871 in the result.
For more information about how floating-point data types work, read What Every Computer Scientist Should Know About Floating-Point Arithmetic. Or, as akira mentioned in a comment, see the Floating-Point Guide.
Your program can work fine with float. You don't need double to compute a table of 100 values to a few significant digits. You can use double, and if you do, it will have a chance of working even when you use binary floating-point at cross-purposes. The IEEE 754 double-precision format used for double by most C compilers is so precise that it makes many misuses of floating-point unnoticeable (but not all of them).
Values that are simple in decimal may not be simple in binary
A consequence is that a value that is simple in decimal may not be represented exactly in binary.
This is the case for 0.1: it is not simple in binary, and it is not represented exactly as either double or float, but the double representation has more digits and as a result, is closer to the intended value 1/10.
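A quick way to see this, printing more digits than either type actually stores:

    #include <stdio.h>

    int main(void)
    {
        /* neither type holds 1/10 exactly; double just gets closer */
        printf("%.20f\n", 0.1f);  /* 0.10000000149011611938 */
        printf("%.20f\n", 0.1);   /* 0.10000000000000000555 */
        return 0;
    }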
Floating-point operations are not exact in general
Binary floating-point operations in a format such as float or double have to produce a result in the intended format. This means some digits have to be dropped from the result each time an operation is computed. When using binary floating-point in an advanced manner, the programmer sometimes knows that the result will have few enough digits for all of them to be represented in the format (in other words, a floating-point operation can sometimes be exact, and advanced programmers can predict and take advantage of the conditions in which this happens). But here you are adding 0.1, which is not simple and (in binary) uses all the available digits, so most of the time this addition is not exact.
How to print a small table of values using only float
In for (p = 0.1; p < 10;p=p+0.1), the value of p, being a float, will be rounded at each iteration. Each iteration will be computed from a previous iteration that was already rounded, so the rounding errors will accumulate and make the end result drift away from the intended, mathematical value.
Here is a list of improvements over what you wrote, in reverse order of exactness:
for (i = 1, p = 0.1f; i < 100; i++, p = i * 0.1f)
In the above version, 0.1f is not exactly 1/10, but the computation of p involves only one multiplication and one rounding, instead of up to 100. That version gives a more precise approximation of i/10.
for (i = 1, p = 0.1f; i < 100; i++, p = i * 0.1)
In the very slightly different version above, i is multiplied by the double value 0.1, which more closely approximates 1/10. The result is always the closest float to i/10, but this solution is cheating a bit, since it uses a double multiplication. I said a solution existed with only float!
for (i = 1, p = 0.1f; i < 100; i++, p = i / 10.0f)
In this last solution, p is computed as the division of i, represented exactly as a float because it is a small integer, by 10.0f, which is also exact for the same reason. The only computation approximation is that of a single operation, and the arguments are exactly what we wanted them to, so this is the best solution. It produces the closest float to i/10 for all values of i between 1 and 99.
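Put together as a complete program (using the last variant):

    #include <stdio.h>

    int main(void)
    {
        /* each p is the float closest to i/10, so the table never drifts */
        for (int i = 1; i < 100; i++) {
            float p = i / 10.0f;
            printf("%f\t%f\n", p, 30 - 6 * p + p * p);
        }
        return 0;
    }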
I need to find the maximum and minimum of 8 float values I get. I did it as follows, but the float comparisons are going awry, as warned by any good C book!
How do I compute the max and min in an accurate way?
#include <stdio.h>
#include <conio.h>   /* for getch(); DOS/Windows-specific */

float mymax(float a, float b);
float mymin(float a, float b);

int main(void)
{
    float mx, mx1, mx2, mx3, mx4, mn, mn1, mn2, mn3, mn4, tm1, tm2;

    mx1 = mymax(2.1, 2.01); /* this returns 2.09999 instead of 2.1 because a is passed as 2.09999 */
    mx2 = mymax(-3.5, 7.000001);
    mx3 = mymax(7, 5);
    mx4 = mymax(7.0000011, 0); /* this returns incorrectly- 7.000001 */
    tm1 = mymax(mx1, mx2);
    tm2 = mymax(mx3, mx4);
    mx = mymax(tm1, tm2);

    mn1 = mymin(2.1, 2.01);
    mn2 = mymin(-3.5, 7.000001);
    mn3 = mymin(7, 5);
    mn4 = mymin(7.0000011, 0);
    tm1 = mymin(mn1, mn2);   /* note: the original used mx1..mx4 here, which computes the min of the maxima */
    tm2 = mymin(mn3, mn4);
    mn = mymin(tm1, tm2);

    printf("Max is %f, Min is %f\n", mx, mn);
    getch();
    return 0;
}
float mymax(float a,float b)
{
if(a >= b)
{
return a;
}
else
{
return b;
}
}
float mymin(float a,float b)
{
if(a <= b)
{
return a;
}
else
{
return b;
}
}
How can I do exact comparisons of these floats? This is all C code.
thank you.
-AD.
You are doing exact comparison of these floats. The problem (with your example code at least) is that float simply does not have enough digits of precision to represent the values of your literals sufficiently. 7.000001 and 7.0000011 simply are so close together that the mantissa of a 32 bit float cannot represent them differently.
But the example seems artificial. What is the real problem you're trying to solve? What values will you actually be working with? Or is this just an academic exercise?
The best solution depends on the answer to that. If your actual values just require somewhat more precision than float can provide, use double. If you need exact representation of decimal digits, use a decimal type library. If you want to improve your understanding of how floating-point values work, read The Floating-Point Guide.
You can do exact comparison of floats: either directly as floats, or by reinterpreting their bit patterns as integers.

    #include <string.h>

    float a = 1.0f;
    float b = 2.0f;
    int ia, ib;
    memcpy(&ia, &a, sizeof ia);  /* reinterpret the bits without aliasing issues */
    memcpy(&ib, &b, sizeof ib);
    /* You can compare a and b, or ia and ib, and get the same ordering for
       non-negative values (provided float and int are both 32 bits and
       floats are IEEE-754). Negative floats compare in reverse order as
       signed ints, so they would need an extra transformation first. */
But you will never be able to represent 2.1 exactly as a float.
Your problem is not a problem of comparison; it is a problem of representing a value.
I'd claim that these comparisons are actually exact, since no value is altered.
The problem is that many float literals, such as 2.1, can't be represented exactly by IEEE-754 floating-point numbers.
If you need an exact representation of base-10 decimal numbers, you could, for example, write your own fixed-point BCD arithmetic.
Concerning finding the min and max at the same time:
A way that needs fewer comparisons is to first compare each pair (2*i, 2*i+1) and split it into a smaller and a larger element (n/2 comparisons).
Then find the minimum of the smaller elements (n/2 - 1 comparisons) and the maximum of the larger elements (n/2 - 1 comparisons).
So we get 3*n/2 - 2 comparisons in total, instead of the 2*n - 2 needed when finding the minimum and the maximum separately.
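A sketch of that pairwise scheme for the eight values in the question (function name and layout are mine):

    /* 3*8/2 - 2 = 10 comparisons instead of 14 */
    static void minmax8(const float v[8], float *mn, float *mx)
    {
        float lo[4], hi[4];
        for (int i = 0; i < 4; i++) {              /* 4 comparisons */
            if (v[2*i] <= v[2*i + 1]) { lo[i] = v[2*i];     hi[i] = v[2*i + 1]; }
            else                      { lo[i] = v[2*i + 1]; hi[i] = v[2*i]; }
        }
        *mn = lo[0];
        *mx = hi[0];
        for (int i = 1; i < 4; i++) {              /* 3 + 3 comparisons */
            if (lo[i] < *mn) *mn = lo[i];
            if (hi[i] > *mx) *mx = hi[i];
        }
    }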
The < and > comparisons always work correctly with floats or doubles. Only the == comparison has problems, which is why you are advised to use an epsilon.
So your method of calculating min and max has no issue. One note: if you use float, you should write the literals as 2.1f instead of 2.1.