Vectorizable implementation of complementary error function erfcf() - c

The complementary error function, erfc, is a special function closely related to the standard normal distribution. It is frequently used in statistics and the natural sciences (e.g. diffusion problems) where the "tails" of this distribution need to be considered, and use of the error function, erf, is therefore not suitable.
The complementary error function was made available in the ISO C99 standard math library as the functions erfcf, erfc, and erfcl; these were subsequently adopted into ISO C++ as well. Thus source code can readily be found in open-source implementations of that library, for example in glibc.
However, many existing implementations are scalar in nature, while modern processor hardware is SIMD-oriented (either explicitly, as in x86 CPU, or implicitly, as in GPUs). For performance reasons, a vectorizable implementation is therefore highly desirable. This means branches need to be avoided, except as part of select assignment. Likewise, extensive use of tables is not indicated, as parallelized lookup is often inefficient.
How would one go about constructing an efficient vectorizable implementation of the single-precision function erfcf()? The accuracy, as measured in ulp, should be roughly the same as glibc's scalar implementation, which has a maximum error of 3.12575 ulps (determined by exhaustive testing). The availability of fused multiply-add (FMA) can be assumed, as all major processor architectures (CPUs and GPUs) offer it at this time. While handling of floating-point status flags and errno can be ignored, denormals, infinities, and NaNs should be handled in accordance with the IEEE 754 bindings for ISO C.

After looking into various approaches, the one that seems most suitable is the algorithm proposed in the following paper:
M. M. Shepherd and J. G. Laframboise, "Chebyshev Approximation of (1 + 2x) exp(x²) erfc x in 0 ≤ x < ∞." Mathematics of Computation, Volume 36, No. 153, January 1981, pp. 249-253 (online copy)
The basic idea of the paper is to create an approximation to (1 + 2x) exp(x²) erfc(x), from which we can compute erfcx(x) by simply dividing by (1 + 2x), and erfc(x) by then multiplying with exp(-x²). The tightly bounded range of the function, with function values roughly in [1, 1.3], and its general "flatness" lend themselves well to polynomial approximation. Numerical properties of this approach are further improved by narrowing the approximation interval: the original argument x is transformed by q = (x - K) / (x + K), where K is a suitably chosen constant, followed by computing p(q), where p is a polynomial.
Since erfc(-x) = 2 - erfc(x), we only need to consider the interval [0, ∞), which is mapped to the interval [-1, 1] by this transformation. For IEEE-754 single precision, erfcf() vanishes (becomes zero) for x > 10.0546875, so one needs to consider only x ∈ [0, 10.0546875). What is the "optimal" value of K for this range? I know of no mathematical analysis that would provide the answer; the paper suggests K = 3.75 based on experiments.
One can readily establish that for single-precision computation, a minimax polynomial approximation of degree 9 is sufficient for various values of K in that general vicinity. Systematically generating such approximations with the Remez algorithm, with K varying between 1.5 and 4 in steps of 1/16, the lowest approximation error is observed for K = {2, 2.625, 3.3125}. Of these, K = 2 is the most advantageous choice, since it lends itself to very accurate computation of (x - K) / (x + K), as shown in this question.
The value K = 2 and the input domain for x would suggest that it is necessary to use variant 4 from my answer; however, one can demonstrate experimentally that the less expensive variant 5 achieves the same accuracy here, which is likely due to the very shallow slope of the approximated function for q > -0.5, which causes any error in the argument q to be reduced by roughly a factor of ten.
Since computation of erfc() requires post-processing steps in addition to the initial approximation, it is clear that the accuracy of both of these computations must be high in order to achieve a sufficiently accurate final result. Error correcting techniques must be used.
One observes that the most significant coefficient in the polynomial approximation of (1 + 2 x) exp(x2) erfc(x) is of the form (1 + s), where s < 0.5. This means we can represent the leading coefficient more accurately by splitting off 1, and only using s in the polynomial. So instead of computing a polynomial p(q), then multiplying by the reciprocal r = 1 / (1 + 2 x), it is mathematically equivalent but numerically advantageous to compute the core approximation as p(q) + 1, and use p to compute fma (p, r, r).
The accuracy of the division can be enhanced by computing an initial quotient q from the reciprocal r, computing the residual e = (p+1) - q * (1 + 2x) with the help of an FMA, and then using e to apply the correction q = q + (e * r), again using an FMA.
Exponentiation has error magnification properties, therefore the computation of e^(-x²) must be performed carefully. The availability of FMA trivially allows the computation of -x² as a double-float s_high:s_low. e^x is its own derivative, so one can compute e^(s_high:s_low) as e^(s_high) + e^(s_high) * s_low. This computation can be combined with the multiplication by the previous intermediate result r to yield r = r * e^(s_high) + r * e^(s_high) * s_low. By use of FMA, one ensures that the most significant term r * e^(s_high) is computed as accurately as possible.
Combining the steps above with a few simple selections to handle exceptional cases and negative arguments, one arrives at the following C code:
#include <math.h>   // fabsf(), fmaf(), ldexpf(), INFINITY

float my_expf (float);
/* Compute complementary error function.
*
* Based on: M. M. Shepherd and J. G. Laframboise, "Chebyshev Approximation of
* (1+2x)exp(x^2)erfc x in 0 <= x < INF", Mathematics of Computation, Vol. 36,
* No. 153, January 1981, pp. 249-253.
*
* maximum error: 2.65184 ulps
*/
float my_erfcf (float x)
{
    float a, d, e, p, q, r, s, t;
    a = fabsf (x);
    /* Compute q = (a-2)/(a+2) accurately. [0, 10.0546875] -> [-1, 0.66818] */
    p = a + 2.0f;
    r = 1.0f / p;
    q = fmaf (-4.0f, r, 1.0f);
    t = fmaf (q + 1.0f, -2.0f, a);
    e = fmaf (-a, q, t);
    q = fmaf (r, e, q);
    /* Approximate (1+2*a)*exp(a*a)*erfc(a) as p(q)+1 for q in [-1, 0.66818] */
    p =             -0x1.a4a000p-12f;  // -4.01139259e-4
    p = fmaf (p, q, -0x1.42a260p-10f); // -1.23075210e-3
    p = fmaf (p, q,  0x1.585714p-10f); //  1.31355342e-3
    p = fmaf (p, q,  0x1.1adcc4p-07f); //  8.63227434e-3
    p = fmaf (p, q, -0x1.081b82p-07f); // -8.05991981e-3
    p = fmaf (p, q, -0x1.bc0b6ap-05f); // -5.42046614e-2
    p = fmaf (p, q,  0x1.4ffc46p-03f); //  1.64055392e-1
    p = fmaf (p, q, -0x1.540840p-03f); // -1.66031361e-1
    p = fmaf (p, q, -0x1.7bf616p-04f); // -9.27639827e-2
    p = fmaf (p, q,  0x1.1ba03ap-02f); //  2.76978403e-1
    /* Divide (1+p) by (1+2*a) ==> exp(a*a)*erfc(a) */
    d = fmaf (2.0f, a, 1.0f);
    r = 1.0f / d;
    q = fmaf (p, r, r); // q = (p+1)/(1+2*a)
    e = fmaf (fmaf (q, -a, 0.5f), 2.0f, p - q); // residual: (p+1)-q*(1+2*a)
    r = fmaf (e, r, q);
    /* Multiply by exp(-a*a) ==> erfc(a) */
    s = a * a;
    e = my_expf (-s);
    t = fmaf (-a, a, s);
    r = fmaf (r, e, r * e * t);
    /* Handle NaN, Inf arguments to erfc() */
    if (!(a < INFINITY)) r = x + x;
    /* Clamp result for large arguments */
    if (a > 10.0546875f) r = 0.0f;
    /* Handle negative arguments to erfc() */
    if (x < 0.0f) r = 2.0f - r;
    return r;
}
/* Compute exponential base e. Maximum ulp error = 0.86565 */
float my_expf (float a)
{
    float c, f, r;
    int i;
    // exp(a) = exp(i + f); i = rint (a / log(2))
    c = 0x1.800000p+23f; // 1.25829120e+7
    r = fmaf (0x1.715476p+0f, a, c) - c; // 1.44269502e+0
    f = fmaf (r, -0x1.62e400p-01f, a); // -6.93145752e-1 // log_2_hi
    f = fmaf (r, -0x1.7f7d1cp-20f, f); // -1.42860677e-6 // log_2_lo
    i = (int)r;
    // approximate r = exp(f) on interval [-log(2)/2,+log(2)/2]
    r = 0x1.694000p-10f; // 1.37805939e-3
    r = fmaf (r, f, 0x1.125edcp-07f); // 8.37312452e-3
    r = fmaf (r, f, 0x1.555b5ap-05f); // 4.16695364e-2
    r = fmaf (r, f, 0x1.555450p-03f); // 1.66664720e-1
    r = fmaf (r, f, 0x1.fffff6p-02f); // 4.99999851e-1
    r = fmaf (r, f, 0x1.000000p+00f); // 1.00000000e+0
    r = fmaf (r, f, 0x1.000000p+00f); // 1.00000000e+0
    // exp(a) = 2**i * exp(f);
    r = ldexpf (r, i);
    if (!(fabsf (a) < 104.0f)) {
        r = a + a; // handle NaNs
        if (a < 0.0f) r = 0.0f;
        if (a > 0.0f) r = INFINITY;
    }
    return r;
}
I used my own implementation of expf() in the above code to isolate my work from differences in the expf() implementations on different compute platforms. But any implementation of expf() whose maximum error is close to 0.5 ulp should work well. As shown above, when using my_expf(), my_erfcf() has a maximum error of 2.65184 ulps.
Provided a vectorizable expf() is available, the code above should vectorize without problem. I did a quick check with the Intel compiler 13.1.3.198. I put a call to my_erfcf() in a loop, added #include <mathimf.h>, replaced the call to my_expf() with a call to expf(), then compiled using these command line switches:
/Qstd=c99 /O3 /QxCORE-AVX2 /fp:precise /Qfma /Qimf-precision:high:expf /Qvec_report=2
The Intel compiler reported that the loop had been vectorized, which I double checked by inspection of the disassembled binary code.
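For reference, a minimal sketch of such a test harness is shown below (the function name and loop are mine, not the exact code used in the check); it simply maps my_erfcf() over an array in a countable loop that an auto-vectorizer can process:

#include <stddef.h>

float my_erfcf (float);

/* Hypothetical vectorization test harness: apply my_erfcf() elementwise */
void erfcf_array (const float * restrict src, float * restrict dst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        dst[i] = my_erfcf (src[i]);
    }
}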
Since my_erfcf() only uses reciprocals rather than full divisions, it is amenable to the use of fast reciprocal implementations, provided they deliver almost correctly-rounded results. For processors that provide a fast single-precision reciprocal approximation in hardware, this can easily be achieved by coupling this with a Halley iteration with cubic convergence. A (scalar) example of this approach for x86 processors is:
#include <math.h>        // fmaf()
#include <xmmintrin.h>   // _mm_set_ss(), _mm_rcp_ss(), _mm_store_ss()

/* Compute 1.0f / a almost correctly rounded. Halley iteration with cubic convergence */
float fast_recipf (float a)
{
    __m128 t;
    float e, r;
    t = _mm_set_ss (a);
    t = _mm_rcp_ss (t);
    _mm_store_ss (&r, t);
    e = fmaf (r, -a, 1.0f);
    e = fmaf (e, e, e);
    r = fmaf (e, r, r);
    return r;
}
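To illustrate how this would slot into my_erfcf(), below is a sketch (untested; the helper name is mine) of the residual-corrected division step with the full-precision reciprocal replaced by fast_recipf():

#include <math.h>

float fast_recipf (float);

/* Sketch: the (p+1)/(1+2*a) division step of my_erfcf(), with residual
   correction, built on the nearly correctly rounded fast reciprocal */
float erfcx_divide_step (float p, float a)
{
    float d, r, q, e;
    d = fmaf (2.0f, a, 1.0f);                    // 1 + 2*a
    r = fast_recipf (d);                         // nearly correctly rounded 1/(1+2*a)
    q = fmaf (p, r, r);                          // q = (p+1)/(1+2*a)
    e = fmaf (fmaf (q, -a, 0.5f), 2.0f, p - q);  // residual: (p+1) - q*(1+2*a)
    return fmaf (e, r, q);                       // apply correction
}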

Related

Efficient and accurate computation of the Mills ratio of the standard normal distribution

The Mills ratio M(x) was introduced by John Mills to express the relationship between a distribution's cumulative distribution function and its probability density function:
J. P. Mills, "Table of the ratio: Area to bounding ordinate, for any portion of normal curve". Biometrika, Vol. 18, No. 3/4 (Nov. 1926), pp. 395-400. (online)
The definition of the Mills ratio is (1 - D(x)) / P(x), where D denotes the distribution function and P(x) is the probability density function. In the specific case of the standard normal distribution, we then have M(x) = (1 - Φ(x)) / ϕ(x) = Φ(-x) / ϕ(x), or, when expressed via the complementary error function, M(x) = e^(x²/2) √(π/2) erfc(x/√2) = √(π/2) erfcx(x/√2).
Previous questions have dealt with the computation of the Mills ratio in mathematical environments like R and Matlab, but the sophisticated computational facilities of these environments have no equivalent in C. How can one compute the Mills ratio for the standard normal distribution accurately and efficiently using just the C standard math library?
In previous answers I have shown how to use the C standard math library to efficiently and accurately compute the PDF of the standard normal distribution, normpdf(), the CDF of the standard normal distribution, normcdf(), and the scaled complementary error function erfcx(). Based on these three implementations, one could easily code the computation of the Mills ratio in a straightforward manner in one of the following two ways:
double my_mills_ratio_1 (double a)
{
    return my_normcdf (-a) / my_normpdf (a);
}

double my_mills_ratio_2 (double a)
{
    const double SQRT_HALF_HI = 0x1.6a09e667f3bccp-01; // 1/sqrt(2), msbs
    const double SQRT_HALF_LO = 0x1.21165f626cdd5p-54; // 1/sqrt(2), lsbs
    const double SQRT_PIO2_HI = 0x1.40d931ff62705p+00; // sqrt(pi/2), msbs
    const double SQRT_PIO2_LO = 0x1.2caf9483f5ce4p-53; // sqrt(pi/2), lsbs
    double r;
    a = fma (SQRT_HALF_HI, a, SQRT_HALF_LO * a);
    r = my_erfcx (a);
    return fma (SQRT_PIO2_HI, r, SQRT_PIO2_LO * r);
}
However, both of these approaches are numerically flawed. For my_mills_ratio_1(), both the PDF term and the CDF term vanish rapidly in the positive half-plane as the magnitude of the argument increases. In IEEE-754 double precision both become zero around a = 38, leading to a NaN result due to a division of zero by zero. With regard to my_mills_ratio_2(), the exponential growth in the negative half-plane leads to error magnification and therefore large ulp errors. One way to fix this is to simply combine the well-behaved portions of each of the two approximations:
double my_mills_ratio_3 (double a)
{
    return (a < 0) ? my_mills_ratio_1 (a) : my_mills_ratio_2 (a);
}
This works reasonably well. Using the Intel compiler version 13.1.3.198 to build the code I presented in my previous answers, using 4 billion test vectors, a maximum error of 2.79346 ulps is observed in the positive half-plane, while a maximum error of 6.81248 ulps is observed in the negative half-plane. The somewhat larger errors in the negative half-plane occur for large results close to overflow, because at that point the values of the PDF are so small that they are represented as subnormal double-precision numbers with reduced accuracy.
One alternative solution is to address the error magnification issues affecting my_mills_ratio_2() in the negative half-plane. One can do so by computing the argument to erfcx() to better than double precision, and using the low-order bits of this argument for linear interpolation of the erfcx() result.
For this, one also needs the slope of erfcx(x), which is erfcx'(x) = 2x erfcx(x) - 2/√π. Availability of the FMA (fused multiply-add) operation via the C standard math function fma() provides for the efficient implementation of this quasi double-double computation. The risk of overflow in the magnitude of the slope during intermediate computations can be avoided by local rescaling.
The resulting implementation has an error of less than 4 ulps across the entire input domain:
/* Compute Mills ratio of the standard normal distribution:
*
* M(x) = normcdf(-x)/normpdf(x) = sqrt(pi/2) * erfcx(x/sqrt(2))
*
* maximum ulp error in positive half-plane: 2.79346
* maximum ulp error in negative half-plane: 3.90753
*/
double my_mills_ratio (double a)
{
    double s, t, r, h, l;
    const double SQRT_HALF_HI = 0x1.6a09e667f3bccp-01; // 1/sqrt(2), msbs
    const double SQRT_HALF_LO = 0x1.21165f626cdd5p-54; // 1/sqrt(2), lsbs
    const double SQRT_PIO2_HI = 0x1.40d931ff62705p+00; // sqrt(pi/2), msbs
    const double SQRT_PIO2_LO = 0x1.2caf9483f5ce4p-53; // sqrt(pi/2), lsbs
    const double TWO_RSQRT_PI = 0x1.20dd750429b6dp+00; // 2/sqrt(pi)
    const double MAX_IEEE_DBL = 0x1.fffffffffffffp+1023;
    const double SCALE_DOWN = 0.03125; // prevent ovrfl in intermed. computation
    const double SCALE_UP = 1.0 / SCALE_DOWN;
    // Compute argument a/sqrt(2) as a head-tail pair of doubles h:l
    h = fma (SQRT_HALF_HI, a, SQRT_HALF_LO * a);
    l = fma (-SQRT_HALF_LO, a, fma (-SQRT_HALF_HI, a, h));
    // Compute scaled complementary error function for argument "head"
    t = my_erfcx (h);
    // Enhance accuracy if in negative half-plane, if result has not overflowed
    if ((a < -1.0) && (t <= MAX_IEEE_DBL)) {
        // Compute slope: erfcx'(x) = 2x * erfcx(x) - 2/sqrt(pi)
        s = fma (h, t * SCALE_DOWN, -TWO_RSQRT_PI * SCALE_DOWN); // slope
        // Linearly interpolate result based on derivative and argument "tail"
        t = fma (s, -2.0 * SCALE_UP * l, t);
    }
    // Scale by sqrt(pi/2) for final result
    r = fma (SQRT_PIO2_HI, t, SQRT_PIO2_LO * t);
    return r;
}
The single-precision implementation looks almost identical, except for the constants involved:
/* Compute Mills ratio of the standard normal distribution:
*
* M(x) = normcdf(-x)/normpdf(x) = sqrt(pi/2) * erfcx(x/sqrt(2))
*
* maximum ulp error in positive half-plane: 2.41987
* maximum ulp error in negative half-plane: 3.39521
*/
float my_mills_ratio_f (float a)
{
    float h, l, r, s, t;
    const float SQRT_HALF_HI = 0x1.6a09e6p-01f; // sqrt(1/2), msbs
    const float SQRT_HALF_LO = 0x1.9fcef4p-27f; // sqrt(1/2), lsbs
    const float SQRT_PIO2_HI = 0x1.40d930p+00f; // sqrt(pi/2), msbs
    const float SQRT_PIO2_LO = 0x1.ff6270p-24f; // sqrt(pi/2), lsbs
    const float TWO_RSQRT_PI = 0x1.20dd76p+00f; // 2/sqrt(pi)
    const float MAX_IEEE_FLT = 0x1.fffffep+127f;
    const float SCALE_DOWN = 0.0625f; // prevent ovrfl in intermed. computation
    const float SCALE_UP = 1.0f / SCALE_DOWN;
    // Compute argument a/sqrt(2) as a head-tail pair of floats h:l
    h = fmaf (SQRT_HALF_HI, a, SQRT_HALF_LO * a);
    l = fmaf (-SQRT_HALF_LO, a, fmaf (-SQRT_HALF_HI, a, h));
    // Compute scaled complementary error function for argument "head"
    t = my_erfcxf (h);
    // Enhance accuracy if in negative half-plane, if result has not overflowed
    if ((a < -1.0f) && (t <= MAX_IEEE_FLT)) {
        // Compute slope: erfcx'(x) = 2x * erfcx(x) - 2/sqrt(pi)
        s = fmaf (h, t * SCALE_DOWN, -TWO_RSQRT_PI * SCALE_DOWN);
        // Linearly interpolate result based on derivative and argument "tail"
        t = fmaf (s, -2.0f * SCALE_UP * l, t);
    }
    // Scale by sqrt(pi/2) for final result
    r = fmaf (SQRT_PIO2_HI, t, SQRT_PIO2_LO * t);
    return r;
}

Efficiently computing (a - K) / (a + K) with improved accuracy

In various contexts, for example for the argument reduction for mathematical functions, one needs to compute (a - K) / (a + K), where a is a positive variable argument and K is a constant. In many cases, K is a power of two, which is the use case relevant to my work. I am looking for efficient ways to compute this quotient more accurately than can be accomplished with the straightforward division. Hardware support for fused multiply-add (FMA) can be assumed, as this operation is provided by all major CPU and GPU architectures at this time, and is available in C/C++ via the functions fma() and fmaf().
For ease of exploration, I am experimenting with float arithmetic. Since I plan to port the approach to double arithmetic as well, no operations using higher than the native precision of both argument and result may be used. My best solution so far is:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 1 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
t = fmaf (q, -2.0f*K, m);
e = fmaf (q, -m, t);
q = fmaf (r, e, q);
For arguments a in the interval [K/2, 4.23*K], code above computes the quotient almost correctly rounded for all inputs (maximum error is exceedingly close to 0.5 ulps), provided that K is a power of 2, and there is no overflow or underflow in intermediate results. For K not a power of two, this code is still more accurate than the naive algorithm based on division. In terms of performance, this code can be faster than the naive approach on platforms where the floating-point reciprocal can be computed faster than the floating-point division.
I make the following observation when K = 2^n: When the upper bound of the work interval increases to 8*K, 16*K, ..., the maximum error increases gradually and starts to slowly approach the maximum error of the naive computation from below. Unfortunately, the same does not appear to be true for the lower bound of the interval. If the lower bound drops to 0.25*K, the maximum error of the improved method above equals the maximum error of the naive method.
Is there a method to compute q = (a - K) / (a + K) that can achieve smaller maximum error (measured in ulp vs the mathematical result) compared to both the naive method and the above code sequence, over a wider interval, in particular for intervals whose lower bound is less than 0.5*K? Efficiency is important, but a few more operations than are used in the above code can likely be tolerated.
In one answer below, it was pointed out that I could enhance accuracy by returning the quotient as an unevaluated sum of two operands, that is, as a head-tail pair q:qlo, similar to the well-known double-float and double-double formats. In my code above, this would mean changing the last line to qlo = r * e.
This approach is certainly useful, and I had already contemplated its use for an extended-precision logarithm for use in pow(). But it doesn't fundamentally help with the desired widening of the interval on which the enhanced computation provides more accurate quotients. In a particular case I am looking at, I would like to use K=2 (for single precision) or K=4 (for double precision) to keep the primary approximation interval narrow, and the interval for a is roughly [0,28]. The practical problem I am facing is that for arguments < 0.25*K the accuracy of the improved division is not substantially better than with the naive method.
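For illustration, a head-tail version of Variant 1 along these lines would look as follows (a sketch; only the final line differs, leaving the correction in qlo instead of folding it back into q):

/* Compute q:qlo ~= (a - K) / (a + K) as a head-tail pair; sketch based on Variant 1 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
t = fmaf (q, -2.0f*K, m);
e = fmaf (q, -m, t);
qlo = r * e;   // tail of the quotient; the head is q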
If a is large compared to K, then (a-K)/(a+K) = 1 - 2K / (a + K) will give a good approximation. If a is small compared to K, then 2a / (a + K) - 1 will give a good approximation. If K/2 ≤ a ≤ 2K, then a-K is an exact operation, so doing the division will give a decent result.
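A sketch of this case split might look as follows; the crossover thresholds BIG and SMALL are placeholders that would have to be determined by experiment:

/* Sketch of the regime-based suggestion above; thresholds are illustrative only */
if (a > BIG * K) {
    q = fmaf (-2.0f * K, 1.0f / (a + K), 1.0f);  // 1 - 2K/(a+K)
} else if (a < SMALL * K) {
    q = fmaf (2.0f * a, 1.0f / (a + K), -1.0f);  // 2a/(a+K) - 1
} else {
    q = (a - K) / (a + K);                       // a-K is exact for K/2 <= a <= 2K
}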
One possibility is to track the error of m and p into m1 and p1 with the classical Dekker/Shewchuk splits:
m=a-k;
k0=a-m;
a0=k0+m;
k1=k0-k;
a1=a-a0;
m1=a1+k1;
p=a+k;
k0=p-a;
a0=p-k0;
k1=k-k0;
a1=a-a0;
p1=a1+k1;
Then, correct the naive division:
q=m/p;
r0=fmaf(p,-q,m);
r1=fmaf(p1,-q,m1);
r=r0+r1;
q1=r/p;
q=q+q1;
That'll cost you 2 divisions, but should be near half ulp if I didn't screw up.
But these divisions can be replaced by multiplications with inverse of p without any problem, since the first incorrectly rounded division will be compensated by remainder r, and second incorrectly rounded division does not really matter (the last bits of correction q1 won't change anything).
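A sketch of that substitution, continuing from the Dekker-style splits above (m, m1, p, p1 already computed), with a single reciprocal of p replacing both divisions (the variable names rcp, r0, r1, q1 are mine):

/* Sketch: same correction, with the divisions replaced by multiplications
   by the reciprocal of p */
rcp = 1.0f / p;    // or a fast, nearly correctly rounded reciprocal
q   = m * rcp;
r0  = fmaf (p,  -q, m);
r1  = fmaf (p1, -q, m1);
q1  = (r0 + r1) * rcp;
q   = q + q1;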
I don't really have an answer (proper floating point error analyses are very tedious) but a few observations:
Fast reciprocal instructions (such as RCPSS) are not as accurate as division, so you may see a reduction in accuracy if using these.
m is computed exactly if a ∈ [0.5×Kb, 2^(1+n)×Kb), where Kb is the power of 2 below K (or K itself if K is a power of 2), and n is the number of trailing zeros in the significand of K (i.e. if K is a power of 2, then n=23).
This is similar to a simplified form of the div2 algorithm from Dekker (1971): to expand the range (particularly the lower bound), you'll probably have to incorporate more correction terms from this (i.e. store m as the sum of 2 floats, or use a double).
Since my goal is to merely widen the interval on which accurate results are achieved, rather than to find a solution that works for all possible values of a, making use of double-float arithmetic for all intermediate computation seems too costly.
Thinking some more about the problem, it is clear that the computation of the remainder of the division, e in the code from my question, is the crucial part of achieving a more accurate result. Mathematically, the remainder is (a-K) - q * (a+K). In my code, I simply used m to represent (a-K) and represented (a+K) as m + 2*K, as this delivers numerically superior results compared to the straightforward representation.
With relatively small additional computational cost, (a+K) can be represented as a double-float, that is, a head-tail pair p:plo, which leads to the following modified version of my original code:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 2 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
mx = fmaxf (a, K);
mn = fminf (a, K);
plo = (mx - p) + mn;
t = fmaf (q, -p, m);
e = fmaf (q, -plo, t);
q = fmaf (r, e, q);
Testing shows that this delivers nearly correctly rounded results for a in [K/2, 2^24*K), allowing for a substantial increase to the upper bound of the interval on which accurate results are achieved.
Widening the interval at the lower end requires the more accurate representation of (a-K). We can compute this as a double-float head-tail pair m:mlo, which leads to the following code variant:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 3 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
plo = (a < K) ? ((K - p) + a) : ((a - p) + K);
mlo = (a < K) ? (a - (K + m)) : ((a - m) - K);
t = fmaf (q, -p, m);
e = fmaf (q, -plo, t);
e = e + mlo;
q = fmaf (r, e, q);
Exhaustive testing shows that this delivers nearly correctly rounded results for a in the interval [K/2^24, 2^24*K). Unfortunately, this comes at a cost of ten additional operations compared to the code in my question, which is a steep price to pay to get the maximum error from around 1.625 ulps with the naive computation down to near 0.5 ulp.
As in my original code from the question, one can express (a+K) in terms of (a-K), thus eliminating the computation of the tail of p, plo. This approach results in the following code:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 4 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
mlo = (a < K) ? (a - (K + m)) : ((a - m) - K);
t = fmaf (q, -2.0f*K, m);
t = fmaf (q, -m, t);
e = fmaf (q - 1.0f, -mlo, t);
q = fmaf (r, e, q);
This turns out to be advantageous if the main focus is decreasing the lower limit of the interval, which is my particular focus as explained in the question. Exhaustive testing of the single-precision case shows that when K = 2^n nearly correctly rounded results are produced for values of a in the interval [K/2^24, 4.23*K]. With a total of 14 or 15 operations (depending on whether an architecture supports full predication or just conditional moves), this requires seven to eight more operations than my original code.
Lastly, one might base the residual computation directly on the original variable a to avoid the error inherent in the computation of m and p. This leads to the following code that, for K = 2^n, computes nearly correctly rounded results for a in the interval [K/2^24, K/3):
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 5 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
t = fmaf (q + 1.0f, -K, a);
e = fmaf (q, -a, t);
q = fmaf (r, e, q);
If you can relax the API to return another variable that models the error, then the solution becomes much simpler:
float foo(float a, float k, float *res)
{
    float ret=(a-k)/(a+k);
    *res = fmaf(-ret,a+k,a-k)/(a+k);
    return ret;
}
This solution only handles truncation error of division, but does not handle the loss of precision of a+k and a-k.
To handle those errors, I think I need to use double precision, or bithack to use fixed point.
Test code is updated to artificially generate non-zero least significant bits in the input. Test code: https://ideone.com/bHxAg8
The problem is the addition in (a + K). Any loss of precision in (a + K) is magnified by the division. The problem isn't the division itself.
If the exponents of a and K are the same (almost) no precision is lost, and if the absolute difference between the exponents is greater than the significand size then either (a + K) == a (if a has larger magnitude) or (a + K) == K (if K has larger magnitude).
There is no way to prevent this. Increasing the significand size (e.g. using 80-bit "extended double" on 80x86) only helps widen the "accurate result range" slightly. To understand why, consider smallest + largest (where smallest is the smallest positive denormal a 32-bit floating point number can be). In this case (for 32-bit floats) you'd need a significand size of about 260 bits for the result to avoid precision loss completely. Doing (e.g.) temp = 1/(a + K); result = a * temp - K / temp; won't help much either because you've still got exactly the same (a + K) problem (but it would avoid a similar problem in (a - K)). Also you can't do result = anything / p + anything_error/p_error because division doesn't work like that.
There are only 3 alternatives I can think of to get close to 0.5 ulps for all possible positive values of a that can fit in 32-bit floating point. None are likely to be acceptable.
The first alternative involves pre-computing a lookup table (using "big real number" maths) for every value of a, which (with some tricks) ends up being about 2 GiB for 32-bit floating point (and completely insane for 64-bit floating point). Of course if the range of possible values of a is smaller than "any positive value that can fit in a 32-bit float" the size of the lookup table would be reduced.
The second alternative is to use something else ("big real number") for the calculation at run-time (and convert to/from 32-bit floating point).
The third alternative involves, "something" (I don't know what it's called, but it's expensive). Set the rounding mode to "round to positive infinity" and calculate temp1 = (a + K); if(a < K) temp2 = (a - K); then switch to "round to negative infinity" and calculate if(a >= K) temp2 = (a - K); lower_bound = temp2 / temp1;. Next do a_lower = a and decrease a_lower by the smallest amount possible and repeat the "lower_bound" calculation, and keep doing that until you get a different value for lower_bound, then revert back to the previous value of a_lower. After that you do essentially the same (but opposite rounding modes, and incrementing not decrementing) to determine upper_bound and a_upper (starting with the original value of a). Finally, interpolate, like a_range = a_upper - a_lower; result = upper_bound * (a_upper - a) / a_range + lower_bound * (a - a_lower) / a_range;. Note that you will want to calculate an initial upper and lower bound and skip all of this if they're equal. Also be warned that this is all "in theory, completely untested" and I probably borked it somewhere.
Mainly what I'm saying is that (in my opinion) you should give up and accept that there's nothing that you can do to get close to 0.5 ulp. Sorry.. :)

Most accurate way to compute asinhf() from log1pf()?

The inverse hyperbolic function asinh() is closely related to the natural logarithm. I am trying to determine the most accurate way to compute asinh() from the C99 standard math function log1p(). For ease of experimentation, I am limiting myself to IEEE-754 single-precision computation right now, that is I am looking at asinhf() and log1pf(). I intend to re-use the exact same algorithm for double precision computation, i.e. asinh() and log1p(), later.
My primary goal is to minimize ulp error, the secondary goal is to minimize the number of incorrectly rounded results, under the constraint that the improved code would at most be minimally slower than the versions posted below. Any incremental improvement to accuracy, say 0.2 ulp, would be welcome. Adding a couple of FMAs (fused multiply-adds) would be fine, on the other hand I am hoping someone could identify a solution which employs a fast rsqrtf() (reciprocal square root).
The resulting C99 code should lend itself to vectorization, possibly by some minor straightforward transformations. All intermediate computation must occur at the precision of the function argument and result, as any switch to higher precision may have a severe negative performance impact. The code must work correctly both with IEEE-754 denormal support and in FTZ (flush to zero) mode.
So far, I have identified the following two candidate implementations. Note that the code may be easily transformed into a branchless vectorizable version with a single call to log1pf(), but I have not done so at this stage to avoid unnecessary obfuscation.
/* for a >= 0, asinh(a) = log (a + sqrt (a*a+1))
= log1p (a + (sqrt (a*a+1) - 1))
= log1p (a + sqrt1pm1 (a*a))
= log1p (a + (a*a / (1 + sqrt(a*a + 1))))
= log1p (a + a * (a / (1 + sqrt(a*a + 1))))
= log1p (fma (a / (1 + sqrt(a*a + 1)), a, a))
= log1p (fma (1 / (1/a + sqrt(1/(a*a) + 1)), a, a))
*/
float my_asinhf (float a)
{
    float fa, t;
    fa = fabsf (a);
#if !USE_RECIPROCAL
    if (fa >= 0x1.0p64f) { // prevent overflow in intermediate computation
        t = log1pf (fa) + 0x1.62e430p-1f; // log(2)
    } else {
        t = fmaf (fa / (1.0f + sqrtf (fmaf (fa, fa, 1.0f))), fa, fa);
        t = log1pf (t);
    }
#else // USE_RECIPROCAL
    if (fa > 0x1.0p126f) { // prevent underflow in intermediate computation
        t = log1pf (fa) + 0x1.62e430p-1f; // log(2)
    } else {
        t = 1.0f / fa;
        t = fmaf (1.0f / (t + sqrtf (fmaf (t, t, 1.0f))), fa, fa);
        t = log1pf (t);
    }
#endif // USE_RECIPROCAL
    return copysignf (t, a); // restore sign
}
With a particular log1pf() implementation that is accurate to < 0.6 ulps, I am observing the following error statistics when testing exhaustively across all 2^32 possible IEEE-754 single-precision inputs. When USE_RECIPROCAL = 0, the maximum error is 1.49486 ulp, and there are 353,587,822 incorrectly rounded results. With USE_RECIPROCAL = 1, the maximum error is 1.50805 ulp, and there are only 77,569,390 incorrectly rounded results.
In terms of performance, the variant USE_RECIPROCAL = 0 will be faster if reciprocals and full divisions take roughly the same amount of time, but the variant USE_RECIPROCAL = 1 could be faster if very fast reciprocal support is available.
Answers can assume that all basic arithmetic, including FMA (fused multiply-add) is correctly rounded according to IEEE-754 round-to-nearest-or-even mode. In addition, faster, nearly correctly rounded, versions of reciprocal and rsqrtf() may be available, where "nearly correctly rounded" means the maximum ulp error will be limited to something like 0.53 ulps and the overwhelming majority of results, say > 95%, are correctly rounded. Basic arithmetic with directed roundings may be available at no additional cost to performance.
Firstly, you may want to look into the accuracy and speed of your log1pf function: these can vary a bit between libms (I've found the OS X math functions to be fast, the glibc ones to be slower but typically correctly rounded).
Openlibm, based on the BSD libm, which in turn is based on Sun's fdlibm, uses multiple approaches by range, but the main bit is the relation:
t = x*x;
w = log1pf(fabsf(x)+t/(one+sqrtf(one+t)));
You may also want to try compiling with the -fno-math-errno option, which disables the old System V error codes for sqrt (IEEE-754 exceptions will still work).
After various additional experiments, I have convinced myself that a simple argument transformation that does not use higher precision than the argument and result cannot achieve a tighter error bound than the one achieved by the first variant in the code I posted.
Since my question is about minimizing the error of the argument transformation, which is incurred in addition to the error in log1pf() itself, the most straightforward approach for experimentation is to utilize a correctly rounded implementation of that logarithm function. Note that a correctly rounded implementation is highly unlikely to exist in the context of a high-performance environment. According to the works of J.-M. Muller et al., x86 extended-precision computation should be sufficient to produce accurate single-precision results, for example
float accurate_log1pf (float a)
{
    float res;
    __asm fldln2;
    __asm fld dword ptr [a];
    __asm fyl2xp1;
    __asm fst dword ptr [res];
    __asm fcompp;
    return res;
}
An implementation of asinhf() using the first variant from my question then looks as follows:
float my_asinhf (float a)
{
    float fa, s, t;
    fa = fabsf (a);
    if (fa >= 0x1.0p64f) { // prevent overflow in intermediate computation
        t = log1pf (fa) + 0x1.62e430p-1f; // log(2)
    } else {
        t = fmaf (fa / (1.0f + sqrtf (fmaf (fa, fa, 1.0f))), fa, fa);
        t = accurate_log1pf (t);
    }
    return copysignf (t, a); // restore sign
}
Testing with all 2^32 IEEE-754 single-precision operands shows that the maximum error of 1.49486070 ulp occurs at ±0x1.ff5022p-9 and there are 353,521,140 incorrectly rounded results. What happens if the entire argument transformation uses double-precision arithmetic? The code changes to
float my_asinhf (float a)
{
    float fa, s, t;
    fa = fabsf (a);
    if (fa >= 0x1.0p64f) { // prevent overflow in intermediate computation
        t = log1pf (fa) + 0x1.62e430p-1f; // log(2)
    } else {
        double tt = fa;
        tt = fma (tt / (1.0 + sqrt (fma (tt, tt, 1.0))), tt, tt);
        t = (float)tt;
        t = accurate_log1pf (t);
    }
    return copysignf (t, a); // restore sign
}
However, the error bound does not improve with this change! The maximum error of 1.49486070 ulp still occurs at ±0x1.ff5022p-9 and there are now 350,971,046 incorrectly rounded results, slightly fewer than before. The issue seems to be that a float operand cannot convey enough information to log1pf() to produce more accurate results. A similar problem occurs when computing sinf() and cosf(). If the reduced argument, represented as a correctly rounded float operand, is passed to the core polynomials, the resulting error in sinf() and cosf() is just a tad under 1.5 ulp, just as we are observing here with my_asinhf().
One solution is to compute the transformed argument to higher than single precision, for example as a double-float operand pair (a useful brief overview of double-float techniques can be found in this paper by Andrew Thall). In this case, we can use the additional information to perform linear interpolation on the result, based on the knowledge that the derivative of the logarithm is the reciprocal. This gives us:
float my_asinhf (float a)
{
    float fa, s, t;
    fa = fabsf (a);
    if (fa >= 0x1.0p64f) { // prevent overflow in intermediate computation
        t = log1pf (fa) + 0x1.62e430p-1f; // log(2)
    } else {
        double tt = fa;
        tt = fma (tt / (1.0 + sqrt (fma (tt, tt, 1.0))), tt, tt);
        t = (float)tt;               // "head" of double-float
        s = (float)(tt - (double)t); // "tail" of double-float
        t = fmaf (s, 1.0f / (1.0f + t), accurate_log1pf (t)); // interpolate
    }
    return copysignf (t, a); // restore sign
}
Exhaustive testing of this version indicates that the maximum error has been reduced to 0.99999948 ulp; it occurs at ±0x1.deeea0p-22. There are 349,653,534 incorrectly rounded results. A faithfully rounded implementation of asinhf() has been achieved.
Unfortunately, the practical utility of this result is limited. Depending on HW platform, the throughput of arithmetic operations on double may only be 1/2 to 1/32 of the throughput of float operations. The double-precision computation can be replaced with double-float computation, but this would incur very significant cost as well. Lastly, my approach here was to use the single-precision implementation as a proving ground for subsequent double-precision work, and many hardware platforms (certainly all the ones I am interested in) do not offer hardware support for a numeric format with higher precision than IEEE-754 binary64 (double precision). Therefore any solution should not require higher-precision arithmetic in intermediate computation.
Since all the troublesome arguments in the case of asinhf() are small in magnitude, one could [partially?] address the accuracy issue by using a polynomial minimax approximation for the region around the origin. As this would create another code branch, it would likely make vectorization more difficult.
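For concreteness, such a small-argument path might look like the sketch below, which simply uses the leading terms of the Taylor series asinh(a) = a - a³/6 + 3a⁵/40 - 5a⁷/112 + ...; the coefficients are illustrative, and a real implementation would substitute Remez-generated minimax coefficients and select this path only below a tuned switchover threshold:

#include <math.h>

/* Sketch (untested): polynomial path for small |a|, based on the Taylor
   series of asinh(); coefficients are illustrative only, and the caller
   would select this path only below some switchover threshold */
float asinhf_small (float a)
{
    float s = a * a;
    float p = -5.0f / 112.0f;       // a^7 coefficient
    p = fmaf (p, s, 3.0f / 40.0f);  // a^5 coefficient
    p = fmaf (p, s, -1.0f / 6.0f);  // a^3 coefficient
    return fmaf (p * s, a, a);      // a + a*s*(...)
}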

Efficient implementation of natural logarithm (ln) and exponentiation

I'm looking for implementations of the log() and exp() functions provided in the C library <math.h>. I'm working with 8-bit microcontrollers (OKI 411 and 431). I need to calculate the Mean Kinetic Temperature. The requirement is that we should be able to calculate MKT as fast as possible and with as little code memory as possible. The compiler comes with log() and exp() functions in <math.h>. But calling either function and linking with the library causes the code size to increase by 5 kilobytes, which will not fit in one of the micros we work with (OKI 411), because our code has already consumed ~12K of the available ~15K code memory.
The implementation I'm looking for should not use any other C library functions (like pow(), sqrt() etc.). This is because all library functions are packed in one library, and even if only one function is called, the linker will bring the whole 5K library into code memory.
EDIT
The algorithm should be correct up to 3 decimal places.
Using a Taylor series is neither the simplest nor the fastest way of doing this. Most professional implementations use approximating polynomials. I'll show you how to generate one in Maple (a computer algebra program), using the Remez algorithm.
For 3 digits of accuracy execute the following commands in Maple:
with(numapprox):
Digits := 8
minimax(ln(x), x = 1 .. 2, 4, 1, 'maxerror')
maxerror
Its response is the following polynomial:
-1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x
With the maximal error of: 0.000061011436
We generated a polynomial which approximates ln(x), but only inside the [1..2] interval. Increasing the interval is not wise, because that would increase the maximal error even more. Instead of that, do the following decomposition:
So first find the highest power of 2 which is still smaller than the number (see: What is the fastest/most efficient way to find the highest set bit (msb) in an integer in C?). The exponent of that power of 2 is the integer part of the base-2 logarithm. Divide by that power of 2, and the result falls into the [1, 2] interval. At the end we will have to add n*ln(2) to get the final result.
An example implementation for numbers >= 1:
float ln(float y) {
    int log2;
    float divisor, x, result;
    log2 = msb((int)y); // See: https://stackoverflow.com/a/4970859/6630230
    divisor = (float)(1 << log2);
    x = y / divisor; // normalized value between [1.0, 2.0]
    result = -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x;
    result += ((float)log2) * 0.69314718; // ln(2) = 0.69314718
    return result;
}
Although if you plan to use it only in the [1.0, 2.0] interval, then the function is like:
float ln(float x) {
    return -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x;
}
The Taylor series for e^x converges extremely quickly, and you can tune your implementation to the precision that you need. (http://en.wikipedia.org/wiki/Taylor_series)
The Taylor series for log is not as nice...
If you don't need floating-point math for anything else, you may compute an approximate fractional base-2 log pretty easily. Start by shifting your value left until it's 32768 or higher and store the number of times you did that in count. Then, repeat some number of times (depending upon your desired scale factor):
n = (mult(n,n) + 32768u) >> 16; // If a function is available for 16x16->32 multiply
count<<=1;
if (n < 32768) n*=2; else count+=1;
If the above loop is repeated 8 times, then the log base 2 of the number will be count/256. If ten times, count/1024. If eleven, count/2048. Effectively, this function works by computing the integer power-of-two logarithm of n**(2^reps), but with intermediate values scaled to avoid overflow.
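A self-contained sketch of this scheme for a 16-bit input is shown below (untested); note that for count/256 to come out as the base-2 logarithm, count has to start at the integer part of log2(x), i.e. 15 minus the number of normalizing shifts, which is how I read the description above:

#include <stdint.h>

/* Sketch: approximate log2(x) scaled by 256, for 16-bit x > 0, using only
   shifts and a 16x16->32 multiply as described above */
uint32_t log2_fix8 (uint16_t x)
{
    uint32_t n = x, count = 15;
    while (n < 32768u) {              // normalize to [32768, 65536)
        n <<= 1;
        count--;                      // count ends up as the integer log2
    }
    for (int i = 0; i < 8; i++) {     // 8 repetitions ==> scale factor 256
        n = (n * n + 32768u) >> 16;   // square the scaled mantissa
        count <<= 1;
        if (n < 32768u) n <<= 1; else count += 1;
    }
    return count;                     // log2(x) is approximately count / 256
}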
Would a basic table with interpolation between values work? If the range of values is limited (which is likely in your case; I doubt temperature readings have a huge range) and high precision is not required, it may work. It should be easy to test on a normal machine.
Here is one of many topics on table representation of functions: Calculating vs. lookup tables for sine value performance?
Necromancing.
I had to implement logarithms on rational numbers.
This is how I did it:
According to Wikipedia, there is the Halley-Newton approximation method, which can be used for very high precision.
Using Newton's method, the iteration simplifies to (implementation), which has cubic convergence to ln(x), which is way better than what the Taylor series offers.
// Using Newton's method, the iteration simplifies to (implementation)
// which has cubic convergence to ln(x).
public static double ln(double x, double epsilon)
{
    double yn = x - 1.0d; // using the first term of the taylor series as initial-value
    double yn1 = yn;
    do
    {
        yn = yn1;
        yn1 = yn + 2 * (x - System.Math.Exp(yn)) / (x + System.Math.Exp(yn));
    } while (System.Math.Abs(yn - yn1) > epsilon);
    return yn1;
}
This is not C but C#; however, I'm sure anybody capable of programming in C will be able to deduce the C code from it.
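For convenience, a direct C translation might look like the following sketch (untested; the function name is mine); note that it still requires some implementation of exp():

#include <math.h>

/* Sketch: C translation of the C# routine above; epsilon is the requested
   absolute tolerance of the iteration */
double my_ln (double x, double epsilon)
{
    double yn = x - 1.0;   // first term of the Taylor series as initial value
    double yn1 = yn;
    do {
        yn  = yn1;
        yn1 = yn + 2.0 * (x - exp (yn)) / (x + exp (yn));
    } while (fabs (yn - yn1) > epsilon);
    return yn1;
}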
Furthermore, since logn(x) = ln(x)/ln(n), you have therefore just implemented logN as well:
public static double log(double x, double n, double epsilon)
{
    return ln(x, epsilon) / ln(n, epsilon);
}
where epsilon (error) is the minimum precision.
Now as to speed, you're probably better off using the ln cast in hardware, but as I said, I used this as a base to implement logarithms on a rational-numbers class working with arbitrary precision.
Arbitrary precision might be more important than speed, under certain circumstances.
Then, use the logarithmic identities for rational numbers:
logB(x/y) = logB(x) - logB(y)
In addition to Crouching Kitten's answer, which gave me inspiration, you can build a pseudo-recursive (at most 1 self-call) logarithm to avoid using polynomials. In pseudo code:
ln(x) :=
  If (x <= 0)
    return NaN
  Else if (!(1 <= x < 2))
    return LN2 * b + ln(a)
  Else
    return taylor_expansion(x - 1)
This is pretty efficient and precise since on [1; 2) the taylor series converges A LOT faster, and we get such a number 1 <= a < 2 with the first call to ln if our input is positive but not in this range.
You can find 'b' as your unbiased exponent from the data held in the float x, and 'a' from the mantissa of the float x (a is exactly the same float as x, but now with exponent biased_0 rather than exponent biased_b). LN2 should be kept as a macro in hexadecimal floating point notation IMO. You can also use http://man7.org/linux/man-pages/man3/frexp.3.html for this.
Also, the trick
unsigned long tmp = *(ulong*)(&d);
for "memory-casting" double to unsigned long, rather than "value-casting", is very useful to know when dealing with floats memory-wise, as bitwise operators will cause warnings or errors depending on the compiler.
Possible computation of ln(x) and expo(x) in C without <math.h> :
static double expo(double n) {
int a = 0, b = n > 0;
double c = 1, d = 1, e = 1;
for (b || (n = -n); e + .00001 < (e += (d *= n) / (c *= ++a)););
// approximately 15 iterations
return b ? e : 1 / e;
}
static double native_log_computation(const double n) {
// Basic logarithm computation.
static const double euler = 2.7182818284590452354 ;
unsigned a = 0, d;
double b, c, e, f;
if (n > 0) {
for (c = n < 1 ? 1 / n : n; (c /= euler) > 1; ++a);
c = 1 / (c * euler - 1), c = c + c + 1, f = c * c, b = 0;
for (d = 1, c /= 2; e = b, b += 1 / (d * c), b - e/* > 0.0000001 */;)
d += 2, c *= f;
} else b = (n == 0) / 0.;
return n < 1 ? -(a + b) : a + b;
}
static inline double native_ln(const double n) {
// Returns the natural logarithm (base e) of N.
return native_log_computation(n) ;
}
static inline double native_log_base(const double n, const double base) {
// Returns the logarithm (base b) of N.
return native_log_computation(n) / native_log_computation(base) ;
}
Try it Online
Building off @Crouching Kitten's great natural log answer above, if you need it to be accurate for inputs < 1 you can add a simple scaling factor. Below is an example in C++ that I've used in microcontrollers. It has a scaling factor of 256 and it's accurate for inputs down to 1/256 = ~0.004, and up to 2^32/256 = 16777215 (due to overflow of a uint32 variable).
It's interesting to note that even on an STMF103 Arm M3 with no FPU, the float implementation below is significantly faster (eg 3x or better) than the 16 bit fixed-point implementation in libfixmath (that being said, this float implementation still takes a few thousand cycles so it's still not ~fast~)
#include <float.h>
float TempSensor::Ln(float y)
{
// Algo from: https://stackoverflow.com/a/18454010
// Accurate between (1 / scaling factor) < y < (2^32 / scaling factor). Read comments below for more info on how to extend this range
float divisor, x, result;
const float LN_2 = 0.69314718; //pre calculated constant used in calculations
uint32_t log2 = 0;
//handle if input is zero or negative
if (y <= 0)
{
return -FLT_MAX;
}
//scaling factor. The polynomial below is accurate when the input y>1, therefore using a scaling factor of 256 (aka 2^8) extends this to 1/256 or ~0.004. Given use of uint32_t, the input y must stay below 2^24 or 16777216 (aka 2^(32-8)), otherwise uint_y used below will overflow. Increasing the scaling factor will reduce the lower accuracy bound and also reduce the upper overflow bound. If you need the range to be wider, consider changing uint_y to a uint64_t
const uint32_t SCALING_FACTOR = 256;
const float LN_SCALING_FACTOR = 5.545177444; //this is the natural log of the scaling factor and needs to be precalculated
y = y * SCALING_FACTOR;
uint32_t uint_y = (uint32_t)y;
while (uint_y >>= 1) // Convert the number to an integer and then find the location of the MSB. This is the integer portion of Log2(y). See: https://stackoverflow.com/a/4970859/6630230
{
log2++;
}
divisor = (float)(1 << log2);
x = y / divisor; // Find the remainder value between [1.0, 2.0] then calculate the natural log of this remainder using a polynomial approximation
result = -1.7417939 + (2.8212026 + (-1.4699568 + (0.44717955 - 0.056570851 * x) * x) * x) * x; //This polynomial approximates ln(x) between [1,2]
result = result + ((float)log2) * LN_2 - LN_SCALING_FACTOR; // Using the log product rule Log(A) + Log(B) = Log(AB) and the log base change rule log_x(A) = log_y(A)/Log_y(x), calculate all the components in base e and then sum them: = Ln(x_remainder) + (log_2(x_integer) * ln(2)) - ln(SCALING_FACTOR)
return result;
}

What is a simple way to find real roots of a (cubic) polynomial?

This seems like an obvious question to me, but I couldn't find it anywhere on SO.
I have a cubic polynomial and I need to find real roots of the function. What is THE way of doing this?
I have found several closed form formulas for roots of a cubic function, but all of them use either complex numbers or lots of goniometric functions and I don't like them (and also don't know which one to choose).
I need something simple; faster is better; and I know that I will eventually need to solve polynomials of higher order, so having a numerical solver would maybe help too.
I know I could use some library to do the hard work for me, but lets say I want to do this as an exercise.
I'm coding in C, so no import magic_poly_solver, please.
Bonus question: How do I find only roots inside a given interval?
For a cubic polynomial there are closed form solutions, but they are not particularly well suited for numerical calculus.
I'd do the following for the cubic case: any cubic polynomial has at least one real root, and you can find it easily with Newton's method. Then you use deflation to get the remaining quadratic polynomial to solve; see my answer there for how to do this latter step correctly.
One word of caution: if the discriminant is close to zero, there will be a numerically multiple real root, and Newton's method will fail miserably. Moreover, since in the vicinity of such a root the polynomial behaves like (x - x0)^2, you'll lose half your significant digits (since P(x) will be < epsilon as soon as |x - x0| < sqrt(epsilon)). So you may want to rule this out and use the closed-form solution in this particular case, or solve the derivative polynomial.
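A minimal sketch of that recipe is shown below (untested; the function name, starting guess, and iteration limit are mine, and it assumes a != 0); it is of course subject to the caveat about near-multiple roots just mentioned:

#include <math.h>

/* Sketch: real roots of a*x^3 + b*x^2 + c*x + d = 0 via Newton's method for
   one real root, deflation to a quadratic, and a robust quadratic solve.
   Returns the number of real roots stored in roots[]. */
int solve_cubic (double a, double b, double c, double d, double roots[3])
{
    double x = -b / (3.0 * a);                   // start at the inflection point
    for (int i = 0; i < 100; i++) {              // Newton iteration
        double f  = ((a * x + b) * x + c) * x + d;
        double fp = (3.0 * a * x + 2.0 * b) * x + c;
        if (fp == 0.0) { x += 1.0; continue; }   // nudge off a stationary point
        double dx = f / fp;
        x = x - dx;
        if (fabs (dx) <= 1e-15 * fabs (x)) break;
    }
    roots[0] = x;
    double B = b + a * x;                        // deflate: a*y^2 + B*y + C
    double C = c + B * x;
    double disc = B * B - 4.0 * a * C;
    if (disc < 0.0) return 1;                    // remaining roots are complex
    double q = -0.5 * (B + copysign (sqrt (disc), B));
    roots[1] = (q != 0.0) ? (q / a) : 0.0;
    roots[2] = (q != 0.0) ? (C / q) : 0.0;
    return 3;
}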
If you want to find roots in a given interval, check Sturm's theorem.
A more general (complex) algorithm for generic polynomial solving is Jenkins-Traub algorithm. This is clearly overkill here, but it works well on cubics. Usually, you use a third-party implementation.
Since you do C, using the GSL is surely your best bet.
Another generic method is to find the eigenvalues of the companion matrix with eg. balanced QR decomposition, or reduction to Householder form. This is the approach taken by GSL.
For solving cubic equations with simple C code, I have found the QBC solver by noted numerics expert professor William Kahan to be sufficiently robust, reasonably fast and reasonably accurate:
William Kahan, "To solve a real cubic equation." PAM-352, Center for Pure and Applied Mathematics, University of California, Berkely. November 10, 1986. (online, online)
This uses a derivative-based iterative method to find the real root, reduces to a quadratic equation based on that, finally uses a numerically robust quadratic equation solver to find the two remaining roots. Typically, the iterative solver requires about five to ten iterations to converge to the result. Both solvers can be enhanced for accuracy and performance by judicious use of fused multiply-add (FMA) operations, available in ISO C99 via the fma() standard math function.
Crucial to the accuracy of the quadratic solver is the computation of the discriminant. For this, I use the following code based on recent research:
/* Compute B*B - A*C, accurately
Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller,
"Further Analysis of Kahan's Algorithm for the Accurate Computation
of 2x2 Determinants". Mathematics of Computation, Vol. 82, No. 284,
Oct. 2013, pp. 2245-2264
https://www.ams.org/journals/mcom/2013-82-284/S0025-5718-2013-02679-8/S0025-5718-2013-02679-8.pdf
*/
double DISC (double A, double B, double C)
{
    double w = C * A;
    double e = fma (-C, A, w);
    double f = fma (B, B, -w);
    double r = f + e;
    return r;
}
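For illustration, a robust quadratic solver built on top of DISC() might look like the sketch below (untested, and not Kahan's original code); it forms the larger-magnitude root first and recovers the other one from the product of the roots. Note the half-b convention A*x^2 + 2*B*x + C, for which B*B - A*C is (a quarter of) the discriminant:

#include <math.h>

double DISC (double A, double B, double C);   // accurate B*B - A*C, as above

/* Sketch: solve A*x^2 + 2*B*x + C = 0; returns the number of real roots */
int solve_quadratic (double A, double B, double C, double *x1, double *x2)
{
    double d = DISC (A, B, C);
    if (d < 0.0) return 0;                   // complex conjugate roots
    double r = sqrt (d);
    double t = -(B + copysign (r, B));       // avoids subtractive cancellation
    *x1 = t / A;
    *x2 = (t != 0.0) ? (C / t) : (-B / A);   // product of roots equals C/A
    return (d == 0.0) ? 1 : 2;
}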
Using double-precision arithmetic, Kahan's solver cannot always produce results accurate to double precision. One of the test cases provided in Kahan's paper illustrates why this is the case:
658x³ - 190125x² + 18311811x - 587898164
Using an arbitrary precision math library, we find that the roots of this cubic equation are as follows:
96.229639346592182_18...
96.357064825184152_07... ± i * 0.069749752043689625_43...
QBC using double-precision arithmetic computes the roots as
96.2296393 50445893
96.35706482 3257289 ± i * 0.0697497 48521837268
The reason for this is that the function evaluation around the real root suffers from errors as large as 60% in the computed function value, preventing the iterative solver from getting closer to the root. By changing the function and derivative evaluation to use double-double computation for intermediate computation (at hefty computational cost), we can address that issue.
/* Data type for double-double computation */
typedef struct {
double l; // low / tail
double h; // high / head
} dbldbl;
dbldbl make_dbldbl (double head, double tail);
double get_dbldbl_head (dbldbl a);
double get_dbldbl_tail (dbldbl a);
dbldbl add_dbldbl (dbldbl a, dbldbl b);
dbldbl mul_dbldbl (dbldbl a, dbldbl b);
void EVAL (double X, double A, double B, double C, double D,
double * restrict Q, double * restrict Qprime,
double * restrict B1, double * restrict C2)
{
#if USE_DBLDBL_EVAL
dbldbl AA, BB, CC, DD, XX, AX, TT, UU;
AA = make_dbldbl (A, 0);
BB = make_dbldbl (B, 0);
CC = make_dbldbl (C, 0);
DD = make_dbldbl (D, 0);
XX = make_dbldbl (X, 0);
AX = mul_dbldbl (AA, XX);
TT = add_dbldbl (AX, BB);
*B1 = get_dbldbl_head (TT) + get_dbldbl_tail(TT);
UU = add_dbldbl (mul_dbldbl (TT, XX), CC);
*C2 = get_dbldbl_head (UU) + get_dbldbl_tail(UU);
TT = add_dbldbl (mul_dbldbl (add_dbldbl (AX, TT), XX), UU);
*Qprime = get_dbldbl_head (TT) + get_dbldbl_tail(TT);
UU = add_dbldbl (mul_dbldbl (UU, XX), DD);
*Q = get_dbldbl_head (UU) + get_dbldbl_tail(UU);
#else // USE_DBLDBL_EVAL
*B1 = fma (A, X, B);
*C2 = fma (*B1, X, C);
*Qprime = fma (fma (A, X, *B1), X, *C2);
*Q = fma (*C2, X, D);
#endif // USE_DBLDBL_EVAL
}
/* Construct new dbldbl number. |tail| must be <= 0.5 ulp of |head| */
dbldbl make_dbldbl (double head, double tail)
{
dbldbl z;
z.l = tail;
z.h = head;
return z;
}
/* Return the head of a double-double number */
double get_dbldbl_head (dbldbl a)
{
return a.h;
}
/* Return the tail of a double-double number */
double get_dbldbl_tail (dbldbl a)
{
return a.l;
}
/* Add two dbldbl numbers */
dbldbl add_dbldbl (dbldbl a, dbldbl b)
{
dbldbl z;
double e, q, r, s, t, u;
/* Andrew Thall, "Extended-Precision Floating-Point Numbers for GPU
Computation." 2006. http://andrewthall.org/papers/df64_qf128.pdf
*/
q = a.h + b.h;
r = q - a.h;
t = (a.h + (r - q)) + (b.h - r);
s = a.l + b.l;
r = s - a.l;
u = (a.l + (r - s)) + (b.l - r);
t = t + s;
s = q + t;
t = (q - s) + t;
t = t + u;
z.h = e = s + t;
z.l = (s - e) + t;
/* For result of zero or infinity, ensure that tail equals head */
if (isinf (s)) {
z.h = s;
z.l = s;
}
if (z.h == 0) {
z.l = z.h;
}
return z;
}
/* Multiply two dbldbl numbers */
dbldbl mul_dbldbl (dbldbl a, dbldbl b)
{
dbldbl z;
double e, s, t;
s = a.h * b.h;
t = fma (a.h, b.h, -s);
t = fma (a.l, b.l, t);
t = fma (a.h, b.l, t);
t = fma (a.l, b.h, t);
z.h = e = s + t;
z.l = (s - e) + t;
/* For result of zero or infinity, ensure that tail equals head */
if (isinf (s)) {
z.h = s;
z.l = s;
}
if (z.h == 0) {
z.l = z.h;
}
return z;
}
The roots computed with the more accurate function and derivative evaluation are:
96.22963934659218 0
96.35706482518415 3 ± i * 0.06974975204 5672006
While the real parts are now accurate to within the limits of double precision, the imaginary parts are still off. The reason for this is that in this case the quadratic equation is sensitive to minute differences in the coefficients. A one-ulp error in either of them can cause differences of around 10^-11 in the imaginary part. This could be worked around by representing the coefficients to higher than double precision and using higher-precision computation in the quadratic solver.
If you don't want to use the closed from solutions (or expect polynoms of larger order), the most obvious method would be to calculate approximate roots by using Newton's method.
Unfortunately, it is not possible to decide in advance which root you will get when iterating; it depends on the starting value.
Also see here.
See Solving quartics and cubics for graphics by D Herbison-Evans, published in Graphics Gems V.
/*******************************************************************************
* FindCubicRoots solves:
* coeff[3] * x^3 + coeff[2] * x^2 + coeff[1] * x + coeff[0] = 0
* returns:
* 3 - 3 real roots
* 1 - 1 real root (2 complex conjugate)
*******************************************************************************/
int FindCubicRoots(const FLOAT coeff[4], FLOAT x[3]);
http://www.realitypixels.com/turk/opensource/index.html#CubicRoots
