Is the Carmack/Welsh inverse square root algorithm biased?

When implementing "Carmack's Inverse Square Root" algorithm I noticed that the results seem biased. The following code seems to give better results:
float InvSqrtF(float x)
{
    // Initial approximation by Greg Walsh.
    int i = * ( int* ) &x;
    i = 0x5f3759df - ( i >> 1 );
    float y = * ( float * ) &i;
    // Two iterations of Newton-Raphson's method to refine the initial estimate.
    x *= 0.5f;
    float f = 1.5F;
    y = y * ( f - ( x * y * y ) );
    y = y * ( f - ( x * y * y ) );
    * ( int * )(&y) += 0x13;  // More magic.
    return y;
}
The key difference is in the penultimate "more magic" line. Since the initial results were too low by a fairly constant factor, this adds 19 units in the last place, i.e. 19 * 2^(exponent(y) - bias - 23), to the result with just a single instruction. It seems to give me about 3 extra bits, but am I overlooking something?

Newton's method produces a bias. The function whose zero is to be found,
f(y) = x - 1/y²
is concave, so unless you start with a y ≥ √(3/x), the Newton method only produces approximations ≤ 1/√x (and strictly smaller, unless you start with the exact result) with exact arithmetic.
Floating point arithmetic occasionally produces too large approximations, but typically not in the first two iterations (since the initial guess usually isn't close enough).
So yes, there is a bias, and adding a small quantity generally improves the result. But not always. In the regions around 1.25 or 0.85, for example, the results without the adjustment are better than with it. In other regions, the adjustment yields one bit of additional precision, in yet others more.
In any case, for the best results the magic constant to add should be tuned to the region from which x is most often taken.
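To make the tradeoff concrete, a quick probe along these lines can compare the mean and worst signed relative error with and without the +0x13 adjustment. This is only a sketch, not code from the question or the answers: InvSqrtCore is a reworked copy of the question's function (memcpy replaces the pointer casts so the type punning stays well defined), and the sampled region [0.25, 4) is an arbitrary choice.

#include <stdio.h>
#include <string.h>
#include <math.h>

static float InvSqrtCore(float x, int magic_add)
{
    int i;
    float y;
    memcpy(&i, &x, sizeof i);            /* bit-copy instead of pointer casts */
    i = 0x5f3759df - (i >> 1);
    memcpy(&y, &i, sizeof y);
    x *= 0.5f;
    y = y * (1.5f - x * y * y);          /* two Newton-Raphson steps */
    y = y * (1.5f - x * y * y);
    memcpy(&i, &y, sizeof i);
    i += magic_add;                      /* 0 for plain, 0x13 for "more magic" */
    memcpy(&y, &i, sizeof y);
    return y;
}

int main(void)
{
    double sum0 = 0, sum19 = 0, worst0 = 0, worst19 = 0;
    int n = 0;
    for (float x = 0.25f; x < 4.0f; x += 1e-4f, n++) {
        double exact = 1.0 / sqrt((double)x);
        double e0  = (InvSqrtCore(x, 0)    - exact) / exact;
        double e19 = (InvSqrtCore(x, 0x13) - exact) / exact;
        sum0 += e0;  sum19 += e19;
        if (fabs(e0)  > fabs(worst0))  worst0  = e0;
        if (fabs(e19) > fabs(worst19)) worst19 = e19;
    }
    printf("mean error:  %+.3g (plain)  %+.3g (+0x13)\n", sum0 / n, sum19 / n);
    printf("worst error: %+.3g (plain)  %+.3g (+0x13)\n", worst0, worst19);
    return 0;
}

Sampling a different region, per the answers here, can flip which variant wins.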

As this method is an approximation, the result is sometimes an overestimate and sometimes an underestimate. You can find in McEniry's paper some nice figures showing how this error is distributed for different configurations, and the math behind them.
So, unless you have solid proof that in your domain of application the result is clearly biased, I would prefer tuning the magic constant as suggested in Lomont's document :-)


Issue with Square Root in C Algorithm

I have a bit of code that finds a point on a unit sphere. Recall, for a unit sphere:
1 = sqrt( x^2 + y^2 + z^2 )
The algorithm picks two random points (the x and y coordinates) between zero and one. Provided their magnitude is less than one we have room to define a third coordinate by solving the above equation for z.
void pointOnSphere(double *point){
    double x, y;
    do {
        x = 2*randf() - 1;
        y = 2*randf() - 1;
    } while (x*x + y*y > 1);
    double mag = sqrt(fabs(1 - x*x - y*y));
    point[0] = 2*(x*mag);
    point[1] = 2*(y*mag);
    point[2] = 1 - 2*(mag*mag);
}
Technically, I inherited this code. The previous owner compiled using -Ofast which "Disregards strict standards compliance". TL;DR it means your code doesn't need to follow strict IEEE standards. So when I tried to compile without optimization I ran into an error.
undefined reference to `sqrt'
What are IEEE standards? Well, because computers can't store floating point numbers to infinite precision, rounding errors pop up during certain calculations if you're not careful.
After some googling I ran into this question which got me on the right track about using proper IEEE stuff. I even read this article about floating point numbers (which I recommend). Unfortunately it didn't answer my questions.
I'd like to use sqrt() in my function as opposed to something like Newton Iteration. I understand the issue in my algorithm probably comes from the fact I could potentially (even though not really) pass a negative number to the sqrt() function. I'm just not quite sure how to remedy the issue. Thanks for all the help!
Oh, and if it's relevant I'm using a Mersenne Twister number generator.
Just to clarify, I am linking libm with -lm! I have also confirmed it is pointing to the correct library.
As for the undefined reference to sqrt: you need to link with libm, usually via -lm or a similar option.
Also note that
Provided their magnitude is less than one we have room to define a third coordinate by solving the above equation for z.
is wrong. The x and y must satisfy x * x + y * y <= 1 in order for there to be a solution for z.
I'd use spherical coordinates
theta = randf()*M_PI;
phi = randf()*2*M_PI;
r = 1.0;
x = r*sin(theta)*cos(phi);
y = r*sin(theta)*sin(phi);
z = r*cos(theta);
To ensure the points meet a condition, test for the condition itself as part of the while loop, rather than a derivation of the condition.
// Functions like sqrt() benefit from a declaration before use; some, like
// sqrt(), are understood and optimized out by a smart compiler, which is why
// a missing -lm only sometimes shows up as "undefined reference to `sqrt'".
// Still, best to always declare them.
#include <math.h>
#include <stdlib.h>  // rand()

void pointOnSphere(double *point){
    double x, y, z;
    do {
        x = 2*randf() - 1;
        y = 2*randf() - 1;
        z = 2.0; // sentinel: a rejected draw re-runs the loop
        double zz = 1.0 - (x*x + y*y); // not 1.0 - hypot(x,y); hypot takes the root
        if (zz < 0.) continue; // reject (x, y) outside the unit disk
        z = sqrt(zz);
        if (rand()%2) z = -z; // Flip z half the time
    } while (x*x + y*y + z*z > 1); // Must meet this condition
    point[0] = x;
    point[1] = y;
    point[2] = z;
}
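randf() is never defined in the question (the asker mentions a Mersenne Twister). As a minimal stand-in for trying either version, something like this assumed uniform-[0,1) helper would do:

#include <stdlib.h>

/* stand-in for the asker's Mersenne Twister: uniform double in [0, 1) */
double randf(void)
{
    return rand() / ((double)RAND_MAX + 1.0);
}

With that in place, double p[3]; pointOnSphere(p); should give p[0]*p[0] + p[1]*p[1] + p[2]*p[2] very close to 1.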

Computing fractional exponents in C

I'm trying to evaluate a^n, where a and n are rational numbers.
I don't want to use any predefined functions like sqrt() or pow()
So I'm trying to use Newton's Method to get an approximate solution using this approach:
3^0.2 = 3^(1/5) , so if x = 3^0.2, x^5 = 3.
Probably the best way to solve that (without a calculator but still
using the basic arithmetic operations) is to use "Newton's method".
Newton's method for solving the equation f(x) = 0 is to set up a
sequence of numbers xn defined by taking x0 as some initial "guess"
and then xn+1 = xn - f(xn)/f'(xn), where f'(x) is the derivative of f.
Posted on physicsforums
The problem with that method is that if I want to compute 5.2^0.33333, I'll need to find the roots of the equation x^100000 - 5.2^33333 = 0. I end up with huge numbers, and get inf and nan errors most of the time.
Can someone give me advice on how to solve this problem? Or, can someone provide another algorithm to compute a^n?
It seems your task is to calculate
    (xN / xD)^(aN / aD),   where xN, xD, aN, aD ∈ ℤ and xD, aD ≠ 0
using only multiplications, divisions, additions, and subtractions, with Newton's method as the suggested method to implement.
The equation we're trying to solve (for y) is
    y = (xN / xD)^(aN / aD),   where y ∈ ℝ
Newton's method finds a root of a function. If we want to use it to solve the above, we subtract the right side from the left side, to get a function whose zero gives us the y we want:
    f(y) = y - (xN / xD)^(aN / aD) = 0
Not much help. I guess this is as far as you got? The point here is to not form that function just yet, because we don't have a way to calculate a rational power of a rational number!
First, let's decide that aD and xD are both positive. We can do that simply by negating both aN and aD if aD was negative (so the sign of aN/aD does not change), and negating both xN and xD if xD was negative. Remember, by definition neither xD nor aD is zero. Then, we can simply raise both sides to the aD'th power:
    y^aD = (xN / xD)^aN = xN^aN / xD^aN
We can even eliminate the division by multiplying both sides by the last term:
    y^aD × xD^aN = xN^aN
Now, this looks quite promising! The function we get from this is
    f(y) = y^aD × xD^aN - xN^aN
Newton's method also requires the derivative, which is obviously
    df(y)/dy = aD × y^(aD-1) × xD^aN
Newton's method itself relies on iterating
    y[i+1] = y[i] - f(y[i]) / df(y[i])
If you work out the math, you'll find that the iteration is just
    y[i+1] = y[i] - y[i]/aD + ( y[i] × xN^aN ) / ( aD × y[i]^aD × xD^aN )
You don't need to keep all the y values in memory; it is enough to remember the last one, and stop iterating when their difference is small enough.
You do still have exponentiation above, but now it is integer exponentiation only, i.e.
    xN^aN = xN × xN × … × xN    (aN times)
which you can do very simply, for example just by multiplying the argument by itself the desired number of times, e.g. in C,
double ipow(const double base, const int exponent)
{
    double result = 1.0;
    int i;
    for (i = 0; i < exponent; i++)
        result *= base;
    return result;
}
There are more efficient methods to do integer exponentiation, but the above function should be perfectly acceptable for this.
The final problem is to pick the initial y so that you get convergence. You cannot use 0, because (a power of) y is used as a denominator in the division; you'd get division by zero error. Personally, I'd check whether the result ought to be positive or negative, and smaller than or greater than one in magnitude; two rules overall to pick a safe initial y.
Questions?
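Putting the pieces together, a sketch of the whole scheme might look like this. ratpow is an illustrative name, not from the answer; it assumes ipow() from above is in scope and that the base is positive after sign normalization, and the fixed initial guess and tolerance are arbitrary choices:

#include <math.h>   /* fabs() for the stopping test */

double ratpow(int xN, int xD, int aN, int aD)
{
    if (xD < 0) { xN = -xN; xD = -xD; }  /* make xD positive */
    if (aD < 0) { aN = -aN; aD = -aD; }  /* make aD positive */
    if (aN < 0) { int t = xN; xN = xD; xD = t; aN = -aN; }  /* x^-a = (1/x)^a */

    /* assumes xN > 0 from here on; negative bases need extra care */
    double num = ipow(xN, aN);  /* xN^aN -- integer exponent only */
    double den = ipow(xD, aN);  /* xD^aN */

    double y = 1.0;  /* any positive start converges; near the root is faster */
    double prev;
    do {
        prev = y;
        y = y - y / aD + y * num / (aD * ipow(y, aD) * den);
    } while (fabs(y - prev) > 1e-12 * fabs(y));
    return y;
}

For example, ratpow(3, 1, 1, 5) should approach 3^(1/5) ≈ 1.2457.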
You can use the generalized binomial theorem. Substitute y=1 and x=a-1. You would want to truncate the infinite series after enough terms, based on the desired accuracy. To be able to link number of terms to accuracy, you would need to ensure that the x^r terms are decreasing in absolute value. So, depending on the value of a and n, you should apply the formula to compute one of a^n and a^(-n) and use that to get your desired result.
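A sketch of that idea, with illustrative names: the series for a^n = (1 + (a-1))^n converges only for |a - 1| < 1, so larger bases go through the reciprocal, as the paragraph above suggests.

/* truncated generalized binomial series for (1 + x)^n, |x| < 1 */
double binom_pow_core(double a, double n)
{
    double x = a - 1.0;   /* the substitution suggested above */
    double term = 1.0, sum = 1.0;
    for (int k = 1; k < 1000; k++) {
        term *= (n - (k - 1)) / k * x;  /* C(n,k)x^k from C(n,k-1)x^(k-1) */
        sum += term;
        if (term > -1e-15 && term < 1e-15)
            break;  /* truncate once terms are negligibly small */
    }
    return sum;
}

double binom_pow(double a, double n)  /* assumes a > 0 */
{
    return (a < 2.0) ? binom_pow_core(a, n)
                     : 1.0 / binom_pow_core(1.0 / a, n);  /* a^n = 1/(1/a)^n */
}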
A solution for raising an integer number to a power is:
int poweri (int x, unsigned int y)
{
    int temp;
    if (y == 0)
        return 1;
    temp = poweri (x, y / 2);
    if ((y % 2) == 0)
        return temp * temp;
    else
        return x * temp * temp;
}
However, the square root doesn't have as clean a closed-form solution. There is a good bit of background to be found at wikipedia-square root and at Wolfram MathWorld's square root algorithms. Both provide several methods that will meet your needs; you just have to choose the one that fits your purpose.
With slight modification, this routine from wikipedia (modified to return the square root and refine accuracy) returns a surprisingly accurate square root. Yes, there will be howls about the use of a union, and it is only valid where integer and float storage are equivalent, but if you are hacking your own square root, this is relatively efficient:
float sqrt_f (float x)
{
    float xhalf = 0.5f*x;
    union
    {
        float x;
        int i;
    } u;
    u.x = x;
    u.i = 0x5f3759df - (u.i >> 1);
    /* The next line can be repeated any number of times to increase accuracy */
    // u.x = u.x * (1.5f - xhalf * u.x * u.x);
    int i = 10;
    while (i--)
        u.x *= 1.5f - xhalf * u.x * u.x;
    return 1.0f / u.x;
}

accuracy of sqrt of integers

I have a loop like this:
for(uint64_t i=0; i*i<n; i++) {
This requires doing a multiplication every iteration. If I could calculate the sqrt before the loop then I could avoid this.
unsigned cut = sqrt(n);
for(uint64_t i=0; i<cut; i++) {
In my case it's okay if the sqrt function rounds up to the next integer but it's not okay if it rounds down.
My question is: is the sqrt function accurate enough to do this for all cases?
Edit: Let me list some cases. If n is a perfect square so that n = y^2 my question would be - is cut=sqrt(n)>=y for all n? If cut=y-1 then there is a problem. E.g. if n = 120 and cut = 10 it's okay but if n=121 (11^2) and cut is still 10 then it won't work.
My first concern was the fractional part of float only has 23 bits and double 52 so they can't store all the digits of some 32-bit or 64-bit integers. However, I don't think this is a problem. Let's assume we want the sqrt of some number y but we can't store all the digits of y. If we let the fraction of y we can store be x we can write y = x + dx then we want to make sure that whatever dx we choose does not move us to the next integer.
sqrt(x+dx) < sqrt(x) + 1 //solve
dx < 2*sqrt(x) + 1
// e.g for x = 100 dx < 21
// sqrt(100+20) < sqrt(100) + 1
Float can store 23 bits so we let y = 2^23 + 2^9. This is more than sufficient since 2^9 < 2*sqrt(2^23) + 1. It's easy to show this for double as well with 64-bit integers. So although they can't store all the digits as long as the sqrt of what they can store is accurate then the sqrt(fraction) should be sufficient. Now let's look at what happens for integers close to INT_MAX and the sqrt:
unsigned xi = -1-1;
printf("%u %u\n", xi, (unsigned)(float)xi); //4294967294 4294967295
printf("%u %u\n", (unsigned)sqrt(xi), (unsigned)sqrtf(xi)); //65535 65536
Since float can't store all the digits of 2^32-2 and double can, they get different results for the sqrt. But the float version of the sqrt is one integer larger. This is what I want. For 64-bit integers, as long as the sqrt of the double always rounds up it's okay.
First, integer multiplication is really quite cheap. So long as you have more than a few cycles of work per loop iteration and one spare execute slot, it should be entirely hidden by reorder on most non-tiny processors.
If you did have a processor with dramatically slow integer multiply, a truly clever compiler might transform your loop to:
for (uint64_t i = 0, j = 0; j < n; j += 2*i+1, i++)
replacing the multiply with an lea or a shift and two adds (here j tracks i*i, using the identity (i+1)^2 = i^2 + 2i + 1).
Those notes aside, let’s look at your question as stated. No, you can’t just use i < sqrt(n). Counter-example: n = 0x20000000000000. Assuming adherence to IEEE-754, you will have cut = 0x5a82799, and cut*cut is 0x1ffffff8eff971.
However, a basic floating-point error analysis shows that the error in computing sqrt(n) (before conversion to integer) is bounded by 3/4 of an ULP. So you can safely use:
uint32_t cut = sqrt(n) + 1;
and you’ll perform at most one extra loop iteration, which is probably acceptable. If you want to be totally precise, instead use:
uint32_t cut = sqrt(n);
cut += (uint64_t)cut*cut < n;
Edit: z boson clarifies that for his purposes, this only matters when n is an exact square (otherwise, getting a value of cut that is “too small by one” is acceptable). In that case, there is no need for the adjustment and one can safely just use:
uint32_t cut = sqrt(n);
Why is this true? It’s pretty simple to see, actually. Converting n to double introduces a perturbation:
double_n = n*(1 + e)
which satisfies |e| < 2^-53. The mathematical square root of this value can be expanded as follows:
square_root(double_n) = square_root(n)*square_root(1+e)
Now, since n is assumed to be a perfect square with at most 64 bits, square_root(n) is an exact integer with at most 32 bits, and is the mathematically precise value that we hope to compute. To analyze the square_root(1+e) term, use a Taylor series about 1:
square_root(1+e) = 1 + e/2 + O(e^2)
= 1 + d with |d| <~ 2^-54
Thus, the mathematically exact value square_root(double_n) is less than half an ULP away from[1] the desired exact answer, and necessarily rounds to that value.
[1] I’m being fast and loose here in my abuse of relative error estimates, where the relative size of an ULP actually varies across a binade — I’m trying to give a bit of the flavor of the proof without getting too bogged down in details. This can all be made perfectly rigorous, it just gets to be a bit wordy for Stack Overflow.
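As a quick, non-rigorous sanity check of the perfect-square claim (a sampled sweep, not a substitute for the proof above; the stride is arbitrary, chosen only to keep the run short):

#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    for (uint64_t i = 1; i <= 0xFFFFFFFFu; i += 0x10001) {
        uint64_t n = i * i;                       /* a 64-bit perfect square */
        uint32_t cut = (uint32_t)sqrt((double)n);
        if (cut != i) {
            printf("mismatch at i=%llu: cut=%u\n",
                   (unsigned long long)i, (unsigned)cut);
            return 1;
        }
    }
    puts("no mismatches in the sampled range");
    return 0;
}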
My whole answer is useless if you have access to IEEE 754 double precision floating point, since Stephen Canon demonstrated both
a simple way to avoid imul in loop
a simple way to compute the ceiling sqrt
Otherwise, if for some reason you have a non IEEE 754 compliant platform, or only single precision, you could get the integer part of square root with a simple Newton-Raphson loop. For example in Squeak Smalltalk we have this method in Integer:
sqrtFloor
    "Return the integer part of the square root of self"
    | guess delta |
    guess := 1 bitShift: (self highBit + 1) // 2.
    [
        delta := (guess squared - self) // (guess + guess).
        delta = 0 ] whileFalse: [
            guess := guess - delta ].
    ^guess - 1
where // is the operator for the quotient of integer division (floor division).
A final guard, guess*guess <= self ifTrue: [^guess]., can be avoided if the initial guess is fed in excess of the exact solution, as is the case here.
Initializing with an approximate float sqrt was not an option there, because the integers are arbitrarily large and might overflow a float.
But here, you could seed the initial guess with floating point sqrt approximation, and my bet is that the exact solution will be found in very few loops. In C that would be:
uint32_t sqrtFloor(uint64_t n)
{
    int64_t diff;
    int64_t delta;
    uint64_t guess = sqrt(n); /* implicit conversions here... */
    while ((delta = (diff = guess*guess - n) / (guess + guess)) != 0)
        guess -= delta;
    return guess - (diff > 0);
}
That's a few integer multiplications and divisions, but outside the main loop.
What you are looking for is a way to calculate a rational upper bound of the square root of a natural number. A continued fraction is what you need; see wikipedia.
For x > 0, there is

    sqrt(x) = 1 + (x - 1) / (1 + sqrt(x))

To make the notation more compact, expand sqrt(x) recursively in the denominator, rewriting the above formula as the continued fraction

    sqrt(x) = 1 + (x - 1) / (2 + (x - 1) / (2 + (x - 1) / (2 + ...)))

Truncating the continued fraction (removing the (x - 1)/(2 + ...) tail at some recursion depth) gives a sequence of approximations of sqrt(x). Approximations at odd truncation depths are upper bounds, and they get tighter with depth; even depths give lower bounds. When the distance between an upper bound and its neighboring lower bound is less than 1, that upper bound is what you need. Using that value as the value of cut (here cut must be a float number) solves the problem.
For very large numbers, rational arithmetic should be used, so that no precision is lost during conversion between integer and floating point.
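A small sketch of the truncation scheme in plain doubles (the answer calls for exact rationals on very large n; doubles here serve only to show the alternating bounds, and cf_sqrt is an illustrative name):

#include <stdio.h>

/* keep `depth` levels of the continued fraction; truncate the rest */
static double cf_sqrt(double x, int depth)
{
    double tail = 0.0;
    for (int k = 0; k < depth; k++)
        tail = (x - 1.0) / (2.0 + tail);  /* wrap one more (x-1)/(2+...) level */
    return 1.0 + tail;
}

int main(void)
{
    for (int d = 1; d <= 6; d++)  /* odd d: upper bounds; even d: lower bounds */
        printf("depth %d: %.9f\n", d, cf_sqrt(2.0, d));
    return 0;  /* sqrt(2) = 1.414213562... */
}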

Floating-point division - bias to avoid a result less than an 'exact' value

I am currently tightening floating-point numerics for an estimate of a value. (It's: p(k,t) for those who are interested.) Essentially, the utility can never yield an under-estimate of this value: the security of probable prime generation depends on a numerically robust implementation. While output results agree with the published values, I have used the DBL_EPSILON value to ensure that division, in particular, yields a result that is never less than the true value:
Consider: double x, y; /* assigned some values... */
The evaluation: r = x / y; occurs frequently, but these (finite precision) results may truncate significant digits from the true result - a possibly infinite precision rational expansion. I currently try to mitigate this by applying a bias to the numerator, i.e.,
r = ((1.0 + DBL_EPSILON) * x) / y;
If you know anything about this subject, p(k,t) is typically much smaller than most estimates - but it's simply not good enough to dismiss the issue with this "observation". I can of course state:
(((1.0 + DBL_EPSILON) * x) / y) >= (x / y)
Of course, I need to ensure that the 'biased' result is greater than, or equal to, the 'exact' value. While I am certain it has to do with manipulating or scaling DBL_EPSILON, I obviously want the 'biased' result to exceed the 'exact' result by a minimum - demonstrable under IEEE-754 arithmetic assumptions.
Yes, I've looked though Goldberg's paper, and I've searched for a robust solution. Please don't suggest manipulation of rounding modes. Ideally, I'm after an answer by someone with a very good grasp on floating-point theorems, or knows of a very well illustrated example.
EDIT: To clarify, (((1.0 + DBL_EPSILON) * x) / y) or a form (((1.0 + c) * x) / y), is not a prerequisite. This was simply an approach I was using as 'probably good enough', without having provided a solid basis for it. I can state that the numerator and denominator will not be special values: NaNs, Infs, etc., nor will the denominator be zero.
First: I know that you don't want to set the rounding mode, but it really should be said that, in terms of precision, as others have noted, setting the rounding mode will produce as good an answer as possible. Specifically, assuming that x and y are both positive (which seems to be the case, but hasn't been explicitly stated in your question), the following is a standard C snippet with the desired effect[1]:
#include <math.h>
#pragma STDC FENV_ACCESS ON
int OldRoundingMode = fegetround();
fesetround(FE_UPWARD);
r = x/y;
fesetround(OldRoundingMode);
Now, that aside, there are legitimate reasons not to want to change the rounding mode (some platforms don't support round-to-plus-infinity, on some platforms changing the rounding mode introduces a large serializing stall, etc etc), and your desire not to do so shouldn't be brushed aside so casually. So, respecting your question, what else can we do?
If your platform supports fused multiply-add, there's a very elegant solution available to you:
#include <math.h>
r = x/y;
if (fma(r,y,-x) < 0) r = nextafter(r, INFINITY);
On platforms with hardware fma support, this is very efficient. Even if fma( ) is implemented in software, it may be acceptable. This approach has the virtue that it will deliver the same result as would changing the rounding mode; that is, the tightest bound possible.
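Wrapped up as a reusable function under the same assumptions (positive finite x and y, a C99 math library), the trick might look like this sketch; div_round_up is an illustrative name:

#include <math.h>

double div_round_up(double x, double y)
{
    double r = x / y;
    if (fma(r, y, -x) < 0.0)         /* r*y - x < 0  =>  r underestimates x/y */
        r = nextafter(r, INFINITY);  /* bump to the next representable double */
    return r;
}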
If your platform's C library is antediluvian and does not provide fma, there is still hope. Your claimed statement is correct (assuming no denormal values, at least -- I would need to think more about what happens for denormals); (1.0+DBL_EPSILON)*x/y really is always greater than or equal to the infinitely precise x/y. It will sometimes be one ulp larger than the smallest value with this property, but that's a very small and probably acceptable margin. The proof of these claims is pretty fussy, and probably not suitable for StackOverflow, but I'll give a quick sketch:
Ignoring denormals, it suffices to restrict ourselves to x, y in [1.0, 2.0).
(1.0 + eps)*x >= x + eps > x. To see this, observe:
(1.0 + eps)*x = x + x*eps >= x + eps > x.
Let P be the mathematically precise x/y. We have:
(1.0 + eps)*x/y >= (x + eps)/y = x/y + eps/y = P + eps/y
Now, y is bounded above by 2, so this gives us:
(1.0 + eps)*x/y > P + eps/2
which is sufficient to guarantee that the result rounds to a value >= P. This also shows us the way to a tighter bound. We could instead use nextafter(x,INFINITY)/y to get the desired effect with a tighter bound in many cases. (nextafter(x,INFINITY) is always x + ulp, whereas (1.0 + eps)*x will be x + 2ulp half of the time. If you want to avoid calling the nextafter library function, you can use (x + (0.75*DBL_EPSILON)*x) instead to get the same result, under the working assumption of positive normal values).
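For reference, the tighter numerator bumps just described, collected in an illustrative helper (the wrapper is mine to show usage; the expressions are the ones from the paragraph above, and positive normal values are assumed):

#include <float.h>
#include <math.h>

double div_upper_tight(double x, double y)
{
    return nextafter(x, INFINITY) / y;  /* numerator bumped by exactly 1 ulp */
    /* or, avoiding the library call:
       return (x + (0.75 * DBL_EPSILON) * x) / y;  */
}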
In order to be really pedantically correct, this would become significantly more complicated. No one really writes code like this, but it would be along these lines:
#include <math.h>
#pragma STDC FENV_ACCESS ON
#if defined FE_UPWARD
    int OldRoundingMode = fegetround();
    if (OldRoundingMode < 0) goto Error;
    if (fesetround(FE_UPWARD)) goto Error;
    r = x/y;
    if (fesetround(OldRoundingMode)) goto TrulyHosed;
    return r;
TrulyHosed:
    // we established the desired rounding mode and did our computation,
    // but now we can't set it back to the original mode. I have no idea
    // how you handle this gracefully.
Error:
#else
    // we can't establish the desired rounding mode, so fall back on
    // something else.
#endif

Problem with Precision floating point operation in C

For one of my course project I started implementing "Naive Bayesian classifier" in C. My project is to implement a document classifier application (especially Spam) using huge training data.
Now I have problem implementing the algorithm because of the limitations in the C's datatype.
( Algorithm I am using is given here, http://en.wikipedia.org/wiki/Bayesian_spam_filtering )
PROBLEM STATEMENT:
The algorithm involves taking each word in a document and calculating the probability of it being a spam word. If p1, p2, p3, ..., pn are the probabilities of word-1, 2, 3, ..., n, the probability of the doc being spam or not is calculated using

    p = (p1 × p2 × ... × pn) / (p1 × p2 × ... × pn + (1-p1) × (1-p2) × ... × (1-pn))

Here, a probability value can very easily be around 0.01. So even if I use the datatype "double" my calculation will go for a toss. To confirm this I wrote the sample code given below.
#define PROBABILITY_OF_UNLIKELY_SPAM_WORD (0.01)
#define PROBABILITY_OF_MOSTLY_SPAM_WORD   (0.99)

int main()
{
    int index;
    long double numerator = 1.0;
    long double denom1 = 1.0, denom2 = 1.0;
    long double doc_spam_prob;

    /* Simulating FEW unlikely spam words */
    for (index = 0; index < 162; index++)
    {
        numerator = numerator*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
        denom2 = denom2*(long double)PROBABILITY_OF_UNLIKELY_SPAM_WORD;
        denom1 = denom1*(long double)(1 - PROBABILITY_OF_UNLIKELY_SPAM_WORD);
    }
    /* Simulating lots of mostly definite spam words */
    for (index = 0; index < 1000; index++)
    {
        numerator = numerator*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
        denom2 = denom2*(long double)PROBABILITY_OF_MOSTLY_SPAM_WORD;
        denom1 = denom1*(long double)(1 - PROBABILITY_OF_MOSTLY_SPAM_WORD);
    }
    doc_spam_prob = (numerator/(denom1+denom2));
    return 0;
}
I tried float, double, and even long double, but the problem remains.
Hence, say in a 100K-word document I am analyzing, if just 162 words have a 1% spam probability and the remaining 99838 are conspicuously spam words, my app will still call it a Not Spam doc because of precision error (the numerator easily underflows to ZERO)!
This is the first time I have hit such an issue. So how exactly should this problem be tackled?
This happens often in machine learning. AFAIK, there's nothing you can do about the loss in precision. So to bypass this, we use the log function and convert divisions and multiplications to subtractions and additions, resp.
So I decided to do the math.
The original equation is:

    p = (p_1 × ... × p_n) / (p_1 × ... × p_n + (1-p_1) × ... × (1-p_n))

I slightly modify it:

    1/p = 1 + ∏ (1-p_i)/p_i

Taking logs on both sides:

    ln(1/p - 1) = Σ ln((1-p_i)/p_i) = Σ ( ln(1-p_i) - ln(p_i) )

Let

    η = Σ ( ln(1-p_i) - ln(p_i) )

Substituting,
hence the alternate formula for computing the combined probability:

    p = 1 / (1 + e^η)
If you need me to expand on this, please leave a comment.
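A minimal C sketch of that alternate formula (the function name is illustrative; log1p() keeps ln(1 - p_i) accurate when p_i is tiny):

#include <math.h>

/* p[] holds the n per-word spam probabilities, each strictly in (0, 1) */
double combined_spam_prob(const double *p, int n)
{
    double eta = 0.0;
    for (int i = 0; i < n; i++)
        eta += log1p(-p[i]) - log(p[i]);  /* ln(1 - p_i) - ln(p_i) */
    return 1.0 / (1.0 + exp(eta));  /* exp may saturate; p then flushes to 0 or 1 */
}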
Here's a trick:
for the sake of readability, let S := p_1 * ... * p_n and H := (1-p_1) * ... * (1-p_n),
then we have:
p = S / (S + H)
p = 1 / ((S + H) / S)
p = 1 / (1 + H / S)
let's expand again:
p = 1 / (1 + ((1-p_1) * ... * (1-p_n)) / (p_1 * ... * p_n))
p = 1 / (1 + (1-p_1)/p_1 * ... * (1-p_n)/p_n)
So basically, you will obtain a product of quite large numbers (between 0 and, for p_i = 0.01, 99). The idea is, not to multiply tons of small numbers with one another, to obtain, well, 0, but to make a quotient of two small numbers. For example, if n = 1000000 and p_i = 0.5 for all i, the above method will give you 0/(0+0) which is NaN, whereas the proposed method will give you 1/(1+1*...1), which is 0.5.
You can get even better results, when all p_i are sorted and you pair them up in opposed order (let's assume p_1 < ... < p_n), then the following formula will get even better precision:
p = 1 / (1 + (1-p_1)/p_n * ... * (1-p_n)/p_1)
that way you divide big numerators (small p_i) by big denominators (big p_(n+1-i)), and small numerators by small denominators.
edit: MSalter proposed a useful further optimization in his answer. Using it, the formula reads as follows:
p = 1 / (1 + (1-p_1)/p_n * (1-p_2)/p_(n-1) * ... * (1-p_(n-1))/p_2 * (1-p_n)/p_1)
Your problem is caused because you are collecting too many terms without regard for their size. One solution is to take logarithms. Another is to sort your individual terms. First, let's rewrite the equation as 1/p = 1 + ∏((1-p_i)/p_i). Now your problem is that some of the terms are small, while others are big. If you have too many small terms in a row, you'll underflow, and with too many big terms you'll overflow the intermediate result.
So, don't put too many of the same order in a row. Sort the terms (1-p_i)/p_i. As a result, the first will be the smallest term, the last the biggest. Now, if you multiplied them straight away you would still have an underflow. But the order of calculation doesn't matter. Use two iterators into your temporary collection. One starts at the beginning (i.e. (1-p_0)/p_0), the other at the end (i.e. (1-p_n)/p_n), and your intermediate result starts at 1.0. Now, when your intermediate result is >= 1.0, you take a term from the front, and when your intermediate result is < 1.0 you take a term from the back.
The result is that as you take terms, the intermediate result will oscillate around 1.0. It will only go up or down as you run out of small or big terms. But that's OK. At that point, you've consumed the extremes on both ends, so the intermediate result will slowly approach the final result.
There's of course a real possibility of overflow. If the input is completely unlikely to be spam (p=1E-1000) then 1/p will overflow, because ∏((1-p_i)/p_i) overflows. But since the terms are sorted, we know that the intermediate result will overflow only if ∏((1-p_i)/p_i) overflows. So, if the intermediate result overflows, there's no subsequent loss of precision.
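A sketch of the two-iterator walk described above, assuming terms[] already holds the (1-p_i)/p_i values sorted ascending (e.g. with qsort() beforehand; the function name is illustrative):

double balanced_product(const double *terms, int n)
{
    int lo = 0, hi = n - 1;
    double prod = 1.0;
    while (lo <= hi) {
        if (prod >= 1.0)
            prod *= terms[lo++];  /* running result too big: take a small term */
        else
            prod *= terms[hi--];  /* running result too small: take a big term */
    }
    return prod;  /* the full product, steered near 1.0 along the way */
}

The spam probability is then p = 1 / (1 + balanced_product(terms, n)).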
Try computing the inverse 1/p. That gives you an equation of the form 1/p = 1 + ((1-p_1) × ... × (1-p_n)) / (p_1 × ... × p_n).
If you then count the occurrence of each probability (it looks like you have a small number of values that recur), you can use the pow() function, e.g. pow(1-p, occurrences_of_p) * pow(1-q, occurrences_of_q), and avoid individual roundoff with each multiplication.
You can use probability in percent or per mille:
doc_spam_prob= (numerator*100/(denom1+denom2));
or
doc_spam_prob= (numerator*1000/(denom1+denom2));
or use some other coefficient
I am not strong in math so I cannot comment on possible simplifications to the formula that might eliminate or reduce your problem. However, I am familiar with the precision limitations of long double types and am aware of several arbitrary and extended precision math libraries for C. Check out:
http://www.nongnu.org/hpalib/
and
http://www.tc.umn.edu/~ringx004/mapm-main.html
