In various contexts, for example in argument reduction for mathematical functions, one needs to compute (a - K) / (a + K), where a is a positive variable argument and K is a constant. In many cases, K is a power of two, which is the use case relevant to my work. I am looking for efficient ways to compute this quotient more accurately than can be accomplished with the straightforward division. Hardware support for fused multiply-add (FMA) can be assumed, as this operation is provided by all major CPU and GPU architectures at this time, and is available in C/C++ via the functions fma() and fmaf().
For ease of exploration, I am experimenting with float arithmetic. Since I plan to port the approach to double arithmetic as well, no operations with higher than the native precision of argument and result may be used. My best solution so far is:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 1 */
m = a - K;                  /* exact for a in [K/2, 2*K] (Sterbenz) */
p = a + K;
r = 1.0f / p;               /* reciprocal of the divisor */
q = m * r;                  /* initial quotient */
t = fmaf (q, -2.0f*K, m);   /* residual, with a+K represented as m + 2*K */
e = fmaf (q, -m, t);        /* e = m - q*(m + 2*K), i.e. (a-K) - q*(a+K) */
q = fmaf (r, e, q);         /* apply the correction e/(a+K) */
For arguments a in the interval [K/2, 4.23*K], the code above computes the quotient almost correctly rounded for all inputs (maximum error is exceedingly close to 0.5 ulps), provided that K is a power of 2, and there is no overflow or underflow in intermediate results. For K not a power of two, this code is still more accurate than the naive algorithm based on division. In terms of performance, this code can be faster than the naive approach on platforms where the floating-point reciprocal can be computed faster than the floating-point division.
I make the following observation when K = 2^n: When the upper bound of the work interval increases to 8*K, 16*K, ..., the maximum error increases gradually and slowly approaches the maximum error of the naive computation from below. Unfortunately, the same does not appear to be true for the lower bound of the interval. If the lower bound drops to 0.25*K, the maximum error of the improved method above equals the maximum error of the naive method.
Is there a method to compute q = (a - K) / (a + K) that can achieve smaller maximum error (measured in ulp vs the mathematical result) compared to both the naive method and the above code sequence, over a wider interval, in particular for intervals whose lower bound is less than 0.5*K? Efficiency is important, but a few more operations than are used in the above code can likely be tolerated.
In one answer below, it was pointed out that I could enhance accuracy by returning the quotient as an unevaluated sum of two operands, that is, as a head-tail pair q:qlo, i.e. similar to the well-known double-float and double-double formats. In my code above, this would mean changing the last line to qlo = r * e.
This approach is certainly useful, and I had already contemplated its use for an extended-precision logarithm for use in pow(). But it doesn't fundamentally help with the desired widening of the interval on which the enhanced computation provides more accurate quotients. In a particular case I am looking at, I would like to use K=2 (for single precision) or K=4 (for double precision) to keep the primary approximation interval narrow, and the interval for a is roughly [0,28]. The practical problem I am facing is that for arguments < 0.25*K the accuracy of the improved division is not substantially better than with the naive method.
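For reference, a sketch of that head-tail version of the code above (the only change from Variant 1 is the last line, as suggested):
/* Variant 1, returning the quotient as a head-tail pair q:qlo */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
t = fmaf (q, -2.0f*K, m);
e = fmaf (q, -m, t);
qlo = r * e;                /* tail of the result; q + qlo is the head-tail quotient */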
If a is large compared to K, then (a-K)/(a+K) = 1 - 2K / (a + K) will give a good approximation. If a is small compared to K, then 2a / (a + K) - 1 will give a good approximation. If K/2 ≤ a ≤ 2K, then a-K is an exact operation, so doing the division will give a decent result.
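A rough sketch of that case split (my own illustration; the function name and the crossover thresholds are placeholders, not tuned):
float quotient_by_cases (float a, float K)
{
    if (a >= 2.0f * K)                    /* a large compared to K */
        return 1.0f - 2.0f * K / (a + K);
    if (a <= 0.5f * K)                    /* a small compared to K */
        return 2.0f * a / (a + K) - 1.0f;
    return (a - K) / (a + K);             /* K/2 <= a <= 2K: a - K is exact */
}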
One possibility is to track the error of m and p in m1 and p1 with classical Dekker/Shewchuk:
m = a - k;        /* two-sum for m = a - k: m1 holds the rounding error */
k0 = a - m;
a0 = k0 + m;
k1 = k0 - k;
a1 = a - a0;
m1 = a1 + k1;     /* (a - k) == m + m1 exactly */
p = a + k;        /* two-sum for p = a + k: p1 holds the rounding error */
k0 = p - a;
a0 = p - k0;
k1 = k - k0;
a1 = a - a0;
p1 = a1 + k1;     /* (a + k) == p + p1 exactly */
Then, correct the naive division:
q = m / p;                 /* naive quotient */
r0 = fmaf (p, -q, m);      /* residual against the head:  m  - q*p  */
r1 = fmaf (p1, -q, m1);    /* residual against the tails: m1 - q*p1 */
r = r0 + r1;               /* r is close to (a-k) - q*(a+k) */
q1 = r / p;                /* correction term */
q = q + q1;
That'll cost you 2 divisions, but should be near half ulp if I didn't screw up.
But these divisions can be replaced by multiplications with the inverse of p without any problem, since the first incorrectly rounded division will be compensated by the remainder r, and the second incorrectly rounded division does not really matter (the last bits of the correction q1 won't change anything).
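A sketch of what that looks like with the divisions replaced (my reading of the suggestion; rcp is just a name I chose for the stored reciprocal):
rcp = 1.0f / p;            /* one reciprocal instead of two divisions */
q = m * rcp;               /* its rounding error is picked up by the residual */
r0 = fmaf (p, -q, m);
r1 = fmaf (p1, -q, m1);
r = r0 + r1;
q1 = r * rcp;              /* last-bit error here does not matter */
q = q + q1;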
I don't really have an answer (proper floating point error analyses are very tedious) but a few observations:
Fast reciprocal instructions (such as RCPSS) are not as accurate as division, so you may see a reduction in accuracy if using these.
m is computed exactly if a ∈ [0.5×Kb, 2^(1+n)×Kb), where Kb is the power of 2 below K (or K itself if K is a power of 2), and n is the number of trailing zeros in the significand of K (i.e. if K is a power of 2, then n = 23).
This is similar to a simplified form of the div2 algorithm from Dekker (1971): to expand the range (particularly the lower bound), you'll probably have to incorporate more correction terms from this (i.e. store m as the sum of 2 floats, or use a double).
Since my goal is to merely widen the interval on which accurate results are achieved, rather than to find a solution that works for all possible values of a, making use of double-float arithmetic for all intermediate computation seems too costly.
Thinking some more about the problem, it is clear that the computation of the remainder of the division, e in the code from my question, is the crucial part of achieving a more accurate result. Mathematically, the remainder is (a-K) - q * (a+K). In my code, I simply used m to represent (a-K) and represented (a+K) as m + 2*K, as this delivers numerically superior results to the straightforward representation.
With relatively small additional computational cost, (a+K) can be represented as a double-float, that is, a head-tail pair p:plo, which leads to the following modified version of my original code:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 2 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
mx = fmaxf (a, K);
mn = fminf (a, K);
plo = (mx - p) + mn;        /* tail of a+K: p + plo == a + K exactly */
t = fmaf (q, -p, m);        /* residual against the head of a+K */
e = fmaf (q, -plo, t);      /* e = (a-K) - q*(a+K), up to the error in m */
q = fmaf (r, e, q);         /* apply the correction */
Testing shows that this delivers nearly correctly rounded results for a in [K/2, 2^24*K), allowing for a substantial increase to the upper bound of the interval on which accurate results are achieved.
Widening the interval at the lower end requires the more accurate representation of (a-K). We can compute this as a double-float head-tail pair m:mlo, which leads to the following code variant:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 3 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
plo = (a < K) ? ((K - p) + a) : ((a - p) + K);   /* tail of a+K: p + plo == a + K exactly */
mlo = (a < K) ? (a - (K + m)) : ((a - m) - K);   /* tail of a-K: m + mlo == a - K exactly */
t = fmaf (q, -p, m);        /* residual against the head of a+K */
e = fmaf (q, -plo, t);      /* fold in the tail of a+K */
e = e + mlo;                /* fold in the tail of a-K */
q = fmaf (r, e, q);
Exhaustive testing shows that this delivers nearly correctly rounded results for a in the interval [K/2^24, K*2^24). Unfortunately, this comes at a cost of ten additional operations compared to the code in my question, which is a steep price to pay to get the maximum error from around 1.625 ulps with the naive computation down to near 0.5 ulp.
As in my original code from the question, one can express (a+K) in terms of (a-K), thus eliminating the computation of the tail of p, plo. This approach results in the following code:
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 4 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
mlo = (a < K) ? (a - (K + m)) : ((a - m) - K);   /* tail of a-K: m + mlo == a - K exactly */
t = fmaf (q, -2.0f*K, m);
t = fmaf (q, -m, t);            /* t = m - q*(m + 2*K) */
e = fmaf (q - 1.0f, -mlo, t);   /* fold in the tail of a-K */
q = fmaf (r, e, q);
This turns out to be advantageous if the main focus is decreasing the lower limit of the interval, which is my particular focus as explained in the question. Exhaustive testing of the single-precision case shows that when K = 2^n nearly correctly rounded results are produced for values of a in the interval [K/2^24, 4.23*K]. With a total of 14 or 15 operations (depending on whether an architecture supports full predication or just conditional moves), this requires seven to eight more operations than my original code.
Lastly, one might base the residual computation directly on the original variable a to avoid the error inherent in the computation of m and p. This leads to the following code that, for K = 2^n, computes nearly correctly rounded results for a in the interval [K/2^24, K/3):
/* Compute q = (a - K) / (a + K) with improved accuracy. Variant 5 */
m = a - K;
p = a + K;
r = 1.0f / p;
q = m * r;
t = fmaf (q + 1.0f, -K, a);   /* t = a - (q+1)*K */
e = fmaf (q, -a, t);          /* e = (a-K) - q*(a+K), formed directly from a and K */
q = fmaf (r, e, q);
If you can relax the API to return another variable that models the error, then the solution becomes much simpler:
float foo(float a, float k, float *res)
{
float ret=(a-k)/(a+k);
*res = fmaf(-ret,a+k,a-k)/(a+k);
return ret;
}
This solution only handles truncation error of division, but does not handle the loss of precision of a+k and a-k.
To handle those errors, I think I need to use double precision, or bithack to use fixed point.
Test code (updated to artificially generate non-zero least significant bits in the input): https://ideone.com/bHxAg8
The problem is the addition in (a + K). Any loss of precision in (a + K) is magnified by the division. The problem isn't the division itself.
If the exponents of a and K are the same (almost) no precision is lost, and if the absolute difference between the exponents is greater than the significand size then either (a + K) == a (if a has larger magnitude) or (a + K) == K (if K has larger magnitude).
There is no way to prevent this. Increasing the significand size (e.g. using 80-bit "extended double" on 80x86) only helps widen the "accurate result range" slightly. To understand why, consider smallest + largest (where smallest is the smallest positive denormal a 32-bit floating point number can be). In this case (for 32-bit floats) you'd need a significand size of about 260 bits for the result to avoid precision loss completely. Doing (e.g.) temp = 1/(a + K); result = a * temp - K * temp; won't help much either because you've still got exactly the same (a + K) problem (but it would avoid a similar problem in (a - K)). Also you can't do result = anything / p + anything_error/p_error because division doesn't work like that.
There are only 3 alternatives I can think of to get close to 0.5 ulps for all possible positive values of a that can fit in 32-bit floating point. None are likely to be acceptable.
The first alternative involves pre-computing a lookup table (using "big real number" maths) for every value of a, which (with some tricks) ends up being about 2 GiB for 32-bit floating point (and completely insane for 64-bit floating point). Of course if the range of possible values of a is smaller than "any positive value that can fit in a 32-bit float" the size of the lookup table would be reduced.
The second alternative is to use something else ("big real number") for the calculation at run-time (and convert to/from 32-bit floating point).
The third alternative involves "something" (I don't know what it's called, but it's expensive). Set the rounding mode to "round to positive infinity" and calculate temp1 = (a + K); if(a < K) temp2 = (a - K); then switch to "round to negative infinity" and calculate if(a >= K) temp2 = (a - K); lower_bound = temp2 / temp1;. Next do a_lower = a and decrease a_lower by the smallest amount possible and repeat the "lower_bound" calculation, and keep doing that until you get a different value for lower_bound, then revert back to the previous value of a_lower. After that you do essentially the same (but with opposite rounding modes, and incrementing not decrementing) to determine upper_bound and a_upper (starting with the original value of a). Finally, interpolate, like a_range = a_upper - a_lower; result = upper_bound * (a_upper - a) / a_range + lower_bound * (a - a_lower) / a_range;. Note that you will want to calculate an initial upper and lower bound and skip all of this if they're equal. Also be warned that this is all "in theory, completely untested" and I probably borked it somewhere.
Mainly what I'm saying is that (in my opinion) you should give up and accept that there's nothing that you can do to get close to 0.5 ulp. Sorry.. :)
I am stuck in a program while finding the modulus of a division.
Say for example I have:
((a*b*c)/(d*e)) % n
Now, I cannot simply calculate the expression and then modulo it to n as the multiplication and division are going in a loop and the value is large enough to not fit even in long long.
As clarified in comments, n can be considered prime.
I found that, for multiplication, I can easily calculate it as:
((a%n*b%n)%n*c%n)%n
but couldn't understand how to calculate the division part then.
The problem I am facing is say for a simple example:
((7*3*5)/(5*3)) % 11
The value of the above expression would be 7,
but if I calculate the multiplication, modulo, it would be like:
((7%11)*(3%11))%11 = 10
((10%11)*(5%11))%11 = 6
Now I am left with 6/15 and I have no way to generate the correct answer.
Could someone help me? Please make me understand the logic using the above example.
Since 11 is prime, Z_11 is a field. Since 15 % 11 is 4, 1/15 equals 3 (since 3 * 4 % 11 is 1). Therefore, 6/15 is 6 * 3 = 18, which is 7 mod 11.
In your comments below the question, you clarify that the modulus will always be a prime.
To efficiently generate a table of multiplicative inverses, you can raise 2 to successive powers to see which values it generates. Note that in a field Z_p, where p is an odd prime, 2^(p-1) = 1. So, for Z_11:
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 5
2^5 = 10
2^6 = 9
2^7 = 7
2^8 = 3
2^9 = 6
So the multiplicative inverse of 5 (which is 2^4) is 2^6 (which is 9).
So, you can generate the above table like this:
power_of_2[0] = 1;
for (int i = 1; i < n; ++i) {
power_of_2[i] = (2*power_of_2[i-1]) % n;
}
And the multiplicative inverse table can be computed like this:
mult_inverse[1] = 1;
for (int i = 1; i < n; ++i) {
mult_inverse[power_of_2[i]] = power_of_2[n-1-i];
}
In your example, since 15 = 4 mod 11, you actually end up with having to evaluate (6/4) mod 11.
In order to find an exact solution to this, rearrange it as 6 = ((x * 4) mod 11), which makes it clearer how modulo division works.
If nothing else, if the modulus is always small, you can iterate from 0 to modulus-1 to get the solution.
Note that when the modulus is not prime, there may be multiple solutions to the reduced problem. For instance, there are two solutions to 4 = ( ( x * 2) mod 8): 2 and 6. This will happen for a reduced problem of form:
a = ( (x * b) mod c)
whenever b and c are NOT relatively prime (ie whenever they DO share a common divisor).
Similarly, when b and c are NOT relatively prime, there may be no solution to the reduced problem. For instance, 3 = ( (x * 2) mod 8) has no solution. This happens whenever the largest common divisor of b and c does not also divide a.
These latter two circumstances are consequences of the integers from 0 to n-1 not forming a group under multiplication (or equivalently, a field under + and *) when n is not prime, but rather forming simply the less useful structure of a ring.
I think the way the question is asked, it should be assumed that the numerator is divisible by the denominator. In that case the finite field solution for prime n and speculations about possible extensions and caveats for non-prime n is basically overkill. If you have all the numerator terms and denominator terms stored in arrays, you can iteratively test pairs of (numerator term, denominator term) and quickly find the greatest common divisor (gcd), and then divide the numerator term and denominator term by the gcd. (Finding the gcd is a classical problem and you can easily find a simple solution online.) In the worst case you will have to iterate over all possible pairs but at some point, if the denominator indeed divides the numerator, then you'll eventually be left with reduced numerator terms and all denominator terms will be 1. Then you're ready to apply multiplication (avoiding overflow) the way you described.
As n is prime, dividing by an integer b is simply multiplying by b's inverse. That is:
(a / b) mod n = (a * inv(b)) mod n
where
inv(b) = (b ^ (n - 2)) mod n
Calculating inv(b) can be done in O(log(n)) time using the Exponentiation by squaring algorithm. Here is the code:
int inv(int b, int n)
{
int r = 1, m = n - 2;
while (m)
{
if (m & 1) r = (long long)r * b % n;
b = (long long)b * b % n;
m >>= 1;
}
return r;
}
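As a quick check against the example in the question (my own usage sketch): 15 mod 11 is 4 and inv(4, 11) is 3, so (6 * inv(15 % 11, 11)) % 11 yields the expected 7.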
Why does it work? According to Fermat's little theorem, if n is prime, b ^ (n - 1) mod n = 1 for any positive integer b not divisible by n. Therefore we have inv(b) * b mod n = 1.
Another solution for finding inv(b) is the Extended Euclidean algorithm, which needs a bit more code to implement.
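For completeness, a minimal sketch of that alternative (my own code, assuming gcd(b, n) == 1):
/* Extended Euclidean algorithm: returns x with (b * x) % n == 1.
   The result is reduced into [0, n).                             */
int inv_euclid(int b, int n)
{
    int old_r = b, r = n;      /* invariant: old_x*b == old_r (mod n) */
    int old_x = 1, x = 0;
    while (r != 0)
    {
        int q = old_r / r;
        int t = old_r - q * r; old_r = r; r = t;
        t = old_x - q * x;     old_x = x; x = t;
    }
    return ((old_x % n) + n) % n;   /* old_r == gcd == 1 here */
}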
I think you can distribute the division like
z = d*e/3
(a/z)*(b/z)*(c/z) % n
That leaves only the integer division problem.
I think the problem you had was that you picked a problem that was too simple for an example. In that case the answer was 7, but what if a*b*c were not evenly divisible by d*e? You should probably look up how to do division with modulo first, then it should be clear to you :)
Instead of dividing, think in terms of multiplicative inverses. For each number in a mod-n system, there ought to be an inverse, if certain conditions are met. For d and e, find those inverses, and then it's all just multiplying. Finding the inverses is not done by dividing! There's plenty of info out there...
Background:
Given n balls such that:
'a' balls are of colour GREEN
'b' balls are of colour BLUE
'c' balls are of colour RED
...
(of course a + b + c + ... = n)
The number of permutations in which these balls can be arranged is given by:
perm = n! / (a! b! c! ..)
Question 1:
How can I 'elegantly' calculate perm so as to avoid an integer overflow for as long as possible, and be sure that when I am done calculating, I either have the correct value of perm, or I know that the final result will overflow?
Basically, I want to avoid using something like GNU GMP.
Optionally, Question 2:
Is this a really bad idea, and should I just go ahead and use GMP?
These are known as the multinomial coefficients, which I shall denote by m(a,b,...).
And you can efficiently calculate them avoiding overflow by exploiting this identity (which should be fairly simple to prove):
m(a,b,c,...) = m(a-1,b,c,...) + m(a,b-1,c,...) + m(a,b,c-1,...) + ...
m(0,0,0,...) = 1 // base case
m(anything negative) = 0 // base case 2
Then it's a simple matter of using recursion to calculate the coefficients. Note that to avoid an exponential running time, you need to either cache your results (to avoid recomputation) or use dynamic programming.
To check for overflow, just make sure the sums won't overflow.
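A minimal sketch of that recurrence for just two colours, with an overflow check on the sum (my own code; more colours add more arguments and more terms, and a memo table should be added to avoid exponential recomputation):
#include <limits.h>
#include <stdbool.h>

static bool overflowed = false;   /* set if any sum would not fit */

/* m(a, b) = arrangements of a GREEN and b BLUE balls,
   via m(a,b) = m(a-1,b) + m(a,b-1).                    */
unsigned long long m2 (int a, int b)
{
    if (a < 0 || b < 0)   return 0;   /* base case 2 */
    if (a == 0 && b == 0) return 1;   /* base case   */
    unsigned long long x = m2 (a - 1, b);
    unsigned long long y = m2 (a, b - 1);
    if (x > ULLONG_MAX - y) { overflowed = true; return 0; }
    return x + y;
}
For instance, m2(2, 3) is 10, i.e. 5!/(2!*3!).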
And yes, it's a very bad idea to use arbitrary precision libraries to do this simple task.
If you have globs of cpu time, you can make lists out of all the factorials, then find the prime factorization of all the numbers in the lists, then cancel all the numbers on the top with those on the bottom, until the numbers are completely reduced.
The overflow-safest way is the way Dave suggested. You find the exponent with which the prime p divides n! by the summation
m = n;
e = 0;
do {
    m /= p;
    e += m;      /* e = floor(n/p) + floor(n/p^2) + ... */
} while (m > 0);
Subtract the exponents of p in the factorisations of a! etc. Do that for all primes <= n, and you have the factorisation of the multinomial coefficient. That calculation overflows if and only if the final result overflows. But the multinomial coefficients grow rather fast, so you will have overflow already for fairly small n. For substantial calculations, you will need a bignum library (if you don't need exact results, you can get by a bit longer using doubles).
Even if you use a bignum library, it is worthwhile to keep intermediate results from getting too large, so instead of calculating the factorials and dividing huge numbers, it is better to calculate the parts in sequence,
n!/(a! * b! * c! * ...) = n! / (a! * (n-a)!) * (n-a)! / (b! * (n-a-b)!) * ...
and to compute each of these factors, let's take the second for illustration,
(n-a)! / (b! * (n-a-b)!) = \prod_{i = 1}^b (n-a+1-i)/i
is calculated with
prod = 1
for i = 1 to b
prod = prod * (n-a+1-i)
prod = prod / i
Finally multiply the parts. This requires n divisions and n + number_of_parts - 1 multiplications, keeping the intermediate results moderately small.
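A sketch of this scheme in C (my own arrangement; counts[] holds a, b, c, ..., and overflow checks are omitted for brevity):
/* C(n, k): multiply and divide in alternation; each division is exact
   because C(n, i-1) * (n+1-i) == i * C(n, i).                          */
unsigned long long binom (unsigned n, unsigned k)
{
    unsigned long long prod = 1;
    for (unsigned i = 1; i <= k; i++) {
        prod = prod * (n + 1 - i);
        prod = prod / i;
    }
    return prod;
}

/* n! / (a! b! c! ...)  ==  C(n, a) * C(n-a, b) * C(n-a-b, c) * ... */
unsigned long long multinomial (const unsigned counts[], unsigned parts)
{
    unsigned long long result = 1;
    unsigned remaining = 0;
    for (unsigned i = 0; i < parts; i++)
        remaining += counts[i];                 /* remaining == n */
    for (unsigned i = 0; i < parts; i++) {
        result *= binom (remaining, counts[i]); /* overflow check omitted */
        remaining -= counts[i];
    }
    return result;
}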
According to this link, you can calculate multinomial coefficients as a product of several binomial coefficients, controlling integer overflow on the way.
This reduces original problem to overflow-controlled computation of a binomial coefficient.
Notation: n! = prod(1,n), where you can guess what prod does.
It's very easy, but first you must know that for any 2 positive integers (i, n > 0) the following expression is a positive integer:
prod(i,i+n-1)/prod(1,n)
Thus the idea is to slice the computation of n! in small chunks and to divide asap.
First with a, then with b, and so on.
perm = (a!/a!) * (prod(a+1, a+b)/b!) * ... * (prod(a+b+c+...y+1,n)/z!)
Each of these factors is an integer, so if perm will not overflow, neither one of its factors will do.
However, in the calculation of such a factor there could be an overflow in the numerator or denominator, but that is avoidable by alternating a multiplication from the numerator with a division:
prod(a+1, a+b)/b! = (a+1)/1 * (a+2)/2 * (a+3)/3 * ... * (a+b)/b
In that way every division will yield an integer.
How can I calculate the time complexity of a recursive algorithm?
int pow1(int x, int n) {
    if (n == 0) {
        return 1;
    }
    else {
        return x * pow1(x, n-1);
    }
}
int pow2(int x, int n) {
    if (n == 0) {
        return 1;
    }
    else if (n & 1) {
        int p = pow2(x, (n-1)/2);
        return x * p * p;
    }
    else {
        int p = pow2(x, n/2);
        return p * p;
    }
}
Analyzing recursive functions (or even evaluating them) is a nontrivial task. A (in my opinion) good introduction can be found in Don Knuth's Concrete Mathematics.
However, let's analyse these examples now:
We define a function that gives us the time needed by a function. Let's say that t(n) denotes the time needed by pow(x,n), i.e. a function of n.
Then we can conclude, that t(0)=c, because if we call pow(x,0), we have to check whether (n==0), and then return 1, which can be done in constant time (hence the constant c).
Now we consider the other case: n>0. Here we obtain t(n) = d + t(n-1). That's because we again have to check n==0, compute pow(x, n-1) (hence t(n-1)), and multiply the result by x. Checking and multiplying can be done in constant time (constant d), while the recursive calculation of pow needs t(n-1).
Now we can "expand" the term t(n):
t(n) =
d + t(n-1) =
d + (d + t(n-2)) =
d + d + t(n-2) =
d + d + d + t(n-3) =
... =
d + d + d + ... + t(1) =
d + d + d + ... + c
So, how long does it take until we reach t(1)? Since we start at t(n) and we subtract 1 in each step, it takes n-1 steps to reach t(n-(n-1)) = t(1). That, on the other hand, means that we get the constant d n-1 times, and t(1) is evaluated to c.
So we obtain:
t(n) =
...
d + d + d + ... + c =
(n-1) * d + c
So we get that t(n) = (n-1) * d + c, which is an element of O(n).
pow2 can be analysed using the Master theorem, since we can assume that time functions for algorithms are monotonically increasing. So now we have the time t(n) needed for the computation of pow2(x,n):
t(0) = c (since constant time needed for computation of pow(x,0))
for n>0 we get
/ t((n-1)/2) + d if n is odd (d is constant cost)
t(n) = <
\ t(n/2) + d if n is even (d is constant cost)
The above can be "simplified" to:
t(n) = t(floor(n/2)) + d <= t(n/2) + d (since t is monotonically increasing)
So we obtain t(n) <= t(n/2) + d, which can be solved using the Master theorem to t(n) = O(log n) (see the section Application to Popular Algorithms in the Wikipedia link, example "Binary Search").
Let's just start with pow1, because that's the simplest one.
You have a function where a single run is done in O(1). (Condition checking, returning, and multiplication are constant time.)
What you have left is then your recursion. What you need to do is analyze how often the function would end up calling itself. In pow1, it'll happen N times. N*O(1)=O(N).
For pow2, it's the same principle - a single run of the function runs in O(1). However, this time you're halving N every time. That means it will run log2(N) times - effectively once per bit. log2(N)*O(1)=O(log(N)).
Something which might help you is to exploit the fact that recursion can always be expressed as iteration (not always very simply, but it is possible). We can express pow1 as
result = 1;
while (n != 0)
{
    result = result * x;   /* accumulate one factor of x per iteration */
    n = n - 1;
}
Now you have an iterative algorithm instead, and you might find it easier to analyze it that way.
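For comparison, pow2 can also be written iteratively (a sketch of the standard square-and-multiply loop, not code from the question; the function name is mine):
int pow2_iter (int x, int n)
{
    int result = 1;
    while (n != 0) {
        if (n & 1)               /* odd exponent: fold in one factor of x */
            result = result * x;
        x = x * x;               /* square the base */
        n >>= 1;                 /* halve the exponent: about log2(n) iterations */
    }
    return result;
}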
It can be a bit complex, but I think the usual way is to use Master's theorem.
Complexity of both functions ignoring recursion is O(1)
For the first algorithm pow1(x, n) complexity is O(n) because the depth of recursion correlates with n linearly.
For the second, the complexity is O(log n). Here we recurse approximately log2(n) times; dropping the constant base 2, we get O(log n).
So I'm guessing you're raising x to the power n. pow1 takes O(n).
You never change the value of x but you take 1 from n each time until it gets to 0 (and then you just return). This means that you will make a recursive call n times.
My current implementation of a Hash Table is using Linear Probing and now I want to move to Quadratic Probing (and later to chaining and maybe double hashing too). I've read a few articles, tutorials, Wikipedia, etc... But I still don't know exactly what I should do.
Linear Probing, basically, has a step of 1 and that's easy to do. When searching, inserting or removing an element from the Hash Table, I need to calculate a hash and for that I do this:
index = hash_function(key) % table_size;
Then, while searching, inserting or removing I loop through the table until I find a free bucket, like this:
do {
    if(/* CHECK IF IT'S THE ELEMENT WE WANT */) {
        // FOUND ELEMENT
        return;
    } else {
        index = (index + 1) % table_size;
    }
} while(/* LOOP UNTIL IT'S NECESSARY */);
As for Quadratic Probing, I think what I need to do is change how the "index" step size is calculated but that's what I don't understand how I should do it. I've seen various pieces of code, and all of them are somewhat different.
Also, I've seen some implementations of Quadratic Probing where the hash function is changed to accommodate that (but not all of them). Is that change really needed, or can I avoid modifying the hash function and still use Quadratic Probing?
EDIT:
After reading everything pointed out by Eli Bendersky below I think I got the general idea. Here's part of the code at http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx:
for ( step = 1; table->table[h] != EMPTY; step++ ) {
    if ( compare ( key, table->table[h] ) == 0 )
        return 1;

    /* Move forward by quadratically, wrap if necessary */
    h = ( h + ( step * step - step ) / 2 ) % table->size;
}
There are 2 things I don't get... They say that quadratic probing is usually done using c(i)=i^2. However, in the code above, it's doing something more like c(i)=(i^2-i)/2.
I was ready to implement this on my code but I would simply do:
index = (index + (step * step)) % table_size;
...and not:
index = (index + (step * step - step)/2) % table_size;
If anything, I would do:
index = (index + (step * step)/2) % table_size;
...because I've seen other code examples dividing by two, although I don't understand why...
1) Why is it subtracting the step?
2) Why is it dividing it by 2?
There is a particularly simple and elegant way to implement quadratic probing if your table size is a power of 2:
step = 1;
do {
if(/* CHECK IF IT'S THE ELEMENT WE WANT */) {
// FOUND ELEMENT
return;
} else {
index = (index + step) % table_size;
step++;
}
} while(/* LOOP UNTIL IT'S NECESSARY */);
Instead of looking at offsets 0, 1, 2, 3, 4... from the original index, this will look at offsets 0, 1, 3, 6, 10... (the ith probe is at offset (i*(i+1))/2, i.e. it's quadratic).
This is guaranteed to hit every position in the hash table (so you are guaranteed to find an empty bucket if there is one) provided the table size is a power of 2.
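As a concrete check (my own worked example): with a table size of 8, the offsets i*(i+1)/2 for i = 0..7 are 0, 1, 3, 6, 10, 15, 21, 28, which reduce mod 8 to 0, 1, 3, 6, 2, 7, 5, 4, so every slot is visited exactly once.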
Here is a sketch of a proof:
1. Given a table size of n, we want to show that we will get n distinct values of (i*(i+1))/2 (mod n) with i = 0 ... n-1.
2. We can prove this by contradiction. Assume that there are fewer than n distinct values: if so, there must be at least two distinct integer values for i in the range [0, n-1] such that (i*(i+1))/2 (mod n) is the same. Call these p and q, where p < q.
3. i.e. (p * (p+1)) / 2 = (q * (q+1)) / 2 (mod n)
4. => (p^2 + p) / 2 = (q^2 + q) / 2 (mod n)
5. => p^2 + p = q^2 + q (mod 2n)
6. => q^2 - p^2 + q - p = 0 (mod 2n)
7. Factorise => (q - p) (p + q + 1) = 0 (mod 2n)
8. (q - p) = 0 is the trivial case p = q.
9. (p + q + 1) = 0 (mod 2n) is impossible: our values of p and q are in the range [0, n-1], and q > p, so (p + q + 1) must be in the range [2, 2n-2].
10. As we are working modulo 2n, we must also deal with the tricky case where both factors are non-zero, but multiply to give 0 (mod 2n):
11. Observe that the difference between the two factors (q - p) and (p + q + 1) is (2p + 1), which is an odd number - so one of the factors must be even, and the other must be odd.
12. (q - p) (p + q + 1) = 0 (mod 2n) => (q - p) (p + q + 1) is divisible by 2n. If n (and hence 2n) is a power of 2, this requires the even factor to be a multiple of 2n (because all of the prime factors of 2n are 2, whereas none of the prime factors of our odd factor are).
13. But (q - p) has a maximum value of n-1, and (p + q + 1) has a maximum value of 2n-2 (as seen in step 9), so neither can be a multiple of 2n.
14. So this case is impossible as well.
15. Therefore the assumption that there are fewer than n distinct values (in step 2) must be false.
(If the table size is not a power of 2, this falls apart at step 10.)
You don't have to modify the hash function for quadratic probing. The simplest form of quadratic probing is really just adding consecutive squares to the calculated position instead of the linear 1, 2, 3.
There's a good resource here; the following is taken from there. This is the simplest form of quadratic probing, when the simple polynomial c(i) = i^2 is used:

h(k, i) = (h(k) + i^2) (mod m)

In the more general case the formula is:

h(k, i) = (h(k) + c1*i + c2*i^2) (mod m)

and you can pick your constants c1 and c2.
Keep in mind, however, that quadratic probing is useful only in certain cases. As the Wikipedia entry states:

"Quadratic probing provides good memory caching because it preserves some locality of reference; however, linear probing has greater locality and, thus, better cache performance. Quadratic probing better avoids the clustering problem that can occur with linear probing, although it is not immune."
EDIT: Like many things in computer science, the exact constants and polynomials of quadratic probing are heuristic. Yes, the simplest form is i^2, but you may choose any other polynomial. Wikipedia gives the example with h(k,i) = (h(k) + i + i^2)(mod m).
Therefore, it is difficult to answer your "why" question. The only "why" here is why do you need quadratic probing at all? Having problems with other forms of probing and getting a clustered table? Or is it just a homework assignment, or self-learning?
Keep in mind that by far the most common collision resolution technique for hash tables is either chaining or linear probing. Quadratic probing is a heuristic option available for special cases, and unless you know what you're doing very well, I wouldn't recommend using it.