A number's power between 0 and 1 in C

I'm making a program to replace math.h's pow() function.
I'm not using any functions from math.h.
The problem is, I can calculate powers with integer exponents like
15^-2
45.32^11
but I can't calculate
x^2.132
My program first finds the integer power of x (x^2) and multiplies it by x^0.132.
I know that x^0.132 is the 1000th root of x to the power 132, but I can't solve it.
How can I find x^y (0 < y < 1)?

To compute x^y, 0 < y < 1:
Approximate y as a rational fraction, a/b.
(Easiest way: pick whatever b you want as a constant, large enough to give sufficient accuracy.
Then use: a = b * y.)
Approximate the b-th root of x using any method you like, such as Newton's.
(Simplest way: you know the root is between 0 and max(1, x), and you can easily tell whether a given value is too low or too high. So keep a min that starts at zero and a max that starts at max(1, x). Repeatedly try (min + max) / 2, see if it's too big or too small, and adjust min or max appropriately. Repeat until min and max are nearly the same.)
Raise that root to the a-th power, as in the sketch below.
(Possibly by repeatedly multiplying it by itself. Optimize this if you like. For example, r^4 can be computed with just two multiplications, one to find r^2 and then one to square it. This generalizes easily.)
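A minimal C sketch of this recipe, assuming x > 0 and moderate magnitudes; the names int_pow, nth_root, pow01 and the constant b = 1000 are my choices, not from the answer:

#include <stdio.h>

/* r^a by repeated squaring: the "two multiplications for r^4" idea */
static double int_pow(double r, unsigned a) {
    double result = 1.0;
    while (a > 0) {
        if (a & 1u) result *= r;  /* fold in the current bit of a */
        r *= r;                   /* r, r^2, r^4, r^8, ... */
        a >>= 1;
    }
    return result;
}

/* b-th root of x by bisection on [0, max(1, x)] */
static double nth_root(double x, unsigned b) {
    double lo = 0.0, hi = x > 1.0 ? x : 1.0, mid = 0.0;
    while (hi - lo > 1e-15 * hi) {            /* until min and max nearly agree */
        mid = 0.5 * (lo + hi);
        if (int_pow(mid, b) < x) lo = mid; else hi = mid;
    }
    return mid;
}

/* x^y for 0 < y < 1, via the rational approximation y ~ a/b */
double pow01(double x, double y) {
    const unsigned b = 1000;                  /* fixed denominator, chosen constant */
    unsigned a = (unsigned)(b * y + 0.5);     /* a = b * y, rounded */
    return int_pow(nth_root(x, b), a);
}

int main(void) {
    printf("%f\n", pow01(2.0, 0.132));        /* ~ 1.0958 */
    return 0;
}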

Use the factorization inherent in floating point formats to split x = 2^e * m with 1 <= m < 2, creating the sub-problems 2^(e*y) and m^y.
Use square roots: x^y = sqrt(x)^(2*y), and if there is an integer part in 2*y, split that off.
Use the binomial theorem for x close to 1, which will occur when iterating the square root:
(1+h)^y = 1 + y*h + (y*(y-1))/2*h^2 + ... + binom(y,j)*h^j + ...
where the quotient from one term to the next is (y-j)/(j+1)*h.
double h, term, sum;
int j;

h = x - 1;
term = y * h;              /* first series term, binom(y,1)*h */
sum = 1 + term;
j = 1;
while (1 + term != 1) {    /* stop once terms no longer affect the sum */
    term *= h * (y - j) / (1 + j);
    sum += term;
    j += 1;
}
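Putting the last two ideas together, here is a hedged C sketch (my own assembly of the pieces above, skipping the exponent/mantissa split): drive x toward 1 with square roots, splitting off integer parts of the doubled exponent, then finish with the binomial series. The helpers my_sqrt and pow_frac are assumed names, and Newton's method stands in for sqrt so math.h is still not needed:

#include <stdio.h>

/* Newton iteration for sqrt, avoiding math.h (assumed helper) */
static double my_sqrt(double x) {
    double g = x > 1.0 ? x : 1.0, prev = 0.0;
    for (int i = 0; i < 100 && g != prev; i++) {
        prev = g;
        g = 0.5 * (g + x / g);
    }
    return g;
}

/* x^y for x > 0, 0 < y < 1 */
double pow_frac(double x, double y) {
    double result = 1.0;
    /* x^y = sqrt(x)^(2*y): iterate until x is close to 1,
       splitting off the integer part of the exponent each time */
    while (x > 1.0001 || x < 0.9999) {
        x = my_sqrt(x);
        y *= 2.0;
        if (y >= 1.0) {      /* integer part in 2*y: split it off */
            result *= x;
            y -= 1.0;
        }
    }
    /* binomial series for x^y with h = x - 1 small */
    double h = x - 1.0, term = y * h, sum = 1.0 + term, j = 1.0;
    while (1.0 + term != 1.0) {
        term *= h * (y - j) / (1.0 + j);
        sum += term;
        j += 1.0;
    }
    return result * sum;
}

int main(void) {
    printf("%f\n", pow_frac(2.0, 0.132));    /* ~ 1.0958 */
    return 0;
}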

Related

While loop time complexity O(logn)

I cannot understand why the time complexity for this code is O(logn):
double n;
/* ... */
while (n > 1) {
    n *= 0.999;
}
At least it says so in my study materials.
Imagine if the code were as follows:
double n;
/* ... */
while (n > 1) {
    n *= 0.5;
}
It should be intuitive to see how this is O(logn), I hope.
When you multiply by 0.999 instead, it becomes slower by a constant factor, but of course the complexity is still written as O(logn).
You want to calculate how many iterations you need before n becomes equal to (or less than) 1.
If you call the number of iterations k, you want to solve
n * 0.999^k = 1
It goes like this:
n * 0.999^k = 1
0.999^k = 1/n
log(0.999^k) = log(1/n)
k * log(0.999) = -log(n)
k = -log(n)/log(0.999)
k = (-1/log(0.999)) * log(n)
For big-O we only care about "the big picture" so we throw away constants. Here log(0.999) is a negative constant so (-1/log(0.999)) is a positive constant that we can "throw away", i.e. set to 1. So we get:
k ~ log(n)
So the code is O(logn).
From this you can also notice that the value of the constant (i.e. 0.999 in your example) doesn't matter for the big-O calculation. All constant values greater than 0 and less than 1 will result in O(logn).
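To see the formula in action, here is a small C check (my own illustration, not from the answer) that counts iterations empirically and compares against -log(n)/log(0.999):

#include <stdio.h>
#include <math.h>

int main(void) {
    double inputs[] = { 1e3, 1e6, 1e9 };
    for (int i = 0; i < 3; i++) {
        double n = inputs[i];
        long k = 0;
        while (n > 1) {        /* the loop in question */
            n *= 0.999;
            k++;
        }
        printf("n = %g: k = %ld, predicted %.1f\n",
               inputs[i], k, -log(inputs[i]) / log(0.999));
    }
    return 0;
}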
A logarithm has two inputs: a base and a number. The result of the logarithm is the power you need to raise the base to in order to reach that number. Here the base is 0.999 and the scalar is n: the number of steps is the power you need to raise the base to in order to reach a value so small that, multiplied by n, it yields something smaller than 1. This corresponds to the definition of the logarithm, with which I started my answer.
EDIT:
Think about it this way: You have n as an input and you search for the first k where
n * .999^k < 1
You search for k by incrementing it one step at a time: if the value is l at some step, it becomes l * .999 at the next. The number of repetitions needed for the value to drop below 1 grows logarithmically with n, which gives the multiplication loop its logarithmic complexity.

Finding whether an interval contains at least one integer without math.h

For a class project I need to split some audio clips into smaller sections, for which we are given a min length and a max length. To figure out whether this is possible, I do the following:
a = length/max
b = length/min
Mathematically I figured that [a,b] contains at least one integer if ⌊b⌋ >= ⌈a⌉, but I can't use math.h for floor() and ceil(). Since a and b are always positive, I can use type casting for floor(), but I am at a loss as to how to do ceil(). I thought about using ((int)x)+1, but that would round exact integers up, which would break the formula.
I would like either a way to do ceil() which would solve my problem, or another way to check whether an interval contains at least one integer.
You don't need math.h to perform floor. Please look at the following code:
int length = 5, min = 2, max = 3; // only an example of inputs
int a = length / max;
int b = length / min;
if (a != b) {
    // there is at least one integer in the interval
} else {
    if (length % min == 0 || length % max == 0) {
        // there is at least one integer in the interval
    } else {
        // there is no integer in the interval
    }
}
The result for the above example will be that there is an integer in the interval.
You can also perform ceil without using math.h, as follows:
int a;
if (length % max == 0) {
    a = length / max;
} else {
    a = (length / max) + 1;
}
If I understood your question right, I guess you can do ceil(a) in this case and then check that the result is not greater than b. For example, for the interval [1.3, 3.5], ceil(1.3) returns 2, which fits into this interval.
UPD
Also, you could look at (b - a): if it's > 1, there is for sure at least one integer between them.
There is a general trick in programming that will come in handy if you ever find yourself programming Apple Basic, or any other language where floating point math is supported.
You can "round" a number by addition, then truncation, as follows:
x = some floating value
rounded_x = int(x + roundoff_amount)
Where roundoff_amount is 1 minus the lowest fraction that should round up.
So, to round at .5, your round_off would be 1 - .5 = .5, and you would do int(x + .5). If x is .5 or .51 then the result becomes 1.0 or 1.01 and int() takes that to 1. Obviously, if x is higher, then you still get rounded to 1, until x becomes 1.5 when rounding takes it to 2. To round upwards starting at .6, your roundoff amount would be 1 - .6 = .4, and you would do int(x + .4), etc.
You can do a similar thing to get ceil behavior. Set your roundoff_amount to be 0.99999... and do the round. You can choose your value to provide a "nearby" window, since floats have some inaccuracy inherent that might prevent getting a perfectly integer value after adding fractions.
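A small C sketch putting the trick to work on the interval question above (the constant 0.9999999999 and the helper name ceil_pos are my own choices):

#include <stdio.h>

/* ceil for positive x via add-then-truncate; the offset sits just
   below 1 so exact integers are not rounded up */
static long ceil_pos(double x) {
    return (long)(x + 0.9999999999);
}

int main(void) {
    double length = 7.0, min = 2.0, max = 3.0;
    double a = length / max;    /* 2.33... */
    double b = length / min;    /* 3.5 */
    /* [a, b] contains an integer iff floor(b) >= ceil(a) */
    if ((long)b >= ceil_pos(a))
        printf("splittable: [%g, %g] contains an integer\n", a, b);
    return 0;
}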

Optimizing loop by using floating point value as loop counter

I need to execute a loop
while (X) do Y
N times, where N is a large number. While Y, the loop body, is rather quick, the test X takes up about 70% of the runtime.
I can calculate the number of loop iterations N beforehand so instead of using X as the condition, a simple For-Loop would be possible.
for (i=1 to N) do Y
However, N might exceed the maximum value an integer can store on the machine, so this is not an option. As an alternative, I proposed to use a floating point variable F instead. As N is large, it is likely that F cannot be exactly equal to N. Thus, I calculate F to be the largest floating point number smaller than N. This allows me to run
for (i=1 to F) do Y
while (X) do Y
Most of the iterations will not need to test X; only the last N-F do.
The question is: how would I implement a for-loop from 1 to F? Simply increasing a counter or decreasing F by 1 in each step would not work, as the numerical error would grow too large. My current solution is:
while (F > MAXINT) {
    for (i = 1 to MAXINT)
        do Y
    F -= MAXINT
}
while (X) do Y
Is there a better way to solve this problem?
What do you mean by numerical error? Floating point counting is exact within its precision.
Here are the maximum values up to which the following data types can represent every integer exactly:
uint32max = 4294967295
uint64max = 18446744073709551615
float32intmax = 16777216
float64intmax = 9007199254740992
Every integer from 0 to the max is exactly representable without numerical error.
As you can see, the largest count is available with uint64. Next comes float64, then uint32 and finally float32.
What happens when you increment uint32=4294967295 ? 0.
What happens when you increment float32=16777216 ? 16777216.
Which is the better behavior?
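A quick C illustration of the two behaviors (assuming a 32-bit unsigned int and IEEE 754 float):

#include <stdio.h>

int main(void) {
    unsigned u = 4294967295u;    /* UINT32_MAX */
    float    f = 16777216.0f;    /* 2^24 */
    printf("%u\n", u + 1u);      /* wraps around to 0 */
    printf("%.0f\n", f + 1.0f);  /* the 1 is absorbed: prints 16777216 */
    return 0;
}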
Have you thought about using a 2-dimensional loop? If N is the maximum count for a 1-dimensional loop, then N x N is the maximum for a 2-dimensional loop, and so on. So if your maximum count is less than MAXUINT x MAXUINT, decompose your number N such that:
N == M * MAXUINT + R;
where
M = N / MAXUINT;
R = N - M * MAXUINT;
then
for (i = M; i--;)
    for (j = MAXUINT; j--;)
        DoStuff();
for (i = R; i--;)
    DoStuff();
If MAXUINT*MAXUINT is not a large enough count for you, you can add 3-, 4-, ... -dimensional loops.
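A compilable C sketch of that decomposition, assuming N fits in a uint64_t here (more dimensions follow the same pattern); DoStuff and repeat_n are placeholder names:

#include <stdint.h>
#include <limits.h>

void DoStuff(void);   /* placeholder for the loop body Y */

/* run DoStuff exactly n times using the M * MAXUINT + R decomposition */
void repeat_n(uint64_t n) {
    uint64_t m = n / UINT_MAX;
    uint64_t r = n % UINT_MAX;
    for (uint64_t i = m; i--;)
        for (unsigned j = UINT_MAX; j--;)
            DoStuff();
    for (uint64_t i = r; i--;)
        DoStuff();
}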

accuracy of sqrt of integers

I have a loop like this:
for (uint64_t i = 0; i*i < n; i++) {
This requires doing a multiplication every iteration. If I could calculate the sqrt before the loop then I could avoid this.
unsigned cut = sqrt(n);
for (uint64_t i = 0; i < cut; i++) {
In my case it's okay if the sqrt function rounds up to the next integer but it's not okay if it rounds down.
My question is: is the sqrt function accurate enough to do this for all cases?
Edit: Let me list some cases. If n is a perfect square so that n = y^2 my question would be - is cut=sqrt(n)>=y for all n? If cut=y-1 then there is a problem. E.g. if n = 120 and cut = 10 it's okay but if n=121 (11^2) and cut is still 10 then it won't work.
My first concern was that the fraction of a float has only 23 bits and that of a double 52, so they can't store all the digits of some 32-bit or 64-bit integers. However, I don't think this is a problem. Let's assume we want the sqrt of some number y but we can't store all of its digits. If we let the part of y we can store be x, we can write y = x + dx; then we want to make sure that whatever dx we get does not move the square root past the next integer:
sqrt(x+dx) < sqrt(x) + 1   // solve for dx
dx < 2*sqrt(x) + 1
// e.g. for x = 100, dx < 21:
// sqrt(100+20) < sqrt(100) + 1
Float can store 23 bits, so we let y = 2^23 + 2^9. This is more than sufficient since 2^9 < 2*sqrt(2^23) + 1. It's easy to show the same for double with 64-bit integers. So although they can't store all the digits, as long as the sqrt of what they can store is accurate, the sqrt of the stored part should be sufficient. Now let's look at what happens for integers close to UINT_MAX and their square roots:
unsigned xi = -1-1;
printf("%u %u\n", xi, (unsigned)(float)xi); //4294967294 4294967295
printf("%u %u\n", (unsigned)sqrt(xi), (unsigned)sqrtf(xi)); //65535 65536
Since float can't store all the digits of 2^32-2 but double can, they get different results for the sqrt. But the float version of the sqrt is one integer larger. This is what I want. For 64-bit integers, as long as the sqrt of the double always rounds up, it's okay.
First, integer multiplication is really quite cheap. So long as you have more than a few cycles of work per loop iteration and one spare execute slot, it should be entirely hidden by reorder on most non-tiny processors.
If you did have a processor with dramatically slow integer multiply, a truly clever compiler might transform your loop to:
for (uint64_t i = 0, j = 0; j < n; j += 2*i+1, i++)
replacing the multiply with an lea or a shift and two adds.
Those notes aside, let’s look at your question as stated. No, you can’t just use i < sqrt(n). Counter-example: n = 0x20000000000000. Assuming adherence to IEEE-754, you will have cut = 0x5a82799, and cut*cut is 0x1ffffff8eff971.
However, a basic floating-point error analysis shows that the error in computing sqrt(n) (before conversion to integer) is bounded by 3/4 of an ULP. So you can safely use:
uint32_t cut = sqrt(n) + 1;
and you’ll perform at most one extra loop iteration, which is probably acceptable. If you want to be totally precise, instead use:
uint32_t cut = sqrt(n);
cut += (uint64_t)cut*cut < n;
Edit: z boson clarifies that for his purposes, this only matters when n is an exact square (otherwise, getting a value of cut that is “too small by one” is acceptable). In that case, there is no need for the adjustment and one can safely just use:
uint32_t cut = sqrt(n);
Why is this true? It’s pretty simple to see, actually. Converting n to double introduces a perturbation:
double_n = n*(1 + e)
which satisfies |e| < 2^-53. The mathematical square root of this value can be expanded as follows:
square_root(double_n) = square_root(n)*square_root(1+e)
Now, since n is assumed to be a perfect square with at most 64 bits, square_root(n) is an exact integer with at most 32 bits, and is the mathematically precise value that we hope to compute. To analyze the square_root(1+e) term, use a Taylor series about 1:
square_root(1+e) = 1 + e/2 + O(e^2)
= 1 + d with |d| <~ 2^-54
Thus, the mathematically exact value square_root(double_n) is less than half an ULP away from[1] the desired exact answer, and necessarily rounds to that value.
[1] I’m being fast and loose here in my abuse of relative error estimates, where the relative size of an ULP actually varies across a binade — I’m trying to give a bit of the flavor of the proof without getting too bogged down in details. This can all be made perfectly rigorous, it just gets to be a bit wordy for Stack Overflow.
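As an illustration (my own spot check, assuming IEEE 754 doubles and a correctly rounded sqrt), the perfect-square claim can be tested directly:

#include <stdio.h>
#include <math.h>
#include <stdint.h>

int main(void) {
    /* for perfect squares n = y*y, (uint32_t)sqrt(n) should recover y */
    for (uint64_t y = 4294967295u; y > 4294900000u; y--) {
        uint64_t n = y * y;
        if ((uint32_t)sqrt((double)n) != y)
            printf("mismatch at y = %llu\n", (unsigned long long)y);
    }
    puts("done");
    return 0;
}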
All of my answer is useless if you have access to IEEE 754 double precision floating point, since Stephen Canon demonstrated both:
a simple way to avoid imul in loop
a simple way to compute the ceiling sqrt
Otherwise, if for some reason you have a non-IEEE-754-compliant platform, or only single precision, you can get the integer part of the square root with a simple Newton-Raphson loop. For example, in Squeak Smalltalk we have this method in Integer:
sqrtFloor
    "Return the integer part of the square root of self"
    | guess delta |
    guess := 1 bitShift: (self highBit + 1) // 2.
    [ delta := (guess squared - self) // (guess + guess).
      delta = 0 ] whileFalse: [ guess := guess - delta ].
    ^guess - 1
Where // is the operator for the quotient of integer division.
A final guard guess*guess <= self ifTrue: [^guess]. can be avoided if the initial guess is fed in excess of the exact solution, as is the case here.
(Initializing with an approximate float sqrt was not an option there, because Smalltalk integers are arbitrarily large and might overflow the float range.)
But here, you could seed the initial guess with the floating point sqrt approximation, and my bet is that the exact solution will be found in very few loops. In C that would be:
uint32_t sqrtFloor(uint64_t n)
{
    int64_t diff;
    int64_t delta;
    uint64_t guess = sqrt(n);   /* implicit conversions here... */
    while ((delta = (diff = guess*guess - n) / (guess + guess)) != 0)
        guess -= delta;
    return guess - (diff > 0);
}
That's a few integer multiplications and divisions, but outside the main loop.
What you are looking for is a way to calculate a rational upper bound on the square root of a natural number. Continued fractions are what you need; see Wikipedia.
For x > 0, there is
sqrt(x) = 1 + (x-1)/(1 + sqrt(x))
To make the notation more compact, rewrite the above formula as the continued fraction
sqrt(x) = 1 + (x-1)/(2 + (x-1)/(2 + (x-1)/(2 + ...)))
Truncating the continued fraction at successive recursion depths gives a sequence of approximations of sqrt(x):
r1 = 1 + (x-1)/2
r2 = 1 + (x-1)/(2 + (x-1)/2)
r3 = 1 + (x-1)/(2 + (x-1)/(2 + (x-1)/2))
...
Upper bounds appear at the odd depths, and they get tighter as the depth grows. When the distance between an upper bound and its neighboring lower bound is less than 1, that upper bound is what you need. Using that value as the value of cut (here cut must be a floating point number) solves the problem.
For very large numbers, rational arithmetic should be used, so that no precision is lost during the conversion between integers and floating point numbers.
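A C sketch of how such an upper bound could be evaluated (the depth handling and the name sqrt_upper are my own; odd depths give upper bounds):

#include <stdio.h>

/* depth-d truncation of the continued fraction above,
   evaluated from the inside out; odd d gives an upper bound */
double sqrt_upper(double x, int depth) {
    double t = 2.0;
    for (int i = 1; i < depth; i++)
        t = 2.0 + (x - 1.0) / t;
    return 1.0 + (x - 1.0) / t;
}

int main(void) {
    /* odd depths approach sqrt(121) = 11 from above */
    for (int d = 1; d <= 9; d += 2)
        printf("depth %d: %f\n", d, sqrt_upper(121.0, d));
    return 0;
}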

Permutation with Repetition: Avoiding Overflow

Background:
Given n balls such that:
'a' balls are of colour GREEN
'b' balls are of colour BLUE
'c' balls are of colour RED
...
(of course a + b + c + ... = n)
The number of permutations in which these balls can be arranged is given by:
perm = n! / (a! b! c! ...)
Question 1:
How can I 'elegantly' calculate perm so as to avoid integer overflow for as long as possible, and be sure that when I am done calculating, I either have the correct value of perm or I know that the final result will overflow?
Basically, I want to avoid using something like GNU GMP.
Optionally, Question 2:
Is this a really bad idea, and should I just go ahead and use GMP?
These are known as the multinomial coefficients, which I shall denote by m(a,b,...).
And you can efficiently calculate them avoiding overflow by exploiting this identity (which should be fairly simple to prove):
m(a,b,c,...) = m(a-1,b,c,...) + m(a,b-1,c,...) + m(a,b,c-1,...) + ...
m(0,0,0,...) = 1 // base case
m(anything negative) = 0 // base case 2
Then it's a simple matter of using recursion to calculate the coefficients. Note that to avoid an exponential running time, you need to either cache your results (to avoid recomputation) or use dynamic programming.
To check for overflow, just make sure the sums won't overflow.
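For example, a minimal C sketch for three colours, using plain recursion to keep it short (memoize or switch to a DP table for larger inputs, as noted above; the function name is mine):

#include <stdio.h>
#include <stdint.h>

/* m(a,b,c) = m(a-1,b,c) + m(a,b-1,c) + m(a,b,c-1), with
   m(0,0,0) = 1 and m(anything negative) = 0 */
uint64_t multinomial3(int a, int b, int c) {
    if (a < 0 || b < 0 || c < 0) return 0;
    if (a == 0 && b == 0 && c == 0) return 1;
    return multinomial3(a - 1, b, c)
         + multinomial3(a, b - 1, c)
         + multinomial3(a, b, c - 1);
}

int main(void) {
    /* 4 balls: 2 green, 1 blue, 1 red -> 4!/(2! 1! 1!) = 12 */
    printf("%llu\n", (unsigned long long)multinomial3(2, 1, 1));
    return 0;
}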
And yes, it's a very bad idea to use arbitrary precision libraries to do this simple task.
If you have globs of CPU time, you can make lists of all the factorials, find the prime factorization of every number in those lists, and then cancel the factors on the top against those on the bottom until both are completely reduced.
The most overflow-safe way is the one Dave suggested. You find the exponent with which the prime p divides n! by the summation
m = n;
e = 0;
do {
    m /= p;
    e += m;
} while (m > 0);
Subtract the exponents of p in the factorisations of a! etc. Do that for all primes <= n, and you have the factorisation of the multinomial coefficient. That calculation overflows if and only if the final result overflows. But the multinomial coefficients grow rather fast, so you will have overflow already for fairly small n. For substantial calculations, you will need a bignum library (if you don't need exact results, you can get by a bit longer using doubles).
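In C, that summation and its use might look like this (a sketch; the helper name legendre is mine, after Legendre's formula):

#include <stdio.h>

/* exponent of the prime p in n! */
long legendre(long n, long p) {
    long e = 0;
    while (n > 0) {
        n /= p;
        e += n;
    }
    return e;
}

int main(void) {
    /* exponent of 2 in 10!/(4! 3! 3!) = 4200 = 2^3 * 525 */
    long e = legendre(10, 2) - legendre(4, 2)
           - legendre(3, 2) - legendre(3, 2);
    printf("exponent of 2: %ld\n", e);   /* prints 3 */
    return 0;
}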
Even if you use a bignum library, it is worthwhile to keep intermediate results from getting too large, so instead of calculating the factorials and dividing huge numbers, it is better to calculate the parts in sequence,
n!/(a! * b! * c! * ...) = n! / (a! * (n-a)!) * (n-a)! / (b! * (n-a-b)!) * ...
and to compute each of these factors, let's take the second for illustration,
(n-a)! / (b! * (n-a-b)!) = \prod_{i = 1}^b (n-a+1-i)/i
is calculated with
prod = 1
for i = 1 to b
    prod = prod * (n-a+1-i)
    prod = prod / i
Finally multiply the parts. This requires n divisions and n + number_of_parts - 1 multiplications, keeping the intermediate results moderately small.
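In C, with uint64_t arithmetic, the inner product and its use might look like this (a sketch; the names are mine, and for large inputs the intermediate prod * (n+1-i) can itself overflow before the division):

#include <stdio.h>
#include <stdint.h>

/* C(n, k) by alternating multiplication and division; every
   intermediate value of prod is itself a binomial coefficient */
uint64_t binom(uint64_t n, uint64_t k) {
    uint64_t prod = 1;
    for (uint64_t i = 1; i <= k; i++) {
        prod = prod * (n + 1 - i);   /* exact division follows */
        prod = prod / i;
    }
    return prod;
}

int main(void) {
    /* 10!/(4! 3! 3!) = C(10,4) * C(6,3) = 210 * 20 = 4200 */
    printf("%llu\n", (unsigned long long)(binom(10, 4) * binom(6, 3)));
    return 0;
}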
According to this link, you can calculate multinomial coefficients as a product of several binomial coefficients, controlling integer overflow on the way.
This reduces original problem to overflow-controlled computation of a binomial coefficient.
Notation: n! = prod(1,n), where prod(a,b) denotes the product a * (a+1) * ... * b.
It's very easy, but first you must know that for any two positive integers i, n > 0, the following expression is a positive integer:
prod(i,i+n-1)/prod(1,n)
Thus the idea is to slice the computation of n! into small chunks and to divide as soon as possible, first with a!, then with b!, and so on:
perm = (a!/a!) * (prod(a+1, a+b)/b!) * ... * (prod(a+b+c+...y+1,n)/z!)
Each of these factors is an integer, so if perm will not overflow, neither one of its factors will do.
However, in the calculation of such a factor there could be an overflow in the numerator or denominator. That's avoidable by alternating a multiplication in the numerator with a division:
prod(a+1, a+b)/b! = (a+1) * (a+2)/2 * (a+3)/3 * ... * (a+b)/b
In that way every division will yield an integer.
