Time complexity of a function in Big-O notation (C)

I'm trying to find the time complexity of this function:
int bin_search(int a[], int n, int x); // binary search on an array of size n

int f(int a[], int n) {
    int i = 1, x = 1;
    while (i < n) {
        if (bin_search(a, i, x) >= 0) {
            return x;
        }
        i *= 2;
        x *= 2;
    }
    return 0;
}
The answer is (log n)^2. How come?
The best I could get is log n. First, i is 1, so the while loop runs log n times. On the first iteration, when i = 1, the binary search takes only one step because the array's size is i = 1. Then, when i = 2, two steps, and so on until it takes log n steps at the end.
So the formula I thought would fit is log(1) + log(2) + log(4) + ... + log(n): the summation is over the while loop's iterations, and the inner term is the cost of the binary search, because for i = 1 it's log(1), for i = 2 it's log(2), and so on until it's log(n) at the last iteration.
Where am I wrong?

Each iteration performs a binary search on the first i elements of the array, where i takes the values 1, 2, 4, ..., 2^m.
You can compute the number of operations (comparisons):
log2(1) + log2(2) + log2(4) + ... + log2(2^m)
Since log2(2^k) equals k, this series simplifies to:
0 + 1 + 2 + ... + m
Where m is floor(log2(n)).
The series evaluates to m * (m + 1) / 2; substituting for m we get
floor(log2(n)) * (floor(log2(n)) + 1) / 2
-> 0.5 * floor(log2(n))^2 + 0.5 * floor(log2(n))
The first term dominates the second, hence the complexity is O(log(n)^2).
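To see the quadratic growth in log n concretely, here is a small counting sketch (my own illustration, not from the answer; it assumes the worst case, where bin_search never finds x, so every search runs to completion):

#include <stdio.h>
#include <math.h>

/* Worst-case comparisons of a binary search on i elements:
   roughly floor(log2(i)) + 1 halving steps. */
static long bsearch_steps(long i) {
    long steps = 0;
    while (i > 0) { steps++; i /= 2; }
    return steps;
}

int main(void) {
    for (long n = 16; n <= (1L << 24); n *= 16) {
        long total = 0;
        for (long i = 1; i < n; i *= 2)   /* the while loop in f() */
            total += bsearch_steps(i);    /* cost of bin_search(a, i, x) */
        double lg = log2((double) n);
        printf("n = %10ld  total = %7ld  0.5*(log2 n)^2 = %8.1f\n",
               n, total, 0.5 * lg * lg);
    }
    return 0;
}

The totals track 0.5 * (log2 n)^2 closely, matching the closed form derived above.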

Related

The time complexity answer to the question confuses me - n^(2/3)

I'm trying to figure out why the time complexity of this code is n^(2/3). The space complexity is log n, but I don't know how to continue the time complexity calculation (or whether what I have is right).
int g2 (int n, int m)
{
    if (m >= n)
    {
        for (int i = 0; i < n; ++i)
            printf("#");
        return 1;
    }
    return 1 + g2 (n / 2, 4 * m);
}

int main (int n)
{
    return g2 (n, 1);
}
As long as m < n, you perform an O(1) operation: making a recursive call. You halve n and quadruple m, so after k steps, you get
n(k) = n(0) * 0.5^k
m(k) = m(0) * 4^k
You can set them equal to each other to find that
n(0) / m(0) = 8^k
Taking the log
log(n(0)) - log(m(0)) = k log(8)
or
k = log_8(n(0)) - log_8(m(0))
On the kth recursion you perform n(k) loop iterations.
You can plug k back into n(k) = n(0) * 0.5^k to estimate the number of iterations. Let's ignore m(0) for now:
n(k) = n(0) * 0.5^log_8(n(0))
Taking again the log of both sides,
log_8(n(k)) = log_8(n(0)) + log_8(0.5) * log_8(n(0))
Since log_8(0.5) = -1/3, you get
log_8(n(k)) = log_8(n(0)) * (2/3)
Taking the exponent again:
n(k) = n(0)^(2/3)
Since any positive exponent will overwhelm the O(log(n)) recursion, your final complexity is indeed O(n^(2/3)).
Let's look for a moment at what happens if m(0) > 1.
n(k) = n(0) * 0.5^(log_8(n(0)) - log_8(m(0)))
Again taking the log:
log_8(n(k)) = log_8(n(0)) - 1/3 * (log_8(n(0)) - log_8(m(0)))
log_8(n(k)) = log_8(n(0)^(2/3)) + log_8(m(0)^(1/3))
So you get
n(k) = n(0)^(2/3) * m(0)^(1/3)
Or
n(k) = (m n^2)^(1/3)
Quick note on corner cases in the starting conditions:
For m > 0: if n <= 0, then m >= n is immediately true, the recursion terminates, and the loop runs zero times.
For m < 0: if m >= n, the recursion terminates immediately and the loop runs zero times. If n > m, n converges to zero while m diverges toward negative infinity, so the algorithm runs forever.
The only interesting case is m == 0 (which stays 0 forever, since 4 * 0 = 0). Regardless of whether n is positive or negative, it will reach zero because of integer truncation, so the complexity depends on when it reaches 1:
n(0) * 0.5^k = 1
log_2(n(0)) - k = 0
So in this case, the runtime of the recursion is still O(log(n)). The loop does not run.
m starts at 1, and at each step n -> n/2 and m -> m*4 until m > n. After k steps, n_final = n/2^k and m_final = 4^k. So the final value of k is where n/2^k = 4^k, i.e. 8^k = n, or k = log_8(n).
When this is reached, the inner loop performs n_final (approximately equal to m_final) steps, leading to a complexity of O(4^k) = O(4^(log_8 n)) = O(n^(log_8 4)) = O(n^(2/3)).
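As a quick empirical check (my own sketch, separate from both answers), we can count only the final loop's iterations and compare them with n^(2/3):

#include <stdio.h>
#include <math.h>

/* g2 instrumented to count the printf loop's iterations instead of printing. */
static long g2_count(long n, long m) {
    if (m >= n)
        return n > 0 ? n : 0;   /* the for loop runs n times (when n > 0) */
    return g2_count(n / 2, 4 * m);
}

int main(void) {
    for (long n = 1000; n <= 1000000000L; n *= 10)
        printf("n = %10ld  loop iterations = %9ld  n^(2/3) = %10.0f\n",
               n, g2_count(n, 1), pow((double) n, 2.0 / 3.0));
    return 0;
}

The iteration counts stay within a small constant factor of n^(2/3), as the derivation predicts.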

When is the sum of the series 1 + 1/2 + 1/3 + 1/4 + ... + 1/n equal to log n, and when is the (seemingly) same sum equal to n, i.e. 1 + 1/2 + 1/3 + 1/4 + ... + 1/n = n?

I'm new to understanding asymptotic analysis, while trying to find the big O notation, in a few problems it is given as log n for the same simplification of series and n for another problem.
Here are the questions:
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}
T(n)=O(n)
int fun2(int n)
{
    int count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 1; j <= n; j += i)
            count++;
    return count;
}
T(n)=O(n log n)
I'm really confused. Why are the complexities of these seemingly similar algorithms different?
The series formed in the two cases are different.
Time Complexity Analysis
In the first case, i starts at n and the j loop runs i times; then i becomes n/2 and the j loop runs n/2 times, and so on. So the total number of iterations is
= n + n/2 + n/4 + n/8.......
This sum is at most 2n - 1 (exactly 2n - 1 when n is a power of 2), hence the time complexity is O(n).
In the second case, when i is 1, the j loop runs n times; when i is 2, j advances two at a time, which means about n/2 iterations; and so on. So the total number of iterations is
= n + n/2 + n/3 + n/4........
= n (1 + 1/2 + 1/3 + 1/4 +....)
= O(nlogn)
The sum 1 + 1/2 + 1/3 + ... + 1/n is O(log n); it is the harmonic series, whose partial sums grow like ln n (see the discussion below).
For the former, the inner loop runs approximately (exactly if n is a power of 2)
n + n/2 + n/4 + n/8 + ... + n/2^log2(n)
times. It can be factored into
n * (1 + 1/2 + 1/4 + 1/8 + ... + (1/2)^(log2 n))
The 2nd factor is (a partial sum of) the geometric series, which converges: as we add more terms, the partial sums approach a constant (here, 2). Therefore it is Θ(1); multiplying it by n gives Θ(n).
I made an analysis of the latter algorithm just a couple of days ago. The number of iterations for n in that algorithm is
ceil(n/1) + ceil(n/2) + ceil(n/3) + ... + ceil(n/n)
which is quite close to a partial sum of the harmonic series multiplied by n:
n * (1 + 1/2 + 1/3 + 1/4 + ... 1/n)
Unlike the geometric series, the harmonic series does not converge; it diverges as we add more terms. The partial sum of the first n terms is ln n + O(1), hence the time complexity of the entire algorithm is Θ(n log n).
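To make the contrast concrete, here is a small sketch of my own that runs both functions and compares the returned counts against 2n and n ln n:

#include <stdio.h>
#include <math.h>

/* fun and fun2 exactly as above */
int fun(int n) {
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}

int fun2(int n) {
    int count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 1; j <= n; j += i)
            count++;
    return count;
}

int main(void) {
    for (int n = 1000; n <= 1000000; n *= 10)
        printf("n = %7d  fun = %8d (2n = %d)  fun2 = %9d (n ln n = %.0f)\n",
               n, fun(n), 2 * n, fun2(n), n * log((double) n));
    return 0;
}

fun stays near 2n while fun2 tracks n ln n, which is exactly the geometric-versus-harmonic difference described above.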

Writing a function that calculates the sum of squares within a range in one line in C

My try
double sum_squares_from(double x, double n){
    return n <= 0 ? 0 : x*x + sum_squares_from((x+n-1)*(x+n-1), n-1);
}
Instead of using loops my professor wants us to write functions like this...
What the exercise asks for is a function sum_squares_from(), where double x is the starting number and n is how many numbers to sum. For example, with x = 2 and n = 4 you get 2*2 + 3*3 + 4*4 + 5*5. It returns zero if n == 0.
My thinking was that, in my example, what I have is basically x*x + (x+1)(x+1) + (x+1+1)(x+1+1) + (x+1+1+1)(x+1+1+1) = (x+0)(x+0) + (x+1)(x+1) + (x+2)(x+2) + (x+3)(x+3), i.e. (x+n-1)^2 where n gets decremented by one each time until it becomes zero, and then you sum everything.
Did I do it right?
(if my professor seems a bit demanding... he somehow does this sort of thing all in his head without auxiliary calculations. Scary guy)
It's not recursive, but it's one line:
int
sum_squares(int x, int n) {
    return ((x + n - 1) * (x + n) * (2 * (x + n - 1) + 1) / 6)
         - ((x - 1) * x * (2 * (x - 1) + 1) / 6);
}
The sum of squares of the integers 1 .. N has the closed form N(N+1)(2N+1)/6. This code calculates the sum of squares from 1 .. (x+n-1) and then subtracts the sum of squares from 1 .. (x-1). For example, sum_squares(2, 4) = 55 - 1 = 54 = 2^2 + 3^2 + 4^2 + 5^2.
So,
Σ(i = 0..n) i = n(n+1)/2
Σ(i = 0..n) i^2 = n(n+1)(2n+1)/6
We note that
Σ(i = 0..n) (x+i)^2
= Σ(i = 0..n) (x^2 + 2xi + i^2)
= (n+1)x^2 + 2x * Σ(i = 0..n) i + Σ(i = 0..n) i^2
= (n+1)x^2 + n(n+1)x + n(n+1)(2n+1)/6
Thus, your sum has the closed form:
double sum_squares_from(double x, int n) {
return ((n-- > 0)
? (n + 1) * x * x
+ x * n * (n + 1)
+ n * (n + 1) * (2 * n + 1) / 6.
: 0);
}
If I apply some obfuscation, the one-line version becomes:
double sum_squares_from(double x, int n) {
return (n-->0)?(n+1)*(x*x+x*n+n*(2*n+1)/6.):0;
}
If the task is to implement the summation as you would with a loop, use tail recursion: a tail call can be mechanically replaced with a loop, and many compilers implement this optimization.
static double sum_squares_from_loop(double x, int n, double s) {
    return (n <= 0) ? s : sum_squares_from_loop(x + 1, n - 1, s + x * x);
}

double sum_squares_from(double x, int n) {
    return sum_squares_from_loop(x, n, 0);
}
As an illustration, if you observe the generated assembly in GCC at a sufficient optimization level (-Os, -O2, or -O3), you will notice that the recursive call is eliminated (and sum_squares_from_loop is inlined to boot).
As mentioned in my original comment, n should not be of type double; it should be of type int, to avoid floating-point comparison problems with n <= 0. Making that change and simplifying the multiplication and the recursive call, you get:
double sum_squares_from(double x, int n)
{
    return n <= 0 ? 0 : x * x + sum_squares_from (x + 1, n - 1);
}
If you think about starting with x * x and increasing x by 1, n times, then the simple x * x + sum_squares_from (x + 1, n - 1) is quite easy to understand.
Maybe this?
double sum_squares_from(double x, double n) {
    return n <= 0 ? 0 : (x + n - 1) * (x + n - 1) + sum_squares_from(x, n - 1);
}
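For a quick sanity check (my own addition), here is a minimal test of the question's example using the tail-recursive version above; for x = 2 and n = 4 the expected result is 4 + 9 + 16 + 25 = 54:

#include <stdio.h>

static double sum_squares_from_loop(double x, int n, double s) {
    return (n <= 0) ? s : sum_squares_from_loop(x + 1, n - 1, s + x * x);
}

double sum_squares_from(double x, int n) {
    return sum_squares_from_loop(x, n, 0);
}

int main(void) {
    printf("%.0f\n", sum_squares_from(2, 4));  /* prints 54 */
    printf("%.0f\n", sum_squares_from(2, 0));  /* prints 0 */
    return 0;
}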

How to write iterative version of computing m^n whose time complexity is O(log n)?

If I have two integers m and n and I have to compute power(m, n), i.e. m^n, I can write the recursive code below, which runs in O(log n) time; but I am unable to write an iterative version with the same time complexity.
int power(int m, int n) {
    int p;
    if (n == 1)
        return m;
    p = power(m, n / 2);
    if (n % 2 == 1)
        return p * p * m;
    else
        return p * p;
}
If the least significant bit of n is 1, m is multiplied into the result.
If the second least significant bit of n is 1, m*m is multiplied into the result.
If the third least significant bit of n is 1, (m*m)*(m*m) is multiplied into the result.
Repeating this for every bit, the code looks like this:
int power(int m, int n)
{
    int res = 1;
    while (n > 0)               /* visit every bit of n that is 1 */
    {
        if (n % 2 == 1)         /* check the current bit */
            res *= m;
        m *= m;                 /* the factor to multiply in if the next bit is 1 */
        n >>= 1;                /* proceed to the next bit */
    }
    return res;
}
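As an illustration (my own example): 13 is 1101 in binary, so power(3, 13) multiplies res by 3^1, 3^4, and 3^8, giving 3^13 = 1594323 in four loop passes instead of thirteen multiplications:

#include <stdio.h>

int power(int m, int n)
{
    int res = 1;
    while (n > 0)
    {
        if (n % 2 == 1) res *= m;
        m *= m;
        n >>= 1;
    }
    return res;
}

int main(void)
{
    printf("%d\n", power(3, 13));  /* 1594323 = 3^1 * 3^4 * 3^8 */
    printf("%d\n", power(2, 10));  /* 1024 */
    return 0;
}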

Sum of the sums of divisors of all numbers less than or equal to N

I really need some help with this problem:
Given a positive integer N, we define xsum(N) as the sum of the divisor sums of all positive integers less than or equal to N.
For example: xsum(6) = 1 + (1 + 2) + (1 + 3) + (1 + 2 + 4) + (1 + 5) + (1 + 2 + 3 + 6) = 33.
(xsum = sum of divisors of 1 + sum of divisors of 2 + ... + sum of divisors of 6)
Given a positive integer K, you are asked to find the lowest N that satisfies the condition: xsum(N) >= K.
K is a nonzero natural number with at most 14 digits.
Time limit: 0.2 s.
Obviously, brute force will fail on most cases with Time Limit Exceeded. I haven't found anything better yet, so here is the code:
long long k, sum, i, d;
fscanf(fi, "%lld", &k);   /* fi is the input file, opened elsewhere */
i = 2;
sum = 1;
while (sum < k) {
    sum = sum + i + 1;    /* 1 and i itself always divide i */
    d = 2;
    while (d * d <= i) {
        if (i % d == 0 && d * d != i)
            sum = sum + d + i / d;
        else if (d * d == i)
            sum += d;
        d++;
    }
    i++;
}
Any better ideas?
For each number n in the range [1, N], the following holds: n divides exactly floor(N / n) of the numbers in [1, N]. Thus each n contributes n * floor(N / n) to the result. For example, xsum(6) = 1*6 + 2*3 + 3*2 + 4*1 + 5*1 + 6*1 = 33, matching the example above.
long long xsum(long long N){  /* long long, since K can have up to 14 digits */
    long long result = 0;
    for (long long i = 1; i <= N; i++)
        result += (N / i) * i;  /* due to the integer division the two i's don't cancel out */
    return result;
}
The idea behind this algorithm can also be used to solve the main problem (the smallest N such that xsum(N) >= K) faster than a brute-force search.
The search can be bounded using a rule derived from the code above: each term (N / i) * i is at most N, so xsum(N) <= N^2, and therefore any N with xsum(N) >= K satisfies N >= sqrt(K). This gives a lower bound to start the search from.
The next step is to find an upper bound. Since the growth of xsum(N) is (approximately) quadratic, we can use this to extrapolate toward N. This guided guessing finds the searched value pretty fast.
long long smallest_n(long long K){
    /* start with the lower bound sqrt(K) */
    long long upperN = (long long) sqrt((double) K);
    long long lowerN = upperN;
    long long tmpSum;
    /* grow the guess until xsum(upperN) reaches K */
    while ((tmpSum = xsum(upperN)) < K){
        long long r = K - tmpSum;
        lowerN = upperN;
        upperN += (long long) sqrt((double) (r / 3)) + 1;
    }
    /* Now we have an upper and a lower bound for N;
       the rest of the search can be done using binary search
       (not implemented here) */
    long long N;  /* search for the value */
    return N;
}
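To fill in the part left unimplemented above, here is a minimal sketch of my own that completes the search with a lower-bound binary search (smallest_n and the step heuristic follow the code above; note that each xsum call is O(N), which may still be tight for a 0.2 s limit at 14-digit K):

#include <math.h>
#include <stdio.h>

long long xsum(long long N){
    long long result = 0;
    for (long long i = 1; i <= N; i++)
        result += (N / i) * i;   /* integer division */
    return result;
}

long long smallest_n(long long K){
    long long lowerN = (long long) sqrt((double) K);  /* since xsum(N) <= N^2 */
    long long upperN = lowerN;
    long long tmpSum;
    while ((tmpSum = xsum(upperN)) < K){              /* find an upper bound */
        lowerN = upperN;
        upperN += (long long) sqrt((double) ((K - tmpSum) / 3)) + 1;
    }
    while (lowerN < upperN){                          /* smallest N with xsum(N) >= K */
        long long mid = lowerN + (upperN - lowerN) / 2;
        if (xsum(mid) >= K)
            upperN = mid;
        else
            lowerN = mid + 1;
    }
    return lowerN;
}

int main(void){
    printf("%lld\n", smallest_n(33));  /* prints 6, matching the example */
    return 0;
}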
