What is the time complexity of the following code?
int a = 0, i = N;
while (i > 0) {
    a += i;
    i /= 2;
}
Its complexity is O(log N).
On every iteration the value of i is halved, so it takes the values N, N/2, N/4, N/8, ..., 1. That is about log2(N) iterations, each doing O(1) work, hence O(log N).
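As a quick check (a sketch I'm adding, not part of the original answer), counting the iterations directly shows the logarithmic growth:

#include <stdio.h>

// Count how many times the halving loop runs for growing N.
// The count is floor(log2(N)) + 1, i.e. O(log N).
int main(void) {
    for (long N = 1; N <= 100000000; N *= 10) {
        long a = 0, i = N, steps = 0;
        while (i > 0) {
            a += i;
            i /= 2;
            steps++;
        }
        printf("N = %9ld  a = %10ld  iterations = %2ld\n", N, a, steps);
    }
    return 0;
}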
I'm trying to figure out why the time complexity of this code is n^(2/3). The space complexity is log n, but I don't know how to continue the time-complexity calculation (or whether it's right).
#include <stdio.h>

int g2 (int n, int m)
{
    if (m >= n)
    {
        for (int i = 0; i < n; ++i)
            printf("#");
        return 1;
    }
    return 1 + g2 (n / 2, 4 * m);
}

int main (int n)
{
    return g2 (n, 1);
}
As long as m < n, you perform an O(1) operation: making a recursive call. You halve n and quadruple m, so after k steps, you get
n(k) = n(0) * 0.5^k
m(k) = m(0) * 4^k
You can set them equal to each other to find that
n(0) / m(0) = 8^k
Taking the log
log(n(0)) - log(m(0)) = k log(8)
or
k = log_8(n(0)) - log_8(m(0))
On the kth recursion you perform n(k) loop iterations.
You can plug k back into n(k) = n(0) * 0.5^k to estimate the number of iterations. Let's ignore m(0) for now:
n(k) = n(0) * 0.5^log_8(n(0))
Taking again the log of both sides,
log_8(n(k)) = log_8(n(0)) + log_8(0.5) * log_8(n(0))
Since log_8(0.5) = -1/3, you get
log_8(n(k)) = log_8(n(0)) * (2/3)
Taking the exponent again:
n(k) = n(0)^(2/3)
Since any positive exponent will overwhelm the O(log(n)) recursion, your final complexity is indeed O(n^(2/3)).
Let's look for a moment at what happens if m(0) > 1.
n(k) = n(0) * 0.5^(log_8(n(0)) - log_8(m(0)))
Again taking the log:
log_8(n(k)) = log_8(n(0)) - 1/3 * (log_8(n(0)) - log_8(m(0)))
log_8(n(k)) = log_8(n(0)^(2/3)) + log_8(m(0)^(1/3))
So you get
n(k) = n(0)^(2/3) * m(0)^(1/3)
Or
n(k) = (m n^2)^(1/3)
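As a concrete check of the derivation (a worked example I'm adding), take n(0) = 4096 and m(0) = 1:
k = log_8(4096) = 4
n(4) = 4096 * 0.5^4 = 256
m(4) = 4^4 = 256, so the recursion stops here (m >= n)
and indeed 4096^(2/3) = (2^12)^(2/3) = 2^8 = 256.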
Quick note on corner cases in the starting conditions:
For m > 0:
If n <= 0, then m >= n is immediately true, the recursion terminates, and the loop body runs zero times.
For m < 0:
If n <= m, the recursion terminates immediately and the loop body runs zero times. If n > m, n converges to zero while m diverges, and the recursion never terminates (in practice, until m overflows).
The only interesting case is m == 0, where m stays 0 forever. If n is negative, m >= n holds immediately; if n is positive, n keeps halving and reaches zero because of integer truncation, so the recursion depth depends on when n reaches 1:
n(0) * 0.5^k = 1
log_2(n(0)) - k = 0
So in this case, the runtime of the recursion is still O(log(n)). The loop does not run.
m starts at 1, and at each step n -> n/2 and m -> m*4 until m >= n. After k steps, n_final = n/2^k and m_final = 4^k. So the final value of k is where n/2^k = 4^k, or k = log_8(n).
When this is reached, the inner loop performs n_final (approximately equal to m_final) iterations, leading to a complexity of O(4^k) = O(4^(log_8 n)) = O(4^(log_4(n)/log_4(8))) = O(n^(1/log_4(8))) = O(n^(2/3)).
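To make the result concrete, here is a small instrumented sketch (my addition, not from either answer): it counts the base-case loop iterations instead of printing, and compares the count with n^(2/3). The two agree up to a bounded constant factor from integer rounding.

#include <math.h>
#include <stdio.h>

/* Instrumented copy of g2: return how many '#' the base case
   would print, instead of printing them. */
static long iterations(long n, long m) {
    if (m >= n)
        return n > 0 ? n : 0;
    return iterations(n / 2, 4 * m);
}

int main(void) {
    for (long n = 1000; n <= 1000000000; n *= 10)
        printf("n = %10ld  iterations = %9ld  n^(2/3) = %12.0f\n",
               n, iterations(n, 1), pow((double) n, 2.0 / 3.0));
    return 0;
}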
I'm new to asymptotic analysis. While trying to find the big-O notation, a very similar-looking series simplifies to O(n) in one problem and to O(n log n) in another.
Here are the questions:
int fun(int n)
{
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}
T(n)=O(n)
int fun2(int n)
{
    int count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 1; j <= n; j += i)
            count++;
    return count;
}
T(n)=O(n log n)
I'm really confused. Why are the complexities of these seemingly similar algorithms different?
The series formed in the two cases are different.
Time Complexity Analysis
For fun: first i is n and the inner loop over j runs n times, then i is n/2 and the inner loop runs n/2 times, and so on. So the time complexity will be
= n + n/2 + n/4 + n/8 + ... + 1
This sum is at most 2n - 1, hence the time complexity is O(n).
For fun2: when i is 1, the inner loop over j runs n times; when i is 2, j advances two at a time, so the inner loop runs about n/2 times; when i is 3, about n/3 times; and so on. So the time complexity will be
= n + n/2 + n/3 + n/4 + ...
= n (1 + 1/2 + 1/3 + 1/4 + ... + 1/n)
= O(n log n)
The sum 1 + 1/2 + 1/3 + ... + 1/n is the harmonic series, which is O(log n).
For the former, the inner loop runs approximately (exactly if n is a power of 2)
n + n/2 + n/4 + n/8 + ... + n/2^log2(n)
times. It can be factored into
n * (1 + 1/2 + 1/4 + 1/8 + ... + (1/2)^(log2 n))
The 2nd factor is (a partial sum of) the geometric series, which converges: as we approach infinity it approaches a constant. Therefore it is Θ(1); when you multiply this by n you get Θ(n).
I analyzed the latter algorithm just a couple of days ago. The number of iterations for n in that algorithm is
ceil(n) + ceil(n / 2) + ceil(n / 3) + ... + ceil(n / n)
which is quite close to a partial sum of the harmonic series multiplied by n:
n * (1 + 1/2 + 1/3 + 1/4 + ... 1/n)
Unlike the geometric series, the harmonic series does not converge; it diverges as we add more terms. The partial sum of its first n terms is bounded above and below by ln n plus a constant, hence the time complexity of the entire algorithm is Θ(n log n).
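A quick way to see the difference empirically (my own sketch, reusing the two functions above): run both and compare the returned counts against 2n and n ln n.

#include <math.h>
#include <stdio.h>

int fun(int n) {                       /* halving outer loop: O(n) */
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}

int fun2(int n) {                      /* harmonic inner loop: O(n log n) */
    int count = 0;
    for (int i = 1; i < n; i++)
        for (int j = 1; j <= n; j += i)
            count++;
    return count;
}

int main(void) {
    for (int n = 1000; n <= 1000000; n *= 10)
        printf("n = %7d  fun = %8d (2n = %8d)  fun2 = %9d (n ln n = %9.0f)\n",
               n, fun(n), fun2(n), 2 * n, n * log((double) n));
    return 0;
}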
I really need some help at this problem:
Given a positive integer N, we define xsum(N) as the sum, over all positive integers i <= N, of the sum of the divisors of i.
For example: xsum(6) = 1 + (1 + 2) + (1 + 3) + (1 + 2 + 4) + (1 + 5) + (1 + 2 + 3 + 6) = 33.
(xsum(6) = sum of divisors of 1 + sum of divisors of 2 + ... + sum of divisors of 6)
Given a positive integer K, you are asked to find the lowest N that satisfies the condition: xsum(N) >= K
K is a nonzero natural number that has at most 14 digits
time limit : 0.2 sec
Obviously, brute force will fail most cases with Time Limit Exceeded. I haven't found anything better yet, so here is the code:
fscanf(fi, "%lld", &k);
i = 2;
sum = 1;
while (sum < k) {
    sum = sum + i + 1;
    d = 2;
    while (d * d <= i) {
        if (i % d == 0 && d * d != i)
            sum = sum + d + i / d;
        else if (d * d == i)
            sum += d;
        d++;
    }
    i++;
}
Any better ideas?
For each number n in the range [1, N] the following applies: n is a divisor of exactly floor(N / n) numbers in [1, N]. Thus each n contributes n * floor(N / n) to the result.
long long xsum(long long N) {   // long long, since K can have 14 digits
    long long result = 0;
    for (long long i = 1; i <= N; i++)
        result += (N / i) * i;  // due to integer division the two i's don't cancel out
    return result;
}
The idea behind this algorithm can as well be used to solve the main problem (smallest N such that xsum(N) >= K) faster than brute-force search.
The search can be bounded using a rule we can derive from the above code: each term i * floor(N / i) is at most N, so xsum(N) <= N * N. Hence xsum(N) >= K requires N >= sqrt(K), which gives a lower bound for starting the search.
The next step is to search for an upper bound. Since the growth of xsum(N) is (approximately) quadratic, we can use this to approximate N. This optimized guessing allows us to find the searched value pretty fast.
long long N(long long K) {
    // start with the lower bound for N
    long long lowerN = (long long) sqrt((double) K);
    long long upperN = lowerN;
    long long tmpSum;
    // grow the guess until xsum(upperN) reaches K
    while ((tmpSum = xsum(upperN)) < K) {
        long long r = K - tmpSum;
        lowerN = upperN;
        upperN += (long long) sqrt((double) (r / 3)) + 1;
    }
    // now we have an upper and a lower bound for N;
    // the rest of the search can be done using binary search
    while (lowerN < upperN) {
        long long mid = lowerN + (upperN - lowerN) / 2;
        if (xsum(mid) >= K)
            upperN = mid;
        else
            lowerN = mid + 1;
    }
    return lowerN;
}
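One more refinement, not in the original answer: with K up to 14 digits, N can be around 10^7, and calling the linear-time xsum O(log N) times may still be tight for the 0.2 s limit. The same sum can be evaluated in O(sqrt(N)) by grouping the indices i that share the same quotient N / i (the standard divisor-block trick; xsum_fast is my name for this hedged sketch):

long long xsum_fast(long long N) {
    long long result = 0;
    for (long long i = 1; i <= N; ) {
        long long q = N / i;        // quotient shared by the whole block
        long long last = N / q;     // last index with the same quotient
        // add q times the sum of i over the block [i, last]
        result += q * ((i + last) * (last - i + 1) / 2);
        i = last + 1;
    }
    return result;
}

Substituting xsum_fast for xsum inside the search above leaves the bounding and binary-search logic unchanged.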
I have written the following sorting algorithm for arrays of length 5:
int myarray[5] = {2, 4, 3, 5, 1};
int i;
for (i = 0; i < 5; i++)
{
    printf("%d", myarray[i]);
    int j;
    for (j = i + 1; j < 5; j++)
    {
        if (myarray[i] > myarray[j]) {
            int tmp = myarray[i];
            myarray[i] = myarray[j];
            myarray[j] = tmp;
        }
    }
}
I believe the complexity of this sorting algorithm is O(n*n), because each element is compared with the rest. However, I also notice that each time the outer loop advances, we don't compare with all of the rest, but with the rest minus i. What would the complexity be?
It's still O(n²) (or O(n * n), as you wrote). Only the highest order term matters when analysing computational complexity.
You are right:
It's O(1 + 2 + 3 + ... + (n - 1))
But mathematically that is just:
= O(n * (n - 1) / 2)
which is just:
= O(n^2)
You are right that it is O(n^2).
Here's how to calculate it. On the first iteration, you look at n elements; on the next, n - 1; and so on. If you write two copies of that sum and divide by two, you can pair up the terms: the first term of the first copy (n) is added to the last term of the second copy (1), and so on. You wind up with n copies of n + 1, so the sum is n * (n + 1) / 2. Big-O only distinguishes asymptotic behavior, which is described by the highest-order term without regard to constant factors: n^2.
n + (n - 1) + (n - 2) ... + 1
= 2 * (n + (n - 1) + (n - 2) ... + 1) / 2
= ((n + 1) + (n - 1 + 2) + (n - 2 + 3) + ... + (1 + n)) / 2
= ((n + 1) + (n + 1) + ... + (n + 1)) / 2
= n * (n + 1) / 2
= 1/2 * n^2 + 1/2 * n
= O(n^2)
This is an exchange sort (a variant of selection sort; it compares a[i] with every later a[j], not adjacent elements), and it is indeed of complexity O(n^2).
The entire run time of the algorithm can be summarized in the following summation:
n + (n-1) + (n-2) + ... + 1 = n(n+1)/2
Since only the highest order term is of interest in asymptotic analysis, the complexity is O(n^2)
The big-O notation is asymptotic. It means that we ignore lower-order effects such as the - i. The complexity of your algorithm is O(N^2).
Strictly speaking, for a fixed array of 5 elements the running time is O(1). The O notation only makes sense for large inputs, where growth is not only visible but relevant. If you generalized the code to arrays of length n, it would be O(n^2), yes.
For nested loops, multiply the iteration counts: n * m * ... (one factor per loop). For the above code the worst case is n * n = n^2.
Big-O is an upper bound, so the complexity can't be greater than this.
For
i = 0 it runs n times
i = 1 it runs n - 1 times
i = 2 it runs n - 2 times
...
So the total sum = n + (n - 1) + (n - 2) + ... + 1
= n*n - (1 + 2 + ... + (n - 1))
= n^2 - n*(n - 1)/2
= n*(n + 1)/2
So the big-O complexity = O(n^2) (an upper bound; the lower-order term gets ignored)
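To tie these answers together, here is a hedged sketch (my addition) that generalizes the question's sort to arrays of length n and counts the comparisons; the count comes out to exactly n * (n - 1) / 2.

#include <stdio.h>

/* Exchange sort generalized to length n, counting comparisons. */
static long sort_and_count(int *a, int n) {
    long comparisons = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            comparisons++;
            if (a[i] > a[j]) {
                int tmp = a[i];
                a[i] = a[j];
                a[j] = tmp;
            }
        }
    return comparisons;
}

int main(void) {
    int myarray[5] = {2, 4, 3, 5, 1};
    long c = sort_and_count(myarray, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", myarray[i]);
    printf("\ncomparisons = %ld  (n*(n-1)/2 = %d)\n", c, 5 * 4 / 2);
    return 0;
}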
Very similar complexity examples. I am trying to understand how these questions vary. Exam coming up tomorrow :( Any shortcuts for finding the complexities here?
CASE 1:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j += 1) {}
        N = N / 2;
    }
}
CASE 2:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j *= 4) {}
        N = N / 2;
    }
}
CASE 3:
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j *= 2) {}
        N = N / 2;
    }
}
Thank you so much!
void doit(int N) {
    while (N) {
        for (int j = 0; j < N; j += 1) {}
        N = N / 2;
    }
}
To find the O() of this, notice that we divide N by 2 each iteration. So (not to insult your intelligence, but for completeness) on the final non-zero pass through the outer loop we have N = 1; the pass before that, N is about 2; before that, about 4; and in general about a * 2^k, where the constant a absorbs the rounding. So the outer loop executes a total of about log2(N) times, and on the first pass N = a * 2^(floor(log2(N))).
Why do we care about that? Well, it's a geometric series which has a nice closed form:
Sum = sum_{k=0}^{log2(N)} a * 2^k = a * (2^(log2(N) + 1) - 1) = 2aN - a = O(N).
You already have the answer to number 1 (O(n)), as given by @NickO; here is an alternative explanation.
Denote the total number of inner-loop iterations by T(N), and let the number of outer loops be h. Note that h = log_2(N).
T(N) = N + N/2 + ... + N/(2^i) + ... + 2 + 1
< 2N (sum of a geometric series)
which is in O(N).
Number 3 is O((log N)^2).
Denote the total number of inner-loop iterations by T(N), and let the number of outer loops be h. Note that h = log_2(N).
T(N) = log(N) + log(N/2) + log(N/4) + ... + log(1)
= log(N * (N/2) * (N/4) * ... * 1) (because log(a*b) = log(a) + log(b))
= log(N^h * (1 * 1/2 * 1/4 * ... * 1/N))
= log(N^h) + log(1 * 1/2 * 1/4 * ... * 1/N) (because log(a*b) = log(a) + log(b))
< log(N^h) + log(1) (because 1 * 1/2 * 1/4 * ... * 1/N < 1)
= log(N^h) (log(1) = 0)
= h * log(N) (log(a^b) = b * log(a))
= (log(N))^2 (because h = log_2(N))
Number 2 is almost identical to number 3: the inner loop runs log_4(N) times instead of log_2(N), which only changes the constant factor, so it is also O((log N)^2).
(In cases 2 and 3 this assumes j starts from 1, not from 0. If j starts at 0, then j *= 4 (or j *= 2) leaves j at 0 forever and the inner loop never terminates, as @WhozCraig pointed out.)
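For completeness, a sketch (my addition) that counts the inner-loop iterations for case 1 and case 3, with j starting at 1 in case 3 per the note above so the loop terminates:

#include <stdio.h>

static long case1(long N) {   /* N + N/2 + N/4 + ... < 2N  =>  O(N) */
    long count = 0;
    while (N) {
        for (long j = 0; j < N; j += 1) count++;
        N = N / 2;
    }
    return count;
}

static long case3(long N) {   /* log N + log(N/2) + ...  =>  O((log N)^2) */
    long count = 0;
    while (N) {
        for (long j = 1; j < N; j *= 2) count++;
        N = N / 2;
    }
    return count;
}

int main(void) {
    for (long N = 1024; N <= 1048576; N *= 32)
        printf("N = %8ld  case1 = %8ld (< 2N)  case3 = %4ld\n",
               N, case1(N), case3(N));
    return 0;
}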