#include <stdio.h>

int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= i*i; j++)
            sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
For each value of i, the inner j loop runs i^2 times, which is at most N^2. So the complexity will be N * N^2 = N^3. Is that correct?
If problem becomes:
#include <stdio.h>

int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= i*i; j++)
            for (int k = 1; k <= j*j; k++)
                sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
Then do you multiply the existing N^3 by N^2 to get N^5? Is that correct?
We have i <= N, j <= i^2, and k <= j^2, which gives N^1 * N^2 * (N^2)^2 = N^3 * N^4 = N^7 by my count.
In particular, since 1 <= i <= N we have O(N) for the i loop. Since 1 <= j <= i^2 <= N^2 we have O(N^2) for the second loop. Extending the logic, we have 1 <= k <= j^2 <= (i^2)^2 <= N^4 for the third loop.
Going from the inner loops outward: the k loop executes up to N^4 times per iteration of the j loop, the j loop executes up to N^2 times per iteration of the i loop, and the i loop executes up to N times, making the total of order N^4 * N^2 * N = N^7, i.e. O(N^7).
I think the complexity is actually O(n^7).
The first loop executes N steps.
The second loop executes N^2 steps.
In the third loop, j*j can reach N^4, so it has O(N^4) complexity.
Overall, N * N^2 * N^4 = O(N^7)
For i = 1 the inner loop runs 1^2 times, for i = 2 it runs 2^2 times, ..., and for i = N it runs N^2 times. Its complexity is (1^2 + 2^2 + 3^2 + ... + N^2), which is of order O(N^3).
In the second case, for i = N the first inner loop iterates up to N^2 times, and hence the second (innermost) loop iterates up to (N^2)^2 = N^4 times per j value. Hence the complexity is of order N * N^2 * N^4, i.e. O(N^7).
Yes. In the first example, the i loop runs N times, and the inner j loop runs i*i times, which is O(N^2). So the whole thing is O(N^3).
In the second example there is an additional loop running up to j*j <= N^4 times, so it is O(N^7) overall.
For a more formal proof, work out how many times sum++ is executed in terms of N, and look at the highest polynomial order of N. In the first example it will be a(N^3)+b(N^2)+c(N)+d (for some values of a, b, c and d), so the answer is 3.
NB: Edited re example 2 to say it's O(N^4): misread i*i for j*j.
Consider the number of times all loops will be called.
int main() {
    int N = 8; /* for example */
    int sum = 0;
    for (int i = 1; i <= N; i++)           /* runs N times */
        for (int j = 1; j <= i*i; j++)     /* runs up to N^2 times per i, for i = 1..N */
            for (int k = 1; k <= j*j; k++) /* runs up to N^4 times per j, for j = 1..N^2 */
                sum++;
    printf("Sum = %d\n", sum);
    return 0;
}
Thus the sum++ statement is called O(N^4) * O(N^2) * O(N) = O(N^7) times, and this is the overall complexity of the program.
The incorrect way to solve this (although common, and often gives the correct answer) is to approximate the average number of iterations of an inner loop with its worst-case. Here, the inner loop loops at worst O(N^4), the middle loop loops at worst O(N^2) times and the outer loop loops O(N) times, giving the (by chance correct) solution of O(N^7) by multiplying these together.
The right way is to work from the inside out, being careful to be explicit about what's being approximated.
The total number of iterations, T, of the increment instruction is exactly the count your code computes. Writing it out:
T = sum(i=1..N)sum(j=1..i^2)sum(k=1..j^2)1.
The innermost sum is just j^2, giving:
T = sum(i=1..N)sum(j=1..i^2)j^2
The sum indexed by j is a sum of squares of consecutive integers. We can calculate that exactly: sum(j=1..n)j^2 is n*(n+1)*(2n+1)/6. Setting n=i^2, we get
T = sum(i=1..N)i^2*(i^2+1)*(2i^2+1)/6
We could continue to compute the exact answer, by using the formula for sums of 6th, 4th and 2nd powers of consecutive integers, but it's a pain, and for complexity we only care about the highest power of i. So we can approximate.
T = sum(i=1..N)(i^6/3 + o(i^5))
We can now use that sum(i=1..N)i^p = Theta(N^{p+1}) to get the final result:
T = Theta(N^7)
I wanted to confirm if I got the correct big-O for a few snippets of code involving for-loops.
for (int a = 0; a < n; a++)
    for (int b = a; b < 16; b++)
        sum += 1;
I think this one is O(16N) => O(N), but the fact that b starts at a rather than 0 in the second for-loop is throwing me off.
int b = 0;
for (int a = 0; a < n; a++) {
    sum += a * n;
    for (; b < n; b++)
        sum++;
}
I want to say O(N^2) since there are nested for-loops where both loops go to n. However, b in the second loop uses the initialization from the outer scope, and I'm not sure if that affects the runtime.
for (int a = 0; a < (n * n); a++) {
    sum++;
    if (a % 2 == 1)
        for (; a < (n * n * n); a++)
            sum++;
}
I got that the first for-loop is O(N^2) and the one under the if-statement is O(N^3), but I don't know how to account for the if-statement.
The first one is O(n): the inner loop runs at most 16 times, a constant, so it contributes O(1) work per outer iteration. (You can also write it as O(n * min(n, 16)), but since min(n, 16) <= 16 that is still just O(n).)
The second one is O(n) because after the first outer iteration b has already reached n, so the inner loop never runs again.
The third one is O(n^3): within at most 2 outer iterations you reach the if statement with a odd, and then a is incremented all the way up to n^3. At the next outer check a < n*n is no longer true, so the loop exits entirely.
Hopefully this answers your questions, good luck!
int f1(int N) {
    int Sum, i, j, k;
    Sum = 0;
    for (i = 0; i < N; i++)
        for (j = 0; j < i * i; j++)
            for (k = 0; k < j; k++)
                Sum++;
    return Sum;
}

int f2(int N) {
    int Sum, i, j;
    Sum = 0;
    for (i = 0; i < 10; i++)
        for (j = 0; j < i; j++)
            Sum += j * N;
    return Sum;
}
What are the complexities of f1 and f2?
I have no idea about the complexity of f1, and I think the complexity of f2 should be O(1) since the number of iterations is constant. Is that correct?
Your first function has the complexity O(N^(1+2+2)) = O(N^5).
In the first loop, i goes from 0 to N; in the second, j runs up to a limit that is on the order of N^2; and in the third, k runs over an interval whose size is also on the order of N^2.
The function f2 is constant time, O(1), because its loop bounds are fixed and give the input no degrees of freedom.
This kind of thing is studied in algorithms courses under the topic of complexity.
There are also other ways to measure the complexity of algorithms, for example Omega (lower-bound) notation.
The complexity of f1 is in O(n^5) since
for (i = 0; i < N; i++)          // i has an upper bound of N
    for (j = 0; j < i * i; j++)  // j has an upper bound of i^2, which itself is at most N^2
        for (k = 0; k < j; k++)  // k has an upper bound of j, which is at most N^2
            Sum++;               // constant
So the complete upper bound is n * n^2 * n^2 which is n^5 so f1 is in O(n^5).
for (i = 0; i < 10; i++)     // upper bound of 10, so O(10), which is O(1)
    for (j = 0; j < i; j++)  // upper bound of i < 10, so also O(1)
        Sum += j * N;        // N is just an integer here, and the multiplication is a constant-time operation independent of the size of N, so also O(1)
So f2 is in O(1*1*1) which is simply O(1).
Note that all assignments and declarations are constant-time as well.
BTW, since Sum++ has no side effects and, with its loops, develops a series with a known closed form (math, yay), a programmer or an optimizing compiler could reduce f1 to a constant-time program using summation formulas such as the Gaussian sum n(n+1)/2; a rough sketch would be something like ((N*N + N)/2) * ((N*N*N*N + N*N)/2) * 2, though this does not account for the loops starting at 0.
Using sigma notation:
f1:
The outer loop runs from 0 to N-1, the one inside it runs from 0 to i^2 - 1, and the last one runs from 0 to j - 1, and inside we only have one operation, so we are summing 1. Thus we get:
T = sum(i=0..N-1) sum(j=0..i^2-1) sum(k=0..j-1) 1
1+1+1... j times gives 1*j = j, thus we get:
T = sum(i=0..N-1) sum(j=0..i^2-1) j
Using the rule for the summation of natural numbers, but replacing n (in the Wikipedia article) with i^2, we get:
T = sum(i=0..N-1) i^2*(i^2 - 1)/2 ≈ sum(i=0..N-1) i^4/2
The reason for the approximation is that when finding the time complexity of a function and we have the addition of multiple powers, we keep only the highest one. This just makes the math simpler. For example f(n) = n^3 + n^2 + n = O(n^3) (supposing that f(n) represents the maximal running time required by the given algorithm depending on the input size n).
And using the formula for the summation of the first N numbers to the 4th power we get (see the note at the end):
T = Theta(N^5)
Thus the time complexity for f1 is O(N^5).
f2:
Using the same method we get:
sum(i=0..9) sum(j=0..i-1) 1 = sum(i=0..9) i = 45
But this just gives a constant which doesn't depend on N, thus the time complexity for f2 is O(1).
Note:
When we have a summation of the first N numbers each raised to the Kth power, its order is N^(K+1), so you don't need to remember the exact formula. For example:
sum(i=1..N) i^K = Theta(N^(K+1)), e.g. sum(i=1..N) i^4 = Theta(N^5)
I understand how to use summation to extract the time complexity (and big-O) from linear for-loops, but how would you use it for loops with multiplicative increments, to get O(log n)? For example, the code below is O(n log n), but I don't know why.
for (i = 0; i < n; i++)
    for (j = 1; j < n; j*7)
        /* some O(1) operations */
Also, why is a while loop O(log n) while a do-while loop is O(n^2)?
At each iteration of the inner loop you perform j = j * 7 (I assume this is what you meant)
That is, at each iteration j becomes 7j.
Starting from j = 1, after m iterations j = 7*7*...*7 = 7^m.
Let n be the bound we want to reach and m the number of iterations, so:
n = 7^m
Taking the log of both sides:
log(n) = m*log(7), so m = log(n)/log(7) = O(log(n))
So, as we can see, the inner loop runs O(log(n)) times; together with the n iterations of the outer loop, the whole thing is O(n log(n)).
I am trying to understand the subtle difference in the complexity of
each of the examples below.
Example A
int sum = 0;
for (int i = 1; i < N; i *= 2)
    for (int j = 0; j < N; j++)
        sum++;
My Analysis:
The first for loop goes for lg n times.
The inner loop is independent of outer loop and executes N times every time outer loop executes.
So the complexity must be:
n+n+n... lg n times
Therefore the complexity is n lg n.
Is this correct?
Example B
int sum = 0;
for (int i = 1; i < N; i *= 2)
    for (int j = 0; j < i; j++)
        sum++;
My Analysis:
The first for loop goes for lg n times.
The inner loop execution depends on outer loop.
So how do I calculate the complexity when no of times inner loop executes depends on outer loop?
Example C
int sum = 0;
for (int n = N; n > 0; n /= 2)
    for (int i = 0; i < n; i++)
        sum++;
I think example C and example B must have same complexity because no of times the inner loop executes depends on outer loop.
Is this correct?
In examples B and C, the inner loop body executes 1 + 2 + ... + n/2 + n times in total. There happen to be lg n terms in this sum, which does mean the inner loop's initialization executes lg n times; however, the total for the statement(s) inside the inner loop is about 2n. So we get O(n + lg n) = O(n).
(a) Your analysis is correct
(b) The outer loop runs log(N) times. The inner loop runs 1, 2, 4, 8, ... times across those iterations; this is a geometric series whose sum is approximately twice its largest term, i.e. about 2^(log(N)+1).
E.g.: 1 + 2 + 4 ≈ 2*4, and 1 + 2 + 4 + 8 ≈ 2*8.
Hence the total complexity is O(2^log(N)) = O(N).
(c) This is the same as (b), just in reverse order.
Find the time complexity:
i = 1;
k = 1;
while (k < n)
{
    stmt;
    k = k + i;
    i++;
}
I am trying to figure out the complexity of a for loop using Big O notation. I have done this before in my other classes, but this one is more rigorous than the others because it is on the actual algorithm. The code is as follows:
for (i = n; i > 1; i /= 2) // for any size n
{
    for (j = 1; j < i; j++)
    {
        x += a;
    }
}
and
for (i = 1; i <= n; i++, x = 1) // for any size n
{
    for (j = 1; j <= i; j++)
    {
        for (k = 1; k <= j; x += a, k *= a)
        {
        }
    }
}
I have concluded that the first loop is of O(n) complexity because it is going through the list n times. As for the second loop I am a little lost!
Thank you for the help with the analysis. Each loop is its own separate fragment; they are not run together.
Consider the first code fragment,
for (i = n; i > 1; i /= 2) // for any size n
{
    for (j = 1; j < i; j++)
    {
        x += a;
    }
}
The instruction x+=a is executed for a total of n + n/2 + n/4 + ... + 1 times.
The sum of the first log_2(n) terms of a G.P. with first term n and common ratio 1/2 is n * (1 - (1/2)^(log_2 n)) / (1/2) = 2n - 2. Thus the complexity of the first code fragment is O(n).
Now consider the second code fragment,
for(i=1 ; i<=n; i++,x=1)
{
for(j = 1; j <= i; j++)
{
for(k = 1; k <= j; x+=a,k*=a)
{
}
}
}
The two outer loops together call the innermost loop a total of n(n+1)/2 times. The innermost loop is executed at most log_a(n) times. Thus the total time complexity of the second code fragment is O(n^2 log_a(n)).
You may also proceed formally, writing the iteration counts in closed form: Fragment 1 via the geometric series above, and Fragment 2 via Pochhammer symbols, the Barnes G-function (through log G(n)), and Stirling's approximation, with some enhancements from the publication "Discrete Loops and Worst Case Performance" by Dr. Johann Blieberger (all cases verified for a = 2).
EDIT: I agree the first code block is O(n).
You halve the outer loop variable i each time, and the inner loop runs i times, so the number of iterations is a sum over all the powers of two less than or equal to N but greater than 0, which is 2^(log_2(n)+1) - 1, so O(n).
The second code block is O(n^2 log_a(n)), assuming a is a constant.
The two outermost loops equate to a sum of all the numbers less than or equal to n, which is n(n+1)/2, so O(n^2). Finally, the innermost loop runs over the powers of a below an upper bound of n, which is O(log_a(n)).