I had to determine the big-O complexity of this piece of code.
I thought the answer was O(n log n), but apparently it's O(n). Can anyone help explain why that is so?
void funct(int n)
{
for (int i = n; i > 0; i /= 2)
for(int j = 0; j < i; j++)
printf("%d\n", j%2);
}
That's a geometric progression.
The first time the inner loop is executed n times.
The second time it is executed n/2 times.
etc...
So we have the sequence:
n + n/2 + n/4 + ... + 1
so, summing the log(n) + 1 terms of this geometric progression, the final formula is:
n*(1 - (1/2)^(log n + 1))/(1 - 1/2) = 2n - 1
which is O(n)
Look, these can be solved using double sigmas.
Let $ represent sigma.
So this problem is:
$(i = n down to 1, halving each step) $(j = 0 to i-1) 1
where 1 represents a unit cost.
Now, the inner sigma is a sum of 1 taken i times, that is = i.
Now the problem is
$(i = n down to 1, halving each step) i
which is the sum of the i values, i.e. n + n/2 + n/4 + ... + 1 (the term n/2^x reaches 1 after log(n) steps),
or
n*(1 + 1/2 + 1/4 + ... for log(n) + 1 terms)
which is a convergent geometric progression, and the result is n*(1 - (1/2)^(log n + 1))/(1 - 1/2) = 2n - 1, i.e. O(n).
The outer loop, as I'm sure you can see, is executed log(n) times. The inner loop runs n times on the first pass, n/2 on the second, n/4 on the third, and so on, so the printf statement is executed n + n/2 + n/4 + ... + 1 times, which is less than 2n. So the complexity of the code is O(n).
I have been given the following code:
void sort(int a[], int n)
{
for(int i = 0; i < n - 1; i++)
for(int j = 0; j < n - i - 1; j++)
if(a[j] > a[j+1])
swap(a + j, a + j + 1);
}
I have to calculate the worst-case running time O(n) of this code.
n is 20, so I was thinking, is O(n) = 20 - 1, or is it O(n) = n - 1?
Any time you see a double-nested for loop, your first reaction should be: likely O(N²).
But let's prove it.
Start with the outer loop. It will iterate n-1 times. As n gets really large, the -1 part is negligible. So the outer loop iterates pretty close to n iterations.
When i==0, the inner loop will iterate n-1 times. But again, there's not much difference between n and n-1 in terms of scale. So let's just say it iterates n times.
When i==n-2, the inner loop will iterate exactly once.
Hence, the inner loop iterates an average of n/2 times.
Hence, if(a[j] > a[j+1]) is evaluated approximately n * n/2 times. Or n²/2 times. And for Big-O notation, we only care about the largest polynomial and none of the factors in front of it. Hence, O(N²) is the running time. Best, worst, and average.
What would be the time complexity of the following block of code, void function(int n)?
My attempt was that the outermost loop would run n/2 times and the inner two would run 2^q times. Then I equated 2^q with n and got q as 1/2(log n) with base 2. Multiplying the time complexities I get my value as O(nlog(n)) while the answer is O(nlog^2(n)).
void function(int n) {
int count = 0;
for (int i=n/2; i<=n; i++)
for (int j=1; j<=n; j = 2 * j)
for (int k=1; k<=n; k = k * 2)
count++;
}
Time to apply the golden rule of understanding loop nests:
When in doubt, work inside out!
Let’s start with the original loop nest:
for (int i=n/2; i<=n; i++)
for (int j=1; j<=n; j = 2 * j)
for (int k=1; k<=n; k = k * 2)
count++;
That inner loop will run Θ(log n) times, since after m iterations of the loop we see that k = 2^m, and we stop when k > n = 2^(lg n). So let's replace that inner loop with this simpler expression:
for (int i=n/2; i<=n; i++)
for (int j=1; j<=n; j = 2 * j)
do Theta(log n) work;
Now, look at the innermost remaining loop. With exactly the same reasoning as before we see that this loop runs Θ(log n) times as well. Since we do Θ(log n) iterations that each do Θ(log n) work, we see that this loop can be replaced with this simpler one:
for (int i=n/2; i<=n; i++)
do Theta(log^2 n) work;
And here that outer loop runs Θ(n) times, so the overall runtime is Θ(n log² n).
I think that, based on what you said in your question, you had the right insights but just forgot to multiply in two copies of the log term, one for each of the two inner loops.
In your code there are 3 nested loops.
First loop runs n/2 times, which counts as n when calculating complexity.
Second loop runs log n times.
Third loop runs log n times.
So, finally, the time complexity will be O(n * log n * log n) = O(n log² n).
Now, one may wonder how the run time complexity of the two inner loops is logn. This can be generalized as follows:
Since we are multiplying by 2 in each iteration, we need value of q such that:
n = 2 ^ q.
Taking log base 2 on both sides,
log2 n = log2 (2^q)
log2 n = q log2(2)
log2 n = q * 1 [ since, log2(2) is 1 ]
So, q is equal to logn.
So, overall time complexity is: O(n*log^2n).
First loop takes: n/2
Second loop: log(n)
Third loop: log(n)
Notice that because the counters of the inner loops are multiplied by two each step, they grow exponentially, reaching n after log(n) iterations.
Then, also notice that constant factors like the 1/2 can safely be ignored in that case, which results in O(n * log(n) * log(n)), thus:
O(n log² n)
Why isn't the time complexity here O(n^2), and instead it is O(n)?
Doesn't the first loop run n times, and the second one as well, so it becomes O(n*n)? What is wrong here?
void f(int n){
for( ; n>0; n/=2){
int i;
for(i=0; i<n; i++){
printf("hey");
}
}
}
Doesn't the first loop run n times, and the second one as well, so it becomes O(n*n)?
Above statement is false, since:
The outer loop does not run n times. (The outer loop runs O(log n) times, but it does not matter in this case.)
For the inner loop, the number of loops differs as the value of n changes.
To get the time complexity of this code, we should count the total number of times the body of the inner loop is executed.
Clearly, the body of the inner loop is executed n times, for each value of n.
The value of n is determined by the for statement of the outer loop. It starts from the value given as the argument to the function, and is halved each time the body of outer loop is executed.
So as already stated by the comments, since n + n/2 + n/4 + n/8 + ... = 2n, the time complexity for this algorithm is O(n).
For some more concrete mathematical proof on this:
Find an integer k such that 2^(k-1) < n <= 2^k. For this k:
A lower bound for the total number of inner loops is 1 + 2 + 4 + ... + 2^(k-1) = 2^k - 1 >= n - 1 ∈ Ω(n).
An upper bound for the total number of inner loops is 1 + 2 + 4 + ... + 2^k = 2^(k+1) - 1 < 4n - 1 ∈ O(n).
Therefore the total number of inner loops is Θ(n), as well as O(n).
So, I have this code and I need to make it run on a time complexity of less than O(n^3). I've just started learning about complexity and I really have no idea what to do.
int n, i, j, k, x=0;
printf("Type n: \n");
scanf("%d",&n);
for(i=1; i<n; i++)
{
for(j=1; j<i; j++)
{
for(k=1; k<j; k++)
{
x=x+1;
}
}
}
printf("%d\n",x);
I think I get why it's O(n^3), but I don't really know how to make it more efficient. I tried turning it into a recursive function, is it possible?
You're adding 1 to the result for each i, j, k with 0 < k < j < i < n. There are choose(n-1, 3) such triples of i, j, k (one for each subset of size 3 of {1, 2, ..., n-1}). (Here "choose" is the binomial coefficient function.)
Thus, you can replace your loop-based computation with choose(n-1, 3) which is (n - 1)(n - 2)(n - 3) / 6 if n is positive.
int n;
printf("Type n: \n");
scanf("%d",&n);
printf("%d\n", n > 0 ? (n-1)*(n-2)*(n-3)/6 : 0);
This is O(1) to compute the result, and O(log N) to output it (since the result has O(log N) digits).
Your current function is just a lousy O(n^3) way to calculate some mathematical function ...
In Out
0 0
1 0
2 0
3 0
4 1
5 4
6 10
7 20
8 35
9 56
10 84
x will end up being equal to the number of iterations.
Your assignment is likely to reinterpret that for loop into an equation.
We know that the outer loop will execute its block (n-1) times. The middle loop will execute its block a total of 1 + 2 + ... + (n-2) = (n-1)(n-2)/2 times. The innermost loop runs j-1 times for each j, so its block executes the sum of the triangular numbers T_1 + T_2 + ... + T_(n-3) times (where T_m = m(m+1)/2), and by the hockey-stick identity that sum is (n-1)(n-2)(n-3)/6.
Another way: since we know that n = 1, 2, 3 all yield zero, we also know the function is at least a multiple of (n - 1)(n - 2)(n - 3). Solve for n=4 and you get 1/6 as the constant factor.
I refactored your loop as follows:
for(i=1; i<n-2; i++)
{
x = x + ( ( i * ( i + 1 ) ) / 2 );
}
This works because ( ( i * ( i + 1 ) ) / 2 ) = sum of all values in the series 1 through i.
Your innermost loop (using variable k) is the equivalent of adding j - 1 to x. Your second loop (using variable j) is then the equivalent of calculating the sum of the series 1 through i - 2.
So I've replaced your second and third loop with the sum of the series 1 through i. We keep your first loop, and at each iteration add the sum of the series 1 through i to your previous value.
Note that I've added a -2 to your outer loop to simulate the < sign in your two inner loops. If your requirement was <= on each inner loop then that -2 would not be needed.
This is an O(n) solution, which is not as good as Paul Hankin's O(1) solution.
I know there are many similar questions about Big-O notation, but this example is quite interesting and not trivial:
int sum = 0;
for (int i = 1; i <= N; i = i*2)
for (int j = 0; j < i; j++)
sum++;
The outer loop will iterate lg(N) times, but what about the inner loop? And what is T(N) for all the operations?
I can see only 3 possibilities:
T(N) = lg(N) * 2^N
T(N) = log(N) * (N-1)
T(N) = N
My opinion is T(N) = N, but that is just intuition from observing the value of the sum variable as N was doubled many times: sum was almost equal to 2N, which gives us O(N).
Basically I do not know how to count it. Please help me with this task and explain the solution - it is quite important for me.
Thanks
On its last pass the inner loop iterates at most N times; on the pass before that, N/2 times. If you sum it up, N + N/2 + N/4 + N/8 + ... adds up to at most 2*N. And that's all, as you have counted all the runs. T(N) = O(N).