Time complexity when inner loop starts with j=2

I get O(n^2 log n) as the answer for the following code, yet I am unable to understand why.
int unknown(int n) {
    int i, j, k = 0;
    for (i = n / 2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n / 2;
    return k;
}

A fixed constant starting point will make no difference to the inner loop in terms of complexity.
Starting at two instead of one will mean one fewer iteration, but the growth is still logarithmic.
Think in terms of what happens when you double n: this adds one more iteration to that loop regardless of whether you start at one or two. Hence it's O(log N) complexity.
However, you should keep in mind that the outer loop is an O(N) one, since its number of iterations is proportional to N. That makes the function as a whole O(N log N), not the O(N^2 log N) you posit.

Related

What's the time complexity of this piece of code?

I'm studying for an exam and I came across this piece of code, and I need to find its best and worst case.
A(n):
for (int i = 1; i < n; i *= 2) {
    for (int j = 0; j < i; j++) {
        if (i == j) return; // does nothing
        for (int k = n; k > 0; k--)
            if (f(n))
                return g(n);
    }
}
where the functions' worst and best cases are:
f(n): O(n) worst case, Ω(log^7(n)) best case
g(n): O(n^2 * log^6(n)) worst case, Ω(n^2 * log^6(n)) best case
Worst case:
The complexity of the first loop is log(n); the second loop depends on the first, but I would say its complexity is n. The third for loop is n. f(n) is checked in O(n), and in the worst case g(n) will be executed in the last iteration, with complexity O(n^2 * log^6(n)). So I would say the worst case is log(n) * n * n * n + n^2 * log^6(n), so it's O(n^3 * log(n)).
The other logic would be that, since the second loop depends on the first one, the iterations go 1 + 2 + 4 + 8 + 16 + ..., which is a geometric series whose sum is 2^log(n), which is n. Everything under the first loop would stay the same, so in this case the Big-O would be O(n^3).
Best case: I found that the best case would be O(n^2 * log^6(n)), as it would go straight to the return statement without iterating at all.
Basically, the main question is how a loop executed log(n) times affects a nested loop, executed n times, that depends on it.
Which logic is right for the worst case, and is the best case OK?

Writing down the worst-case running time O(n) of this given code?

I have been given the following code:
void sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j+1])
                swap(a + j, a + j + 1);
}
I have to calculate the worst-case running time O(n) of this code.
n is 20, so I was thinking: is O(n) = 20 - 1, or is it O(n) = n - 1?
Any time you see a doubly-nested for loop, your first reaction should be: likely O(N²).
But let's prove it.
Start with the outer loop. It will iterate n-1 times. As n gets really large, the -1 part is negligible. So the outer loop iterates pretty close to n iterations.
When i==0, the inner loop will iterate n-1 times. But again, there's not much difference between n and n-1 in terms of scale. So let's just say it iterates n times.
When i==n-2, the inner loop will iterate exactly once.
Hence, the inner loop iterates an average of n/2 times.
Hence, if(a[j] > a[j+1]) is evaluated approximately n * n/2 times. Or n²/2 times. And for Big-O notation, we only care about the largest polynomial and none of the factors in front of it. Hence, O(N²) is the running time. Best, worst, and average.

Nested loop analyzing (each loop bounds inner loop)

In my data structures lecture, I got homework about algorithms and time complexity, but I could not figure out what I need to do for this.
Question: What is the time complexity of this algorithm?
My solution was to analyze it loop by loop, removing constants and lower-order terms of each loop. Since there are three loops nested within each other, the complexity should be O(n^3). The critical point is that the innermost loop is bounded dynamically.
What is the mistake in this analysis (if there is one)?
int c = 0;
for (int i = 0; i < n * n; ++i)
    for (int j = 0; j < n; ++j)
        for (int k = 0; k < j * 2; ++k)
            c = c + 1;
return c;
All answers are welcome.
In order to compute the time complexity, you can evaluate the number of iterations of the innermost loop.
The loop on k evaluates a simple expression 2 * j times.
The loop on j runs n times. Hence, per outer iteration, the innermost statement runs 2 * (0 + 1 + ... + (n - 1)) times, which simplifies to n * (n - 1).
The outer loop runs n * n times. Hence the innermost statement runs exactly n * n * n * (n - 1) times.
Keeping only the dominant term, the resulting time complexity is O(n^4).
Yet a very shrewd compiler would reduce this complexity to constant time O(1) and generate code for:
return n * n * n * (n - 1);
Trying this on Godbolt's compiler explorer shows that none of the common compilers achieve this as of now, albeit clang goes to great lengths trying to optimize the code with unfathomable SIMD code.

Time complexity of nested for loop

What would be the time complexity of the following block of code, void function(int n)?
My attempt was that the outermost loop would run n/2 times and the inner two would run 2^q times. Then I equated 2^q with n and got q as 1/2(log n) with base 2. Multiplying the time complexities, I get O(n log(n)), while the answer is O(n log^2(n)).
void function(int n) {
    int count = 0;
    for (int i = n/2; i <= n; i++)
        for (int j = 1; j <= n; j = 2 * j)
            for (int k = 1; k <= n; k = k * 2)
                count++;
}
Time to apply the golden rule of understanding loop nests:
When in doubt, work inside out!
Let’s start with the original loop nest:
for (int i = n/2; i <= n; i++)
    for (int j = 1; j <= n; j = 2 * j)
        for (int k = 1; k <= n; k = k * 2)
            count++;
That inner loop will run Θ(log n) times, since after m iterations of the loop we see that k = 2^m, and we stop when k ≥ n = 2^(lg n). So let's replace that inner loop with this simpler expression:
for (int i = n/2; i <= n; i++)
    for (int j = 1; j <= n; j = 2 * j)
        do Theta(log n) work;
Now, look at the innermost remaining loop. With exactly the same reasoning as before we see that this loop runs Θ(log n) times as well. Since we do Θ(log n) iterations that each do Θ(log n) work, we see that this loop can be replaced with this simpler one:
for (int i = n/2; i <= n; i++)
    do Theta(log^2 n) work;
And here that outer loop runs Θ(n) times, so the overall runtime is Θ(n log^2 n).
I think that, based on what you said in your question, you had the right insights but just forgot to multiply in two copies of the log term, one for each of the two inner loops.
In your code there are 3 nested loops.
The first loop runs n/2 times, which is treated as n when calculating complexity.
The second loop runs log n times.
The third loop runs log n times.
So, finally, the time complexity will be O(n * log n * log n) = O(n log^2 n).
Now, one may wonder how the run time complexity of the two inner loops is logn. This can be generalized as follows:
Since we are multiplying by 2 in each iteration, we need value of q such that:
n = 2 ^ q.
Taking log base 2 on both sides,
log2 n = log2 (2^q)
log2 n = q log2(2)
log2 n = q * 1 [ since, log2(2) is 1 ]
So, q is equal to logn.
So, overall time complexity is: O(n*log^2n).
The first loop takes n/2 iterations.
The second loop: log(n).
The third loop: log(n).
Notice that because the counters of the inner loops are multiplied by two on each step, they grow exponentially, reaching n in log(n) steps.
Then notice that constants like 1/2 can safely be ignored in this case, which results in O(n * log(n) * log(n)), thus:
O(n log^2 n)

Why does this loop return a value that's O(n log log n) and not O(n log n)?

Consider the following C function:
int fun1(int n)
{
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i)
    {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
The question is to decide which of the following most closely approximates the return value of the function fun1?
(A) n^3
(B) n (logn)^2
(C) nlogn
(D) nlog(logn)
This was the explanation which was given :
int fun1(int n)
{
    int i, j, k, p, q = 0;
    // This loop runs T(n) time
    for (i = 1; i < n; ++i)
    {
        p = 0;
        // This loop runs T(Log Log n) time
        for (j = n; j > 1; j = j / 2)
            ++p;
        // This loop runs T(Log Log n) time
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
But the time complexity of a loop is considered O(log n) if the loop variable is divided or multiplied by a constant amount:
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
But it was mentioned that the inner loops take Θ(log log n) time each. Can anyone explain the reason, or is the answer wrong?
This question is tricky - there is a difference between what the runtime of the code is and what the return value is.
The first loop's runtime is indeed O(log n), not O(log log n). I've reprinted it here:
p = 0;
for (j = n; j > 1; j = j / 2)
    ++p;
On each iteration, the value of j drops by a factor of two. This means that the number of steps required for this loop to terminate is given by the minimum value of k such that n / 2^k ≤ 1. Solving, we see that k = O(log2 n).
Notice that each iteration of this loop increases the value of p by one. This means that at the end of the loop, the value of p is Θ(log n). Consequently, this next loop does indeed run in time O(log log n):
for (k = 1; k < p; k = k * 2)
    ++q;
The reason for this is that, using similar reasoning to the previous section, the runtime of this loop is Θ(log p), and since p = Θ(log n), this ends up being Θ(log log n).
However, the question is not asking what the runtime is. It's asking what the return value is. On each iteration, the value of q, which is what's ultimately returned, increases by Θ(log log n) because it's increased once per iteration of a loop that runs in time Θ(log log n). This means that the net value of q is Θ(n log log n). Therefore, although the algorithm runs in time O(n log n), it returns a value that's O(n log log n).
Hope this helps!
The only thing wrong I see here concerns the second loop:
for (j=n; j>1; j=j/2)
You say in the comments: this loop runs Θ(Log Log n) time.
As I see it, this loop runs O(log n) times.
The running times for the first and third loops are correct (O(n) and O(Log Log n)).
EDIT: I agree with the previous answer. I did not notice that the question is about the return value, not the running time!
Answer would be (D) O(n * log(log n)). The reason is described below :-
The first for loop encompasses the other two for loops, which are based on the values of j and k respectively. j is halved starting from n until it is no longer greater than 1, so p ends up equal to [log n], where [x] is the greatest integer of x. Then k doubles until it reaches p, which was set by the previous loop.
So the third loop runs log(log n) times, meaning q increases by log(log n) on each of the n iterations of the outer for-loop.
Approximate value of q = n * log(log n) = O(n log(log n)).
