I wanted to figure out the Big O of these loops:
for (int count = 0, i = 1; i <= N; i *= 2) {
    for (int j = 1; j <= i; j++) {
        count++;
    }
}
My attempt:
the outer loop runs log(n) times
the inner loop runs (n-1) times
so in total: O(log(n) * (n-1))
I am not sure, though.
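One quick way to sanity-check a guess like this is to count the iterations directly. Here is a small sketch added for illustration (the driver loop over several values of N is mine, not part of the question):

#include <stdio.h>

int main(void) {
    /* Run the loop nest from the question for a few sizes and print the count. */
    for (int N = 16; N <= (1 << 20); N <<= 4) {
        int count = 0;
        for (int i = 1; i <= N; i *= 2)
            for (int j = 1; j <= i; j++)
                count++;
        /* For N a power of two, the count comes out to 2*N - 1. */
        printf("N = %8d  count = %8d\n", N, count);
    }
    return 0;
}

You can compare the printed counts against what your O(log(n) * (n-1)) estimate would predict.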
I have been given the following code:
void sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j+1])
                swap(a + j, a + j + 1);
}
I have to calculate the worst-case running time O(n) of this code.
n is 20, so I was wondering: is O(n) = 20 - 1, or is it O(n) = n - 1?
Any time you see a doubly-nested for loop, your first reaction should be: likely O(N²).
But let's prove it.
Start with the outer loop. It iterates n-1 times. As n gets really large, the -1 part is negligible, so the outer loop performs pretty close to n iterations.
When i==0, the inner loop iterates n-1 times. But again, there's not much difference between n and n-1 in terms of scale, so let's just say it iterates n times.
When i==n-2, the inner loop will iterate exactly once.
Hence, the inner loop iterates an average of n/2 times.
Hence, if(a[j] > a[j+1]) is evaluated approximately n * n/2 times. Or n²/2 times. And for Big-O notation, we only care about the largest polynomial and none of the factors in front of it. Hence, O(N²) is the running time. Best, worst, and average.
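To see the n²/2 figure concretely, here is a sketch of the same sort instrumented with a comparison counter (the counter, the sort_counted name, and the main driver are additions for illustration, not part of the original code):

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Same bubble sort as above, but counting how often the comparison runs. */
long sort_counted(int a[], int n)
{
    long comparisons = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++) {
            comparisons++;
            if (a[j] > a[j+1])
                swap(a + j, a + j + 1);
        }
    return comparisons;
}

int main(void)
{
    int a[20];
    int n = 20;
    for (int i = 0; i < n; i++)
        a[i] = n - i;                                   /* reverse order: the worst case */
    printf("comparisons = %ld\n", sort_counted(a, n));  /* prints 190 */
    printf("n*(n-1)/2   = %d\n", n * (n - 1) / 2);      /* also 190 */
    return 0;
}

The exact count is n*(n-1)/2 regardless of the input, which is why best, worst, and average case are all Θ(n²) here.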
What is the run time of the following function? Is it log(n)?
int fun(int n) {
    int count = 0;
    for (int i = n; i > 0; i /= 2)
        for (int j = 0; j < i; j++)
            count += 1;
    return count;
}
It's not O(log n), but it is O(n). You can think about it like this: Each run of the outer loop sends the remaining data (originally n) into the inner loop for processing, and then removes one half of it. The inner loop is clearly linear in the data it processes.
At first iteration, the outer loop sends the whole n into the inner loop, which "pays" n steps for processing it.
At the second iteration, there is n / 2 data left, so the inner loop pays n / 2 for it; it has paid 1.5n in total.
At the next iteration, there is n / 2 / 2 == n/4 data left, for which the inner loop pays an extra n/4, so 1.75n in total.
And so on, until the entire n has been paid for twice, so the cost is 2n, which is O(n), and in fact Θ(n).
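Written out, the "payment" is just a geometric series (a worked sum added here for clarity): n + n/2 + n/4 + ... + 1 ≤ n * (1 + 1/2 + 1/4 + ...) = 2n, which is where the factor of two comes from.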
The complexity would be
O(n)
For example, suppose we take n = 32. Then, across the successive iterations of the outer loop, the inner loop runs
32, 16, 8, 4, 2, 1
times. Adding these up gives 63, which is the total number of times the loop ran, and that is 2*n - 1.
Mathematically, for any n this is a G.P. sum, where the series is n, n/2, n/4, n/8, ..., 1.
Suppose we take n = 32 again.
Then
sum = a * (1 - r^nof)/(1 - r) = 32 * (1 - (1/2)^6)/(1 - (1/2)) = 63
where nof (the number of times the outer loop ran) = 6, which is log2(n) + 1, a = 32, and r = 1/2.
For any n, this sum is less than 2*n.
So the time complexity of your code is O(n).
What would be the time complexity of the following block of code,
void function(int n)?
My attempt was that the outermost loop would run n/2 times and the inner two would run 2^q times. Then I equated 2^q with n and got q as (1/2)·log n with base 2. Multiplying the time complexities, I get O(n·log(n)), while the answer is O(n·log²(n)).
void function(int n) {
    int count = 0;
    for (int i = n/2; i <= n; i++)
        for (int j = 1; j <= n; j = 2 * j)
            for (int k = 1; k <= n; k = k * 2)
                count++;
}
Time to apply the golden rule of understanding loop nests:
When in doubt, work inside out!
Let’s start with the original loop nest:
for (int i = n/2; i <= n; i++)
    for (int j = 1; j <= n; j = 2 * j)
        for (int k = 1; k <= n; k = k * 2)
            count++;
That inner loop will run Θ(log n) times, since after m iterations of the loop we see that k = 2^m, and we stop when k ≥ n = 2^(lg n). So let's replace that inner loop with this simpler expression:
for (int i = n/2; i <= n; i++)
    for (int j = 1; j <= n; j = 2 * j)
        do Theta(log n) work;
Now, look at the innermost remaining loop. With exactly the same reasoning as before we see that this loop runs Θ(log n) times as well. Since we do Θ(log n) iterations that each do Θ(log n) work, we see that this loop can be replaced with this simpler one:
for (int i = n/2; i <= n; i++)
    do Theta(log^2 n) work;
And here that outer loop runs Θ(n) times, so the overall runtime is Θ(n log² n).
I think that, based on what you said in your question, you had the right insights but just forgot to multiply in two copies of the log term, one for each of the two inner loops.
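If you want to check this empirically, here is a sketch that counts the iterations of that loop nest and compares them against (n/2 + 1)·(log₂ n + 1)² (the function_count name and the driver are my additions for illustration):

#include <stdio.h>

/* Same loop nest as in the question, instrumented to return the iteration count. */
long function_count(int n) {
    long count = 0;
    for (int i = n / 2; i <= n; i++)
        for (int j = 1; j <= n; j = 2 * j)
            for (int k = 1; k <= n; k = k * 2)
                count++;
    return count;
}

int main(void) {
    for (int n = 16; n <= (1 << 16); n *= 16) {
        /* lg = floor(log2(n)) + 1 = number of iterations of each inner loop */
        long lg = 0;
        for (int v = 1; v <= n; v *= 2)
            lg++;
        printf("n = %6d  count = %10ld  (n/2 + 1)*lg*lg = %10ld\n",
               n, function_count(n), (n / 2 + 1) * lg * lg);
    }
    return 0;
}

For n a power of two, the count is exactly (n/2 + 1)·(log₂ n + 1)², which grows as Θ(n log² n).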
In your code there are 3 nested loops.
The first loop runs n/2 times, which is treated as n when calculating complexity.
The second loop runs log n times.
The third loop runs log n times.
So the overall time complexity is O(n * log n * log n) = O(n log²n).
Now, one may wonder how the run-time complexity of the two inner loops comes out to log n. This can be shown as follows:
Since we are multiplying by 2 in each iteration, we need the value of q such that
n = 2^q.
Taking log base 2 on both sides,
log2(n) = log2(2^q)
log2(n) = q * log2(2)
log2(n) = q * 1 [since log2(2) is 1]
So q is equal to log n.
So, the overall time complexity is O(n log²n).
First loop takes: n/2
Second loop: log(n)
Third loop: log(n)
Notice that because the counter of each inner loop is multiplied by two on every step, it grows exponentially and reaches n after log(n) iterations.
Then, also notice that constants like 1/2 can safely be ignored in this case, which results in O(n * log(n) * log(n)), thus:
O(n log²n)
int j = 0;
for (int i = 0; i < N; i++)
{
    while ( (j < N-1) && (A[i] - A[j] > D) )
        j++;
    if (A[i] - A[j] == D)
        return 1;
}
This code is said to have a time complexity of O(n), but I don't really get it. The inner loop is executed N times, and the outer loop should also run N times? Is it maybe because of the j = 0; outside the loop that makes it only run N times?
But even if the inner loop only ran N times, the if statement check would also be done N times, which should bring the total time complexity to O(n^2)?
The reason this is O(n) is that j is never set back to 0 in the body of the for loop.
Indeed, if we take a look at the body of the for loop, we see:
    while ( (j < N-1) && (A[i] - A[j] > D) )
        j++;
This means that j++ is done at most N-1 times in total, since once j reaches N-1, the first condition fails.
If we take a look at the entire for loop, we see:
int j = 0;
for (int i = 0; i < N; i++) {
    while ( (j < N-1) && (A[i] - A[j] > D) )
        j++;
    if (A[i] - A[j] == D)
        return 1;
}
It is clear that the body of the for loop is repeated n times, since we start with i = 0, increment i on every iteration, and stop when i >= N.
Now, depending on the values in A, we will or will not increment j (possibly multiple times) in the body of the for loop. But regardless of how many times that happens in a single iteration, by the end of the for loop j++ has been done at most n times in total, for the reason mentioned above.
The condition of the while loop is evaluated O(n) times as well (at most 2×n-1 times, to be precise): it is evaluated once each time we enter the body of the for loop, and once after each j++ command. Since both counts are O(n), this is at most O(n+n), thus O(n).
The if condition in the for loop is executed n times: once per iteration of the for loop, so again O(n).
So this indeed means that all "basic instructions" (j++, i = 0, j = 0, j < N-1, etc.) are all done either a constant number of times O(1), or a linear number of times O(n), hence the algorithm is O(n).
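For reference, here is the fragment wrapped into a complete function with a tiny driver. The function name, the sortedness assumption on A, and the driver are my additions; the question does not state them, so treat this as a sketch:

#include <stdio.h>

/* Sketch: assumes A is sorted in non-decreasing order and D >= 0
   (not stated in the question, but needed for the scan to make sense).
   Returns 1 if some pair of elements differs by exactly D, else 0. */
int has_pair_with_diff(const int A[], int N, int D)
{
    int j = 0;
    for (int i = 0; i < N; i++)
    {
        /* j only ever moves forward, so across the whole run
           this while loop does at most N-1 increments in total. */
        while ((j < N - 1) && (A[i] - A[j] > D))
            j++;
        if (A[i] - A[j] == D)
            return 1;
    }
    return 0;
}

int main(void)
{
    int A[] = {1, 3, 6, 10, 15};
    printf("%d\n", has_pair_with_diff(A, 5, 4));  /* prints 1: 10 - 6 == 4 */
    printf("%d\n", has_pair_with_diff(A, 5, 8));  /* prints 0: no such pair */
    return 0;
}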
I am trying to get more clarity on the complexity of the algorithm I wrote below:
left = 1
right = 1
for i = 0; i < array.len; i++:
    j = i + 1
    for j; j < array.len; j++:
        right *= array[j]
    tmp[i] = array[idx]
    left *= array[idx]
    right = 1
return tmp
If we define the array size to be n, then it's O(n) for the outer loop, but the inner loop doesn't really iterate n-1 times every time, only on the first pass when i=0.
So, what would be the complexity?
O(n) for the outer loop and
O(n-j) for the inner loop?
So, maybe O(n(n-j)) ? Which ends up being O(n^2)?
Help please.
Yes, O(n^2) is the time complexity. The first loop runs n times, and the second loop runs at most n times for each iteration of the first loop, so n * n = n^2.
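To make the count precise, here is a small C sketch of the same two-loop shape with a counter (the array contents don't matter for counting; only the loop bounds do, so the body is replaced by an increment):

#include <stdio.h>

int main(void) {
    int n = 8;            /* array length; any value works */
    int iterations = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            iterations++; /* stands in for the inner-loop body */
    /* iterations == n*(n-1)/2, which is still Theta(n^2). */
    printf("iterations = %d, n*(n-1)/2 = %d\n", iterations, n * (n - 1) / 2);
    return 0;
}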