What's the time complexity of this piece of code? - c

I'm studying for an exam and I came across this piece of code, and I need to find its best and worst case.
A(n):
for (int i = 1; i < n; i *= 2) {
    for (int j = 0; j < i; j++) {
        if (i == j) return; // does nothing
        for (int k = n; k > 0; k--)
            if (f(n))
                return g(n);
    }
}
where the worst and best cases of the functions are:
f(n): O(n), Ω(log^7(n))
g(n): O(n^2 * log^6(n)), Ω(n^2 * log^6(n))
Worst case:
The complexity of the first loop is log(n), and the second loop depends on the first, but I would say its complexity is n. The third for loop is n. f(n) is checked in O(n), and in the worst case g(n) will be executed in the last iteration, with complexity O(n^2 * log^6(n)). So I would say the worst case is log(n) * n * n * n + n^2 * log^6(n), i.e. O(n^3 * log(n)).
The other logic would be that, since the second loop depends on the first one, its iteration counts go 1 + 2 + 4 + 8 + 16 + ..., which is a geometric series whose value is 2^log(n), which is n. Everything under the first loop would stay the same, so in this case the Big-O would be O(n^3).
Best case: I found that the best case would be (n^2 * log^6(n)), as it would go straight to the return statement without iterating at all.
Basically, the main question is how a loop that executes log(n) times affects a nested loop that executes n times and depends on it.
Which logic is right for the worst case, and is the best case OK?
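One way to sanity-check the two lines of reasoning is to count how many times f(n) would be evaluated when nothing returns early; here is a minimal counting sketch (f_stub, the counter, and main() are assumptions added just for this check, not part of the exam code). Each evaluation of f still costs up to O(n), and g(n) may run once at the end, so the count is only one ingredient of the total:

#include <stdio.h>

/* Stub for f: always "false", so g(n) is never reached and the loops run
   to completion. This stub and the counter are assumptions made only to
   count iterations; the real f and g are unknown. */
static int f_stub(long n) { (void)n; return 0; }

/* Count how many times the test if (f(n)) would be evaluated by A(n). */
static long long count_f_tests(long n) {
    long long count = 0;
    for (long i = 1; i < n; i *= 2) {
        for (long j = 0; j < i; j++) {
            if (i == j) return count;      /* never true, since j < i */
            for (long k = n; k > 0; k--) {
                count++;                   /* one evaluation of f(n) */
                if (f_stub(n))
                    return count;          /* g(n) would be returned here */
            }
        }
    }
    return count;
}

int main(void) {
    for (long n = 1000; n <= 8000; n *= 2) {
        long long c = count_f_tests(n);
        /* The number of f(n) evaluations per n^2 stays a small constant. */
        printf("n=%6ld  f-tests=%12lld  f-tests/n^2=%.3f\n",
               n, c, (double)c / ((double)n * n));
    }
    return 0;
}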

Related

time complexity of nested loops - always just a multiplication of each of them separately?

When looking at this code, for example:
for (int i = 1; i < n; i *= 2)
    for (int j = 0; j < i; j += 2)
    {
        // some constant time operations
    }
Is it as simple as saying that, because the outer loop is log(n) and the inner loop is n, the combined result is a Big-O of n log n?
Here is the analysis of the example in the question. For simplicity I will treat the inner loop's increment of 2 as 1; in terms of complexity this makes no difference, since the inner loop is linear in i and the constant factor of 2 does not matter.
So we can notice that the outer loop produces values of i which are powers of 2, capped by n, that is:
1, 2, 4, 8, ... , 2^(log2 n)
These are also the numbers of times the "constant time operation" in the inner loop runs for each such i.
So all we have to do is sum up the above series. It is easy to see that this is a geometric series:
2^0 + 2^1 + 2^2 + ... + 2^(log2 n)
and it has a well-known closed form (the finite geometric series formula, from Wikipedia):
a + a*r + a*r^2 + ... + a*r^m = a * (1 - r^(m+1)) / (1 - r)
We have a = 1 and r = 2; the upper index (called n in the Wikipedia formula, which clashes with the n in the code, so call it m here) is m = log2 n.
Now let's substitute and we get that the sum equals
(1 - 2^((log2 n) + 1)) / (1 - 2) = (1 - 2*n) / (1 - 2) = 2n - 1
Which is a linear O(n) complexity.
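As a quick empirical check of that closed form, here is a small counting sketch (the counter and main() are additions for illustration; the inner step is taken as 1, matching the simplification above):

#include <stdio.h>

/* Count how often the inner body runs for the loop pair in the question,
   with the inner increment simplified to 1 as in the analysis above. */
static long long count_body(long n) {
    long long count = 0;
    for (long i = 1; i < n; i *= 2)
        for (long j = 0; j < i; j++)
            count++;            /* the "constant time operation" */
    return count;
}

int main(void) {
    for (long n = 1000; n <= 1000000; n *= 10) {
        long long c = count_body(n);
        /* The ratio stays bounded by a small constant, i.e. the count is O(n). */
        printf("n=%8ld  body runs=%10lld  runs/n=%.3f\n",
               n, c, (double)c / n);
    }
    return 0;
}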
Generally, we take the O time complexity to be the number of times the innermost loop is executed (and here we assume the innermost loop consists of statements of O(1) time complexity).
Consider your example. The first loop executes O(log N) times, and the second innermost loop executes O(N) times. If something O(N) is being executed O(log N) times, then yes, the final time complexity is just them multiplied: O(N log N).
Generally, this holds true with most nested loops: you can assume their big-O time complexity to be the time complexity of each loop, multiplied.
However, there are exceptions to this rule, in particular when a break statement is involved. If the loop can break out early, the time complexity may be different.
Take a look at this example I just came up with:
for(int i = 1; i <= n; ++i) {
int x = i;
while(true) {
x = x/2;
if(x == 0) break;
}
}
Well, the innermost loop looks like O(infinity), so can we say that the total time complexity is O(N) * O(infinity) = O(infinity)? No. In this case we know the innermost loop always breaks after O(log N) iterations, giving a total time complexity of O(N log N).
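A rough way to verify that is to count the halving steps directly (a sketch; the counter, the log2 comparison, and main() are additions for illustration):

#include <stdio.h>
#include <math.h>

/* Count the total number of halving steps performed by the loop above. */
static long long count_halvings(long n) {
    long long count = 0;
    for (long i = 1; i <= n; ++i) {
        long x = i;
        while (1) {
            x = x / 2;
            count++;            /* one pass through the inner loop body */
            if (x == 0) break;
        }
    }
    return count;
}

int main(void) {
    for (long n = 1000; n <= 1000000; n *= 10) {
        long long c = count_halvings(n);
        /* The ratio to n*log2(n) stays near a constant, consistent with O(N log N). */
        printf("n=%8ld  halvings=%12lld  halvings/(n*log2 n)=%.3f\n",
               n, c, (double)c / ((double)n * log2((double)n)));
    }
    return 0;
}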

Writing down the worst-case running time O(n) of this given code?

I have been given the following code:
void sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++)
            if (a[j] > a[j+1])
                swap(a + j, a + j + 1);
}
I have to calculate the worst-case running time O(n) of this code.
n is 20, so I was thinking, is O(n) = 20 - 1, or is it O(n)= n-1?
Any time you see a doubly nested for loop, your first reaction should be: likely O(N²).
But let's prove it.
Start with the outer loop. It will iterate n-1 times. As n gets really large, the -1 part is negligible. So the outer loop iterates pretty close to n iterations.
When i==0, the inner loop will iterate n-1 times. But again, there's not much difference between n and n-1 in terms of scale. So let's just say it iterates n times.
When i==n-2, the inner loop will iterate exactly once.
Hence, the inner loop iterates an average of n/2 times.
Hence, if(a[j] > a[j+1]) is evaluated approximately n * n/2 times. Or n²/2 times. And for Big-O notation, we only care about the largest polynomial and none of the factors in front of it. Hence, O(N²) is the running time. Best, worst, and average.
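That n²/2 estimate is easy to confirm by counting the comparisons (a minimal sketch; the counter, the simple swap helper, and main() are additions for illustration):

#include <stdio.h>
#include <stdlib.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Same structure as sort() above, but counts how many times the
   comparison a[j] > a[j+1] is evaluated. */
static long long sort_counting(int a[], int n) {
    long long comparisons = 0;
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - i - 1; j++) {
            comparisons++;
            if (a[j] > a[j + 1])
                swap(a + j, a + j + 1);
        }
    return comparisons;
}

int main(void) {
    for (int n = 100; n <= 1600; n *= 2) {
        int *a = malloc((size_t)n * sizeof *a);
        for (int i = 0; i < n; i++)
            a[i] = rand();                 /* contents don't affect the count */
        long long c = sort_counting(a, n);
        /* The ratio to n*n/2 approaches 1, i.e. roughly n^2/2 comparisons. */
        printf("n=%5d  comparisons=%10lld  comparisons/(n*n/2)=%.3f\n",
               n, c, (double)c / ((double)n * n / 2.0));
        free(a);
    }
    return 0;
}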

Time Complexity when inner loop starts with j=2

I get O(n^2 log n) as the answer for the following code, yet I am unable to understand why.
int unknown(int n) {
    int i, j, k = 0;
    for (i = n / 2; i <= n; i++)
        for (j = 2; j <= n; j = j * 2)
            k = k + n / 2;
    return k;
}
A fixed constant starting point will make no difference to the inner loop in terms of complexity.
Starting at two instead of one will mean one less iteration but the ratio is still a logarithmic one.
Think in terms of what happens when you double n. This adds one more iteration to that loop regardless of whether you start at one or two. Hence it's O(log N) complexity.
However, you should keep in mind that the outer loop is an O(N) one, since the number of iterations is proportional to N. That makes the function as a whole O(N log N), not the O(N^2 log N) you posit.
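To see that O(N log N) growth concretely, here is a counting sketch (the counter and main() are additions; it mirrors the loops of unknown()):

#include <stdio.h>
#include <math.h>

/* Same loops as unknown(), but returns how many times the body executes. */
static long long count_unknown(long n) {
    long long count = 0;
    for (long i = n / 2; i <= n; i++)
        for (long j = 2; j <= n; j = j * 2)
            count++;            /* one execution of k = k + n / 2 */
    return count;
}

int main(void) {
    for (long n = 1000; n <= 1000000; n *= 10) {
        long long c = count_unknown(n);
        /* The ratio settles near 1/2, consistent with about (n/2)*log2(n) runs. */
        printf("n=%8ld  body runs=%12lld  runs/(n*log2 n)=%.3f\n",
               n, c, (double)c / ((double)n * log2((double)n)));
    }
    return 0;
}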

Big O of duplicate check function

I would like to know exactly how to compute the big O of the second while when the number of repetitions keeps going down over time.
int duplicate_check(int a[], int n)
{
    int i = n;
    while (i > 0)
    {
        i--;
        int j = i - 1;
        while (j >= 0)
        {
            if (a[i] == a[j])
            {
                return 1;
            }
            j--;
        }
    }
    return 0;
}
It's still O(n^2), regardless of the shrinking inner loop.
The value you are computing is, in the worst case, the sum of i for i = 0 to n-1.
This equates to (n^2 - n) / 2, which, since O() ignores constants and lower-order terms, is O(n^2).
Note that you can solve this problem more efficiently by sorting the array, O(n log n), and then scanning for two consecutive equal numbers, O(n), for a total of O(n log n).
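A sketch of that sort-then-scan alternative, using the standard library's qsort (the function and comparator names here are illustrative, not from the original post):

#include <stdlib.h>

/* Comparator for qsort over ints. */
static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* Returns 1 if the array contains a duplicate, 0 otherwise.
   Sorting costs O(n log n); the scan for equal neighbours costs O(n). */
int duplicate_check_sorted(int a[], int n) {
    qsort(a, (size_t)n, sizeof a[0], cmp_int);
    for (int i = 1; i < n; i++)
        if (a[i] == a[i - 1])
            return 1;
    return 0;
}

Note that, unlike the original duplicate_check, this version reorders the caller's array; copy it first if that matters.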
Big O is an estimate of theoretical speed, not an exact calculation.
Like twain249 said, regardless, the time complexity is O(n^2).
Big O describes the worst-case time complexity of an algorithm, i.e. the maximum time the algorithm can ever take. It is an upper bound, which means that whatever the input is, the running time will always stay under that bound.
In your case the worst case is when i iterates all the way down to 0. Then the work looks like this:
for i = n the inner loop runs n-1 times, for i = n-1 it runs n-2 times, and so on.
Adding it all up: (n-1) + (n-2) + (n-3) + ... + (n-n) = (n-1)*n/2 = n^2/2 - n/2
After ignoring the lower-order term n and the constant 1/2, it becomes n^2.
So it is O(n^2); that's how it is computed.

What is the time complexity of n * n * n iterations inside a for loop?

What is the time complexity of these loops? Correct me if I am wrong.
This loop is O(n^3) because it's got (n^3)/2 + 1 iterations.
for (int i = 0; i < n * n * n; i+=2)
{
//body
}
and
This loop is O(n^3 * m^2) since it has (n^3 + 1) * (m^2 + 1) iterations. Or would this just be O(n^3) since the inner loop is not a variable n?
for (int i = 0; i < n * n * n; i += 2)
{
    for (int j = 0; j < m * m; j++)
    {
        //Body
    }
}
In the first case the time complexity is O(n^3). It captures the most significant term, so you ignore the scaling factor of 1/2 and the constant +1. In the latter case it is O(n^3 * m^2), unless you treat m as a constant and not as a variable. In Big-O notation you don't necessarily need to have only a single variable to represent the size of the input data.
This loop is O(n^3) because it's got (n^3)/2 + 1 iterations.
Correct.
This loop is O(n^3 * m^2) since it has (n^3 + 1) * (m^2 + 1) iterations. Or would this just be O(n^3) since the inner loop is not a variable n?
Both are correct. It depends if you consider m a variable or a constant.
In asymptotic notations one can have more than one variables.
For the second case considering both n and m as variables the complexity will be O(n^3 * m^2). If m is treated as a constant then the complexity is O(n^3).
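A small counting check with both n and m treated as variables (a sketch; the counter and the chosen test values are additions for illustration):

#include <stdio.h>

/* Count how often the inner body runs for given n and m. */
static long long count_body(long n, long m) {
    const long long i_limit = (long long)n * n * n;
    const long long j_limit = (long long)m * m;
    long long count = 0;
    for (long long i = 0; i < i_limit; i += 2)
        for (long long j = 0; j < j_limit; j++)
            count++;
    return count;
}

int main(void) {
    const long ns[] = {20, 40};
    const long ms[] = {10, 20};
    for (int a = 0; a < 2; a++)
        for (int b = 0; b < 2; b++) {
            long n = ns[a], m = ms[b];
            long long c = count_body(n, m);
            /* The ratio to (n^3 / 2) * m^2 stays close to 1. */
            double expected = ((double)n * n * n / 2.0) * ((double)m * m);
            printf("n=%4ld m=%3ld  body runs=%14lld  ratio=%.4f\n",
                   n, m, c, (double)c / expected);
        }
    return 0;
}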
