I need to find the time and space complexity of f3.
I think that g has a space complexity of O(log n), but I am not sure how to find the time and space complexity of f3, because the call to g is inside the for statement. Does that mean g is called on every iteration to check whether g(i) < n?
int g(int n)
{
    if (n <= 1)
        return 1;
    return g(n / 2) + 1;
}

int f3(int n)
{
    int counter = 0;
    for (int i = 0; g(i) < n; ++i)
        ++counter;
    return counter;
}
In short:
Time complexity of f3(n) = O(n * 2^n)
Space complexity of f3(n) = O(n)
Space and Time complexity of g(n) = O(log(n))
Note: all logarithms here are base 2, and all notations are in Big O notation.
Details:
The function "g()" returns floor(log(n))+1. The loop in function "f3()" cotinues until the function 'g()' returns n. For g(i) to return 'n', i needs to be '2^(n-1)'.
In order to terminate the loop in "f3()", 'i' needs to reach 2^(n-1). So 'g()' is called 2^(n-1) times.
So Time complexity of "f3()" = 2^(n-1) x (time complexity of g()) = 2^n x (log n) = O(2^n)
Largest memory used will be during the last call of 'g(i)' where i==2^(n-1).
Therefore space complexity of "f3()" = log(2^(n-1)) = O(n)
Since g(n) runs in \Theta(log(n)) and adds 1 at each recursion level, the value of g(i) is floor(log(i)) + 1. On the other hand, the loop inside f3 runs while g(i) < n, so it iterates until i reaches about 2^(n-1), i.e., \Theta(2^n) times. Hence, the time complexity of f3(n) = sum_{i=1}^{2^n} log(i) = log((2^n)!) = \Theta(n * 2^n) (as log((2^n)!) ~ (2^n) log(2^n)).
About space complexity, the maximum recursion depth of g(i) occurs at the largest i, around 2^n, where the depth is about n recursive calls. Hence, the space complexity of f3 will be \Theta(n).
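If you want to sanity-check the \Theta(n * 2^n) estimate empirically, here is a minimal instrumentation sketch of my own (the counter g_calls and the main driver are not part of the original question) that counts every entry into g, recursive calls included:

#include <stdio.h>

static long g_calls = 0;   /* counts every invocation of g, including recursive ones */

int g(int n) {
    ++g_calls;
    if (n <= 1)
        return 1;
    return g(n / 2) + 1;
}

int f3(int n) {
    int counter = 0;
    for (int i = 0; g(i) < n; ++i)
        ++counter;
    return counter;
}

int main(void) {
    for (int n = 2; n <= 16; ++n) {
        g_calls = 0;
        int c = f3(n);
        printf("n=%2d counter=%6d total calls into g=%8ld\n", n, c, g_calls);
    }
    return 0;
}

For n = 16 the loop body runs 2^15 = 32768 times, while g is entered roughly 15 * 2^15 ≈ 4.9 * 10^5 times, which matches n * 2^n growth rather than plain 2^n.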
When looking at this code, for example:
for (int i = 1; i < n; i *= 2)
    for (int j = 0; j < i; j += 2)
    {
        // some constant time operations
    }
Is it as simple as saying that because the outer loop is logarithmic and the inner loop is linear, the combined result is O(n log n)?
Here is the analysis of the example in the question. For simplicity I will neglect the increment of 2 in the inner loop and treat it as 1; in terms of complexity it does not matter, because the inner loop is linear in i and the constant factor of 2 is irrelevant.
We can notice that the outer loop produces values of i which are powers of 2 capped by n, that is:
1, 2, 4, 8, ... , 2^(log2 n)
and for each such i, the "constant time operation" in the inner loop runs exactly i times.
So all we have to do is sum up the above series. It is easy to see that this is a geometric series:
2^0 + 2^1 + 2^2 + ... + 2^(log2 n)
and it has a well-known closed form: for a geometric series with first term a, ratio r and highest exponent m,
a + a*r + a*r^2 + ... + a*r^m = a * (1 - r^(m+1)) / (1 - r).
Here a = 1, r = 2 and m = log2 n (note that the n from the question plays a different role than the index usually used in that formula, so the name clash is a bit of a problem).
Substituting, we get that the sum equals
(1 - 2^((log2 n) + 1)) / (1 - 2) = (1 - 2n) / (1 - 2) = 2n - 1.
Which is a linear O(n) complexity.
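If you want to check this empirically, here is a small counting sketch of my own (count_ops and the sample sizes are assumptions, not from the question), using a step of 1 in the inner loop as in the analysis above:

#include <stdio.h>

/* Count how many times the innermost body runs, using a step of 1
   as in the analysis above (the step of 2 only changes a constant factor). */
long count_ops(long n) {
    long ops = 0;
    for (long i = 1; i < n; i *= 2)
        for (long j = 0; j < i; j += 1)
            ++ops;
    return ops;
}

int main(void) {
    long sizes[] = {1000, 1000000, 100000000};
    for (int s = 0; s < 3; ++s) {
        long n = sizes[s];
        long ops = count_ops(n);
        printf("n=%ld ops=%ld ops/n=%.2f\n", n, ops, (double)ops / n);
    }
    return 0;
}

The ratio ops/n always stays between 1 and 2, i.e., the total work is \Theta(n), not n log n.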
Generally, we take the O time complexity to be the number of times the innermost loop is executed (and here we assume the innermost loop consists of statements of O(1) time complexity).
Consider your example. The first loop executes O(log N) times, and the second innermost loop executes O(N) times. If something O(N) is being executed O(log N) times, then yes, the final time complexity is just them multiplied: O(N log N).
Generally, this holds true with most nested loops: you can assume their big-O time complexity to be the time complexity of each loop, multiplied.
However, there are exceptions to this rule, in particular when a break statement is involved. If the loop has the possibility of breaking out early, the time complexity can be different.
Take a look at this example I just came up with:
for (int i = 1; i <= n; ++i) {
    int x = i;
    while (true) {
        x = x / 2;
        if (x == 0) break;
    }
}
Well, the innermost loop is O(infinity), so can we say that the total time complexity is O(N) * O(infinity) = O(infinity)? No. In this case we know the innermost loop always breaks after O(log N) iterations, giving a total O(N log N) time complexity.
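A slightly tighter worked bound for that example (my own addition): on iteration i the inner while loop halves x about log2(i) times before x reaches 0, so the total work is
sum_{i=1}^{n} log(i) = log(n!) = \Theta(n log n),
which matches the O(N log N) conclusion.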
I want to solve this question but I am not sure whether I am right or not. I found O(n^2 - n) = O(n^2).
double fact(long i)
{
    if (i == 1 || i == 0) return i;
    else return i * fact(i - 1);
}

funcQ2()
{
    for (i = 1; i <= n; i++)
        sum = sum + log(fact(i));
}
Your fact function is recursive, so you should start by writing the corresponding recurrence relation for the time complexity T(i):
T(0) = 1 // base case i==0
T(1) = 1 // base case i==1
T(i) = T(i-1) + 1 // because fact calls itself on i-1 and does one multiplication afterwards
It's easy to see that the solution to this recurrence relation is T(i) = i for all i > 0, so T(i) ∈ O(i).
Your second function funcQ2 has no inputs, and assuming that n is a constant, its complexity is trivially O(1). If, on the other hand, you assume n to be an input parameter and want to measure time complexity with respect to n, it would be O(n^2) since you are calling fact(i) within the loop (standard arithmetic series).
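To spell out the arithmetic series (assuming log() and the addition in the loop body are O(1)): each call fact(i) costs \Theta(i), so the loop's total cost is
sum_{i=1}^{n} \Theta(i) = \Theta(n(n+1)/2) = \Theta(n^2).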
Is O(log(log(n))) actually just O(log(n)) when it comes to time complexity?
Do you agree that this function g() has a time complexity of O(log(log(n)))?
int f(int n) {
    if (n <= 1)
        return 0;
    return f(n/2) + 1;
}

int g(int n) {
    int m = f(f(n));
    int i;
    int x = 0;
    for (i = 0; i < m; i++) {
        x += i * i;
    }
    return m;
}
The function f(n) computes the base-2 logarithm of n (rounded down) by repeatedly dividing by 2. It iterates log2(n) times.
Calling it on its own result will indeed return log2(log2(n)), taking an additional log2(log2(n)) iterations.
So far the complexity is O(log(N)) + O(log(log(N))). The first term dominates the second, so the overall complexity is O(log(N)).
The final loop iterates log2(log2(n)) times, so the time complexity of this last phase is O(log(log(N))), negligible compared to the initial phase.
Note that since x is not used before the end of function g, computing it is not needed and the compiler may well optimize this loop to nothing.
Overall time complexity comes out as O(log(N)), which is not the same as O(log(log(N))).
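A quick numeric check of the two phases (my own example, not from the answer): take n = 2^16. Then f(n) = floor(log2(2^16)) = 16, computed in 16 recursive calls; f(f(n)) = f(16) = 4, computed in 4 more calls; and the final loop runs only m = 4 times. The first phase clearly dominates.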
Looks like it is log(n) + log(log n) + log(log n).
In order: the first recursion of f(), plus the second recursion of f(), and the for loop, so the final complexity is O(log n), because lower order terms are ignored.
int f(int n) {
    if (n <= 1)
        return 0;
    return f(n/2) + 1;
}
has a time complexity of order O(log2(n)), where 2 is the base of the logarithm.
int g(int n) {
    int m = f(f(n));  // O(log2(n)) for the inner call + O(log2(log2(n))) for the outer call
    int i, x = 0;
    for (i = 0; i < m; i++) {
        x += i * i;
    }
    // This for loop runs m = O(log2(log2(n))) times
    return m;
}
Hence the overall time complexity of the given function is:
T(n) = t1 + t2 + t3, where t1 = O(log2(n)) for the inner call f(n), t2 = O(log2(log2(n))) for the outer call of f, and t3 = O(log2(log2(n))) for the loop.
Here O(log2(n)) dominates O(log2(log2(n))).
Hence the time complexity of the given function is O(log2(n)).
Please read What is a plain English explanation of "Big O" notation? once.
The time consumed by an O(log n) algorithm grows only linearly with the number of digits of n, so it scales very well.
Say you want to compute F(100000000), the 10^8th Fibonacci number. An O(log n) algorithm will only take about 4x the time consumed by computing F(100).
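A quick check of that factor of 4 (assuming the running time is proportional to log n):
time(F(10^8)) / time(F(100)) ≈ log(10^8) / log(100) = 8 / 2 = 4.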
O(log log n) terms can show up in a variety of different places, but there are typically two main routes that arrive at this runtime.
Consider the following C function:
int fun1 (int n)
{
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i)
    {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
The question is to decide which of the following most closely approximates the return value of the function fun1?
(A) n^3
(B) n (logn)^2
(C) nlogn
(D) nlog(logn)
This was the explanation that was given:
int fun1 (int n)
{
    int i, j, k, p, q = 0;
    // This loop runs T(n) time
    for (i = 1; i < n; ++i)
    {
        p = 0;
        // This loop runs T(Log Log n) time
        for (j = n; j > 1; j = j / 2)
            ++p;
        // This loop runs T(Log Log n) time
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}
But the time complexity of a loop is considered O(log n) if the loop variable is divided/multiplied by a constant amount:
for (int i = 1; i <= n; i *= c) {
    // some O(1) expressions
}
for (int i = n; i > 0; i /= c) {
    // some O(1) expressions
}
But it was mentioned that the inner loops take Θ(log log n) time each. Can anyone explain the reason, or is the answer wrong?
This question is tricky - there is a difference between what the runtime of the code is and what the return value is.
The first loop's runtime is indeed O(log n), not O(log log n). I've reprinted it here:
p = 0;
for (j = n; j > 1; j = j / 2)
    ++p;
On each iteration, the value of j drops by a factor of two. This means that the number of steps required for this loop to terminate is given by the minimum value of k such that n / 2^k ≤ 1. Solving, we see that k = O(log2 n).
Notice that each iteration of this loop increases the value of p by one. This means that at the end of the loop, the value of p is Θ(log n). Consequently, this next loop does indeed run in time O(log log n):
for (k = 1; k < p; k = k * 2)
    ++q;
The reason for this is that, using similar reasoning to the previous section, the runtime of this loop is Θ(log p), and since p = Θ(log n), this ends up being Θ(log log n).
However, the question is not asking what the runtime is. It's asking what the return value is. On each iteration, the value of q, which is what's ultimately returned, increases by Θ(log log n) because it's increased once per iteration of a loop that runs in time Θ(log log n). This means that the net value of q is Θ(n log log n). Therefore, although the algorithm runs in time O(n log n), it returns a value that's O(n log log n).
Hope this helps!
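If you want to see the distinction concretely, here is a small sketch of my own (the driver in main and the chosen sizes are assumptions; fun1 is reused exactly as given in the question) that prints both the return value and n * log2(log2 n):

#include <stdio.h>
#include <math.h>

int fun1 (int n)
{
    int i, j, k, p, q = 0;
    for (i = 1; i < n; ++i)
    {
        p = 0;
        for (j = n; j > 1; j = j / 2)
            ++p;
        for (k = 1; k < p; k = k * 2)
            ++q;
    }
    return q;
}

int main(void)
{
    int sizes[] = {1000, 100000, 10000000};
    for (int s = 0; s < 3; ++s) {
        int n = sizes[s];
        /* compare the return value with n * log2(log2 n) */
        printf("n=%d fun1(n)=%d n*log2(log2 n)=%.0f\n",
               n, fun1(n), n * log2(log2((double)n)));
    }
    return 0;
}

The two columns track each other up to a small constant factor, while the running time itself grows like n log n because of the j loop.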
The only thing wrong I see here concerns the second loop:
for (j=n; j>1; j=j/2)
You say in comments : This loop runs Θ(Log Log n) time
As I see it, this loop runs O(Log n) times
The running times for the first and third loops are correct (O(n) and O(Log Log n)).
EDIT: I agree with the previous answer. I did not notice that the question is about the return value, not the running time!
The answer would be (D), O(n * log(log n)). The reason is described below:
The first for loop encloses the other two for loops, which depend on the values of j and k respectively. j is repeatedly halved starting from n until it is no greater than 1, so p ends up equal to [log n], where [x] denotes the greatest integer not exceeding x. Then k doubles until it reaches p, where p was set by the previous loop.
So the third loop runs log(log n) times, and q therefore increases by log(log n) on each pass of the outer for loop, which runs n times.
Approximate value of q = n * log(log n) = O(n log(log n)).
Consider the following C-function:
double foo (int n) {
    int i;
    double sum;
    if (n == 0) return 1.0;
    else {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
The space complexity of the above function is
1) O(1)
2) O(n)
3) O(n!)
4) O(n^n)
In the above question, according to me, the answer should be (2), but the given answer is option (3). Although it is a recursive function, the stack will never hold more than O(n) frames. Can anyone explain why the answer is (3) and where my reasoning goes wrong?
If you wanted the time complexity, then it is certainly not O(N!) as many suggest, but way less than that: it is O(2^N).
Proof:
T(N) = T(N-1) + T(N-2) + T(N-3) + ... + T(1)
Moreover, by the same formula,
T(N-1) = T(N-2) + T(N-3) + ... + T(1)
hence T(N) = T(N-1) + T(N-1) = 2*T(N-1)
Solving the above gives T(N) = O(2^N).
Whereas if you wanted the space complexity: for a recursive function, space complexity is determined by the maximum amount of stack space occupied at any one moment, and in this case that can never exceed O(N).
In any case the answer is not O(N!): not even that many computations are performed in total, so the stack certainly cannot occupy that much space.
Note: try running the function for n = 20. If the given answer were right, the work would be on the order of 20!, which is beyond any machine, but it actually finishes in roughly 2^20 steps without any stack overflow.
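Here is a minimal instrumentation sketch of my own (the calls, depth and max_depth counters are additions, not part of the question) that measures exactly the two quantities being discussed, the total number of calls and the maximum recursion depth:

#include <stdio.h>

static long calls = 0;      /* total number of calls to foo   */
static int  depth = 0;      /* current recursion depth        */
static int  max_depth = 0;  /* deepest stack observed so far  */

double foo(int n) {
    int i;
    double sum;
    ++calls;
    if (++depth > max_depth) max_depth = depth;
    if (n == 0) { --depth; return 1.0; }
    sum = 0.0;
    for (i = 0; i < n; i++)
        sum += foo(i);
    --depth;
    return sum;
}

int main(void) {
    for (int n = 1; n <= 20; ++n) {
        calls = 0;
        max_depth = 0;
        foo(n);
        printf("n=%2d calls=%8ld max depth=%2d\n", n, calls, max_depth);
    }
    return 0;
}

calls doubles with every increment of n (about 2^n in total), while max depth grows only linearly (n + 1), so the time is O(2^n) and the space is O(n).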
Space complexity is O(N): at any given time the space used is limited to
N * (size of one function call frame, which is a constant).
Think of it like this:
To calculate foo(n), the program has to calculate foo(0) + foo(1) + foo(2) + ... + foo(n-1).
Similarly, for foo(n-1) the program has to recursively calculate foo(0) + foo(1) + ... + foo(n-2).
So the number of calls roughly doubles with each increment of n, which is O(2^n) work in time; but those calls run one after another, each returning before the next begins, so the stack never holds more than about n frames at once.
Hope this is clear.