Space complexity of a given recursive program - c

Consider the following C-function:
double foo(int n) {
    int i;
    double sum;
    if (n == 0) return 1.0;
    else {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
The space complexity of the above function is
1) O(1)
2) O(n)
3) O(n!)
4) O(n^n)
In the above question, according to me the answer should be (2), but it is given as option (3). Although it is a recursive function, the stack will never hold more than O(n) frames at once. Can anyone explain why the answer is (3) and where my thinking is wrong?

If you are asking about time complexity, it is certainly not O(N!) as many suggest, but much less than that: it is O(2^N).
Proof:
T(N) = T(N-1) + T(N-2) + T(N-3) + ... + T(1)
Moreover, by the same formula,
T(N-1) = T(N-2) + T(N-3) + ... + T(1)
hence T(N) = T(N-1) + (T(N-2) + ... + T(1)) = T(N-1) + T(N-1) = 2*T(N-1).
Solving this recurrence gives T(N) = O(2^N).
As for space complexity: for a recursive function it is determined by the maximum amount of stack space occupied at any one moment, and here that can never exceed O(N), because the deepest chain of nested calls is foo(n) -> foo(n-1) -> ... -> foo(0).
In any case the answer is not O(N!), because that many computations are never performed, so the stack cannot possibly occupy that much space.
Note: try running the function for n = 20. If the answer given in the text were right, it would need on the order of 20! units of work, which is larger than any memory, but in fact it runs in about O(2^20) time without any stack overflow.
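To check this empirically, here is a rough instrumented sketch of the same function (the extra depth parameter and the counters are assumptions added purely for measurement, they are not part of the original question). It counts the total number of calls and the deepest nesting reached, which should grow roughly like 2^n and n respectively, matching the O(2^N) time / O(N) stack arguments above.

#include <stdio.h>

static long long calls = 0;   /* total number of invocations of foo */
static int max_depth = 0;     /* deepest nesting level observed so far */

/* Same recursion as in the question, with a depth parameter added
 * purely for measurement (an assumption for this sketch). */
double foo(int n, int depth) {
    calls++;
    if (depth > max_depth) max_depth = depth;
    if (n == 0) return 1.0;
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += foo(i, depth + 1);
    return sum;
}

int main(void) {
    foo(20, 1);
    /* expect roughly 2^20 calls but only about 21 nested frames */
    printf("calls = %lld, max depth = %d\n", calls, max_depth);
    return 0;
}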

Space complexity is O(N). At any given time the space used is limited to:
N * (size of one function-call frame), and the frame size is a constant.

Think of it like this:
To calculate foo(n), the program has to calculate foo(0) + foo(1) + foo(2) + ... + foo(n-1).
Similarly, for foo(n-1) the program has to recursively calculate foo(0) + foo(1) + ... + foo(n-2).
Basically you get O(foo(n)) = n! + (n-1)! + (n-2)! + ... + 1! = O(n!).
Hope this is clear.

Related

What's the time complexity of this piece of code?

I'm studying for an exam and I came across this piece of code, and I need to find its best and worst case.
A(n):
for (int i = 1; i < n; i *= 2) {
    for (int j = 0; j < i; j++) {
        if (i == j) return; // does nothing, since j < i always holds here
        for (int k = n; k > 0; k--)
            if (f(n))
                return g(n);
    }
}
where the worst and best cases of the functions are:
f(n): O(n), Ω(log^7(n))
g(n): O(n^2 * log^6(n)), Ω(n^2 * log^6(n))
Worst case:
The complexity of the first loop is log(n), and the second loop depends on the first, but I would say its complexity is n. The third for loop is n. f(n) is checked in O(n), and in the worst case g(n) will be executed in the last iteration, and its complexity is O(n^2 * log^6(n)). So I would say the worst case is log(n) * n * n * n + n^2 * log^6(n), which is O(n^3 * log(n)).
The other line of reasoning is that, since the second loop depends on the first one, the iteration counts go 1 + 2 + 4 + 8 + 16 ..., which is a geometric series whose value is about 2^log(n), i.e. n. Everything inside the first loop stays the same, so in that case Big-O would be O(n^3).
Best case: I found that the best case would be O(n^2 * log^6(n)), as it would go straight to the return statement without iterating at all.
Basically, the main question is how a loop executed log(n) times affects a nested loop that runs up to n times and depends on it.
Which logic is right for the worst case, and is the best case OK?
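For the geometric-series argument about the two outer loops, a rough counting sketch can help (it ignores f, g, the early return and the innermost k loop; count_outer_iterations is just an illustrative name). It should show the total number of inner-loop starts growing linearly in n, i.e. about 2n, rather than n * log(n).

#include <stdio.h>

/* Count how many times the body of the j loop starts, ignoring f, g and
 * the innermost k loop: 1 + 2 + 4 + ... up to n is a geometric series,
 * so the total should grow linearly (about 2n). */
long long count_outer_iterations(int n) {
    long long count = 0;
    for (int i = 1; i < n; i *= 2)
        for (int j = 0; j < i; j++)
            count++;
    return count;
}

int main(void) {
    for (int n = 16; n <= 1 << 20; n *= 16)
        printf("n = %8d  inner-loop starts = %lld\n",
               n, count_outer_iterations(n));
    return 0;
}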

find time and space complexity

I need to find the time and space complexity of f3.
I think that g has a space complexity of log(n), and likewise for its time complexity, but I am not really sure how to find the time and space complexity of f3, because the call to g is inside the for statement. Does that mean g is called every time the condition g(i) < n is checked?
int g(int n)
{
    if (n <= 1)
        return 1;
    return g(n / 2) + 1;
}

int f3(int n)
{
    int counter = 0;
    for (int i = 0; g(i) < n; ++i)
        ++counter;
    return counter;
}
In short:
Time complexity of f3(n) = O(2^n)
Space complexity of f3(n) = O(n)
Space and Time complexity of g(n) = O(log(n))
Note: all logs here are base 2, and all bounds are in Big O notation.
Details:
The function "g()" returns floor(log(n))+1. The loop in function "f3()" cotinues until the function 'g()' returns n. For g(i) to return 'n', i needs to be '2^(n-1)'.
In order to terminate the loop in "f3()", 'i' needs to reach 2^(n-1). So 'g()' is called 2^(n-1) times.
So Time complexity of "f3()" = 2^(n-1) x (time complexity of g()) = 2^n x (log n) = O(2^n)
Largest memory used will be during the last call of 'g(i)' where i==2^(n-1).
Therefore space complexity of "f3()" = log(2^(n-1)) = O(n)
As g(n) runs in Theta(log(n)) and adds 1 at each level, the final value of g(i) is floor(log(i)) + 1. On the other hand, the loop inside f3 runs while g(i) < n, which means it iterates up to about 2^n times. Hence the time complexity of f3(n) is sum_{i=1}^{2^n} log(i) = log((2^n)!) = Theta(n * 2^n) (since log((2^n)!) ~ 2^n * log(2^n)).
About space complexity: the maximum recursion depth of g(i) is reached when i = 2^n, and it is n (n recursive calls). Hence the space complexity of f3 is Theta(n).
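A rough instrumented sketch of the two functions can confirm this picture (the depth parameter and the global counters are assumptions added only for measurement, and f3 is restructured slightly so the calls to g can be counted). It should show f3(n) returning about 2^(n-1), g being called about that many times, and the recursion depth inside g staying near n.

#include <stdio.h>

static long long g_calls = 0;  /* how many times f3 evaluates g(i) */
static int g_max_depth = 0;    /* deepest recursion reached inside g */

/* g from the question, with a depth parameter added only for measurement. */
int g(int n, int depth) {
    if (depth > g_max_depth) g_max_depth = depth;
    if (n <= 1)
        return 1;
    return g(n / 2, depth + 1) + 1;
}

/* f3 from the question, restructured only so each call to g is counted. */
int f3(int n) {
    int counter = 0;
    for (int i = 0; ; ++i) {
        g_calls++;
        if (g(i, 1) >= n)
            break;
        ++counter;
    }
    return counter;
}

int main(void) {
    for (int n = 2; n <= 16; ++n) {
        g_calls = 0;
        g_max_depth = 0;
        int result = f3(n);
        /* expect f3(n) close to 2^(n-1) and the depth close to n */
        printf("n = %2d  f3 = %6d  calls to g = %6lld  max depth = %2d\n",
               n, result, g_calls, g_max_depth);
    }
    return 0;
}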

Big Oh Runtime of a Recursive Sum

If I use a for loop to find the sum of the numbers from 0 to n, my runtime is O(n). But if I create a recursive function such as:
int sum(int n) {
    if (n == 0)
        return 0;
    return n + sum(n - 1);
}
Would my runtime still be O(n)?
Yes, your runtime will still be O(N). Your recursive function will "loop" N times until it hits the base case.
However, keep in mind that your space complexity is also O(N): your language has to save n + ... before evaluating sum(n - 1), creating a stack of recursive calls that is N deep.
Primusa's answer addresses your recursive runtime question. While my answer won't address your runtime question, it should be noted that you don't need an algorithm for this at all: the closed formula for the sum is n * (n + 1) / 2.
thanks Carl Gauss!
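For completeness, here is a small self-contained sketch comparing the recursive sum from the question with Gauss's closed formula (sum_closed is just an illustrative name). The recursive version takes O(n) time and O(n) stack, while the closed formula is O(1) in both.

#include <stdio.h>

/* Recursive sum from the question: O(n) time and O(n) stack. */
int sum(int n) {
    if (n == 0)
        return 0;
    return n + sum(n - 1);
}

/* Gauss's closed formula: O(1) time and O(1) space. */
int sum_closed(int n) {
    return n * (n + 1) / 2;
}

int main(void) {
    for (int n = 0; n <= 10; ++n)
        printf("n = %2d  recursive = %3d  closed form = %3d\n",
               n, sum(n), sum_closed(n));
    return 0;
}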

What is the time complexity (Big O) of this specific function?

What is the time complexity of this function (f1)?
As far as I can see, the first pass (i=0) runs the inner loop n/4 times, the second one (i=3) runs it (n-3)/4 times, and so on, so the result is a sum of about n/3 terms: n/4 + (n-3)/4 + (n-6)/4 + (n-9)/4 + ...
And I am stuck here; how do I continue?
int f1(int n){
    int s = 0;
    for (int i = 0; i < n; i += 3)
        for (int j = n; j > i; j -= 4)
            s += j + i;
    return s;
}
The important thing about Big-O notation is that it eliminates 'constants'. The objective is to determine the trend as the input size grows, without concern for specific numbers.
Think of it as determining the curve on a graph where you don't know the number ranges of the x and y axes.
So in your code, even though you skip most of the values in the range of n for each iteration of each loop, this is done at a constant rate. So regardless of how many you actually skip, this still scales relative to n^2.
It wouldn't matter if you calculated any of the following:
1/4 * n^2
0.0000001 * n^2
(1/4 * n)^2
(0.0000001 * n)^2
1000000 + n^2
n^2 + 10000000 * n
In Big O, these are all equivalent to O(n^2). The point being that once n gets big enough (whatever that may be), all the lower order terms and constant factors become irrelevant in the 'big picture'.
(It's worth emphasising that this is why on small inputs you should be wary of relying too heavily on Big O. That's when constant overheads can still have a big impact.)
Key observation: the inner loop executes (n-i)/4 times in step i, hence i/4 times in step n-i.
Now sum all these quantities for i = 3k, 3(k-1), 3(k-2), ..., 9, 6, 3, 0, where 3k is the largest multiple of 3 below n (i.e. 3k <= n < 3(k+1)):
3k/4 + 3(k-1)/4 + ... + 6/4 + 3/4 + 0/4 = (3/4) * (k + (k-1) + ... + 2 + 1)
                                        = (3/4) * k(k+1)/2
                                        = O(k^2)
                                        = O(n^2)
because k <= n/3 <= k+1, and therefore k^2 <= n^2/9 <= (k+1)^2 <= 4k^2.
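If you want to sanity-check the O(n^2) bound empirically, a small sketch that counts how many times the inner loop body of f1 runs (count_f1_iterations is an illustrative name) should show the count settling at a constant fraction of n^2, roughly n^2/24 by the calculation above.

#include <stdio.h>

/* Count how many times the inner loop body of f1 runs; the ratio to n^2
 * should settle near a constant (roughly 1/24), which is what O(n^2) means. */
long long count_f1_iterations(int n) {
    long long count = 0;
    for (int i = 0; i < n; i += 3)
        for (int j = n; j > i; j -= 4)
            count++;
    return count;
}

int main(void) {
    for (int n = 1000; n <= 100000; n *= 10) {
        long long c = count_f1_iterations(n);
        printf("n = %6d  iterations = %12lld  iterations / n^2 = %.4f\n",
               n, c, (double)c / ((double)n * n));
    }
    return 0;
}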
In theory it's "O(n*n)", but...
What if the compiler felt like optimising it into this:
int f1(int n){
    int s = 0;
    for (int i = 0; i < n; i += 3)
        s += table[i];
    return s;
}
Or even this:
int f1(int n){
    if (n <= 0) return 0;
    return table[n];
}
Then it could also be "O(n)" or "O(1)".
Note that on the surface these kinds of optimisations seem impractical (due to worst case memory costs); but with a sufficiently advanced compiler (e.g. using "whole program optimisation" to examine all callers and determine that n is always within a certain range) it's not inconceivable. In a similar way it's not impossible for all of the callers to be using a constant (e.g. where a sufficiently advanced compiler can replace things like x = f1(123); with x = constant_calculated_at_compile_time).
In other words; in practice, the time complexity of the original function depends on how the function is used and how good/bad the compiler is.

Finding the time complexity of code

Given is an infinite sorted array containing only numbers 0 and 1. Find the transition point efficiently.
For example : 00000000000111111111111111
Output : 11 which is the index where the transition occurs
I have coded a solution for this ignoring some edge cases.
int findTransition(int start)
{
    int i;
    if (a[start] == 1) return start;
    for (i = 1; ; i *= 2)
    {
        // assume that this condition will be true for some index
        if (a[start + i] == 1) break;
    }
    if (i == 1) return start + 1;
    return findTransition(start + (i / 2));
}
I am not really sure about the time complexity of this solution. Can someone please help me figure it out?
Is it O(log(N))?
Let n be the position of the transition point.
This block
for (i = 1; ; i *= 2)
{
    // assume that this condition will be true for some index
    if (a[start + i] == 1) break;
}
runs log2(n) times.
So we have
T(n) = log2(n) + T(n/2)
T(n) = log2(n) + log2(n/2) + T(n/4) = log2(n) + (log2(n) - 1) + (log2(n) - 2)...
T(n) = log2(n) * (log2(n) + 1) / 2
So the worst-case complexity is O(log(n)^2).
Note: you can use an ordinary binary search instead of the recursive call; then you get log2(n) + log2(n/2) probes, which is just O(log(n)), guaranteed.
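A hedged sketch of that suggestion: exponential search to bracket the first 1, followed by a binary search inside the bracket, for O(log n) probes in total. The a() wrapper and the fixed transition index are stand-ins introduced here so the example is runnable; in the original problem a is an infinite sorted 0/1 array.

#include <stdio.h>

/* Stand-in for the infinite sorted 0/1 array, simulated with a fixed
 * transition point (an assumption made only for this sketch). */
static int transition = 11;
static int a(long long idx) { return idx >= transition ? 1 : 0; }

/* Exponential search to bracket the first 1, then binary search inside
 * the bracket: O(log n) probes in total, where n is the transition index. */
long long findTransition(void) {
    if (a(0) == 1) return 0;
    long long hi = 1;
    while (a(hi) == 0)       /* grow the bracket: about log2(n) probes */
        hi *= 2;
    long long lo = hi / 2;   /* invariant: a(lo) == 0 and a(hi) == 1 */
    while (hi - lo > 1) {    /* binary search: about log2(n) more probes */
        long long mid = lo + (hi - lo) / 2;
        if (a(mid) == 1) hi = mid;
        else lo = mid;
    }
    return hi;
}

int main(void) {
    printf("transition found at index %lld\n", findTransition());
    return 0;
}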
