Calculating time complexity using T(n) method? - c

How may I calculate the time complexity of f() using the T(n) method?
int f(int n)
{
    if (n == 1)
        return 1;
    return f(f(n - 1));
}
What I have done so far:
T(n)=T(T(n-1))=T(T(T(n-2)))=T(T(T(T(n-3))))...
Plus, I know that for every n >= 1 the function always returns 1.
And why would changing the last line from
return f(f(n-1));
to
return 1 + f(f(n-1));
change the time complexity? (Note: it will change the complexity from n to 2^n for sure.)

The time complexity changes because, due to the +1, the function no longer always returns 1.
For return T(T(n-1)); the second (outer) T call is always made with the argument 1, which adds only one more call. The number of calls is 2*n - 1, therefore the complexity is O(n).
For return 1 + T(T(n-1)); not all calls to T return 1: T(3) returns 3 and takes 7 calls. So the second call depends on the value of n, and incrementing n doubles the number of calls. The number of calls is 2^n - 1, therefore the complexity is O(2^n). Here you can see the number of calls: https://ideone.com/1KEAkU
first version (T always returns 1):
T(4)
T(1) T(3)
T(1) T(2)
T(1) T(1)
You can see that the left call (the outer one) will always be called with 1 and the tree does not continue there.
second version (T does not always return 1):
T(4)
T(3) T(3)
T(2) T(2) T(2) T(2)
T(1) T(1) T(1) T(1) T(1) T(1) T(1) T(1)
Here you can see that, because of the changed return value (T(n) == n), the second T call doubles the number of calls.
The space complexity does not increase because the recursion depth does not change: both T(4) trees have 4 levels, and in the second tree the left part can only execute after the right part has finished completely. E.g. for both T(4) trees, the maximum number of T calls running at the same time is 4.
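As a quick sanity check of these counts, here is a small counting sketch (the names f1/f2 and the global counter are just illustrative, not taken from the linked ideone snippet):

#include <stdio.h>

static long calls;

/* version 1: always returns 1 */
int f1(int n) {
    calls++;
    if (n == 1) return 1;
    return f1(f1(n - 1));
}

/* version 2: returns n because of the added 1 */
int f2(int n) {
    calls++;
    if (n == 1) return 1;
    return 1 + f2(f2(n - 1));
}

int main(void) {
    for (int n = 1; n <= 10; n++) {
        calls = 0; f1(n); long c1 = calls;
        calls = 0; f2(n); long c2 = calls;
        printf("n=%2d  f1 calls=%4ld (2n-1)  f2 calls=%5ld (2^n-1)\n", n, c1, c2);
    }
    return 0;
}

Running it prints 1, 3, 5, 7, ... calls for the first version and 1, 3, 7, 15, ... for the second, matching 2n-1 and 2^n-1.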

Adding any constant C to your program would not change the complexity. In general, even adding 5 new statements that do not involve the variable n would not change the complexity. For this program the complexity is n, since you know it will run at most n times.

The (correct) recurrence relation for the time complexity is:
T(n) = T(n-1) + T(f(n-1))
That's because to compute f(n), first you compute f(n-1) (cost T(n-1)) and then call f with the argument f(n-1) (cost T(f(n-1))).
When f(n) always returns 1, this results in T(n) = T(n-1) + 1, which solves to T(n) = Theta(n).
When the return statement of f is changed to return 1 + f(f(n-1)), then f(n) returns n for n>=1 (by a simple proof by induction).
Then the time complexity is T(n) = T(n-1) + T(f(n-1)) = T(n-1) + T(n-1) = 2T(n-1), which solves to T(n) = Theta(2^n).
(As an amusing side note, if you change if(n==1) return 1; to if(n==1) return 0; in the second case (with the +1), then the time complexity is Theta(Fib(n)), where Fib(n) is the n-th Fibonacci number.)

Related

What is the correct time complexity for the following code?

I just learned time complexity and I'm trying to calculate the theta for this code:
for (i = 2; i < n; i = i + 1) {
    for (j = 1; j < n; j = j * i) {
        count++;
    }
}
I thought that it's n*log(n), because the first loop's complexity is n and the second loop's is log(n), but I've been told the answer is n.
Can someone tell me what the correct answer is and explain why?
In the inner loop, j starts at 1 and on each cycle it is multiplied by i, so it takes the values 1 = i^0, i^1, i^2, i^3, etc. The iteration stops when j == i^k for the integer k such that i^(k-1) <= n < i^k. That takes k+1 iterations.
Logarithms with base > 1 are strictly increasing functions over the positive real numbers, so the relations above are preserved if we take the base-i logarithm of each term: k-1 <= log_i(n) < k. With a little algebra, we can then get k+1 <= log_i(n) + 2. Since k+1 is the number of iterations, and every inner-loop iteration has the same, constant cost, that gives us that the cost of the inner loop for a given value of i is O(log_i(n)).
The overall cost of the loop nest, then, is bounded by sum_{i=2..n} O(log_i(n)). That can be written in terms of the natural logarithm as sum_{i=2..n} O(log_e(n) / log_e(i)). Dropping the 'e' subscript and factoring, we can reach O((log n) * sum_{i=2..n} 1/(log i)). And that's one answer.
But for comparison with the complexity of other algorithms, we would like a simpler formulation, even if it's a little looser. By observing that 1/log(i) decreases, albeit slowly, as i increases, we can see that one slightly looser bound would be O((log n) * sum_{i=2..n} 1/(log 2)) = O((log n) * (n-1) / (log 2)) = O(n log n). Thus, we can conclude that O(n log n) is an asymptotic bound.
Is there a tighter bound with similarly simple form? Another answer claims O(n), but it seems to be based on a false premise, or else its reasoning is unclear to me. There may be a tighter bound expression than O(n log n), but I don't think O(n) is a bound.
Update:
Thanks to @SupportUkraine, we can say that the performance is indeed bounded by O(n). Here's an argument inspired by their comment:
We can observe that for all i greater than sqrt(n), the inner loop body will execute exactly twice, contributing O(n) inner-loop iterations in total.
For each of the remaining sqrt(n) outer-loop iterations (having i < sqrt(n)), the number of inner-loop iterations is bounded by O(log_i(n)), which is at most O(log(n)) because i >= 2. These contribute O(sqrt(n) * log(n)) iterations in total.
Thus, the whole loop nest costs O(sqrt(n) * log(n)) + O(n). But sqrt(n) * log(n) grows more slowly than n, so this is O(n).
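As a rough empirical check of that bound, here is a small counting sketch (the test values of n are arbitrary); the ratio count/n settles toward a small constant:

#include <stdio.h>

int main(void) {
    for (long long n = 1000; n <= 1000000; n *= 10) {
        long long count = 0;
        /* same loop nest as in the question, instrumented with a counter */
        for (long long i = 2; i < n; i = i + 1) {
            for (long long j = 1; j < n; j = j * i) {
                count++;
            }
        }
        printf("n=%8lld  count=%9lld  count/n=%.3f\n", n, count, (double)count / n);
    }
    return 0;
}

The count/n ratio drifts toward roughly 2 as n grows, which is consistent with the O(n) argument above (two inner iterations for every i above sqrt(n), plus a lower-order term).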
The second loop isn't O(log n) because the multiplier i keeps increasing; it's O(log_i n). This causes the number of repetitions of the inner loop to be inversely proportional to i, so the inner loops average out to the same number of iterations as the outer loop, making the whole thing O(n).

Converting from Code to a Recurrence Relation

I am studying for an exam and I came across some problems I need to address, dealing with base cases:
I am converting from code to a recurrence relation, not the other way around.
Example 1:
if(n==1) return 0;
Now the recurrence relation to that piece of code is: T(1) = 0
How did I get that?
By looking at n==1, we see this is a comparison with a value > 0, which is doing some form of work, so we write "T(1)"; the return 0; isn't doing any work, so we say "= 0".
=> T(1) = 0;
Example 2:
if(n==0) return n+1*2;
Analyzing: n==0 means we aren't doing any work, so T(0); but return n+1*2; is doing work, so "= 1".
=> T(0) = 1;
What I want to know is whether this is the correct way of analyzing a piece of code like that to come up with a recurrence relation base case.
I am unsure about these, which I came up with on my own to exhaust the possibilities for base cases:
Example 3: if(n==m-2) return n-1; //answer: T(1) = 1; ?
Example 4: if(n!=2) return n; //answer: T(1) = 1; ?
Example 5: if(n/2==0) return 1; //answer: T(1) = 1; ?
Example 6: if(n<2) return; //answer: T(1) = 0; ?
It's hard to analyze base cases outside of the context of the code, so it might be helpful if you posted the entire function. However, I think your confusion is arising from the assumption that T(n) always represents "work". I'm guessing that you are taking a class on complexity and you have learned about recurrence relations as a method for expressing the complexity of a recursive function.
T(n) is just a function: you plug in a number n (usually a positive integer) and you get out a number T(n). Just like any other function, T(n) means nothing on its own. However, we often use a function with the notation T(n) to express the amount of time required by an algorithm to run on an input of size n. These are two separate concepts; (1) a function T(n) and the various ways to represent it, such as a recurrence relationship, and (2) the number of operations required to run an algorithm.
Let me give an example.
int factorial(int n)
{
    if (n > 0)
        return n * factorial(n - 1);
    else
        return 1;
}
Let's see if we can write some function F(n) that represents the output of the code. Well, F(n) = n*F(n-1), with F(0) = 1. Why? Clearly from the code, the result of F(0) is 1. For any other value of n, the result is F(n) = n*F(n-1). That recurrence relation is a very convenient way to express the output of a recursive function. Of course, I could just as easily say that F(n) = n! (the factorial operator), which is also correct. That's a non-recurrence expression of the same function. Notice that I haven't said anything about the run time of the algorithm or how much "work" it is doing. I'm just writing a mathematical expression for what the code outputs.
Dealing with the run time of a function is a little trickier, since you have to decide what you mean by "work" or "an operation." Let's suppose that we don't count "return" as an operation, but we do count multiplication as an operation and we count a conditional (the if statement) as an operation. Under these assumptions, we can try to write a recurrence relation for a function T(n) that describes how much work is done when the input n is given to the function. (Later, I'll make a comment about why this is a poor question.) For n = 0, we have a conditional (the if statement) and a return, nothing else. So T(0) = 1. For any other n > 0, we have a conditional, a multiply, and however many operations are required to compute T(n-1). So the total for n is:
T(n) = 1 (conditional) + 1 (multiply) + T(n-1) = 2 + T(n-1),
T(0) = 1.
We could write T(n) as a recurrence relation: T(n) = 2 + T(n-1), T(0) = 1. Of course, this is also just the function T(n) = 1 + 2n. Again, I want to stress that these are two very different functions. F(n) is describing the output of the function when n is the input. T(n) is describing how much work is done when n is the input.
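If it helps to see F(n) and T(n) side by side, here is an instrumented sketch (the global ops counter and the convention of counting only the conditional and the multiplication are the assumptions stated above, not a standard definition):

#include <stdio.h>

static long ops; /* counts one op for the if check and one for the multiplication */

long factorial(int n) {
    ops++;                         /* the conditional */
    if (n > 0) {
        ops++;                     /* the multiplication */
        return n * factorial(n - 1);
    } else {
        return 1;
    }
}

int main(void) {
    for (int n = 0; n <= 5; n++) {
        ops = 0;
        long f = factorial(n);     /* F(n) = n! */
        printf("n=%d  F(n)=%ld  T(n)=%ld  (1 + 2n = %d)\n", n, f, ops, 1 + 2 * n);
    }
    return 0;
}

The printed T(n) column matches 1 + 2n, while the F(n) column is n!, which is the point of keeping the two functions separate.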
Now, the T(n) that I just described is bad in terms of complexity theory. The reason is that complexity theory isn't about describing how much work is required to compute functions that take only a single integer as an argument. In other words, we aren't looking at the work required for a function of the form F(n). We want something more general: how much work is required to perform an algorithm on an input of size n. For example, MergeSort is an algorithm for sorting a list of objects. It requires roughly n log(n) operations to run MergeSort on a list of n items. Notice that MergeSort isn't doing anything with the number n, rather, it operates on a list of size n. In contrast, our factorial function F(n) isn't operating on an input of size n: presumably n is an integer type, so it is probably 32-bits or 64-bits or something, no matter its value. Or you can get picky and say that its size is the minimum number of bits to describe it. In any case, n is the input, not the size of the input.
When you are answering these questions, it is important to be very clear about whether they want a recurrence relation that describes the output of the function, or a recurrence relation that describes the run time of the function.

Efficiently calculating nCk mod p

I have come across this problem many times but I am unable to solve it. Some case or other always produces a wrong answer, or else the program I write is too slow. Formally, I am talking about calculating
nCk mod p, where p is a prime, n is a large number, and 1 <= k <= n.
What have I tried:
I know the recursive formulation (Pascal's rule) and how to model it as a dynamic programming problem, but I feel that it is slow. The recursive formulation is C(n,k) + C(n,k-1) = C(n+1,k). I took care of the modulus while storing values in the array to avoid overflows, but I am not sure that just doing a mod p on the result avoids all overflows.
To compute nCr, there's a simple algorithm based on the rule nCr = (n - 1)C(r - 1) * n / r:
def nCr(n, r):
    if r == 0:
        return 1
    return n * nCr(n - 1, r - 1) // r
Now, in modular arithmetic we don't quite have division, but we have modular inverses, which (when modding by a prime) are just as good:
def nCrModP(n, r, p):
    if r == 0:
        return 1
    return n * nCrModP(n - 1, r - 1) * modinv(r, p) % p
Here's one implementation of modinv on rosettacode
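Since that implementation isn't reproduced here, one common way to get modinv when p is prime is Fermat's little theorem: r^(p-2) mod p is the inverse of r. A C sketch (the helper names are mine, and it assumes p fits in 32 bits so intermediate products fit in 64 bits; not necessarily what the Rosetta Code page does):

#include <stdint.h>

/* fast exponentiation: base^exp mod m (assumes m < 2^32 so products fit in uint64_t) */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m) {
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

/* modular inverse of r modulo a prime p, via Fermat's little theorem:
   r^(p-2) = r^(-1) (mod p), valid when p is prime and r is not a multiple of p */
static uint64_t modinv(uint64_t r, uint64_t p) {
    return powmod(r % p, p - 2, p);
}

With that in place, the nCrModP recursion above translates directly.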
Not sure what you mean by "storing values in array", but I assume the array serves as a lookup table while running, to avoid redundant calculations and speed things up. This should take care of the speed problem. Regarding the overflows: you can perform the modulo operation at any stage of the computation and repeat it as often as you want; the result will be correct.
First, let's work with the case where p is relatively small.
Take the base-p expansions of n and k: write n = n_0 + n_1 p + n_2 p^2 + ... + n_m p^m and k = k_0 + k_1 p + ... + k_m p^m, where each n_i and each k_i is at least 0 but less than p. A theorem (which I think is due to Édouard Lucas) states that C(n,k) = C(n_0, k_0) * C(n_1, k_1) * ... * C(n_m, k_m) (mod p). This reduces to taking a mod-p product of numbers computed as in the "n is relatively small" case below.
Second, if n is relatively small, you can just compute binomial coefficients using dynamic programming on the formula C(n,k) = C(n-1,k-1) + C(n-1,k), reducing mod p at each step. Or do something more clever.
Third, if k is relatively small (and less than p), you should be able to compute n!/(k!(n-k)!) mod p by computing n!/(n-k)! as n * (n-1) * ... * (n-k+1), reducing modulo p after each product, then multiplying by the modular inverses of each number between 1 and k.
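A sketch of that third approach (my code, reusing the hypothetical powmod/modinv helpers and <stdint.h> from the sketch above; it multiplies by the inverse of k! at the end rather than inverting each factor separately, which is equivalent since k < p and p is prime):

/* C(n, k) mod p for prime p, assuming k < p and k is small */
static uint64_t binom_mod_small_k(uint64_t n, uint64_t k, uint64_t p) {
    if (k > n) return 0;
    uint64_t num = 1, den = 1;
    for (uint64_t i = 0; i < k; i++) {
        num = num * ((n - i) % p) % p;   /* n * (n-1) * ... * (n-k+1) mod p */
        den = den * ((i + 1) % p) % p;   /* k! mod p (never 0 mod p because k < p) */
    }
    return num * modinv(den, p) % p;
}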

time complexity of the recursive algorithm

Can someone please explain to me how to calculate the complexity of the following recursive code:
long bigmod(long b, long p, long m) {
    if (p == 0)
        return 1;
    else if (p % 2 == 0)
        return square(bigmod(b, p / 2, m)) % m;
    else
        return ((b % m) * bigmod(b, p - 1, m)) % m;
}
This is O(log(p)) because you are dividing by 2 every time, or subtracting one and then dividing by two, so the worst case would really take O(2 * log(p)) steps: one for the division and one for the subtraction of one.
Note that in this example the worst case and average case should be the same complexity.
If you want to be more formal about it then you can write a recurrence relation and use the Master theorem to solve it. http://en.wikipedia.org/wiki/Master_theorem
It runs in O(log n)
There are no expensive operations inside the function (by that I mean nothing more expensive than squaring or taking a modulus; no looping, etc.), so we can pretty much just count the function calls.
Best case, n is a power of two; then we need about log(n) calls.
Worst case, we get an odd number on every other call. This can do no more than double our calls. Multiplication by a constant factor is no worse asymptotically: 2*f(x) is still O(f(x)).
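To see those call counts concretely, here is an instrumented sketch (the square helper, the counter, and the choice of base and modulus are arbitrary, not part of the original question):

#include <stdio.h>

static long calls;

static long square(long x) { return x * x; }

/* same function as above, with a call counter added */
long bigmod(long b, long p, long m) {
    calls++;
    if (p == 0)
        return 1;
    else if (p % 2 == 0)
        return square(bigmod(b, p / 2, m)) % m;
    else
        return ((b % m) * bigmod(b, p - 1, m)) % m;
}

int main(void) {
    for (int k = 4; k <= 20; k += 4) {
        long p_even = 1L << k;          /* power of two: roughly log2(p) calls */
        long p_odd  = (1L << k) - 1;    /* all bits set: roughly 2*log2(p) calls */
        calls = 0; bigmod(3, p_even, 10007); long c1 = calls;
        calls = 0; bigmod(3, p_odd,  10007); long c2 = calls;
        printf("k=%2d  calls(2^k)=%3ld  calls(2^k-1)=%3ld\n", k, c1, c2);
    }
    return 0;
}

The best case grows like log2(p) and the worst case like 2*log2(p), matching the argument above.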
O(log n)
It is O(log(N)) base 2, because of the division by 2.

Time complexity of a recursive algorithm

How can I calculate the time complexity of a recursive algorithm?
int pow1(int x, int n) {
    if (n == 0) {
        return 1;
    }
    else {
        return x * pow1(x, n - 1);
    }
}

int pow2(int x, int n) {
    if (n == 0) {
        return 1;
    }
    else if (n & 1) {
        int p = pow2(x, (n - 1) / 2);
        return x * p * p;
    }
    else {
        int p = pow2(x, n / 2);
        return p * p;
    }
}
Analyzing recursive functions (or even evaluating them) is a nontrivial task. A (in my opinion) good introduction can be found in Don Knuth's Concrete Mathematics.
However, let's analyse these examples now:
We define a function that gives us the time needed by a function. Let's say that t(n) denotes the time needed by pow(x,n), i.e. a function of n.
Then we can conclude, that t(0)=c, because if we call pow(x,0), we have to check whether (n==0), and then return 1, which can be done in constant time (hence the constant c).
Now we consider the other case: n > 0. Here we obtain t(n) = d + t(n-1). That's because we again have to check n==0, compute pow(x, n-1) (hence t(n-1)), and multiply the result by x. Checking and multiplying can be done in constant time (constant d); the recursive calculation of pow needs t(n-1).
Now we can "expand" the term t(n):
t(n) =
d + t(n-1) =
d + (d + t(n-2)) =
d + d + t(n-2) =
d + d + d + t(n-3) =
... =
d + d + d + ... + t(0) =
d + d + d + ... + c
So, how long does it take until we reach t(0)? Since we start at t(n) and subtract 1 in each step, it takes n steps to reach t(n-n) = t(0). That, in turn, means that we pick up the constant d n times, and t(0) evaluates to c.
So we obtain:
t(n) =
...
d + d + d + ... + c =
n * d + c
So we get t(n) = n * d + c, which is in O(n).
pow2 can be analysed using the Master theorem, since we can assume that the time functions of algorithms are monotonically increasing. So now we have the time t(n) needed for the computation of pow2(x,n):
t(0) = c (since constant time is needed for the computation of pow2(x,0))
For n > 0 we get:
t(n) = t((n-1)/2) + d   if n is odd   (d is a constant cost)
t(n) = t(n/2) + d       if n is even  (d is a constant cost)
The above can be "simplified" to:
t(n) = t(floor(n/2)) + d <= t(n/2) + d (since t is monotonically increasing)
So we obtain t(n) <= t(n/2) + d, which can be solved using the Master theorem to give t(n) = O(log n) (see the section "Application to Popular Algorithms" in the Wikipedia link, example "Binary Search").
Let's just start with pow1, because that's the simplest one.
You have a function where a single run is done in O(1). (Condition checking, returning, and multiplication are constant time.)
What you have left is then your recursion. What you need to do is analyze how often the function would end up calling itself. In pow1, it'll happen N times. N*O(1)=O(N).
For pow2, it's the same principle - a single run of the function runs in O(1). However, this time you're halving N every time. That means it will run log2(N) times - effectively once per bit. log2(N)*O(1)=O(log(N)).
Something which might help you is to exploit the fact that recursion can always be expressed as iteration (not always very simply, but it's possible). We can express pow1 as
result = 1;
while (n != 0)
{
    result = result * x;
    n = n - 1;
}
Now you have an iterative algorithm instead, and you might find it easier to analyze it that way.
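The same rewrite works for pow2. Here is a sketch of the iterative square-and-multiply version (the name pow2_iter is just illustrative), which makes the roughly log2(n) iteration count explicit:

int pow2_iter(int x, int n) {
    int result = 1;
    while (n != 0) {
        if (n & 1)            /* odd: fold one factor of x into the result */
            result = result * x;
        x = x * x;            /* square the base */
        n >>= 1;              /* halve the exponent: at most log2(n)+1 iterations */
    }
    return result;
}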
It can be a bit complex, but I think the usual way is to use Master's theorem.
The complexity of both functions, ignoring recursion, is O(1).
For the first algorithm, pow1(x, n), the complexity is O(n), because the depth of recursion correlates with n linearly.
For the second, the complexity is O(log n). Here we recurse approximately log2(n) times; dropping the 2 we get log n.
So I'm guessing you're raising x to the power n. pow1 takes O(n).
You never change the value of x, but you take 1 from n each time until it reaches 0 (and then you just return). This means that you will make n recursive calls.
