Writing a function for F(n) = 0.5·F(n-1) in C

Let F(n) = 0.5·F(n-1) and F(0) = 1.
a. Write a function fun1, a recursive function that evaluates the nth term.
b. Write a function fun2, a non-recursive function that evaluates the nth term.
c. What is the time complexity of fun1, and from which n onward is fun1 better than fun2 regarding space complexity?
In general, the function evaluates the nth term of the sequence {1, 1/2, 1/4, 1/8, ...}.
a.
double fun1(int n) {
    if (n == 0)
        return 1;
    else
        return 0.5 * fun1(n - 1);
}
b.
double fun2(int n) {
    int i;            /* loop counter should be an int, not a double */
    double sum = 1;
    for (i = 0; i < n; i++)
        sum = sum * 0.5;
    return sum;
}
c. Intuitively, and mathematically via the geometric sequence, we can show that it is O(n).
Is there another way?
And how should the space complexity be addressed?

While your versions of fun1 and fun2 have different space complexity, their time complexity is the same: O(n).
However, the non-recursive function can also be written as:
#include <math.h>

double fun2(int n) {
    return pow(0.5, n);
}
This function is of space and time complexity O(1) and will be more efficient for most n (probably n > 5).
As for the original question: it's tricky, because it depends on compiler optimization:
A naive implementation of fun1 has space complexity O(n), since a call of fun1(n) has a recursion depth of n and therefore requires n call frames on the stack. On most systems it will only work up to a certain n; beyond that you get a stack overflow error, because the stack has a limited size.
An optimizing compiler will recognize that the function is tail-recursive and will optimize it into something very close to fun2, which has space complexity O(1): it uses a fixed number of variables of fixed size, independent of n, and no recursion.
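To illustrate, here is a hand-written sketch of the loop such an optimization effectively produces (my own illustration, not actual compiler output): the pending multiplications are folded into an accumulator, so no call frames are needed.

double fun1_opt(int n) {
    double acc = 1.0;      /* accumulates the 0.5 factors */
    while (n > 0) {
        acc *= 0.5;        /* the work of one former call frame */
        n--;
    }
    return acc;
}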

I understand that this is a homework question, so I will not say anything about compiler optimizations and tail recursion: that is not a property of the program itself, but depends on whether the compiler chooses to optimize a recursive function or not.
Your first approach is clearly O(n) in time, since it calls fun1 recursively n times and all it does besides that is one multiplication.
Your second approach is also clearly O(n) in time, since it is just a simple loop.
So with respect to time complexity both are the same: O(n).
As for space complexity, fun1 needs n stack frames, so it has O(n) space complexity, while fun2 only needs one variable, so it has O(1) space complexity. With respect to space, fun2 is therefore the better approach.

For both a recursive and an iterative approach, the complexity can be reduced to O(log n).
The recursion depth of the following solution is log n:
double fun3(int n) {
    double f;
    if (n == 0)
        return 1.0;
    f = fun3(n / 2);
    return f * f * (n % 2 ? 0.5 : 1.0);
}
The number of iterations in the following loop is log n, too:
double fun4(int n) {
    int i;
    double base = 0.5, f = 1.0;
    /* standard iterative squaring: account for every bit of n,
       not just the lowest one */
    for (i = n; i > 0; i /= 2) {
        if (i % 2)       /* current low bit set: fold base into the result */
            f *= base;
        base *= base;    /* square the base for the next bit */
    }
    return f;
}
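As a quick sanity check (my addition, not part of the original answer), both O(log n) variants can be compared against pow(0.5, n):

#include <stdio.h>
#include <math.h>

/* Assumes fun3 and fun4 from above are defined in the same file. */
int main(void) {
    for (int n = 0; n <= 10; n++)
        printf("n=%2d  fun3=%g  fun4=%g  pow=%g\n",
               n, fun3(n), fun4(n), pow(0.5, n));
    return 0;
}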

You can answer this yourself if you take a look at the generated code: https://godbolt.org/z/Gd9XxM
It is very likely that an optimizing compiler will remove the tail recursion.
Space and time complexity strongly depend on the optimization options (try -Os, -O0).

Related

How is it possible to achieve an O(log n) power function a^n using only recursion?

Below, the purpose of the code is to compute the power of an integer.
My friend told me that the time complexity of this algorithm is O(log n).
But in fact the number of function calls is not equal to log n.
For example, power(2, 9) calls the power function 5 times (including the call power(2, 9)), while power(2, 8) calls it 4 times (including the call power(2, 8)).
Even though the numbers of bits needed for 8 and 9 are the same, the numbers of function calls differ.
Why does this happen? Is this really an O(log n) algorithm?
#include <stdio.h>

int power(int a, int n) {
    if (n == 0) {
        return 1;
    }
    if (n == 1) {
        return a;
    }
    if (n % 2 == 0) {
        return power(a * a, n / 2);
    } else {
        return a * power(a, n - 1);
    }
}

int main() {
    for (int i = 0; i < 15; i++)
        printf("pow(%d, %d) = %d\n", 2, i, power(2, i));
    return 0;
}
Your implementation is O(log N), but it could be made slightly more efficient.
Note that hereafter, log means log base 2.
You have log(n) calls of power(a*a, n/2), and a call to power(a, n-1) for every bit set in n.
The number of bits set in n is at most log(n) + 1.
Thus, the number of calls to power is at most log(n) + log(n) + 1. For instance, when n = 15, the sequence of calls is
power(15), power(14), power(7), power(6), power(3), power(2), power(1)
and log(n) + log(n) + 1 = 3 + 3 + 1 = 7.
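A simple way to verify these counts yourself (my addition, not part of the original answer): add a static counter to the function and print it.

#include <stdio.h>

static int calls; /* counts invocations of power */

int power(int a, int n) {
    calls++;
    if (n == 0) return 1;
    if (n == 1) return a;
    if (n % 2 == 0) return power(a * a, n / 2);
    return a * power(a, n - 1);
}

int main(void) {
    printf("power(2,15) = %d, calls = %d\n", power(2, 15), calls);
    return 0;
}

This prints calls = 7, matching the sequence above.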
Here is a more efficient implementation that makes only log(n) + 2 calls to power:
int power(int a, int n) {
    if (n == 0) {
        return 1;
    }
    if ((n & 1) == 0) { /* parentheses needed: == binds tighter than & */
        return power(a * a, n / 2);
    } else {
        return a * power(a * a, n / 2);
    }
}
In this case the sequence of calls when n = 15 is
power(15), power(7), power(3), power(1), power(0)
I removed the if (n == 1) condition because we can avoid that test, which would otherwise be performed log(n) times, at the cost of one extra call to power.
We then have log(n) + 2 calls to power, which is better than 2·log(n) + 1.
The reason the algorithm remains O(lg N) even with the extra calls for the odd case is that the number of extra calls is bounded by a constant. In the worst case, N/2 is odd at each iteration, but this only doubles the number of calls (the constant is 2). That is, at worst there will be 2·lg N calls to complete the algorithm.
To more easily see that the algorithm is O(lg N), you can rewrite the function to always halve the power at each iteration, so that in the worst case there are only lg N calls. To leverage tail recursion, add a function parameter that accumulates the carried multiplier from odd N.
int power_i(int a, unsigned N, int c) {
    if (N == 0) return c;
    return power_i(a * a, N / 2, N % 2 ? a * c : c);
}

int power(int a, unsigned N) {
    return power_i(a, N, 1);
}
The advantage of tail recursion is that most modern C compilers will convert the optimized code into a simple loop.
The power function has two base cases: n = 0 and n = 1.
It has two recursive calls, and only one of them is made in any given call.
First consider the case when n is even: the recursive call is made with n / 2.
If every call used this case, then n would be halved on each call until it reaches 1. That is indeed log(n) calls (plus 1 for the base case).
The other case, when n is odd, reduces n only by one. If every call used this recursive call, the function would be called n times; clearly linear, not logarithmic.
But what happens to an odd number when you subtract one from it? It becomes an even number. So the feared linear behaviour mentioned above cannot occur.
The worst case is: n is odd, so the second recursive call is used; now n is even, so the first is used; now n is odd again, and so on, down until n is one. In that case every second call reduces n to n / 2, so you need 2·log(n) calls (plus one for the base case).
So yes, this is in O(log(n)). This algorithm is often called binary exponentiation.
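For comparison, here is a minimal iterative sketch of binary exponentiation (my addition; it keeps the same int types as above, so it overflows for large results):

int power_iter(int a, int n) {
    int result = 1;
    while (n > 0) {
        if (n % 2)       /* low bit set: fold the current base into the result */
            result *= a;
        a *= a;          /* square the base, halve the exponent */
        n /= 2;
    }
    return result;
}

It performs the same squaring steps as the recursion, one loop iteration per bit of n.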

Fibonacci using Recursion

This is my idea of solving 'nth term of Fibonacci series with least processing power':
#include <stdio.h>

int fibo(int n, int a, int b) {
    return (n > 0) ? fibo(n - 1, b, a + b) : a;
}

int main() {
    printf("5th term of fibo is %d", fibo(5 - 1, 0, 1));
    return 0;
}
To print all the terms up to the nth term:
int fibo(int n, int a, int b) {
    printf("%d ", a);
    return (n > 0) ? fibo(n - 1, b, a + b) : a;
}
I showed this code to my university professor, and according to her this is a wrong approach to the Fibonacci problem because it does not abstract the method: the function should be callable as fibo(n), not fibo(n, 0, 1). This wasn't a satisfactory answer to me, so I thought of asking the experts on SOF.
It has its own advantage over traditional methods of solving Fibonacci problems. The technique that employs two parallel recursions to get the nth term (fibo(n-1) + fibo(n-2)) might be slow for the 100th term of the series, whereas my technique will be a lot faster even in the worst case.
To abstract it I could use default parameters, but C doesn't have them. I can, however, write something like:
int fibo(int n) { return fiboN(n - 1, 0, 1); }
int fiboN(int n, int a, int b) { return (n > 0) ? fiboN(n - 1, b, a + b) : a; }
But will that be enough to abstract the whole idea? How should I convince others that the approach isn't wrong (although a bit unusual)?
(I know this isn't the sort of question I should ask on SOF, but I just wanted advice from the experts here.)
With the understanding that the base case in your recursion should be a rather than 0, this seems to me an excellent (although not optimal) solution. The recursion in that function is tail recursion, so a good compiler will be able to avoid stack growth, making the function O(1) space and O(n) time (ignoring the rapid growth in the size of the numbers themselves).
Your professor is correct that the caller should not have to deal with the correct initialisation, so you should provide an external wrapper that fills in the initial values:
int fibo(int n, int a, int b) {
    return n > 0 ? fibo(n - 1, b, a + b) : a;
}

int fib(int n) { return fibo(n, 0, 1); }
However, it could also be useful to provide and document the more general interface, in case the caller actually wants to vary the initial values.
By the way, there is a faster computation technique, based on the recurrence
fib(a + b - 1) = fib(a)·fib(b) + fib(a - 1)·fib(b - 1)
Replacing b with b + 1 yields:
fib(a + b) = fib(a)·fib(b + 1) + fib(a - 1)·fib(b)
Together, those formulas let us compute:
fib(2n - 1) = fib(n + n - 1)
            = fib(n)² + fib(n - 1)²
fib(2n)     = fib(n + n)
            = fib(n)·fib(n + 1) + fib(n - 1)·fib(n)
            = fib(n)² + 2·fib(n)·fib(n - 1)
This allows the computation to be performed in O(log n) steps, with each step producing two consecutive values.
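A sketch of that fast-doubling idea in C (my addition, using the equivalent fib(n+1)-based form of the identities above; unsigned long long overflows past fib(93)):

#include <stdio.h>

/* Returns fib(n) in *f and fib(n+1) in *g, using:
   fib(2k)   = fib(k) * (2*fib(k+1) - fib(k))
   fib(2k+1) = fib(k)^2 + fib(k+1)^2 */
static void fib_pair(unsigned n, unsigned long long *f, unsigned long long *g) {
    if (n == 0) { *f = 0; *g = 1; return; }
    unsigned long long a, b;
    fib_pair(n / 2, &a, &b);                /* a = fib(k), b = fib(k+1) */
    unsigned long long c = a * (2 * b - a); /* fib(2k) */
    unsigned long long d = a * a + b * b;   /* fib(2k+1) */
    if (n % 2) { *f = d; *g = c + d; }
    else       { *f = c; *g = d; }
}

int main(void) {
    unsigned long long f, g;
    fib_pair(90, &f, &g);
    printf("fib(90) = %llu\n", f);
    return 0;
}

The recursion depth is O(log n), since n is halved on every call.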
With your approach the result will be 0: you just recurse until n = 0 and return 0 at that point. But you also have to check for n == 1 and return 1 there. Also, you have the values a and b and do nothing with them.
I would suggest looking at the following recursive function; maybe it will help you fix yours:
int fibo(int n) {
    if (n < 2) {
        return n;
    } else {
        return fibo(n - 1) + fibo(n - 2);
    }
}
It's a classical problem in studying recursion.
EDIT1: Following @Ely's suggestion, below is an optimized recursion using the memoization technique. Once a value has been calculated, it is not recalculated again as in the first example; instead it is stored in an array and taken from there whenever it is required:
#define MAX_FIB_NUMBER 10 /* a const int is not a constant expression in C */
int storeCalculatedValues[MAX_FIB_NUMBER] = {0};

int fibo(int n) {
    if (storeCalculatedValues[n] > 0) {
        return storeCalculatedValues[n];
    }
    if (n < 2) {
        storeCalculatedValues[n] = n;
    } else {
        storeCalculatedValues[n] = fibo(n - 1) + fibo(n - 2);
    }
    return storeCalculatedValues[n];
}
Using recursion with a goal of least processing power, one approach to fibonacci() is to have each call return 2 values: perhaps one via the return value and another via an int * parameter.
The usual idea with recursion is to have a top-level function perform one-time preparation and parameter checks, followed by a local helper function written in a lean fashion.
The code below follows OP's idea of an int fibo(int n) plus a helper int fiboN(int n, additional parameters).
The recursion depth is O(n) and the memory usage is also O(n).
#include <assert.h>

static int fib1h(int n, int *previous) {
    if (n < 2) {
        *previous = n - 1;
        return n;
    }
    int t;
    int sum = fib1h(n - 1, &t);
    *previous = sum;
    return sum + t;
}

int fibo1(int n) {
    assert(n >= 0); // Handle negatives in some fashion
    int t;
    return fib1h(n, &t);
}
#include <stdio.h>

int fibo(int n); /* declaring the function */

int main()
{
    int m;
    printf("Enter the number of terms you want:\n");
    scanf("%i", &m);
    for (int i = 0; i < m; i++) {
        printf("%i,", fibo(i)); /* calling the function in a loop to get all terms */
    }
    return 0;
}

int fibo(int n)
{
    if (n == 0) {
        return 0;
    }
    if (n == 1) {
        return 1;
    }
    /* recursive case: the function calls itself */
    return fibo(n - 2) + fibo(n - 1);
}
solving 'nth term of fibonacci series with least processing power'
I probably do not need to explain the recurrence relation of a Fibonacci number to you, but your professor has given you a good hint.
Abstract away details. She is right: if you want the nth Fibonacci number, it suffices to tell the program just that: Fibonacci(n).
Since you aim for least processing power, your professor's hint also suits a technique called memoization, which basically means: if you calculated the nth Fibonacci number once, just reuse the result; no need to redo the calculation. In the article you find an example for the factorial.
For this you may want a data structure in which you store the nth Fibonacci number; if that memory already holds a Fibonacci number, just retrieve it, otherwise store the calculated Fibonacci number in it.
By the way, didactically not helpful but interesting: there also exists a closed-form expression for the nth Fibonacci number.
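For reference (my addition), that closed form is Binet's formula: fib(n) = (φ^n − ψ^n)/√5 with φ = (1 + √5)/2 and ψ = 1 − φ. Since |ψ^n/√5| < 1/2, rounding φ^n/√5 suffices, though double precision makes this exact only up to roughly n = 70:

#include <math.h>

/* Binet's formula; exact only for small n due to double rounding. */
long long fib_binet(int n) {
    double phi = (1.0 + sqrt(5.0)) / 2.0;
    return llround(pow(phi, n) / sqrt(5.0));
}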
This wasn't a satisfactory answer to me, so I thought of asking experts on SOF.
"Uh, you do not consider your professor an expert?" was my first thought.
As a side note, you can do the Fibonacci problem pretty much without recursion, making it the fastest approach I know. The code is in Java though:
public int fibFor(int n) {
    if (n < 2) return n;
    int sum = 0;
    int left = 0;
    int right = 1;
    for (int i = 2; i <= n; i++) {
        sum = left + right;
        left = right;
        right = sum;
    }
    return sum;
}
Although @rici's answer is mostly satisfactory, I just wanted to share what I learnt solving this problem. So here's my understanding of finding Fibonacci using recursion:
The traditional implementation fibo(n) { return (n < 2) ? n : fibo(n-1) + fibo(n-2); } is very inefficient in both time and space. It unnecessarily builds stack: it requires O(n) stack space and O(r^n) time, where r = (√5 + 1)/2 (the golden ratio).
With the memoization technique suggested in @Simion's answer, we just create a permanent table instead of the dynamic stack created by the compiler at run time. So the memory requirement remains the same, but the time complexity reduces in an amortized way. It does not help, however, if we need the value only once.
The approach I suggested in my question requires O(1) space and O(n) time. The time requirement can also be reduced here with the same memoization technique, in an amortized way.
From @rici's post, fib(2n) = fib(n)² + 2·fib(n)·fib(n - 1); as he suggests, the time complexity reduces to O(log n), and I suppose the stack growth is still O(n).
So my conclusion is, if I did proper research, that time complexity and space requirement cannot both be reduced simultaneously using recursive computation. To achieve both, the alternatives could be iteration, matrix exponentiation, or fast doubling.
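For completeness, a sketch of the matrix-exponentiation alternative (my addition): raising the Q-matrix [[1,1],[1,0]] to the nth power by repeated squaring yields fib(n) in its off-diagonal entry, giving O(log n) time with O(1) extra space.

/* Multiplies 2x2 matrices a and b into out (entries row-major). */
static void mat_mul(const unsigned long long a[4], const unsigned long long b[4],
                    unsigned long long out[4]) {
    unsigned long long r[4] = {
        a[0] * b[0] + a[1] * b[2], a[0] * b[1] + a[1] * b[3],
        a[2] * b[0] + a[3] * b[2], a[2] * b[1] + a[3] * b[3],
    };
    for (int i = 0; i < 4; i++) out[i] = r[i];
}

/* fib(n) via [[1,1],[1,0]]^n, computed by binary exponentiation. */
unsigned long long fib_matrix(unsigned n) {
    unsigned long long result[4] = {1, 0, 0, 1}; /* identity */
    unsigned long long base[4]   = {1, 1, 1, 0}; /* Q-matrix */
    while (n > 0) {
        if (n % 2) mat_mul(result, base, result);
        mat_mul(base, base, base);
        n /= 2;
    }
    return result[1]; /* top-right entry is fib(n) */
}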

Is the time complexity/Big O of this function a constant?

Is the time complexity of this program O(1)?
int len, result, m; /* assumed to be declared elsewhere; the original snippet leaves them out */

void f1(int n) {
    int temp = n;
    while (temp /= 2)
    {
        len++;
        result += m;
    }
}
And if we change int temp to double temp, does the time complexity change as well, or does it remain constant?
void f2(int n) {
    double temp = (double)n;
    while (temp /= 2)
    {
        len++;
        result += m;
    }
}
The answer for the integer version is O(log n), because the value is halved each time.
The double version starts the same way, except that when the value reaches 1 or close to it, it doesn't stop: it keeps dividing until underflow makes the value 0. From that point on, the number of extra divisions is fixed.
I've made a small empirically calibrated program which tries to predict the number of loops:
#include <stdio.h>
#include <math.h>

void f2(int n) {
    int len = 0;
    double temp = (double)n;
    while (temp /= 2)
    {
        len++;
    }
    // 1.53 is an empirical constant, maybe it could be theoretically computed;
    // it was just chosen to make the numbers match
    printf("%d %lf\n", len, log(n) * 1.53 + 1074);
}

int main()
{
    f2(100000000);
    f2(10000000);
    f2(1000000);
    f2(10000);
    f2(100);
    f2(1);
}
I get:
1101 1102.183642
1097 1098.660686
1094 1095.137731
1087 1088.091821
1081 1081.045910
1074 1074.000000
So the complexity is O(log n) plus an incompressible number of iterations, depending on the machine.
(my apologies for the empirical aspect of my answer, I'm not a floating-point expert)
For an algorithm to have constant time complexity, its runtime should stay constant as the input n grows. If your function takes different amounts of time to run for n = 1 and n = 1000000, it is not O(1), i.e. it doesn't have constant time complexity.
Let's calculate how many steps the first function takes to terminate:
n / 2^x = 1 ⇒ x = log₂(n)
The second, however, will theoretically keep dividing n by 2 forever; in practice it terminates after some log(n) + c steps, in which case the constant is omitted and the complexity is log(n) again.

Algorithmic Complexity of Multiplication

Before this gets accused of being a duplicate: I have looked everywhere on Stack Overflow for this answer and have not been able to find something that explains it to me, so please read the whole thing first.
Suppose you need to write a function that takes an integer n and returns the sum of the positive integers from 1..n (I will use C).
int sum_of_integers(int n) {
    int i, counter = 0;
    for (i = 1; i <= n; i++)
        counter += i;
    return counter;
}
Obviously, this algorithm runs in O(n) time, since the number of instructions it executes is proportional to the input size n.
However, consider this implementation, which uses the mathematical identity 1 + ... + n = n(n+1)/2.
int sum_of_integers(int n) {
    // Trying to avoid potential int overflow
    // and taking int division into account
    if (n % 2 == 0)
        return (n / 2) * (n + 1);
    return ((n + 1) / 2) * n;
}
My question: since multiplication technically costs more than O(1), is the first implementation preferred? Is the second implementation considered to be O(1) or not?
To me, the second implementation should be O(1), because no matter the size of n, the same operations are performed the same number of times. On the other hand, it may actually be running more instructions, depending on how the multiplication operator is implemented.
Schoolbook multiplication takes O(b²) time, where b is the number of bits in the numbers, so using the formula n(n+1)/2 takes O((log n)²) time, which is much faster than O(n).
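A quick check (my addition) that the loop and the closed-form implementation agree:

#include <stdio.h>

int sum_loop(int n) {    /* the O(n) implementation */
    int counter = 0;
    for (int i = 1; i <= n; i++)
        counter += i;
    return counter;
}

int sum_formula(int n) { /* the closed-form implementation */
    return (n % 2 == 0) ? (n / 2) * (n + 1) : ((n + 1) / 2) * n;
}

int main(void) {
    for (int n = 0; n <= 1000; n++)
        if (sum_loop(n) != sum_formula(n))
            printf("mismatch at n = %d\n", n);
    return 0;
}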

Finding the greatest prime factor using recursion in C

I have written the code for what I see as a good algorithm for finding the greatest prime factor of a large number using recursion. My program crashes with any number greater than 4 assigned to the variable huge_number, though. I am not good with recursion, and the assignment does not allow any sort of loop.
#include <stdio.h>

long long prime_factor(int n, long long huge_number);

int main(void)
{
    int n = 2;
    long long huge_number = 60085147514LL;
    long long largest_prime = 0;

    largest_prime = prime_factor(n, huge_number);
    printf("%lld\n", largest_prime);
    return 0;
}

long long prime_factor(int n, long long huge_number)
{
    if (huge_number / n == 1)
        return huge_number;
    else if (huge_number % n == 0)
        return prime_factor(n, huge_number / n);
    else
        return prime_factor(n++, huge_number);
}
Any info as to why it is crashing and how I could improve it would be greatly appreciated.
Even after fixing the post-increment problem (which makes the recursion continue forever), this is not a good fit for a recursive solution - see here for why, but it boils down to how fast you can reduce the search space.
While your division of huge_number whittles it down pretty fast, the vast majority of recursive calls are made by simply incrementing n. That means you're going to use a lot of stack space.
You would be better off either:
using an iterative solution where you won't blow out the stack (if you just want to solve the problem) (a); or
finding a more suitable problem for recursion if you're just trying to learn recursion.
(a) An example of such a beast, modeled on your recursive solution, is:
#include <stdio.h>
long long prime_factor_i (int n, long long huge_number) {
while (n < huge_number) {
if (huge_number % n == 0) {
huge_number /= n;
continue;
}
n++;
}
return huge_number;
}
int main (void) {
int n = 2;
long long huge_number = 60085147514LL;
long long largest_prime = 0;
largest_prime = prime_factor_i (n, huge_number);
printf ("%lld\n", largest_prime);
return 0;
}
As can be seen from the output of that iterative solution, the largest factor is 10976461. That means the final batch of recursions in your recursive solution would require a stack depth of ten million stack frames, not something most environments will contend with easily.
If you really must use a recursive solution, you can reduce the stack space to the square root of that by using the fact that you don't have to check all the way up to the number, but only up to its square root.
In addition, other than 2, every other prime number is odd, so you can further halve the search space by only checking two plus the odd numbers.
A recursive solution taking those two things into consideration would be:
long long prime_factor_r(int n, long long huge_number) {
    // Debug code for level checking.
    // static int i = 0;
    // printf("recursion level = %d\n", ++i);

    // Only check up to the square root
    // (the cast avoids int overflow in n * n).
    if ((long long)n * n >= huge_number)
        return huge_number;

    // If it's a factor, reduce the number and try again.
    if (huge_number % n == 0)
        return prime_factor_r(n, huge_number / n);

    // Select the next "candidate" prime to check against:
    // 2 -> 3, 2n+1 -> 2n+3 for all n >= 1.
    if (n == 2)
        return prime_factor_r(3, huge_number);
    return prime_factor_r(n + 2, huge_number);
}
You can see I've also removed the (awkward, in my opinion) construct:
if something then
return something
else
return something else
I much prefer the less massively indented code that comes from:
if something then
return something
return something else
But that's just personal preference. In any case, that gets your recursion level down to 1662 (uncomment the debug code to verify) rather than ten million, a rather sizable reduction but still not perfect. That runs okay in my environment.
You meant n+1 instead of n++. n++ increments n after using it, so the recursive call gets the original value of n.
You are overflowing the stack, because n++ post-increments the value, making the recursive call with the same values as in the current invocation.
The crash reason is a stack overflow. I added a counter to your program and executed it (on Ubuntu 10.04, gcc 4.4.3); the counter stopped at 218287 before the core dump. The better solution is to use a loop instead of recursion.
