Is the time complexity of this program O(1)?
void f1(int n) {
    int temp = n;
    while (temp /= 2)
    {
        len++; result += m;   // len, result, m are assumed declared elsewhere
    }
}
And if we change int temp to double temp, does the time complexity change as well, or will it stay the same?
void f2(int n) {
    double temp = (double)n;
    while (temp /= 2)
    {
        len++; result += m;
    }
}
The answer for the integer part is O(log n) because the value is halved each time.
The double version starts the same way, except that when the value gets close to 1 it doesn't stop: it keeps dividing until underflow makes it exactly 0. From that point down, the number of divisions is fixed by the floating-point format, not by n.
I've made a small, empirically calibrated program which tries to predict the number of iterations:
#include <stdio.h>
#include <math.h>
void f2(int n){
    int len = 0;
    double temp = (double)n;
    while (temp /= 2)
    {
        len++;
    }
    // 1.53 is an empirical constant; maybe it could be theoretically computed.
    // It was just chosen to make the numbers match.
    printf("%d %lf\n", len, log(n)*1.53 + 1074);
}

int main()
{
    f2(100000000);
    f2(10000000);
    f2(1000000);
    f2(10000);
    f2(100);
    f2(1);
}
I get:
1101 1102.183642
1097 1098.660686
1094 1095.137731
1087 1088.091821
1081 1081.045910
1074 1074.000000
So the complexity is O(log n) plus a fixed number of extra iterations that depends on the machine's floating-point format.
(My apologies for the empirical nature of this answer; I'm not a floating-point expert.)
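That said, the 1074 intercept can be derived rather than fitted. Assuming IEEE-754 binary64 doubles, halving 1.0 passes through 1022 normal binary exponents and then 52 subnormal steps before the result rounds to zero. A minimal check:
#include <stdio.h>

int main(void) {
    double x = 1.0;
    int steps = 0;
    // 1022 halvings through the normal exponent range, then 52 more
    // through the subnormals, before the value rounds to zero.
    while (x /= 2)
        steps++;
    printf("%d\n", steps);   // prints 1074, matching f2(1) above
    return 0;
}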
For an algorithm to have constant time complexity, its runtime should stay constant as the input n grows. If your function takes different amounts of time to run for n = 1 and for n = 1000000, it is not O(1), i.e. it doesn't have constant time complexity.
Let's calculate how many steps the first function takes to terminate:
n / 2^x = 1 ⇒ x = log₂(n)
For the second, mathematically the value would keep being divided by 2 forever, but in practice it terminates after some log(n) + c steps, where c is a machine-dependent constant. The constant is dropped in big-O notation, so the complexity is O(log n) again.
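As a sanity check on the first count, here is a hypothetical helper that counts the iterations of the integer loop; it returns floor(log₂(n)) for n >= 1:
int halvings(int n) {
    // Counts how many times `while (n /= 2)` iterates.
    int steps = 0;
    while (n /= 2)
        steps++;
    return steps;
}
// halvings(8) == 3; halvings(1000000) == 19, since 2^19 <= 10^6 < 2^20.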
Related
Below, the purpose of the code is to compute power of an integer.
My friend told me that the time complexity of this algorithm is O(log n).
But, in fact, the number of function calls is not equal to log n.
For example, power(2, 9) calls the power function 5 times (including the call power(2, 9)), while power(2, 8) calls it 4 times (including the call power(2, 8)).
Even though 8 and 9 need the same number of bits, the numbers of function calls are different.
Why does this happen? Is this really an O(log n) algorithm?
#include <stdio.h>

int power(int a, int n) {
    if (n == 0) {
        return 1;
    }
    if (n == 1) {
        return a;
    }
    if (n % 2 == 0) {
        return power(a*a, n/2);
    } else {
        return a * power(a, n - 1);
    }
}

int main() {
    for (int i = 0; i < 15; i++)
        printf("pow(%d, %d) = %d\n", 2, i, power(2, i));
    return 0;
}
int main() {
for (int i = 0; i < 15; i++)
printf("pow(%d, %d) = %d\n", 2, i, power(2, i));
return 0;
}
Your implementation is O(log N), but it could be made slightly more efficient.
Note that hereafter, log means log base 2.
You have at most log(n) calls of power(a*a, n/2), plus a call of power(a, n-1) for every bit set in n.
The number of bits set in n is at most log(n) + 1.
Thus, the number of calls to power is at most log(n) + log(n) + 1. For instance, when n = 15, the sequence of calls is
power(15), power(14), power(7), power(6), power(3), power(2), power(1)
log(n)+log(n)+1 = 3+3+1 = 7
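As a quick check, here is a hypothetical instrumented copy of your function (the names power_c and calls are ours) that counts how many times it is entered:
static int calls = 0;

int power_c(int a, int n) {
    calls++;                      // count every entry
    if (n == 0) return 1;
    if (n == 1) return a;
    if (n % 2 == 0) return power_c(a*a, n/2);
    return a * power_c(a, n - 1);
}
After power_c(2, 15), calls == 7, matching the sequence above.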
Here is a more efficient implementation that has only log(n)+2 calls of power.
int power(int a, int n) {
    if (n == 0) {
        return 1;
    }
    if ((n & 1) == 0) {   // parentheses are required: == binds tighter than &
        return power(a*a, n/2);
    } else {
        return a * power(a*a, n/2);
    }
}
In this case the sequence of calls when n = 15 is
power(15), power(7), power(3), power(1), power(0)
I removed the if (n == 1) condition because that test would be performed log(n) times, and we can avoid it by adding just one extra call to power.
We then have log(n) + 2 calls to power, which is better than 2·log(n) + 1.
The reason why the algorithm remains Ο(lg N) even with the extra calls for the odd-number case is that the number of extra calls is bounded by a constant factor. In the worst case, N/2 is odd at each iteration, but this would only double the number of calls (the constant is 2). That is, at worst, there will be 2 lg N calls to complete the algorithm.
To more easily observe that the algorithm is Ο(lgN), you can rewrite the function to always reduce the power by half at each iteration, so that at worst case, there are only lgN calls. To leverage tail recursion, you can add a function parameter to accumulate the carried multiplier from the odd N.
int power_i (int a, unsigned N, int c) {
    if (N == 0) return c;
    return power_i(a*a, N/2, N%2 ? a*c : c);
}

int power (int a, unsigned N) {
    return power_i(a, N, 1);
}
The advantage of tail recursion is that the optimized code will be converted into a simple loop by most modern C compilers.
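For illustration, here is a sketch of the loop a compiler effectively produces from power_i (the name power_loop is ours; same math, one bit of N consumed per iteration):
int power_loop(int a, unsigned N) {
    int c = 1;              // accumulator, as in power_i
    while (N != 0) {
        if (N % 2)
            c *= a;         // fold in the low bit of N
        a *= a;             // square the base
        N /= 2;             // move to the next bit
    }
    return c;
}
power_loop(2, 9) returns 512, just like power(2, 9).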
The power function has two base cases: n = 0 and n = 1.
The power function has two recursive calls. Only one of them is made in any given call.
Let's first consider the case when n is even: In that case, the recursive call is made with n / 2.
If all calls used this case, then you would halve n in each call until you reach 1. That is indeed log(n) calls (plus 1 for the base case).
The other case, when n is odd, reduces n only by one. If all calls ended up using this recursive call, then the function would be called n times; that is clearly linear, not logarithmic.
But what happens to an odd number when you subtract one from it? It becomes an even number. Thus the feared linear behaviour mentioned above cannot occur.
The worst case is: n is odd, thus use the second recursive call. Now n is even, thus use the first. Now n is odd again, thus the second, ... and so on down until n is one. In that case every second call reduces n to n / 2, so you need 2·log(n) calls (plus one for the base case).
So yes, this is in O(log(n)). This algorithm is often called binary exponentiation.
Let F(n) = 0.5·F(n-1) with F(0) = 1.
a. Write a function fun1, a recursive function, to evaluate the nth term.
b. Write a function fun2, a non-recursive function, to evaluate the nth term.
c. What is the time complexity of fun1, and from which n is it better to use fun1 vs. fun2 with regard to space complexity?
In general, the function evaluates the nth term of the sequence {1, 1/2, 1/4, 1/8, ...}.
a.
double fun1( int n ){
    if (n == 0)
        return 1;
    else
        return 0.5*fun1(n-1);
}
b.
double fun2( int n ){
    double sum = 1;
    int i;
    for (i = 0; i < n; i++)
        sum = sum * 0.5;
    return sum;
}
c. Intuitively, and mathematically (the function performs one multiplication per step down to 0), we can show that it is O(n).
Is there another way?
How should I address space complexity?
While your versions of fun1 and fun2 differ in space complexity, their time complexity is the same: O(n).
However, the non-recursive function can also be written as:
#include <math.h>

double fun2(int n) {
    return pow(0.5, n);
}
This function is of space and time complexity O(1) and will be more efficient for most n (probably n > 5).
As for the original question: it's tricky, as the answer depends on compiler optimization:
A naive implementation of fun1 has space complexity O(n), since a call of fun1(n) has a recursion depth of n and therefore requires n call frames on the stack. On most systems it will only work up to a certain n; beyond that you get a stack overflow error because the stack has a limited size.
An optimizing compiler will recognize that the recursion can be eliminated (the multiplication can be folded into an accumulator, making the call a tail call) and will turn it into something very close to fun2, which has space complexity O(1): it uses a fixed number of fixed-size variables, independent of n, and no recursion.
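A hand-written version of that transformation (the name fun1_acc is ours) makes the tail call explicit:
double fun1_acc(int n, double acc) {
    // The multiplication is folded into the accumulator, so the
    // recursive call is the last operation (a true tail call).
    if (n == 0)
        return acc;
    return fun1_acc(n - 1, 0.5 * acc);
}
fun1(n) and fun1_acc(n, 1.0) compute the same value, but the latter is trivially turned into a loop.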
I understand that this is a homework question, so I will not say anything about compiler optimizations and tail recursion, since that is not a property of the program itself; it depends on whether the compiler optimizes the recursive function.
Your first approach is clearly O(n), since it calls fun1 recursively n times and all it does besides that is a multiplication.
Your second approach is also clearly O(n), since it is just a simple loop.
So as for time complexity, both are the same: O(n).
As for space complexity, fun1 needs n function records, so it has O(n) space complexity, while fun2 only needs one variable, so it has O(1) space complexity. So as for space complexity, fun2 is the better approach.
For a recursive and iterative approach the complexity can be reduced to O(log n):
The recursive depth of the following solution is log n:
double fun3( int n ){
    double f;
    if ( n == 0 )
        return 1.0;
    f = fun3( n/2 );                      // halve the exponent
    return f * f * (n % 2 ? 0.5 : 1.0);   // square, fix up odd n
}
The number of iterations in the following loop is log n, too:
double fun4( int n ){
    double f = 1.0, base = 0.5;
    int i;
    // Square-and-multiply: one bit of i is consumed per iteration. The
    // odd-bit factor must be applied at every step, not just once for n.
    for (i = n; i > 0; i /= 2) {
        if (i % 2)
            f *= base;
        base *= base;
    }
    return f;
}
You can answer this yourself if you take a look at the generated code: https://godbolt.org/z/Gd9XxM
It is very likely that the optimizing compiler will remove the tail recursion.
Space and time complexity strongly depend on the optimization options (try -Os, -O0).
I'm trying to optimize a function that, given an array of N ints, returns the minimum difference between an element and the previous one. Obviously the function only makes sense for arrays with dimension >= 2.
For example, given the array {2,5,1}, function returns -4 .
I tried to write my code, but I think it is really intricate.
#include <stdio.h>

#define N 4

/* Function for the difference; works because main already passes in
   the first difference */
int minimodiff(int *a, int n, int diff) {
    if (n == 1) {
        return diff;
    }
    if (diff > (*(a+1) - *a))
        return minimodiff(a+1, n-1, *(a+1) - *a);
    else
        return minimodiff(a+1, n-1, diff);
}

int main() {
    int a[N] = {1,8,4,3};
    printf("%d", minimodiff(a+1, N-1, *(a+1) - *a));
}
I wonder if there is a way to avoid to pass the first difference in main, but doing everything in the recursive function.
I can use as header file stdio.h / stdlib.h / string.h / math.h . Thanks a lot for the help, I hope that this can give me a better understanding of the recursive functions.
minimodiff(a+1, N-1, *(a+1)-*a) is a weak use of recursion, for it needs a recursion depth of N, which can easily exceed the system's stack limit. In such a case, a simple loop would suffice.
A good recursive approach would halve the problem at each call, finding the minimum of the left half and the right half. It may not run faster, but the maximum depth of recursion would be log2(N).
#include <limits.h>
#include <stdio.h>

// n is the number of array elements
int minimodiff2(const int *a, size_t n) {
    if (n == 2) {
        return a[1] - a[0];
    } else if (n <= 1) {
        return INT_MAX;
    }
    int left = minimodiff2(a, n/2 + 1); // +1 to include a[n/2] in both halves
    int right = minimodiff2(a + n/2, n - n/2);
    return (left < right) ? left : right;
}

int main() {
    int a[] = {1,8,4,3};
    printf("%d", minimodiff2(a, sizeof a / sizeof a[0]));
}
When doing a min calculation, recursive or otherwise, it makes the initial condition simpler if you set the min to the highest possible value. If you were using floating point numbers it would be Infinity. Since you're using integers, it's INT_MAX from limits.h which is defined as the highest possible integer. It is guaranteed to be greater than or equal to all other integers.
If you were doing this iteratively, with loops, you'd initially set diff = INT_MAX. Since this is recursion, INT_MAX is what gets returned when recursion is done.
#include <limits.h>

static inline int min( const int a, const int b ) {
    return a < b ? a : b;
}

int minimodiff( const int *a, const size_t size ) {
    if( size <= 1 ) {
        return INT_MAX;
    }
    int diff = a[1] - a[0];
    return min( minimodiff(a+1, size-1), diff );
}
The recursive approach is a bad idea, because extra memory and function calls are used.
Anyway, your question is about avoiding the first difference.
You can use a sentinel.
Since the parameter diff is an int variable, it is not possible to obtain a value greater than INT_MAX.
Thus, your first call to minimodiff can pass the value INT_MAX as the argument corresponding to diff.
Besides, the standard header limits.h must be #include'd at the top, to make the INT_MAX macro visible.
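A sketch of that first call, reusing the question's minimodiff and its N macro (the whole array is passed now, so every adjacent difference is examined):
#include <limits.h>
#include <stdio.h>

int main() {
    int a[N] = {1, 8, 4, 3};
    // INT_MAX is the sentinel: the first real difference replaces it.
    printf("%d", minimodiff(a, N, INT_MAX));
}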
I have written a program to print all the permutations of a string using the backtracking method.
#include <stdio.h>

/* Function to swap values at two pointers */
void swap (char *x, char *y)
{
    char temp;
    temp = *x;
    *x = *y;
    *y = temp;
}

/* Function to print permutations of a string.
   This function takes three parameters:
   1. String
   2. Starting index of the string
   3. Ending index of the string. */
void permute(char *a, int i, int n)
{
    int j;
    if (i == n)
        printf("%s\n", a);
    else
    {
        for (j = i; j <= n; j++)
        {
            swap((a+i), (a+j));
            permute(a, i+1, n);
            swap((a+i), (a+j)); //backtrack
        }
    }
}

/* Driver program to test above functions */
int main()
{
    char a[] = "ABC";
    permute(a, 0, 2);
    getchar();
    return 0;
}
What would be the time complexity here? Isn't it O(n²)? How do you determine the time complexity in the case of recursion? Correct me if I am wrong.
Thanks.
The complexity is O(N·N!): you have N! permutations, and you generate all of them.
In addition, each permutation requires you to print it, which is O(N), so the total is O(N·N!).
My answer is going to focus on methodology since that's what the explicit question is about. For the answer to this specific problem see others' answer such as amit's.
When you are trying to evaluate complexity on algorithms with recursion, you should start counting just as you would with an iterative one. However, when you encounter recursive calls, you don't know yet what the exact cost is. Just write the cost of the line as a function and still count the number of times it's going to run.
For example (note that this code is contrived; it's just here for the example and does not do anything meaningful):
int f(int n){            // Note total cost as C(n)
    if(n==1) return 0;   // Runs once, constant cost
    int i;
    int result = 0;      // Runs once, constant cost
    for(i=0;i<n;i++){
        int j;
        result += i;     // Runs n times, constant cost
        for(j=0;j<n;j++){
            result+=i*j; // Runs n^2 times, constant cost
        }
    }
    result+= f(n/2);     // Runs once, cost C(n/2)
    return result;
}
Adding it up, you end up with a recursive formula like C(n) = n^2 + n + 1 + C(n/2) and C(1) = 1. The next step is to try and change it to bound it by a direct expression. From there depending on your formula you can apply many different mathematical tricks.
For our example, for n >= 2:
C(n) <= 2n^2 + C(n/2)
Since C is monotone, let's consider C'(p) = C(2^p):
C'(p) <= 2·2^(2p) + C'(p-1)
This is a typical sum expression; unrolling it and bounding every term by the largest one gives:
C'(p) <= 2p·2^(2p) + C'(0)
Turning back to n = 2^p, that is:
C(n) <= 2·log(n)·n^2 + C(1)
Hence the runtime is in O(n^2 log n). (In fact the sum is geometric, so it is dominated by its largest term and the tight bound is O(n^2); the looser bound is enough to illustrate the method.)
The exact number of recursive calls made by this program (for a string of length N) is as follows: the top-level loop makes N calls, each of those starts N-1 more, and so on.
So the number of calls is N + N(N-1) + N(N-1)(N-2) + ... + N(N-1)···2 (the series ends with the factor 2, since the calls below that level just print and return),
or N(1 + (N-1)(1 + (N-2)(1 + (N-3)(1 + ... 3(1+2) ... )))).
Which is roughly 2·N! (the sum tends to (e-1)·N! ≈ 1.72·N!).
Adding a counter in the for loop (and removing the printf) matches the formula:
N=3 : 9
N=4 : 40
N=5 : 205
N=6 : 1236
...
So, counting recursive calls, the time complexity is O(N!); including the O(N) cost of printing each permutation gives O(N·N!), as noted above.
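A sketch of the instrumentation used for the table (the names permute_count and count are ours; swap is the question's function, and the printf is removed):
static long count = 0;

void permute_count(char *a, int i, int n) {
    if (i == n)
        return;                   // a complete permutation; nothing counted
    for (int j = i; j <= n; j++) {
        count++;                  // one increment per recursive call made
        swap(a + i, a + j);
        permute_count(a, i + 1, n);
        swap(a + i, a + j);       // backtrack
    }
}
For "ABC" (N = 3), permute_count(a, 0, 2) leaves count == 9, the first value in the table above.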
I have written code for what I believe is a good algorithm for finding the greatest prime factor of a large number using recursion. My program crashes with any number greater than 4 assigned to the variable huge_number, though. I am not good with recursion, and the assignment does not allow any kind of loop.
#include <stdio.h>

long long prime_factor(int n, long long huge_number);

int main (void)
{
    int n = 2;
    long long huge_number = 60085147514;
    long long largest_prime = 0;

    largest_prime = prime_factor(n, huge_number);
    printf("%lld\n", largest_prime);

    return 0;
}

long long prime_factor (int n, long long huge_number)
{
    if (huge_number / n == 1)
        return huge_number;
    else if (huge_number % n == 0)
        return prime_factor (n, huge_number / n);
    else
        return prime_factor (n++, huge_number);
}
Any info as to why it is crashing and how I could improve it would be greatly appreciated.
Even after fixing the post-increment problem (which makes the recursion continue forever), this is not a good fit for a recursive solution - see here for why, but it boils down to how fast you can reduce the search space.
While your division of huge_number whittles it down pretty fast, the vast majority of recursive calls are made by simply incrementing n. That means you're going to use a lot of stack space.
You would be better off either:
using an iterative solution where you won't blow out the stack (if you just want to solve the problem) (a); or
finding a more suitable problem for recursion if you're just trying to learn recursion.
(a) An example of such a beast, modeled on your recursive solution, is:
#include <stdio.h>

long long prime_factor_i (int n, long long huge_number) {
    while (n < huge_number) {
        if (huge_number % n == 0) {
            huge_number /= n;
            continue;
        }
        n++;
    }
    return huge_number;
}

int main (void) {
    int n = 2;
    long long huge_number = 60085147514LL;
    long long largest_prime = 0;

    largest_prime = prime_factor_i (n, huge_number);
    printf ("%lld\n", largest_prime);

    return 0;
}
As can be seen from the output of that iterative solution, the largest factor is 10976461. That means the final batch of recursions in your recursive solution would require a stack depth of ten million stack frames, not something most environments will contend with easily.
If you really must use a recursive solution, you can reduce the stack space to the square root of that by using the fact that you don't have to check all the way up to the number, but only up to its square root.
In addition, other than 2, every other prime number is odd, so you can further halve the search space by only checking two plus the odd numbers.
A recursive solution taking those two things into consideration would be:
long long prime_factor_r (int n, long long huge_number) {
    // Debug code for level checking.
    // static int i = 0;
    // printf ("recursion level = %d\n", ++i);

    // Only check up to the square root. The cast avoids int overflow in
    // n * n, and ">" (rather than ">=") keeps perfect squares correct.
    if ((long long)n * n > huge_number)
        return huge_number;

    // If it's a factor, reduce the number and try again.
    if (huge_number % n == 0)
        return prime_factor_r (n, huge_number / n);

    // Select next "candidate" prime to check against, 2 -> 3,
    // 2n+1 -> 2n+3 for all n >= 1.
    if (n == 2)
        return prime_factor_r (3, huge_number);
    return prime_factor_r (n + 2, huge_number);
}
You can see I've also removed the (awkward, in my opinion) construct:
if something then
return something
else
return something else
I much prefer the less massively indented code that comes from:
if something then
return something
return something else
But that's just personal preference. In any case, that gets your recursion level down to 1662 (uncomment the debug code to verify) rather than ten million, a rather sizable reduction but still not perfect. That runs okay in my environment.
You meant n+1 instead of n++. n++ increments n after using it, so the recursive call gets the original value of n.
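That is, the last branch should read:
else
    return prime_factor (n + 1, huge_number);  /* n + 1, not n++ */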
You are overflowing the stack, because n++ post-increments the value, making a recursive call with the same values as in the current invocation.
The crash reason is a stack overflow. I added a counter to your program and executed it (on Ubuntu 10.04, gcc 4.4.3); the counter stopped at 218287 before the core dump. The better solution is to use a loop instead of recursion.