Is it cheating to use 'static' when writing a recursive algorithm? - c

As part of a programming assignment, I'm required to write a recursive function which determines the largest integer in an array. To quote the exact task:
Write a recursive function that finds the largest number in a given list of
integers.
I have come up with two solutions, the first of which makes two recursive calls:
int largest(int arr[], int length){
    if(length == 0)
        return 0;
    else if(arr[length - 1] > largest(arr, length - 1))
        return arr[length];
    else
        return largest(arr, length - 1);
}
The second one makes only one, however it uses a static variable n:
int largest(int arr[], int length){
    static int n = -1;
    if(length == 0)
        return n;
    else if (arr[length - 1] > n)
        n = arr[length - 1];
    return largest(arr, length - 1);
}
I was wondering whether it would be considered cheating to use static variables for such a task. Either way, which one is considered better form? Is there a recursive method which tops both?

I wouldn't say that it's cheating to use static variables this way - I'd say that it's incorrect. :-)
Imagine that you call this function multiple times on a number of different arrays. With the static variable introduced, the value of n never resets between calls, so you may end up returning the wrong value. Generally speaking, it's usually poor coding style to set things up like this, since it makes it really easy to get the wrong answer. Additionally, if your array contains only negative values, you may return -1 as the answer even though -1 is actually bigger than everything in the array.
I do think that the second version has one nice advantage over the first - it's much, much faster because it makes only one recursive call rather than two. Consider using the first version, but updating it so that you cache the value returned by the recursive call so that you don't make two calls. This will exponentially speed up the code; the initial version takes time Θ(2^n), while the updated version would take time Θ(n).
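To illustrate that caching idea, here is a minimal sketch (my own, not from the question): the result of the single recursive call is stored in a local variable, and the base case is a one-element array, which also avoids the wrong answer for empty or all-negative arrays.
int largest(int arr[], int length){
    if (length == 1)
        return arr[0];                     /* base case: one element */
    int rest = largest(arr, length - 1);   /* one recursive call, result cached */
    return arr[length - 1] > rest ? arr[length - 1] : rest;
}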

There is nothing cheating about using a static variable inside a function, recursive or otherwise.
There can be many good reasons to do so, but in your case I suspect that you have come up with a wrong solution, in that largest will only work once in the lifetime of the program running it.
Consider the following (pseudo) code:
main() {
largest([ 9, 8, 7]) // would return 9 -- OK
largest([ 1, 2, 3]) // would return 9 ?? bad
}
The reason being that your largest cannot tell the difference between the two calls, but if that is what you want then that is fine.
Edit:
In answer to your comment, something like this will have a better big-O than your initial code:
int largest(int arr[], int length){
    int split, lower, upper;
    switch (length) {
    case 1: return arr[0];
    case 2: if (arr[1] > arr[0]) return arr[1]; else return arr[0];
    default:
        if (length <= 0) abort();   /* C has no exceptions; abort() (from <stdlib.h>) on an empty array */
        split = length / 2;
        lower = largest(arr, split);
        upper = largest(arr + split, length - split);
        if (lower > upper) return lower; else return upper;
    }
}
Alternatively, the obvious solution is:
int largest(int arr[], int length){
    if (length <= 0) abort();   /* again, C has no exceptions; an empty array is an error */
    int max = arr[0];
    for (int i = 1; i < length; i++)
        if (arr[i] > max) max = arr[i];
    return max;
}
which has no recursion at all.

It is actually a terrible design, because on the second execution the function does not return a correct result.
I don't think you need to debate whether it is cheating, if it is wrong.
The first version is also incorrect, because you return arr[length] instead of arr[length-1]. You can eliminate the second recursive call. What can you do instead of calling the same function (with no side-effects) twice with the same arguments?

In addition to the excellent points in the three prior answers, you should practice having more of a recursion-based mind. (1) Handle the trivial case. (2) For a non-trivial case, make a trivial reduction in the task and recur on the (smaller) remaining problem.
I propose that your proper base case is a list of one item: return that item. An empty list has no largest element.
For the recursion case, check the first element against the max of the rest of the list; return the larger. In near-code form, this looks like the below. It makes only one recursive call, and has only one explicit local variable -- and that is to serve as an alias for the recursion result.
int largest(int arr[], int length){
    if(length == 1)
        // if only one element, return it
        return arr[0];
    int n = largest(arr + 1, length - 1);
    // return the larger of the first element or the remaining largest.
    return arr[0] > n ? arr[0] : n;
}

Is there a recursive method which tops both?
Recursion gets a bad name when N elements cause a recursion depth of N, as with return largest(arr, length - 1);
To avoid this, ensure the length is halved on each recursion.
The maximum recursion depth is then O(log2(N)).
#include <limits.h>

int largest(int arr[], int length) {
    if (length <= 0) return INT_MIN;
    int big = arr[0];
    while (length > 1) {
        int length_r = length / 2;
        int length_l = length - length_r;
        int big_r = largest(&arr[length_l], length_r);
        if (big_r > big) big = big_r;
        length = length_l;
    }
    return big;
}
A sneaky and fast method barely uses recursion at all, since finding the max is trivial with a loop.
int largest(int arr[], int length) {
    if (length <= 0) return INT_MIN;
    int max = largest(NULL, -1);   /* the one recursive call: immediately returns INT_MIN */
    while (length) {
        length--;
        if (arr[length] > max) max = arr[length];
    }
    return max;
}

Related

Fibonacci using Recursion

This is my idea for solving 'nth term of fibonacci series with least processing power':
#include <stdio.h>

int fibo(int n, int a, int b){
    return (n > 0) ? fibo(n - 1, b, a + b) : a;
}

int main(){
    printf("5th term of fibo is %d", fibo(5 - 1, 0, 1));
}
To print all the terms, till nth term,
int fibo(int n, int a, int b){
printf("%d ", a);
return (n>0)? fibo(n-1, b, a+b): a;
}
I showed this code to my university professor and, as per her, this is a wrong approach to solving the Fibonacci problem, as it does not abstract the method. I should have the function be called as fibo(n) and not fibo(n, 0, 1). This wasn't a satisfactory answer to me, so I thought of asking experts on SOF.
It has its own advantage over traditional methods of solving Fibonacci problems. The technique where we employ two parallel recursions to get the nth term of Fibonacci (fibo(n-1) + fibo(n-2)) might be slow to give the 100th term of the series, whereas my technique will be a lot faster even in the worst case.
To abstract it, I could use default parameters, but C does not support them. Although I can use something like:
int fibo(int n){return fiboN(n - 1, 0, 1);}
int fiboN(int n, int a, int b){return (n>0)? fiboN(n-1, b, a+b) : a;}
But will it be enough to abstract the whole idea? How should I convince others that the approach isn't wrong (although a bit vague)?
(I know this isn't the sort of question that I should ask on SOF, but I just wanted to get advice from experts here.)
With the understanding that the base case in your recursion should be a rather than 0, this seems to me to be an excellent (although not optimal) solution. The recursion in that function is tail recursion, so a good compiler will be able to avoid stack growth, making the function O(1) space and O(n) time (ignoring the rapid growth in the size of the numbers).
Your professor is correct that the caller should not have to deal with the correct initialisation. So you should provide an external wrapper which avoids the need to fill in the values.
int fibo(int n, int a, int b) {
    return n > 0 ? fibo(n - 1, b, a + b) : a;
}
int fib(int n) { return fibo(n, 0, 1); }
However, it could also be useful to provide and document the more general interface, in case the caller actually wants to vary the initial values.
By the way, there is a faster computation technique, based on the recurrence
fib(a + b - 1) = fib(a)fib(b) + fib(a - 1)fib(b - 1)
Replacing b with b + 1 yields:
fib(a + b) = fib(a)fib(b + 1) + fib(a - 1)fib(b)
Together, those formulas let us compute:
fib(2n - 1) = fib(n + n - 1)
= fib(n)² + fib(n - 1)²
fib(2n) = fib(n + n)
= fib(n)fib(n + 1) + fib(n - 1)fib(n)
= fib(n)² + 2fib(n)fib(n - 1)
This allows the computation to be performed in O(log n) steps, with each step producing two consecutive values.
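As a rough sketch of how those identities turn into code (my own illustration, not code from the answer; the helper name fib_pair and the use of long long are assumptions, and overflow for large n is ignored). The fib(2k) line rewrites the formula above using fib(k-1) = fib(k+1) - fib(k):
/* Returns fib(n) and stores fib(n + 1) in *next, using the doubling identities above. */
static long long fib_pair(int n, long long *next) {
    if (n == 0) {
        *next = 1;                          /* fib(0) = 0, fib(1) = 1 */
        return 0;
    }
    long long b;
    long long a = fib_pair(n / 2, &b);      /* a = fib(k), b = fib(k + 1), k = n / 2 */
    long long c = a * (2 * b - a);          /* fib(2k)     = fib(k)(2fib(k+1) - fib(k)) */
    long long d = a * a + b * b;            /* fib(2k + 1) = fib(k)^2 + fib(k+1)^2 */
    if (n % 2 == 0) {
        *next = d;
        return c;
    }
    *next = c + d;                          /* fib(2k + 2) = fib(2k) + fib(2k + 1) */
    return d;
}
A call such as fib_pair(n, &t) then yields fib(n) with O(log n) recursion depth, since n is halved at every level.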
With your approach the result will be 0: you just recurse until n = 0 and at that point return 0. But you also have to check when n == 1 and return 1. Also, you have the values a and b and you do nothing with them.
I would suggest looking at the following recursive function; maybe it will help to fix yours:
int fibo(int n){
if(n < 2){
return n;
}
else
{
return (fibo(n-1) + fibo(n-2));
}
}
It's a classical problem in studying recursion.
EDIT1: Following #Ely's suggestion, below is an optimized recursion with the memoization technique. Once a value from the list is calculated, it will not be recalculated again as in the first example; instead it is stored in the array and taken from there whenever it is required:
#define MAX_FIB_NUMBER 10   /* a macro, since C needs a constant expression for the array size */
int storeCalculatedValues[MAX_FIB_NUMBER] = {0};

int fibo(int n){
    if(storeCalculatedValues[n] > 0)
    {
        return storeCalculatedValues[n];
    }
    if(n < 2){
        storeCalculatedValues[n] = n;
    }
    else
    {
        storeCalculatedValues[n] = (fibo(n-1) + fibo(n-2));
    }
    return storeCalculatedValues[n];
}
Using recursion and with a goal of least processing power, an approach to solve fibonacci() is to have each call return 2 values: one via the return value and another via an int * parameter.
The usual idea with recursion is to have a top-level function perform a one-time preparation and check of parameters, followed by a local helper function written in a lean fashion.
The code below follows OP's idea of an int fibo(int n) and a helper int fiboN(int n, additional parameters).
The recursion depth is O(n) and the memory usage is also O(n).
#include <assert.h>

static int fib1h(int n, int *previous) {
    if (n < 2) {
        *previous = n - 1;
        return n;
    }
    int t;
    int sum = fib1h(n - 1, &t);
    *previous = sum;
    return sum + t;
}

int fibo1(int n) {
    assert(n >= 0); // Handle negatives in some fashion
    int t;
    return fib1h(n, &t);
}
#include <stdio.h>

int fibo(int n); // declaring the function

int main()
{
    int m;
    printf("Enter the number of terms you want:\n");
    scanf("%i", &m);
    fibo(m);
    for(int i = 0; i < m; i++){
        printf("%i,", fibo(i)); /* calling the function in a loop to get all terms */
    }
    return 0;
}

int fibo(int n)
{
    if(n == 0){
        return 0;
    }
    if(n == 1){
        return 1;
    }
    int nextTerm;
    nextTerm = fibo(n - 2) + fibo(n - 1); /* recursive case, function calling itself */
    return nextTerm;
}
solving 'nth term of fibonacci series with least processing power'
I probably do not need to explain to you the recurrence relation of a Fibonacci number. Your professor has given you a good hint, though.
Abstract away details. She is right. If you want the nth Fibonacci number, it suffices to tell the program just that: Fibonacci(n)
Since you aim for least processing power, your professor's hint is also suitable for a technique called memoization, which basically means: if you calculated the nth Fibonacci number once, just reuse the result; there is no need to redo the calculation. In the article you find an example for the factorial number.
For this you may want to consider a data structure in which you store the nth Fibonacci number; if that memory already holds a Fibonacci number, just retrieve it, otherwise store the calculated Fibonacci number in it.
By the way, didactically not helpful, but interesting: There exists also a closed form expression for the nth Fibonacci number.
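For reference (my addition, not part of the original answer), that closed form is Binet's formula: fib(n) = (φ^n - ψ^n) / √5, where φ = (1 + √5)/2 and ψ = 1 - φ = (1 - √5)/2. Note that evaluating it in floating point loses accuracy for moderately large n.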
This wasn't a satisfactory answer to me, so I thought of asking experts on SOF.
"Uh, you do not consider your professor an expert?" was my first thought.
As a side note, you can do the Fibonacci problem pretty much without recursion, making it the fastest approach I know. The code is in Java though:
public int fibFor(int n) {
    if (n < 2) return n;   // base cases: fib(0) = 0, fib(1) = 1
    int sum = 0;
    int left = 0;
    int right = 1;
    for (int i = 2; i <= n; i++) {
        sum = left + right;
        left = right;
        right = sum;
    }
    return sum;
}
Although #rici's answer is mostly satisfactory, I just wanted to share what I learnt solving this problem. So here's my understanding of finding Fibonacci numbers using recursion:
The traditional implementation fibo(n) { return (n < 2) ? n : fibo(n-1) + fibo(n-2); } is quite inefficient in terms of both time and space requirements. It unnecessarily builds up the stack. It requires O(n) stack space and O(r^n) time, where r = (√5 + 1)/2.
With the memoization technique suggested in #Simion's answer, we just create a permanent array instead of the dynamic stack created by the compiler at run time. So the memory requirement remains the same, but the time complexity reduces in an amortized way. But it is not helpful if we only need to use it once.
The approach I suggested in my question requires O(1) space and O(n) time. The time requirement can also be reduced here using the same memoization technique, in an amortized way.
From #rici's post, fib(2n) = fib(n)² + 2fib(n)fib(n - 1); as he suggests, the time complexity reduces to O(log n), and I suppose the stack growth is still O(n).
So my conclusion is that, if I did proper research, time complexity and space requirement cannot both be reduced simultaneously using recursive computation. To achieve both, the alternatives could be iteration, matrix exponentiation, or fast doubling.

C - Recursive function for minimum gap in array

I'm trying to optimize a function that, given an array of N ints, returns the minimum difference between an element and the previous one. Obviously the function is only for arrays with dimension >= 2.
For example, given the array {2,5,1}, the function returns -4.
I tried to write my code, but I think it is really intricate.
#include <stdio.h>
#define N 4

/* Function for the difference; works because in main I already give one difference */
int minimodiff(int *a, int n, int diff) {
    if (n == 1) {
        return diff;
    }
    if (diff > (*(a+1) - *a))
        return minimodiff(a+1, n-1, *(a+1) - *a);
    else
        return minimodiff(a+1, n-1, diff);
}

int main() {
    int a[N] = {1, 8, 4, 3};
    printf("%d", minimodiff(a+1, N-1, *(a+1) - *a));
}
I wonder if there is a way to avoid to pass the first difference in main, but doing everything in the recursive function.
I can use as header file stdio.h / stdlib.h / string.h / math.h . Thanks a lot for the help, I hope that this can give me a better understanding of the recursive functions.
minimodiff(a+1, N-1, *(a+1)-*a) is a weak way to use recursion, for it uses a recursion depth of N, which can easily overwhelm the system's depth limit. In such a case, a simple loop would suffice.
A good recursive approach would halve the problem at each call, finding the minimum of the left half and the right half. It may not run faster, but the maximum depth of recursion would be log2(N).
#include <limits.h>
#include <stdio.h>

// n is the number of array elements
int minimodiff2(const int *a, size_t n) {
    if (n == 2) {
        return a[1] - a[0];
    } else if (n <= 1) {
        return INT_MAX;
    }
    int left = minimodiff2(a, n/2 + 1); // +1 to include a[n/2] in both halves
    int right = minimodiff2(a + n/2, n - n/2);
    return (left < right) ? left : right;
}

int main() {
    int a[] = {1, 8, 4, 3};
    printf("%d", minimodiff2(a, sizeof a / sizeof a[0]));
}
When doing a min calculation, recursive or otherwise, it makes the initial condition simpler if you set the min to the highest possible value. If you were using floating point numbers it would be Infinity. Since you're using integers, it's INT_MAX from limits.h which is defined as the highest possible integer. It is guaranteed to be greater than or equal to all other integers.
If you were doing this iteratively, with loops, you'd initially set diff = INT_MAX. Since this is recursion, INT_MAX is what gets returned when recursion is done.
#include <limits.h>

static inline int min( const int a, const int b ) {
    return a < b ? a : b;
}

int minimodiff( const int *a, const size_t size ) {
    if( size <= 1 ) {
        return INT_MAX;
    }
    int diff = a[1] - a[0];
    return min( minimodiff(a+1, size-1), diff );
}
The recursive approach is a bad idea because extra memory and function calls are used.
Anyway, your question is about avoiding the first difference.
You can use a sentinel.
Since the parameter diff is an int variable, it is not possible to obtain a value greater than INT_MAX.
Thus, your first call to minimodiff can be done by giving the value INT_MAX as the argument corresponding to diff.
Besides, the standard header limits.h must be #include'd at the top, to make the INT_MAX macro visible.
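A minimal sketch of that sentinel-style first call, with the OP's minimodiff left unchanged (my own example, reusing the array from the question):
#include <limits.h>
#include <stdio.h>

int minimodiff(int *a, int n, int diff);   /* the OP's function, unchanged */

int main(void) {
    int a[4] = {1, 8, 4, 3};
    /* INT_MAX is the sentinel meaning "no difference seen yet",
       so main no longer has to compute the first difference itself. */
    printf("%d", minimodiff(a, 4, INT_MAX));
    return 0;
}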

this code i wrote to find the smallest number in an array using recursive functions [closed]

int smallest(int arr[],int i,int small,int n)
{
    if(i==n)
        return small;
    else if(small>=arr[i])
    {
        smallest(arr,i+1,arr[i],n);
    }
}
So the compiler says that control reaches the end of a non-void function. Any suggestions?
Rewrite it so that you can't fall out the end of the function without returning anything; e.g.
int smallest(int arr[],int i,int small,int n)
{
if (i == n)
return small;
else if(small >= arr[i])
return smallest(arr, i+1, arr[i], n);
else
return smallest(arr, i+1, small, n);
}
You also might want to provide an alternative-and-simpler call for user code to call which does the required initializations, in which case you might want to rename the above to basic_smallest and then create a new function
#include <limits.h>
int smallest(int arr[], int i)
{
return basic_smallest(arr, i, INT_MAX, 0);
}
Also, keep in mind that a recursive implementation such as this puts you at risk of blowing up your stack for large arrays. A non-recursive implementation might be safer:
int smallest(int arr[], int i)
{
    int n;
    int small = INT_MAX; /* start from INT_MAX so the first element always replaces it */
    for(n = 0 ; n < i ; ++n)
        small = (small >= arr[n] ? arr[n] : small);
    return small;
}
You're missing two return cases in your logic flow, and you're making this harder than it needs to be regardless.
Regarding the two missing return statements, they are noted below:
int smallest(int arr[],int i,int small,int n)
{
if(i==n)
return small;
else if(small>=arr[i])
{
smallest(arr,i+1,arr[i],n); // here
}
// and here
}
Both of these become a bit more obvious when you remove the worthless else usage. It isn't needed. If the prior if is true, the function already exited.
int smallest(int arr[],int i,int small,int n)
{
if(i==n)
return small;
if(small>=arr[i])
smallest(arr,i+1,arr[i],n); // here
// and here
}
Same problems, but now it should be visually more obvious the only place you actually return anything is when i == n. In neither case that follows (when small >= arr[i] is true OR false) is any return value provided.
The solution then, if keeping your existing interface, is to add the return AND the final case, thereby covering all logical paths to have a reachable return:
int smallest(int arr[],int i,int small,int n)
{
if(i==n)
return small;
if(small>=arr[i])
return smallest(arr,i+1,arr[i],n);
return smallest(arr,i+1,small,n); // note small. it's important
}
A Different Approach
Recursive smallest() is doable using only a sequence address and a length. You need not tag along that small value, nor do you need to tote around that index. Rather, you can simply use pointer arithmetic to move the base sequence address up until you exhaust your given elements, while adjusting the remaining sequence length that will eventually tell us we need to stop recursing. All of the relevant data is already kept for you in the recursion stack; you just need to know how to use it:
int smallest(const int arr[], size_t len)
{
if (len < 2) // 1
return *arr;
int small = smallest(arr+1, len-1); // 2
return (*arr < small ? *arr : small); // 3
}
Explanation of noted points below
Base bailout case. Just return whatever is at the start of the sequence if len < 2 is true. There is a missing pedantic step I left out here. It is conceivable someone can pass a sequence of zero length, and if that is the case, it should be considered a runtime error. Keep that in mind if this is to be anything besides an exercise (which it looks like it is).
Get the smallest value of the elements that follow the base element of *arr using recursion. Note that arr+1 is the first parameter. This means the recursed call will refer to the next element in the sequence as its base element (*arr). Also note len-1 is passed to note there are now one fewer elements left in the sequence. We rely on that to trigger the base case from (1) above once the sequence is exhausted and we need to stop recursing.
Ternary expression that essentially says, "if the base element is less than the smallest of all the elements that followed, return the base element, otherwise return the smallest element that followed."
A sample run appears below:
#include <stdio.h>

int smallest(const int arr[], size_t len)
{
    if (len < 2)
        return *arr;
    int small = smallest(arr+1, len-1);
    return (*arr < small ? *arr : small);
}

int main()
{
    int arr[] = { 7,3,5,1,9,2,4,6,8 };
    printf("Smallest : %d\n", smallest(arr, sizeof arr / sizeof *arr));
}
Output
Smallest : 1

What's the point of using linear search with sentinel?

My goal is to understand why adopting linear search with a sentinel is preferred over using a standard linear search.
#include <stdio.h>

int linearSearch(int array[], int length) {
    int elementToSearch;
    printf("Insert the element to be searched: ");
    scanf("%d", &elementToSearch);
    for (int i = 0; i < length; i++) {
        if (array[i] == elementToSearch) {
            return i; // I found the position of the element requested
        }
    }
    return -1; // The element to be searched is not in the array
}

int main() {
    int myArray[] = {2, 4, 9, 2, 9, 10};
    int myArrayLength = 6;
    linearSearch(myArray, myArrayLength);
    return 0;
}
Wikipedia mentions:
Another way to reduce the overhead is to eliminate all checking of the loop index. This can be done by inserting the desired item itself as a sentinel value at the far end of the list.
If I implement linear search with sentinel, I have to
array[length] = elementToSearch;
But the loop already stops checking the elements of the array once the element to be searched is found. So what's the point of using linear search with a sentinel?
A standard linear search goes through the elements, checking the array index every time to see whether it has reached the last element, the way your code does.
for (int i = 0; i < length; i++) {
if (array[i] == elementToSearch) {
return i; // I found the position of the element requested
}
}
But the idea in sentinel search is to put the element to be searched at the end and to skip the array index check; this removes one comparison in each iteration.
while(a[i] != element)
i++;
First, let's turn your example into a solution that uses sentinels.
#include <stdio.h>

int linearSearch(int array[], int length, int elementToSearch) {
    int i = 0;
    array[length] = elementToSearch;
    while (array[i] != elementToSearch) {
        i++;
    }
    return i;
}

int main() {
    int myArray[] = {2, 4, 9, 2, 9, 10, -1};
    int myArrayLength = 6;
    int mySearch = 9;
    printf("result is %d\n", linearSearch(myArray, myArrayLength, mySearch));
    return 0;
}
Notice that the array now has an extra slot at the end to hold the sentinel value. (If we don't do that, the behavior of writing to array[length] is undefined.)
The purpose of the sentinel approach is to reduce the number of tests performed for each loop iteration. Compare:
// Original
for (int i = 0; i < length; i++) {
if (array[i] == elementToSearch) {
return i;
}
}
return -1;
// New
while (array[i] != elementToSearch) {
i++;
}
return i;
In the first version, the code is testing both i and array[i] for each loop iteration. In the second version, i is not tested.
For a large array, the performance difference could be significant.
But what are the downsides?
The result when the value is not found is different; -1 versus length.
We have to make the array bigger to hold the sentinel value. (And if we don't get it right we risk clobbering something on the stack or heap. Ouch!)
The array cannot be read-only. We have to be able to update it.
This won't work if multiple threads are searching the same array for different elements.
Using the sentinel value allows you to remove the variable i and, correspondingly, its checking and incrementing.
In your linear search the loop looks the following way:
for (int i = 0; i < length; i++) {
if (array[i] == elementToSearch) {
return i; // I found the position of the element requested
}
}
So the variable i is introduced, initialized, compared in each iteration of the loop, incremented, and used to calculate the address of the next element in the array.
Also, the function in fact needs three parameters if the searched value is passed to the function:
int linearSearch(int array[], int length, int value) {
//...
Using the sentinel value the function can be rewritten the following way
int * linearSearch( int array[], int value )
{
while ( *array != value ) ++array;
return array;
}
And inside the caller, assuming the searched value has been stored as a sentinel in the last element array[size - 1], you can check whether the array contains the value the following way:
int *target = linearSearch( array, value );
int index = target == array + size - 1 ? -1 : target - array;
If you add the value to search for at the end, you can remove one comparison from every loop iteration, so that the running time is reduced.
It may look like for(i = 0;;i++) if(array[i] == elementToSearch) return i;.
If you append the value to search for at the end of the array, then instead of using a for loop with initialization, condition and increment you can use a simpler loop like
while (array[i++] != elementToSearch)
;
Then the loop condition is the check for the value you search for, which means less code to execute inside the loop.
The point is that you can convert the for loop into a while/repeat loop. Notice how you are checking i < length each time. If you convert it,
do {
} while (array[i++] != elementToSearch);
Then you don't have to do that extra checking. (In this case, the array length is now one bigger.)
Although the sentinel approach seems to shave off a few cycles per iteration in the loop, this approach is not a good idea:
the array must be defined with an extra slot and passing its length as 1 less than the defined length is confusing and error prone;
the array must be modifiable;
if the search function modifies the array to set the sentinel value, this constitutes a side effect that can be confusing and unexpected;
the search function with a sentinel cannot be used for a portion of the array;
the sentinel approach is inherently not thread safe: searching the same array for 2 different values in 2 different threads would not work, whereas searching a constant read-only array from multiple threads would be fine;
the benefits are small and only for large arrays. If this search becomes a performance bottleneck, you should probably not use linear scanning. You could sort the array and use a binary search or you could use a hash table.
optimizing compilers for modern CPUs can generate code where both comparisons will be performed in parallel, hence incur no overhead;
As a rule of thumb, a search function should not have side effects. This is a good example of the principle of least surprise.

finding greatest prime factor using recursion in c

I have written the code for what I see as a good algorithm for finding the greatest prime factor of a large number using recursion. My program crashes with any number greater than 4 assigned to the variable huge_number, though. I am not good with recursion and the assignment does not allow any sort of loop.
#include <stdio.h>

long long prime_factor(int n, long long huge_number);

int main (void)
{
    int n = 2;
    long long huge_number = 60085147514;
    long long largest_prime = 0;
    largest_prime = prime_factor(n, huge_number);
    printf("%lld\n", largest_prime);
    return 0;
}

long long prime_factor (int n, long long huge_number)
{
    if (huge_number / n == 1)
        return huge_number;
    else if (huge_number % n == 0)
        return prime_factor (n, huge_number / n);
    else
        return prime_factor (n++, huge_number);
}
Any info as to why it is crashing and how I could improve it would be greatly appreciated.
Even after fixing the problem of using post-increment (which makes the recursion continue forever), this is not a good fit for a recursive solution - see here for why, but it boils down to how fast you can reduce the search space.
While your division of huge_number whittles it down pretty fast, the vast majority of recursive calls are done by simply incrementing n. That means you're going to use a lot of stack space.
You would be better off either:
using an iterative solution where you won't blow out the stack (if you just want to solve the problem) (a); or
finding a more suitable problem for recursion if you're just trying to learn recursion.
(a) An example of such a beast, modeled on your recursive solution, is:
#include <stdio.h>

long long prime_factor_i (int n, long long huge_number) {
    while (n < huge_number) {
        if (huge_number % n == 0) {
            huge_number /= n;
            continue;
        }
        n++;
    }
    return huge_number;
}

int main (void) {
    int n = 2;
    long long huge_number = 60085147514LL;
    long long largest_prime = 0;
    largest_prime = prime_factor_i (n, huge_number);
    printf ("%lld\n", largest_prime);
    return 0;
}
As can be seen from the output of that iterative solution, the largest factor is 10976461. That means the final batch of recursions in your recursive solution would require a stack depth of ten million stack frames, not something most environments will contend with easily.
If you really must use a recursive solution, you can reduce the stack space to the square root of that by using the fact that you don't have to check all the way up to the number, but only up to its square root.
In addition, other than 2, every other prime number is odd, so you can further halve the search space by only checking two plus the odd numbers.
A recursive solution taking those two things into consideration would be:
long long prime_factor_r (int n, long long huge_number) {
    // Debug code for level checking.
    // static int i = 0;
    // printf ("recursion level = %d\n", ++i);

    // Only check up to the square root (cast avoids int overflow in n * n).
    if ((long long)n * n > huge_number)
        return huge_number;

    // If it's a factor, reduce the number and try again.
    if (huge_number % n == 0)
        return prime_factor_r (n, huge_number / n);

    // Select next "candidate" prime to check against, 2 -> 3,
    // 2n+1 -> 2n+3 for all n >= 1.
    if (n == 2)
        return prime_factor_r (3, huge_number);
    return prime_factor_r (n + 2, huge_number);
}
You can see I've also removed the (awkward, in my opinion) construct:
if something then
return something
else
return something else
I much prefer the less massively indented code that comes from:
if something then
return something
return something else
But that's just personal preference. In any case, that gets your recursion level down to 1662 (uncomment the debug code to verify) rather than ten million, a rather sizable reduction but still not perfect. That runs okay in my environment.
You meant n+1 instead of n++. n++ increments n after using it, so the recursive call gets the original value of n.
You are overflowing the stack, because n++ post-increments the value, making a recursive call with the same values as in the current invocation.
The crash reason is stack overflow. I added a counter to your program and executed it (on Ubuntu 10.04, gcc 4.4.3); the counter stopped at 218287 before the core dump. The better solution is to use a loop instead of recursion.
