I don't understand why this code compiles and then segfaults:
#include <stdio.h>
#include <stdlib.h>
unsigned long int gcd(unsigned long int, unsigned long int);
unsigned long int lcm(unsigned long int, unsigned long int);
int main(int argc, char *argv[]) {
    int i;
    unsigned long int n = 1L;
    for (i = 2; i < 21; i++) {
        n = lcm(n, i);
    }
    printf("%ld\n", n);
    return 0;
}
unsigned long int gcd(unsigned long int a, unsigned long int b) {
    if (a == b) return a;
    if (a > b) return gcd(a - b, b);
    return gcd(a, b - a);
}
unsigned long int lcm(unsigned long int a, unsigned long int b) {
    return abs(a * b) / gcd(a, b);
}
Are those unsigned longs even necessary? I also noticed that if I change that 21 to an 18 it gives the correct result. The code is meant to find the LCM of all the numbers from 1 to 20.
Running it in gdb gives:
Program received signal SIGSEGV, Segmentation fault.
0x0000000000400643 in gcd (a=7536618, b=18) at p5.c:19
19 if (a > b) return gcd(a - b, b);
You're overflowing the stack. Which is a shame, because that call should be easy to optimize as tail recursion; full recursion is extreme overkill for this. Using the proper optimization levels in any modern compiler (cl, gcc, icc) should get rid of the segfault.
Luckily writing this iteratively is trivial as hell:
unsigned long gcd(unsigned long a, unsigned long b)
{
    while (a != b)
        if (a > b)
            a -= b;
        else
            b -= a;
    return a;
}
Due to how the stack and function calls work, there's a limit on how deeply calls can be nested, depending on how much local state each call keeps.
For extremely imbalanced arguments, implementing gcd by repeated subtraction requires a huge number of iterations, so your recursion goes way too deep. You need to either change the implementation (e.g. make it iterative), or change the algorithm (e.g. compute remainders instead of differences).
You could increase the stack size, but that is wasteful of memory, and the larger size will eventually run out too with larger inputs.
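The remainder-based version suggested above is a one-line change to the loop and takes only logarithmically many steps even for badly imbalanced arguments (a sketch of that variant; note it also terminates when one operand is 0):

```c
/* Euclidean algorithm with remainders: gcd(7536618, 18) finishes almost
 * immediately instead of after hundreds of thousands of subtractions. */
unsigned long gcd(unsigned long a, unsigned long b)
{
    while (b != 0) {
        unsigned long t = a % b;
        a = b;
        b = t;
    }
    return a;
}
```

While here: since the operands are unsigned, the abs in lcm does nothing useful, and is actively harmful, because abs takes an int, so the unsigned long product is truncated before the division. Writing a / gcd(a, b) * b also avoids overflow in the intermediate product.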
I tried the following function (as suggested on these forums) to calculate a power. However, it is causing the program to hang.
static long ipow(int b, int e) {
    long r = 1;
    while (e--) r *= b;
    return r;
}
double cfilefn(int a, int b, int c) {
    return (ipow(a, b) / (double)c);
}
cfilefn(2, 3, 4);
The function looks all right. Where is the error and how can it be solved?
The ipow function will misbehave if the second argument is a negative number: it will loop for a very long time and invoke undefined behavior when e wraps below INT_MIN. You should modify the test while (e--) r *= b; as:
static long ipow(int b, int e) {
    long r = 1;
    while (e-- > 0)
        r *= b;
    return r;
}
Note however that ipow will cause arithmetic overflow for moderately large values of e, and since you want a double result from cfilefn, you should use double arithmetic for the power function:
#include <math.h>
double cfilefn(int a, int b, int c) {
return pow(a, b) / c;
}
My code for finding the 10th decimal digit of the square root of 2:
#include <stdio.h>
unsigned long long int power(int a, int b);
unsigned long long int root(int a);
int main()
{
    int n;
    n = 10;
    printf("%llu \n", root(n));
    return 0;
}
unsigned long long int power(int a, int b)
{
    int i;
    unsigned long long int m = 1;
    for (i = 1; i <= b; i++)
    {
        m *= a;
    }
    return m;
}
unsigned long long int root(int a)
{
    unsigned long long int c = 1;
    int counter = 1;
    while (counter <= a)
    {
        c *= 10;
        while (power(c, 2) <= 2 * power(10, 2 * counter))
        {
            c++;
        }
        c -= 1;
        counter++;
    }
    return c;
}
I have tried the same algorithm in Python. It can find the 10th decimal digit of sqrt(2) immediately.
However, in C, I have waited for 10 minutes without a result.
Python handles big numbers for you. [1]
Although, since you say you are getting the answer "immediately", your algorithm in Python is probably not the same as the one you used in C.
bruno's answer already explains why you are not getting the expected results in C.
[1] Handling very large numbers in Python
You exceed the range the data type can represent: when counter is equal to 10, 2*power(10,2*counter) exceeds the range that unsigned long long int can represent. Python supports big-number arithmetic with unlimited digits.
you have overflow(s)
when counter reaches 10 you try to compute power(10,20), but even a 64-bit long long is not large enough, so you loop in
while (power(c, 2) <= 2 * power(10, 2 * counter)) {
    c++;
}
for a long time (maybe without ever ending)
Having a 64-bit long long allows computing the result for n up to 9
I have written this code to calculate 2^n mod 10^9+7. But sadly this function works only up to 2^31, and afterwards all the answers are zero.
Can somebody shed some light on why?
typedef unsigned long long LL;
const int MOD = 1000000007;
LL powmod(int a, int n)
{
    LL p = 1;
    for (; n;)
    {
        if (n % 2) p = (p * a) % MOD;
        if (n /= 2) a = (a * a) % MOD;
    }
    return p;
}
Just change LL powmod(int a,int n) to LL powmod(LL a,int n).
As squeamish ossifrage hinted at, with int a, the subexpression a*a is computed in int range and overflows "when a * a exceeds INT_MAX" (M.M), while with LL a, it is computed in unsigned long long range.
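The fix applied in full (the body is exactly the question's, only the parameter type changes; with LL a, a*a is at most about (MOD-1)^2, roughly 10^18, which fits comfortably in 64 bits):

```c
typedef unsigned long long LL;
const int MOD = 1000000007;

LL powmod(LL a, int n)   /* was: LL powmod(int a, int n) */
{
    LL p = 1;
    for (; n;)
    {
        if (n % 2) p = (p * a) % MOD;   /* multiply now done in 64 bits */
        if (n /= 2) a = (a * a) % MOD;  /* square now done in 64 bits */
    }
    return p;
}
```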
I am trying to compute n! % m using recursion in C. This is the code I am using
#include <stdio.h>
#define m 1000000007
long long int fact(long long int n) {
    if (n == 1)
        return 1;
    long long int n2 = (long long int)fact(n - 1);
    long long int k = (n * n2) % m;
    return k;
}
int main() {
    printf("%lld", fact(1000000));
}
This program gives a segmentation fault, but if I replace the fact function with an iterative approach the program prints the correct answer. The iterative fact function is
long long int fact(long long int n) {
    long long int k = 1;
    long long int i;
    for (i = n; i > 1; i--)
        k = (k * i) % m;
    return k;
}
So why does the iterative approach work while the recursive approach fails?
Try gdb, but likely it's because you are reaching your max recursion depth; in other words, you are running out of "memory", either literally or based on the limits your OS enforces.
If your C compiler optimizes tail calls you can rewrite your recursive function to a tail recursive one:
long long int fact_aux(long long int n, long long int acc) {
    if (n <= 1)
        return acc;
    return fact_aux(n - 1, (n * acc) % m);
}
long long int fact(long long int n) {
    return fact_aux(n, 1);
}
Note that the C standard doesn't require TCO, so you have no guarantee this will be as effective as a loop. GCC does it with optimization enabled.
I tried to develop code that quickly finds Fibonacci values.
But the problem is I get a SIGSEGV error when the input is of the order of 1000000.
From other questions around here I came to know that it may be because the stack memory exceeds its limit during runtime, and I guess that is the case here.
#include<stdio.h>
unsigned long long int a[1000001] = {0};
unsigned long long int fib(int n)
{
    unsigned long long int y;
    if (n == 1 || n == 0)
        return n;
    if (a[n] != 0)
        return a[n];
    else
    {
        y = fib(n - 1) + fib(n - 2);
        a[n] = y;
    }
    return y;
}
int main()
{
    int N;
    unsigned long long int ans;
    a[0] = 1;
    a[1] = 1;
    scanf(" %d", &N);
    ans = fib(N + 1);
    printf("%llu", ans);
}
How do I fix this code for input value of 1000000?
Here's a better approach (which can still be significantly improved) that will calculate Fibonacci numbers for you:
unsigned long long Fibonacci(int n)
{
    unsigned long long last[2] = { 0, 1 }; // the start of our sequence
    if (n == 0)
        return 0;
    for (int i = 2; i <= n; i++)
        last[i % 2] = last[0] + last[1];
    return last[n % 2];
}
However, you are not going to be able to calculate the millionth Fibonacci number with it, because that number is much, much larger than the largest number that can fit in an unsigned long long.
Instead of using the stack, use your own variables to track state. Essentially, do the function calls and returns with your own code.
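That suggestion, sketched for this program: fill the same memo table a[] bottom-up with a plain loop (here using the usual fib(0) = 0 convention), so no call stack is involved at all. As noted in the other answer, the values themselves overflow unsigned long long past fib(93), so for n = 1000000 only the low 64 bits survive:

```c
static unsigned long long a[1000001];   /* memo table, filled in order */

unsigned long long fib_iter(int n)
{
    a[0] = 0;
    a[1] = 1;
    for (int i = 2; i <= n; i++)
        a[i] = a[i - 1] + a[i - 2];   /* wraps modulo 2^64 after fib(93) */
    return a[n];
}
```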
The best way really is just to switch the algorithm entirely to one that's efficient. Naive recursion recomputes values: to calculate fib(6), it calculates fib(4) twice, once when fib(5) asks and once when fib(6) asks. Your memoization avoids the recomputation, but the chain fib(n) calling fib(n-1) calling fib(n-2) still nests about a million frames deep for n = 1000000, and that depth is what overflows the stack.