I'm working on a lab for my C class on recursion and functions. I wrote the following, which I thought should get me the answer, but when I type in my inputs it only returns a segmentation fault.
I've tried rearranging the positions of the variables and functions and even the int/float types, but nothing seems to work and I always get the same error.
#include <stdio.h>
float power(float, int);
int main(void)
{
float n;
int k;
printf("Please enter n = ");
scanf("%f", &n);
printf("Please enter k = ");
scanf("%d", &k);
printf("Sum = %f", power(n, k));
return 0;
}
float power(float n, int k)
{
return n * power(n, k - 1);
}
I expected 3 ** 3 to be 27, but instead I get a segmentation fault :(
Your recursion is calling itself infinitely - it does power(3, 3), power(3, 2), power(3, 1), power(3, 0), power(3, -1)... and so on.
Any number to the power of 0 is 1.0 - that's your base case, so that's where you return.
As a bit of extra handling, you can also check whether the exponent passed in is negative and deal with that case.
float power(float n, int k)
{
if(k > 0) {
return n * power(n, k - 1);
}
if(k == 0) {
return 1.0;
}
return 1.0 / power(n, -k);
}
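For example, calling the fixed version with a few test values (chosen here just for illustration) gives the expected results:
printf("%f\n", power(3.0f, 3));  /* 27.000000 */
printf("%f\n", power(2.0f, -3)); /* 0.125000  */
printf("%f\n", power(5.0f, 0));  /* 1.000000  */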
You need to admit that recursion is wrong and broken, and should never be used.
For example, if you add something to stop the recursion (e.g. if(k == 0) return 1;) it will still cause segmentation faults for large values of k (e.g. if you do x = power(1.0, INT_MAX)).
For this case, it's trivial to convert it into a simple loop, like:
float power(float n, int k) {
float result = 1.0;
while(k > 0) {
result *= n;
k--;
}
return result;
}
However, even though this is no longer horribly bad (the recursion is gone), it's still not good because the algorithm is inefficient, especially for large values of k.
A more efficient algorithm is something like:
float power(float n, unsigned int k) {
float result = 1.0;
while(k > 0) {
if( (k & 1) != 0) {
result *= n;
}
k >>= 1;
n *= n;
}
return result;
}
For this version, with a large value of k like 50000 the loop body only executes 16 times instead of 50000 times, which makes it significantly faster.
Of course you can make the efficient version bad again by using recursion, like this:
float power(float n, unsigned int k) {
float result = 1.0;
if(k > 1) {
result = power(n*n, k >> 1);
}
if( (k & 1) != 0) {
result *= n;
}
return result;
}
In this case, instead of being significantly more efficient because it loops a lot less, it is significantly more efficient because it recurses a lot less (and then slightly less efficient because recursion sucks). "Recurses a lot less" is important because it means that large values of k are far less likely to make it crash.
For my CS assignment we were asked to create a program to approximate pi using Viete's Formula. I have done that, however, I don't exactly like my code and was wondering if there was a way I could do it without using two while loops.
(My professor is asking us to use a while loop, so I want to keep at least one!)
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
int main()
{
double n, x, out, c, t, count, approx;
printf("enter the number of iterations to approximate pi\n");
scanf("%lf", &n);
c = 1;
out = 1;
t = 0;
count = 1;
x = sqrt(2);
while (count<=n)
{
t=t+1;
while (c<t)
{
x=sqrt(2+x);
c=c+1;
}
out=out*(x/2);
count=count+1;
}
approx=2/out;
printf("%lf is the approximation of pi\n", approx);
}
I just feel like my code could somehow be simpler, but I'm not sure how to simplify it.
Consider how many times the inner loop runs in each iteration of the outer loop:
on the first iteration, it does not run at all (c == t == 1)
on each subsequent iteration, it runs exactly once (as t has been incremented once since the last iteration of the outer loop).
So you could replace this inner while with an if:
if (count > 1) {
Once you do that, t and c are completely unnecessary and can be eliminated.
If you change the initial value of x (before the loop), you could have the first iteration calculate it here, thus getting rid of the if too. That leaves a minimal loop:
out = 1;
count = 1;
x = 0;
while (count<=n) {
x=sqrt(2+x);
out=out*(x/2);
count=count+1;
}
I just feel like my code could somehow be simpler, but I'm not sure how to simplify it.
I don't like the fact that I am using two while loops. I was wondering if there was a way to code this program using only one, rather than the two I am currently using
Seems simple enough to use a single loop.
In OP's code, the while (c < t) loop could be replaced with if (c < t) and achieve the same outcome, since the loop only executes 0 or 1 times. With an adjustment of the initial c or t, the block could execute exactly once each time, eliminating the test completely.
A few additional adjustments are in Viete().
#include <stdio.h>
#include <math.h>
double Viete(unsigned n) {
const char *pi = "pi 3.141592653589793238462643383...";
puts(pi);
printf("m_pi=%.17f\n", acos(-1));
double term = sqrt(2.0);
double v = 1.0;
while (n-- > 0) {
v = v * term / 2;
printf("v_pi=%.17f %u\n", 2 / v, n);
term = sqrt(2 + term);
}
puts(pi);
return 2 / v;
}
void op_pi(unsigned n) {
unsigned c = 1;
unsigned t = 0;
unsigned count = 1;
double out = 1;
double x = sqrt(2);
while (count <= n) {
t = t + 1;
// while (c < t) {
// or
if (c < t) {
x = sqrt(2 + x);
c = c + 1;
}
out = out * (x / 2);
count = count + 1;
printf("%lf is the approximation of pi %u\n", 2 / out, count);
}
double approx = 2 / out;
printf("%lf is the approximation of pi\n", approx);
}
int main(void) {
op_pi(5);
Viete(5);
}
Output
2.828427 is the approximation of pi 2
3.061467 is the approximation of pi 3
3.121445 is the approximation of pi 4
3.136548 is the approximation of pi 5
3.140331 is the approximation of pi 6
3.140331 is the approximation of pi
pi 3.141592653589793238462643383...
m_pi=3.14159265358979312
v_pi=2.82842712474618985 4
v_pi=3.06146745892071825 3
v_pi=3.12144515225805197 2
v_pi=3.13654849054593887 1
v_pi=3.14033115695475251 0
pi 3.141592653589793238462643383...
Additional minor simplifications possible.
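For example, stripped of the diagnostic printing, the whole computation fits in a handful of lines. This is only a sketch, but it keeps the single while loop the assignment asks for:
#include <stdio.h>
#include <math.h>
int main(void)
{
    unsigned n;
    double term = 0.0, product = 1.0;
    printf("enter the number of iterations to approximate pi\n");
    if (scanf("%u", &n) != 1)
        return 1;
    while (n-- > 0) {
        term = sqrt(2.0 + term);   /* 0 -> sqrt(2) -> sqrt(2+sqrt(2)) -> ... */
        product *= term / 2.0;
    }
    printf("%lf is the approximation of pi\n", 2.0 / product);
    return 0;
}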
I thought memory access would be faster than the multiplication and division (although compiler-optimized) done with alpha blending. But it wasn't as fast as expected.
The 16 megabytes used for the table is not an issue in this case, but it would be a problem if the table lookup can end up even slower than doing all the calculations on the CPU.
Can anyone explain to me why and what is happening? Will the table lookup beat out with a slower CPU?
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <time.h>
#define COLOR_MAX UCHAR_MAX
typedef unsigned char color;
color (*blending_table)[COLOR_MAX + 1][COLOR_MAX + 1];
static color blend(unsigned int destination, unsigned int source, unsigned int a) {
return (source * a + destination * (COLOR_MAX - a)) / COLOR_MAX;
}
void initialize_blending_table(void) {
int destination, source, a;
blending_table = malloc((COLOR_MAX + 1) * sizeof *blending_table);
for (destination = 0; destination <= COLOR_MAX; ++destination) {
for (source = 0; source <= COLOR_MAX; ++source) {
for (a = 0; a <= COLOR_MAX; ++a) {
blending_table[destination][source][a] = blend(destination, source, a);
}
}
}
}
struct timer {
double start;
double end;
};
void timer_start(struct timer *self) {
self->start = clock();
}
void timer_end(struct timer *self) {
self->end = clock();
}
double timer_measure_in_seconds(struct timer *self) {
return (self->end - self->start) / CLOCKS_PER_SEC;
}
#define n 300
int main(void) {
struct timer timer;
volatile int i, j, k, l, m;
timer_start(&timer);
initialize_blending_table();
timer_end(&timer);
printf("init %f\n", timer_measure_in_seconds(&timer));
timer_start(&timer);
for (i = 0; i <= n; ++i) {
for (j = 0; j <= COLOR_MAX; ++j) {
for (k = 0; k <= COLOR_MAX; ++k) {
for (l = 0; l <= COLOR_MAX; ++l) {
m = blending_table[j][k][l];
}
}
}
}
timer_end(&timer);
printf("table %f\n", timer_measure_in_seconds(&timer));
timer_start(&timer);
for (i = 0; i <= n; ++i) {
for (j = 0; j <= COLOR_MAX; ++j) {
for (k = 0; k <= COLOR_MAX; ++k) {
for (l = 0; l <= COLOR_MAX; ++l) {
m = blend(j, k, l);
}
}
}
}
timer_end(&timer);
printf("function %f\n", timer_measure_in_seconds(&timer));
return EXIT_SUCCESS;
}
result
$ gcc test.c -O3
$ ./a.out
init 0.034328
table 14.176643
function 14.183924
Table lookup is not a panacea. It helps when the table is small enough, but in your case the table is very big. You write
16 megabytes used for the table is not an issue in this case
which I think is very wrong, and is possibly the source of the problem you experience. 16 megabytes is too big for L1 cache, so reading data from random indices in the table will involve the slower caches (L2, L3, etc). The penalty for cache misses is typically large; your blending algorithm must be very complex if you want your LUT solution to be faster.
Read the Wikipedia article for more info.
Your benchmark is hopelessly broken: it makes the LUT look a lot better than it actually is, because it reads the table in order.
If your performance results show that the LUT is worse than direct calculation, then when you start with real-world random access patterns and cache misses, the LUT is going to be much worse.
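A rough way to see the difference (just a sketch, reusing the question's table, timer and volatile m, with the index-array size picked arbitrarily): precompute random indices first, then time only the lookups, so the accesses jump around the 16 MB table instead of streaming through it in order.
#define LOOKUPS (1 << 24)
static unsigned char idx[LOOKUPS][3];  /* filled before the timed loop */

for (i = 0; i < LOOKUPS; ++i) {
    idx[i][0] = rand() & COLOR_MAX;
    idx[i][1] = rand() & COLOR_MAX;
    idx[i][2] = rand() & COLOR_MAX;
}
timer_start(&timer);
for (i = 0; i < LOOKUPS; ++i) {
    m = blending_table[idx[i][0]][idx[i][1]][idx[i][2]];
}
timer_end(&timer);
printf("table (random order) %f\n", timer_measure_in_seconds(&timer));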
Focus on improving the computation, and enabling vectorization. It's likely to pay off far better than a table-based approach.
(source * a + destination * (COLOR_MAX - a)) / COLOR_MAX
with rearrangement becomes
(source * a + destination * COLOR_MAX - destination * a) / COLOR_MAX
which simplifies to
destination + (source - destination) * a / COLOR_MAX
which has one multiply and one division by a constant, both of which are very efficient. And it is easily vectorized.
You should also mark your helper function as inline, although a good optimizing compiler is probably inlining it anyway.
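Putting both suggestions together, a sketch of the rearranged helper might look like this (signed parameters so that source - destination may go negative; note the truncation can differ from the original result by one for some inputs, because the intermediate product can be negative):
static inline color blend_fast(int destination, int source, int a)
{
    return (color)(destination + (source - destination) * a / COLOR_MAX);
}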
I'm having trouble keeping randomly generated, normally distributed values between 0 and 1 (including 0, excluding 1). I believe the algorithm is basically correct, I am just stumped here. Any insight would be great.
These are the needed include files:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
The normally distributed random number generator function:
float rand_normal(float mean, float stddev)
{
static float n2 = 0.0;
float x, y, r;
static int n2_cached = 0;
if (!n2_cached)
{
do
{
x = 2.0*rand()/RAND_MAX - 1;
y = 2.0*rand()/RAND_MAX - 1;
r = x*x + y*y;
} while (r==0.0 || r>1.0);
float d = sqrt(-2.0*log(r)/r);
float n1 = x*d;
float result = n1*stddev + mean;
n2 = y*d;
n2_cached = 1;
return result;
}
else
{
n2_cached = 0;
return n2*stddev + mean;
}
}
The main function is used only for testing purposes.
int main()
{
srand(time(NULL));
int i;
float min = 0.5, max = 0.5, r, avg = 0;
float x, w;
int n = 10000000;
for (i=0; i<n; i++)
{
r = rand_normal(0.5, 0.09);
if (r < min)
min = r;
else if ( r>max)
max = r;
avg += r;
}
avg /= (float)n;
printf("min = %f\nmax = %f\navg = %f\n", min, max, avg);
return 0;
}
In case anyone was wondering, this function is needed for a "genetic inheritance in plants" simulation.
Why would you expect the result to stay between 0 and 1? The Gaussian distribution has full support, so whatever interval you are looking at and whatever mean and variance you choose, there will always be a (possibly very small) non-zero probability of falling outside of that interval. If you really want to restrict yourself to [0,1] for some reason, then you can simply call rand_normal until you fall into that interval.
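A minimal sketch of that idea, wrapping the rand_normal() from the question (the wrapper name is made up):
/* keep drawing until the sample lands in [0, 1) */
float rand_normal_in_unit(float mean, float stddev)
{
    float r;
    do {
        r = rand_normal(mean, stddev);
    } while (r < 0.0f || r >= 1.0f);
    return r;
}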
Note also that while Box-Muller (the algorithm you are using) is easy to implement, it is one of the worst and most costly ways of generating a Gaussian random variable. The best and fastest algorithm I know is the "Ziggurat" method, an implementation of which can be found at
http://www.seehuhn.de/pages/ziggurat
I would definitely create a function to convert "rand()" to a normalized floating point value. For example:
double nrand(void)
{
    return rand() / ((double)RAND_MAX + 1);  /* the cast avoids integer division; result is in [0, 1) */
}
Also, here are a few links that might help:
http://eternallyconfuzzled.com/arts/jsw_art_rand.aspx
http://people.sc.fsu.edu/~jburkardt/c_src/normal/normal.html
I am a CSE student preparing myself for programming contests. Right now I am working on the Fibonacci series. I have an input file of a few kilobytes containing positive integers. The input format looks like
3 5 6 7 8 0
A zero means the end of the file. The output should look like
2
5
8
13
21
my code is
#include<stdio.h>
int fibonacci(int n) {
if (n==1 || n==2)
return 1;
else
return fibonacci(n-1) +fibonacci(n-2);
}
int main() {
int z;
FILE * fp;
fp = fopen ("input.txt","r");
while(fscanf(fp,"%d", &z) && z)
printf("%d \n",fibonacci(z));
return 0;
}
The code works fine for the sample input and provides accurate results, but the problem is that for my real input set it takes more time than my time limit. Can anyone help me out?
You could simply use a tail-recursive (or, equivalently, iterative) version of a function that keeps the last two Fibonacci numbers, if you have a limit on the memory.
int fib(int n)
{
int a = 0;
int b = 1;
while (n-- > 1) {
int t = a;
a = b;
b += t;
}
return b;
}
This is O(n) and needs a constant space.
You should probably look into memoization.
http://en.wikipedia.org/wiki/Memoization
It has an explanation and a fib example right there
You can do this by matrix multiplication: raise the matrix to the power n and then multiply it by a vector. You can raise it to a power in logarithmic time.
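A rough C sketch of that approach (no modulus, so it is only correct while the values fit in unsigned long long, roughly up to F(93)):
typedef unsigned long long u64;

/* a = a * b for 2x2 matrices */
static void mat_mul(u64 a[2][2], u64 b[2][2])
{
    u64 r[2][2] = {
        { a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1] },
        { a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1] },
    };
    a[0][0] = r[0][0]; a[0][1] = r[0][1];
    a[1][0] = r[1][0]; a[1][1] = r[1][1];
}

/* F(n) by fast exponentiation of [[1,1],[1,0]] */
u64 fib_matrix(unsigned n)
{
    u64 result[2][2] = { {1, 0}, {0, 1} };  /* identity */
    u64 base[2][2]   = { {1, 1}, {1, 0} };
    while (n > 0) {
        if (n & 1)
            mat_mul(result, base);
        mat_mul(base, base);
        n >>= 1;
    }
    return result[0][1];  /* base^n = [[F(n+1),F(n)],[F(n),F(n-1)]] */
}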
I think you can find the problem here. It's in Romanian, but you can translate it with Google Translate. It's exactly what you want, and the solution is listed there.
Your algorithm is recursive and has roughly O(2^N) complexity.
This issue has been discussed on stackoverflow before:
Computational complexity of Fibonacci Sequence
There is also a faster implementation posted in that particular discussion.
Look in Wikipedia; there is a formula that gives the number in the Fibonacci sequence with no recursion at all.
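For reference, a sketch of that closed form (Binet's formula); going through double is only reliable up to roughly F(70):
#include <math.h>

long long fib_closed_form(int n)
{
    const double sqrt5 = sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0;
    /* F(n) = round(phi^n / sqrt(5)), since the other term is smaller than 0.5 */
    return llround(pow(phi, n) / sqrt5);
}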
Use memoization. That is, you cache the answers to avoid unnecessary recursive calls.
Here's a code example:
#include <stdio.h>
int memo[10000]; // adjust to however big you need, but the result must fit in an int
// and keep in mind that fibonacci values grow rapidly :)
int fibonacci(int n) {
if (memo[n] != -1)
return memo[n];
if (n==1 || n==2)
return 1;
else
return memo[n] = fibonacci(n-1) +fibonacci(n-2);
}
int main() {
for(int i = 0; i < 10000; ++i)
memo[i] = -1;
fibonacci(50);
}
Nobody mentioned the 2 value stack array version, so I'll just do it for completeness.
// do not call with i == 0
uint64_t Fibonacci(uint64_t i)
{
// we'll only use two values on stack,
// initialized with F(1) and F(2)
uint64_t a[2] = {1, 1};
// We do not enter loop if initial i was 1 or 2
while (i-- > 2)
// A bitwise AND allows switching the storing of the new value
// from index 0 to index 1.
a[i & 1] = a[0] + a[1];
// since the last value of i was 0 (decrementing i),
// the return value is always in a[0 & 1] => a[0].
return a[0];
}
This is an O(n), constant-stack-space solution that performs about the same as memoization when compiled with optimization.
// Calc of fibonacci f(99), gcc -O2
Benchmark              Time(ns)  CPU(ns)  Iterations
BM_2stack/99                  2        2   416666667
BM_memoization/99             2        2   318181818
The BM_memoization used here will initialize the array only once and reuse it for every other call.
The 2 value stack array version performs identically as a version with a temporary variable when optimized.
You can also use the fast doubling method of generating Fibonacci series
Link: fastest-way-to-compute-fibonacci-number
It is actually derived from the results of the matrix exponentiation method.
Use the golden-ratio
Build an array Answer[100] in which you cache the results of fibonacci(n).
Check in your fibonacci code to see if you have precomputed the answer, and
use that result. The results will astonish you.
Are you guaranteed that, as in your example, the input will be given to you in ascending order? If so, you don't even need memoization; just keep track of the last two results, start generating the sequence but only display the Nth number in the sequence if N is the next index in your input. Stop when you hit index 0.
Something like this:
int i = 0;
while ( true ) {
i++; //increment index
fib_at_i = generate_next_fib()
while ( next_input_index() == i ) {
println fib_at_i
}
I leave exit conditions and actually generating the sequence to you.
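A rough C version of that idea, assuming (as above) that the indices really do arrive in ascending order and end with 0:
#include <stdio.h>

int main(void)
{
    unsigned long long prev = 0, curr = 1;  /* F(0), F(1) */
    int i = 1, target;
    FILE *fp = fopen("input.txt", "r");
    if (fp == NULL)
        return 1;
    while (fscanf(fp, "%d", &target) == 1 && target != 0) {
        while (i < target) {                /* advance the sequence to the requested index */
            unsigned long long next = prev + curr;
            prev = curr;
            curr = next;
            ++i;
        }
        printf("%llu\n", curr);             /* F(target), with F(1) = F(2) = 1 */
    }
    fclose(fp);
    return 0;
}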
In C#:
static int fib(int n)
{
if (n < 2) return n;
if (n == 2) return 1;
int k = n / 2;
int a = fib(k + 1);
int b = fib(k);
if (n % 2 == 1)
return a * a + b * b;
else
return b * (2 * a - b);
}
Matrix multiplication, no float arithmetic, O(log N) time complexity assuming integer multiplication/addition is done in constant time.
Here is the Python code:
def fib(n):
    x, y = 1, 0                      # (F(1), F(0)); starting from 1, 1 would return F(n+1) instead
    mat = [1, 1, 1, 0]               # [[1,1],[1,0]] stored row-major
    n -= 1
    while n > 0:
        if n & 1 == 1:
            x, y = x*mat[0] + y*mat[1], x*mat[2] + y*mat[3]
        n >>= 1
        mat[0], mat[1], mat[2], mat[3] = (mat[0]*mat[0] + mat[1]*mat[2],
                                          mat[0]*mat[1] + mat[1]*mat[3],
                                          mat[0]*mat[2] + mat[2]*mat[3],
                                          mat[1]*mat[2] + mat[3]*mat[3])
    return x
You can reduce the overhead of the if statement: Calculating Fibonacci Numbers Recursively in C
First of all, you can use memoization or an iterative implementation of the same algorithm.
Consider the number of recursive calls your algorithm makes:
fibonacci(n) calls fibonacci(n-1) and fibonacci(n-2)
fibonacci(n-1) calls fibonacci(n-2) and fibonacci(n-3)
fibonacci(n-2) calls fibonacci(n-3) and fibonacci(n-4)
Notice a pattern? You are computing the same function a lot more times than needed.
An iterative implementation would use an array:
int fibonacci(int n) {
int arr[maxSize + 1]; /* maxSize: the largest n you will ever be asked for, defined elsewhere */
arr[1] = arr[2] = 1; // ideally you would use 0-indexing, but I'm just trying to get a point across
for ( int i = 3; i <= n; ++i )
arr[i] = arr[i - 1] + arr[i - 2];
return arr[n];
}
This is already much faster than your approach. You can make it faster still, on the same principle, by building the array only once up to the maximum value of n, and then answering each query by printing a single element of the array. That way you don't call the function for every query.
If you can't afford the initial precomputation time (but this usually only happens if you're asked for the result modulo something, otherwise they probably don't expect you to implement big number arithmetic and precomputation is the best solution), read the fibonacci wiki page for other methods. Focus on the matrix approach, that one is very good to know in a contest.
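A sketch of that precompute-once idea; MAX_N = 93 is an assumption so that every answer still fits in unsigned long long:
#include <stdio.h>

#define MAX_N 93   /* F(93) is the largest Fibonacci number that fits in 64 bits */

int main(void)
{
    unsigned long long fib[MAX_N + 1];
    FILE *fp = fopen("input.txt", "r");
    int z, i;
    if (fp == NULL)
        return 1;
    fib[1] = fib[2] = 1;
    for (i = 3; i <= MAX_N; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];
    /* each query is now a single array read; stop at 0 or anything out of range */
    while (fscanf(fp, "%d", &z) == 1 && z >= 1 && z <= MAX_N)
        printf("%llu\n", fib[z]);
    fclose(fp);
    return 0;
}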
#include<stdio.h>

int g(int n, int x, int y)
{
    return n == 0 ? x : g(n - 1, y, x + y);
}

int f(int n)
{
    return g(n, 0, 1);
}

int main(void)
{
    int i;
    for (i = 1; i <= 10; i++)
        printf("%d\n", f(i));
    return 0;
}
In functional programming there is a special algorithm for computing Fibonacci numbers. The algorithm uses accumulative recursion. Accumulative recursion is used to minimize the stack size used by algorithms. I think it will help you minimize the time; you can try it if you want.
int ackFib (int n, int m, int count){
if (count == 0)
return m;
else
return ackFib(n+m, n, count-1);
}
int fib(int n)
{
return ackFib (0, 1, n+1);
}
Use any of these: two examples with recursion, plus one with a for loop (O(n) time) and one with the golden ratio (O(1) time):
private static long fibonacciWithLoop(int input) {
long prev = 0, curr = 1, next = 0;
for(int i = 1; i < input; i++){
next = curr + prev;
prev = curr;
curr = next;
}
return curr;
}
public static long fibonacciGoldenRatio(int input) {
double termA = Math.pow(((1 + Math.sqrt(5))/2), input);
double termB = Math.pow(((1 - Math.sqrt(5))/2), input);
double factor = 1/Math.sqrt(5);
return Math.round(factor * (termA - termB));
}
public static long fibonacciRecursive(int input) {
if (input <= 1) return input;
return fibonacciRecursive(input - 1) + fibonacciRecursive(input - 2);
}
public static long fibonacciRecursiveImproved(int input) {
if (input == 0) return 0;
if (input == 1) return 1;
if (input == 2) return 1;
if (input >= 93) throw new RuntimeException("Input out of bounds");
// n is odd
if (input % 2 != 0) {
long a = fibonacciRecursiveImproved((input+1)/2);
long b = fibonacciRecursiveImproved((input-1)/2);
return a*a + b*b;
}
// n is even
long a = fibonacciRecursiveImproved(input/2 + 1);
long b = fibonacciRecursiveImproved(input/2 - 1);
return a*a - b*b;
}
// Assumed declarations so the snippet compiles (not in the original answer):
#include <fstream>
#include <cstring>
using namespace std;
typedef long long LL;
const int N = 2;           // 2x2 matrices, stored 1-indexed in 3x3 arrays
const LL mod = 1000000007; // placeholder modulus; use whatever the problem statement asks for
#define FIN "fib.in"       // placeholder input file name
#define FOUT "fib.out"     // placeholder output file name
void mult(LL A[ 3 ][ 3 ], LL B[ 3 ][ 3 ]) {
int i,
j,
z;
LL C[ 3 ][ 3 ];
memset(C, 0, sizeof( C ));
for(i = 1; i <= N; i++)
for(j = 1; j <= N; j++) {
for(z = 1; z <= N; z++)
C[ i ][ j ] = (C[ i ][ j ] + A[ i ][ z ] * B[ z ][ j ] % mod ) % mod;
}
memcpy(A, C, sizeof(C));
};
void readAndsolve() {
int i;
LL k;
ifstream I(FIN);
ofstream O(FOUT);
I>>k;
LL A[3][3];
LL B[3][3];
A[1][1] = 1; A[1][2] = 0;
A[2][1] = 0; A[2][2] = 1;
B[1][1] = 0; B[1][2] = 1;
B[2][1] = 1; B[2][2] = 1;
for(i = 0; ((1<<i) <= k); i++) {
if( k & (1<<i) ) mult(A, B);
mult(B, B);
}
O<<A[2][1];
}
//1,1,2,3,5,8,13,21,34,...
int main() {
readAndsolve();
return(0);
}
public static int GetNthFibonacci(int n)
{
var previous = -1;
var current = 1;
int element = 0;
while (1 <= n--)
{
element = previous + current;
previous = current;
current = element;
}
return element;
}
This is similar to answers given before, but with some modifications. Memoization, as stated in other answers, is another way to do this, but I dislike code that doesn't scale as technology changes (the size of an unsigned int varies depending on the platform, so the highest value in the sequence that can be reached may also vary), and memoization is ugly in my opinion.
#include <iostream>
using namespace std;
void fibonacci(unsigned int count) {
unsigned int x=0,y=1,z=0;
while(count--!=0) {
cout << x << endl; // you can put x in an array or whatever
z = x;
x = y;
y += z;
}
}
int main() {
fibonacci(48); // 48 values in the sequence is the maximum for a 32-bit unsigned int
return 0;
}
Additionally, if you use <limits> it's possible to write a compile-time constant expression that would give you the largest index within the sequence that can be reached for any integral data type.
#include<stdio.h>
int main()
{
    int a, b = 2, c = 5, d;
    printf("%d %d ", b, c);
    do
    {
        d = b + c;
        b = c;
        c = d;
        printf("%d ", d);
    }