I would like to evaluate π approximately by running the following code, which inscribes a regular polygon of n sides in a circle of unit diameter and calculates its perimeter. However, the output after the 34th term is 0 when the long double type is used, or it increases without bound when the double type is used. How can I remedy this situation? Any suggestion or help is appreciated and welcome.
Thanks
P.S: Operating system: Ubuntu 12.04 LTS 32-bit, Compiler: GCC 4.6.3
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <stdlib.h>
#define increment 0.25
int main()
{
    int i = 0, k = 0, n[6] = {3, 6, 12, 24, 48, 96};
    double per[61] = {0}, per2[6] = {0};
    // Since the above algorithm is recursive we need to specify the perimeter for n = 3;
    per[3] = 0.5 * 3 * sqrtl(3);
    for(i = 3; i <= 60; i++)
    {
        per[i + 1] = powl(2, i) * sqrtl(2 * (1.0 - sqrtl(1.0 - (per[i] / powl(2, i)) * (per[i] / powl(2, i)))));
        printf("%d %f \n", i, per[i]);
    }
    return 0;
    for(k = 0; k < 6; k++)
    {
        //p[k] = k
    }
}
Some ideas:
Use y = (1.0 - x)*(1.0 + x) instead of y = 1.0 - x*x. This helps with one stage of "subtraction of nearly equal values", but I am still stuck on the next 1.0 - sqrtl(y) as y approaches 1.0.
// per[i + 1] = powl(2, i) * sqrtl(2 * (1.0 - sqrtl(1.0 - (per[i] / powl(2, i)) * (per[i] / powl(2, i)))));
long double p = powl(2, i);
// per[i + 1] = p * sqrtl(2 * (1.0 - sqrtl(1.0 - (per[i] / p) * (per[i] / p))));
long double x = per[i] / p;
// per[i + 1] = p * sqrtl(2 * (1.0 - sqrtl(1.0 - x * x)));
// per[i + 1] = p * sqrtl(2 * (1.0 - sqrtl((1.0 - x)*(1.0 + x)) ));
long double y = (1.0 - x)*( 1.0 + x);
per[i + 1] = p * sqrtl(2 * (1.0 - sqrtl(y) ));
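One possible way past that remaining 1.0 - sqrtl(y) is to multiply by the conjugate: since y = 1 - x*x, we have 1 - sqrtl(y) = x*x / (1 + sqrtl(y)), which contains no subtraction of nearly equal values at all. A sketch of the rewritten update, using the same variables as above (my rearrangement, not tested against the original):
long double p = powl(2, i);
long double x = per[i] / p;
long double y = (1.0L - x) * (1.0L + x);                // y = 1 - x*x, computed stably
// 1 - sqrt(y) == (1 - y)/(1 + sqrt(y)) == x*x/(1 + sqrt(y)), so no cancellation remains:
per[i + 1] = p * x * sqrtl(2.0L / (1.0L + sqrtl(y)));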
Change array size or for()
double per[61+1] = { 0 }; // Add 1 here
...
for (i = 3; i <= 60; i++) {
...
per[i + 1] =
Following is a similar method for pi
unsigned n = 6;
double sine = 0.5;
double cosine = sqrt(0.75);
double pi = n*sine;
static const double mpi = 3.1415926535897932384626433832795;
do {
    sine = sqrt((1 - cosine)/2);
    cosine = sqrt((1 + cosine)/2);
    n *= 2;
    pi = n*sine;
    printf("%6u s:%.17e c:%.17e pi:%.17e %%:%.6e\n", n, sine, cosine, pi, (pi-mpi)/mpi);
} while (n < 500000);
Subtracting 1.0 from a nearly-1.0 number is leading to "catastrophic cancellation", where the relative error in a FP calculation skyrockets due to the loss of significant digits. Try evaluating pow(2, i) - (pow(2, i) - 1.0) for each i between 0 and 60 and you'll see what I mean.
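For instance, a minimal demonstration of that suggestion (assuming IEEE-754 binary64 double):
#include <stdio.h>
#include <math.h>

int main(void)
{
    // Once pow(2, i) grows past about 2^53, the 1.0 is absorbed by rounding
    // and the difference collapses to 0 instead of the exact answer 1.
    for (int i = 0; i <= 60; i++)
        printf("%2d %g\n", i, pow(2, i) - (pow(2, i) - 1.0));
    return 0;
}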
The only real solution to this issue is reorganizing your equations to avoid subtracting nearly-equal nonzero quantities. For more details, see Acton, Real Computing Made Real, or Higham, Accuracy and Stability of Numerical Algorithms.
I'm having some difficulty troubleshooting code I wrote in C to perform a logistic regression. While it seems to work on smaller, semi-randomized datasets, it stops working (e.g., assigning proper probabilities of belonging to class 1) at around the point where I pass 43,500 observations (determined by tweaking the number of observations created). When creating the 150 features used in the code, I do create the first two as a function of the number of observations, so I'm not sure if maybe that's the issue here, though I am using double precision. Maybe there's an overflow somewhere in the code?
The below code should be self-contained; it generates m=50,000 observations with n=150 features. Setting m below 43,500 should return "Percent class 1: 0.250000", setting to 44,000 or above will return "Percent class 1: 0.000000", regardless of what max_iter (number of times we sample m observations) is set to.
The first feature is set to 1.0 divided by the total number of observations, if class 0 (first 75% of observations), or the index of the observation divided by the total number of observations otherwise.
The second feature is just index divided by total number of observations.
All other features are random.
The logistic regression is intended to use stochastic gradient descent, randomly selecting an observation index, computing the gradient of the loss with the predicted y using current weights, and updating weights with the gradient and learning rate (eta).
Using the same initialization with Python and NumPy, I still get the proper results, even above 50,000 observations.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <time.h>

// Compute z = w * x + b
double dlc( int n, double *X, double *coef, double intercept )
{
    double y_pred = intercept;
    for (int i = 0; i < n; i++)
    {
        y_pred += X[i] * coef[i];
    }
    return y_pred;
}

// Compute y_hat = 1 / (1 + e^(-z))
double sigmoid( int n, double alpha, double *X, double *coef, double beta, double intercept )
{
    double y_pred;
    y_pred = dlc(n, X, coef, intercept);
    y_pred = 1.0 / (1.0 + exp(-y_pred));
    return y_pred;
}

// Stochastic gradient descent
void sgd( int m, int n, double *X, double *y, double *coef, double *intercept, double eta, int max_iter, int fit_intercept, int random_seed )
{
    double *gradient_coef, *X_i;
    double y_i, y_pred, resid;
    int idx;
    double gradient_intercept = 0.0, alpha = 1.0, beta = 1.0;

    X_i = (double *) malloc (n * sizeof(double));
    gradient_coef = (double *) malloc (n * sizeof(double));

    for ( int i = 0; i < n; i++ )
    {
        coef[i] = 0.0;
        gradient_coef[i] = 0.0;
    }
    *intercept = 0.0;

    srand(random_seed);

    for ( int epoch = 0; epoch < max_iter; epoch++ )
    {
        for ( int run = 0; run < m; run++ )
        {
            // Randomly sample an observation
            idx = rand() % m;
            for ( int i = 0; i < n; i++ )
            {
                X_i[i] = X[n*idx+i];
            }
            y_i = y[idx];

            // Compute y_hat
            y_pred = sigmoid( n, alpha, X_i, coef, beta, *intercept );
            resid = -(y_i - y_pred);

            // Compute gradients and adjust weights
            for (int i = 0; i < n; i++)
            {
                gradient_coef[i] = X_i[i] * resid;
                coef[i] -= eta * gradient_coef[i];
            }
            if ( fit_intercept == 1 )
            {
                *intercept -= eta * resid;
            }
        }
    }
}
int main(void)
{
    double *X, *y, *coef, *y_pred;
    double intercept;
    double eta = 0.05;
    double alpha = 1.0, beta = 1.0;
    long m = 50000;
    long n = 150;
    int max_iter = 20;
    long class_0 = (long)(3.0 / 4.0 * (double)m);
    double pct_class_1 = 0.0;
    clock_t test_start;
    clock_t test_end;
    double test_time;

    printf("Constructing variables...\n");
    X = (double *) malloc (m * n * sizeof(double));
    y = (double *) malloc (m * sizeof(double));
    y_pred = (double *) malloc (m * sizeof(double));
    coef = (double *) malloc (n * sizeof(double));

    // Initialize classes
    for (int i = 0; i < m; i++)
    {
        if (i < class_0)
        {
            y[i] = 0.0;
        }
        else
        {
            y[i] = 1.0;
        }
    }

    // Initialize observation features
    for (int i = 0; i < m; i++)
    {
        if (i < class_0)
        {
            X[n*i] = 1.0 / (double)m;
        }
        else
        {
            X[n*i] = (double)i / (double)m;
        }
        X[n*i + 1] = (double)i / (double)m;
        for (int j = 2; j < n; j++)
        {
            X[n*i + j] = (double)(rand() % 100) / 100.0;
        }
    }

    // Fit weights
    printf("Running SGD...\n");
    test_start = clock();
    sgd( m, n, X, y, coef, &intercept, eta, max_iter, 1, 42 );
    test_end = clock();
    test_time = (double)(test_end - test_start) / CLOCKS_PER_SEC;
    printf("Time taken: %f\n", test_time);

    // Compute y_hat and share of observations predicted as class 1
    printf("Making predictions...\n");
    for ( int i = 0; i < m; i++ )
    {
        y_pred[i] = sigmoid( n, alpha, &X[i*n], coef, beta, intercept );
    }

    printf("Printing results...\n");
    for ( int i = 0; i < m; i++ )
    {
        //printf("%f\n", y_pred[i]);
        if (y_pred[i] > 0.5)
        {
            pct_class_1 += 1.0;
        }
        // Troubleshooting print
        if (i < 10 || i > m - 10)
        {
            printf("%g\n", y_pred[i]);
        }
    }
    printf("Percent class 1: %f", pct_class_1 / (double)m);
    return 0;
}
For reference, here is my (presumably) equivalent Python code, which returns the correct percent of identified classes at more than 50,000 observations:
import numpy as np
import time

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class LogisticRegressor:
    def __init__(self, eta, init_runs, fit_intercept=True):
        self.eta = eta
        self.init_runs = init_runs
        self.fit_intercept = fit_intercept

    def fit(self, x, y):
        m, n = x.shape
        self.coef = np.zeros((n, 1))
        self.intercept = np.zeros((1, 1))
        for epoch in range(self.init_runs):
            for run in range(m):
                idx = np.random.randint(0, m)
                x_i = x[idx:idx+1, :]
                y_i = y[idx]
                y_pred_i = sigmoid(x_i.dot(self.coef) + self.intercept)
                gradient_w = -(x_i.T * (y_i - y_pred_i))
                self.coef -= self.eta * gradient_w
                if self.fit_intercept:
                    gradient_b = -(y_i - y_pred_i)
                    self.intercept -= self.eta * gradient_b

    def predict_proba(self, x):
        m, n = x.shape
        y_pred = np.ones((m, 2))
        y_pred[:,1:2] = sigmoid(x.dot(self.coef) + self.intercept)
        y_pred[:,0:1] -= y_pred[:,1:2]
        return y_pred

    def predict(self, x):
        return np.round(sigmoid(x.dot(self.coef) + self.intercept))

m = 50000
n = 150
class1 = int(3.0 / 4.0 * m)

X = np.random.rand(m, n)
y = np.zeros((m, 1))
for obs in range(m):
    if obs < class1:
        continue
    else:
        y[obs,0] = 1

for obs in range(m):
    if obs < class1:
        X[obs, 0] = 1.0 / float(m)
    else:
        X[obs, 0] = float(obs) / float(m)
    X[obs, 1] = float(obs) / float(m)

logit = LogisticRegressor(0.05, 20)
start_time = time.time()
logit.fit(X, y)
end_time = time.time()
print(round(end_time - start_time, 2))
y_pred = logit.predict(X)
print("Percent:", y_pred.sum() / len(y_pred))
The issue is here:
// Randomly sample an observation
idx = rand() % m;
... in light of the fact that the OP's RAND_MAX is 32767. This is exacerbated by the fact that all of the class 1 observations are at the end.
All samples will be drawn from the first 32768 observations, and when the total number of observations is greater than that, the proportion of class 1 observations among those that can be sampled is less than 0.25. At 43691 total observations, there are no class 1 observations among those that can be sampled.
As a secondary issue, rand() % m does not yield a wholly uniform distribution if m does not evenly divide RAND_MAX + 1, though the effect of this issue will be much more subtle.
Bottom line: you need a better random number generator.
At minimum, you could consider combining the bits from two calls to rand() to yield an integer with sufficient range, but you might want to consider getting a third-party generator. There are several available.
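A minimal sketch of the two-call idea (my illustration, assuming RAND_MAX is at least 32767, the C minimum, so each call supplies at least 15 usable bits):
// Hypothetical helper: combine two rand() calls into one 30-bit value.
// Masking both calls keeps the result well within a 32-bit long.
static long rand30(void)
{
    return ((long)(rand() & 0x7FFF) << 15) | (rand() & 0x7FFF);
}
...
idx = rand30() % m;   // in place of rand() % m; still mildly biased, as noted above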
Note: OP reports "m=50,000 observations with n=150 features.", so perhaps this is not the issue for OP, but I'll leave this answer up for reference when OP tries larger tasks.
A potential issue:
long overflow
m * n * sizeof(double) risks overflow when long is 32-bit and m*n > LONG_MAX (or about 46,341 if m, n are the same).
OP does report "m=50,000 observations with n=150 features", for which the product still fits in 32-bit math, so this may not bite yet.
A first step is to perform the multiplication using size_t math where we gain at least 1 more bit in the calculation.
// m * n * sizeof(double)
sizeof(double) * m * n
Yet unless OP's size_t is wider than 32 bits, we still have trouble.
In any case, I recommend using size_t for array sizing and indexing.
Check allocations for failure too.
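For example (a sketch, not OP's exact code; assumes <stdio.h> and <stdlib.h> are included):
size_t m = 50000, n = 150;
double *X = malloc(sizeof(double) * m * n);   // size_t math throughout
if (X == NULL)
{
    fprintf(stderr, "allocation of %zu bytes failed\n", sizeof(double) * m * n);
    exit(EXIT_FAILURE);
}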
Since RAND_MAX may be too small and array indexing should be done using size_t math, consider a helper function to generate a random index over the entire size_t range.
// idx = rand() % m;
size_t idx = rand_size_t() % (size_t)m;
If stuck with the standard rand(), below is a helper function to extend its range as needed.
It uses the real nifty IMAX_BITS(m).
#include <assert.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>

// https://stackoverflow.com/a/4589384/2410359
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

// Test that RAND_MAX is a power of 2 minus 1
_Static_assert((RAND_MAX & 1) && ((RAND_MAX/2 + 1) & (RAND_MAX/2)) == 0, "RAND_MAX is not a Mersenne number");

#define RAND_MAX_WIDTH (IMAX_BITS(RAND_MAX))
#define SIZE_MAX_WIDTH (IMAX_BITS(SIZE_MAX))

size_t rand_size_t(void) {
    size_t index = (size_t) rand();
    for (unsigned i = RAND_MAX_WIDTH; i < SIZE_MAX_WIDTH; i += RAND_MAX_WIDTH) {
        index <<= RAND_MAX_WIDTH;
        index ^= (size_t) rand();
    }
    return index;
}
Further considerations can replace the rand_size_t() % (size_t)m with a more uniform distribution.
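One common approach is rejection sampling; a sketch building on the rand_size_t() above (my illustration, assuming rand_size_t() is uniform over [0, SIZE_MAX] and m > 0):
// Return a uniformly distributed index in [0, m).
// Draws from the biased tail of the range are discarded and retried.
size_t rand_index(size_t m) {
    size_t limit = SIZE_MAX - SIZE_MAX % m;   // largest multiple of m in range
    size_t r;
    do {
        r = rand_size_t();
    } while (r >= limit);
    return r % m;
}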
As has been determined elsewhere, the problem is due to the implementation's RAND_MAX value being too small.
Assuming 32-bit ints, a slightly better PRNG function can be implemented in the code, such as this C implementation of the minstd_rand() function from C++:
#define MINSTD_RAND_MAX 2147483646

// Code assumes `int` is at least 32 bits wide.

static unsigned int minstd_seed = 1;

static void minstd_srand(unsigned int seed)
{
    seed %= 2147483647;
    // zero seed is bad!
    minstd_seed = seed ? seed : 1;
}

static int minstd_rand(void)
{
    minstd_seed = (unsigned long long)minstd_seed * 48271 % 2147483647;
    return (int)minstd_seed;
}
Another problem is that expressions of the form rand() % m produce a biased result when m does not divide (unsigned int)RAND_MAX + 1. Here is an unbiased function that returns a random integer from 0 to le inclusive, making use of the minstd_rand() function defined earlier:
static int minstd_rand_max(int le)
{
    int r;

    if (le < 0)
    {
        r = le;
    }
    else if (le >= MINSTD_RAND_MAX)
    {
        r = minstd_rand();
    }
    else
    {
        int rm = MINSTD_RAND_MAX - le + MINSTD_RAND_MAX % (le + 1);
        while ((r = minstd_rand()) > rm)
        {
        }
        r /= (rm / (le + 1) + 1);
    }
    return r;
}
(Actually, it does still have a very small bias because minstd_rand() will never return 0.)
For example, replace rand() % 100 with minstd_rand_max(99), and replace rand() % m with minstd_rand_max(m - 1). Also replace srand(random_seed) with minstd_srand(random_seed).
I'm programming in Code Composer Studio a program that generates and displays a sinusoid. The program would normally run on a DSP, but since I don't have the DSK I'm just compiling it and viewing the result in CCS.
I'm having a problem on line 18: the compiler says an expression is expected, and I don't know why. I checked all the commas, parentheses, and braces, and they seem correct.
#include <math.h>
#include <stdio.h>

const int sine_table[40] = {
    0, 5125, 10125, 14876, 19260, 23170, 26509, 29196, 31163, 32364,
    32767, 32364, 31163, 29196, 26509, 23170, 19260, 14876, 10125, 5125,
    0, -5126, -10126, -14877, -19261, -23171, -26510, -29197, -31164, -32365,
    -32768, -32365, -31164, -29197, -26510, -23171, -19261, -14877, -10126, -5126
};

int i = 0;
int x1 = 0;
int x2 = 0;
float y = 0;

float sin1(float phase) {
    x1 = (int) phase % 40;
    if (x1 < 0) x1 += 40;
    x2 = (x1 + 1) % 40;
    y = (sine_table[x2] - sine_table[x1]) * ((float) ((int) (40 * 0.001 * i * 100) % 4100) / 100 - x1) + sine_table[x1];
    return y;
}

int main(void) {
    double pi = 3.1415926535897932384626433832795;
    for (int i = 0; i < 1000; i++) {
        float x = 40 * 0.001 * i;
        float radians = x * 2 * pi / 40;
        printf("%f %f %f\n", x, sin1(x) / 32768, sin(radians));
        i = i + 1;
    }
}
#include <stdio.h>
#include <math.h>

const int TERMS = 7;
const float PI = 3.14159265358979;

int fact(int n) {
    return n <= 0 ? 1 : n * fact(n-1);
}

double sine(int x) {
    double rad = x * (PI / 180);
    double sin = 0;
    int n;
    for(n = 0; n < TERMS; n++) { // That's Taylor series!!
        sin += pow(-1, n) * pow(rad, (2 * n) + 1) / fact((2 * n) + 1);
    }
    return sin;
}

double cosine(int x) {
    double rad = x * (PI / 180);
    double cos = 0;
    int n;
    for(n = 0; n < TERMS; n++) { // That's also Taylor series!
        cos += pow(-1, n) * pow(rad, 2 * n) / fact(2 * n);
    }
    return cos;
}

int main(void){
    int y;
    scanf("%d", &y);
    printf("sine(%d)= %lf\n", y, sine(y));
    printf("cosine(%d)= %lf\n", y, cosine(y));
    return 0;
}
The code above was implemented to compute sine and cosine using Taylor series.
I tried testing the code and it works fine for sine(120).
I am getting wrong answers for sine(240) and sine(300).
Can anyone help me find out why those errors occur?
You should calculate the functions in the first quadrant only [0, pi/2). Exploit the properties of the functions to get the values for other angles. For instance, for values of x between [pi/2, pi), sin(x) can be calculated by sin(pi - x).
The sine of 120 degrees, which is 30 past 90 degrees, is the same as the sine of 60 degrees: 30 degrees before 90. Sine starts at 0, rises toward 1 at 90 degrees, and then falls again in a mirror image to zero at 180.
The negative sine values from pi to 2pi are just -sin(x - pi). I'd handle everything by this recursive definition:
sin(x):
    cases x of:
        [0, pi/2)  -> calculate (Taylor or whatever)
        [pi/2, pi) -> sin(pi - x)
        [pi, 2pi)  -> -sin(x - pi)
        < 0        -> -sin(-x)
        >= 2pi     -> sin(fmod(x, 2pi)) // floating-point remainder
A similar approach for cos, using identity cases appropriate for it.
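A hedged C sketch of that reduction for sine, working in radians; here sine_taylor stands for a Taylor evaluation that is accurate on [0, pi/2] (that helper name is my assumption, not the questioner's function, which takes degrees):
#include <math.h>

// Hypothetical: Taylor evaluation, assumed accurate for rad in [0, pi/2].
double sine_taylor(double rad);

double sine_reduced(double x)   // x in radians
{
    if (x < 0)
        return -sine_reduced(-x);            // sin is odd
    x = fmod(x, 2 * M_PI);                   // fold into [0, 2*pi)
    if (x >= M_PI)
        return -sine_reduced(x - M_PI);      // sin(x) = -sin(x - pi)
    if (x > M_PI / 2)
        return sine_reduced(M_PI - x);       // sin(x) = sin(pi - x)
    return sine_taylor(x);                   // first quadrant
}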
The key points are:
TERMS is too small for proper precision. And if you increase TERMS, you also have to change the fact implementation, as it will overflow when working with int.
I would use a sign variable toggled each iteration instead of the pow(-1, n) overkill.
Use double for the value of PI to avoid losing too many decimals.
Then, for high values, you should increase the number of terms (this is the main issue) and use long long for your factorial method, or you get overflow. I set TERMS to 10 and get proper results:
#include <stdio.h>
#include <math.h>

const int TERMS = 10;
const double PI = 3.14159265358979;

long long fact(int n) {
    return n <= 0 ? 1 : n * fact(n-1);
}

double powd(double x, int n) {
    return n <= 0 ? 1 : x * powd(x, n-1);
}

double sine(int x) {
    double rad = x * (PI / 180);
    double sin = 0;
    int n;
    int sign = 1;
    for(n = 0; n < TERMS; n++) { // That's Taylor series!!
        sin += sign * powd(rad, (2 * n) + 1) / fact((2 * n) + 1);
        sign = -sign;
    }
    return sin;
}

double cosine(int x) {
    double rad = x * (PI / 180);
    double cos = 0;
    int n;
    int sign = 1;
    for(n = 0; n < TERMS; n++) { // That's also Taylor series!
        cos += sign * powd(rad, 2 * n) / fact(2 * n);
        sign = -sign;
    }
    return cos;
}

int main(void){
    int y;
    scanf("%d", &y);
    printf("sine(%d)= %lf\n", y, sine(y));
    printf("cosine(%d)= %lf\n", y, cosine(y));
    return 0;
}
result:
240
sine(240)= -0.866026
cosine(240)= -0.500001
Notes:
my recursive implementation of pow using successive multiplications is probably not needed, since we're dealing with floating point. It accumulates error if n is big.
fact could use floating point to allow bigger numbers and better precision. Actually, I suggested long long, but it would be better not to assume that the size will be enough; better to use a standard type like int64_t for that.
fact and pow results could be pre-computed/hardcoded as well. This would save computation time.
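As an example of avoiding both pow and fact entirely, each Taylor term can be derived from the previous one, since term_{n+1} = -term_n * rad^2 / ((2n+2)(2n+3)). A sketch (my own variant, using 10 terms to match the answer above, and assuming rad is already a modest angle in radians):
// Sine via Taylor series, building each term from the previous one:
// no powers or factorials are ever formed explicitly.
double sine_incremental(double rad)
{
    double term = rad;   // first term: rad^1 / 1!
    double sum = rad;
    for (int n = 0; n < 10; n++)
    {
        term *= -rad * rad / ((2 * n + 2) * (2 * n + 3));
        sum += term;
    }
    return sum;
}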
Another approach folds the angle into [0, 2*PI) with fmod and uses more terms (includes added here so the snippet compiles on its own):
#include <stdio.h>
#include <math.h>

const double TERMS = 14;
const double PI = 3.14159265358979;

double fact(double n) { return n <= 0.0 ? 1 : n * fact(n - 1); }

double sine(double x)
{
    double rad = x * (PI / 180);
    rad = fmod(rad, 2 * PI);
    double sin = 0;
    for (double n = 0; n < TERMS; n++)
        sin += pow(-1, n) * pow(rad, (2 * n) + 1) / fact((2 * n) + 1);
    return sin;
}

double cosine(double x)
{
    double rad = x * (PI / 180);
    rad = fmod(rad, 2 * PI);
    double cos = 0;
    for (double n = 0; n < TERMS; n++)
        cos += pow(-1, n) * pow(rad, 2 * n) / fact(2 * n);
    return cos;
}

int main(void)
{
    printf("sine(240)= %lf\n", sine(240));
    printf("cosine(300)= %lf\n", cosine(300));
}
I have written C code using the improved Euler method to determine the position, velocity, and energy of the oscillator at regular time intervals. However, I run into the problem that the energy of the oscillator decreases even though there are no dissipation terms. I think this is related to the way I update my position and velocity variables, and I would like to get your help on the matter. My code is as follows:
//Compilation and run
//gcc oscillatorimprovedEuler.c -lm -o oscillatorimprovedEuler && ./oscillatorimprovedEuler
#include <stdio.h>
#include <math.h>
// The global constants are defined in the following way (having a constant value throughout the program)
#define m 1.0 // kg
#define k 1.0 // kg/sec^2
#define h 0.1 // sec This is the time step
#define N 201 // Number of time steps

int main(void)
{
    // We avoid using arrays this time
    double x = 0, xint = 0;
    double v = 5, vint = 0; // Just like the previous case
    double t = 0;
    double E = (m * v * v + k * x * x) / 2.0; // This is the energy in units of Joules
    FILE *fp = fopen("oscillatorimprovedEuler.dat", "w+");
    int i = 0;
    for(i = 0; i < N; i++)
    {
        fprintf(fp, "%f \t %f \t %f \t %f \n", x, v, E, t);
        xint = x + (h) * v;
        vint = v - (h) * k * x / m;
        v = v - (h) * ((k * x / m) + (k * xint / m)) / 2.0;
        x = x + (h) * (v + vint) / 2.0;
        E = (m * v * v + k * x * x) / 2.0;
        t += h;
    }
    fclose(fp);
    return 0;
}
There may be a very subtle point I am missing, so I would be grateful if you could point it out. I appreciate your help.
So I figured out, with the aid of math.stackexchange, that the problem was related to updating the position and velocity earlier than they should be, and that more intermediate variables were needed. The now-working code is below:
//Compilation and run
//gcc oscillatorimprovedEuler.c -lm -o oscillatorimprovedEuler && ./oscillatorimprovedEuler
#include <stdio.h>
#include <math.h>
// The global constants are defined in the following way (having a constant value throughout the program)
#define m 1.0 // kg
#define k 1.0 // kg/sec^2
#define h 0.1 // sec This is the time step
#define N 200 // Number of time steps

int main(void)
{
    // We need this many variables to avoid updating the position and velocity too early
    double x = 0.0, xpre = 0, xcor = 0;
    double v = 5.0, vpre = 0, vcor = 0; // Just like the previous case
    double t = 0;
    double E = (m * v * v + k * x * x) / 2.0; // This is the energy in units of Joules
    FILE *fp = fopen("oscillatorimprovedEuler.dat", "w+");
    int i = 0;
    for(i = 0; i < N; i++)
    {
        if (i == 0)
        {
            fprintf(fp, "%f \t %f \t %f \t %f \n", x, v, E, t);
        }
        xpre = x + (h) * v;
        vpre = v - (h) * k * x / m;
        vcor = v - (h) * ((k * x / m) + (k * xpre / m)) / 2.0;
        xcor = x + (h) * (v + vpre) / 2.0;
        E = (m * vcor * vcor + k * xcor * xcor) / 2.0;
        t += h;
        fprintf(fp, "%f \t %f \t %f \t %f \n", xcor, vcor, E, t);
        x = xcor, v = vcor;
    }
    fclose(fp);
    return 0;
}
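As an aside, if long-term energy behavior matters more than per-step accuracy, a semi-implicit (symplectic) Euler step keeps the oscillator's energy bounded rather than drifting. This is a sketch of an alternative loop body using the same m, k, h constants, not part of the corrected code above:
// Semi-implicit (symplectic) Euler: update v first, then use the NEW v for x.
// The energy then oscillates slightly around its true value instead of drifting.
v = v - (h) * k * x / m;
x = x + (h) * v;
E = (m * v * v + k * x * x) / 2.0;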
I have been using Ubuntu 12.04 LTS with GCC to compile the code for my assignments for a while. However, I have recently run into two issues:
The following code calculates zero for a nonzero value when the second formula is used.
There is a large amount of error in the calculation of the integral of the standard normal distribution from 0 to 5 or more standard deviations.
How can I remedy these issues? I am especially concerned with the first one. Any help or suggestion is appreciated. Thanks in advance.
The code is as follows:
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <stdlib.h>
#define N 599

long double
factorial(long double n)
{
    // Here s is the free parameter which is increased by one in each step and
    // pro is the initial product; by setting pro to 1 we also cover the
    // case of zero factorial.
    int s = 1;
    long double pro = 1;
    // Here pro stands for product.
    if (n < 0) {
        printf("Factorial is not defined for a negative number \n");
        return 0;
    } else {
        while (n >= s) {
            pro *= s;
            s++;
        }
        return pro;
    }
}

int main()
{
    // Since the function given is the standard normal distribution
    // probability density function we have mean = 0 and variance = 1.
    // Hence we also have z = x; while dealing with only positive values of
    // x and keeping in mind that the PDF is symmetric around the mean.
    long double * summand1 = malloc((N + 1) * sizeof(long double)); // k runs 0..N
    long double * summand2 = malloc((N + 1) * sizeof(long double));
    int p = 0, k, z[5] = {0, 3, 5, 10, 20};
    long double sum1[5] = {0}, sum2[5] = {0}, factor = 1.0;

    for (p = 0; p <= 4; p++)
    {
        for (k = 0; k <= N; k++)
        {
            summand1[k] = (1 / sqrtl(M_PI * 2)) * powl(-1, k) * powl(z[p], 2 * k + 1) / (factorial(k) * (2 * k + 1) * powl(2, k));
            sum1[p] += summand1[k];
        }
        // The WolframAlpha site gives the same value here
        for (k = 0; k <= N; k++)
        {
            factor *= (2 * k + 1);
            summand2[k] = ((1 / sqrtl(M_PI * 2)) * powl(z[p], 2 * k + 1) / factor);
            //printf("%Le \n", factor);
            sum2[p] += summand2[k];
        }
        sum2[p] = sum2[p] * expl((-powl(z[p], 2)) / 2);
    }

    for (p = 0; p < 4; p++)
    {
        printf("The sum obtained for z between %d - %d \
\nusing the first formula is %Lf \n", z[p], z[p+1], sum1[p+1]);
        printf("The sum obtained for z between %d - %d \
\nusing the second formula is %Lf \n", z[p], z[p+1], sum2[p+1]);
    }
    return 0;
}
The working code, without the outermost for loop, is:
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <stdlib.h>
#define N 1200

long double
factorial(long double n)
{
    // Here s is the free parameter which is increased by one in each step and
    // pro is the initial product; by setting pro to 1 we also cover the
    // case of zero factorial.
    int s = 1;
    long double pro = 1;
    // Here pro stands for product.
    if (n < 0) {
        printf("Factorial is not defined for a negative number \n");
        return 0;
    } else {
        while (n >= s) {
            pro *= s;
            s++;
        }
        return pro;
    }
}

int main()
{
    // Since the function given is the standard normal distribution
    // probability density function we have mean = 0 and variance = 1.
    // Hence we also have z = x; while dealing with only positive values of
    // x and keeping in mind that the PDF is symmetric around the mean.
    long double * summand1 = malloc((N + 1) * sizeof(long double)); // k runs 0..N
    long double * summand2 = malloc((N + 1) * sizeof(long double));
    int k, z = 3;
    long double sum1 = 0, sum2 = 0, pro = 1.0;

    for (k = 0; k <= N; k++)
    {
        summand1[k] = (1 / sqrtl(M_PI * 2)) * powl(-1, k) * powl(z, 2 * k + 1) / (factorial(k) * (2 * k + 1) * powl(2, k));
        sum1 += summand1[k];
    }
    // The WolframAlpha site gives the same value here
    printf("The sum obtained for z between 0-3 using the first formula is %Lf \n", sum1);

    for (k = 0; k <= N; k++)
    {
        pro *= (2 * k + 1);
        summand2[k] = ((1 / sqrtl(M_PI * 2) * powl(z, 2 * k + 1) / pro));
        //printf("%Le \n", pro);
        sum2 += summand2[k];
    }
    sum2 = sum2 * expl((-powl(z, 2)) / 2);
    printf("The sum obtained for z between 0-3 using the second formula is %Lf \n", sum2);
    return 0;
}
I'm quite certain that the problem is in factor not being set back to 1 in the outer loop:
factor *= (2 * k + 1); (in the loop that calculates sum2.)
In the second version provided (the one that works), it starts with z = 3.
However, in the first version, since you do not clear factor between iterations on p, by the time you reach z[2] it is already a huge number.
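The fix is simply to reinitialize factor at the top of the outer loop in the first version:
for (p = 0; p <= 4; p++)
{
    factor = 1.0; // reset for each z[p]; otherwise the product keeps growing across p
    /* ... the two k loops as before ... */
}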
EDIT: Possible help with precision:
Basically you have a huge number, powl(z[p], 2 * k + 1), divided by another huge number, factor. Huge floating-point numbers lose precision. The way to avoid that is to perform the division as soon as possible.
Instead of first calculating powl(z[p], 2 * k + 1) and then dividing by factor:
(z[p] * z[p] * ... * z[p]) / (1 * 3 * 5 * ... * (2*k+1))
rearrange the calculation: (z[p]/1) * (z[p]^2/3) * (z[p]^2/5) * ... * (z[p]^2/(2*k+1))
You can do this in the summand2 calculation, and a similar trick works in summand1.
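A sketch of that rearrangement for the sum2 loop of the working version (my own variant; z is the int from that code, and the 1/sqrtl(2*M_PI) factor is pulled out of the loop):
long double term = z;   // k = 0 term: z^1 / 1
long double sum2 = term;
for (k = 1; k <= N; k++)
{
    // Fold in one more z^2/(2k+1) factor; no huge power or product is ever formed.
    term *= (long double)z * z / (2 * k + 1);
    sum2 += term;
}
sum2 *= expl(-(long double)z * z / 2) / sqrtl(2 * M_PI);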