How do I use more than 100,000 dots? - C

I have made this project that calculates Pi with the Monte Carlo method, but if I use more than 100,000 dots it crashes. Does anyone know how to use something like 1,000,000 dots without crashing? I'm using the GNU compiler. I've tried another compiler but had the same problem.
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <math.h>

int main() {
    srand(time(NULL));
    int dot;
    int dotC = 0;
    int dotS = 0;
    printf("How many dot do you want to use?: ");
    scanf("%d", &dot);
    float pi[dot];
    float x[dot];
    float y[dot];
    for (int i = 1; i < dot; i++) {
        x[i] = (float)rand() / (float)RAND_MAX;
    }
    for (int i = 0; i < dot; i++) {
        y[i] = (float)rand() / (float)RAND_MAX;
    }
    float distance[dot];
    for (int i = 0; i < dot; ++i) {
        distance[i] = sqrt(pow(x[i], 2) + pow(y[i], 2));
    }
    for (int i = 0; i < dot; ++i) {
        if (distance[i] < 1) {
            dotC++;
        }
        dotS++;
    }
    for (int i = 0; i < dot; ++i) {
        pi[i] = (float)dotC / (float)dotS * 4;
    }
    printf("approximation of PY is: ");
    printf("%f\n", pi[0]);
}

You get a stack overflow because you allocate large arrays with automatic storage (aka on the stack), exceeding the stack space available to your program.
You can fix the problem by allocating these from the heap with malloc() or calloc(), but you can simplify the algorithm by not using arrays at all:
- For each random dot, compute the distance and update the dotC and dotS counters. There is no need to store the values.
- The loop that initializes the x array should start at 0. You have undefined behavior as you do not initialize x[0].
- dotS is actually redundant, as its final value is the same as dot.
- You should use double instead of float for increased precision.
- You should use a simple multiplication instead of the more costly pow() function.
- There is no need for sqrt() either: comparing the square of the distance to 1.0 gives the same result.
Here is a simplified version:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main() {
    srand(time(NULL));
    int dots;
    int dotC = 0;
    printf("How many dots do you want to use?: ");
    if (scanf("%d", &dots) != 1)
        return 1;
    for (int i = 0; i < dots; i++) {
        double x = (double)rand() / (double)RAND_MAX;
        double y = (double)rand() / (double)RAND_MAX;
        if (x * x + y * y <= 1.0)
            dotC++;
    }
    printf("approximation of PI is: %.9f\n", 4 * (double)dotC / (double)dots);
    return 0;
}
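For reference, assuming the source is saved as pi_mc.c (a hypothetical file name), it can be compiled and run like this:

gcc -O2 -o pi_mc pi_mc.c
./pi_mc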
On systems with slow floating point, you could change the for loop to use 64-bit integers and produce the same result:
for (int i = 0; i < dots; i++) {
    long long x = rand();
    long long y = rand();
    if (x * x + y * y <= (long long)RAND_MAX * RAND_MAX)
        dotC++;
}
This algorithm is really a benchmark of the pseudo-random number generator: running it for 1 billion dots only produces 4 or 5 correct decimal places, in 13 seconds on my old MacBook with the Apple libc. The integer and floating-point versions run at the same speed on this CPU.

If you want dynamically allocated arrays, you should use malloc(). Replace the variable-length arrays with:
float *pi, *x, *y;
x = malloc(dot * sizeof(float));
if (x == NULL) {
    printf("no memory for x\n");
    exit(1);
}
y = malloc(dot * sizeof(float));
if (y == NULL) {
    printf("no memory for y\n");
    exit(1);
}
pi = malloc(dot * sizeof(float));
if (pi == NULL) {
    printf("no memory for pi\n");
    exit(1);
}
That would work; however, for big numbers it would run out of memory. Perhaps you can calculate incrementally? Why store all the calculations? Perhaps the algorithm is not clear enough to me, but I guess that should be possible...
I looked at your code again, and this seems to do the same:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

int main() {
    srand(time(NULL));
    int dot;
    int dotC = 0;
    int dotS = 0;
    printf("How many dots do you want to use?: ");
    scanf("%d", &dot);
    float x, y;
    for (int i = 0; i < dot; i++) {
        x = (float)rand() / (float)RAND_MAX;
        y = (float)rand() / (float)RAND_MAX;
        if (sqrt(pow(x, 2) + pow(y, 2)) < 1) {
            dotC++;
        }
        dotS++;
    }
    printf("approximation of PI is: %f\n", dotC / (float)dotS * 4);
}

You are probably overflowing the stack (hence this website's name) with the variable-length array allocations (like float distance[dot]) when dot is too large. To solve this, you can either increase the stack size (but you will hit the same issue again with larger numbers) or allocate your arrays on the heap instead, e.g.
float * x = calloc(dot, sizeof(float));
float * y = calloc(dot, sizeof(float));
/* ... */
free(x);
free(y);
Checking calloc return values for NULL is usually recommended as well, see man calloc.
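For instance, a minimal sketch of such a check (the error handling is just one option):

float *x = calloc(dot, sizeof *x);
if (x == NULL) {
    perror("calloc");   /* report the failed allocation */
    return 1;
}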

Well, while the other answers are correct and more reasonable, you can always temporarily increase the stack size limit using the ulimit command if you are on UNIX. This will allow for a few more decimals.
Get the current stack size with ulimit -s, or ulimit -a for more info.
Then you can find the maximum size the stack can be by typing ulimit -H -s, or ulimit -H -a for more info.
Finally, you can set the stack size to the maximum by typing ulimit -s maxsize, where maxsize is the number you get when you type ulimit -H -s.
Stack size will reset to its default value when you terminate your shell.
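For illustration, a typical session might look like this (the numbers are example values and vary by system):

$ ulimit -s          # current soft limit, in KiB
8192
$ ulimit -H -s       # hard limit
65520
$ ulimit -s 65520    # raise the soft limit to the hard limit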

Related

Store a large number as separate digits in an array [duplicate]

I am trying to calculate 100! (that is, the factorial of 100).
I am looking for the simplest way to accomplish this using C. I have read around but have not found a concrete answer.
If you must know, I program with Xcode on Mac OS X.
If you're looking for a simple library, libtommath (from libtomcrypt) is probably what you want.
If you're looking to write a simple implementation yourself (either as a learning exercise or because you only need a very limited subset of bigint functionality and don't want to tack on a dependency to a large library, namespace pollution, etc.), then I might suggest the following for your problem:
Since you can bound the size of the result based on n, simply pre-allocate an array of uint32_t of the required size to hold the result. I'm guessing you'll want to print the result, so it makes sense to use a base that's a power of 10 (i.e. base 1000000000) rather than a power of 2. That is to say, each element of your array is allowed to hold a value between 0 and 999999999.
To multiply this number by a (normal, non-big) integer n, do something like:
uint32_t carry = 0;
for (i = 0; i < len; i++) {
    uint64_t tmp = n * (uint64_t)big[i] + carry;
    big[i] = tmp % 1000000000;
    carry = tmp / 1000000000;
}
if (carry) big[len++] = carry;
If you know n will never be bigger than 100 (or some other small number) and want to avoid going into the 64-bit range (or if you're on a 64-bit platform and want to use uint64_t for your bigint array), then make the base a smaller power of 10 so that the multiplication result will always fit in the type.
Now, printing the result is just something like:
printf("%lu", (unsigned long)big[len-1]);
for (i = len-1; i; i--) printf("%.9lu", (unsigned long)big[i-1]);
putchar('\n');
If you want to use a power of 2 as the base, rather than a power of 10, the multiplication becomes much faster:
uint32_t carry = 0;
for (i = 0; i < len; i++) {
    uint64_t tmp = n * (uint64_t)big[i] + carry;
    big[i] = tmp;
    carry = tmp >> 32;
}
if (carry) big[len++] = carry;
However, printing your result in decimal will not be so pleasant... :-) Of course, if you want the result in hex, then it's easy:
printf("%lx", (unsigned long)big[len-1]);
for (i = len-1; i; i--) printf("%.8lx", (unsigned long)big[i-1]);
putchar('\n');
Hope this helps! I'll leave implementing other things (like addition, multiplication of two bigints, etc.) as an exercise for you. Just think back to how you learned to do base-10 addition, multiplication, division, etc. in grade school and teach the computer how to do that (but in base 10^9 or base 2^32 instead) and you should have no problem.
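For concreteness, here is a minimal sketch of bigint addition in the same base-10^9 representation (my illustration, not part of the original answer; big_add and its parameters are hypothetical names, and <stdint.h>/<stddef.h> are assumed):

/* Add b into a; both are little-endian arrays of base-10^9 digits.
   Assumes lena >= lenb and that a has room for one extra element.
   Returns the (possibly grown) length of a. */
size_t big_add(uint32_t *a, size_t lena, const uint32_t *b, size_t lenb) {
    uint32_t carry = 0;
    for (size_t i = 0; i < lena; i++) {
        uint64_t tmp = (uint64_t)a[i] + (i < lenb ? b[i] : 0) + carry;
        a[i] = tmp % 1000000000;
        carry = tmp / 1000000000;
    }
    if (carry) a[lena++] = carry;
    return lena;
}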
If you're willing to use a library implementation, the standard one seems to be GMP.
mpz_t out;
mpz_init(out);
mpz_fac_ui(out, 100);
mpz_out_str(stdout, 10, out);
This should calculate 100!, from looking at the docs.
You asked for the simplest way to do this. So, here you go:
#include <gmp.h>
#include <stdio.h>
int main(int argc, char **argv) {
    mpz_t mynum;
    mpz_init(mynum);
    mpz_set_ui(mynum, 100);
    int i;
    for (i = 99; i > 1; i--) {
        mpz_mul_si(mynum, mynum, (long)i);
    }
    mpz_out_str(stdout, 10, mynum);
    return 0;
}
I tested this code and it gives the correct answer.
You can also use OpenSSL bn; it is already installed in Mac OS X.
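For example, a minimal sketch using OpenSSL's BIGNUM API (my illustration, not part of the original answer; link with -lcrypto):

#include <openssl/bn.h>
#include <stdio.h>

int main(void) {
    BIGNUM *f = BN_new();
    BN_one(f);                      /* f = 1 */
    for (unsigned long i = 2; i <= 100; i++)
        BN_mul_word(f, i);          /* f *= i */
    char *dec = BN_bn2dec(f);       /* decimal string; caller must free it */
    printf("%s\n", dec);
    OPENSSL_free(dec);
    BN_free(f);
    return 0;
}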
You can print factorial(1000) in C with just 30 lines of code, using only <stdio.h> and the char type:
#include <stdio.h>

#define B_SIZE 3000 // number of buffered digits

struct buffer {
    size_t index;
    char data[B_SIZE];
};

void init_buffer(struct buffer *buffer, int n) {
    for (buffer->index = B_SIZE; n; buffer->data[--buffer->index] = (char) (n % 10), n /= 10);
}

void print_buffer(const struct buffer *buffer) {
    for (size_t i = buffer->index; i < B_SIZE; ++i) putchar('0' + buffer->data[i]);
}

void natural_mul_buffer(struct buffer *buffer, const int n) {
    int a, b = 0;
    for (size_t i = (B_SIZE - 1); i >= buffer->index; --i) {
        a = n * buffer->data[i] + b;
        buffer->data[i] = (char) (a % 10);
        b = a / 10;
    }
    for (; b; buffer->data[--buffer->index] = (char) (b % 10), b /= 10);
}

int main() {
    struct buffer number_1 = {0};
    init_buffer(&number_1, 1);
    for (int i = 2; i <= 100; ++i)
        natural_mul_buffer(&number_1, i);
    print_buffer(&number_1);
}
You will find faster implementations, but even the “little” factorial(10000) is computed almost instantly here.
You can put it into a fact.c file, then compile and execute:
gcc -O3 -std=c99 -Wall -pedantic fact.c ; ./a.out
If you want to perform some base conversion, there is a solution for that too; see also Fibonacci(10000). Thank you.

Segfault with large int - not enough memory?

I am fairly new to C and to how arrays and memory allocation work. I'm writing a very simple function right now, vector_average(), which computes the mean value between each pair of successive array entries, i.e., the average of entries (i) and (i + 1). This average function is the following:
void
vector_average(double *cc, double *nc, int n)
{
    //#pragma omp parallel for
    double tbeg;
    double tend;
    tbeg = Wtime();
    for (int i = 0; i < n; i++) {
        cc[i] = .5 * (nc[i] + nc[i+1]);
    }
    tend = Wtime();
    printf("vector_average() took %g seconds\n", tend - tbeg);
}
My goal is to set int n extremely high, to the point where it actually takes some time to complete this loop (hence why I am tracking wall time in this code). I'm passing this function an arbitrary test function of x, f(x) = sin(x) + 1/3 * sin(3 x), denoted in this code as x_nc, from main() in the following form:
int
main(int argc, char **argv)
{
    int N = 1.E6;
    double x_nc[N+1];
    double dx = 2. * M_PI / N;
    for (int i = 0; i <= N; i++) {
        double x = i * dx;
        x_nc[i] = sin(x) + 1./3. * sin(3.*x);
    }
    double x_cc[N];
    vector_average(x_cc, x_nc, N);
}
But my problem here is that if I set int N any higher than 1.E5, it segfaults. Please provide any suggestions for how I might set N much higher. Perhaps I have to do something with malloc, but, again, I am new to all of this stuff and I'm not quite sure how I would implement this.
-CJW
A program typically gets only about 1 MB of stack memory on Windows (other systems often default to a few megabytes). Obviously, the size of the temporary variable x_nc (about 8 MB for a million doubles) is bigger than that. So you should use the heap to store the data of x_nc:
int
main(int argc, char **argv)
{
    int N = 1.E6;
    double *x_nc = (double *)malloc(sizeof(double) * (N+1));
    double dx = 2. * M_PI / N;
    for (int i = 0; i <= N; i++) {
        double x = i * dx;
        x_nc[i] = sin(x) + 1./3. * sin(3.*x);
    }
    double *x_cc = (double *)malloc(sizeof(double) * N);
    vector_average(x_cc, x_nc, N);
    free(x_nc);
    free(x_cc);
    return 0;
}

Segmentation Fault 11 in C caused by larger operation numbers

I know that a segmentation fault 11 means the program has attempted to access an area of memory that it is not allowed to access.
Here I am trying to calculate a Fourier transform, using the following code.
It works well when nPoints = 2^15 (or, of course, with fewer points); however, it crashes when I further increase the points to 2^16. I am wondering: is that caused by occupying too much memory? But I did not notice excessive memory use during the run. And although it uses recursion, it transforms in place, so I thought it would not occupy that much memory. Then where's the problem?
Thanks in advance
PS: one thing I forgot to say is that the result above was on Mac OS (8 GB memory).
When running the code on Windows (16 GB memory), it crashes when nPoints = 2^14. So I am confused whether it's caused by the memory allocation, as the Windows PC has more memory (but it's really hard to say, because the two operating systems use different memory strategies).
#include <stdio.h>
#include <tgmath.h>
#include <string.h>

// in-place FFT with O(n) memory usage

long double PI;
typedef long double complex cplx;

void _fft(cplx buf[], cplx out[], int n, int step)
{
    if (step < n) {
        _fft(out, buf, n, step * 2);
        _fft(out + step, buf + step, n, step * 2);
        for (int i = 0; i < n; i += 2 * step) {
            cplx t = exp(-I * PI * i / n) * out[i + step];
            buf[i / 2] = out[i] + t;
            buf[(i + n) / 2] = out[i] - t;
        }
    }
}

void fft(cplx buf[], int n)
{
    cplx out[n];
    for (int i = 0; i < n; i++) out[i] = buf[i];
    _fft(buf, out, n, 1);
}

int main()
{
    const int nPoints = pow(2, 15);
    PI = atan2(1.0l, 1) * 4;
    double tau = 0.1;
    double tSpan = 12.5;
    long double dt = tSpan / (nPoints - 1);
    long double T[nPoints];
    cplx At[nPoints];
    for (int i = 0; i < nPoints; ++i)
    {
        T[i] = dt * (i - nPoints / 2);
        At[i] = exp(-T[i] * T[i] / (2 * tau * tau));
    }
    fft(At, nPoints);
    return 0;
}
You cannot allocate very large arrays on the stack. The default stack size on macOS is 8 MiB. The size of your cplx type is 32 bytes, so an array of 2^16 cplx elements is 2 MiB, and you have two of them (one in main and one in fft), so that is 4 MiB. That fits on the stack, and, at that size, the program runs to completion when I try it. At 2^17, it fails, which makes sense because then the program has two arrays taking 8 MiB on the stack. The proper way to allocate such large arrays is to include <stdlib.h> and use cplx *At = malloc(nPoints * sizeof *At); followed by if (!At) { /* Print some error message about being unable to allocate memory and terminate the program. */ }. You should do that for At, T, and out. Also, when you are done with each array, you should free it, as with free(At);.
To calculate an integer power of two, use the integer operation 1 << power, not the floating-point operation pow(2, 16). We have designed pow well on macOS, but, on other systems, it may return approximations even when exact results are possible. An approximate result may be slightly less than the exact integer value, so converting it to an integer truncates to the wrong result. If it may be a power of two larger than suitable for an int, then use (type) 1 << power, where type is a suitably large integer type.
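For illustration, here is a sketch of both changes applied to the question's code (assuming the cplx typedef and _fft from the question; error handling kept minimal):

#include <stdio.h>
#include <stdlib.h>

void fft(cplx buf[], int n)
{
    cplx *out = malloc(n * sizeof *out);   /* heap allocation instead of a VLA */
    if (!out) {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }
    for (int i = 0; i < n; i++) out[i] = buf[i];
    _fft(buf, out, n, 1);
    free(out);
}

/* ... and in main: */
const int nPoints = 1 << 15;               /* integer power of two, no pow() */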
The following instrumented code clearly shows that the OP's code repeatedly updates the same locations in the out[] array and actually does not update most of the locations in that array.
#include <stdio.h>
#include <tgmath.h>
#include <assert.h>

// in-place FFT with O(n) memory usage

#define N_POINTS (1<<15)

double T[N_POINTS];
double At[N_POINTS];
double PI;

// prototypes
void _fft(double buf[], double out[], int step);
void fft(void);

int main(void)
{
    PI = 3.14159;
    double tau = 0.1;
    double tSpan = 12.5;
    double dt = tSpan / (N_POINTS - 1);
    for (int i = 0; i < N_POINTS; ++i)
    {
        T[i] = dt * (i - (N_POINTS / 2));
        At[i] = exp(-T[i] * T[i] / (2 * tau * tau));
    }
    fft();
    return 0;
}

void fft()
{
    double out[N_POINTS];
    for (int i = 0; i < N_POINTS; i++)
        out[i] = At[i];
    _fft(At, out, 1);
}

void _fft(double buf[], double out[], int step)
{
    printf("step: %d\n", step);
    if (step < N_POINTS)
    {
        _fft(out, buf, step * 2);
        _fft(out + step, buf + step, step * 2);
        for (int i = 0; i < N_POINTS; i += 2 * step)
        {
            double t = exp(-I * PI * i / N_POINTS) * out[i + step];
            buf[i / 2] = out[i] + t;
            buf[(i + N_POINTS) / 2] = out[i] - t;
            printf("index: %d buf update: %d, %d\n", i, i / 2, (i + N_POINTS) / 2);
        }
    }
}
I suggest running it as follows (where untitled1 is the name of the executable, on Linux):
./untitled1 > out.txt
less out.txt
The out.txt file is 8630880 bytes.
An examination of that file shows the lack of coverage, and shows that no entry is the sum of the prior two entries, so I suspect this is not a valid Fourier transform.

SSE Intrinsics arithmetic error

I've been experimenting with SSE intrinsics and I seem to have run into a weird bug that I can't figure out. I am computing the inner product of two float arrays, 4 elements at a time.
For testing I've set each element of both arrays to 1, so the product should be == size.
It runs correctly, but whenever I run the code with size greater than about 68,000,000, the code using the SSE intrinsics starts computing the wrong inner product. It seems to get stuck at a certain sum and never exceeds this number. Here is an example run:
joe:~$./test_sse 70000000
sequential inner product: 70000000.000000
sse inner product: 67108864.000000
sequential time: 0.417932
sse time: 0.274255
Compilation:
gcc -fopenmp test_sse.c -o test_sse -std=c99
This error seems to be consistent amongst the handful of computers I've tested it on. Here is the code, perhaps someone might be able to help me figure out what is going on:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <omp.h>
#include <math.h>
#include <assert.h>
#include <xmmintrin.h>

double inner_product_sequential(float *a, float *b, unsigned int size) {
    double sum = 0;
    for (unsigned int i = 0; i < size; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}

double inner_product_sse(float *a, float *b, unsigned int size) {
    assert(size % 4 == 0);
    __m128 X, Y, Z;
    Z = _mm_set1_ps(0.0f);
    float arr[4] __attribute__((aligned(sizeof(float) * 4)));
    for (unsigned int i = 0; i < size; i += 4) {
        X = _mm_load_ps(a + i);
        Y = _mm_load_ps(b + i);
        X = _mm_mul_ps(X, Y);
        Z = _mm_add_ps(X, Z);
    }
    _mm_store_ps(arr, Z);
    return arr[0] + arr[1] + arr[2] + arr[3];
}

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: ./test_sse <size>\n");
        exit(EXIT_FAILURE);
    }
    unsigned int size = atoi(argv[1]);
    srand(time(0));
    float *a = (float *)_mm_malloc(size * sizeof(float), sizeof(float) * 4);
    float *b = (float *)_mm_malloc(size * sizeof(float), sizeof(float) * 4);
    for (int i = 0; i < size; i++) {
        a[i] = b[i] = 1;
    }
    double start, time_seq, time_sse;
    start = omp_get_wtime();
    double inner_seq = inner_product_sequential(a, b, size);
    time_seq = omp_get_wtime() - start;
    start = omp_get_wtime();
    double inner_sse = inner_product_sse(a, b, size);
    time_sse = omp_get_wtime() - start;
    printf("sequential inner product: %f\n", inner_seq);
    printf("sse inner product: %f\n", inner_sse);
    printf("sequential time: %f\n", time_seq);
    printf("sse time: %f\n", time_sse);
    _mm_free(a);
    _mm_free(b);
}
You are running into the precision limit of single-precision floating-point numbers. The number 16777216 (2^24), which is the value of each component of the vector Z when reaching the "limit" inner product, is represented in 32-bit floating point as hexadecimal 0x4b800000, or binary 0 10010111 00000000000000000000000: the 23-bit mantissa is all zeros (with an implicit leading 1 bit), and the 8-bit exponent field is 151, representing the exponent 151 - 127 = 24. If you add 1 to that value, the result would need a 24-bit mantissa, which cannot be represented; the sum is rounded back down, so in single-precision floating-point arithmetic 2^24 + 1 == 2^24.
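You can see this plateau with a two-line experiment:

#include <stdio.h>

int main(void) {
    float f = 16777216.0f;       /* 2^24 */
    printf("%f\n", f + 1.0f);    /* prints 16777216.000000 */
    return 0;
}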
You do not see this in your sequential function because there you are using a 64-bit double-precision value to store the result, and, as we are working on an x86 platform, internally an 80-bit extended-precision register is most probably used.
You can force single precision throughout in your sequential code by rewriting it as

float inner_product_sequential(float *a, float *b, unsigned int size) {
    float sum = 0;
    for (unsigned int i = 0; i < size; i++) {
        sum += a[i] * b[i];
    }
    return sum;
}
and you will see 16777216.000000 as the maximum computed value.
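Conversely, if you want the SSE version to stay accurate at large sizes, one option (my sketch, not part of the original answer; requires SSE2) is to widen each block of products to double before accumulating:

#include <emmintrin.h>  /* SSE2 intrinsics */

/* Accumulate the float products in double precision so the sum does not
   plateau at 2^24. Assumes size % 4 == 0 and 16-byte aligned inputs,
   as in the question. */
double inner_product_sse2(const float *a, const float *b, unsigned int size) {
    __m128d lo = _mm_setzero_pd(), hi = _mm_setzero_pd();
    for (unsigned int i = 0; i < size; i += 4) {
        __m128 p = _mm_mul_ps(_mm_load_ps(a + i), _mm_load_ps(b + i));
        lo = _mm_add_pd(lo, _mm_cvtps_pd(p));                    /* low two floats  */
        hi = _mm_add_pd(hi, _mm_cvtps_pd(_mm_movehl_ps(p, p)));  /* high two floats */
    }
    double tmp[2];
    _mm_storeu_pd(tmp, _mm_add_pd(lo, hi));
    return tmp[0] + tmp[1];
}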

Numerical Integral from 0 to infinity

My aim is to calculate the numerical integral of a probability distribution function (PDF) of the distance of an electron from the nucleus of the hydrogen atom in the C programming language. I have written a sample code; however, it fails to find the numerical value correctly because I cannot increase the limit as much as is necessary, in my opinion. I have also included the <limits.h> library, but I cannot use the values stated in the following post as integral boundaries: min and max value of data type in C. What is the remedy in this case? Should I switch to another programming language, maybe? Any help and suggestions are appreciated; thanks in advance.
Edit: After some value, I get a segmentation fault error. I have checked that the actual result of the integral is 0.0372193 with WolframAlpha. In addition to this, if I increment k in smaller amounts, I get zero as a result; that is why I defined r[k] = k, though I know it should be smaller for increased precision.
#include <stdio.h>
#include <math.h>
#include <limits.h>

#define a0 0.53

int N = 200000;
// This value of N is the highest possible number in long double
// data format. Change its value to adjust the precision of integration
// and computation time.

// The discrete integral may be defined as follows:
long double trapezoid(long double x[], long double f[]) {
    int i;
    long double dx = x[1] - x[0];
    long double sum = 0.5 * (f[0] + f[N]);
    for (i = 1; i < N; i++)
        sum += f[i];
    return sum * dx;
}

main() {
    long double P[N], r[N], a;
    // Declare and initialize the loop variable
    int k = 0;
    for (k = 0; k < N; k++)
    {
        r[k] = k;
        P[k] = r[k] * r[k] * exp(-2 * r[k] / a0);
        //printf("%.20Lf \n", r[k]);
        //printf("%.20Lf \n", P[k]);
    }
    a = trapezoid(r, P);
    printf("%.20Lf \n", a);
}
Last Code:
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <stdlib.h>

#define a0 0.53
#define N LLONG_MAX
// This value of N is the highest possible number in long double
// data format. Change its value to adjust the precision of integration
// and computation time.

// The discrete integral may be defined as follows:
long double trapezoid(long double x[], long double f[]) {
    int i;
    long double dx = x[1] - x[0];
    long double sum = 0.5 * (f[0] + f[N]);
    for (i = 1; i < N; i++)
        sum += f[i];
    return sum * dx;
}

main() {
    printf("%Ld", LLONG_MAX);
    long double *P = malloc(N * sizeof(long double));
    long double *r = malloc(N * sizeof(long double));
    // Declare and initialize the loop variable
    int k = 0;
    long double integral;
    for (k = 1; k < N; k++)
    {
        P[k] = r[k] * r[k] * expl(-2 * r[k] / a0);
    }
    integral = trapezoid(r, P);
    printf("%Lf", integral);
}
Edit: last code, now working:
#include <stdio.h>
#include <math.h>
#include <limits.h>
#include <stdlib.h>

#define a0 0.53
#define N LONG_MAX/100
// This value of N is the highest possible number in long double
// data format. Change its value to adjust the precision of integration
// and computation time.

// The discrete integral may be defined as follows:
long double trapezoid(long double x[], long double f[]) {
    int i;
    long double dx = x[1] - x[0];
    long double sum = 0.5 * (f[0] + f[N]);
    for (i = 1; i < N; i++)
        sum += f[i];
    return sum * dx;
}

main() {
    printf("%Ld \n", LLONG_MAX);
    long double *P = malloc(N * sizeof(long double));
    long double *r = malloc(N * sizeof(long double));
    // Declare and initialize the loop variable
    int k = 0;
    long double integral;
    for (k = 1; k < N; k++)
    {
        r[k] = k / 100000.0;
        P[k] = r[k] * r[k] * expl(-2 * r[k] / a0);
    }
    integral = trapezoid(r, P);
    printf("%.15Lf \n", integral);
    free((void *)P);
    free((void *)r);
}
In particular, I have changed the definition of r[k] by using a floating-point number in the division operation to get a long double as a result, and, as I stated in my last comment, I cannot go for Ns larger than LONG_MAX/100; I think I should investigate the code and malloc further to understand the issue. I have found the exact value that is obtained analytically by taking the limits; I have confirmed the result with a TI-89 Titanium and WolframAlpha (both numerically and analytically) apart from doing it myself. The trapezoid rule worked out pretty well once the interval size was decreased. Many thanks to all the posters here for their ideas. By the way, having LONG_MAX equal to 2147483647 is not as large as I expected; should the limit not be around ten to the power 308?
Numerical point of view
The usual trapezoid method doesn't work with improper integrals. As such, Gaussian quadrature rules are a much better choice, since they not only provide 2n-1 exactness (that is, for a polynomial of degree 2n-1 they return the exact integral), but also handle improper integrals by using the right weight function.
If your integral is improper on both sides, you should try the Gauss-Hermite quadrature; otherwise, use the Gauss-Laguerre quadrature.
The "overflow" error
long double P[N], r[N], a;
P has a size of roughly 3 MB, and so does r. That's too much memory for the stack. Allocate the memory on the heap instead:
long double * P = malloc(N * sizeof(long double));
long double * r = malloc(N * sizeof(long double));
Don't forget to include <stdlib.h> and to call free on both P and r when you don't need them any longer. Also, you may not access the N-th entry of an array with N elements, so f[N] is wrong.
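For instance, a corrected trapezoid routine might look like this (a sketch; it takes the length explicitly and stops at the last valid entry, f[n-1]):

long double trapezoid(const long double x[], const long double f[], int n) {
    long double dx = x[1] - x[0];
    long double sum = 0.5L * (f[0] + f[n - 1]);
    for (int i = 1; i < n - 1; i++)
        sum += f[i];
    return sum * dx;
}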
Using Gauss-Laguerre quadrature
Now, Gauss-Laguerre uses exp(-x) as its weight function. If you're not familiar with Gaussian quadrature: the result of E(f) is the integral of w * f, where w is the weight function.
Your f looks like this:
f(x) = x^2 * exp(-2 * x / a)
Wait a minute: f already contains an exp(-term), so we can substitute x = t * a/2 (that is, t = 2x/a) and get
f(x) dx = (t * a/2)^2 * exp(-t) * (a/2) dt
Since exp(-t) is already part of our weight function, your function now fits perfectly into the Gauss-Laguerre quadrature. The resulting code is
#include <stdio.h>
#include <math.h>

/* x[] and a[] taken from
 * https://de.wikipedia.org/wiki/Gau%C3%9F-Quadratur#Gau.C3.9F-Laguerre-Integration
 * Calculating them by hand is a little bit cumbersome
 */
const int gauss_rule_length = 3;
const double gauss_x[] = {0.415774556783, 2.29428036028, 6.28994508294};
const double gauss_a[] = {0.711093009929, 0.278517733569, 0.0103892565016};

double f(double x) {
    return x * .53/2 * x * .53/2 * .53/2;
}

int main() {
    int i;
    double sum = 0;
    for (i = 0; i < gauss_rule_length; ++i) {
        sum += gauss_a[i] * f(gauss_x[i]);
    }
    printf("%.10lf\n", sum); /* 0.0372192500 */
    return 0;
}
