Integration of Bessel functions in C++/Fortran [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
How can I numerically integrate an expression involving Bessel functions from 0 to infinity in Fortran and/or C?
I did this in MATLAB, but the results are not correct for larger inputs: beyond a certain argument the Bessel functions return completely wrong values (there is a restriction in MATLAB).

There are a large number of analytic results for various integrals of the Bessel functions (see DLMF, Sect. 10.22), including definite integrals over precisely this range. You would be much better off, and almost certainly faster and more accurate, trying hard to recast your expression into something with a known closed form and using an exact result.
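For example, here is a minimal sketch (in C with GSL; the parameters p and a are arbitrary illustrative values, not taken from the question) that checks the classical Lipschitz integral int_0^inf exp(-p*t) J0(a*t) dt = 1/sqrt(p^2 + a^2) against a direct numerical evaluation over the semi-infinite range:
#include <stdio.h>
#include <math.h>
#include <gsl/gsl_sf_bessel.h>
#include <gsl/gsl_integration.h>

struct lip_params { double p, a; };

/* Integrand of the Lipschitz integral: exp(-p t) * J0(a t). */
static double lip_integrand(double t, void *params)
{
    struct lip_params *lp = params;
    return exp(-lp->p * t) * gsl_sf_bessel_J0(lp->a * t);
}

int main(void)
{
    struct lip_params lp = { 0.7, 1.3 };   /* arbitrary example values */
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    gsl_function F = { &lip_integrand, &lp };
    double result, error;

    /* Adaptive integration over [0, infinity). */
    gsl_integration_qagiu(&F, 0.0, 0.0, 1e-10, 1000, w, &result, &error);

    double exact = 1.0 / sqrt(lp.p * lp.p + lp.a * lp.a);
    printf("numeric = %.12f  exact = %.12f  diff = %g\n",
           result, exact, result - exact);

    gsl_integration_workspace_free(w);
    return 0;
}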

The last time I dealt with such problems, the state of the art was to integrate over the intervals defined by the zero crossings of the integrand. That is relatively stable in most cases and, if the integrand approaches zero reasonably fast, easy to do.
As a starting point for experimenting I've included a bit of code. Of course you still need to work on the convergence detection and error checking. This is not production code, but I thought it might provide a starting point for you. It uses GSL.
On my iMac this code takes about 2 µs per iteration. It does not become noticeably faster with a hardcoded table of intervals.
I hope this is of some use to you.
#include <iostream>
#include <cmath>
#include <gsl/gsl_sf_bessel.h>
#include <gsl/gsl_integration.h>

// Integrand: J0(x) / (1 + x)
double f(double x, void *params) {
    return 1.0 / (1.0 + x) * gsl_sf_bessel_J0(x);
}

int main(int argc, const char *argv[]) {
    double sum = 0;
    double delta = 0.00001;        // crude convergence threshold
    int max_steps = 1000;

    gsl_integration_workspace *w = gsl_integration_workspace_alloc(max_steps);

    gsl_function F;
    F.function = &f;
    F.params = nullptr;

    double result, error;
    double a, b;

    for (int n = 0; n < max_steps; n++)
    {
        // Integrate between consecutive zeros of J0.
        if (n == 0)
        {
            a = 0.0;
            b = gsl_sf_bessel_zero_J0(1);
        }
        else
        {
            a = gsl_sf_bessel_zero_J0(n);
            b = gsl_sf_bessel_zero_J0(n + 1);
        }

        gsl_integration_qag(&F,     // function
                            a,      // from
                            b,      // to
                            0,      // eps absolute
                            1e-4,   // eps relative
                            max_steps,
                            GSL_INTEG_GAUSS15,
                            w,
                            &result,
                            &error);
        sum += result;
        std::cout << n << " " << result << " " << sum << "\n";

        // Stop once the contribution of an interval becomes negligible.
        if (std::fabs(result) < delta)
            break;
    }

    gsl_integration_workspace_free(w);
    return 0;
}

You can pretty much google and find lots of Bessel functions implemented in C already.
http://www.atnf.csiro.au/computing/software/gipsy/sub/bessel.c
http://jean-pierre.moreau.pagesperso-orange.fr/c_bessel.html
https://msdn.microsoft.com/en-us/library/h7zkk1bz.aspx
In the end, these use the built-in types and will be limited to the ranges they can represent (just as MATLAB is). At best, expect about 15 digits of precision using double-precision floating-point representation. So, for large numbers, they will appear to be rounded, e.g. 1237846464123450000000000.00000.
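A tiny sketch of that limit (the constant below is just an arbitrary 25-digit example): with IEEE 754 double precision, DBL_DIG is 15, so digits beyond roughly the 16th significant place are not preserved.
#include <stdio.h>
#include <float.h>

int main(void)
{
    double big = 1237846464123456789012345.0;   /* 25 significant digits */

    printf("DBL_DIG   = %d\n", DBL_DIG);        /* 15 for IEEE 754 double */
    printf("stored as = %.5f\n", big);          /* trailing digits come out "rounded" */
    return 0;
}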
And, of course, others on Stack Overflow have looked into it.
C++ Bessel function for complex numbers

Related

What is a more accurate algorithm I can use to calculate the sine of a number?

I have this code that calculates a guess for sine and compares it to the standard C library's (glibc's in my case) result:
#include <stdio.h>
#include <math.h>

double double_sin(double a)
{
    a -= (a*a*a)/6;
    return a;
}

int main(void)
{
    double clib_sin = sin(.13),
           my_sin = double_sin(.13);

    printf("%.16f\n%.16f\n%.16f\n", clib_sin, my_sin, clib_sin-my_sin);
    return 0;
}
The accuracy for double_sin is poor (about 5-6 digits). Here's my output:
0.1296341426196949
0.1296338333333333
0.0000003092863615
As you can see, after .12963, the results differ.
Some notes:
I don't think the Taylor series will work for this specific situation, the factorials required for greater accuracy aren't able to be stored inside an unsigned long long.
Lookup tables are not an option, they take up too much space and generally don't provide any information on how to calculate the result.
If you use magic numbers, please explain them (although I would prefer if they were not used).
I would greatly prefer an algorithm that is easily understandable and usable as a reference over one that is not.
The result does not have to be perfectly accurate. A minimum would be the requirements of IEEE 754, C, and/or POSIX.
I'm using the IEEE-754 double format, which can be relied on.
The range supported needs to be at least from -2*M_PI to 2*M_PI. It would be nice if range reduction were included.
What is a more accurate algorithm I can use to calculate the sine of a number?
I had an idea about something similar to Newton-Raphson, but for calculating sine instead. However, I couldn't find anything on it and am ruling this possibility out.
You can actually get pretty close with the Taylor series. The trick is not to calculate the full factorial on each iteration.
The Taylor series looks like this:
sin(x) = x^1/1! - x^3/3! + x^5/5! - x^7/7! + ...
Looking at the terms, you calculate the next term by multiplying the numerator by x^2, multiplying the denominator by the next two numbers in the factorial, and switching the sign. Then you stop when adding the next term doesn't change the result.
So you could code it like this:
double double_sin(double x)
{
    double result = 0;
    double factor = x;
    int i;

    for (i=2; result+factor!=result; i+=2) {
        result += factor;
        factor *= -(x*x)/(i*(i+1));
    }
    return result;
}
My output:
0.1296341426196949
0.1296341426196949
-0.0000000000000000
EDIT:
The accuracy can be increased further if the terms are added in the reverse direction; however, this means computing a fixed number of terms:
#define FACTORS 30

double double_sin(double x)
{
    double result = 0;
    double factor = x;
    int i, j;
    double factors[FACTORS];

    for (i=2, j=0; j<FACTORS; i+=2, j++) {
        factors[j] = factor;
        factor *= -(x*x)/(i*(i+1));
    }
    for (j=FACTORS-1; j>=0; j--) {
        result += factors[j];
    }
    return result;
}
This implementation loses accuracy if x falls outside the range of 0 to 2*PI. This can be fixed by calling x = fmod(x, 2*M_PI); at the start of the function to normalize the value.
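A minimal sketch of that range reduction, assuming the double_sin() above (the wrapper name is made up for illustration):
#include <math.h>

double double_sin(double x);        /* the fixed-term version above */

double double_sin_reduced(double x)
{
    x = fmod(x, 2.0 * M_PI);        /* now in (-2*PI, 2*PI) */
    if (x < 0.0)
        x += 2.0 * M_PI;            /* shift into [0, 2*PI) */
    return double_sin(x);
}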

Increase precision on a double variable for small numbers [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 4 years ago.
I'm trying to perform the following operation:
double exponent = pow(2.0, -254);
The result I get is 'inf'; the actual result is 3.4544e-77, which is a very small number. I could understand getting '0' instead, but I get 'inf'.
I need the actual result. Is there a way to improve the precision of a double? I have also tried long double without success.
I'm programming in C with Visual Studio.
In C, you can multiply by powers of two with scalb, so scalb(1, -254) is 2^-254. You can also write powers of two as hexadecimal floating-point constants; 2^-254 is 0x1p-254.
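A minimal sketch of those options (scalbn/ldexp are the standard C spellings; scalb is the older name mentioned above):
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = scalbn(1.0, -254);   /* 1 * 2^-254 */
    double b = ldexp(1.0, -254);    /* same operation */
    double c = 0x1p-254;            /* hexadecimal floating-point constant */

    printf("%g %g %g\n", a, b, c);  /* all print roughly 3.4544e-77 */
    return 0;
}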
However, pow(2.0, -254) does not return infinity. If you attempted to print the return value, and “inf” was printed, there is an error in your source code.
Yes, the problem was in my source code. My original code is:
#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned int scale = 1;
    unsigned int bias = 255;
    double temp = pow(2.0, scale - bias);   /* scale - bias is evaluated as unsigned */

    printf("Number is: %e\n", temp);
    return 0;
}
I'm guessing the operation 'scale - bias' wraps around because both operands are unsigned ints, giving me 'inf'. Changing them to int solves the issue (a small demonstration of the wraparound follows the corrected code below):
#include <stdio.h>
#include <math.h>

int main(void)
{
    int scale = 1;
    int bias = 255;
    double temp = pow(2.0, scale - bias);   /* now a signed -254 */

    printf("Number is: %e\n", temp);
    return 0;
}
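Here is a small sketch of that wraparound (on a platform with 32-bit unsigned int):
#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned int scale = 1;
    unsigned int bias = 255;

    /* With unsigned operands, 1 - 255 wraps around to 4294967042 ... */
    printf("scale - bias as unsigned: %u\n", scale - bias);
    /* ... and pow(2.0, 4294967042.0) overflows to infinity. */
    printf("pow(2, scale - bias):     %e\n", pow(2.0, scale - bias));
    return 0;
}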

Converting a float with many decimal places to a float with one decimal place only [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I want to know if there is code that can convert a float number like 4.91820238 to a float with one decimal place only, i.e. 4.9, in C without using any libraries.
P.S.: I don't use the printf() function; I want to convert the value and store it.
You can use the fmodf function in the standard math.h header.
Then subtract the result from the original.
#include <stdio.h>
#include <math.h>

int main(void) {
    float f = 4.9182;
    printf("%f", f - fmodf(f, 0.1));
    return 0;
}
Edit:
I see from your comment that you are on a freestanding environment, so no standard library. But you can probably whip up your own function by consulting the example implementation on the reference site and doing some inline assembly.
Just multiply by 10, truncate to an integer, and divide by 10.0 again:
float a = 4.91820238;
float num;
int i;

i = a * 10;        /* truncates to 49 */
num = i / 10.0;    /* num now holds 4.9 */
Remember the 10.0 in the division; if you just divide by 10 you will get the integer quotient.
Convert it to an integer type and back again:
float myRound(float f, int num_places) {
    float factor = 1.0f;
    int i;

    for (i = 0; i < num_places; i++) {
        factor *= 10.0f;
    }
    /* scale up, round to the nearest integer, scale back down */
    return ((long long)(f * factor + 0.5f)) / factor;
}
Note that the code above is subject to the usual precision/range problems regarding floating point numbers (and, as written, it only rounds non-negative values correctly).

append numbers C Programming [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem.
Closed 8 years ago.
If I have the number 88, how would I add 00 to the end of it and turn it into 8800? Bitwise shifts are the only way I can think of to do this, but they don't work; they completely change the numbers.
Bitwise shifts can only multiply by powers of two; what you want here is plain multiplication. Just run:
printf("%d", 88 * 100);
to print 8800.
If all you want to do is literally add 00 to the end of numbers you can instead do:
printf("%d00", 88);
You cannot do this with a single bitwise shift, because 100 is not a power of two. But if you still insist, you can combine shifts and additions, e.g. (88 << 6) + (88 << 5) + (88 << 2), since 100 = 64 + 32 + 4.
As a comment points out, your answer can be obtained simply by multiplying your number by 100.
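A minimal check of that decomposition (the variable names are just for illustration):
#include <stdio.h>

int main(void)
{
    int n = 88;
    /* 100 = 64 + 32 + 4, so n*100 == (n<<6) + (n<<5) + (n<<2) */
    int shifted = (n << 6) + (n << 5) + (n << 2);

    printf("%d %d\n", shifted, n * 100);   /* both print 8800 */
    return 0;
}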
So, you read from a file, and want to add two zeroes to it. Two ways I can see to do this: String-wise and Numerically.
String-wise, you can use
strcat(inputedText, "00");
(or just printf, or possibly other solutions)
Numerically, you can convert the data read from the file to an int and multiply it by 100. If you need to print it, Vality's answer shows how to do that.
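A minimal sketch of the numeric path (the file reading is replaced by a hard-coded string here, and the buffer name is just for illustration):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char inputedText[32] = "88";       /* pretend this came from the file */
    long value = strtol(inputedText, NULL, 10);

    printf("%ld\n", value * 100);      /* prints 8800 */
    return 0;
}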
Live run: http://ideone.com/wiW5Zb
#include <stdio.h>
#include <stdint.h>

/* Smallest power of 10 strictly greater than value. */
int64_t calc(int64_t value)
{
    int64_t t = 1;
    while (t <= value)
        t *= 10;
    return t;
}

/* Append the digits of b to a, e.g. concatnum(1000, 11) == 100011. */
int64_t concatnum(int64_t a, int64_t b)
{
    return (a * calc(b)) + b;
}

int main()
{
    int a = 1000;
    int b = 11;
    int c = concatnum(a, b);   /* b must be greater than 0 */

    printf("%d", c);
    return 0;
}

How to demonstrate a better average? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
Here's a question on Page 60-61 in A Book On C(3rd Edition) by Al Kelley/Ira Pohl:
The following code fragment shows two different methods of computing a running average:
int i;
double x;
double avg = 0.0, sum = 0.0;
double navg;

for (i = 1; scanf("%lf", &x) == 1; ++i)
{
    avg += (x - avg) / i;
    sum += x;
    navg = sum / i;
}
The original question as written in the book is: If you input some "ordinary" numbers, the avg and the navg seem to be identical. Demonstrate experimentally that the avg is better, even when sum does not overflow.
My question as a beginning programmer is:
What is the criteria for a "better" algorithm? I believe precision and the amount of running time are two key factors, but are there any other things that make an algorithm "better"?
In terms of precision and running time, how can I demonstrate experimentally that when overflow is ruled out, avg is still a better method than navg is? Should I use numbers that are "out of the ordinary", e.g., of largely different magnitudes?
The two algorithms do not differ much in running time; compared with navg, avg is better in precision.
(1) Running time:
The following two pieces of code demonstrate that at a magnitude of 1,000,000 iterations the two algorithms do not differ much.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    int i;
    double x, sum = 0, avg = 0;

    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        sum += x;
    }
    avg = sum / i;                 /* naive method: sum first, divide once */

    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    double x, avg = 0;
    int i;

    srand(time(NULL));
    for (i = 0; i < 1000000; i++)
    {
        x = rand() % 10 + 1;
        avg += (x - avg) / (i + 1);   /* running (differential) average */
    }

    printf("%lf\n", avg);
    printf("time use:%lf\n", (double)clock() / CLOCKS_PER_SEC);
    return 0;
}
(2) Precision:
The code below demonstrates that, summing the differences between avg and every x, the result is 0 for the running-average method, while for navg it is -2.44718e-005, which means that avg is better in precision.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    static double data[1000000];
    double sum, avg, check_value;
    int i;
    int n = sizeof(data) / sizeof(data[0]);

    /* Fill the array first, so both methods average the same data. */
    for (i = 0; i < n; ++i)
    {
        data[i] = 1.3;
    }

    /* Method 1: running (differential) average. */
    avg = 0;
    for (i = 0; i < n; ++i)
    {
        avg += (data[i] - avg) / (i + 1);
    }

    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\navg += (x[i] - avg) / i:\tavg = %g\t check_value = %g", avg, check_value);

    /* Method 2: naive average, sum first and divide once. */
    sum = 0;
    for (i = 0; i < n; ++i)
    {
        sum += data[i];
    }
    avg = sum / n;

    check_value = 0;
    for (i = 0; i < n; ++i)
    {
        check_value = check_value + (data[i] - avg);
    }
    printf("\n avg = sum / N: \tavg = %g\t check_value = %g", avg, check_value);

    getchar();
    return 0;
}
I think this is a valid question, although not too well phrased. A problem is that even the question referred to by furins was not phrased well and was closed before receiving a good answer.
The question itself is interesting, especially since it was included in a book, so it could lead more people in one direction or another.
I think neither algorithm is particularly better. With the naive average it looks like we will lose precision, or that when averaging numbers differing by several orders of magnitude we may even lose small contributions entirely, but the same can probably happen with the other algorithm as well, just with different sets of input data.
So, especially since it comes from an existing book, I think this is a perfectly valid question deserving a decent answer.
Let me try to illustrate how I think about the two algorithms with an example. Imagine you have 4 numbers of roughly the same magnitude and you want to average them.
The naive method sums them up first, one after another. After summing the first two you have probably already lost one bit of precision at the low end (since the sum now likely has a larger exponent). When you add the last number, two bits have been lost (those bits are now used for representing the high part of the sum). But then you divide by four, which in this case essentially just subtracts 2 from the exponent.
What did we lose during this process? It is easier to answer if we ask what would happen had all the numbers been truncated by 2 bits first. In that case the last two bits of the fraction of the resulting average will be zero, and up to an additional 2 bits' worth of error may be introduced (if all the truncated bits happened to be ones in the original numbers, compared to when they were zeros). So essentially, if the sources were single-precision floats with 23 bits of fraction, the resulting avg would now have about 19 bits' worth of precision.
The actual result from the naive method is a bit better, though, because the first numbers summed did not lose that much precision.
In the differential method, in each iteration the appropriately weighted difference is added to the running average. If the numbers are of the same magnitude, this difference will most likely be about one order of magnitude below them. It is then divided by the current count; nothing is lost in that operation, but the resulting contribution for the last number (i = 4 in this example) may be about 3 orders of magnitude below the source numbers. We add this to the running average, which is of about the same magnitude as the original numbers.
So with the differential method, in this example adding the last number seems to lose about 3 bits of precision; over all 4 numbers it may even look like we are down to 5 essentially lost bits of precision - maybe even worse than the naive method?
The differential method is harder to follow; maybe I made some mistake in my assumptions. But I think the point is clear: it just does not seem valid to regard one or the other as performing better, or if one does, it may depend on the layout and magnitude differences of the data.
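As a way to experiment with data of "largely different magnitudes", here is a minimal sketch (the data set of one huge value followed by many small ones is an arbitrary choice for illustration; a long double accumulation serves as the reference):
#include <stdio.h>

int main(void)
{
    enum { N = 1000001 };
    double avg = 0.0, sum = 0.0, navg;
    long double ref_sum = 0.0L;
    int i;

    for (i = 1; i <= N; i++) {
        /* one huge value followed by many small ones */
        double x = (i == 1) ? 1.0e16 : 1.0;
        avg += (x - avg) / i;        /* differential method */
        sum += x;                    /* naive method */
        ref_sum += (long double)x;   /* higher-precision reference */
    }
    navg = sum / N;

    printf("avg  = %.17g\n", avg);
    printf("navg = %.17g\n", navg);
    printf("ref  = %.17Lg\n", ref_sum / N);
    return 0;
}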
Note that you're dividing by zero in your for() loop even if you do ++i
