logsumexp implementation in C?

Does anybody know of an open source numerical C library that provides the logsumexp-function?
The logsumexp(a) function computes log(e^{a_1} + ... + e^{a_n}), the logarithm of the sum of exponentials of the components of the array a, while avoiding numerical overflow.

Here's a very simple implementation from scratch (tested, at least minimally):
#include <math.h>   /* exp, log */
#include <stddef.h> /* size_t */

/* log(e^{a_1} + ... + e^{a_n}), shifted by the maximum to avoid overflow */
double logsumexp(double nums[], size_t ct) {
    double max_exp = nums[0], sum = 0.0;
    size_t i;
    for (i = 1; i < ct; i++)
        if (nums[i] > max_exp)
            max_exp = nums[i];
    for (i = 0; i < ct; i++)
        sum += exp(nums[i] - max_exp);
    return log(sum) + max_exp;
}
This uses the trick of effectively dividing all of the arguments by the largest, then adding its log back in at the end, to avoid overflow. It's well-behaved for adding a large number of similarly-scaled values, with errors creeping in if some arguments are many orders of magnitude larger than others.
If you want it to run without crashing when given 0 arguments, you'll have to add a case for that :)
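A quick sanity check, assuming the function above is in scope, showing where the naive formula overflows while logsumexp does not:
#include <stdio.h>
#include <math.h>

int main(void) {
    double a[] = { 1000.0, 1000.0, 1000.0 };
    /* naive: exp(1000) overflows a double, so this prints inf */
    printf("naive:     %f\n", log(exp(a[0]) + exp(a[1]) + exp(a[2])));
    /* stable: 1000 + log(3), roughly 1001.098612 */
    printf("logsumexp: %f\n", logsumexp(a, 3));
    return 0;
}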

Related

Exceeding the range of long double and big floating point numbers

Problem statement: I am working on code that calculates big numbers, so I easily get beyond the maximum range of long double. Here is an example below, where part of the code that generates big numbers is given:
int n;
long double summ;
a[1] = 1;
b[1] = 1;
c[1] = 1; // a, b, c are 1D arrays of long double
summ = 1 + c[1];
for (n = 2; n <= 1760; n++) {
    a[n] = n * n;
    b[n] = n;
    c[n] = c[n-1] * a[n-1] / b[n]; // let us assume we have this kind of operation
    summ = summ + c[n];            // so summ = 1 + c[1] + c[2] + c[3] + ... + c[1760]
}
The intermediate values of summ and c[n] are then used to evaluate the ratio c[n]/summ for every integer n. Then, just after the above loop, I do:
for (n = 1; n <= 1760; n++) {
    c2[n] = c[n] / summ; // summ here equals 1 + c[1] + c[2] + c[3] + ... + c[1760]
}
Output: If we print n, c[n] and summ, we obtain inf after n=1755 because we exceed the range of long double:
n c[n] summ
1752 2.097121e+4917 2.098320e+4917
1753 3.672061e+4920 3.674159e+4920
1754 6.433452e+4923 6.437126e+4923
1755 1.127785e+4927 1.128428e+4927
1756 inf inf
1757 inf inf
1758 inf inf
1759 inf inf
1760 inf inf
Of course, if there is an overflow for c[n] and summ, I cannot evaluate the quantity of interest, which is c2[n].
Questions: Does someone see any solution to this? How do I need to change the code so as to get finite numerical values (for arbitrary n)?
I will indeed most likely need to go to very big numbers (n can be much larger than 1760).
Proposition: I know that GNU Multiple Precision Arithmetic (GMP) might be useful, but I honestly found too many difficulties trying to use it (I am outside the field), so if there is an easier way to solve this, I would be glad to read it. Otherwise, I will be forever grateful if someone could apply GMP or any other method to solve the above-mentioned problem.
NOTE: This does not do exactly what the OP wants. I'll leave this answer here in case someone has a similar problem.
As long as your final result and all initial values are not out of range, you can very often re-arrange your terms to avoid any overflow. In your case if you actually just want to know c2[n] = c[n]/sum[n] you can re-write this as follows:
c2[n] = c[n]/sum[n]
= c[n]/(sum[n-1] + c[n]) // def. of sum[n]
= 1.0/(sum[n-1]/c[n] + 1.0)
= 1.0/(sum[n-1]/(c[n-1] * a[n-1] / b[n]) + 1.0) // def. of c[n]
= 1.0/(sum[n-1]/c[n-1] * b[n] / a[n-1] + 1.0)
= a[n-1]/(1/c2[n-1] * b[n] + a[n-1]) // def. of c2[n-1]
= (a[n-1]*c2[n-1]) / (b[n] + a[n-1]*c2[n-1])
Now in the final expression neither argument grows out of range, and in fact c2 slowly converges towards 1. If the values in your question are the actual values of a[n] and b[n] you may even find a closed form expression for c2[n] (I did not check it).
To check that the re-arrangement works, you can compare it with your original formula (godbolt-link, only printing the last values): https://godbolt.org/z/oW8KsdKK6
Btw: Unless you later need all values of c2 again, there is actually no need to store any intermediate value inside an array.
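For concreteness, here is a minimal sketch of that final recurrence (my code, not the godbolt version), using the a[n] = n*n, b[n] = n values from the question, for which c2[1] = c[1]/sum[1] = 1/2:
#include <stdio.h>

int main(void) {
    /* c2[n] = (a[n-1]*c2[n-1]) / (b[n] + a[n-1]*c2[n-1]), all in [0,1] */
    double c2 = 0.5;                                /* c2[1] = c[1]/(1 + c[1]) = 1/2 */
    for (int n = 2; n <= 1760; n++) {
        double a_prev = (double)(n - 1) * (n - 1);  /* a[n-1] */
        c2 = (a_prev * c2) / ((double)n + a_prev * c2);
    }
    printf("c2[1760] = %.17g\n", c2);               /* no overflow anywhere */
    return 0;
}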
I ain't no mathematician. This is what I wrote, with the results below. Looks to me like the exponent, at least, is keeping up with your long double results using my feeble double-only approach...
#include <stdio.h>
#include <math.h>

int main() {
    int n;
    double la[1800], lb[1800], lc[1800];
    la[1] = lb[1] = lc[1] = 0.0;            /* log10 of a[1] = b[1] = c[1] = 1 */
    for( n = 2; n <= 1760; n++ ) {
        lb[n] = log10(n);
        la[n] = lb[n] + lb[n];              /* log10(n*n) */
        lc[n] = lc[n-1] + la[n-1] - lb[n];  /* log10 of c[n] = c[n-1]*a[n-1]/b[n] */
        printf( "%4d: %.16lf\n", n, lc[n] );
    }
    return 0;
}
/* omitted for brevity */
1750: 4910.8357954121602000
1751: 4914.0785853634488000
1752: 4917.3216235537839000
1753: 4920.5649098413542000
1754: 4923.8084440845114000
1755: 4927.0522261417700000 <<=== Take note, please.
1756: 4930.2962558718036000
1757: 4933.5405331334487000
1758: 4936.7850577857016000
1759: 4940.0298296877190000
1760: 4943.2748486988194000
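(My addition, not part of the original answer:) with lc[n] = log10(c[n]) in hand, the max-shift trick from the logsumexp question at the top recovers the OP's ratio c2[n] = c[n]/summ without ever leaving the representable range. A sketch, assuming lc[1..N] has been filled as above (so lc[1] = 0):
#include <math.h>

/* log10 of summ = 1 + c[1] + ... + c[N], given lc[n] = log10(c[n]),
   shifted by the maximum exactly as in logsumexp. */
double log10_summ(const double *lc, int N) {
    double mx = 0.0;                 /* the leading "1" term has log10 0 */
    for (int n = 1; n <= N; n++)
        if (lc[n] > mx) mx = lc[n];
    double s = pow(10.0, 0.0 - mx);  /* the "1" term, shifted (underflows harmlessly) */
    for (int n = 1; n <= N; n++)
        s += pow(10.0, lc[n] - mx);
    return mx + log10(s);
}
/* then: c2[n] = pow(10.0, lc[n] - log10_summ(lc, N)); */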
EDIT (Butterfly edition)
Below is a pretty simple iterative function involving one single-precision and one double-precision floating point value. The purpose is to demonstrate that iterative calculations are exceedingly sensitive to initial conditions. While it seems obvious that the extra bits of the double will "hold on", remaining closer to the results one would get with infinite precision, the compounding discrepancy between these two versions demonstrates that "demons lurking in small places" will likely remain hidden in the fantastically tiny gaps between finite representations of what is infinite.
Just a bit of fun for a rainy day.
#include <stdio.h>
#include <math.h>

int main() {
    float fpi = 3.1415926535897932384626433832;
    double dpi = 3.1415926535897932384626433832;
    double thresh = 10e-8;
    for( int i = 0; i < 1000; i++ ) {
        fpi = fpi * 1.03f;
        dpi = dpi * 1.03f;
        double diff = fabs( dpi - fpi );
        if( diff > thresh ) {
            printf( "%3d: %25.16lf\n", i, diff );
            thresh *= 10.0;   /* report each new order of magnitude once */
        }
    }
    return 0;
}
8: 0.0000001229991486
35: 0.0000010704333473
90: 0.0000100210180918
192: 0.0001092634900033
229: 0.0010121794607585
312: 0.0100316228017618
367: 0.1002719746902585
453: 1.0056506423279643
520: 10.2658853083848950
609: 103.8011477291584000
667: 1073.9984381198883000
736: 10288.9632129669190000
807: 101081.5514678955100000
886: 1001512.2135009766000000
966: 10473883.3271484370000000

Optimization of 3D Direct Convolution Implementation in C

For my project, I've written a naive C implementation of direct 3D convolution with periodic padding on the input. Unfortunately, since I'm new to C, the performance isn't so good... here's the code:
int mod(int a, int b)
{
    // calculate mod to get the correct index with periodic padding
    int r = a % b;
    return r < 0 ? r + b : r;
}

void convolve3D(const double *image, const double *kernel,
                const int imageDimX, const int imageDimY, const int imageDimZ,
                const int kernelDimX, const int kernelDimY, const int kernelDimZ,
                double *result)
{
    int imageSize = imageDimX * imageDimY * imageDimZ;      /* currently unused */
    int kernelSize = kernelDimX * kernelDimY * kernelDimZ;  /* currently unused */
    int i, j, k, l, m, n;
    int kernelCenterX = (kernelDimX - 1) / 2;
    int kernelCenterY = (kernelDimY - 1) / 2;
    int kernelCenterZ = (kernelDimZ - 1) / 2;
    int xShift, yShift, zShift;
    int outIndex, outI, outJ, outK;
    int imageIndex = 0, kernelIndex = 0;
    // Loop through each voxel
    for (k = 0; k < imageDimZ; k++) {
        for (j = 0; j < imageDimY; j++) {
            for (i = 0; i < imageDimX; i++) {
                kernelIndex = 0;
                // for each voxel, loop through each kernel coefficient
                for (n = 0; n < kernelDimZ; n++) {
                    for (m = 0; m < kernelDimY; m++) {
                        for (l = 0; l < kernelDimX; l++) {
                            // find the index of the corresponding voxel in the output image
                            xShift = l - kernelCenterX;
                            yShift = m - kernelCenterY;
                            zShift = n - kernelCenterZ;
                            outI = mod(i - xShift, imageDimX);
                            outJ = mod(j - yShift, imageDimY);
                            outK = mod(k - zShift, imageDimZ);
                            outIndex = outK * imageDimX * imageDimY + outJ * imageDimX + outI;
                            // scatter: add this input voxel's contribution to the output
                            result[outIndex] += kernel[kernelIndex] * image[imageIndex];
                            kernelIndex++;
                        }
                    }
                }
                imageIndex++;
            }
        }
    }
}
By convention, all the matrices (image, kernel, result) are stored in column-major order, which is why I loop through them in this way, so consecutive accesses are close in memory (I heard this would help).
I know the implementation is very naive, but since it's written in C I was hoping the performance would be good; instead it's a little disappointing. I tested it with an image of size 100^3 and a kernel of size 10^3 (roughly 10^9 multiply-adds, counting only the multiplications and additions), and it took ~7 s, which I believe is way below the capability of a typical CPU.
If possible, could you guys help me optimize this routine?
I'm open to anything that could help, with just a few things I'd ask you to consider:
The problem I'm working with can be big (e.g. an image of size 200 by 200 by 200 with a kernel of size 50 by 50 by 50, or even larger). I understand that one way of optimizing this is to convert the problem into a matrix multiplication and use the BLAS GEMM routine, but I'm afraid memory could not hold such a big matrix.
Due to the nature of the problem, I would prefer direct convolution over FFT convolution, since my model was developed with direct convolution in mind, and my impression is that FFT convolution gives slightly different results than direct convolution, especially for rapidly changing images, a discrepancy I'm trying to avoid.
That said, I'm in no way an expert in this, so if you have a great implementation based on FFT convolution, and/or my impression of FFT convolution is totally biased, I would really appreciate it if you could help me out.
The input images are assumed to be periodic, so periodic padding is necessary.
I understand that utilizing BLAS/SIMD or other lower-level methods would definitely help a lot here, but since I'm a newbie I don't really know where to start. I would really appreciate it if you could point me in the right direction if you have experience with these libraries.
Thanks a lot for your help, and please let me know if you need more info about the nature of the problem.
As a first step, replace your mod ((i - xShift), imageDimX) with something like this:
inline int clamp( int x, int size )
{
    if( x < 0 ) return x + size;
    if( x >= size ) return x - size;
    return x;
}
These branches are very predictable because they yield the same result for long runs of consecutive elements. Integer modulo is relatively slow.
Now, the next step (ordered by cost/profit) is parallelizing. If you have any reasonably modern compiler, just enable OpenMP somewhere in the project settings. After that you need 2 changes.
Decorate your very outer loop with something like this: #pragma omp parallel for schedule(guided)
Move your function-level variables within that loop. This also means you’ll have to compute initial imageIndex from your k, for each iteration.
Next option: rework your code so you only write each output value once. Compute the final value in your innermost 3 loops, reading from random locations in both image and kernel, and only write the result once. With that result[outIndex] += in the inner loop, the CPU stalls waiting for the data from memory. When you accumulate in a variable that's a register, not memory, there's no access latency.
SIMD is the most complicated optimization here. In short, you'll need the maximum FMA width your hardware has (if you have AVX and need double precision, that width is 4), and you'll also need multiple independent accumulators in your 3 innermost loops, to avoid hitting latency as opposed to saturating throughput. Here's my answer to a much easier problem as an example of what I mean.
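Putting the first three suggestions together, here is a sketch (untested) of the reworked loop nest inside convolve3D. Note it gathers instead of scatters, so the input index uses i + shift where the original scatter used i - shift, and it assumes the kernel is no larger than the image in each dimension (otherwise clamp's single wrap is not enough):
#pragma omp parallel for schedule(guided)
for (int k = 0; k < imageDimZ; k++) {
    for (int j = 0; j < imageDimY; j++) {
        for (int i = 0; i < imageDimX; i++) {
            double acc = 0.0;        // accumulate in a register, store once
            int kernelIndex = 0;
            for (int n = 0; n < kernelDimZ; n++) {
                for (int m = 0; m < kernelDimY; m++) {
                    for (int l = 0; l < kernelDimX; l++) {
                        // gather: the input voxel whose scatter would land here
                        int inI = clamp(i + (l - kernelCenterX), imageDimX);
                        int inJ = clamp(j + (m - kernelCenterY), imageDimY);
                        int inK = clamp(k + (n - kernelCenterZ), imageDimZ);
                        acc += kernel[kernelIndex++]
                             * image[(inK * imageDimY + inJ) * imageDimX + inI];
                    }
                }
            }
            result[(k * imageDimY + j) * imageDimX + i] = acc;  // single store
        }
    }
}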

How to generate a very large non singular matrix A in Ax = b?

I am solving the system of linear algebraic equations Ax = b using the Jacobi method, but with manually entered inputs. I want to analyze the performance of the solver for large systems. Is there any method to generate a matrix A that is guaranteed to be non-singular?
I am attaching my code here.
#include<stdio.h>
#include<stdlib.h>
#include<math.h>

#define TOL = 0.0001

void main()
{
    int size, i, j, k = 0;
    printf("\n enter the number of equations: ");
    scanf("%d", &size);
    double reci = 0.0;
    double *x = (double *)malloc(size*sizeof(double));
    double *x_old = (double *)malloc(size*sizeof(double));
    double *b = (double *)malloc(size*sizeof(double));
    double *coeffMat = (double *)malloc(size*size*sizeof(double));
    printf("\n Enter the coefficient matrix: \n");
    for(i = 0; i < size; i++)
    {
        for(j = 0; j < size; j++)
        {
            printf(" coeffMat[%d][%d] = ", i, j);
            scanf("%lf", &coeffMat[i*size+j]);
            printf("\n");
            //coeffMat[i*size+j] = 1.0;
        }
    }
    printf("\n Enter the b vector: \n");
    for(i = 0; i < size; i++)
    {
        x[i] = 0.0;
        printf(" b[%d] = ", i);
        scanf("%lf", &b[i]);
    }
    double sum = 0.0;
    while(k < size)
    {
        for(i = 0; i < size; i++)
        {
            x_old[i] = x[i];
        }
        for(i = 0; i < size; i++)
        {
            sum = 0.0;
            for(j = 0; j < size; j++)
            {
                if(i != j)
                {
                    sum += (coeffMat[i * size + j] * x_old[j]);
                }
            }
            x[i] = (b[i] - sum) / coeffMat[i * size + i];
        }
        k = k + 1;
    }
    printf("\n Solution is: ");
    for(i = 0; i < size; i++)
    {
        printf(" x[%d] = %lf \n ", i, x[i]);
    }
}
This is all a bit Heath Robinson, but here's what I've used. I have no idea how 'random' such matrices are; in particular I don't know what distribution they follow.
The idea is to generate the SVD of the matrix. (Called A below, and assumed nxn).
Initialise A to all 0s
Then generate n positive numbers, and put them, with random signs, in the diagonal of A. I've found it useful to be able to control the ratio of the largest of these positive numbers to the smallest. This ratio will be the condition number of the matrix.
Then repeat n times: generate a random n-vector f, and multiply A on the left by the Householder reflector I - 2*f*f' / (f'*f). Note that this can be done more efficiently than by forming the reflector matrix and doing a normal multiplication; indeed it's easy to write a routine that, given f and A, updates A in place.
Repeat the above, but multiplying on the right.
As for generating test data, a simple way is to pick an x0 and then compute b = A * x0. Don't expect to get exactly x0 back from your solver; even if it is remarkably well behaved, you'll find that the errors get bigger as the condition number gets bigger.
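A minimal sketch of that recipe (my code, not tested; it assumes row-major storage and POSIX drand48() for randomness):
#include <stdlib.h>

/* Apply the Householder reflector I - 2*f*f'/(f'*f) to A (n x n, row-major)
   from the left, in place: A -= (2/(f'f)) * f * (f'*A). */
static void reflect_left(double *A, const double *f, int n) {
    double ff = 0.0;
    for (int i = 0; i < n; i++) ff += f[i] * f[i];
    double s = 2.0 / ff;
    for (int j = 0; j < n; j++) {
        double w = 0.0;                                   /* w = (f'*A)_j */
        for (int i = 0; i < n; i++) w += f[i] * A[i*n + j];
        for (int i = 0; i < n; i++) A[i*n + j] -= s * f[i] * w;
    }
}

/* Same reflector applied from the right: A -= (A*f) * (2/(f'f)) * f'. */
static void reflect_right(double *A, const double *f, int n) {
    double ff = 0.0;
    for (int i = 0; i < n; i++) ff += f[i] * f[i];
    double s = 2.0 / ff;
    for (int i = 0; i < n; i++) {
        double w = 0.0;                                   /* w = (A*f)_i */
        for (int j = 0; j < n; j++) w += A[i*n + j] * f[j];
        for (int j = 0; j < n; j++) A[i*n + j] -= s * w * f[j];
    }
}

/* Random non-singular n x n matrix with condition number roughly `cond`:
   singular values in [1, cond] with random signs on the diagonal, then
   n random reflectors applied on each side. */
void random_nonsingular(double *A, int n, double cond) {
    double *f = malloc(n * sizeof *f);
    for (int i = 0; i < n*n; i++) A[i] = 0.0;
    for (int i = 0; i < n; i++) {
        double sv = 1.0 + (cond - 1.0) * drand48();       /* in [1, cond] */
        A[i*n + i] = (drand48() < 0.5) ? -sv : sv;
    }
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++) f[i] = drand48() - 0.5;
        reflect_left(A, f, n);
        for (int i = 0; i < n; i++) f[i] = drand48() - 0.5;
        reflect_right(A, f, n);
    }
    free(f);
}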
Talonmies' comment mentions http://www.eecs.berkeley.edu/Pubs/TechRpts/1991/CSD-91-658.pdf which is probably the right approach (at least in principle, and in full generality).
However, you are probably not handling "very large" matrices (e.g. because your program uses naive algorithms, and because you don't run it on a large supercomputer with a lot of RAM). So the naive approach of generating a matrix with random coefficients and testing afterwards that it is non-singular is probably enough.
Very large matrices would have many billions of coefficients, and you would need a powerful supercomputer with e.g. terabytes of RAM. You probably don't have that; if you did, your program would probably run too long (you don't have any parallelism) and might give very wrong results (read http://floating-point-gui.de/ for more), so you don't care.
A matrix of a million coefficients (e.g. 1024*1024) is considered small by current hardware standards (and is more than enough to test your code on current laptops or desktops, and even to test some parallel implementations), and randomly generating some of them (and computing their determinant to test that they are not singular) is enough and easily doable. You might even generate them and/or check their regularity with some external tool, e.g. Scilab, R, Octave, etc. Once your program has computed a solution x0, you could use some tool (or write another program) to compute A*x0 - b and check that it is very close to the 0 vector (there are cases where you would be disappointed or surprised, since round-off errors matter).
You'll need a good enough pseudo-random number generator, perhaps as simple as drand48(3), which is considered nearly obsolete (you should find and use something better); you could seed it with some random source (e.g. /dev/urandom on Linux).
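Not from this answer, but a standard alternative that avoids determinant checks entirely: make the random matrix strictly diagonally dominant. By the Levy-Desplanques theorem it is then guaranteed non-singular, and as a bonus the Jacobi iteration converges on it. A minimal sketch, again using drand48():
#include <stdlib.h>
#include <math.h>

/* Fill an n x n row-major matrix with random entries in [-1, 1], then
   inflate each diagonal entry past the sum of its row's off-diagonal
   magnitudes: strict diagonal dominance implies non-singularity. */
void random_diag_dominant(double *A, int n) {
    for (int i = 0; i < n; i++) {
        double rowsum = 0.0;
        for (int j = 0; j < n; j++) {
            A[i*n + j] = 2.0 * drand48() - 1.0;
            if (j != i) rowsum += fabs(A[i*n + j]);
        }
        A[i*n + i] = rowsum + 1.0 + drand48();    /* strictly dominant */
    }
}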
BTW, compile your code with all warnings & debug info (e.g. gcc -Wall -Wextra -g). Your #define TOL = 0.0001 is probably wrong (it should be #define TOL 0.0001 or const double tol = 0.0001;). Use the debugger (gdb) & valgrind. Add optimizations (-O2 -mcpu=native) when benchmarking. Read the documentation of every function you use, notably those from <stdio.h>. Check the result count from scanf... In C99 you should not cast the result of malloc, but you forgot to test it against failure, so code:
double *b = malloc(size * sizeof(double));
if (!b) { perror("malloc b"); exit(EXIT_FAILURE); }
You should rather end, not start, your printf control strings with \n, because stdout is often (not always!) line-buffered. See also fflush.
You should probably also read some basic linear algebra textbook...
Notice that actually writing robust and efficient programs to invert matrices or to solve linear systems is a difficult art (which I don't know at all: it has programming issues, algorithmic issues, and mathematical issues; read some numerical analysis book). You could get a PhD and spend your whole life working on that. Please understand that you need ten years to learn programming (or many other things).

Multiply each element of an array by a number in C

I'm trying to optimize some of my code in C, which is a lot bigger than the snippet below. Coming from Python, I wonder whether you can simply multiply an entire array by a number like I do below.
Evidently, it does not work the way I do it below. Is there any other way that achieves the same thing, or do I have to step through the entire array as in the for loop?
void main()
{
    int i;
    float data[] = {1., 2., 3., 4., 5.};

    // this fails
    data *= 5.0;

    // this works
    for (i = 0; i < 5; i++) data[i] *= 5.0;
}
There is no shortcut; you have to step through each element of the array.
Note however that in your example, you may achieve a speedup by using int rather than float for both your data and multiplier.
If you want, you can do this through BLAS (Basic Linear Algebra Subprograms), which is optimised. It is not part of the C standard; it is a package you have to install yourself.
Sample code to achieve what you want:
#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>

int main() {
    int limit = 10;
    float *a = calloc(limit, sizeof(float));
    for (int i = 0; i < limit; i++) {
        a[i] = i;
    }
    cblas_sscal(limit, 0.5f, a, 1);
    for (int i = 0; i < limit; i++) {
        printf("%3f, ", a[i]);
    }
    printf("\n");
}
The names of the functions are not obvious, but by reading the naming guidelines you might start to guess what the BLAS functions do. sscal() can be split into s for single precision and scal for scale, which means this function works on floats. The same function for double precision is called dscal().
If you need to scale a vector by a constant and add it to another, BLAS has a function for that too: saxpy(). The name decodes the same way: s for single precision (float), and axpy for "a*x plus y", i.e. y[i] += a*x[i].
As you might guess, there is a daxpy() too, which works on doubles.
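For instance, a small usage sketch (hypothetical values; cblas_saxpy takes the element count, the scalar a, then x and y with their strides):
#include <stdio.h>
#include <cblas.h>

int main() {
    float x[] = { 1.f, 2.f, 3.f, 4.f, 5.f };
    float y[] = { 10.f, 10.f, 10.f, 10.f, 10.f };
    cblas_saxpy(5, 2.0f, x, 1, y, 1);   /* y[i] += 2.0f * x[i] */
    for (int i = 0; i < 5; i++) {
        printf("%3f, ", y[i]);
    }
    printf("\n");
}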
I'm afraid that, in C, you will have to use for(i = 0; i < 5; i++) data[i] *= 5.0;.
Python allows for so many more "shortcuts"; however, in C, you have to access each element and then manipulate those values.
Using the for-loop would be the shortest way to accomplish what you're trying to do to the array.
EDIT: If you have a large amount of data, there are more efficient (in terms of running time) ways to multiply 5 to each value. Check out loop tiling, for example.
data *= 5.0;
Here data is the address of the array, which is constant.
If you want to multiply the first value in that array, then use the * (dereference) operator, as below:
*data *= 5.0;

GNU Scientific Library probability distribution functions in C

I have a set of GSL histograms, which are used to make a set of probability distribution functions; according to the documentation, these are stored in a struct as follows:
Data Type: gsl_histogram_pdf
size_t n
This is the number of bins used to approximate the probability distribution function.
double * range
The ranges of the bins are stored in an array of n+1 elements pointed to by range.
double * sum
The cumulative probability for the bins is stored in an array of n elements pointed to by sum.
I intend to use a KS test to determine whether data sets are similar or not. So I am trying to access the cumulative sum for a given bin in this structure, to calculate the 'distance', and I assumed I should be able to access that value using:
((my_type)->pdf->sum+x)
with x being the bin number.
Yet this always returns 0 no matter what I do. Does anyone have any idea what is going wrong?
Thanks in advance
---- EDIT ----
Here is a snippet of my code that deals with the pdf / histogram:
/* GSL Histogram creation */
for (i = 0; i < chrom->hits; i++) {
    if ( (chrom+i)->spectra->peaks != 0 ) {
        (chrom+i)->hist = gsl_histogram_alloc(bins);
        gsl_histogram_set_ranges_uniform((chrom+i)->hist, low_mz, high_mz);
        for (j = 0; j < (chrom+i)->spectra->peaks; j++) {
            gsl_histogram_increment( (chrom+i)->hist, ((chrom+i)->spectra+j)->mz_value);
        }
    } else {
        printf("0 value encountered!\n");
    }
}

/* Histogram probability distribution function creation */
for (i = 0; i < chrom->hits; i++) {
    if ( (chrom+i)->spectra->peaks != 0 ) {
        (chrom+i)->pdf = gsl_histogram_pdf_alloc(bins);
        gsl_histogram_pdf_init( (chrom+i)->pdf, (chrom+i)->hist);
    } else {
        continue;
    }
}

/* Kolmogorov-Smirnov */
float D;
for (i = 0; i < chrom->hits-1; i++) {
    printf("%f\n",((chrom+i)->pdf->sum+25));
    for (j = i+1; j < chrom->hits; j++) {
        D = 0;
        diff = 0;
        /* Determine max distance */
    }
}
You compute a pointer to the value you intend to access, but never dereference it.
Change your current pointer computation
printf("%f\n",((chrom+i)->pdf->sum+25));
either to a normal array subscript
printf("%f\n",(chrom+i)->pdf->sum[25]);
or to a pointer computation followed by a dereferencing
printf("%f\n",*((chrom+i)->pdf->sum+25));
See whether that fixes your issue. The value shouldn't be 0 either, but it might well get displayed as 0, since it might represent a pretty small floating-point number, depending on the virtual memory layout.
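Once sum is indexed correctly, the KS statistic itself is just the maximum absolute difference between the two cumulative arrays. A sketch (mine, not the OP's code), assuming both PDFs were built with the same number of bins:
#include <math.h>
#include <gsl/gsl_histogram.h>

/* Kolmogorov-Smirnov distance between two histogram PDFs with `bins` bins:
   max over bins of |CDF1 - CDF2|, read from the sum arrays. */
double ks_distance(const gsl_histogram_pdf *p, const gsl_histogram_pdf *q,
                   size_t bins) {
    double D = 0.0;
    for (size_t x = 0; x < bins; x++) {
        double diff = fabs(p->sum[x] - q->sum[x]);
        if (diff > D) D = diff;
    }
    return D;
}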
