I'm still pretty new to using SSE and am trying to implement a modulo of 2*Pi for double-precision inputs of the order 1e8 (the result of which will be fed into some vectorised trig calculations).
My current attempt at the code is based around the idea that mod(x, 2*Pi) = x - floor(x/(2*Pi))*2*Pi and looks like:
#define _PD_CONST(Name, Val) \
static const double _pd_##Name[2] __attribute__((aligned(16))) = { Val, Val }
_PD_CONST(2Pi, 6.283185307179586); /* = 2*pi */
_PD_CONST(recip_2Pi, 0.159154943091895); /* = 1/(2*pi) */
void vec_mod_2pi(const double * vec, int Size, double * modAns)
{
    __m128d sse_a, sse_b, sse_c;
    int i;
    int k = 0;
    double t = 0;
    unsigned int initial_mode;

    initial_mode = _MM_GET_ROUNDING_MODE();
    _MM_SET_ROUNDING_MODE(_MM_ROUND_DOWN);

    for (i = 0; i < Size; i += 2)
    {
        sse_a = _mm_loadu_pd(vec+i);
        sse_b = _mm_mul_pd( _mm_cvtepi32_pd( _mm_cvtpd_epi32( _mm_mul_pd(sse_a, *(__m128d*)_pd_recip_2Pi) ) ), *(__m128d*)_pd_2Pi);
        sse_c = _mm_sub_pd(sse_a, sse_b);
        _mm_storeu_pd(modAns+i, sse_c);
    }

    k = i-2;
    for (i = 0; i < Size%2; i++)
    {
        t = (double)((int)(vec[k+i] * 0.159154943091895)) * 6.283185307179586;
        modAns[k+i] = vec[k+i] - t;
    }

    _MM_SET_ROUNDING_MODE(initial_mode);
}
Unfortunately, this is currently returning a lot of NaNs, with a couple of answers of 1.128e119 as well (somewhat outside the range of 0 -> 2*Pi that I was aiming for!). I suspect that where I'm going wrong is in the double-to-int-to-double conversion that I'm trying to use to do the floor.
Can anyone suggest where I've gone wrong and how to improve it?
P.S. sorry about the format of that code, it's the first time I've posted a question on here and can't seem to get it to give me empty lines within the code block to make it readable.
If you want any kind of accuracy, the simple algorithm is terribly bad. For an accurate range reduction algorithm, see e.g. Ng et al., ARGUMENT REDUCTION FOR HUGE ARGUMENTS: Good to the Last Bit (now available via the Wayback Machine: 2012-12-24)
For large arguments the Payne-Hanek algorithm is typically used. However, the Payne-Hanek paper is quite difficult to read, and I suggest having a look at Chapter 11 of the Handbook of Floating-Point Arithmetic for a more accessible explanation.
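To make the accuracy point concrete, here is the simple reduction written as a plain scalar function (a rough sketch, separate from the SSE details above). For x around 1e8 the final subtraction cancels roughly eight leading decimal digits, so the result carries an absolute error on the order of 1e-8 even when the floor is computed exactly; the rounding of the stored 2*pi constant, multiplied by a quotient of ~1.6e7, contributes a comparable error.

#include <math.h>

/* Naive range reduction: fine for small x, but for x ~ 1e8 the subtraction
 * below cancels most of the significant digits of x, so only ~8 digits of
 * the reduced result can be trusted. Payne-Hanek style reduction avoids this
 * by effectively using an extended-precision representation of 1/(2*pi). */
static double naive_mod_2pi(double x)
{
    const double two_pi = 6.283185307179586;
    return x - floor(x / two_pi) * two_pi;
}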
Problem statement: I am working on a code that calculates big numbers. Hence, I easily get beyond the maximum range of long double. Here is an example below, where part of the code that generates big numbers is given:
int n;
long double summ;
long double a[1761], b[1761], c[1761]; // a, b, c are 1D arrays of long double

a[1] = 1;
b[1] = 1;
c[1] = 1;
summ = 1 + c[1];

for(n=2; n <=1760; n++){
    a[n] = n*n;
    b[n] = n;
    c[n] = c[n-1]*a[n-1]/b[n]; // Let us assume we have this kind of operation
    summ = summ + c[n];        // So basically, summ = 1+c[1]+c[2]+c[3]+...+c[1760]
}
The intermediate values of summ and c[n] are then used to evaluate the ratio c[n]/summ for every integer n. Then, just after the above loop, I do:
for(n=1; n<=1760; n++){
    c2[n] = c[n]/summ; // summ here equals 1+c[1]+c[2]+c[3]+...+c[1760]
}
Output: If we print n, c[n] and summ, we obtain inf after n=1755 because we exceed the range of long double:
n c[n] summ
1752 2.097121e+4917 2.098320e+4917
1753 3.672061e+4920 3.674159e+4920
1754 6.433452e+4923 6.437126e+4923
1755 1.127785e+4927 1.128428e+4927
1756 inf inf
1757 inf inf
1758 inf inf
1759 inf inf
1760 inf inf
Of course, if there is an overflow for c[n] and summ, I cannot evaluate the quantity of interest, which is c2[n].
Questions: Does someone see any solution for this? How do I need to change the code to obtain finite numerical values (for arbitrary n)?
I will indeed most likely need to go to very big numbers (n can be much larger than 1760).
Proposition: I know that the GNU Multiple Precision Arithmetic Library (GMP) might be useful, but honestly I found it too difficult to use (I'm outside the field), so if there is an easier way to solve this, I would be glad to read it. Otherwise, I will be forever grateful if someone could apply GMP or any other method to solve the above-mentioned problem.
NOTE: This does not do exactly what the OP wants. I'll leave this answer here in case someone has a similar problem.
As long as your final result and all initial values are not out of range, you can very often re-arrange your terms to avoid any overflow. In your case, if you actually just want to know c2[n] = c[n]/sum[n], you can re-write it as follows:
c2[n] = c[n]/sum[n]
= c[n]/(sum[n-1] + c[n]) // def. of sum[n]
= 1.0/(sum[n-1]/c[n] + 1.0)
= 1.0/(sum[n-1]/(c[n-1] * a[n-1] / b[n]) + 1.0) // def. of c[n]
= 1.0/(sum[n-1]/c[n-1] * b[n] / a[n-1] + 1.0)
= a[n-1]/(1/c2[n-1] * b[n] + a[n-1]) // def. of c2[n-1]
= (a[n-1]*c2[n-1]) / (b[n] + a[n-1]*c2[n-1])
Now in the final expression neither argument grows out of range, and in fact c2 slowly converges towards 1. If the values in your question are the actual values of a[n] and b[n] you may even find a closed form expression for c2[n] (I did not check it).
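For illustration, here is a minimal C sketch of the re-arranged recurrence, using a[n] = n*n and b[n] = n from the question and starting from c2[1] = c[1]/sum[1] = 1/(1 + c[1]) = 1/2:

#include <stdio.h>

int main(void) {
    double c2 = 0.5;  /* c2[1] = c[1]/sum[1] = 1/(1 + c[1]) with c[1] = 1 */
    for (int n = 2; n <= 1760; n++) {
        double a_prev = (double)(n - 1) * (n - 1);   /* a[n-1] = (n-1)^2 */
        double b_n    = (double)n;                   /* b[n]   = n       */
        c2 = (a_prev * c2) / (b_n + a_prev * c2);    /* stays within [0,1] */
    }
    printf("c2[1760] = %.17g\n", c2);
    return 0;
}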
To check that the re-arrangement works, you can compare it with your original formula (godbolt-link, only printing the last values): https://godbolt.org/z/oW8KsdKK6
Btw: Unless you later need all values of c2 again, there is actually no need to store any intermediate value inside an array.
I ain't no mathematician. This is what I wrote, with the results below. Looks to me that the exponent, at least, is keeping up with your long double results using my feeble double-only code...
#include <stdio.h>
#include <math.h>

int main() {
    int n;
    double la[1800], lb[1800], lc[1800];

    la[1] = lb[1] = lc[1] = 0.0; // log10 of a[1] = b[1] = c[1] = 1

    for( n = 2; n <= 1760; n++ ) {
        lb[n] = log10(n);
        la[n] = lb[n] + lb[n];             // log10(n*n)
        lc[n] = lc[n-1] + la[n-1] - lb[n]; // log10(c[n-1]*a[n-1]/b[n])
        printf( "%4d: %.16lf\n", n, lc[n] );
    }
    return 0;
}
/* omitted for brevity */
1750: 4910.8357954121602000
1751: 4914.0785853634488000
1752: 4917.3216235537839000
1753: 4920.5649098413542000
1754: 4923.8084440845114000
1755: 4927.0522261417700000 <<=== Take note, please.
1756: 4930.2962558718036000
1757: 4933.5405331334487000
1758: 4936.7850577857016000
1759: 4940.0298296877190000
1760: 4943.2748486988194000
EDIT (Butterfly edition)
Below is a pretty simple iterative function involving one single-precision and one double-precision floating point value. The purpose is to demonstrate that iterative calculations are exceedingly sensitive to initial conditions. While it seems obvious that the extra bits of the double will "hold on", remaining closer to the results one would get with infinite precision, the compounding discrepancy between these two versions demonstrates that "demons lurking in small places" will likely remain hidden in the fantastically tiny gaps between finite representations of what is infinite.
Just a bit of fun for a rainy day.
#include <stdio.h>
#include <math.h>

int main() {
    float fpi = 3.1415926535897932384626433832;
    double dpi = 3.1415926535897932384626433832;
    double thresh = 10e-8;

    for( int i = 0; i < 1000; i++ ) {
        fpi = fpi * 1.03f;
        dpi = dpi * 1.03f; // same (float) growth factor for both
        double diff = fabs( dpi - fpi );
        if( diff > thresh ) {
            printf( "%3d: %25.16lf\n", i, diff );
            thresh *= 10.0;
        }
    }
    return 0;
}
8: 0.0000001229991486
35: 0.0000010704333473
90: 0.0000100210180918
192: 0.0001092634900033
229: 0.0010121794607585
312: 0.0100316228017618
367: 0.1002719746902585
453: 1.0056506423279643
520: 10.2658853083848950
609: 103.8011477291584000
667: 1073.9984381198883000
736: 10288.9632129669190000
807: 101081.5514678955100000
886: 1001512.2135009766000000
966: 10473883.3271484370000000
For my project, I've written a naive C implementation of direct 3D convolution with periodic padding on the input. Unfortunately, since I'm new to C, the performance isn't so good... here's the code:
int mod(int a, int b)
{
    // calculate mod to get the correct index with periodic padding
    int r = a % b;
    return r < 0 ? r + b : r;
}

void convolve3D(const double *image, const double *kernel, const int imageDimX, const int imageDimY, const int imageDimZ, const int kernelDimX, const int kernelDimY, const int kernelDimZ, double *result)
{
    int imageSize = imageDimX * imageDimY * imageDimZ;
    int kernelSize = kernelDimX * kernelDimY * kernelDimZ;

    int i, j, k, l, m, n;
    int kernelCenterX = (kernelDimX - 1) / 2;
    int kernelCenterY = (kernelDimY - 1) / 2;
    int kernelCenterZ = (kernelDimZ - 1) / 2;
    int xShift, yShift, zShift;
    int outIndex, outI, outJ, outK;
    int imageIndex = 0, kernelIndex = 0;

    // Loop through each voxel
    for (k = 0; k < imageDimZ; k++) {
        for (j = 0; j < imageDimY; j++) {
            for (i = 0; i < imageDimX; i++) {
                kernelIndex = 0;
                // for each voxel, loop through each kernel coefficient
                for (n = 0; n < kernelDimZ; n++) {
                    for (m = 0; m < kernelDimY; m++) {
                        for (l = 0; l < kernelDimX; l++) {
                            // find the index of the corresponding voxel in the output image
                            xShift = l - kernelCenterX;
                            yShift = m - kernelCenterY;
                            zShift = n - kernelCenterZ;

                            outI = mod((i - xShift), imageDimX);
                            outJ = mod((j - yShift), imageDimY);
                            outK = mod((k - zShift), imageDimZ);
                            outIndex = outK * imageDimX * imageDimY + outJ * imageDimX + outI;

                            // calculate and add
                            result[outIndex] += kernel[kernelIndex] * image[imageIndex];
                            kernelIndex++;
                        }
                    }
                }
                imageIndex++;
            }
        }
    }
}
By convention, all the matrices (image, kernel, result) are stored in column-major fashion, and that's why I loop through them in this order, so that consecutive accesses are close in memory (I heard this would help).
I know the implementation is very naive, but since it's written in C, I was hoping the performance would be good, but instead it's a little disappointing. I tested it with an image of size 100^3 and a kernel of size 10^3 (~1 G multiply-add operations in total), and it took ~7 s, which I believe is way below the capability of a typical CPU.
If possible, could you guys help me optimize this routine?
I'm open to anything that could help, with just a few things I'd ask you to consider:
The problem I'm working with could be big (e.g. an image of size 200 by 200 by 200 with a kernel of size 50 by 50 by 50 or even larger). I understand that one way of optimizing this is by converting it into a matrix multiplication problem and using the BLAS GEMM routine, but I'm afraid memory could not hold such a big matrix.
Due to the nature of the problem, I would prefer direct convolution instead of FFT convolution, since my model is developed with direct convolution in mind, and my impression is that FFT convolution gives slightly different results than direct convolution, especially for rapidly changing images, a discrepancy I'm trying to avoid.
That said, I'm in no way an expert in this, so if you have a great implementation based on FFT convolution and/or my impression of FFT convolution is totally biased, I would really appreciate it if you could help me out.
The input images are assumed to be periodic, so periodic padding is necessary.
I understand that utilizing BLAS/SIMD or other lower-level approaches would definitely help a lot here, but since I'm a newbie I don't really know where to start... I would really appreciate it if you could point me in the right direction if you have experience with these libraries.
Thanks a lot for your help, and please let me know if you need more info about the nature of the problem.
As a first step, replace your mod ((i - xShift), imageDimX) with something like this:
inline int clamp( int x, int size )
{
    if( x < 0 ) return x + size;
    if( x >= size ) return x - size;
    return x;
}
These branches are very predictable because they yield the same result for very long runs of consecutive elements. Integer modulo is relatively slow.
Now, the next step (ordered by cost/profit) is going to be parallelizing. If you have any modern C++ compiler, just enable OpenMP somewhere in the project settings. After that you need 2 changes.
Decorate your very outer loop with something like this: #pragma omp parallel for schedule(guided)
Move your function-level variables inside that loop. This also means you'll have to compute the initial imageIndex from your k for each iteration.
Next option: rework your code so you only write each output value once. Compute the final value in your innermost 3 loops, reading from random locations in both image and kernel, and only write the result once. When you have that result[outIndex] += in the inner loop, the CPU stalls waiting for the data from memory. When you accumulate in a variable that's a register, not memory, there's no access latency.
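A rough sketch of that gather-style rewrite (not tested), reusing the question's identifiers and the clamp() helper above; since each output value is now written exactly once, the outer loop can also carry the OpenMP pragma from the previous step without write conflicts:

#pragma omp parallel for schedule(guided)
for (int k = 0; k < imageDimZ; k++)
    for (int j = 0; j < imageDimY; j++)
        for (int i = 0; i < imageDimX; i++) {
            double acc = 0.0;       /* accumulates in a register, no memory stalls */
            int kernelIndex = 0;
            for (int n = 0; n < kernelDimZ; n++)
                for (int m = 0; m < kernelDimY; m++)
                    for (int l = 0; l < kernelDimX; l++) {
                        /* gather the input voxel that previously scattered into (i,j,k);
                           clamp() assumes the kernel is no larger than the image */
                        int srcI = clamp(i + (l - kernelCenterX), imageDimX);
                        int srcJ = clamp(j + (m - kernelCenterY), imageDimY);
                        int srcK = clamp(k + (n - kernelCenterZ), imageDimZ);
                        acc += kernel[kernelIndex++] *
                               image[(srcK * imageDimY + srcJ) * imageDimX + srcI];
                    }
            result[(k * imageDimY + j) * imageDimX + i] = acc;  /* single write */
        }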
SIMD is the most complicated optimization here. In short, you'll want the maximum FMA width your hardware has (if you have AVX and need double precision, that width is 4), and you'll also need multiple independent accumulators in your 3 innermost loops, to avoid being limited by latency instead of saturating the throughput. Here's my answer to a much easier problem as an example of what I mean.
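As a generic illustration of the multiple-accumulator idea, here is a sketch on a plain dot product (assuming AVX2 with FMA and the matching compiler flags; the convolution itself is harder to vectorize because of the wrapped indices):

#include <immintrin.h>
#include <stddef.h>

/* Two independent accumulators hide the FMA latency instead of serializing on it. */
static double dot(const double *x, const double *y, size_t n)
{
    __m256d acc0 = _mm256_setzero_pd();
    __m256d acc1 = _mm256_setzero_pd();
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        acc0 = _mm256_fmadd_pd(_mm256_loadu_pd(x + i),     _mm256_loadu_pd(y + i),     acc0);
        acc1 = _mm256_fmadd_pd(_mm256_loadu_pd(x + i + 4), _mm256_loadu_pd(y + i + 4), acc1);
    }
    __m256d acc = _mm256_add_pd(acc0, acc1);
    double tmp[4];
    _mm256_storeu_pd(tmp, acc);
    double s = tmp[0] + tmp[1] + tmp[2] + tmp[3];
    for (; i < n; i++) s += x[i] * y[i];   /* scalar tail */
    return s;
}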
I'm trying to optimize some of my code in C, which is a lot bigger than the snippet below. Coming from Python, I wonder whether you can simply multiply an entire array by a number like I do below.
Evidently, it does not work the way I do it below. Is there any other way that achieves the same thing, or do I have to step through the entire array as in the for loop?
void main()
{
    int i;
    float data[] = {1.,2.,3.,4.,5.};

    //this fails
    data *= 5.0;

    //this works
    for(i = 0; i < 5; i++) data[i] *= 5.0;
}
There is no short-cut; you have to step through each element of the array.
Note however that in your example, you may achieve a speedup by using int rather than float for both your data and multiplier.
If you want, you can do this through BLAS (Basic Linear Algebra Subprograms), which is optimised. It is not in the C standard; it is a package you have to install yourself.
Sample code to achieve what you want:
#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>

int main () {
    int limit = 10;
    float *a = calloc( limit, sizeof(float));
    for ( int i = 0; i < limit; i++){
        a[i] = i;
    }

    cblas_sscal( limit, 0.5f, a, 1);

    for ( int i = 0; i < limit; i++){
        printf("%3f, ", a[i]);
    }
    printf("\n");
}
The names of the functions are not obvious, but after reading the guidelines you might start to guess what the BLAS functions do. sscal() can be split into s for single precision and scal for scale, which means that this function works on floats. The same function for double precision is called dscal().
If you need to scale a vector by a constant and add it to another vector, BLAS has a function for that too: saxpy(). The name breaks down as s (single precision) and axpy ("a times x plus y"), i.e. it computes y[i] += a*x[i]. As you might guess, there is a daxpy() too, which works on doubles.
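A minimal usage sketch, assuming the same CBLAS setup as the sscal() example above:

#include <stdio.h>
#include <cblas.h>

int main () {
    float x[5] = { 1.f, 2.f, 3.f, 4.f, 5.f };
    float y[5] = { 10.f, 10.f, 10.f, 10.f, 10.f };

    /* y = 5*x + y, both vectors with stride 1 */
    cblas_saxpy( 5, 5.0f, x, 1, y, 1 );

    for ( int i = 0; i < 5; i++ ) {
        printf("%3f, ", y[i]);   /* prints 15, 20, 25, 30, 35 */
    }
    printf("\n");
}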
I'm afraid that, in C, you will have to use for(i = 0; i < 5; i++) data[i] *= 5.0;.
Python allows for so many more "shortcuts"; however, in C, you have to access each element and then manipulate those values.
Using the for-loop would be the shortest way to accomplish what you're trying to do to the array.
EDIT: If you have a large amount of data, there are more efficient (in terms of running time) ways to multiply 5 to each value. Check out loop tiling, for example.
data *= 5.0;
Here, data is the address of the array, which is constant.
If you want to multiply the first value in that array, then use the * operator as below.
*data *= 5.0;
I am reprogramming a piece of MATLAB code in MEX (using C). So far my C version of the MATLAB code is about twice as fast as the MATLAB code. Now I have three questions, all related to the code below:
How can I speed up this code more?
Do you see any problems with this code? I ask this because I don't know MEX very well and I am also not a C guru ;-) ... I am aware that there should be some checks in the code (for example, checking that realloc actually succeeded), but I left this out for the sake of simplicity for the moment.
Is it possible that MATLAB is optimizing so well that I really can't get code much more than twice as fast in C...?
The code should be more or less platform independent (Windows, Linux, Unix, Mac, different hardware), so I don't want to use assembler or specific linear algebra libraries. That's why I programmed the stuff myself...
#include <mex.h>
#include <math.h>
#include <matrix.h>

void mexFunction(
        int nlhs, mxArray *plhs[],
        int nrhs, const mxArray *prhs[])
{
    double epsilon = ((double)(mxGetScalar(prhs[0])));
    int strengthDim = ((int)(mxGetScalar(prhs[1])));
    int lenPartMat = ((int)(mxGetScalar(prhs[2])));
    int numParts = ((int)(mxGetScalar(prhs[3])));
    double *partMat = mxGetPr(prhs[4]);
    const mxArray* verletListCells = prhs[5];
    mxArray *verletList;

    double *pseSum = (double *) malloc(numParts * sizeof(double));
    for(int i = 0; i < numParts; i++) pseSum[i] = 0.0;

    float *tempVar = NULL;

    for(int i = 0; i < numParts; i++)
    {
        verletList = mxGetCell(verletListCells,i);
        int numberVerlet = mxGetM(verletList);

        tempVar = (float *) realloc(tempVar, numberVerlet * sizeof(float) * 2);

        for(int a = 0; a < numberVerlet; a++)
        {
            tempVar[a*2] = partMat[((int) (*(mxGetPr(verletList) + a))) - 1] - partMat[i];
            tempVar[a*2 + 1] = partMat[((int) (*(mxGetPr(verletList) + a))) - 1 + lenPartMat] - partMat[i + lenPartMat];
            tempVar[a*2] = pow(tempVar[a*2],2);
            tempVar[a*2 + 1] = pow(tempVar[a*2 + 1],2);
            tempVar[a*2] = tempVar[a*2] + tempVar[a*2 + 1];
            tempVar[a*2] = sqrt(tempVar[a*2]);
            tempVar[a*2] = 4.0/(pow(epsilon,2) * M_PI) * exp(-(pow((tempVar[a*2]/epsilon),2)));
            pseSum[i] = pseSum[i] + ((partMat[((int) (*(mxGetPr(verletList) + a))) - 1 + 2*lenPartMat] - partMat[i + (2 * lenPartMat)]) * tempVar[a*2]);
        }
    }

    plhs[0] = mxCreateDoubleMatrix(numParts,1,mxREAL);
    for(int a = 0; a < numParts; a++)
    {
        *(mxGetPr(plhs[0]) + a) = pseSum[a];
    }

    free(tempVar);
    free(pseSum);
}
So this is the improved version, which is about 12 times faster than the MATLAB version. The conversion thing is still eating up a lot of time, but I'll leave that for now, because I have to change something in MATLAB for it. So first focus on the remaining C code. Do you see any more potential in the following code?
#include <mex.h>
#include <math.h>
#include <matrix.h>

void mexFunction(
        int nlhs, mxArray *plhs[],
        int nrhs, const mxArray *prhs[])
{
    double epsilon = ((double)(mxGetScalar(prhs[0])));
    int strengthDim = ((int)(mxGetScalar(prhs[1])));
    int lenPartMat = ((int)(mxGetScalar(prhs[2])));
    double *partMat = mxGetPr(prhs[3]);
    const mxArray* verletListCells = prhs[4];
    int numParts = mxGetM(verletListCells);
    mxArray *verletList;

    plhs[0] = mxCreateDoubleMatrix(numParts,1,mxREAL);
    double *pseSum = mxGetPr(plhs[0]);

    double epsilonSquared = epsilon*epsilon;
    double preConst = 4.0/((epsilonSquared) * M_PI);
    int numberVerlet = 0;
    double tempVar[2];

    for(int i = 0; i < numParts; i++)
    {
        verletList = mxGetCell(verletListCells,i);
        double *verletListPtr = mxGetPr(verletList);
        numberVerlet = mxGetM(verletList);

        for(int a = 0; a < numberVerlet; a++)
        {
            int adress = ((int) (*(verletListPtr + a))) - 1;

            tempVar[0] = partMat[adress] - partMat[i];
            tempVar[1] = partMat[adress + lenPartMat] - partMat[i + lenPartMat];
            tempVar[0] = tempVar[0]*tempVar[0] + tempVar[1]*tempVar[1];
            tempVar[0] = preConst * exp(-(tempVar[0]/epsilonSquared));

            pseSum[i] += (partMat[adress + 2*lenPartMat] - partMat[i + 2*lenPartMat]) * tempVar[0];
        }
    }
}
You do not need to allocate pseSum for local use and then later copy the data to the output. You can simply allocate a MATLAB object and get the pointer to its memory:
plhs[0] = mxCreateDoubleMatrix(numParts,1,mxREAL);
pseSum = mxGetPr(plhs[0]);
Thus you will not have to initialize pseSum to 0, because MATLAB already does it in mxCreateDoubleMatrix.
Remove all the mxGetPr calls from the inner loop and assign them to variables beforehand.
Instead of casting doubles to ints consider using int32 or uint32 arrays in MATLAB. Casting double to int is expensive. The internal loop computations would look like
tempVar[a*2] = partMat[somevar[a] - 1] - partMat[i];
You use constructs like this in your code:
((int) (*(mxGetPr(verletList) + a)))
You do it because verletList is a 'double' array (that is the default in MATLAB) which holds integer values. Instead, you should use an integer array. Before you call your MEX file, type in MATLAB:
verletList = int32(verletList);
Then you will not need the type cast to int above. You will simply write
((int*)mxGetData(verletList))[a]
or better yet, assign earlier
somevar = (int*)mxGetData(verletList);
and later write
somevar[a]
Precompute 4.0/(pow(epsilon,2) * M_PI) before all loops! That is one expensive constant.
pow(tempVar[a*2]/epsilon, 2) is simply tempVar[a*2]^2/epsilon^2. You calculate sqrt(tempVar[a*2]) just before. Why do you square it now?
Generally, do not use pow(x, 2). Just write x*x.
I would add some sanity checks on the parameters, especially if you demand integers. Either use MATLAB's int32/uint32 types, or check that what you get actually is an integer (see the small sketch below).
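For example, a minimal set of checks (a sketch, assuming the input layout of the improved version above and Verlet lists converted with int32() on the MATLAB side) might be:

/* Hypothetical sanity checks near the top of mexFunction */
if (nrhs < 5)
    mexErrMsgTxt("Expected 5 input arguments.");
if (!mxIsCell(prhs[4]))
    mexErrMsgTxt("verletListCells must be a cell array.");
/* ... and per cell, right after mxGetCell: */
if (!mxIsInt32(verletList))
    mexErrMsgTxt("Each Verlet list must be int32 (use int32() in MATLAB).");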
Edit, regarding the new code:
Compute -1/epsilonSquared before the loops and compute exp(minvepssq*tempVar[0]). Note that the result might differ slightly; it depends what you need, but if you don't care about the exact order of operations, do it.
Define a register variable pseSum_r and use it to sum the results in the inner loop. After the loop, assign it to pseSum[i]. If you want more fun, you can write the result to memory using an SSE streaming store (the _mm_stream_pd compiler intrinsic).
Do remove the double-to-int cast.
Most likely irrelevant, but try changing tempVar[0/1] to normal variables. Irrelevant because the compiler should do that for you, but again, an array is not needed here.
Parallelise the outer loop with OpenMP. Trivial (at least the simplest version, without thinking about data layout for NUMA architectures) since there is no dependence between the iterations. A sketch combining these edits follows below.
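A rough sketch of the loop with these edits applied (hypothetical and untested; it assumes the Verlet lists were converted to int32 in MATLAB and reuses preConst, epsilonSquared, partMat, lenPartMat and pseSum from the improved code above):

double minvepssq = -1.0 / epsilonSquared;   /* precomputed, as suggested */

#pragma omp parallel for
for (int i = 0; i < numParts; i++)
{
    mxArray *verletList = mxGetCell(verletListCells, i);
    const int *verlet = (const int *) mxGetData(verletList);  /* int32 list */
    int numberVerlet = (int) mxGetM(verletList);
    double pseSum_r = 0.0;                  /* register accumulator */

    for (int a = 0; a < numberVerlet; a++)
    {
        int adress = verlet[a] - 1;         /* no double-to-int cast */
        double dx = partMat[adress] - partMat[i];
        double dy = partMat[adress + lenPartMat] - partMat[i + lenPartMat];
        double w  = preConst * exp(minvepssq * (dx*dx + dy*dy));
        pseSum_r += (partMat[adress + 2*lenPartMat] - partMat[i + 2*lenPartMat]) * w;
    }
    pseSum[i] = pseSum_r;
}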
Can you estimate ahead of time what the maximum size of tempVar will be and allocate memory for it before the loop instead of using realloc? Reallocating memory is a time-consuming operation, and if your numParts is large, this could have a huge impact. Take a look at this question.
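For the original version that grows tempVar with realloc, one way to do that is a quick pass over the cell array first (a sketch):

/* One pass over the cells to find the largest Verlet list,
   then a single allocation instead of realloc inside the main loop. */
int maxVerlet = 0;
for (int i = 0; i < numParts; i++) {
    int m = (int) mxGetM(mxGetCell(verletListCells, i));
    if (m > maxVerlet) maxVerlet = m;
}
float *tempVar = (float *) malloc((size_t) maxVerlet * 2 * sizeof(float));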
Does anybody know of an open source numerical C library that provides the logsumexp-function?
The logsumexp(a) function computes the logarithm of the sum of exponentials, log(e^{a_1}+...+e^{a_n}), of the components of the array a, avoiding numerical overflow.
Here's a very simple implementation from scratch (tested, at least minimally):
#include <math.h>
#include <stddef.h>

double logsumexp(double nums[], size_t ct) {
    double max_exp = nums[0], sum = 0.0;
    size_t i;

    for (i = 1; i < ct; i++)
        if (nums[i] > max_exp)
            max_exp = nums[i];

    for (i = 0; i < ct; i++)
        sum += exp(nums[i] - max_exp);

    return log(sum) + max_exp;
}
This does the trick of effectively dividing all of the exponentials by the largest one (by subtracting the maximum inside the exponent), then adding its log back in at the end to avoid overflow, so it's well behaved for adding a large number of similarly-scaled values, with errors creeping in if some arguments are many orders of magnitude larger than others.
If you want it to run without crashing when given 0 arguments, you'll have to add a case for that :)
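A quick usage sketch (the inputs are made up; their raw exponentials would overflow a double, but the identity log(sum e^{a_i}) = max + log(sum e^{a_i - max}) keeps everything in range):

#include <stdio.h>

/* assumes the logsumexp() definition above is in scope */
int main(void) {
    double a[] = {1000.0, 1000.5, 999.0};  /* exp(1000.0) alone overflows a double */
    printf("%f\n", logsumexp(a, 3));       /* prints roughly 1001.104 */
    return 0;
}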