I have a function that takes a 3 x 3 matrix and an array of 3000 three-component vectors, and multiplies them.
All the calculations are done in double precision (64-bit).
The function is called about 3.5 million times so it should be optimized.
#define MATRIX_DIM 3
#define VECTOR_LEN 3000

typedef struct {
    double a;
    double b;
    double c;
} vector_st;

double matrix[MATRIX_DIM][MATRIX_DIM];
vector_st vector[VECTOR_LEN];
inline void rotate_arr(double input_matrix[][MATRIX_DIM], vector_st *input_vector, vector_st *output_vector)
{
    int i;

    for (i = 0; i < VECTOR_LEN; i++) {
        output_vector[i].a = input_matrix[0][0] * input_vector[i].a +
                             input_matrix[0][1] * input_vector[i].b +
                             input_matrix[0][2] * input_vector[i].c;
        output_vector[i].b = input_matrix[1][0] * input_vector[i].a +
                             input_matrix[1][1] * input_vector[i].b +
                             input_matrix[1][2] * input_vector[i].c;
        output_vector[i].c = input_matrix[2][0] * input_vector[i].a +
                             input_matrix[2][1] * input_vector[i].b +
                             input_matrix[2][2] * input_vector[i].c;
    }
}
I'm all out of ideas on how to optimize it, because it's already inline, data access is sequential, and the function is short and pretty straightforward.
It can be assumed that the vector is always the same and only the matrix changes, if that helps boost performance.
One easy-to-fix problem here is that compilers assume the matrix and the output vector may alias. As seen here in the second function, that causes less efficient and significantly larger code to be generated. This can be fixed simply by adding restrict to the output pointer. Doing only this already helps and keeps the code free from platform-specific optimization, but it relies on auto-vectorization to benefit from the performance increases that have happened in the past two decades.
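For illustration, the change is confined to the declaration; the loop body stays exactly as in the question:

// promise the compiler that stores through output_vector cannot alias
// input_matrix or input_vector
inline void rotate_arr(double input_matrix[][MATRIX_DIM],
                       vector_st *input_vector,
                       vector_st * restrict output_vector);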
Auto-vectorization is evidently still too immature for the task: both Clang and GCC generate way too much shuffling around of the data. This should improve in future compilers, but for now even a case like this (which doesn't seem inherently super hard) needs manual help, such as the following (not tested, though):
void rotate_arr_avx(double input_matrix[][MATRIX_DIM], vector_st *input_vector, vector_st * restrict output_vector)
{
    __m256d col0, col1, col2, a, b, c, t;
    int i;

    // using set macros like this is kind of dirty, but it's outside the loop anyway
    col0 = _mm256_set_pd(0.0, input_matrix[2][0], input_matrix[1][0], input_matrix[0][0]);
    col1 = _mm256_set_pd(0.0, input_matrix[2][1], input_matrix[1][1], input_matrix[0][1]);
    col2 = _mm256_set_pd(0.0, input_matrix[2][2], input_matrix[1][2], input_matrix[0][2]);

    for (i = 0; i < VECTOR_LEN; i++) {
        a = _mm256_set1_pd(input_vector[i].a);
        b = _mm256_set1_pd(input_vector[i].b);
        c = _mm256_set1_pd(input_vector[i].c);
        t = _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(col0, a), _mm256_mul_pd(col1, b)), _mm256_mul_pd(col2, c));
        // this stores an element too much, ensure 8 bytes of padding exist after the array
        _mm256_storeu_pd(&output_vector[i].a, t);
    }
}
Writing it this way significantly improves what compilers do with it: it now compiles to a nice, tight loop without all the nonsense. The earlier code hurt to look at, but with this version the loop looks like the following (GCC 8.1, with FMA enabled), which is actually readable:
.L2:
vbroadcastsd ymm2, QWORD PTR [rsi+8+rax]
vbroadcastsd ymm1, QWORD PTR [rsi+16+rax]
vbroadcastsd ymm0, QWORD PTR [rsi+rax]
vmulpd ymm2, ymm2, ymm4
vfmadd132pd ymm1, ymm2, ymm3
vfmadd132pd ymm0, ymm1, ymm5
vmovupd YMMWORD PTR [rdx+rax], ymm0
add rax, 24
cmp rax, 72000
jne .L2
This has an obvious deficiency: only 3 of the 4 double precision slots of the 256-bit AVX vectors are actually used. If the data format of the vector were changed to, for example, AAAABBBBCCCC repeating, a totally different approach could be used, namely broadcasting the matrix elements instead of the vector elements, then multiplying the broadcasted matrix element by the A components of 4 different vector_sts at once.
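As an untested sketch of that idea (the repacked layout and the name rotate_arr_avx_blocked are hypothetical, and the cost of repacking the data is not shown):

#include <immintrin.h>

// Untested sketch: assumes the vector data has been repacked into repeating
// AAAABBBBCCCC groups (4 a-components, then 4 b-components, then 4 c-components,
// 12 doubles per group) and that VECTOR_LEN is a multiple of 4.
void rotate_arr_avx_blocked(double input_matrix[][MATRIX_DIM],
                            const double *input_blocked,
                            double * restrict output_blocked)
{
    __m256d m[3][3];
    int r, s, i;

    // broadcast every matrix element once, outside the loop
    for (r = 0; r < 3; r++)
        for (s = 0; s < 3; s++)
            m[r][s] = _mm256_set1_pd(input_matrix[r][s]);

    for (i = 0; i < VECTOR_LEN; i += 4) {
        const double *in = input_blocked + 3 * i;            // start of this AAAABBBBCCCC group
        double * restrict out = output_blocked + 3 * i;
        __m256d a = _mm256_loadu_pd(in);                     // 4 a-components
        __m256d b = _mm256_loadu_pd(in + 4);                 // 4 b-components
        __m256d c = _mm256_loadu_pd(in + 8);                 // 4 c-components
        // all four slots of each AVX register now do useful work
        _mm256_storeu_pd(out,     _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(m[0][0], a), _mm256_mul_pd(m[0][1], b)), _mm256_mul_pd(m[0][2], c)));
        _mm256_storeu_pd(out + 4, _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(m[1][0], a), _mm256_mul_pd(m[1][1], b)), _mm256_mul_pd(m[1][2], c)));
        _mm256_storeu_pd(out + 8, _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(m[2][0], a), _mm256_mul_pd(m[2][1], b)), _mm256_mul_pd(m[2][2], c)));
    }
}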
Another thing we can try, without even changing the data format, is processing more than one matrix at the same time, which helps to re-use the loads from input_vector and increase arithmetic intensity.
void rotate_arr_avx(double input_matrixA[][MATRIX_DIM], double input_matrixB[][MATRIX_DIM], vector_st *input_vector, vector_st * restrict output_vectorA, vector_st * restrict output_vectorB)
{
    __m256d col0A, col1A, col2A, a, b, c, t, col0B, col1B, col2B;
    int i;

    // using set macros like this is kind of dirty, but it's outside the loop anyway
    col0A = _mm256_set_pd(0.0, input_matrixA[2][0], input_matrixA[1][0], input_matrixA[0][0]);
    col1A = _mm256_set_pd(0.0, input_matrixA[2][1], input_matrixA[1][1], input_matrixA[0][1]);
    col2A = _mm256_set_pd(0.0, input_matrixA[2][2], input_matrixA[1][2], input_matrixA[0][2]);
    col0B = _mm256_set_pd(0.0, input_matrixB[2][0], input_matrixB[1][0], input_matrixB[0][0]);
    col1B = _mm256_set_pd(0.0, input_matrixB[2][1], input_matrixB[1][1], input_matrixB[0][1]);
    col2B = _mm256_set_pd(0.0, input_matrixB[2][2], input_matrixB[1][2], input_matrixB[0][2]);

    for (i = 0; i < VECTOR_LEN; i++) {
        a = _mm256_set1_pd(input_vector[i].a);
        b = _mm256_set1_pd(input_vector[i].b);
        c = _mm256_set1_pd(input_vector[i].c);
        t = _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(col0A, a), _mm256_mul_pd(col1A, b)), _mm256_mul_pd(col2A, c));
        // this stores an element too much, ensure 8 bytes of padding exist after the array
        _mm256_storeu_pd(&output_vectorA[i].a, t);
        t = _mm256_add_pd(_mm256_add_pd(_mm256_mul_pd(col0B, a), _mm256_mul_pd(col1B, b)), _mm256_mul_pd(col2B, c));
        _mm256_storeu_pd(&output_vectorB[i].a, t);
    }
}
I have an O(N^4) image processing loop, and after profiling it (using Intel VTune 2013) I see that the number of instructions retired is reduced drastically after unrolling. I need help understanding this behavior on a multicore architecture (I'm using an Intel Xeon X5365, which has 8 cores with a shared L2 cache for every 2 cores). Also, the number of branch mispredictions has increased drastically!
///////////////EDITS/////////// A sample of my non-Unrolled code is shown below:
for (imageNo = 0; imageNo < 496; imageNo++) {
    for (unsigned int k = 0; k < 256; k++)
    {
        double z = O_L + (double)k * R_L;
        for (unsigned int j = 0; j < 256; j++)
        {
            double y = O_L + (double)j * R_L;
            for (unsigned int i = 0; i < 256; i++)
            {
                double x[1] = {O_L + (double)i * R_L};
                double w_n = (A_n[2] * x[0] + A_n[5] * y + A_n[8] * z + A_n[11]);
                double u_n = ((A_n[0] * x[0] + A_n[3] * y + A_n[6] * z + A_n[9] ) / w_n);
                double v_n = ((A_n[1] * x[0] + A_n[4] * y + A_n[7] * z + A_n[10]) / w_n);
                for (int loop = 0; loop < 1; loop++)
                {
                    px_x[loop] = (int) floor(u_n);
                    px_y[loop] = (int) floor(v_n);
                    alpha[loop] = u_n - px_x[loop];
                    beta[loop]  = v_n - px_y[loop];
                }
                /////////////////// (i, j) pixels ///////////////////////////////
                if (px_x[0] >= 0 && px_x[0] < (int)threadCopy[0].S_x && px_y[0] >= 0 && px_y[0] < (int)threadCopy[0].S_y)
                    pixel_1[0] = threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + px_x[0]];
                else
                    pixel_1[0] = 0.0;
                if (px_x[0]+1 >= 0 && px_x[0]+1 < (int)threadCopy[0].S_x && px_y[0] >= 0 && px_y[0] < (int)threadCopy[0].S_y)
                    pixel_1[2] = threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + (px_x[0]+1)];
                else
                    pixel_1[2] = 0.0;
                /////////////////// (i+1, j) pixels /////////////////////////
                if (px_x[0] >= 0 && px_x[0] < (int)threadCopy[0].S_x && px_y[0]+1 >= 0 && px_y[0]+1 < (int)threadCopy[0].S_y)
                    pixel_1[1] = threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + px_x[0]];
                else
                    pixel_1[1] = 0.0;
                if (px_x[0]+1 >= 0 && px_x[0]+1 < (int)threadCopy[0].S_x && px_y[0]+1 >= 0 && px_y[0]+1 < (int)threadCopy[0].S_y)
                    pixel_1[3] = threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + (px_x[0]+1)];
                else
                    pixel_1[3] = 0.0;

                pix_1 = (1.0 - alpha[0]) * (1.0 - beta[0]) * pixel_1[0] + (1.0 - alpha[0]) * beta[0] * pixel_1[1]
                      + alpha[0] * (1.0 - beta[0]) * pixel_1[2] + alpha[0] * beta[0] * pixel_1[3];

                f_L[k * L * L + j * L + i] += (float)(1.0 / (w_n * w_n) * pix_1);
            }
        }
    }
}
I'm unrolling the innermost loop by 4 iterations. (This should give a general idea of how I strip-mined the loop: basically I created arrays of size 4 and filled in the respective values.) Doing the math, I'm reducing the total number of iterations by 75%. Say there are 4 loop-handling instructions per iteration (load i, inc i, cmp i, jle loop); then the total number of instructions after unrolling should be reduced by (256-64)*4*256*256*496 = 24.96G.
The profiled results are:
Before unrolling: instructions retired: 3.1603T; branch mispredictions: 96 million
After unrolling: instructions retired: 2.642240T; branch mispredictions: 144 million
The number of instructions retired decreased by 518.06G. I have no clue how this is happening; I would appreciate any help regarding this (even a remote possibility for its occurrence). Also, I would like to know why branch mispredictions are increasing. Thanks in advance!
It is not clear where gcc would be reducing the number of instructions. It is possible that increased register pressure might encourage gcc to use load+operate instructions (so the same number of primitive operations but fewer instructions). The index for f_L would only need to be incremented once per unrolled iteration instead of once per original iteration, but that would only save 6.2G (3*64*256*256*496) instructions. (By the way, the loop overhead should only be three instructions since i should remain in a register.)
The following pseudo-assembly (for a RISC-like ISA) using a two-way unrolling shows how an increment can be saved:
// the address of f_L[k * L * L + j * L + i] is in r1
// (float)(1.0 / (w_n * w_n) * pix_1) results are in f1 and f2
load-single f9 [r1]; // load float at address in r1 to register f9
add-single f9 f9 f1; // f9 = f9 + f1
store-single [r1] f9; // store float in f9 to address in r1
load-single f10 4[r1]; // load float at address of r1+4 to f10
add-single f10 f10 f2; // f10 = f10 + f2
store-single 4[r1] f10; // store float in f10 to address of r1+4
add r1 r1 #8; // increase the address by 8 bytes
The trace of two iterations of the non-unrolled version would look more like:
load-single f9 [r1]; // load float at address of r1 to f9
add-single f9 f9 f2; // f9 = f9 + f2
store-single [r1] f9; // store float in f9 to address of r1
add r1 r1 #4; // increase the address by 4 bytes
...
load-single f9 [r1]; // load float at address of r1 to f9
add-single f9 f9 f2; // f9 = f9 + f2
store-single [r1] f9; // store float in f9 to address of r1
add r1 r1 #4; // increase the address by 4 bytes
Because memory addressing instructions commonly include adding an immediate offset (Itanium is an unusual exception) and pipelines are not generally implemented to optimize the case when the immediate is zero, using a non-zero immediate offset is generally "free". (It certainly reduces the number of instructions, 7 vs. 8 in this case, but generally it also improves performance.)
With respect to branch prediction, according to Agner Fog's The microarchitecture of Intel, AMD and VIA CPUs: An optimization guide for assembly programmers and compiler makers (PDF), the Core 2 microarchitecture's branch predictor uses an 8-bit global history. This means that it tracks the results of the last 8 branches and uses these 8 bits (along with bits from the instruction address) to index a table. This allows correlations between nearby branches to be recognized.
For your code, the branch corresponding to, e.g., the 8th previous branch is not the same branch in each iteration (since short-circuiting is used), so it is not easy to conceptualize how well correlations would be recognized.
Some correlations in branches are obvious. If px_x[0]>=0 is true, px_x[0]+1>=0 will also be true. If px_x[0] <(int)threadCopy[0].S_x is true, then px_x[0]+1 <(int)threadCopy[0].S_x is likely to be true.
If the unrolling is done such that px_x[n] is tested for all four values of n then these correlations would be pushed farther away so that the results are not used by the branch predictor.
Some optimization possibilities
Although you did not ask about any optimization possibilities, I am going to offer some avenues for exploration.
First, for the branches, if not being strictly portable is OK, the test x>=0 && x<y can be simplified to (unsigned)x<(unsigned)y. This is not strictly portable because, e.g., a machine could theoretically represent negative numbers in a sign-magnitude format with the most significant bit as the sign and negative indicated by a zero bit. For the common representations of signed integers, such a reinterpreting cast will work as long as y is a positive signed integer since a negative x value will have the most significant bit set and so be larger than y interpreted as an unsigned integer.
Second, the number of branches can be significantly reduced by using the 100% correlations for either px_x or px_y:
if ((unsigned)px_y[0] < (unsigned int)threadCopy[0].S_y)
{
    if ((unsigned)px_x[0] < (unsigned int)threadCopy[0].S_x)
        pixel_1[0] = threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + px_x[0]];
    else
        pixel_1[0] = 0.0;
    if ((unsigned)px_x[0]+1 < (unsigned int)threadCopy[0].S_x)
        pixel_1[2] = threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + (px_x[0]+1)];
    else
        pixel_1[2] = 0.0;
}
if ((unsigned)px_y[0]+1 < (unsigned int)threadCopy[0].S_y)
{
    if ((unsigned)px_x[0] < (unsigned int)threadCopy[0].S_x)
        pixel_1[1] = threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + px_x[0]];
    else
        pixel_1[1] = 0.0;
    if ((unsigned)px_x[0]+1 < (unsigned int)threadCopy[0].S_x)
        pixel_1[3] = threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + (px_x[0]+1)];
    else
        pixel_1[3] = 0.0;
}
(If the above section of code is replicated for unrolling, it should probably be replicated as a block rather than interleaving tests for different px_x and px_y values to allow the px_y branch to be near the px_y+1 branch and the first px_x branch to be near the other px_x branch and the px_x+1 branches.)
Another possible optimization is changing the calculation of w_n into a calculation of its reciprocal. This would change a multiply and three divisions into four multiplies and one division. Division is much more expensive than multiplication. In addition, calculating an approximate reciprocal is much more SIMD-friendly since there are usually reciprocal estimate instructions that provide a starting point which can be refined by the Newton-Raphson method.
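Applied to the posted loop, the transformation would look roughly like this (untested; only the affected lines are shown):

double inv_w_n = 1.0 / (A_n[2] * x[0] + A_n[5] * y + A_n[8] * z + A_n[11]);
double u_n = (A_n[0] * x[0] + A_n[3] * y + A_n[6] * z + A_n[9] ) * inv_w_n;
double v_n = (A_n[1] * x[0] + A_n[4] * y + A_n[7] * z + A_n[10]) * inv_w_n;
// ... px_x/px_y/alpha/beta and the pixel_1[] logic unchanged ...
f_L[k * L * L + j * L + i] += (float)(inv_w_n * inv_w_n * pix_1);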
If even worse obfuscation of the code is acceptable, you might consider changing code like double y = O_L + (double)j * R_L; into double y = O_L; ... y += R_L;. (I ran a test, and gcc does not seem to recognize this optimization, probably because of the use of floating point and the cast to double.) Thus:
for (int imageNo = 0; imageNo < 496; imageNo++) {
    double z = O_L;
    for (unsigned int k = 0; k < 256; k++)
    {
        double y = O_L;
        for (unsigned int j = 0; j < 256; j++)
        {
            double x[1];
            x[0] = O_L;
            for (unsigned int i = 0; i < 256; i++)
            {
                ...
                x[0] += R_L;
            } // end of i loop
            y += R_L;
        } // end of j loop
        z += R_L;
    } // end of k loop
} // end of imageNo loop
I am guessing that this would only modestly improve performance, so the obfuscation cost would be high relative to the benefit.
Another change that might be worth trying is incorporating some of the pix_1 calculation into the sections conditionally setting pixel_1[] values. This would significantly obfuscate the code and might not have much benefit. In addition, it might make autovectorization by the compiler more difficult. (When conditionally setting the values to the appropriate I_n or to zero, an SIMD comparison could set each element to -1 or 0, and a simple bitwise AND with the I_n value would provide the correct value. In this case, the overhead of forming the I_n vector would probably not be worthwhile given that Core 2 only supports 2-wide double precision SIMD, but with gather support or even just a longer vector the tradeoffs might change.)
However, this change would increase the size of basic blocks and reduce the amount of computation when any of px_x and px_y are out of range (I am guessing this is uncommon, so the benefit would be very small at best).
double pix_1 = 0.0;
double alpha_diff = 1.0 - alpha[0];
if ((unsigned)px_y[0] < (unsigned int)threadCopy[0].S_y)
{
    double beta_diff = 1.0 - beta[0];
    if ((unsigned)px_x[0] < (unsigned int)threadCopy[0].S_x)
        pix_1 += alpha_diff * beta_diff
               * threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + px_x[0]];
    // no need for an else statement since pix_1 is already zeroed and not
    // adding the pixel_1[0] factor is the same as zeroing pixel_1[0]
    if ((unsigned)px_x[0]+1 < (unsigned int)threadCopy[0].S_x)
        pix_1 += alpha[0] * beta_diff
               * threadCopy[0].I_n[px_y[0] * threadCopy[0].S_x + (px_x[0]+1)];
}
if ((unsigned)px_y[0]+1 < (unsigned int)threadCopy[0].S_y)
{
    if ((unsigned)px_x[0] < (unsigned int)threadCopy[0].S_x)
        pix_1 += alpha_diff * beta[0]
               * threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + px_x[0]];
    if ((unsigned)px_x[0]+1 < (unsigned int)threadCopy[0].S_x)
        pix_1 += alpha[0] * beta[0]
               * threadCopy[0].I_n[(px_y[0]+1) * threadCopy[0].S_x + (px_x[0]+1)];
}
Ideally, code like yours would be vectorized, but I do not know how to get gcc to recognize the opportunities, how to express the opportunities using intrinsics, nor whether significant effort at manually vectorizing this code would be worthwhile with an SIMD width of only two.
I am not a programmer (just someone who likes learning and thinking about computer architecture) and I have a significant inclination toward micro-optimization (as clear from the above), so the above proposals should be considered in that light.
My code relies heavily on computing distances between two points in 3D space.
To avoid the expensive square root I use the squared distance throughout.
But still it takes up a major fraction of the computing time and I would like to replace my simple function with something even faster.
I now have:
double distance_squared(double *a, double *b)
{
    double dx = a[0] - b[0];
    double dy = a[1] - b[1];
    double dz = a[2] - b[2];
    return dx*dx + dy*dy + dz*dz;
}
I also tried using a macro to avoid the function call but it doesn't help much.
#define DISTANCE_SQUARED(a, b) ((a)[0]-(b)[0])*((a)[0]-(b)[0]) + ((a)[1]-(b)[1])*((a)[1]-(b)[1]) + ((a)[2]-(b)[2])*((a)[2]-(b)[2])
I thought about using SIMD instructions but could not find a good example or complete list of instructions (ideally some multiply+add on two vectors).
GPU's are not an option since only one set of points is known at each function call.
What would be the fastest way to compute the distance squared?
A good compiler will optimize that about as well as you will ever manage. A good compiler will use SIMD instructions if it deems that they are going to be beneficial. Make sure that you turn on all such possible optimizations for your compiler. Unfortunately, vectors of dimension 3 don't tend to sit well with SIMD units.
I suspect that you will simply have to accept that the code produced by the compiler is probably pretty close to optimal and that no significant gains can be made.
The first obvious thing would be to use the restrict keyword.
As it is now, a and b are aliasable (and thus, from the compiler's point of view, which must assume the worst possible case, they are aliased). No compiler will auto-vectorize this, as it is wrong to do so.
Worse: not only can the compiler not vectorize such a loop, but in case you also store (luckily not in your example), it must also re-load values each time. Always be clear about aliasing, as it greatly impacts the compiler.
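Applied to the function in the question, that would be (a minimal sketch; restrict is the C99 spelling, some compilers use __restrict):

double distance_squared(const double * restrict a, const double * restrict b)
{
    double dx = a[0] - b[0];
    double dy = a[1] - b[1];
    double dz = a[2] - b[2];
    return dx*dx + dy*dy + dz*dz;
}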
Next, if you can live with it, use float instead of double and pad to 4 floats even if one is unused; this is a more "natural" data layout for the majority of CPUs (this is somewhat platform specific, but 4 floats is a good guess for most platforms -- 3 doubles, a.k.a. 1.5 SIMD registers on "typical" CPUs, is not optimal anywhere).
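For example, a padded single-precision layout could look like this (point4f is just an illustrative name; the fourth component is never read and only keeps each point the size of one 16-byte SSE register):

typedef struct {
    float x, y, z, pad;   /* pad is unused; it keeps sizeof(point4f) == 16 */
} point4f;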
(For a hand-written SIMD implementation (which is harder than you think), first and before all be sure to have aligned data. Next, look into what latencies your instructions have on the target machine and do the longest ones first. For example, on pre-Prescott Intel it makes sense to first shuffle each component into a register and then multiply with itself, even though that uses 3 multiplies instead of one, because shuffles have a long latency. On the later models, a shuffle takes a single cycle, so that would be a total anti-optimization.
Which again shows that leaving it to the compiler is not such a bad idea.)
The SIMD code to do this (using SSE3):
movaps xmm0,a
movaps xmm1,b
subps xmm0,xmm1
mulps xmm0,xmm0
haddps xmm0,xmm0
haddps xmm0,xmm0
but you need four-value vectors (x, y, z, 0) for this to work. If you've only got three values then you'd need to do a bit of fiddling about to get the required format, which would cancel out any benefit of the above.
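For reference, the same sequence could be expressed with intrinsics roughly as follows (an untested sketch; it assumes each point is stored as 4 floats, 16-byte aligned, with the fourth component equal in both points, e.g. zero, so its contribution cancels out):

#include <pmmintrin.h>   /* SSE3, for _mm_hadd_ps */

float distance_squared4(const float *a, const float *b)
{
    __m128 d  = _mm_sub_ps(_mm_load_ps(a), _mm_load_ps(b));   /* (dx, dy, dz, dw) */
    __m128 sq = _mm_mul_ps(d, d);                             /* squared differences */
    sq = _mm_hadd_ps(sq, sq);                                 /* pairwise sums */
    sq = _mm_hadd_ps(sq, sq);                                 /* full horizontal sum in every lane */
    return _mm_cvtss_f32(sq);
}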
In general though, due to the superscalar pipelined architecture of the CPU, the best way to get performance is to do the same operation on lots of data; that way you can interleave the various steps and do a bit of loop unrolling to avoid pipeline stalls. The above code will definitely stall on the last three instructions, based on the "can't use a value directly after it's modified" principle: each instruction has to wait for the result of the previous instruction to complete, which isn't good in a pipelined system.
Doing the calculation on two or more different sets of points at the same time can remove the above bottleneck: whilst waiting for the result of one computation, you can start the computation of the next point:
movaps xmm0,a1
movaps xmm2,a2
movaps xmm1,b1
movaps xmm3,b2
subps xmm0,xmm1
subps xmm2,xmm3
mulps xmm0,xmm0
mulps xmm2,xmm2
haddps xmm0,xmm0
haddps xmm2,xmm2
haddps xmm0,xmm0
haddps xmm2,xmm2
If you would like to optimize something, first profile the code and inspect the assembler output.
After compiling it with gcc -O3 (4.6.1) we get a nice disassembly using scalar SSE instructions:
movsd (%rdi), %xmm0
movsd 8(%rdi), %xmm2
subsd (%rsi), %xmm0
movsd 16(%rdi), %xmm1
subsd 8(%rsi), %xmm2
subsd 16(%rsi), %xmm1
mulsd %xmm0, %xmm0
mulsd %xmm2, %xmm2
mulsd %xmm1, %xmm1
addsd %xmm2, %xmm0
addsd %xmm1, %xmm0
This type of problem often occurs in MD simulations. Usually the amount of calculation is reduced by cutoffs and neighbor lists, so the number of distance evaluations is reduced. The actual calculation of the squared distances, however, is done exactly as given in your question (with compiler optimizations and a fixed type such as float[3]).
So if you want to reduce the number of squared-distance calculations, you should tell us more about the problem.
Perhaps passing the 6 doubles directly as arguments could make it faster (because it could avoid the array dereference):
inline double distsquare_coord(double xa, double ya, double za,
                               double xb, double yb, double zb)
{
    double dx = xa - xb;
    double dy = ya - yb;
    double dz = za - zb;
    return dx*dx + dy*dy + dz*dz;
}
Or perhaps, if you have many points in the vicinity, you might compute a distance (to the same fixed other point) by linear approximation of the distances of other near points.
If you can rearrange your data to process two pairs of input vectors at once, you may use this code (SSE2 only)
// @brief Computes two squared distances between two pairs of 3D vectors.
// @param a
//     Pointer to the first pair of 3D vectors.
//     The two vectors must be stored with stride 24, i.e. (a + 3) should point to the first component of the second vector in the pair.
//     Must be aligned by 16 (2 doubles).
// @param b
//     Pointer to the second pair of 3D vectors.
//     The two vectors must be stored with stride 24, i.e. (b + 3) should point to the first component of the second vector in the pair.
//     Must be aligned by 16 (2 doubles).
// @param c
//     Pointer to the output 2-element array.
//     Must be aligned by 16 (2 doubles).
//     The two distances between the a and b vectors will be written to c[0] and c[1] respectively.
void dist_squared_x2(const double * __restrict__ a, const double * __restrict__ b, double * __restrict__ c) {
    // diff0 = ( a0.y - b0.y, a0.x - b0.x ) = ( d0.y, d0.x )
    __m128d diff0 = _mm_sub_pd(_mm_load_pd(a), _mm_load_pd(b));
    // diff1 = ( a1.x - b1.x, a0.z - b0.z ) = ( d1.x, d0.z )
    __m128d diff1 = _mm_sub_pd(_mm_load_pd(a + 2), _mm_load_pd(b + 2));
    // diff2 = ( a1.z - b1.z, a1.y - b1.y ) = ( d1.z, d1.y )
    __m128d diff2 = _mm_sub_pd(_mm_load_pd(a + 4), _mm_load_pd(b + 4));
    // prod0 = ( d0.y * d0.y, d0.x * d0.x )
    __m128d prod0 = _mm_mul_pd(diff0, diff0);
    // prod1 = ( d1.x * d1.x, d0.z * d0.z )
    __m128d prod1 = _mm_mul_pd(diff1, diff1);
    // prod2 = ( d1.z * d1.z, d1.y * d1.y )
    __m128d prod2 = _mm_mul_pd(diff2, diff2);
    // _mm_unpacklo_pd(prod0, prod2) = ( d1.y * d1.y, d0.x * d0.x )
    // psum = ( d1.x * d1.x + d1.y * d1.y, d0.x * d0.x + d0.z * d0.z )
    __m128d psum = _mm_add_pd(_mm_unpacklo_pd(prod0, prod2), prod1);
    // _mm_unpackhi_pd(prod0, prod2) = ( d1.z * d1.z, d0.y * d0.y )
    // dotprod = ( d1.x * d1.x + d1.y * d1.y + d1.z * d1.z, d0.x * d0.x + d0.y * d0.y + d0.z * d0.z )
    __m128d dotprod = _mm_add_pd(_mm_unpackhi_pd(prod0, prod2), psum);
    _mm_store_pd(c, dotprod);
}
In actual fact, it's the derivative of the Lennard-Jones potential. The reason is that I am writing a molecular dynamics program, and at least 80% of the time is spent in the following function, even with the most aggressive compiler options (gcc -O3).
double ljd(double r)   /* Derivative of Lennard Jones Potential for Argon with
                          respect to distance (r) */
{
    double temp;

    temp = Si/r;
    temp = temp*temp;
    temp = temp*temp*temp;
    return ( (24*Ep/r)*(temp-(2 * pow(temp,2))) );
}
This code is in a file "functs.h", which I include in my main file. I thought that using temporary variables in this way would make the function faster, but I am worried that creating them is too wasteful. Should I use static? Also, the code is parallelized using OpenMP, so I can't really declare temp as a global variable.
The variables Ep and Si are defined (using #define). I have only been using C for about 1 month. I tried to look at the assembler code generated by gcc, but I was completely lost.
I would get rid of the call to pow() for a start:
double ljd(double r)   /* Derivative of Lennard Jones Potential for Argon with
                          respect to distance (r) */
{
    double temp;

    temp = Si / r;
    temp = temp * temp;
    temp = temp * temp * temp;
    return ( (24.0 * Ep / r) * (temp - (2.0 * temp * temp)) );
}
On my architecture (Intel Centrino Duo, MinGW-gcc 4.5.2 on Windows XP), non-optimized code using pow()
static inline double ljd(double r)
{
return 24 * Ep / Si * (pow(Si / r, 7) - 2 * pow(Si / r, 13));
}
actually outperforms your version if -ffast-math is provided.
The generated assembly (using some arbitrary values for Ep and Si) looks like this:
fldl LC0
fdivl 8(%ebp)
fld %st(0)
fmul %st(1), %st
fmul %st, %st(1)
fld %st(0)
fmul %st(1), %st
fmul %st(2), %st
fxch %st(1)
fmul %st(2), %st
fmul %st(0), %st
fmulp %st, %st(2)
fxch %st(1)
fadd %st(0), %st
fsubrp %st, %st(1)
fmull LC1
Well, as I've said before, compilers suck at optimising floating point code for many reasons. So, here's an Intel assembly version that should be faster (compiled using DevStudio 2005):
const double Si6 = /* whatever pow(Si, 6) is */;
const double Si_value = /* whatever Si is */; /* need _value as Si is a register name! */
const double Ep24 = /* whatever 24 * Ep is */;
double ljd (double r)
{
double result;
__asm
{
fld qword ptr [r]
fld st(0)
fmul st(0),st(0)
fld st(0)
fmul st(0),st(0)
fmulp st(1),st(0)
fld qword ptr [Si6]
fdivrp st(1),st(0)
fld st(0)
fld1
fsub st(0),st(1)
fsubrp st(1),st(0)
fmulp st(1),st(0)
fld qword ptr [Ep24]
fmulp st(1),st(0)
fdivrp st(1),st(0)
fstp qword ptr [result]
}
return result;
}
This version will produce slightly different results from the version posted. In the original code the compiler will probably be writing intermediate results to RAM, which loses precision since the (Intel x87) FPU operates at 80 bits internally whereas the double type is only 64 bits. The above assembler does not lose precision in the intermediate results; everything is done at 80 bits, and only the final result is rounded to 64 bits.
The local variable is just fine. It doesn't cost anything. Leave it alone.
As others said, get rid of the pow call. It can't be any faster than simply squaring the number, and it could be a lot slower.
That said, just because the function is active 80+% of the time does not mean it's a problem. It only means that if there is something you can optimize, it's either in there, in something it calls (like pow), or in something that calls it.
If you try random pausing, which is a method of stack-sampling, you will see that routine on 80+% of samples, plus the lines within it that are responsible for the time, plus its callers that are responsible for the time, and their callers, and so on. All the lines of code on the stack are jointly responsible for the time.
Optimality is not when nothing takes a large percentage of time; it is when nothing you can fix takes a large percentage of time.
Is your application structured in such a way that you could profitably vectorise this function, calculating several independent values in parallel? This would allow you to utilise hardware vector units, such as SSE.
It also seems like you would be better off keeping 1/r values around, rather than r itself.
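For instance, if the caller already has rinv = 1/r around, the function could be rewritten without any division at all (a hypothetical sketch following the same algebra as the original ljd; the name ljd_from_rinv is illustrative):

static inline double ljd_from_rinv(double rinv)   /* rinv = 1/r */
{
    double temp = Si * rinv;        /* Si/r */
    temp = temp * temp;             /* (Si/r)^2 */
    temp = temp * temp * temp;      /* (Si/r)^6 */
    return (24 * Ep * rinv) * (temp - (2 * temp * temp));
}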
This is an example explicitly using SSE2 instructions to implement the function. ljd() calculates two values at once.
#include <emmintrin.h>   /* SSE2 intrinsics */

static __m128d ljd(__m128d r)
{
    static const __m128d two = { 2.0, 2.0 };
    static const __m128d si = { Si, Si };
    static const __m128d ep24 = { 24 * Ep, 24 * Ep };
    __m128d temp2, temp3;
    __m128d temp = _mm_div_pd(si, r);
    __m128d ep24_r = _mm_div_pd(ep24, r);

    temp = _mm_mul_pd(temp, temp);
    temp2 = _mm_mul_pd(temp, temp);
    temp2 = _mm_mul_pd(temp2, temp);
    temp3 = _mm_mul_pd(temp2, temp2);
    temp3 = _mm_mul_pd(temp3, two);
    return _mm_mul_pd(ep24_r, _mm_sub_pd(temp2, temp3));
}

/* Requires `out` and `in` to be 16-byte aligned */
void ljd_array(double out[], const double in[], int n)
{
    int i;

    for (i = 0; i < n; i += 2)
    {
        _mm_store_pd(out + i, ljd(_mm_load_pd(in + i)));
    }
}
However, it is important to note that recent versions of GCC are often able to vectorise functions like this automatically, as long as you're targeting the right architecture and have optimisation enabled. If you're targeting 32-bit x86, try compiling with -msse2 -O3, and adjust things such that the input and output arrays are 16-byte aligned.
Alignment for static and automatic arrays can be achieved under gcc with the type attribute __attribute__ ((aligned (16))), and for dynamic arrays using the posix_memalign() function.
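A minimal sketch of both forms (buffer names and sizes are arbitrary; posix_memalign is POSIX):

#include <stdlib.h>

/* static array, 16-byte aligned via the type attribute */
static double in_buf[1024] __attribute__ ((aligned (16)));

/* dynamically allocated, 16-byte aligned via posix_memalign();
   returns 0 on success, just like posix_memalign itself */
static int alloc_aligned(double **out, size_t n)
{
    return posix_memalign((void **)out, 16, n * sizeof(double));
}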
Ah, that brings me back some memories... I've done MD with Lennard Jones potential years ago.
In my scenario (not huge systems) it was enough to replace the pow() with several multiplications, as suggested by another answer. I also restricted the range of neighbours, effectively truncating the potential at about r ~ 3.5 and applying some standard thermodynamic correction afterwards.
But if all this is not enough for you, I suggest to precompute the function for closely spaced values of r and simply interpolate (linear or quadratic, I'd say).
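A sketch of the tabulation idea, reusing the ljd() from the question (the table size and range here are arbitrary and only illustrative):

#define LJD_TABLE_N    4096
#define LJD_TABLE_RMIN 0.5
#define LJD_TABLE_RMAX 3.5

static double ljd_table[LJD_TABLE_N];

/* fill the table once at startup */
void ljd_table_init(void)
{
    int i;
    double dr = (LJD_TABLE_RMAX - LJD_TABLE_RMIN) / (LJD_TABLE_N - 1);
    for (i = 0; i < LJD_TABLE_N; i++)
        ljd_table[i] = ljd(LJD_TABLE_RMIN + i * dr);
}

/* linear interpolation; assumes LJD_TABLE_RMIN <= r < LJD_TABLE_RMAX */
double ljd_lookup(double r)
{
    double s = (r - LJD_TABLE_RMIN) * (LJD_TABLE_N - 1) / (LJD_TABLE_RMAX - LJD_TABLE_RMIN);
    int i = (int)s;
    double frac = s - i;

    if (i > LJD_TABLE_N - 2)   /* guard against rounding at the upper edge */
        i = LJD_TABLE_N - 2;
    return ljd_table[i] + frac * (ljd_table[i + 1] - ljd_table[i]);
}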
I want to learn more about using the SSE.
What ways are there to learn, besides the obvious reading the Intel® 64 and IA-32 Architectures Software Developer's Manuals?
Mainly I'm interested to work with the GCC X86 Built-in Functions.
First, I don't recommend using the built-in functions: they are not portable (across compilers of the same arch).
Use intrinsics; GCC does a wonderful job of optimizing SSE intrinsics into even more optimized code. You can always have a peek at the assembly and see how to use SSE to its full potential.
Intrinsics are easy - just like normal function calls:
#include <immintrin.h> // portable to all x86 compilers

int main()
{
    __m128 vector1 = _mm_set_ps(4.0, 3.0, 2.0, 1.0); // high element first, opposite of C array order. Use _mm_setr_ps if you want "little endian" element order in the source.
    __m128 vector2 = _mm_set_ps(7.0, 8.0, 9.0, 0.0);

    __m128 sum = _mm_add_ps(vector1, vector2); // result = vector1 + vector2

    vector1 = _mm_shuffle_ps(vector1, vector1, _MM_SHUFFLE(0,1,2,3));
    // vector1 is now (1, 2, 3, 4) (above shuffle reversed it)
    return 0;
}
Use _mm_load_ps or _mm_loadu_ps to load data from arrays.
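For example (a minimal sketch; _mm_load_ps requires the address to be 16-byte aligned, while _mm_loadu_ps does not):

#include <immintrin.h>

float arr[8] __attribute__((aligned(16))) = {1, 2, 3, 4, 5, 6, 7, 8};
float out[4];

void load_store_demo(void)
{
    __m128 lo = _mm_load_ps(arr);        /* aligned load: arr itself is 16-byte aligned */
    __m128 hi = _mm_loadu_ps(arr + 3);   /* unaligned load: arr + 3 is not 16-byte aligned */
    _mm_storeu_ps(out, _mm_add_ps(lo, hi));
}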
Of course there are way more options, SSE is really powerful and in my opinion relatively easy to learn.
See also https://stackoverflow.com/tags/sse/info for some links to guides.
Since you asked for resources:
A practical guide to using SSE with C++: Good conceptual overview on how to use SSE effectively, with examples.
MSDN Listing of Compiler Intrinsics: Comprehensive reference for all your intrinsic needs. It's MSDN, but pretty much all the intrinsics listed here are supported by GCC and ICC as well.
Christopher Wright's SSE Page: Quick reference on the meanings of the SSE opcodes. I guess the Intel Manuals can serve the same function, but this is faster.
It's probably best to write most of your code in intrinsics, but do check the objdump of your compiler's output to make sure that it's producing efficient code. SIMD code generation is still a fairly new technology and it's very possible that the compiler might get it wrong in some cases.
I find Dr. Agner Fog's research & optimization guides very valuable! He also has some libraries & testing tools that I have not tried yet.
http://www.agner.org/optimize/
Step 1: write some assembly manually
I recommend that you first try to write your own assembly manually to see and control exactly what is happening when you start learning.
Then the question becomes how to observe what is happening in the program, and the answers are:
GDB
use the C standard library to print and assert things
Using the C standard library yourself requires a little bit of work, but nothing much. I have for example done this work nicely for you on Linux in the following files of my test setup:
lkmc.h
lkmc.c
lkmc/x86_64.h
Using those helpers, I then start playing around with the basics, such as:
load and store data to / from memory into SSE registers
add integers and floating point numbers of different sizes
assert that the results are what I expect
addpd.S
#include <lkmc.h>
LKMC_PROLOGUE
.data
.align 16
addps_input0: .float 1.5, 2.5, 3.5, 4.5
addps_input1: .float 5.5, 6.5, 7.5, 8.5
addps_expect: .float 7.0, 9.0, 11.0, 13.0
addpd_input0: .double 1.5, 2.5
addpd_input1: .double 5.5, 6.5
addpd_expect: .double 7.0, 9.0
.bss
.align 16
output: .skip 16
.text
/* 4x 32-bit */
movaps addps_input0, %xmm0
movaps addps_input1, %xmm1
addps %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, addps_expect, $0x10)
/* 2x 64-bit */
movaps addpd_input0, %xmm0
movaps addpd_input1, %xmm1
addpd %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, addpd_expect, $0x10)
LKMC_EPILOGUE
GitHub upstream.
paddq.S
#include <lkmc.h>
LKMC_PROLOGUE
.data
.align 16
input0: .long 0xF1F1F1F1, 0xF2F2F2F2, 0xF3F3F3F3, 0xF4F4F4F4
input1: .long 0x12121212, 0x13131313, 0x14141414, 0x15151515
paddb_expect: .long 0x03030303, 0x05050505, 0x07070707, 0x09090909
paddw_expect: .long 0x04030403, 0x06050605, 0x08070807, 0x0A090A09
paddd_expect: .long 0x04040403, 0x06060605, 0x08080807, 0x0A0A0A09
paddq_expect: .long 0x04040403, 0x06060606, 0x08080807, 0x0A0A0A0A
.bss
.align 16
output: .skip 16
.text
movaps input1, %xmm1
/* 16x 8bit */
movaps input0, %xmm0
paddb %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddb_expect, $0x10)
/* 8x 16-bit */
movaps input0, %xmm0
paddw %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddw_expect, $0x10)
/* 4x 32-bit */
movaps input0, %xmm0
paddd %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddd_expect, $0x10)
/* 2x 64-bit */
movaps input0, %xmm0
paddq %xmm1, %xmm0
movaps %xmm0, output
LKMC_ASSERT_MEMCMP(output, paddq_expect, $0x10)
LKMC_EPILOGUE
GitHub upstream.
Step 2: write some intrinsics
For production code however, you will likely want to use the pre-existing intrinsics instead of raw assembly as mentioned at: https://stackoverflow.com/a/1390802/895245
So now I try to convert the previous examples into more or less equivalent C code with intrinsics.
addpd.c
#include <assert.h>
#include <string.h>
#include <x86intrin.h>

float global_input0[] __attribute__((aligned(16))) = {1.5f, 2.5f, 3.5f, 4.5f};
float global_input1[] __attribute__((aligned(16))) = {5.5f, 6.5f, 7.5f, 8.5f};
float global_output[4] __attribute__((aligned(16)));
float global_expected[] __attribute__((aligned(16))) = {7.0f, 9.0f, 11.0f, 13.0f};

int main(void) {
    /* 32-bit add (addps). */
    {
        __m128 input0 = _mm_set_ps(1.5f, 2.5f, 3.5f, 4.5f);
        __m128 input1 = _mm_set_ps(5.5f, 6.5f, 7.5f, 8.5f);
        __m128 output = _mm_add_ps(input0, input1);
        /* _mm_extract_ps returns int instead of float:
         * * https://stackoverflow.com/questions/5526658/intel-sse-why-does-mm-extract-ps-return-int-instead-of-float
         * * https://stackoverflow.com/questions/3130169/how-to-convert-a-hex-float-to-a-float-in-c-c-using-mm-extract-ps-sse-gcc-inst
         * so we must use instead: _MM_EXTRACT_FLOAT
         */
        float f;
        _MM_EXTRACT_FLOAT(f, output, 3);
        assert(f == 7.0f);
        _MM_EXTRACT_FLOAT(f, output, 2);
        assert(f == 9.0f);
        _MM_EXTRACT_FLOAT(f, output, 1);
        assert(f == 11.0f);
        _MM_EXTRACT_FLOAT(f, output, 0);
        assert(f == 13.0f);

        /* And we also have _mm_cvtss_f32 + _mm_shuffle_ps. */
        assert(_mm_cvtss_f32(output) == 13.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 1)) == 11.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 2)) == 9.0f);
        assert(_mm_cvtss_f32(_mm_shuffle_ps(output, output, 3)) == 7.0f);
    }

    /* Now from memory. */
    {
        __m128 *input0 = (__m128 *)global_input0;
        __m128 *input1 = (__m128 *)global_input1;
        _mm_store_ps(global_output, _mm_add_ps(*input0, *input1));
        assert(!memcmp(global_output, global_expected, sizeof(global_output)));
    }

    /* 64-bit add (addpd). */
    {
        __m128d input0 = _mm_set_pd(1.5, 2.5);
        __m128d input1 = _mm_set_pd(5.5, 6.5);
        __m128d output = _mm_add_pd(input0, input1);
        /* OK, and this is how we get the doubles out:
         * with _mm_cvtsd_f64 + _mm_unpackhi_pd
         * https://stackoverflow.com/questions/19359372/mm-cvtsd-f64-analogon-for-higher-order-floating-point
         */
        assert(_mm_cvtsd_f64(output) == 9.0);
        assert(_mm_cvtsd_f64(_mm_unpackhi_pd(output, output)) == 7.0);
    }

    return 0;
}
GitHub upstream.
paddq.c
#include <assert.h>
#include <inttypes.h>
#include <string.h>
#include <x86intrin.h>

uint32_t global_input0[] __attribute__((aligned(16))) = {1, 2, 3, 4};
uint32_t global_input1[] __attribute__((aligned(16))) = {5, 6, 7, 8};
uint32_t global_output[4] __attribute__((aligned(16)));
uint32_t global_expected[] __attribute__((aligned(16))) = {6, 8, 10, 12};

int main(void) {
    /* 32-bit add hello world. */
    {
        __m128i input0 = _mm_set_epi32(1, 2, 3, 4);
        __m128i input1 = _mm_set_epi32(5, 6, 7, 8);
        __m128i output = _mm_add_epi32(input0, input1);
        /* _mm_extract_epi32 mentioned at:
         * https://stackoverflow.com/questions/12495467/how-to-store-the-contents-of-a-m128d-simd-vector-as-doubles-without-accessing/56404421#56404421 */
        assert(_mm_extract_epi32(output, 3) == 6);
        assert(_mm_extract_epi32(output, 2) == 8);
        assert(_mm_extract_epi32(output, 1) == 10);
        assert(_mm_extract_epi32(output, 0) == 12);
    }

    /* Now from memory. */
    {
        __m128i *input0 = (__m128i *)global_input0;
        __m128i *input1 = (__m128i *)global_input1;
        _mm_store_si128((__m128i *)global_output, _mm_add_epi32(*input0, *input1));
        assert(!memcmp(global_output, global_expected, sizeof(global_output)));
    }

    /* Now a bunch of other sizes. */
    {
        __m128i input0 = _mm_set_epi32(0xF1F1F1F1, 0xF2F2F2F2, 0xF3F3F3F3, 0xF4F4F4F4);
        __m128i input1 = _mm_set_epi32(0x12121212, 0x13131313, 0x14141414, 0x15151515);
        __m128i output;

        /* 8-bit integers (paddb) */
        output = _mm_add_epi8(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x03030303);
        assert(_mm_extract_epi32(output, 2) == 0x05050505);
        assert(_mm_extract_epi32(output, 1) == 0x07070707);
        assert(_mm_extract_epi32(output, 0) == 0x09090909);

        /* 16-bit integers (paddw) */
        output = _mm_add_epi16(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04030403);
        assert(_mm_extract_epi32(output, 2) == 0x06050605);
        assert(_mm_extract_epi32(output, 1) == 0x08070807);
        assert(_mm_extract_epi32(output, 0) == 0x0A090A09);

        /* 32-bit integers (paddd) */
        output = _mm_add_epi32(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04040403);
        assert(_mm_extract_epi32(output, 2) == 0x06060605);
        assert(_mm_extract_epi32(output, 1) == 0x08080807);
        assert(_mm_extract_epi32(output, 0) == 0x0A0A0A09);

        /* 64-bit integers (paddq) */
        output = _mm_add_epi64(input0, input1);
        assert(_mm_extract_epi32(output, 3) == 0x04040404);
        assert(_mm_extract_epi32(output, 2) == 0x06060605);
        assert(_mm_extract_epi32(output, 1) == 0x08080808);
        assert(_mm_extract_epi32(output, 0) == 0x0A0A0A09);
    }

    return 0;
}
GitHub upstream.
Step 3: go and optimize some code and benchmark it
The final, most important, and hardest step is of course to actually use the intrinsics to make your code fast, and then to benchmark the improvement.
Doing so will likely require you to learn a bit about the x86 microarchitecture, which I don't know well myself. CPU-bound vs. I/O-bound will likely be one of the things that comes up: What do the terms "CPU bound" and "I/O bound" mean?
As mentioned at https://stackoverflow.com/a/12172046/895245, this will almost inevitably involve reading Agner Fog's documentation, which appears to be better than anything Intel itself has published.
Hopefully, however, steps 1 and 2 will serve as a basis to at least experiment with the functional, non-performance aspects and quickly see what the instructions are doing.
TODO: produce a minimal interesting example of such optimization here.
You can use the SIMD-Visualiser to graphically visualize and animate the operations. It will greatly help with understanding how the data lanes are processed.