How to further optimize performance of Matrix Multiplication? - c

I am trying to optimize my matrix multiplication code running on a single core. How can I further improve the performance with regard to loop unrolling and FMA/SSE? I'm also curious to know why the performance won't increase if I use four instead of two sums in the inner loop.
The problem size is a 1000x1000 matrix multiplication. Both gcc 9 and icc 19.0.5 are available. Intel Xeon @ 3.10GHz, 32K L1d cache, Skylake architecture. Compiled with gcc -O3 -mavx.
void mmult(double* A, double* B, double* C)
{
    const int block_size = 64 / sizeof(double);
    __m256d sum[2], broadcast;
    for (int i0 = 0; i0 < SIZE_M; i0 += block_size) {
        for (int k0 = 0; k0 < SIZE_N; k0 += block_size) {
            for (int j0 = 0; j0 < SIZE_K; j0 += block_size) {
                int imax = i0 + block_size > SIZE_M ? SIZE_M : i0 + block_size;
                int kmax = k0 + block_size > SIZE_N ? SIZE_N : k0 + block_size;
                int jmax = j0 + block_size > SIZE_K ? SIZE_K : j0 + block_size;
                for (int i1 = i0; i1 < imax; i1++) {
                    for (int k1 = k0; k1 < kmax; k1++) {
                        broadcast = _mm256_broadcast_sd(A+i1*SIZE_N+k1);
                        for (int j1 = j0; j1 < jmax; j1+=8) {
                            sum[0] = _mm256_load_pd(C+i1*SIZE_K+j1+0);
                            sum[0] = _mm256_add_pd(sum[0], _mm256_mul_pd(broadcast, _mm256_load_pd(B+k1*SIZE_K+j1+0)));
                            _mm256_store_pd(C+i1*SIZE_K+j1+0, sum[0]);
                            sum[1] = _mm256_load_pd(C+i1*SIZE_K+j1+4);
                            sum[1] = _mm256_add_pd(sum[1], _mm256_mul_pd(broadcast, _mm256_load_pd(B+k1*SIZE_K+j1+4)));
                            _mm256_store_pd(C+i1*SIZE_K+j1+4, sum[1]);
                            // doesn't improve performance.. why?
                            // sum[2] = _mm256_load_pd(C+i1*SIZE_K+j1+8);
                            // sum[2] = _mm256_add_pd(sum[2], _mm256_mul_pd(broadcast, _mm256_load_pd(B+k1*SIZE_K+j1+8)));
                            // _mm256_store_pd(C+i1*SIZE_K+j1+8, sum[2]);
                            // sum[3] = _mm256_load_pd(C+i1*SIZE_K+j1+12);
                            // sum[3] = _mm256_add_pd(sum[3], _mm256_mul_pd(broadcast, _mm256_load_pd(B+k1*SIZE_K+j1+12)));
                            // _mm256_store_pd(C+i1*SIZE_K+j1+12, sum[3]);
                        }
                    }
                }
            }
        }
    }
}

This code has 2 loads per FMA (if FMA-contraction happens), but Skylake only supports at most one load per FMA in theory (if you want to max out 2/clock FMA throughput), and even that is usually too much in practice. (Peak throughput is 2 loads + 1 store per clock, but it usually can't quite sustain that.) See Intel's optimization guide and https://agner.org/optimize/
The loop overhead is not the biggest problem; the body itself forces the code to run at half speed.
If the k-loop were the inner loop, a lot of accumulation could be chained, without having to load/store to and from C. This has a downside: with a loop-carried dependency chain like that, it would be up to the code to explicitly ensure that there is enough independent work to be done.
In order to have few loads but enough independent work, the body of the inner loop could calculate the product between a small column vector from A and a small row vector from B, for example using 4 scalar broadcasts to load the column and 2 normal vector loads from B. That results in just 6 loads for 8 independent FMAs (even lower ratios are possible), which is enough independent FMAs to keep Skylake happy and not too many loads. Even a 3x4 footprint is possible, which also has enough independent FMAs to keep Haswell happy (it needs at least 10).
I happen to have some example code; it's for single precision and C++, but you'll get the point:
sumA_1 = _mm256_load_ps(&result[i * N + j]);
sumB_1 = _mm256_load_ps(&result[i * N + j + 8]);
sumA_2 = _mm256_load_ps(&result[(i + 1) * N + j]);
sumB_2 = _mm256_load_ps(&result[(i + 1) * N + j + 8]);
sumA_3 = _mm256_load_ps(&result[(i + 2) * N + j]);
sumB_3 = _mm256_load_ps(&result[(i + 2) * N + j + 8]);
sumA_4 = _mm256_load_ps(&result[(i + 3) * N + j]);
sumB_4 = _mm256_load_ps(&result[(i + 3) * N + j + 8]);
for (size_t k = kk; k < kk + akb; k++) {
    auto bc_mat1_1 = _mm256_set1_ps(*mat1ptr);
    auto vecA_mat2 = _mm256_load_ps(mat2 + m2idx);
    auto vecB_mat2 = _mm256_load_ps(mat2 + m2idx + 8);
    sumA_1 = _mm256_fmadd_ps(bc_mat1_1, vecA_mat2, sumA_1);
    sumB_1 = _mm256_fmadd_ps(bc_mat1_1, vecB_mat2, sumB_1);
    auto bc_mat1_2 = _mm256_set1_ps(mat1ptr[N]);
    sumA_2 = _mm256_fmadd_ps(bc_mat1_2, vecA_mat2, sumA_2);
    sumB_2 = _mm256_fmadd_ps(bc_mat1_2, vecB_mat2, sumB_2);
    auto bc_mat1_3 = _mm256_set1_ps(mat1ptr[N * 2]);
    sumA_3 = _mm256_fmadd_ps(bc_mat1_3, vecA_mat2, sumA_3);
    sumB_3 = _mm256_fmadd_ps(bc_mat1_3, vecB_mat2, sumB_3);
    auto bc_mat1_4 = _mm256_set1_ps(mat1ptr[N * 3]);
    sumA_4 = _mm256_fmadd_ps(bc_mat1_4, vecA_mat2, sumA_4);
    sumB_4 = _mm256_fmadd_ps(bc_mat1_4, vecB_mat2, sumB_4);
    m2idx += 16;
    mat1ptr++;
}
_mm256_store_ps(&result[i * N + j], sumA_1);
_mm256_store_ps(&result[i * N + j + 8], sumB_1);
_mm256_store_ps(&result[(i + 1) * N + j], sumA_2);
_mm256_store_ps(&result[(i + 1) * N + j + 8], sumB_2);
_mm256_store_ps(&result[(i + 2) * N + j], sumA_3);
_mm256_store_ps(&result[(i + 2) * N + j + 8], sumB_3);
_mm256_store_ps(&result[(i + 3) * N + j], sumA_4);
_mm256_store_ps(&result[(i + 3) * N + j + 8], sumB_4);
This means that the j-loop and the i-loop are unrolled, but not the k-loop, even though it is the inner loop now. Unrolling the k-loop a little did help a bit in my experiments.

See harold's answer for an actual improvement. This is mostly a repost of what I wrote in comments.
Regarding "four instead of two sums in the inner loop" (why doesn't unrolling help?):
There's no loop-carried dependency through sum[i]. The next iteration assigns sum[0] = _mm256_load_pd(C+i1*SIZE_K+j1+0); which has no dependency on the previous value.
Therefore register-renaming of the same architectural register onto different physical registers is sufficient to avoid write-after-write hazards that might stall the pipeline. No need to complicate the source with multiple tmp variables. See Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators). (In that question, a dot product of 2 arrays, there is a loop-carried dependency through an accumulator; there, using multiple accumulators is valuable to hide FP FMA latency so we bottleneck on FMA throughput, not latency.)
A pipeline without register renaming (most in-order CPUs) would benefit from "software pipelining" to statically schedule for what out-of-order exec can do on the fly: load into different registers so there's distance (filled with independent work) between each load and the FMA that consumes it. And then between that and the store.
But all modern x86 CPUs are OoO; even Knights Landing has some limited OoO exec for SIMD vectors. (Silvermont doesn't support AVX, but does run SIMD instructions in-order, only doing OoO exec for integer.)
Without any multiple-accumulator situation to hide latency, the benefits of unrolling (explicitly in the source, or with -funroll-loops as enabled by -fprofile-use, or in clang by default) are:
Reduced front-end bandwidth spent issuing the loop overhead into the back-end: more useful-work uops per unit of loop overhead. Thus it helps if your "useful work" is close to being bottlenecked on the front end.
Less back-end execution-unit demand for running the loop overhead. Normally not a problem on Haswell and later, or Zen; the back end can mostly keep up with the front-end when the instruction mix includes some integer stuff and some pure load instructions.
Fewer total uops per work done means OoO exec can "see" farther ahead for memory loads/stores.
Sometimes better branch prediction for short-running loops: the lower iteration count means a shorter pattern for branch prediction to learn, so for short trip-counts there's a better chance of correctly predicting the not-taken branch on the last iteration, when execution falls out of the loop.
Sometimes save a mov reg,reg in more complicated cases where it's easier for the compiler to generate a new result in a different reg. The same variable can alternate between living in two regs instead of needing to get moved back to the same one to be ready for the next iteration. Especially if you have a loop that uses a[i] and a[i+1] in a dependent way, or something like Fibonacci.
With 2 loads + 1 store in the loop, that will probably be the bottleneck, not FMA or front-end bandwidth. Unrolling by 2 might have helped avoid a front-end bottleneck, but more than that would only matter with contention from another hyperthread.
An interesting question came up in comments: doesn't unrolling need a lot of registers to be useful?
Harold commented:
16 is not a huge number of registers, but it's enough to have 12 accumulators and 3 pieces of row vector from B and the broadcasted scalar from A, so it works out to just about enough. The loop from OP above barely uses any registers anyway. The 8 registers in 32-bit are indeed too few.
Of course since the code in the question doesn't have "accumulators" in registers across loop iterations, only adding into memory, compilers could have optimized all of sum[0..n] to reuse the same register in asm; it's "dead" after storing. So actual register pressure is very low.
Yes, x86-64 is somewhat register-poor; that's why AVX512 doubles the number as well as the width of vector regs (zmm0..31). Yes, many RISCs have 32 int / 32 FP regs, including AArch64, up from 16 in 32-bit ARM.
x86-64 has 16 scalar integer registers (including the stack pointer, not including the program counter), so normal functions can use 15. There are also 16 vector regs, xmm0..15. (And with AVX they're double the width ymm0..15).
(Some of this was written before I noticed that sum[0..n] was pointless, not loop-carried.)
Register renaming onto a large physical register file is sufficient in this case. There are other cases where having more architectural registers helps, especially with higher FP latency, hence why AVX512 has 32 zmm regs. But for integer, 16 is close to enough. RISC CPUs were often designed for in-order execution without register renaming, needing software pipelining.
With OoO exec, the jump from 8 to 16 architectural GP integer regs is more significant than a jump from 16 to 32 would be, in terms of reducing spill/reloads. (I've seen a paper that measured total dynamic instruction count for SPECint with various numbers of architectural registers. I didn't look it up again, but 8->16 might have been 10% total saving while 16->32 was only a couple %. Something like that).
But this specific problem doesn't need a lot of FP registers, only 4 vectors for sum[0..3] (if they were loop-carried) and maybe 1 temporary; x86 can use memory-source mul/add/FMA. Register renaming removes any WAW hazards so we can reuse the same temporary register instead of needing software pipelining. (And OoO exec also hides load and ALU latency.)
You want multiple accumulators when there are loop-carried dependencies. This code is adding into memory, not into a few vector accumulators, so any dependency is through store/reload. But that only has ~7 cycle latency so any sane cache-blocking factor hides it.


Implementing matrix operation using AVX in C

I'm trying to implement the following operation using AVX:
for (i=0; i<N; i++) {
    for(j=0; j<N; j++) {
        for (k=0; k<K; k++) {
            d[i][j] += 2 * a[i][k] * ( b[k][j]- c[k]);
        }
    }
}
for (int i=0; i<N; i++){
    f+= d[ind[i]][ind[i]]/2;
}
where d is an NxN matrix, a is NxK, b is KxN, and c is a vector of length K. All of them are doubles. Of course, all the data is aligned and I am using #pragma vector aligned to help the compiler (gcc).
I know how to use AVX extensions with one-dimensional arrays, but it is a bit tricky for me to do it with matrices. Currently, I have the following, but I'm not getting correct results:
for (int i=0; i< floor (N/4); i++){
    for (int j=0; j< floor (N/4); j++){
        __m256d D, A, B, C;
        D = _mm256_setzero_pd();
        #pragma vector aligned
        for (int k=0; k<K_MAX; k++){
            A = _mm256_load_pd(a[i] + k*4);
            B = _mm256_load_pd(b[k] + j*4);
            C = _mm256_load_pd(c + 4*k);
            B = _mm256_sub_pd(B, C);
            A = _mm256_mul_pd(A, B);
            D = _mm256_add_pd(_mm256_set1_pd(2.0), A);
            _mm256_store_pd(d[i] + j*4, D);
        }
    }
}
for (int i=0; i<N; i++){
    f+= d[ind[i]][ind[i]]/2;
}
I hope someone can tell me where the mistake is.
Thanks in advance.
NOTE: I'm not willing to introduce OpenMP, just using SIMD Intel instructions
Assuming both N and K are relatively large (much larger than 4, the hardware vector width), here's one way to vectorize your main loop. Untested.
The main idea is vectorizing the middle loop instead of the inner one. This is done for two reasons.
This avoids horizontal operations. When vectorizing just the inner loop, we would have to compute the horizontal sum of a vector.
The b[k][j] load has an unfortunate RAM access pattern when loading for 4 consecutive k values: you need either 4 separate load instructions or a gather load, and both methods are relatively slow. Loading elements for 4 consecutive j values is a single full-vector load instruction, which is very efficient, especially since you align your inputs.
const int N_aligned = ( N / 4 ) * 4;
for( int i = 0; i < N; i++ )
{
    int j = 0;
    for( ; j < N_aligned; j += 4 )
    {
        // Load 4 scalars from d
        __m256d dv = _mm256_loadu_pd( &d[ i ][ j ] );
        // Run the inner loop which only loads from RAM but never stores any data
        for( int k = 0; k < K; k++ )
        {
            __m256d av = _mm256_broadcast_sd( &a[ i ][ k ] );
            __m256d bv = _mm256_loadu_pd( &b[ k ][ j ] );
            __m256d cv = _mm256_broadcast_sd( &c[ k ] );
            // dv += 2*av*( bv - cv )
            __m256d t1 = _mm256_add_pd( av, av ); // 2*av
            __m256d t2 = _mm256_sub_pd( bv, cv ); // bv - cv
            dv = _mm256_fmadd_pd( t1, t2, dv );
        }
        // Store the updated 4 values
        _mm256_storeu_pd( &d[ i ][ j ], dv );
    }
    // Handle remainder with scalar code
    for( ; j < N; j++ )
    {
        double ds = d[ i ][ j ];
        for( int k = 0; k < K; k++ )
            ds += 2 * a[ i ][ k ] * ( b[ k ][ j ] - c[ k ] );
        d[ i ][ j ] = ds;
    }
}
If you want to optimize further, try to unroll the inner loop by a small factor like 2, using 2 independent accumulators initialized with _mm256_setzero_pd(), and add them after the loop. It could be that on some processors, this version stalls on the latency of the FMA instruction instead of saturating the load ports or ALUs. Multiple independent accumulators sometimes help.
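A minimal sketch of that unroll-by-2 (untested; acc0/acc1 are my names, K is assumed even here, and dv, i, j are the same as in the loop above):
__m256d acc0 = _mm256_setzero_pd();
__m256d acc1 = _mm256_setzero_pd();
for( int k = 0; k < K; k += 2 )
{
    __m256d av0 = _mm256_broadcast_sd( &a[ i ][ k ] );
    __m256d t0 = _mm256_sub_pd( _mm256_loadu_pd( &b[ k ][ j ] ), _mm256_broadcast_sd( &c[ k ] ) );
    acc0 = _mm256_fmadd_pd( _mm256_add_pd( av0, av0 ), t0, acc0 );   // dependency chain 1
    __m256d av1 = _mm256_broadcast_sd( &a[ i ][ k + 1 ] );
    __m256d t1 = _mm256_sub_pd( _mm256_loadu_pd( &b[ k + 1 ][ j ] ), _mm256_broadcast_sd( &c[ k + 1 ] ) );
    acc1 = _mm256_fmadd_pd( _mm256_add_pd( av1, av1 ), t1, acc1 );   // dependency chain 2, independent of chain 1
}
dv = _mm256_add_pd( dv, _mm256_add_pd( acc0, acc1 ) );               // combine after the loop, then store dv as before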
b[k][j] is your problem: the elements b[k + 0..3][j] aren't contiguous in memory. Using SIMD (in a reasonable / useful way) is not something you can drop into the classic naive matmul loop. See What Every Programmer Should Know About Memory? - there's an appendix with an example of an SSE2 matmul (with cache-blocking) which shows how to do operations in a different order that's SIMD-friendly.
Soonts's answer shows how to vectorize at all, by vectorizing over j, the middle loop. But that leaves a relatively poor memory access pattern, and 3 loads + 3 ALU operations inside the loop. (This answer started out as a comment on it; see it for the code I'm talking about and proposing changes to.)
Loop inversion should be possible, making j the inner-most loop. That would mean doing stores for d[i][j] += ... inside the inner-most loop, but OTOH it exposes more loop invariants in 2 * a[i][k] * ( b[k][j]- c[k] ), so you can usefully transform it to d[i][j] += (2*a_ik) * b[k][j] - (2*a_ik*c_k), i.e. one VFMSUBPD and one VADDPD per load&store. (With the bv load folding into the FMSUB as a memory source operand, and the dv load folding into VADDPD, so hopefully only 3 uops for the front-end, including a separate store, not including loop overhead.)
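A sketch of what that inner-loop body could look like (my variable names, untested; two_aik and two_aik_ck would be broadcasts of 2*a[i][k] and 2*a[i][k]*c[k], hoisted out of the j loop):
__m256d dv = _mm256_loadu_pd( &d[ i ][ j ] );
__m256d t  = _mm256_fmsub_pd( two_aik, _mm256_loadu_pd( &b[ k ][ j ] ), two_aik_ck ); // (2a)*b - (2a*c); the b load can fold in as a memory operand
dv = _mm256_add_pd( dv, t );                                                          // VADDPD; the d load can fold into this
_mm256_storeu_pd( &d[ i ][ j ], dv );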
The compiler will have to unroll and avoid an indexed addressing mode so the store-address uop can stay micro-fused and run on port 7 on Intel CPUs (Haswell through Skylake-family), not competing with the two loads. Ice Lake doesn't have that problem, having two full independent store-AGUs separate from the two load AGUs. But probably still needs some loop unrolling to avoid a front-end bottleneck.
Here's an example, untested (original version contributed by Soonts, thanks). It optimizes down to 2 FP math ops in the loop in a different way: simply hoisting 2*a out of the loop and doing SUB then FMA for dv += (2av)*(sub_result). But bv can't be the memory source operand for vsubpd, because the memory operand is the subtrahend and we need bv - cv. We can fix that by negating cv so the inner loop computes (-cv) + bv, with bv as a memory source operand. Sometimes compilers will do things like that for you, but here it seems they didn't, so I did it manually. Otherwise we'd get a separate vmovupd load going through the front-end.
#include <stdint.h>
#include <stdlib.h>
#include <immintrin.h>
// This double [N][N] C99 VLA syntax isn't portable to C++ even with GNU extensions
// restrict tells the compiler the output doesn't overlap with any of the inputs
void matop(size_t N, size_t K, double d[restrict N][N], const double a[restrict N][K], const double b[restrict K][N], const double c[restrict K])
{
    for( size_t i = 0; i < N; i++ ) {
        // loop-invariant pointers for this outer iteration
        //double* restrict rowDi = &d[ i ][ 0 ];
        const double* restrict rowAi = &a[ i ][ 0 ];
        for( size_t k = 0; k < K; k++ ) {
            const double* restrict rowBk = &b[ k ][ 0 ];
            double* restrict rowDi = &d[ i ][ 0 ];
#if 0       // pure scalar
            // auto-vectorizes ok; still a lot of extra checking outside outermost loop even with restrict
            for (size_t j=0 ; j<N ; j++){
                rowDi[j] += 2*rowAi[k] * (rowBk[j] - c[k]);
            }
#else       // SIMD inner loop with cleanup
            // *** TODO: unroll over 2 or 3 i values
            // and maybe also 2 or 3 k values, to reuse each bv a few times while it's loaded.
            __m256d av = _mm256_broadcast_sd( rowAi + k );
            av = _mm256_add_pd( av, av ); // 2*a[ i ][ k ] broadcasted
            const __m256d cv = _mm256_broadcast_sd( &c[ k ] );
            const __m256d minus_ck = _mm256_xor_pd(cv, _mm256_set1_pd(-0.0)); // broadcasted -c[k]
            //const size_t N_aligned = ( (size_t)N / 4 ) * 4;
            size_t N_aligned = N & -4; // round down to a multiple of 4 j iterations
            const double* endBk = rowBk + N_aligned;
            //for( ; j < N_aligned; j += 4 )
            for ( ; rowBk != endBk ; rowBk += 4, rowDi += 4) { // coax GCC into using pointer-increments in the asm, instead of j+=4
                // Load the output vector to update
                __m256d dv = _mm256_loadu_pd( rowDi );
                // Update with FMA
                __m256d bv = _mm256_loadu_pd( rowBk );
                __m256d t2 = _mm256_add_pd( minus_ck, bv ); // bv - cv
                dv = _mm256_fmadd_pd( av, t2, dv );
                // Store back to the same address
                _mm256_storeu_pd( rowDi, dv );
            }
            // rowDi and rowBk point to the double after the last full vector
            // The remainder, if you can't pad your rows to a multiple of 4 and step on that padding
            for(int j=0 ; j < (N&3); j++ )
                rowDi[ j ] += _mm256_cvtsd_f64( av ) * ( rowBk[ j ] + _mm256_cvtsd_f64( minus_ck ) );
#endif
        }
    }
}
Without unrolling (https://godbolt.org/z/6WeYKbnYY), GCC11's inner loop asm looks like this, all single-uop instructions that can stay micro-fused even in the back-end on Haswell and later.
.L7: # do{
vaddpd ymm0, ymm2, YMMWORD PTR [rax] # -c[k] + rowBk[0..3]
add rax, 32 # rowBk += 4
add rdx, 32 # rowDi += 4
vfmadd213pd ymm0, ymm1, YMMWORD PTR [rdx-32] # fma(2aik, Bkj-ck, Dij)
vmovupd YMMWORD PTR [rdx-32], ymm0 # store FMA result
cmp rcx, rax
jne .L7 # }while(p != endp)
But it's 6 total uops, 3 of them loop overhead (pointer increments and fused cmp+jne), so Haswell through Skylake could only run it at 1 iteration per 1.5 clocks, bottlenecked on the 4-wide issue stage in the front-end. (Which wouldn't let OoO exec get ahead on executing the pointer increments and loop branch, to notice early and recover while the back-end was still chewing on older loads and FP math.)
So loop unrolling should be helpful, since we managed to coax GCC into avoiding indexed addressing modes. Without that, unrolling is relatively useless with AVX code on Intel Haswell/Skylake CPUs: each vaddpd ymm5, ymm4, [rax + r14] decodes as 1 micro-fused uop but un-laminates into 2 at issue into the back-end, not helping us get more work through the narrowest part of the front-end. (A lot like if we'd used a separate vmovupd load, like we got with _mm256_sub_pd(bv, cv) instead of add(bv, -cv).)
The vmovupd ymmword ptr [rbp + r14], ymm5 store stays micro-fused but can't run on port 7, limiting us to a total of 2 memory operations per clock (up to 1 of which can be a store.) So a best case of 1.5 cycles per vector.
Compiled on https://godbolt.org/z/rd3rn9zor with GCC and clang -O3 -march=skylake -funroll-loops. GCC does actually use pointer increments with loads folded into 8x vaddpd and 8x vfmadd213pd. But clang uses indexed addressing modes and doesn't unroll. (You probably don't want -funroll-loops for your whole program, so either compile this separately or manually unroll. GCC's unrolling fully peels a prologue that does 0..7 vector iterations before entering the actual SIMD loop, so it's quite aggressive.)
GCC's loop-unrolling looks useful here for large N, amortizing the pointer increments and loop overhead over multiple vectors. (GCC doesn't know how to invent multiple accumulators for FP dep chains in a dot product for example, making its unrolling useless in that case, unlike clang.)
Unfortunately clang doesn't unroll the inner loop for us, but it does use vmaskmovpd in an interesting way for the cleanup.
It's maybe good that we use a separate loop counter for cleanup, in a way that lets the compiler easily prove the trip-count for the cleanup is 0..3, so it doesn't try to auto-vectorize with YMM.
The other way to do it, using an actual j variable for the inner loop and its cleanup, more like Soonts' edit. IIRC, compilers did try to auto-vectorize the cleanup for this, wasting code size and some always-false branching.
size_t j = 0; // used for cleanup loop after
for( ; j < N_aligned; j += 4 )
{
    // Load the output vector to update
    __m256d dv = _mm256_loadu_pd( rowDi + j );
    // Update with FMA
    __m256d bv = _mm256_loadu_pd( rowBk + j );
    __m256d t2 = _mm256_sub_pd( bv, cv ); // bv - cv
    dv = _mm256_fmadd_pd( av, t2, dv );
    // Store back to the same address
    _mm256_storeu_pd( rowDi + j, dv );
}
// The remainder, if you can't pad your rows to a multiple of 4
for( ; j < N; j++ )
    rowDi[ j ] += _mm256_cvtsd_f64( av ) * ( rowBk[ j ] - _mm256_cvtsd_f64( cv ) );
This has a fairly good mix of load&store vs. FP math for modern CPUs (https://agner.org/optimize/ and https://uops.info/), especially Intel where we can do 2 loads and 1 store. I think Zen 2 or 3 can also do 2 loads + 1 store. It needs to hit in L1d cache to sustain that kind of throughput, though. (And even then, Intel's optimization manual says the max sustained L1d bandwidth on Skylake is less than the full 96 bytes/cycle that would require. More like mid-80s IIRC, so we can't quite expect one result vector per cycle, even with sufficient unrolling to avoid front-end bottlenecks.)
There's no latency bottleneck, since we move on to a new dv every iteration instead of accumulating anything across loop iterations.
The other advantage to this is that memory access to d[i][j] and b[k][j] would be sequential, with no other memory access in the inner-most loop. (The middle loop would do broadcast-loads of a[i][k] and c[k]. Those seem likely to cache-miss if the inner loop evicts too much; with some unrolling of the outer loop, one SIMD load and some shuffling could help, but probably cache-blocking would avoid a need for that.)
Looping over the same d[i] row repeatedly for different b[k] rows gives us locality for the part that we're modifying (i.e. use k as the middle loop, keeping i as the outer-most.) With k as the outer loop, we'd be looping K times over the whole d[0..N-1][0..N-1], probably needing to write + read each pass all that way out to whichever level of cache or memory could hold it.
But really you'd still want to cache-block if each row is really long, so you avoid the cache misses to bring all of b[][] in from DRAM N times. And avoid evicting the stuff you're going to broadcast-load next.
Smarter unrolling: a first step towards cache-blocking
Some of the above problems with maxing out load/store execution unit throughput, and requiring the compiler to use non-indexed addressing modes, can go away if we do more with each vector of data while it's loaded.
For example, instead of working on just one row of d[][], we could be working on 2, 3, or 4. Then every (rowBk[j] - c[k]) result can be used that many times (with a different 2aik) for a d[i+unroll][j + 0..vec] vector.
And we can also load a couple different (rowBk+K*0..unroll)[j+0..3], each with a corresponding minus_ck0, minus_ck1, etc. (Or keep an array of vectors; as long as it's small and the compiler has enough registers, the elements won't exist in memory.)
With multiple bv-cv and dv vectors in registers all at the same time, we can do significantly more FMAs per load without increasing the total amount of FP work. It takes more registers for constants, though, otherwise we could be defeating the purpose by forcing more reloads.
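A minimal sketch of the inner-loop body for 2 rows of d at once (dv0/dv1 and two_aik_0/two_aik_1 are my names; minus_ck and rowBk are as in the code above): each (b - c) vector feeds two FMAs, so the FMA-per-load ratio roughly doubles.
__m256d t = _mm256_add_pd( minus_ck, _mm256_loadu_pd( rowBk ) );   // b[k][j+0..3] - c[k], computed once
dv0 = _mm256_fmadd_pd( two_aik_0, t, dv0 );                        // row i+0: uses a broadcast of 2*a[i+0][k]
dv1 = _mm256_fmadd_pd( two_aik_1, t, dv1 );                        // row i+1: uses a broadcast of 2*a[i+1][k]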
The d[i][j] += (2*a_ik) * b[k][j] - (2*a_ik*c_k) transformation wouldn't be useful here; we want to keep bv-cv separate from i so we can reuse that result as an input for different FMAs.
The b[k][j]+(-c[k]) can still benefit from micro-fusion of a load with a vaddpd so ideally it would still use a pointer increment, but the front-end might not be a bottleneck anymore.
Don't overdo it with this; too many memory input streams can be a problem for cache conflict misses especially for some N values that might create aliasing, and also for HW prefetching tracking them all. (Although Intel's L2 streamer is said to track 1 forward and 1 backward stream per 4k page, IIRC.) Probably about 4 to 8 ish streams is ok. But if d[][] isn't missing in L1d, then it's not really an input stream from memory. You don't want your b[][] input rows to be evicting the d data, though, since you'll be looping over 2 to 4 rows of d data repeatedly.
By comparison: Soonts's loop - less frequent cleanup, but worse memory access pattern.
Soonts's current loop with 3 loads and 3 ALU operations isn't ideal, although 1 load per FMA operation is already ok if they hit in cache (most modern CPUs can do 2 each per clock, although AMD Zen can also do 2 FP adds in parallel with mul/fma). If that extra ALU operation was a bottleneck, we could pre-multiply a[][] by 2 once, taking only O(N*K) work vs. O(N^2*K) to do it on the fly. But it's probably not a bottleneck and thus not worth it.
More importantly, the memory access pattern in Soonts's current answer is looping forward 1 double at a time for broadcast loads of c[k] and a[i][k] which is good, but the bv = _mm256_loadu_pd of b[k][j + 0..3] is unfortunately striding down a column.
If you're going to unroll as Soonts suggested, don't just do two dep chains for one dv, do at least two vectors, d[i][j + 0..3] and 4..7 so you use a whole 64 bytes (full cache line) from every b[k][j] you touch. Or four vectors for a pair of cache-lines. (Intel CPUs at least use an adjacent-line prefetcher, which likes to complete a 128-byte aligned pair of cache lines, so you'd benefit from aligning the rows of b[][] by 128. Or at least by 64, and get some benefit from adjacent-line prefetching.)
If a vertical slice of b[][] fits in some level of cache (along with the row of d[i][] you're currently accumulating into), the next stride down the next group of columns can benefit from that prefetching and locality. If not, fully using the lines you touch is more important, so they don't have to get pulled in again later.
So with Soonts's vectorization strategy, for large problems where this won't fit in L1d cache, probably good to make sure b's rows are aligned by 64, even if that means padding at the end of each row. (The storage geometry doesn't have to match the actual matrix dimension; you pass N and row_stride separately. You use one for index calculations, the other for loop bounds.)
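A minimal sketch of that padding idea, assuming you control how b is allocated (row_stride, b_storage, and the aligned_alloc call are mine, not from the question; aligned_alloc is C11, from <stdlib.h>):
size_t row_stride = (N + 7) & ~(size_t)7;                       // round each row up to a multiple of 8 doubles = 64 bytes
double *b_storage = aligned_alloc( 64, K * row_stride * sizeof(double) );
// index as b_storage[k * row_stride + j]; loop bounds still use the logical width N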

loop unrolling not giving expected speedup for floating-point dot product

/* Inner product. Accumulate in temporary */
void inner4(vec_ptr u, vec_ptr v, data_t *dest)
{
    long i;
    long length = vec_length(u);
    data_t *udata = get_vec_start(u);
    data_t *vdata = get_vec_start(v);
    data_t sum = (data_t) 0;
    for (i = 0; i < length; i++) {
        sum = sum + udata[i] * vdata[i];
    }
    *dest = sum;
}
Write a version of the inner product procedure described in the above problem that uses 6 × 1a loop unrolling. For x86-64, our measurements of the unrolled version give a CPE of 1.07 for integer data but still 3.01 for both floating-point data.
My code for 6*1a version of loop unrolling
void inner4(vec_ptr u, vec_ptr v, data_t *dest){
    long i;
    long length = vec_length(u);
    data_t *udata = get_vec_start(u);
    data_t *vdata = get_vec_start(v);
    long limit = length -5;
    data_t sum = (data_t) 0;
    for(i=0; i<limit; i+=6){
        sum = sum +
            ((udata[ i ] * vdata[ i ]
            + udata[ i+1 ] * vdata[ i+1 ])
            + (udata[ i+2 ] * vdata[ i+2 ]
            + udata[ i+3 ] * vdata[ i+3 ]))
            + ((udata[ i+4 ] * vdata[ i+4 ])
            + udata[ i+5 ] * vdata[ i+5 ]);
    }
    for (; i < length; i++) {   // finish any leftover elements
        sum = sum + udata[i] * vdata[i];
    }
    *dest = sum;
}
Question: Explain why any (scalar) version of an inner product procedure running on an Intel Core i7 Haswell processor cannot achieve a CPE less than 1.00.
Any idea how to solve the problem?
Your unroll doesn't help with the FP latency bottleneck:
sum + x + y + z without -ffast-math is the same order of operations as sum += x; sum += y; ... so you haven't done anything about the single dependency chain running through all the + operations. Loop overhead (or front-end throughput) is not the bottleneck, it's the 3 cycle latency of addss on Haswell, so this unroll makes basically no difference.
What would work is sum += u[i]*v[i] + u[i+1]*v[i+1] + ... as a way to unroll without multiple accumulators, because then the sum of each group of elements is independent.
It costs slightly more math operations that way, like starting with a mul and ending with an add, but the middle ones can still contract into FMAs if you compile with -march=haswell. See comments on AVX performance slower for bitwise xor op and popcount for an example of GCC turning a naive unroll like sum += u0*v0; sum += u1*v1 into sum += u0*v0 + u1*v1;. In that case the problem was slightly different: sum of squared differences like sum += (u0-v0)**2 + (u1-v1)**2;, but it boils down to the same latency problem of ultimately doing some multiplies and adds.
The other way to solve the problem is with multiple accumulators, allowing all the operations to be FMAs. But Haswell has 5-cycle latency FMA and 3-cycle latency addss, so doing the sum += ... addition on its own, not as part of an FMA, actually helps with the latency bottleneck on Haswell (unlike on Skylake, where add/sub/mul are all 4-cycle latency). The following all show unrolling with multiple accumulators, instead of adding groups together (a first step towards pairwise summation) like you're doing; a minimal sketch in that style follows the links:
Why does mulss take only 3 cycles on Haswell, different from Agner's instruction tables? (Unrolling FP loops with multiple accumulators)
When, if ever, is loop unrolling still useful?
Loop unrolling to achieve maximum throughput with Ivy Bridge and Haswell
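For completeness, a sketch of the multiple-accumulator style, reusing the vec_ptr / data_t helpers from the question (untested; the function name and the choice of 4 accumulators are mine):
void inner4_multiacc(vec_ptr u, vec_ptr v, data_t *dest)
{
    long i;
    long length = vec_length(u);
    data_t *udata = get_vec_start(u);
    data_t *vdata = get_vec_start(v);
    long limit = length - 3;
    data_t sum0 = (data_t) 0, sum1 = (data_t) 0, sum2 = (data_t) 0, sum3 = (data_t) 0;
    for (i = 0; i < limit; i += 4) {          // 4 independent dependency chains
        sum0 = sum0 + udata[i] * vdata[i];
        sum1 = sum1 + udata[i+1] * vdata[i+1];
        sum2 = sum2 + udata[i+2] * vdata[i+2];
        sum3 = sum3 + udata[i+3] * vdata[i+3];
    }
    for (; i < length; i++)                   // leftover elements
        sum0 = sum0 + udata[i] * vdata[i];
    *dest = (sum0 + sum1) + (sum2 + sum3);    // combine only at the end
}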
FP math instruction throughput isn't the bottleneck for a big dot product on modern CPUs, only latency. Or load throughput if you unroll enough.
Explain why any (scalar) version of an inner product procedure running on an Intel Core i7 Haswell processor cannot achieve a CPE less than 1.00.
Each element takes 2 loads, and with only 2 load ports, that's a hard throughput bottleneck. (https://agner.org/optimize/ / https://www.realworldtech.com/haswell-cpu/5/)
I'm assuming you're counting an "element" as an i value, a pair of floats, one each from udata[i] and vdata[i]. The FP FMA throughput bottleneck is also 2/clock on Haswell (whether they're scalar, 128-bit, or 256-bit vectors), but dot product takes 2 loads per FMA. In theory, even Sandybridge or maybe even K8 could achieve 1 element per clock, with separate mul and add instructions, since they both support 2 loads per clock, and have a wide enough pipeline to get load / mulss / addss through the pipeline with some room to spare.

How to extract 8 integers from a 256 vector using intel intrinsics?

I'm trying to enhance the performance of my code by using 256-bit vectors (Intel intrinsics - AVX).
I have an i7 Gen.4 (Haswell architecture) processor supporting SSE1 to SSE4.2 and AVX/AVX2 extensions.
This is the code snippet that I'm trying to enhance:
/* code snippet */
kfac1 = kfac + factor; /* 7 cycles for 7 additions */
kfac2 = kfac1 + factor;
kfac3 = kfac2 + factor;
kfac4 = kfac3 + factor;
kfac5 = kfac4 + factor;
kfac6 = kfac5 + factor;
kfac7 = kfac6 + factor;
k1fac1 = k1fac + factor1; /* 7 cycles for 7 additions */
k1fac2 = k1fac1 + factor1;
k1fac3 = k1fac2 + factor1;
k1fac4 = k1fac3 + factor1;
k1fac5 = k1fac4 + factor1;
k1fac6 = k1fac5 + factor1;
k1fac7 = k1fac6 + factor1;
k2fac1 = k2fac + factor2; /* 7 cycles for 7 additions */
k2fac2 = k2fac1 + factor2;
k2fac3 = k2fac2 + factor2;
k2fac4 = k2fac3 + factor2;
k2fac5 = k2fac4 + factor2;
k2fac6 = k2fac5 + factor2;
k2fac7 = k2fac6 + factor2;
/* code snippet */
From the Intel manuals, I found this:
an integer addition ADD takes 1 cycle (latency).
an addition of a vector of 8 integers (32-bit) also takes 1 cycle.
So I've tried to make it this way:
fac = _mm256_set1_epi32 (factor )
fac1 = _mm256_set1_epi32 (factor1)
fac2 = _mm256_set1_epi32 (factor2)
v1 = _mm256_set_epi32 (0,kfac6,kfac5,kfac4,kfac3,kfac2,kfac1,kfac)
v2 = _mm256_set_epi32 (0,k1fac6,k1fac5,k1fac4,k1fac3,k1fac2,k1fac1,k1fac)
v3 = _mm256_set_epi32 (0,k2fac6,k2fac5,k2fac4,k2fac3,k2fac2,k2fac1,k2fac)
res1 = _mm256_add_epi32 (v1,fac)  ////////////////////
res2 = _mm256_add_epi32 (v2,fac1) //  just 3 cycles //
res3 = _mm256_add_epi32 (v3,fac2) ////////////////////
But the problem is that these factors are going to be used as table indexes ( table[kfac] ... ). So I have to extract the factors as separate integers again.
I wonder if there is any possible way to do it?
A smart compiler could get table+factor into a register and use indexed addressing modes to get table+factor+k1fac6 as an address. Check the asm, and if the compiler doesn't do this for you, try changing the source to hand-hold the compiler:
const int *tf = table + factor;
const int *tf2 = table + factor2; // could be lea rdx, [rax+rcx*4] or something.
...
foo = tf[kfac2];
bar = tf2[k2fac6]; // could be mov r12, [rdx + rdi*4]
But to answer the question you asked:
Latency isn't a big deal when you have that many independent adds happening. The throughput of 4 scalar add instructions per clock on Haswell is much more relevant.
If k1fac2 and so on are already in contiguous memory, then using SIMD is possibly worth it. Otherwise all the shuffling and data transfer to get them in/out of vector regs makes it definitely not worth it. (i.e. the stuff the compiler emits to implement _mm256_set_epi32 (0,kfac6,kfac5,kfac4,kfac3,kfac2,kfac1,kfac).)
You could avoid needing to get the indices back into integer registers by using an AVX2 gather for the table loads. But gather is slow on Haswell, so probably not worth it. Maybe worth it on Broadwell.
On Skylake, gather is fast so it could be good if you can SIMD whatever you do with the LUT results. If you need to extract all the gather results back to separate integer registers, it's probably not worth it.
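For reference, the gather would look something like this (AVX2 only; this assumes table is an array of 32-bit ints and res1 holds the 8 indices from above):
__m256i lut_vals = _mm256_i32gather_epi32( table, res1, 4 );  // 8 table[index] loads in one instruction; scale = sizeof(int)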
If you did need to extract 8x 32-bit integers from a __m256i into integer registers, you have three main choices of strategy:
Vector store to a tmp array and scalar loads
ALU shuffle instructions like pextrd (_mm_extract_epi32). Use _mm256_extracti128_si256 to get the high lane into a separate __m128i.
A mix of both strategies (e.g. store the high 128 to memory while using ALU stuff on the low half).
Depending on the surrounding code, any of these three could be optimal on Haswell.
pextrd r32, xmm, imm8 is 2 uops on Haswell, with one of them needing the shuffle unit on port5. That's a lot of shuffle uops, so a pure ALU strategy is only going to be good if your code is bottlenecked on L1d cache throughput. (Not the same thing as memory bandwidth). movd r32, xmm is only 1 uop, and compilers do know to use that when compiling _mm_extract_epi32(vec, 0), but you can also write int foo = _mm_cvtsi128_si32(vec) to make it explicit and remind yourself that the bottom element can be accessed more efficiently.
Store/reload has good throughput. Intel SnB-family CPUs including Haswell can run two loads per clock, and IIRC store-forwarding works from an aligned 32-byte store to any 4-byte element of it. But make sure it's an aligned store, e.g. into _Alignas(32) int tmp[8], or into a union between an __m256i and an int array. You could still store into the int array instead of the __m256i member to avoid union type-punning while still having the array aligned, but it's easiest to just use C++11 alignas or C11 _Alignas.
_Alignas(32) int tmp[8];
_mm256_store_si256((__m256i*)tmp, vec);
...
foo2 = tmp[2];
However, the problem with store/reload is latency. Even the first result won't be ready for 6 cycles after the store-data is ready.
A mixed strategy gives you the best of both worlds: ALU to extract the first 2 or 3 elements lets execution get started on whatever code uses them, hiding the store-forwarding latency of the store/reload.
_Alignas(32) int tmp[8];
_mm256_store_si256((__m256i*)tmp, vec);
__m128i lo = _mm256_castsi256_si128(vec); // This is free, no instructions
int foo0 = _mm_cvtsi128_si32(lo);
int foo1 = _mm_extract_epi32(lo, 1);
foo2 = tmp[2];
// rest of foo3..foo7 also loaded from tmp[]
// Then use foo0..foo7
You might find that it's optimal to do the first 4 elements with pextrd, in which case you only need to store/reload the upper lane. Use vextracti128 [mem], ymm, 1:
_Alignas(16) int tmp[4];
_mm_store_si128((__m128i*)tmp, _mm256_extracti128_si256(vec, 1));
// movd / pextrd for foo0..foo3
int foo4 = tmp[0];
...
With fewer larger elements (e.g. 64-bit integers), a pure ALU strategy is more attractive. 6-cycle vector-store / integer-reload latency is longer than it would take to get all of the results with ALU ops, but store/reload could still be good if there's a lot of instruction-level parallelism and you bottleneck on ALU throughput instead of latency.
With more smaller elements (8 or 16-bit), store/reload is definitely attractive. Extracting the first 2 to 4 elements with ALU instructions is still good. And maybe even vmovd r32, xmm and then picking that apart with integer shift/mask instructions is good.
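For example, the low 4 bytes of a vector of 8-bit elements could come out with one movd plus integer shifts/masks (sketch; v is a __m128i holding the low lane, and the names are mine):
unsigned bits = (unsigned)_mm_cvtsi128_si32(v);  // one movd: bytes 0..3 in an integer register
int e0 = bits & 0xFF;
int e1 = (bits >> 8) & 0xFF;
int e2 = (bits >> 16) & 0xFF;
int e3 = bits >> 24;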
Your cycle-counting for the vector version is also bogus. The three _mm256_add_epi32 operations are independent, and Haswell can run two vpaddd instructions in parallel. (Skylake can run all three in a single cycle, each with 1 cycle latency.)
Superscalar pipelined out-of-order execution means there's a big difference between latency and throughput, and keeping track of dependency chains matters a lot. See http://agner.org/optimize/, and other links in the x86 tag wiki for more optimization guides.

How to optimize these loops (with compiler optimization disabled)?

I need to optimize some for-loops for speed (for a school assignment) without using compiler optimization flags.
Given a specific Linux server (owned by the school), a satisfactory improvement is to make it run under 7 seconds, and a great improvement is to make it run under 5 seconds. This code that I have right here gets about 5.6 seconds. I am thinking I may need to use pointers with this in some way to get it to go faster, but I'm not really sure. What options do I have?
The file must remain 50 lines or less (not counting comments).
#include <stdio.h>
#include <stdlib.h>
// You are only allowed to make changes to this code as specified by the comments in it.
// The code you submit must have these two values.
#define N_TIMES 600000
#define ARRAY_SIZE 10000
int main(void)
{
    double *array = calloc(ARRAY_SIZE, sizeof(double));
    double sum = 0;
    int i;
    // You can add variables between this comment ...
    register double sum1 = 0, sum2 = 0, sum3 = 0, sum4 = 0, sum5 = 0, sum6 = 0, sum7 = 0, sum8 = 0, sum9 = 0;
    register int j;
    // ... and this one.
    printf("CS201 - Asgmt 4 - \n");
    for (i = 0; i < N_TIMES; i++)
    {
        // You can change anything between this comment ...
        for (j = 0; j < ARRAY_SIZE; j += 10)
        {
            sum += array[j];
            sum1 += array[j + 1];
            sum2 += array[j + 2];
            sum3 += array[j + 3];
            sum4 += array[j + 4];
            sum5 += array[j + 5];
            sum6 += array[j + 6];
            sum7 += array[j + 7];
            sum8 += array[j + 8];
            sum9 += array[j + 9];
        }
        // ... and this one. But your inner loop must do the same
        // number of additions as this one does.
    }
    // You can add some final code between this comment ...
    sum += sum1 + sum2 + sum3 + sum4 + sum5 + sum6 + sum7 + sum8 + sum9;
    // ... and this one.
    return 0;
}
Re-posting a modified version of my answer from optimized sum of an array of doubles in C, since that question got voted down to -5. The OP of the other question phrased it more as "what else is possible", so I took him at his word and info-dumped about vectorizing and tuning for current CPU hardware. :)
The OP of that question eventually said he wasn't allowed to use compiler options higher than -O0, which I guess is the case here, too.
Summary:
Why using -O0 distorts things (unfairly penalizes things that are fine in normal code for a normal compiler). Using -O0 (the gcc/clang default) so your loops don't optimize away is not a valid excuse or a useful way to find out what will be faster with normal optimization enabled. (See also Idiomatic way of performance evaluation? for more about benchmark methods and pitfalls, like ways to enable optimization but still stop the compiler from optimizing away the work you want to measure.)
Stuff that's wrong with the assignment.
Types of optimizations. FP latency vs. throughput, and dependency chains. Link to Agner Fog's site. (Essential reading for optimization).
Experiments getting the compiler to optimize it (after fixing it to not optimize away). Best result with auto-vectorization (no source changes): gcc: half as fast as an optimal vectorized loop. clang: same speed as a hand-vectorized loop.
Some more comments on why bigger expressions are a perf win with -O0 only.
Source changes to get good performance without -ffast-math, making the code closer to what we want the compiler to do. Also some rules-lawyering ideas that would be useless in the real-world.
Vectorizing the loop with GCC architecture-neutral vectors, to see how close the auto-vectorizing compilers came to matching the performance of ideal asm code (since I checked the compiler output).
I think the point of the assignment is to sort of teach assembly-language performance optimizations using C with no compiler optimizations. This is silly. It's mixing up things the compiler will do for you in real life with things that do require source-level changes.
See Why does clang produce inefficient asm with -O0 (for this simple floating point sum)?
-O0 doesn't just "not optimize", it makes the compiler store variables to memory after every statement instead of keeping them in registers. It does this so you get the "expected" results if you set a breakpoint with gdb and modify the value (in memory) of a C variable. Or even if you jump to another line in the same function. So each C statement has to be compiled to an independent block of asm that starts and ends with all variables in memory. For a modern portable compiler like gcc which already transforms through multiple internal representations of program flow on the way from source to asm, this part of -O0 requires explicitly de-optimizing its graph of data flow back into separate C statements. These store/reloads lengthen every loop-carried dependency chain so it's horrible for tiny loops if the loop counter is kept in memory. (e.g. 1 cycle per iteration for inc reg vs. 6c for inc [mem], creating a bottleneck on loop counter updates in tight loops).
With gcc -O0, the register keyword lets gcc keep a var in a register instead of memory, and thus can make a big difference in tight loops (Example on the Godbolt Compiler explorer). But that's only with -O0. In real code, register is meaningless: the compiler attempts to optimally use the available registers for variables and temporaries. register is already deprecated in ISO C++11 (but not C11), and there's a proposal to remove it from the language along with other obsolete stuff like trigraphs.
With extra variables involved, -O0 hurts array indexing a bit more than pointer incrementing.
Array indexing usually makes code easier to read. Compilers sometimes fail to optimize stuff like array[i*width + j*width*height], so it's a good idea to change the source to do the strength-reduction optimization of turning the multiplies into += adds.
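For example, a minimal sketch of doing that strength-reduction by hand (use_element is a placeholder for whatever the real loop body does with the element):
double *p = array + (size_t)j * width * height;   // hoisted: depends only on j
for (int i = 0; i < height; i++) {
    use_element(*p);   // stands in for whatever reads array[i*width + j*width*height]
    p += width;        // one add per iteration instead of redoing the multiply
}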
At an asm level, array indexing vs. pointer incrementing are close to the same performance. (x86 for example has addressing modes like [rsi + rdx*4] which are as fast as [rdi], except on Sandybridge and later.) It's the compiler's job to optimize your code by using pointer incrementing even when the source uses array indexing, when that's faster.
For good performance, you have to be aware of what compilers can and can't do. Some optimizations are "brittle", and a small seemingly-innocent change to the source will stop the compiler from doing an optimization that was essential for some code to run fast. (e.g. pulling a constant computation out of a loop, or proving something about how different branch conditions are related to each other, and simplifying.)
Besides all that, it's a crap sample because it doesn't have anything to stop a smart compiler from optimizing away the entire thing. It doesn't even print the sum. Even gcc -O1 (instead of -O3) threw away some of the looping.
(You can fix this by printing sum at the end. gcc and clang don't seem to realize that calloc returns zeroed memory, and optimize it away to 0.0. See my code below.)
Normally you'd put your code in a function, and call it in a loop from main() in another file. And compile them separately, without whole-program cross-file optimisation, so the compiler can't do optimisations based on the compile-time constants you call it with. The repeat-loop being wrapped so tightly around the actual loop over the array is causing havoc with gcc's optimizer (see below).
Also, the other version of this question had an uninitialized variable kicking around. It looks like long int help was introduced by the OP of that question, not the prof. So I will have to downgrade my "utter nonsense" to merely "silly", because the code doesn't even print the result at the end. That's the most common way of getting the compiler not to optimize everything away in a microbenchmark like this.
I assume your prof mentioned a few things about performance. There are a crapton of different things that could come into play here, many of which I assume didn't get mentioned in a 2nd-year CS class.
Besides multithreading with openmp, there's vectorizing with SIMD. There are also optimizations for modern pipelined CPUs: specifically, avoid having one long dependency chain.
Further essential reading:
Agner Fog's guides for optimizing C and asm for x86. Some of it applies to all CPUs.
What Every Programmer Should Know About Memory
Your compiler manual is also essential, esp. for floating point code. Floating point has limited precision, and is not associative. The final sum does depend on which order you do the additions in. Usually the difference in rounding error is small, so the compiler can get a big speedup by re-ordering things if you use -ffast-math to allow it.
Instead of just unrolling, keep multiple accumulators which you only add up at the end, like you're doing with the sum0..sum9 unroll-by-10. FP instructions have medium latency but high throughput, so you need to keep multiple FP operations in flight to keep the floating point execution units saturated.
If you need the result of the last op to be complete before the next one can start, you're limited by latency. For FP add, that's one per 3 cycles. In Intel Sandybridge, IvB, Haswell, and Broadwell, the throughput of FP add is one per cycle. So you need to keep at least 3 independent ops that can be in flight at once to saturate the machine. For Skylake, it's 2 per cycle with latency of 4 clocks. (On the plus side for Skylake, FMA is down to 4 cycle latency.)
In this case, there's also basic stuff like pulling things out of the loop, e.g. help += ARRAY_SIZE.
Compiler Options
Let's start by seeing what the compiler can do for us.
I started out with the original inner loop, with just help += ARRAY_SIZE pulled out, and adding a printf at the end so gcc doesn't optimize everything away. Let's try some compiler options and see what we can achieve with gcc 4.9.2 (on my i5 2500k Sandybridge. 3.8GHz max turbo (slight OC), 3.3GHz sustained (irrelevant for this short benchmark)):
gcc -O0 fast-loop-cs201.c -o fl: 16.43s. Performance is a total joke. Variables are stored to memory after every operation, and re-loaded before the next. This is a bottleneck, and adds a lot of latency. Not to mention losing out on actual optimisations. Timing / tuning code with -O0 is not useful.
-O1: 4.87s
-O2: 4.89s
-O3: 2.453s (uses SSE to do 2 at once. I'm of course using a 64bit system, so hardware support for -msse2 is baseline.)
-O3 -ffast-math -funroll-loops: 2.439s
-O3 -march=sandybridge -ffast-math -funroll-loops: 1.275s (uses AVX to do 4 at once.)
-Ofast ...: no gain
-O3 -ftree-parallelize-loops=4 -march=sandybridge -ffast-math -funroll-loops: 0m2.375s real, 0m8.500s user. Looks like locking overhead killed it. It only spawns the 4 threads total, but the inner loop is too short for it to be a win: it collects the sums every time, instead of giving each thread 1/4 of the outer loop iterations.
-Ofast -fprofile-generate -march=sandybridge -ffast-math, run it, then
-Ofast -fprofile-use -march=sandybridge -ffast-math: 1.275s. profile-guided optimization is a good idea when you can exercise all the relevant code-paths, so the compiler can make better unrolling / inlining decisions.
clang-3.5 -Ofast -march=native -ffast-math: 1.070s. (clang 3.5 is too old to support -march=sandybridge. You should prefer to use a compiler version that's new enough to know about the target architecture you're tuning for, esp. if using -march to make code that doesn't need to run on older architectures.)
gcc -O3 vectorizes in a hilarious way: The inner loop does 2 (or 4) iterations of the outer loop in parallel, by broadcasting one array element to all elements of an xmm (or ymm) register, and doing an addpd on that. So it sees the same values are being added repeatedly, but even -ffast-math doesn't let gcc just turn it into a multiply. Or switch the loops.
clang-3.5 vectorizes a lot better: it vectorizes the inner loop, instead of the outer, so it doesn't need to broadcast. It even uses 4 vector registers as 4 separate accumulators. It knows that calloc only returns 16-byte aligned memory (on x86-64 System V), and when tuning for Sandybridge (before Haswell) it knows that 32-byte loads have a big penalty when misaligned. And that splitting them isn't too expensive since a 32-byte load takes 2 cycles in a load port anyway.
vmovupd -0x60(%rbx,%rcx,8),%xmm4
vinsertf128 $0x1,-0x50(%rbx,%rcx,8),%ymm4,%ymm4
This is worse on later CPUs, especially when the data does happen to be aligned at run-time; see Why doesn't gcc resolve _mm256_loadu_pd as single vmovupd? about GCC versions where -mavx256-split-unaligned-load was on by default with -mtune=generic.
It's actually slower when I tell it that the array is aligned. (with a stupid hack like array = (double*)((ptrdiff_t)array & ~31); which actually generates an instruction to mask off the low 5 bits, because clang-3.5 doesn't support gcc's __builtin_assume_aligned.) In that case it uses a tight loop of 4x vaddpd mem, %ymm, %ymm. It only runs about 0.65 insns per cycle (and 0.93 uops / cycle), according to perf, so the bottleneck isn't front-end.
I checked with a debugger, and calloc is indeed returning a pointer that's an odd multiple of 16. (glibc for large allocations tends to allocate new pages, and put bookkeeping info in the initial bytes, always misaligning to any boundary wider than 16.) So half the 32B memory accesses are crossing a cache line, causing a big slowdown. It is slightly faster to do two separate 16B loads when your pointer is 16B-aligned but not 32B-aligned, on Sandybridge. (gcc enables -mavx256-split-unaligned-load and ...-store for -march=sandybridge, and also for the default tune=generic with -mavx, which is not so good especially for Haswell or with memory that's usually aligned by the compiler doesn't know about it.)
Source level changes
As we can see from clang beating gcc, multiple accumulators are excellent. The most obvious way to do this would be:
for (j = 0; j < ARRAY_SIZE; j+=4) { // unroll 4 times
    sum0 += array[j];
    sum1 += array[j+1];
    sum2 += array[j+2];
    sum3 += array[j+3];
}
and then don't collect the 4 accumulators into one until after the end of the outer loop.
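That final step is just something like the following, after the outer loop; pairing the adds also keeps the final reduction's dependency chain short:
sum = (sum0 + sum1) + (sum2 + sum3);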
Your (from the other question) source change of
sum += j[0]+j[1]+j[2]+j[3]+j[4]+j[5]+j[6]+j[7]+j[8]+j[9];
actually has a similar effect, thanks to out-of-order execution. Each group of 10 is a separate dependency chain. Order-of-operations rules say the j values get added together first, and then added to sum. So the loop-carried dependency chain is still only the latency of one FP add, and there's lots of independent work for each group of 10. Each group is a separate dependency chain of 9 adds, and takes few enough instructions for the out-of-order execution hardware to see the start of the next chain and find the parallelism to keep those medium-latency, high-throughput FP execution units fed.
With -O0, as your silly assignment apparently requires, values are stored to RAM at the end of every statement. Writing longer expressions without updating any variables, even temporaries, will make -O0 run faster, but it's not a useful optimisation. Don't waste your time on changes that only help with -O0, esp. not at the expense of readability.
Using 4 accumulator variables and not adding them together until the end of the outer loop defeats clang's auto-vectorizer. It still runs in only 1.66s (vs. 4.89 for gcc's non-vectorized -O2 with one accumulator). Even gcc -O2 without -ffast-math also gets 1.66s for this source change. Note that ARRAY_SIZE is known to be a multiple of 4, so I didn't include any cleanup code to handle the last up-to-3 elements (or to avoid reading past the end of the array, which would happen with the loop as written if ARRAY_SIZE weren't a multiple of 4). It's really easy to get something wrong and read past the end of the array when doing this.
GCC, on the other hand, does vectorize this, but it also pessimises (un-optimises) the inner loop into a single dependency chain. I think it's doing multiple iterations of the outer loop, again.
Using gcc's platform-independent vector extensions, I wrote a version which compiles into apparently-optimal code:
// compile with gcc -g -Wall -std=gnu11 -Ofast -fno-tree-vectorize -march=native fast-loop-cs201.vec.c -o fl3-vec
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>
#include <string.h>
// You are only allowed to make changes to this code as specified by the comments in it.
// The code you submit must have these two values.
#define N_TIMES 600000
#define ARRAY_SIZE 10000
int main(void)
{
    double *array = calloc(ARRAY_SIZE, sizeof(double));
    double sum = 0;
    int i;
    // You can add variables between this comment ...
    long int help = 0;
    typedef double v4df __attribute__ ((vector_size (8*4)));
    v4df sum0={0}, sum1={0}, sum2={0}, sum3={0};
    const size_t array_bytes = ARRAY_SIZE*sizeof(double);
    double *aligned_array = NULL;
    // this more-than-declaration could go in an if(i == 0) block for strict compliance with the rules
    if ( posix_memalign((void**)&aligned_array, 32, array_bytes) ) {
        exit (1);
    }
    memcpy(aligned_array, array, array_bytes); // In this one case: faster to align once and have no extra overhead for N_TIMES through the loop
    // ... and this one.
    // Please change 'your name' to your actual name.
    printf("CS201 - Asgmt 4 - I. Forgot\n");
    for (i = 0; i < N_TIMES; i++) {
        // You can change anything between this comment ...
        /*
        #if defined(__GNUC__) && (__GNUC__ * 100 + __GNUC_MINOR__) >= 407 // GCC 4.7 or later.
        array = __builtin_assume_aligned(array, 32);
        #else
        // force-align for other compilers. This loop-invariant will be done outside the loop.
        array = (double*) ((ptrdiff_t)array & ~31);
        #endif
        */
        assert ( ARRAY_SIZE / (4*4) == (ARRAY_SIZE+15) / (4*4) ); // We don't have a cleanup loop to handle where the array size isn't a multiple of 16
        // incrementing pointers can be more efficient than indexing arrays
        // esp. on recent Intel where micro-fusion only works with one-register addressing modes
        // of course, the compiler can always generate pointer-incrementing asm from array-indexing source
        const double *start = aligned_array;
        while ( (ptrdiff_t)start & 31 ) {
            // annoying loops like this are the reason people use aligned buffers
            sum += *start++; // scalar until we reach 32B alignment
            // in practice, this loop doesn't run, because we copy into an aligned buffer
            // This will also require a cleanup loop, and break our multiple-of-16 doubles assumption.
        }
        const v4df *end = (v4df *)(aligned_array+ARRAY_SIZE);
        for (const v4df *p = (v4df *)start ; p+3 < end; p+=4) {
            sum0 += p[0]; // p+=4 increments the pointer by 4 * 4 * 8 bytes
            sum1 += p[1]; // make sure you keep track of what you're incrementing
            sum2 += p[2];
            sum3 += p[3];
        }
        // the compiler might be smart enough to pull this out of the inner loop
        // in fact, gcc turns this into a 64bit movabs outside of both loops :P
        help+= ARRAY_SIZE;
        // ... and this one. But your inner loop must do the same
        // number of additions as this one does.
        /* You could argue legalese and say that
        if (i == 0) {
            for (j ...)
                sum += array[j];
            sum *= N_TIMES;
        }
        * still does as many adds in its *INNER LOOP*, but it just doesn't run it as often
        */
    }
    // You can add some final code between this comment ...
    sum0 = (sum0 + sum1) + (sum2 + sum3);
    sum += sum0[0] + sum0[1] + sum0[2] + sum0[3];
    printf("sum = %g; help=%ld\n", sum, help); // defeat the compiler.
    free (aligned_array);
    free (array); // not strictly necessary, because this is the end of main(). Leaving it out for this special case is a bad example for a CS class, though.
    // ... and this one.
    return 0;
}
The inner loop compiles to:
4007c0: c5 e5 58 19 vaddpd (%rcx),%ymm3,%ymm3
4007c4: 48 83 e9 80 sub $0xffffffffffffff80,%rcx # subtract -128, because -128 fits in imm8 instead of requiring an imm32 to encode add $128, %rcx
4007c8: c5 f5 58 49 a0 vaddpd -0x60(%rcx),%ymm1,%ymm1 # one-register addressing mode can micro-fuse
4007cd: c5 ed 58 51 c0 vaddpd -0x40(%rcx),%ymm2,%ymm2
4007d2: c5 fd 58 41 e0 vaddpd -0x20(%rcx),%ymm0,%ymm0
4007d7: 4c 39 c1 cmp %r8,%rcx # compare p with end
4007da: 75 e4 jne 4007c0 <main+0xb0>
(For more, see online compiler output at the godbolt compiler explorer. The -xc compiler option compiles as C, not C++. The inner loop is from .L3 to jne .L3. See the x86 tag wiki for x86 asm links. See also this q&a about micro-fusion not happening on SnB-family, which Agner Fog's guides don't cover).
performance:
$ perf stat -e task-clock,cycles,instructions,r1b1,r10e,stalled-cycles-frontend,stalled-cycles-backend,L1-dcache-load-misses,cache-misses ./fl3-vec
CS201 - Asgmt 4 - I. Forgot
sum = 0; help=6000000000
Performance counter stats for './fl3-vec':
1086.571078 task-clock (msec) # 1.000 CPUs utilized
4,072,679,849 cycles # 3.748 GHz
2,629,419,883 instructions # 0.65 insns per cycle
# 1.27 stalled cycles per insn
4,028,715,968 r1b1 # 3707.733 M/sec # unfused uops
2,257,875,023 r10e # 2077.982 M/sec # fused uops. lower than insns because of macro-fusion
3,328,275,626 stalled-cycles-frontend # 81.72% frontend cycles idle
1,648,011,059 stalled-cycles-backend # 40.47% backend cycles idle
751,736,741 L1-dcache-load-misses # 691.843 M/sec
18,772 cache-misses # 0.017 M/sec
1.086925466 seconds time elapsed
I still don't know why it's getting such low instructions per cycle. The inner loop is using 4 separate accumulators, and I checked with gdb that the pointers are aligned, so cache-bank conflicts shouldn't be the problem. Sandybridge's L2 cache can sustain one 32B transfer per cycle, which should keep up with the one 32B FP vector add per cycle.
32B loads from L1 take 2 cycles (it wasn't until Haswell that Intel made 32B loads a single-cycle operation). However, there are 2 load ports, so the sustained throughput is 32B per cycle (which we're not reaching).
Perhaps the loads need to be pipelined ahead of when they're used, to minimize having the ROB (re-order buffer) fill up when a load stalls? But the perf counters indicate a fairly high L1 cache hit rate, so hardware prefetch from L2 to L1 seems to be doing its job.
0.65 instructions per cycle is only about half way to saturating the vector FP adder. This is frustrating. Even IACA says the loop should run in 4 cycles per iteration. (i.e. saturate the load ports and port1 (where the FP adder lives)) :/
update: I guess L2 bandwidth was the problem after all. There aren't enough line-fill buffers to keep enough misses in flight to sustain the peak throughput every cycle. L2 sustained bandwidth is less than peak on Intel SnB / Haswell / Skylake CPUs.
See also Single Threaded Memory Bandwidth on Sandy Bridge (an Intel forum thread with much discussion about what limits throughput, and how latency * max_concurrency is one possible bottleneck). See also the "Latency Bound Platforms" part of the answer to Enhanced REP MOVSB for memcpy: limited memory concurrency is a bottleneck for loads as well as stores, but for loads, prefetch into L2 does mean you might not be limited purely by line-fill buffers for outstanding L1D misses.
Reducing ARRAY_SIZE to 1008 (multiple of 16), and increasing N_TIMES by a factor of 10, brought the runtime down to 0.5s. That's 1.68 insns per cycle. (The inner loop is 7 total instructions for 4 FP adds, thus we are finally saturating the vector FP add unit, and the load ports.) Loop tiling is a much better solution, see below.
Intel CPUs only have 32k each of L1-data and L1-instruction cache. I think your 80,000-byte array is slightly too big even for the 64kiB L1D on an AMD K10 (Istanbul) CPU, and it certainly doesn't fit in Bulldozer-family (16kiB L1D) or Ryzen (32kiB L1D).
Gcc's attempt to vectorize by broadcasting the same value into a parallel add doesn't seem so crazy. If it had managed to get this right (using multiple accumulators to hide latency), that would have allowed it to saturate the vector FP adder with only half the memory bandwidth. As-is, it was pretty much a wash, probably because of overhead in broadcasting.
Also, it's pretty silly: N_TIMES is just a make-work repeat. We don't actually want to optimize for doing the identical work multiple times, unless we want to win at silly assignments like this. A source-level way to do that would be to increment i in the part of the code we're allowed to modify:
for (...) {
    sum += a[j] + a[j] + a[j] + a[j];
}
i += 3;  // the inner loop does 4 outer-loop iterations' worth of additions
More realistically, to deal with this you could interchange your loops (loop over the array once, adding each value N_TIMES times). I think I've read that Intel's compiler will sometimes do that for you.
A more general technique is called cache blocking, or loop tiling. The idea is to work on your input data in small blocks that fit in cache. Depending on your algorithm, it can be possible to do various stages of the work on a chunk, then repeat for the next chunk, instead of having each stage loop over the whole input. As always, once you know the right name for a trick (and that it exists at all), you can google up a ton of info.
You could rules-lawyer your way into putting an interchanged loop inside an if (i == 0) block in the part of the code you're allowed to modify. It would still do the same number of additions, but in a more cache-optimal order.
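A minimal sketch of that rules-lawyered interchange, inside the part of the code you're allowed to modify (it reuses the assignment's i/array/sum names; in practice you'd still want multiple accumulators in the innermost loop to hide FP-add latency):
if (i == 0) {                        // do all the work on the first outer iteration
    for (int j = 0; j < ARRAY_SIZE; j++) {
        double v = array[j];         // each element is loaded from memory only once
        for (int t = 0; t < N_TIMES; t++)
            sum += v;                // still N_TIMES * ARRAY_SIZE additions in total
    }
}
// outer iterations with i != 0 simply do nothing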
You may be on the right track, though you'll need to measure it to be certain (my normal advice to measure, not guess seems a little superfluous here since the whole point of the assignment is to measure).
Optimising compilers will probably not see much of a difference since they're pretty clever about that sort of stuff but, since we don't know what optimisation level it will be compiling at, you may get a substantial improvement.
Using pointers in the inner loop is a simple matter of first adding a pointer variable:
register double *pj;
then changing the loop to:
for (pj = &(array[0]); pj < &(array[ARRAY_SIZE]); pj++) {
    // nine post-increments here plus the pj++ in the for-header
    // advance the pointer by 10 doubles per iteration
    sum  += *pj++;
    sum1 += *pj++;
    sum2 += *pj++;
    sum3 += *pj++;
    sum4 += *pj++;
    sum5 += *pj++;
    sum6 += *pj++;
    sum7 += *pj++;
    sum8 += *pj++;
    sum9 += *pj;
}
This keeps the number of additions within the loop the same (assuming you're counting += and ++ as addition operators, of course) but basically uses pointers rather than array indexes.
With no optimisation1 on my system, this drops it from 9.868 seconds (CPU time) to 4.84 seconds. Your mileage may vary.
1 With optimisation level -O3, both are reported as taking 0.001 seconds so, as mentioned, the optimisers are pretty clever. However, given you're seeing 5+ seconds, I'd suggest it wasn't compiled with optimisation on.
As an aside, this is a good reason why it's usually advisable to write your code in a readable manner and let the compiler take care of getting it running faster. While my meager attempts at optimisation roughly doubled the speed, using -O3 made it run some ten thousand times faster :-)
Before anything else, try to change compiler settings to produce faster code. There is general optimisation, and the compiler might do auto vectorisation.
What you would always do is try several approaches and check what is fastest. As a target, try to get to one cycle per addition or better.
Number of iterations per loop: You add up 10 sums simultaneously. It might be that your processor doesn't have enough registers for that, or it might have more than you're using. I'd measure the time for 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14... sums per loop.
Number of sums: Having more than one sum means that latency doesn't bite you, just throughput. But more than four or six might not be helpful. Try four sums, with 4, 8, 12 or 16 elements handled per loop iteration. Or six sums, with 6, 12 or 18.
Caching: You are running through an array of 80,000 bytes, which is probably bigger than your L1 cache. Split the array into 2 or 4 parts. Do an outer loop iterating over the two or four subarrays, the next loop from 0 to N_TIMES - 1, and the inner loop adding up values.
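A minimal sketch of that blocking, reusing the assignment's array/sum/N_TIMES names and assuming ARRAY_SIZE is a multiple of 4 (each ~20,000-byte quarter stays hot in L1d while it is reused N_TIMES times):
const int chunk = ARRAY_SIZE / 4;                 // ~20,000 bytes per subarray
for (int c = 0; c < ARRAY_SIZE; c += chunk) {     // outer loop over the 4 subarrays
    for (int t = 0; t < N_TIMES; t++) {           // repeat loop
        for (int j = c; j < c + chunk; j++)       // inner loop adds up one subarray
            sum += array[j];
    }
}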
And then you can try using vector operations, or multi-threading your code, or using the GPU to do the work.
And if you are forced to use no optimisation, then the "register" keyword might actually work.

Effects of Loop unrolling on memory bound data

I have been working with a piece of code which is intensively memory bound. I am trying to optimize it within a single core by manually implementing cache blocking, software prefetching, loop unrolling etc. Cache blocking gives a significant improvement in performance. However, when I introduce loop unrolling I get tremendous performance degradation.
I am compiling with Intel icc with compiler flags -O2 and -ipo in all my test cases.
My code is similar to this (3D 25-point stencil):
void stencil_baseline (double *V, double *U, int dx, int dy, int dz, double c0, double c1, double c2, double c3, double c4)
{
int i, j, k;
for (k = 4; k < dz-4; k++)
{
for (j = 4; j < dy-4; j++)
{
//x-direction
for (i = 4; i < dx-4; i++)
{
U[k*dy*dx+j*dx+i] = (c0 * (V[k*dy*dx+j*dx+i]) //center
+ c1 * (V[k*dy*dx+j*dx+(i-1)] + V[k*dy*dx+j*dx+(i+1)])
+ c2 * (V[k*dy*dx+j*dx+(i-2)] + V[k*dy*dx+j*dx+(i+2)])
+ c3 * (V[k*dy*dx+j*dx+(i-3)] + V[k*dy*dx+j*dx+(i+3)])
+ c4 * (V[k*dy*dx+j*dx+(i-4)] + V[k*dy*dx+j*dx+(i+4)]));
}
//y-direction
for (i = 4; i < dx-4; i++)
{
U[k*dy*dx+j*dx+i] += (c1 * (V[k*dy*dx+(j-1)*dx+i] + V[k*dy*dx+(j+1)*dx+i])
+ c2 * (V[k*dy*dx+(j-2)*dx+i] + V[k*dy*dx+(j+2)*dx+i])
+ c3 * (V[k*dy*dx+(j-3)*dx+i] + V[k*dy*dx+(j+3)*dx+i])
+ c4 * (V[k*dy*dx+(j-4)*dx+i] + V[k*dy*dx+(j+4)*dx+i]));
}
//z-direction
for (i = 4; i < dx-4; i++)
{
U[k*dy*dx+j*dx+i] += (c1 * (V[(k-1)*dy*dx+j*dx+i] + V[(k+1)*dy*dx+j*dx+i])
+ c2 * (V[(k-2)*dy*dx+j*dx+i] + V[(k+2)*dy*dx+j*dx+i])
+ c3 * (V[(k-3)*dy*dx+j*dx+i] + V[(k+3)*dy*dx+j*dx+i])
+ c4 * (V[(k-4)*dy*dx+j*dx+i] + V[(k+4)*dy*dx+j*dx+i]));
}
}
}
}
When I do loop unrolling on the innermost loop (dimension i) and unroll the x-, y- and z-direction passes separately by factors of 2, 4 and 8, I get performance degradation in all 9 cases, i.e. unroll by 2 in direction x, unroll by 2 in direction y, unroll by 2 in direction z, unroll by 4 in direction x, etc.
But when I perform loop unrolling on the outermost loop (dimension k) by a factor of 8 (and also by 2 and 4), I get a very good performance improvement, even better than with cache blocking.
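For concreteness, here is a rough sketch (not the exact code used) of unrolling the outermost k loop by 2 and jamming the two k planes into the same inner loops, shown for just the x-direction pass. It assumes U and V do not alias and that dz-8 is even (otherwise a cleanup iteration is needed); the y- and z-direction passes would be duplicated for k and k+1 in the same way.
for (k = 4; k < dz-4; k += 2)
{
    for (j = 4; j < dy-4; j++)
    {
        //x-direction, planes k and k+1 handled in the same pass over i
        for (i = 4; i < dx-4; i++)
        {
            U[k*dy*dx+j*dx+i] = (c0 * (V[k*dy*dx+j*dx+i])
                + c1 * (V[k*dy*dx+j*dx+(i-1)] + V[k*dy*dx+j*dx+(i+1)])
                + c2 * (V[k*dy*dx+j*dx+(i-2)] + V[k*dy*dx+j*dx+(i+2)])
                + c3 * (V[k*dy*dx+j*dx+(i-3)] + V[k*dy*dx+j*dx+(i+3)])
                + c4 * (V[k*dy*dx+j*dx+(i-4)] + V[k*dy*dx+j*dx+(i+4)]));
            U[(k+1)*dy*dx+j*dx+i] = (c0 * (V[(k+1)*dy*dx+j*dx+i])
                + c1 * (V[(k+1)*dy*dx+j*dx+(i-1)] + V[(k+1)*dy*dx+j*dx+(i+1)])
                + c2 * (V[(k+1)*dy*dx+j*dx+(i-2)] + V[(k+1)*dy*dx+j*dx+(i+2)])
                + c3 * (V[(k+1)*dy*dx+j*dx+(i-3)] + V[(k+1)*dy*dx+j*dx+(i+3)])
                + c4 * (V[(k+1)*dy*dx+j*dx+(i-4)] + V[(k+1)*dy*dx+j*dx+(i+4)]));
        }
        //y- and z-direction passes: same duplication for k and k+1
    }
}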
I even tried profiling my code with Intel VTune. It seemed like the bottlenecks were mainly due to 1. LLC misses and 2. LLC load misses serviced by remote DRAM.
I am unable to understand why unrolling the innermost, fastest loop gives a performance degradation, whereas unrolling the outermost, slowest dimension gives a performance improvement. However, the improvement in the latter case only appears when I use -O2 and -ipo when compiling with icc.
I am not sure how to interpret these statistics. Can someone help shed some light on this?
This strongly suggests that you are causing instruction cache misses by the unrolling, which is typical. In the age of modern hardware, unrolling no longer automatically means faster code. If each inner loop fits in a cache line, you will get better performance.
You may be able to unroll manually, to limit the size of the generated code, but this will require examining the generated machine-language instructions -- and their position -- to ensure that your loop is within a single cache line. Cache lines are typically 64 bytes long, and aligned on 64-byte boundaries.
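For example, one simple way to examine the generated instructions and eyeball how big the unrolled inner loop actually is (a hedged example; './stencil' is a placeholder binary name, and you would search the disassembly for your inner-loop label):
$ objdump -d ./stencil | less    # search for stencil_baseline and check the inner loop's size and alignment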
Outer loops do not have the same effect. They will likely be outside of the instruction cache regardless of the unroll level. Unrolling these results in fewer branches, which is why you get better performance.
"Load misses serviced by remote DRAM" means that you allocated memory on one NUMA node, but now you are running on the other. Setting process or thread affinity based on NUMA is the answer.
Remote DRAM takes almost twice as long to read as local DRAM on the Intel machines that I have used.
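For example, on Linux the simplest fix is often to pin both the threads and their memory to one node with numactl (node 0 here is just an example; './stencil' is a placeholder binary name):
$ numactl --cpunodebind=0 --membind=0 ./stencil
With first-touch allocation and this pinning, memory ends up in the DRAM attached to the node the code runs on, so the "serviced by remote DRAM" counts should drop.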

Resources