Optimizing for-loops over arrays in C99 with different indexing

I want to speed up an array multiplication in C99.
These are the original for loops:
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < m; j++) {
            total[j] += w[j][i] * x[i];
        }
    }
My boss asked me to try this, but it did not improve the speed:
    for (int i = 0; i < n; i++) {
        float value = x[i];
        for (int j = 0; j < m; j++) {
            total[j] += w[j][i] * value;
        }
    }
Do you have any other ideas (besides OpenMP, which I already use) for speeding up these for-loops?
I am using:
gcc -DMNIST=1 -O3 -fno-strict-aliasing -std=c99 -lm -D_GNU_SOURCE -Wall -pedantic -fopenmp
Thanks!

One theory is that testing against zero is faster than testing j < m, so by looping from j = m down while j > 0 you could in theory save some nanoseconds per loop. In my recent experience, however, this has not made any difference at all, so I don't think it holds for current CPUs.
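For reference, the counted-down variant looks like this (a sketch of the idea, not a recommendation):
    for (int i = n; i-- > 0; ) {       /* test against zero instead of i < n */
        const float value = x[i];
        for (int j = m; j-- > 0; )
            total[j] += w[j][i] * value;
    }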
Another issue is memory layout: if your inner loop accesses a chunk of memory that is contiguous rather than spread out, chances are you will get more benefit from the lowest-level cache in your CPU.
In your current example, switching the layout of w from w[j][i] to w[i][j] may therefore help. Aligning your values on 4- or 8-byte boundaries will help as well (but you will find that this is already the case for your arrays).
Another technique is loop unrolling, meaning that you do your inner loop in chunks of, say, 4, so that the check for whether the loop is done runs four times less often. The optimal chunk size must be determined empirically, and may also depend on the problem at hand (e.g. if you know you're looping a multiple of 5 times, use 5).
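A sketch of unrolling the inner loop by 4, with a scalar remainder loop for when m is not a multiple of 4:
    for (int i = 0; i < n; i++) {
        const float value = x[i];
        int j = 0;
        for (; j + 4 <= m; j += 4) {     /* main body: 4 updates per loop test */
            total[j]   += w[j][i]   * value;
            total[j+1] += w[j+1][i] * value;
            total[j+2] += w[j+2][i] * value;
            total[j+3] += w[j+3][i] * value;
        }
        for (; j < m; j++)               /* remainder when m % 4 != 0 */
            total[j] += w[j][i] * value;
    }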

Right now, each two consecutive internal operations (i.e. total[j]+= w[j][i] * x[i]) write to different locations and read from distant locations. You can possibly gain some performance by localizing reads and writes (thus, hitting more the internal cache) - for example, by switching the j loop and the i loop, so that the j loop is the external and the i loop is the internal.
This way you'll be localizing both the reads and the writes:
Memory writes will be to the same place for all values of i.
Memory reads will be sequential for w[j][i] and x[i].
To sum up:
    for (int j = 0; j < m; j++) {
        for (int i = 0; i < n; i++) {
            total[j] += w[j][i] * x[i];
        }
    }

If this really matters:
Link against a tuned CBLAS library. There are many to choose from, some free and some commercial, and some platforms already ship one with the system.
Replace your code with a call to cblas_sgemv (the single-precision matrix-vector multiply; cblas_dgemv is the double-precision variant).
This is an extraordinarily well understood problem, and many smart people have written highly-tuned libraries for it. Use one of them.
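A sketch of what that call might look like, assuming the weights live in one contiguous row-major m-by-n buffer of floats (here called w_data, a hypothetical name; the question's w is an array of row pointers, which BLAS cannot take directly):
    #include <cblas.h>

    /* Computes total = 1.0f * w_data * x + 1.0f * total */
    cblas_sgemv(CblasRowMajor, CblasNoTrans,
                m, n,          /* matrix dimensions */
                1.0f,          /* alpha */
                w_data, n,     /* matrix and its leading dimension */
                x, 1,          /* input vector and stride */
                1.0f,          /* beta: accumulate into total */
                total, 1);     /* output vector and stride */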

If you know that x, total and w do not alias each other you can get a fairly measurable boost by rearranging the loop indices and avoiding the write to total[j] each time through the loop:
    for (int j = 0; j < m; j++) {
        const float * const w_j = w[j];
        float total_j = 0;
        for (int i = 0; i < n; i++)
            total_j += w_j[i] * x[i];
        total[j] += total_j;
    }
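In C99 you can state that no-aliasing guarantee to the compiler explicitly with restrict. A sketch, wrapped in a hypothetical function so the qualifiers have something to attach to:
    /* mat_vec is a made-up name; w is an array of m row pointers as in the question. */
    void mat_vec(int m, int n, float *restrict total,
                 const float *const *w, const float *restrict x)
    {
        for (int j = 0; j < m; j++) {
            const float *const w_j = w[j];
            float total_j = 0;
            for (int i = 0; i < n; i++)
                total_j += w_j[i] * x[i];
            total[j] += total_j;
        }
    }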
However, BLAS is the right answer for this sort of thing, most of the time. The best solution will depend on n, m, prefetch times, pipeline depths, loop unrolling, the size of your cache lines, etc. You probably don't want to do the level of optimization that other people have done under the covers.

Related

Would it be faster to use a for loop or list out the operations?

I'm working on code to do matrix operations for a satellite my school is making. Would it be faster and less resource-intensive to use a for loop or to just write out the operations? All matrices are of a known size.
    for (i = 0; i < 3; i++)         // Row
    {
        for (j = 0; j < 3; j++)     // Column
        {
            result[i][j] = a[i][j] * b;
        }
    }
or
    result[1][1] = a[1][1] * b;
    result[1][2] = a[1][2] * b;
etc...
You are talking about loop unrolling. You're right, it is a common technique for decreasing a program's computing time. However, as has been said in the comments, it is not certain you will save time, because it depends on many factors (compiler, compiler optimisation level, etc.). It is also possible that the compiler will unroll loops itself if you choose a high optimisation level.
Don't forget that it costs more code size, which is also a valuable resource.
Keep in mind there are other ways to optimize code. For example, here you multiply all elements of an array by the same variable. Perhaps you can do this multiplication later, at the point where you access the result array again? That would save a traversal of the array, with all the memory accesses it implies. A sketch of that idea follows.
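A minimal sketch of the deferral, with made-up names: carry the scalar factor alongside the matrix and apply it where the values are next read.
    typedef struct {
        double m[3][3];  /* unscaled elements */
        double scale;    /* pending scalar factor */
    } ScaledMat3;

    /* One multiply per access, and no up-front pass over the array. */
    static inline double mat_get(const ScaledMat3 *s, int i, int j) {
        return s->m[i][j] * s->scale;
    }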

Segmentation fault when trying to use intrinsics specifically _mm256_storeu_pd()

I seem to have fixed it myself by casting the cij2 pointer inside the _mm256 call,
so: _mm256_storeu_pd((double *)cij2, vecC);
I have no idea why this changed anything...
I'm writing some code and trying to take advantage of Intel's intrinsics for manual vectorization. But whenever I run the code I get a segmentation fault when trying to use my double *cij2.
    if (q == 0)
    {
        __m256d vecA;
        __m256d vecB;
        __m256d vecC;
        for (int i = 0; i < M; ++i)
            for (int j = 0; j < N; ++j)
            {
                double cij = C[i+j*lda];
                double *cij2 = (double *)malloc(4*sizeof(double));
                for (int k = 0; k < K; k += 4)
                {
                    vecA = _mm256_load_pd(&A[i+k*lda]);
                    vecB = _mm256_load_pd(&B[k+j*lda]);
                    vecC = _mm256_mul_pd(vecA, vecB);
                    _mm256_storeu_pd(cij2, vecC);
                    for (int x = 0; x < 4; x++)
                    {
                        cij += cij2[x];
                    }
                }
                C[i+j*lda] = cij;
            }
    }
I've pinpointed the problem to the cij2 pointer. If i comment out the 2 lines that include that pointer the code runs fine, it doesn't work like it should but it'll actually run.
My question is: why would I get a segmentation fault here? I know I've allocated the memory correctly, and that the memory holds a 256-bit vector of doubles, each 64 bits in size.
After reading the comments I've come to add some clarification.
The first thing I did was change the _mm_malloc to a normal allocation using malloc. It shouldn't matter either way, but theoretically gives me some more breathing room.
Second, the problem isn't coming from a null return on the allocation. I added a couple of loops to step through the array and make sure I could modify the memory without it crashing, so I'm relatively sure that isn't the problem. The problem seems to stem from the loading of the data from vecC to the array.
Lastly, I cannot use BLAS calls. This is for a parallelism class. I know it would be much simpler to call on something way smarter than I am, but unfortunately I'll get a 0 if I try that.
You dynamically allocate double *cij2 = (double *)malloc(4*sizeof(double)); but you never free it. This is just silly. Use double cij2[4], especially if you're not going to bother to align it. You never need more than one scratch buffer at once, and it's a small fixed size, so just use automatic storage.
In C11 (via <stdalign.h>) or C++11, you'd use alignas(32) double cij2[4] so you could use _mm256_store_pd instead of storeu (or just to make sure storeu isn't slowed down by an unaligned address).
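A minimal C11 sketch of that:
    #include <stdalign.h>

    alignas(32) double cij2[4];  /* 32-byte aligned automatic array, so
                                    _mm256_store_pd(cij2, vecC) is safe */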
If you actually want to debug your original, use a debugger to catch it when it segfaults, and look at the pointer value. Make sure it's something sensible.
Your methods for testing that the memory was valid (like looping over it, or commenting stuff out) sound like they could lead to a lot of your loop being optimized away, so the problem wouldn't happen.
When your program crashes, you can also look at the asm instructions. Vector intrinsics map fairly directly to x86 asm (except when the compiler sees a more efficient way).
Your implementation would suck a lot less if you pulled the horizontal sum out of the loop over k. Instead of storing each multiply result and horizontally adding it, use a vector add into a vector accumulator. hsum it outside the loop over k.
    __m256d cij_vec = _mm256_setzero_pd();
    for (int k = 0; k < K; k += 4) {
        vecA = _mm256_load_pd(&A[i+k*lda]);
        vecB = _mm256_load_pd(&B[k+j*lda]);
        vecC = _mm256_mul_pd(vecA, vecB);
        cij_vec = _mm256_add_pd(cij_vec, vecC); // TODO: use multiple accumulators to keep multiple VADDPD or VFMA...PD instructions in flight.
    }
    C[i+j*lda] = hsum256_pd(cij_vec); // put the horizontal sum in an inline function
For good hsum256_pd implementations (other than storing to memory and using a scalar loop), see Fastest way to do horizontal float vector sum on x86 (I included an AVX version there. It should be easy to adapt the pattern of shuffling to 256b double-precision.) This will help your code a lot, since you still have O(N^2) horizontal sums (but not O(N^3) with this change).
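One possible hsum256_pd along those lines, a sketch (the linked answer has tuned variants):
    #include <immintrin.h>

    static inline double hsum256_pd(__m256d v) {
        __m128d lo = _mm256_castpd256_pd128(v);    /* low two doubles  */
        __m128d hi = _mm256_extractf128_pd(v, 1);  /* high two doubles */
        lo = _mm_add_pd(lo, hi);                   /* (v0+v2, v1+v3)   */
        __m128d shuf = _mm_unpackhi_pd(lo, lo);    /* move high half down */
        return _mm_cvtsd_f64(_mm_add_sd(lo, shuf));
    }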
Ideally you could accumulate results for 4 i values in parallel, and not need horizontal sums.
VADDPD has a latency of 3 to 4 clocks, and a throughput of one per 1 to 0.5 clocks, so you need from 3 to 8 vector accumulators to saturate the execution units. Or with FMA, up to 10 vector accumulators (e.g. on Haswell where FMA...PD has 5c latency and one per 0.5c throughput). See Agner Fog's instruction tables and optimization guides to learn more about that. Also the x86 tag wiki.
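A sketch of the multiple-accumulator idea with just two chains (real code would want more, and FMA where available); it reuses the poster's indexing and assumes K is a multiple of 8:
    __m256d acc0 = _mm256_setzero_pd();
    __m256d acc1 = _mm256_setzero_pd();
    for (int k = 0; k < K; k += 8) {   /* two independent dependency chains */
        acc0 = _mm256_add_pd(acc0, _mm256_mul_pd(_mm256_load_pd(&A[i+k*lda]),
                                                 _mm256_load_pd(&B[k+j*lda])));
        acc1 = _mm256_add_pd(acc1, _mm256_mul_pd(_mm256_load_pd(&A[i+(k+4)*lda]),
                                                 _mm256_load_pd(&B[(k+4)+j*lda])));
    }
    C[i+j*lda] = hsum256_pd(_mm256_add_pd(acc0, acc1));  /* combine, then one hsum */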
Also, ideally nest your loops in a way that gives you contiguous access to two of your three arrays, since cache access patterns are critical for matmul (lots of data reuse), even if you don't get fancy and transpose small blocks at a time that fit in cache. Even transposing one of your input matrices can be a win, since that costs O(N^2) and speeds up the O(N^3) process. I see your inner loop currently has a stride of lda while accessing A[].
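For example, a one-time O(N^2) transpose of A (a sketch with a hypothetical scratch buffer At, using the question's lda layout) turns that strided access into stride-1 reads:
    /* At[k + i*lda] = A[i + k*lda]: pay O(N^2) once so the k loop reads At contiguously. */
    for (int i = 0; i < M; ++i)
        for (int k = 0; k < K; ++k)
            At[k + i*lda] = A[i + k*lda];
    /* afterwards: vecA = _mm256_loadu_pd(&At[k + i*lda]);
       (loadu unless you can guarantee At is 32-byte aligned) */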

Optimization / using pointers to access arrays

I have an exercise about optimization. I need to optimize a program which rotates an image by 45 degrees. I know accessing arrays using pointers is more efficient, so I tried the changes below. The original code:
    RGB* nrgb = (RGB *)malloc(imgSizeXY*3); // 3 == sizeof(RGB)
    //...
    for (i = imgSizeY-1; i >= 0; --i)
    {
        for (j = imgSizeX-1; j >= 0; --j)
        {
            //...
            int y = (i*imgSizeX + j);
            nrgb[y].r = *imgInd; // *imgInd computed earlier
The changes:
    RGB* nrgb = (RGB *)malloc(imgSizeXY*3); // 3 == sizeof(RGB)
    RGB* rgbInd = nrgb + imgSizeXY - 1;
    for (i = imgSizeY-1; i >= 0; --i)
    {
        for (j = imgSizeX-1; j >= 0; --j)
        {
            rgbInd->r = *imgInd;
            --rgbInd;
but when using pointers, the program produces erroneous output. I have been staring at it for hours and still have no idea why. Any ideas? Thank you very much!
There is no difference between accessing array elements through a pointer and accessing them by index; you can see that by inspecting the generated assembler code. Index notation is also simpler to read.
An L1 cache hit is an order of magnitude faster than an L2 cache hit, which itself is an order of magnitude faster than a main memory access. See Numbers Every Computer Scientist Should Know. For image operations, you expect that you're going to have to do a lot of memory reads and writes, so you should expect to primarily be concerned with cache efficiency when optimising your code.
So concentrate on finding ways to use the caches more effectively, and don't worry too much that your compiler isn't optimising simple pointer arithmetic optimally.
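One common way to do that is cache blocking (tiling); a sketch with a hypothetical 64x64 tile size and a made-up per-pixel helper, using the question's image dimensions:
    enum { TILE = 64 };
    for (int ti = 0; ti < imgSizeY; ti += TILE)
        for (int tj = 0; tj < imgSizeX; tj += TILE)
            /* work on one tile at a time so reads and writes stay cache-resident */
            for (int i = ti; i < ti + TILE && i < imgSizeY; i++)
                for (int j = tj; j < tj + TILE && j < imgSizeX; j++)
                    process_pixel(i, j);  /* hypothetical per-pixel work */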

Why is it worse to initialize a two dimensional array like this?

    for (int i = 0; i < 100; i++)
        for (int j = 0; j < 100; j++)
            array[j][i] = 0;
            // array[i][j] = 0;
My professor said it was much more costly to initialize a two dimensional array in the first way as opposed to the second. Can someone explain what is going on underneath the hood which makes that the case? Or, do the two means of initialization have equal performance?
As @dlev mentioned, this is due to locality of reference and has to do with how the physical hardware in the computer works.
Inside the computer, there are many different types of memory. Typically, only certain memory locations (registers) can have actual operations performed on them; the rest of the time, if you're performing operations on data, you have to load it from memory into a register, perform some computation, then write it back.
Main memory (RAM) is much, much slower than registers, often by a factor of hundreds to thousands. Consequently, reading from memory should be avoided if at all possible. To address this, most computers typically have special regions of memory called caches. The job of the cache is to hold data that has recently been accessed from memory such that if that same memory region is accessed again, the value can be pulled from the cache (fast) rather than from main memory (slow). Typically, caches are designed so that if a value is read in from memory, that value, plus a whole bunch of adjacent values, are pulled into the cache. That way, if you iterate over an array, then after reading the first value, the rest of the values from the array will be sitting in the cache and can be accessed more efficiently.
The reason that your code is slower than it needs to be is that it doesn't access the array elements sequentially. In C, 2D arrays are laid out in row-major order, meaning that the memory is arranged as
    A[0][0] A[0][1] A[0][2] ... A[1][0] A[1][1] A[1][2] ... A[2][0] A[2][1] A[2][2] ...
Consequently, if you use this for loop:
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            // Do something with A[i][j]
        }
    }
Then you get excellent locality, because you will be accessing array elements in the order in which they appear in memory. This makes the number of reads of main memory very small, since everything is typically in cache and ready to go.
However, if you interchange the loops, as you've done, your accesses jump around in memory and are not necessarily consecutive. This means that you will have a lot of cache misses in which the memory address you read next isn't in the cache. This increases the number of cache loads, which can dramatically slow down the program.
Compilers are starting to get smart enough to interchange loops like this automatically, but we're still a ways away from being able to ignore these details. As a general rule, when writing C or C++ code for multidimensional arrays, try to iterate in row-major order rather than column-major order. You can get noticeable speedups in your program.
Hope this helps!
I'll probably get downvoted for this, but if you are programming C, then the "best" is most likely:
    memset(array, 0, sizeof(array));
Then you can defer all responsibility of optimizing (which you are obviously worried about) to the implementation of memset. Any specific hardware advantages can be done there.
http://en.wikipedia.org/wiki/Sizeof#Using_sizeof_with_arrays
http://www.cplusplus.com/reference/clibrary/cstring/memset/
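One caveat with that pattern, shown as a sketch: sizeof(array) only gives the full byte count when array is a true array in scope, not a pointer.
    #include <string.h>

    void zero_matrix(void) {
        int grid[100][100];
        memset(grid, 0, sizeof(grid));  /* OK: sizeof(grid) == 100*100*sizeof(int) */
    }

    void zero_via_pointer(int (*p)[100]) {
        memset(p, 0, sizeof(p));        /* BUG: sizeof(p) is only the pointer size */
    }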
Another observation: if you are init'ing to zero, ask yourself why. If your array is static (which, at this size, it probably is?), then the C startup code will initialize it to zero for you. Again, this will probably use the most efficient method for your hardware.
I'm a bit late to the party, and there is an excellent answer already. However, I thought I could contribute by demonstrating how one could answer this question experimentally using a profiling tool (on Linux).
I'll use the perf tool in the Ubuntu 10.10 package linux-tools-common.
Here's the little C program I wrote to answer this question:
    // test.c
    #define DIM 1024

    int main()
    {
        int v[DIM][DIM];
        unsigned i, j;
        for (i = 0; i < DIM; i++) {
            for (j = 0; j < DIM; j++) {
    #ifdef ROW_MAJOR_ORDER
                v[i][j] = 0;
    #else
                v[j][i] = 0;
    #endif
            }
        }
        return 0;
    }
Then compile the two different versions:
    $ gcc test.c -O0 -DROW_MAJOR_ORDER -o row-maj
    $ gcc test.c -O0 -o row-min
Note I've disabled optimization with -O0 so gcc has no chance to rearrange our loop to be more efficient.
We can list the performance statistics available with perf by running perf list. In this case, we are interested in cache misses, which is the event cache-misses.
Now it's as simple as running each version of the program numerous times and taking an average:
    $ perf stat -e cache-misses -r 100 ./row-min

    Performance counter stats for './row-min' (100 runs):

        286468 cache-misses                ( +- 0.810% )
        0.016588860 seconds time elapsed   ( +- 0.926% )

    $ perf stat -e cache-misses -r 100 ./row-maj

    Performance counter stats for './row-maj' (100 runs):

        9594 cache-misses                  ( +- 1.203% )
        0.006791615 seconds time elapsed   ( +- 0.840% )
And now we've experimentally verified that the "row-minor" version does in fact incur far more cache misses, roughly 30 times as many in this run.
If you look at the memory locations accessed by each technique, the second will access consecutive bytes, while the first will hop around in leaps of 100 elements (400 bytes for 4-byte ints). The memory cache will work much more efficiently if you do it the second way.

Most efficient way to calculate the exponential of each element of a matrix

I'm migrating from Matlab to C + GSL and I would like to know what's the most efficient way to calculate the matrix B for which:
    B[i][j] = exp(A[i][j])
where i in [0, Ny] and j in [0, Nx].
Notice that this is different from matrix exponential:
    B = exp(A)
which can be accomplished with some unstable/unsupported code in GSL (linalg.h).
I've just found the brute force solution (couple of 'for' loops), but is there any smarter way to do it?
EDIT
Results from Drew Hall's solution post.
All results are from a 1024×1024 nested for loop in which each iteration assigns two double values (a complex number). Times are averaged over 100 executions.
Results when taking into account the {Row,Column}-Major mode to store the matrix:
226.56 ms when looping over the row in the inner loop in Row-Major mode (case 1).
223.22 ms when looping over the column in the inner loop in Row-Major mode (case 2).
224.60 ms when using the gsl_matrix_complex_set function provided by GSL (case 3).
Source code for case 1:
    for (i = 0; i < Nx; i++)
    {
        for (j = 0; j < Ny; j++)
        {
            /* Operations to obtain c_value (including exponentiation) */
            matrix[2*(i*s_tda + j)] = GSL_REAL(c_value);
            matrix[2*(i*s_tda + j)+1] = GSL_IMAG(c_value);
        }
    }
Source code for case 2:
    for (i = 0; i < Nx; i++)
    {
        for (j = 0; j < Ny; j++)
        {
            /* Operations to obtain c_value (including exponentiation) */
            matrix->data[2*(j*s_tda + i)] = GSL_REAL(c_value);
            matrix->data[2*(j*s_tda + i)+1] = GSL_IMAG(c_value);
        }
    }
Source code for case 3:
    for (i = 0; i < Nx; i++)
    {
        for (j = 0; j < Ny; j++)
        {
            /* Operations to obtain c_value (including exponentiation) */
            gsl_matrix_complex_set(matrix, i, j, c_value);
        }
    }
There's no way to avoid iterating over all the elements and calling exp() or equivalent on each one. But there are faster and slower ways to iterate.
In particular, your goal should be to minimize cache misses. Find out whether your data is stored in row-major or column-major order, and be sure to arrange your loops so that the inner loop iterates over elements stored contiguously in memory and the outer loop takes the big stride to the next row (if row-major) or column (if column-major). Although this seems trivial, it can make a HUGE difference in performance (depending on the size of your matrix).
Once you've handled the cache, your next goal is to remove loop overhead. The first step (if your matrix API supports it) is to go from nested loops (M and N bounds) to a single loop iterating over the underlying data (an M*N bound). You'll need a raw pointer to the underlying memory block (that is, a double* rather than a double**) to do this.
Finally, throw in some loop unrolling (that is, do 8 or 16 elements for each iteration of the loop) to further reduce the loop overhead, and that's probably about as quick as you can make it. You'll probably need a final switch statement with fall-through to clean up the remainder elements (for when your array size % block size != 0).
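A sketch of that flattened, unrolled loop with made-up names, and a plain remainder loop standing in for the switch; it assumes a raw contiguous buffer of count doubles:
    #include <math.h>
    #include <stddef.h>

    void exp_inplace(double *data, size_t count)   /* hypothetical helper */
    {
        size_t i = 0;
        for (; i + 4 <= count; i += 4) {   /* unrolled by 4 to cut loop overhead */
            data[i]   = exp(data[i]);
            data[i+1] = exp(data[i+1]);
            data[i+2] = exp(data[i+2]);
            data[i+3] = exp(data[i+3]);
        }
        for (; i < count; ++i)             /* remainder when count % 4 != 0 */
            data[i] = exp(data[i]);
    }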
No, unless there's some strange mathematical quirk I haven't heard of, you pretty much just have to loop through the elements with two for loops.
If you just want to apply exp to an array of numbers, there's really no shortcut. You gotta call it (Nx * Ny) times. If some of the matrix elements are simple, like 0, or there are repeated elements, some memoization could help.
However, if what you really want is a matrix exponential (which is very useful), the algorithm we rely on is DGPADM. It's in Fortran, but you can use f2c to convert it to C. Here's the paper on it.
Since the contents of the loop haven't been shown (the bit that calculates c_value), we don't know whether the performance of the code is limited by memory bandwidth or by the CPU. The only way to know for sure is to use a profiler, and a sophisticated one at that: it needs to be able to measure memory latency, i.e. the amount of time the CPU spends idle waiting for data to arrive from RAM.
If you are limited by memory bandwidth, there's not a lot you can do once you're accessing memory sequentially. The CPU and memory work best when data is fetched sequentially; random accesses hurt throughput because data is more likely to have to be fetched into cache from RAM. You could always try getting faster RAM.
If you're limited by CPU then there are a few more options available to you. Using SIMD is one, as is hand-coding the floating point code (C/C++ compilers aren't great at FPU code for many reasons). If this were me, and the code in the inner loop allowed for it, I'd have two pointers into the array, one at the start and a second 4/5ths of the way through it. Each iteration, a SIMD operation would be performed using the first pointer and scalar FPU operations using the second pointer, so that each iteration of the loop processes five values. Then I'd interleave the SIMD instructions with the FPU instructions to mitigate latency costs. This shouldn't affect your caches, since (at least on the Pentium) the hardware prefetcher can track up to four data streams simultaneously (i.e. prefetch data for you without any prompting or special instructions).
