How does OpenMP COLLAPSE work internally? - c

I am trying out OpenMP parallelism to multiply two matrices using 2 threads.
I understand how the outer-loop parallelism works (i.e. without the "collapse(2)").
Now, using collapse:
#pragma omp parallel for collapse(2) num_threads(2)
for( i = 0; i < m; i++)
for( j = 0; j < n; j++)
{
s = 0;
for( k = 0; k < p; k++)
s += A[i][k] * B[k][j];
C[i][j] = s;
}
From what I gather, collapse "collapses" the loops into a single big loop, and then uses threads in the big loop. So, for the previous code, I think it would be equivalent to something like this:
#pragma omp parallel for num_threads(2)
for (ij = 0; ij <n*m; ij++)
{
i = ij/n;
j = ij%n;
s = 0;
for( k = 0; k < p; k++)
s += A[i][k] * B[k][j];
C[i][j] = s;
}
My questions are:
Is that how it works? I have not found any explanation of how it
"collapses" the loops.
If yes, what is the benefit of using it? Doesn't
it divide the jobs between the 2 threads EXACTLY like the parallelism without
collapsing? If not, how does it work?
PS: Now that I think about it a little more: in case n is an odd number, say 3, without the collapse one thread would get 2 iterations of the outer loop and the other just 1. That results in uneven jobs for the threads, and is a bit less efficient.
If we were to use my collapse equivalent (if that is how collapse indeed works), each thread would get "1.5" outer iterations. If n were very large, that would not really matter, would it? Not to mention, doing that i = ij/n; j = ij%n; every time decreases performance, doesn't it?

The OpenMP specification says just (page 58 of Version 4.5):
If a collapse clause is specified with a parameter value greater than 1, then the iterations of the associated loops to which the clause applies are collapsed into one larger iteration space that is then divided according to the schedule clause. The sequential execution of the iterations in these associated loops determines the order of the iterations in the collapsed iteration space.
So, basically your logic is correct, except that your code is equivalent to the schedule(static,1) collapse(2) case, i.e. an iteration chunk size of 1. In the general case, most OpenMP runtimes have a default schedule of schedule(static), which means that the chunk size will be (approximately) equal to the number of iterations divided by the number of threads. The compiler may then use some optimisation to implement it, e.g. by running a partial inner loop for a fixed value of the outer loop variable, then an integer number of outer iterations with complete inner loops, then a partial inner loop again.
For example, the following code:
#pragma omp parallel for collapse(2)
for (int i = 0; i < 100; i++)
for (int j = 0; j < 100; j++)
a[100*i+j] = i+j;
gets transformed by the OpenMP engine of GCC into:
<bb 3>:
i = 0;
j = 0;
D.1626 = __builtin_GOMP_loop_static_start (0, 10000, 1, 0, &.istart0.3, &.iend0.4);
if (D.1626 != 0)
goto <bb 8>;
else
goto <bb 5>;
<bb 8>:
.iter.1 = .istart0.3;
.iend0.5 = .iend0.4;
.tem.6 = .iter.1;
D.1630 = .tem.6 % 100;
j = (int) D.1630;
.tem.6 = .tem.6 / 100;
D.1631 = .tem.6 % 100;
i = (int) D.1631;
<bb 4>:
D.1632 = i * 100;
D.1633 = D.1632 + j;
D.1634 = (long unsigned int) D.1633;
D.1635 = D.1634 * 4;
D.1636 = .omp_data_i->a;
D.1637 = D.1636 + D.1635;
D.1638 = i + j;
*D.1637 = D.1638;
.iter.1 = .iter.1 + 1;
if (.iter.1 < .iend0.5)
goto <bb 10>;
else
goto <bb 9>;
<bb 9>:
D.1639 = __builtin_GOMP_loop_static_next (&.istart0.3, &.iend0.4);
if (D.1639 != 0)
goto <bb 8>;
else
goto <bb 5>;
<bb 10>:
j = j + 1;
if (j <= 99)
goto <bb 4>;
else
goto <bb 11>;
<bb 11>:
j = 0;
i = i + 1;
goto <bb 4>;
<bb 5>:
__builtin_GOMP_loop_end_nowait ();
<bb 6>:
This is a C-like representation of the program's abstract syntax tree, which is probably a bit hard to read. What it does is use modulo arithmetic only once to compute the initial values of i and j based on the start of the iteration block (.istart0.3) determined by the call to GOMP_loop_static_start(). Then it simply increases i and j as one would expect a loop nest to be implemented, i.e. increase j until it hits 100, then reset j to 0 and increase i. At the same time, it also keeps the current iteration number from the collapsed iteration space in .iter.1, essentially iterating over both the single collapsed loop and the two nested loops at the same time.
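Rendered back into ordinary C, the generated code is roughly equivalent to the following sketch. The variable names are invented for readability, and GOMP_loop_static_start/next/end_nowait are just the libgomp runtime calls visible in the dump (you would not call them directly in user code):

long start, end;
if (GOMP_loop_static_start(0, 10000, 1, 0, &start, &end)) {
    do {
        long iter = start;
        int j = (int)(iter % 100);          // recover j and i once per chunk
        int i = (int)(iter / 100 % 100);
        for (;;) {
            a[100*i + j] = i + j;           // the original loop body
            if (++iter >= end)              // one collapsed iteration done
                break;
            if (++j > 99) {                 // step the nested indices directly
                j = 0;
                ++i;
            }
        }
    } while (GOMP_loop_static_next(&start, &end));
}
GOMP_loop_end_nowait();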
As to the case when the number of threads does not divide the number of iterations, the OpenMP standard says:
When no chunk_size is specified, the iteration space is divided into chunks that are approximately equal in size, and at most one chunk is distributed to each thread. The size of the chunks is unspecified in this case.
The GCC implementation leaves the threads with highest IDs doing one iteration less. Other possible distribution strategies are outlined in the note on page 61. The list is by no means exhaustive.
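For illustration only, a distribution with that property (lower thread IDs get the larger chunks) could be computed as in the sketch below. The names total, nthreads and tid are mine; this is not the actual libgomp code:

long chunk = total / nthreads;           // total = number of collapsed iterations
long rest  = total % nthreads;           // the first `rest` threads get one extra iteration
long start = tid * chunk + (tid < rest ? tid : rest);
long end   = start + chunk + (tid < rest ? 1 : 0);
// thread `tid` then executes collapsed iterations [start, end)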

The exact behavior is not specified by the standard itself. However, the standard requires that the inner loop has exactly the same iterations for each iteration of the outer loop. This allows the following transformation:
#pragma omp parallel
{
    int iter_total = m * n;
    int iter_per_thread = 1 + (iter_total - 1) / omp_get_num_threads(); // ceil
    int iter_start = iter_per_thread * omp_get_thread_num();
    int iter_end = iter_start + iter_per_thread < iter_total
                 ? iter_start + iter_per_thread : iter_total;           // min
    int ij = iter_start;
    int j = iter_start % n;                  // the first row may begin mid-way
    for (int i = iter_start / n; ij < iter_end; i++) {
        for (; j < n && ij < iter_end; j++) {
            // normal loop body
            ij++;
        }
        j = 0;                               // subsequent rows start at column 0
    }
}
From skimming the disassembly, I believe this is similar to what GCC does. It avoids the per-iteration division/modulo, but costs one extra register and one addition per inner iteration. Of course this will vary for different scheduling strategies.
Collapsing loops does increase the number of loop iterations that can be assigned to threads, thus helping with load balance or even exposing enough parallel work in the first place.
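For instance (the numbers are made up for illustration, and work() stands in for an arbitrary loop body), an outer loop of only 4 iterations can never keep 16 threads busy on its own, but collapsing it with a 1000-iteration inner loop exposes 4000 schedulable iterations:

#pragma omp parallel for collapse(2) num_threads(16)   // 4 * 1000 = 4000 iterations to share
for (int i = 0; i < 4; i++)                            // alone, only 4 chunks of work
    for (int j = 0; j < 1000; j++)
        work(i, j);                                    // hypothetical per-element function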

Related

Optimizing a matrix transpose function with OpenMP

I have this code that transposes a matrix using loop tiling strategy.
void transposer(int n, int m, double *dst, const double *src) {
int blocksize;
for (int i = 0; i < n; i += blocksize) {
for (int j = 0; j < m; j += blocksize) {
// transpose the block beginning at [i,j]
for (int k = i; k < i + blocksize; ++k) {
for (int l = j; l < j + blocksize; ++l) {
dst[k + l*n] = src[l + k*m];
}
}
}
}
}
I want to optimize this with multi-threading using OpenMP, however I am not sure what to do when having so many nested for loops. I thought about just adding #pragma omp parallel for but doesn't this just parallelize the outer loop?
When you try to parallelize a loop nest, you should ask yourself how many levels are conflict free. As in: every iteration writing to a different location. If two iterations write (potentially) to the same location, you need to 1. use a reduction 2. use a critical section or other synchronization 3. decide that this loop is not worth parallelizing, or 4. rewrite your algorithm.
In your case, the write location depends on k and l. Since k < n, the index k + l*n is different for every distinct pair (k,l), so there are no two pairs (k,l) / (k',l') that write to the same location. Furthermore, no two inner iterations share the same (k,l) pair. So all four loops are parallel, and they are perfectly nested, so you can use collapse(4).
You could also have drawn this conclusion by considering the algorithm in the abstract: in a matrix transposition each target location is written exactly once, so no matter how you traverse the target data structure, it's completely parallel.
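Applied to the code in question, that could look like the sketch below. Note that the bounds of the two inner loops depend on i and j, so a compiler with OpenMP 5.0 support for non-rectangular loop nests is assumed (see the question on the dependent nested loop further down); blocksize is assumed to be initialized and to divide n and m:

void transposer(int n, int m, double *dst, const double *src) {
    int blocksize = 32;                  // assumed; must divide n and m in this simple form
    #pragma omp parallel for collapse(4)
    for (int i = 0; i < n; i += blocksize)
        for (int j = 0; j < m; j += blocksize)
            for (int k = i; k < i + blocksize; ++k)
                for (int l = j; l < j + blocksize; ++l)
                    dst[k + l*n] = src[l + k*m];
}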
You can use the collapse specifier to parallelize over two loops.
# pragma omp parallel for collapse(2)
for (int i = 0; i < n; i += blocksize) {
for (int j = 0; j < m; j += blocksize) {
// transpose the block beginning at [i,j]
for (int k = i; k < i + blocksize; ++k) {
for (int l = j; l < j + blocksize; ++l) {
dst[k + l*n] = src[l + k*m];
}
}
}
}
As a side-note, I think you should swap the two innermost loops. Usually, when you have a choice between writing sequentially and reading sequentially, writing is more important for performance.
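A sketch of that swap: with l moved outward and k innermost, consecutive iterations write to consecutive elements of dst (and the reads from src become strided instead):

# pragma omp parallel for collapse(2)
for (int i = 0; i < n; i += blocksize) {
    for (int j = 0; j < m; j += blocksize) {
        for (int l = j; l < j + blocksize; ++l)       // l now outside
            for (int k = i; k < i + blocksize; ++k)   // consecutive k -> sequential writes to dst
                dst[k + l*n] = src[l + k*m];
    }
}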
I thought about just adding #pragma omp parallel for but doesn't this
just parallelize the outer loop?
Yes. To parallelize multiple nested loops one can use OpenMP's collapse clause. Bear in mind, however, that:
(As pointed out by Victor Eijkhout.) Even though it does not directly apply to your code snippet, for each additional loop you parallelize you should reason about the new race conditions that the parallelization might introduce, e.g., different threads writing concurrently to the same dst position.
in some cases parallelizing nested loops may result in slower execution than parallelizing a single loop, because the implementation of the collapse clause uses more complex index arithmetic (than a simple loop parallelization) to divide the iterations of the loops among threads, which can result in an overhead higher than the gains it provides.
You should benchmark with a single parallel loop, then with two, and so on, and compare the results accordingly.
void transposer(int n, int m, double *dst, const double *src) {
int blocksize;
#pragma omp parallel for collapse(...)
for (int i = 0; i < n; i += blocksize)
for (int j = 0; j < m; j += blocksize)
for (int k = i; k < i + blocksize; ++k)
for (int l = j; l < j + blocksize; ++l)
dst[k + l*n] = src[l + k*m];
}
Depending on the number of threads, cores, size of the matrices, among other factors, it might be that running sequentially is actually faster than the parallel versions. This is especially true since your code is not very CPU intensive (i.e., dst[k + l*n] = src[l + k*m];).
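A minimal way to time such variants is with omp_get_wtime(); in this sketch, transposer_v1 and transposer_v2 are placeholders for whichever two versions you are comparing:

double t0 = omp_get_wtime();
transposer_v1(n, m, dst, src);        // e.g. the collapse(2) version
double t1 = omp_get_wtime();
transposer_v2(n, m, dst, src);        // e.g. a collapse(3) or collapse(4) version
double t2 = omp_get_wtime();
printf("v1: %f s, v2: %f s\n", t1 - t0, t2 - t1);   // needs <stdio.h> and <omp.h>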

How to make the parallel version of this dependent nested for, and why is the collapse not working

How could I make a parallel version of this with OpenMP 3.1? I have tried collapse but the compiler says this:
error: initializer expression refers to iteration variable ‘k’
for (j = k+1; j < N; ++j){
And when I try a simple parallel for, the result looks as if the threads sometimes repeat work and sometimes skip it, so sometimes the result is greater and other times it is less.
int N = 100;
int *x;
x = (int*) malloc ((N+1)*sizeof(int));
//... initialization of the array x ...
// ...
for (k = 1; k < N-1; ++k)
{
for (j = k+1; j < N; ++j)
{
s = x[k] + x[j];
if (fn(s) == 1){
count++;
}
}
}
Count must be 62 but is random
Based on the code snippet that you have provided, and according to the restrictions on nested parallel loops specified by the OpenMP 3.1 standard:
The iteration count for each associated loop is computed before entry to the outermost loop. If execution of any associated loop changes any of the values used to compute any of the iteration counts, then the behavior is unspecified.
Since the iterations of your inner loop depend on the iterations of your outer loop (i.e., j = k+1), you cannot do the following:
#pragma omp parallel for collapse(2) schedule(static, 1) private(j) reduction(+:count)
for (k = 1; k < N-1; ++k)
for (j = k+1; j < N; ++j)
...
Moreover, from the OpenMP 3.1 "Loop Construct" section (relevant to this question) one can read:
for (init-expr; test-expr; incr-expr) structured-block
where init-expr is one of the following:
...
integer-type var = lb
...
and test-expr :
...
var relational-op b
with the restriction of lb and b of:
Loop invariant expressions of a type compatible with the type of var.
Notwithstanding, as kindly pointed out by Hristo Iliev, "that changed in 5.0 where support for non-rectangular loops was added". As one can read in the OpenMP 5.0 "Loop Construct" section, the restriction on lb and b is now:
Expressions of a type compatible with the type of var that are loop
invariant with respect to the outermost associated loop or are one of
the following (where var-outer, a1, and a2 have a type compatible with
the type of var, var-outer is var from an outer associated loop, and
a1 and a2 are loop invariant integer expressions with respect to the
outermost loop):
...
var-outer + a2
...
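With such a compiler, the collapsed triangular nest from above becomes legal; a sketch, assuming OpenMP 5.0 support:

#pragma omp parallel for collapse(2) reduction(+:count)
for (k = 1; k < N-1; ++k)
    for (j = k+1; j < N; ++j)       // lower bound k+1 has the var-outer + a2 form
    {
        int s = x[k] + x[j];        // declared here so each thread has its own copy
        if (fn(s) == 1)
            count++;
    }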
If you are limited to OpenMP 3.1, an alternative to the collapse clause is to use a normal parallel for on the outer loop only. Bear in mind that you have a race condition on the update of the variable count, hence the reduction clause:
#pragma omp parallel for schedule(static, 1) private(j) reduction(+:count)
for (k = 1; k < N-1; ++k){
for (j = k+1; j < N; ++j)
{
s = x[k] + x[j];
if (fn(s) == 1){
count++;
}
}
}
Important note: although k does not have to be made private (it is the iteration variable of the parallelized loop and OpenMP will implicitly make it private), the same does not apply to the variable j. Hence, that is one of the reasons why:
Count must be 62 but is random
the other reason being the lack of reduction(+:count).

OpenMP - Why does the number of comparisons decrease?

I have the following algorithm:
int hostMatch(long *comparisons)
{
int i = -1;
int lastI = textLength-patternLength;
*comparisons=0;
#pragma omp parallel for schedule(static, 1) num_threads(1)
for (int k = 0; k <= lastI; k++)
{
int j;
for (j = 0; j < patternLength; j++)
{
(*comparisons)++;
if (textData[k+j] != patternData[j])
{
j = patternLength+1; //break
}
}
if (j == patternLength && k > i)
i = k;
}
return i;
}
When changing num_threads I get the following results for number of comparisons:
01 = 9949051000
02 = 4992868032
04 = 2504446034
08 = 1268943748
16 = 776868269
32 = 449834474
64 = 258963324
Why is the number of comparisons not constant? It's interesting because the number of comparisons halves with the doubling of the number of threads. Is there some sort of race condition going on for (*comparisons)++ where OMP just skips the increment if the variable is in use?
My current understanding is that the iterations of the k loop are split near-evenly amongst the threads. Each iteration has a private integer j as well as a private copy of the integer k, and a non-parallel for loop which adds to comparisons until it terminates.
The naive way around the race condition is to declare the operation as an atomic update:
#pragma omp atomic update
(*comparisons)++;
Note that a critical section here is unnecessary and much more expensive. An atomic update can be declared on a primitive binary or unary operation on any l-value expression with scalar type.
Yet this is still not optimal, because the value of *comparisons needs to be moved around between CPU caches all the time and an expensive locked instruction is performed. Instead you should use a reduction. For that you need another local variable; the pointer won't work here.
int hostMatch(long *comparisons)
{
int i = -1;
int lastI = textLength-patternLength;
long comparisons_tmp = 0;
#pragma omp parallel for reduction(+:comparisons_tmp)
for (int k = 0; k <= lastI; k++)
{
int j;
for (j = 0; j < patternLength; j++)
{
comparisons_tmp++;
if (textData[k+j] != patternData[j])
{
j = patternLength+1; //break
}
}
if (j == patternLength && k > i)
i = k;
}
*comparisons = comparisons_tmp;
return i;
}
P.S. schedule(static, 1) seems like a bad idea, since it will lead to inefficient memory access patterns on textData. Just leave it out and let the compiler do its thing. If a measurement shows that it's not working efficiently, give it some better hints.
You said it yourself: (*comparisons)++; has a race condition. It is a critical section that has to be serialized ((*pointer)++ is not an atomic operation).
So basically two threads read the same value (e.g. 2), both increase it (to 3) and write it back, and you get 3 instead of 4. You have to make sure that operations on variables that are not local to your parallelized function/loop don't overlap.

Why does for loop order affect running time in matrix multiplication?

I'm writing a C program to calculate the product of two matrices.
The problem is that I noticed that the order of the for loops matters. For example:
for N=500
for (int i = 0; i < N; ++i) {
for (int j = 0; j < N; ++j) {
for (int k = 0 ; k < N; ++k) {
C[i*N+j]+=A[i*N+k] * B[k*N+j];
}
}
}
execution time (Seconds) : 1.1531820000
for (int j = 0; j < N; ++j) {
for (int k = 0 ; k < N; ++k) {
for (int i = 0; i < N; ++i) {
C[i*N+j]+=A[i*N+k] * B[k*N+j];
}
}
}
execution time (Seconds) : 2.6801300000
Matrix declaration:
A=(double*)malloc(sizeof(double)*N*N);
B=(double*)malloc(sizeof(double)*N*N);
C=(double*)malloc(sizeof(double)*N*N);
I ran the test 5 times and then calculated the average. Does anyone have an idea why this is happening?
With the second loop order, you keep making big jumps through memory every time you increment i in the inner loop, and to a lesser extent k. The cache is probably not very happy with that.
The first loop order is better, and indeed it is even better if you swap the order of j and k.
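A sketch of that variant (the i-k-j ordering), where the innermost loop touches both C and B with stride 1:

for (int i = 0; i < N; ++i) {
    for (int k = 0; k < N; ++k) {
        double a = A[i*N+k];              // invariant in the inner loop
        for (int j = 0; j < N; ++j) {
            C[i*N+j] += a * B[k*N+j];     // both accesses are sequential
        }
    }
}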
This is essentially a problem of data locality. Accesses to main memory are very slow on modern architectures, so your CPU will keep caches of recently accessed memory and try to prefetch memory that is likely to be accessed next. Those caches are very efficient at speeding up accesses that are grouped in the same small area, or accesses that follow a predictable pattern.
Here we turned a pattern where the CPU would make big jumps through memory and then come back, into a nice, mostly sequential pattern, hence the speedup.

Parallelizing giving wrong output

I ran into some problems trying to parallelize an algorithm. The intention is to make some modifications to a 100x100 matrix. When I run the algorithm without OpenMP everything runs smoothly in about 34-35 seconds; when I parallelize on 2 threads (I need it to be with 2 threads only) it gets down to about 22 seconds, but the output is wrong and I think it's a synchronization problem that I cannot fix.
Here's the code :
for (p = 0; p < sapt; p++){
memset(count,0,Nc*sizeof(int));
for (i = 0; i < N; i ++){
for (j = 0; j < N; j++){
for( m = 0; m < Nc; m++)
dist[m] = N+1;
omp_set_num_threads(2);
#pragma omp parallel for shared(configurationMatrix, dist) private(k,m) schedule(static,chunk)
for (k = 0; k < N; k++){
for (m = 0; m < N; m++){
if (i == k && j == m)
continue;
if (MAX(abs(i-k),abs(j-m)) < dist[configurationMatrix[k][m]])
dist[configurationMatrix[k][m]] = MAX(abs(i-k),abs(j-m));
}
}
int max = -1;
for(m = 0; m < Nc; m++){
if (dist[m] == N+1)
continue;
if (dist[m] > max){
max = dist[m];
configurationMatrix2[i][j] = m;
}
}
}
}
memcpy(configurationMatrix, configurationMatrix2, N*N*sizeof(int));
#pragma omp parallel for shared(count, configurationMatrix) private(i,j)
for (i = 0; i < N; i ++)
for (j = 0; j < N; j++)
count[configurationMatrix[i][j]] ++;
for (i = 0; i < Nc; i ++)
fprintf(out,"%i ", count[i]);
fprintf(out, "\n");
}
In which: sapt = 100;
count is a vector that holds how many of each element the matrix contains at each step
(e.g. count[1] = 60 --> I have the element '1' 60 times in my matrix, and so on);
dist is a vector that holds the maximum distance from the element at (i,j) of, say, value K to an element at (k,m) of the same value K
(e.g. dist[1] = 10 --> the distance from an element of value 1 to the furthest element of value 1).
Then I write the results to an output file, but again, the output is wrong.
If I understand your code correctly this line
count[configurationMatrix[i][j]] ++;
increments count at the element whose index is at configurationMatrix[i][j]. I don't see that your code takes any steps to ensure that threads are not simultaneously trying to increment the same element of count. It's entirely feasible that two different elements of configurationMatrix provide the same index into count and that those two elements are handled by different threads. Since ++ is not an atomic operation your code has a data race; multiple threads can contend for update access to the same variable and you lose any guarantees of correctness, or determinism, in the result.
I think you may have other examples of the same problem in other parts of your code too. You are silent on the errors you observe in the results of the parallel program compared with the results from the serial program yet those errors are often very useful in diagnosing a problem. For example, if the results of the parallel program are not the same every time you run it, that is very suggestive of a data race somewhere in your code.
How to fix this? Since you only have 2 threads, the easiest fix would be to not parallelise this part of the program. You could wrap the data race inside an OpenMP critical section, but that's really just another way of serialising your code. Finally, you could possibly modify your algorithm and data structures to avoid this problem entirely.
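If you do want to keep the counting loop parallel, one possibility (assuming a compiler with OpenMP 4.5 support, which added reductions over array sections) is to give each thread a private copy of count that is summed at the end; a sketch, assuming count points to Nc ints:

#pragma omp parallel for private(i,j) reduction(+:count[:Nc])
for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
        count[configurationMatrix[i][j]]++;   // each thread updates its own copy of count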

Resources