I have a 2D array, say arr[SIZE][SIZE], which is updated in two nested for loops of the form:
for (int i = 0; i < SIZE; i++)
    for (int j = 0; j < SIZE; j++)
        arr[i][j] = new_value();
that I am trying to parallelise using OpenMP.
There are two instances where this occurs, the first is the function new_value_1() which relies on arr[i+1][j] and arr[i][j+1] (the "edge of the array" issue is already taken care of), which I can happily parallelise using the chessboard technique:
for (int l = 0; l < 2; l++) {
    #pragma omp parallel for
    for (int i = 0; i < SIZE; i++)
        for (int j = (i + l) % 2; j < SIZE; j += 2)
            arr[i][j] = new_value_1();
}
The issue comes with this second instance, new_value_2(), which relies upon:
arr[i+1][j],
arr[i-1][j],
arr[i][j+1],
arr[i][j-1]
i.e. the adjacent elements in all directions.
Here there is a dependency on the negative-direction neighbours as well, so arr[0][2] = new_value_2() depends on the already-updated value of arr[0][1], which would not be computed until the second pass of the l loop.
I was wondering if there was something I was missing in parallelising this way or if the issue is inherent with the way the algorithm works? If the former, any other approaches would be appreciated.
I was wondering if there was something I was missing in parallelising this way or if the issue is inherent with the way the algorithm works?
Yes, you're missing something, but not along the lines you probably hoped. Supposing that the idea is that the parallel version should compute the same result as the serial version, the checkerboard approach does not solve the problem even for the new_value_1() case. Consider this layout of array elements:
xoxoxo
oxoxox
xoxoxo
oxoxox
On the first of the two checkerboard passes, the 'x' elements are updated according to the original values of the 'o' elements -- so far, so good -- but on the second pass, the 'o' elements are updated based on the new values of the 'x' elements. The data dependency is broken, yes, but the overall computation is not the same. Of course, the same applies even more so to the new_value_2() computation. The checkerboarding doesn't really help you.
If the former, any other approaches would be appreciated.
You could do the computation in shells (anti-diagonals of the array). For example, consider this labeling of the array elements:
0123
1234
2345
3456
All the elements with the same label can be computed in parallel (for both new_value() functions), provided that all those with numerically lesser labels are computed first. That might look something like this:
for (int l = 0; l < (2 * SIZE - 1); l++) {
    int iterations = (l < SIZE) ? (l + 1) : (2 * SIZE - (l + 1));
    int i = (l < SIZE) ? l : (SIZE - 1);
    int j = (l < SIZE) ? 0 : (1 + l - SIZE);

    #pragma omp parallel for
    for (int m = 0; m < iterations; m++) {
        arr[i - m][j + m] = new_value_1();
    }
}
You won't get as much benefit from parallelization that way, but that is an inherent aspect of the way the serial computation works, at least for the new_value_2() case.
For the new_value_1() case, though, you might do a little better by going row by row:
for (int i = 0; i < SIZE; i++) {
    #pragma omp parallel for
    for (int j = 0; j < SIZE; j++)
        arr[i][j] = new_value_1();
}
Alternatively, for the new_value_1() case only, you could potentially get a good speedup by storing the results in a separate array:
#pragma omp parallel for collapse(2)
for (int i = 0; i < SIZE; i++)
    for (int j = 0; j < SIZE; j++)
        arr2[i][j] = new_value_1();
If that requires you to copy the result back to the original array afterward then it might well not be worth it, but you could potentially avoid that by flipping back and forth between two arrays: compute from the first into the second, and then the next time around, compute from the second into the first (provided the problem's size scaling permits keeping two such arrays in RAM; that extra allocation cost is hopefully paid just once).
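A minimal sketch of that ping-pong scheme, assuming the element type is double and that a hypothetical variant new_value_1_from(src, i, j) reads its neighbours from an explicit source array (new_value_1_from and NSTEPS are illustrative names, not from the original code):

double buf_a[SIZE][SIZE], buf_b[SIZE][SIZE];      /* two buffers of the same shape */
double (*src)[SIZE] = buf_a, (*dst)[SIZE] = buf_b;

for (int step = 0; step < NSTEPS; step++) {       /* NSTEPS: however many sweeps you need */
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < SIZE; i++)
        for (int j = 0; j < SIZE; j++)
            dst[i][j] = new_value_1_from(src, i, j);  /* reads only src, writes only dst */

    /* swap the roles of the two buffers for the next sweep */
    double (*tmp)[SIZE] = src;
    src = dst;
    dst = tmp;
}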
Related
I have this code that transposes a matrix using a loop-tiling strategy.
void transposer(int n, int m, double *dst, const double *src) {
    int blocksize;
    for (int i = 0; i < n; i += blocksize) {
        for (int j = 0; j < m; j += blocksize) {
            // transpose the block beginning at [i,j]
            for (int k = i; k < i + blocksize; ++k) {
                for (int l = j; l < j + blocksize; ++l) {
                    dst[k + l*n] = src[l + k*m];
                }
            }
        }
    }
}
I want to optimize this with multithreading using OpenMP; however, I am not sure what to do with so many nested for loops. I thought about just adding #pragma omp parallel for, but doesn't this just parallelize the outer loop?
When you try to parallelize a loop nest, you should ask yourself how many levels are conflict free, as in: every iteration writes to a different location. If two iterations (potentially) write to the same location, you need to
1. use a reduction,
2. use a critical section or other synchronization,
3. decide that this loop is not worth parallelizing, or
4. rewrite your algorithm.
In your case, the write location depends on k and l. Since the write index is k + l*n with k < n, no two pairs (k,l) and (k',l') write to the same location. Furthermore, no two iterations of the nest produce the same (k,l) pair. So all four loops are parallel, and they are perfectly nested, so you can use collapse(4).
You could also have drawn this conclusion by considering the algorithm in the abstract: in a matrix transposition each target location is written exactly once, so no matter how you traverse the target data structure, it's completely parallel.
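For illustration, a sketch of what collapse(4) could look like on the snippet above. Note that the inner bounds depend on the outer indices (a non-rectangular loop nest), which the collapse clause only supports from OpenMP 5.0 onward; blocksize is assumed to be set and to divide n and m evenly:

#pragma omp parallel for collapse(4)
for (int i = 0; i < n; i += blocksize)
    for (int j = 0; j < m; j += blocksize)
        for (int k = i; k < i + blocksize; ++k)
            for (int l = j; l < j + blocksize; ++l)
                dst[k + l*n] = src[l + k*m];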
You can use the collapse clause to parallelize over two loops.
#pragma omp parallel for collapse(2)
for (int i = 0; i < n; i += blocksize) {
    for (int j = 0; j < m; j += blocksize) {
        // transpose the block beginning at [i,j]
        for (int k = i; k < i + blocksize; ++k) {
            for (int l = j; l < j + blocksize; ++l) {
                dst[k + l*n] = src[l + k*m];
            }
        }
    }
}
As a side-note, I think you should swap the two innermost loops. Usually, when you have a choice between writing sequentially and reading sequentially, writing is more important for performance.
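For concreteness, a sketch of that swap inside the block kernel (same indices as above): with k innermost, consecutive iterations write consecutive elements of dst.

for (int l = j; l < j + blocksize; ++l) {
    for (int k = i; k < i + blocksize; ++k) {
        dst[k + l*n] = src[l + k*m];   /* the dst index advances by 1 per inner iteration */
    }
}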
I thought about just adding #pragma omp parallel for but doesn't this just parallelize the outer loop?
Yes. To parallelize multiple nested loops one can use OpenMP's collapse clause. Bear in mind, however, that:
(As pointed out by Victor Eijkhout) even though this does not directly apply to your code snippet, for each additional loop you parallelize you should reason about the new race conditions that the parallelization might introduce, e.g., different threads writing concurrently into the same dst position.
in some cases parallelizing nested loops may result in slower execution times than parallelizing a single loop, since the concrete implementation of the collapse clause uses a more complex heuristic (than simple loop parallelization) to divide the iterations of the loops among threads, which can result in an overhead higher than the gains it provides.
You should benchmark with a single parallel loop, then with two, and so on, and compare the results.
void transposer(int n, int m, double *dst, const double *src) {
    int blocksize;
    #pragma omp parallel for collapse(...)
    for (int i = 0; i < n; i += blocksize)
        for (int j = 0; j < m; j += blocksize)
            for (int k = i; k < i + blocksize; ++k)
                for (int l = j; l < j + blocksize; ++l)
                    dst[k + l*n] = src[l + k*m];
}
Depending upon the number of threads, cores, size of the matrices, and other factors, it might be that running sequentially is actually faster than the parallel versions. This is especially true for your code, which is not very CPU-intensive (i.e., dst[k + l*n] = src[l + k*m];).
I'm in my first few months of learning to code in C through a high school program. Someone recently mentioned to me that there's often a way to make code more efficient and I think I have a problem that could be made more efficient. I'm not sure how but I have a hunch that it could be made faster.
We're given a 2D square array of integers with row and col size n. We have subsquares within the 2D square array with row and col size s. We can always assume that s will evenly divide n. I've written the following code to iterate over each subsquare.
Currently my code looks something like this:
int **grid;
int s, i, j, k, l;
// reading in inputs, other processing
for (i = 0; i < n; i += s) {
    for (j = 0; j < n; j += s) {
        for (k = 0; k < s; k++) {
            for (l = 0; l < s; l++) {
                printf("%d \n", grid[i + k][j + l]);
            }
        }
        printf("next subsquare: \n");
    }
}
As you can see, I've got 4 nested for loops and I feel like it's a bit messy to have it in this format. Is there a better way to do this? Later on I might be summing each subsquare or performing some other operation with each subsquare.
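For what it's worth, one way to keep the same traversal but make the intent clearer is to factor the per-subsquare work into a helper; a sketch, with a hypothetical sum_subsquare doing the summing mentioned above:

/* Hypothetical helper: sums the s-by-s subsquare whose top-left corner is (row, col). */
int sum_subsquare(int **grid, int row, int col, int s) {
    int total = 0;
    for (int k = 0; k < s; k++)
        for (int l = 0; l < s; l++)
            total += grid[row + k][col + l];
    return total;
}

/* The traversal then shrinks to two loops over the subsquare corners. */
for (i = 0; i < n; i += s)
    for (j = 0; j < n; j += s)
        printf("subsquare at (%d, %d): sum = %d\n", i, j, sum_subsquare(grid, i, j, s));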
I'm writing a C program to calculate the product of two matrices.
The problem I noticed is that the order of the for loops matters. For example, for N = 500:
for (int i = 0; i < N; ++i) {
    for (int j = 0; j < N; ++j) {
        for (int k = 0; k < N; ++k) {
            C[i*N+j] += A[i*N+k] * B[k*N+j];
        }
    }
}
execution time (seconds): 1.1531820000
for (int j = 0; j < N; ++j) {
    for (int k = 0; k < N; ++k) {
        for (int i = 0; i < N; ++i) {
            C[i*N+j] += A[i*N+k] * B[k*N+j];
        }
    }
}
execution time (seconds): 2.6801300000
Matrix declaration:
A=(double*)malloc(sizeof(double)*N*N);
B=(double*)malloc(sizeof(double)*N*N);
C=(double*)malloc(sizeof(double)*N*N);
I ran the test 5 times and then calculated the average. Does anyone have an idea why this is happening?
With the second loop order, you keep making big jumps through memory every time you increment i in the inner loop, and to a lesser extent k. The cache is probably not very happy with that.
The first loop order is better; indeed, it's even better if you also swap the orders of j and k (see the sketch below).
This is essentially a problem of data locality. Accesses to main memory are very slow on modern architectures, so your CPU will keep caches of recently accessed memory and try to prefetch memory that is likely to be accessed next. Those caches are very efficient at speeding up accesses that are grouped in the same small area, or accesses that follow a predictable pattern.
Here we turned a pattern where the CPU would make big jumps through memory and then come back, into a nice, mostly sequential pattern; hence the speedup.
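For reference, a sketch of the suggested ordering with the j and k loops swapped (i-k-j): the inner loop then streams through both C and B sequentially, and A[i*N+k] is loop-invariant in the inner loop.

for (int i = 0; i < N; ++i) {
    for (int k = 0; k < N; ++k) {
        double a = A[i*N + k];              /* reused across the whole inner loop */
        for (int j = 0; j < N; ++j) {
            C[i*N + j] += a * B[k*N + j];   /* both C and B are accessed sequentially */
        }
    }
}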
I've run into some problems trying to parallelize an algorithm. The intention is to make some modifications to a 100x100 matrix. When I run the algorithm without OpenMP everything runs smoothly in about 34-35 seconds; when I parallelize on 2 threads (I need it to be with 2 threads only) it gets down to about 22 seconds, but the output is wrong and I think it's a synchronization problem that I cannot fix.
Here's the code:
for (p = 0; p < sapt; p++) {
    memset(count, 0, Nc*sizeof(int));
    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++) {
            for (m = 0; m < Nc; m++)
                dist[m] = N+1;
            omp_set_num_threads(2);
            #pragma omp parallel for shared(configurationMatrix, dist) private(k,m) schedule(static,chunk)
            for (k = 0; k < N; k++) {
                for (m = 0; m < N; m++) {
                    if (i == k && j == m)
                        continue;
                    if (MAX(abs(i-k), abs(j-m)) < dist[configurationMatrix[k][m]])
                        dist[configurationMatrix[k][m]] = MAX(abs(i-k), abs(j-m));
                }
            }
            int max = -1;
            for (m = 0; m < Nc; m++) {
                if (dist[m] == N+1)
                    continue;
                if (dist[m] > max) {
                    max = dist[m];
                    configurationMatrix2[i][j] = m;
                }
            }
        }
    }
    memcpy(configurationMatrix, configurationMatrix2, N*N*sizeof(int));
    #pragma omp parallel for shared(count, configurationMatrix) private(i,j)
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            count[configurationMatrix[i][j]]++;
    for (i = 0; i < Nc; i++)
        fprintf(out, "%i ", count[i]);
    fprintf(out, "\n");
}
In which: sapt = 100;
count -> a vector that holds how many of each element the matrix contains at each step
(e.g., count[1] = 60 --> the element '1' appears 60 times in the matrix, and so on);
dist -> a vector that holds the maximum distance from the element at (i,j), of some value K, to an element (k,m) of the same value K
(e.g., dist[1] = 10 --> the distance from an element of value 1 to the furthest element of value 1).
Then I write the results to an output file, but again, the output is wrong.
If I understand your code correctly this line
count[configurationMatrix[i][j]]++;
increments count at the element whose index is at configurationMatrix[i][j]. I don't see that your code takes any steps to ensure that threads are not simultaneously trying to increment the same element of count. It's entirely feasible that two different elements of configurationMatrix provide the same index into count and that those two elements are handled by different threads. Since ++ is not an atomic operation your code has a data race; multiple threads can contend for update access to the same variable and you lose any guarantees of correctness, or determinism, in the result.
I think you may have other examples of the same problem in other parts of your code too. You are silent on the errors you observe in the results of the parallel program compared with the results from the serial program yet those errors are often very useful in diagnosing a problem. For example, if the results of the parallel program are not the same every time you run it, that is very suggestive of a data race somewhere in your code.
How to fix this? Since you only have 2 threads, the easiest fix would be to not parallelise this part of the program. You could wrap the data race inside an OpenMP critical section, but that's really just another way of serialising your code. Finally, you could possibly modify your algorithm and data structures to avoid this problem entirely.
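For illustration, a minimal sketch of the critical-section variant of the counting loop (same names as in the snippet above); it keeps the loop parallel but serialises the contended increment, so don't expect a speedup from it. A #pragma omp atomic update on the increment would be a lighter-weight alternative.

#pragma omp parallel for shared(count, configurationMatrix) private(i, j)
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        #pragma omp critical
        count[configurationMatrix[i][j]]++;
    }
}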
For example:
int count = 0;
for (int i = 0; i < 12; i++)
    for (int j = i+1; j < 10; j++)
        for (int k = j+1; k < 8; k++)
            count++;
System.out.println("count = " + count);
or
for (int i = 0; i < I; i++)
    for (int j = i+1; j < J; j++)
        for (int k = j+1; k < K; k++)
            :
            :
            :
            for (int z = y+1; z < Z; z++)
                count++;
What is the value of count after all iterations? Is there a formula to calculate it?
It's a math problem of summation.
Basically, one can prove that:
for (i=a; i<b; i++)
count+=1
is equivalent to
count+=b-a
Similarly,
for (i=a; i<b; i++)
count+=i
is equivalent to
count += 0.5 * (b*(b-1) - a*(a-1))
(i.e., the sum of the integers from a through b-1).
You can get similar formulas using, for instance, Wolfram Alpha (Wolfram's Mathematica).
This system will do the symbolic calculation for you, so for instance,
for (int i = 0; i < A; i++)
    for (int j = i+1; j < B; j++)
        for (int k = j+1; k < C; k++)
            count++;
is a Mathematica query:
http://www.wolframalpha.com/input/?i=Sum[Sum[Sum[1,{k,j%2B1,C-1}],{j,i%2B1,B-1}],{i,0,A-1}]
Not a full answer, but when the upper bounds are all the same (say they're all n), the formula is C(n, nb_for_loops), which may already interest you :)
final int n = 50;
int count = 0;
for (int i = 0; i < n; i++) {
    for (int j = i + 1; j < n; j++) {
        for (int k = j + 1; k < n; k++) {
            for (int l = k + 1; l < n; l++) {
                count++;
            }
        }
    }
}
System.out.println(count);
This will give 230300, which is C(50,4).
You can compute this easily using the binomial coefficient:
http://en.wikipedia.org/wiki/Binomial_coefficient
One formula to compute this is: n! / (k! * (n-k)!)
For example, if you want to know how many different sets of 5 cards can be taken out of a 52-card deck, you can either use 5 nested loops or the formula above; both give 2,598,960.
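For example, a small C sketch of the multiplicative form of the binomial coefficient (it avoids computing the factorials directly, so intermediate values stay small and exact):

#include <stdio.h>

/* C(n, k) via the multiplicative formula; the division is exact at every step. */
unsigned long long binomial(unsigned n, unsigned k) {
    if (k > n) return 0;
    if (k > n - k) k = n - k;                  /* use the symmetry C(n,k) = C(n,n-k) */
    unsigned long long result = 1;
    for (unsigned i = 1; i <= k; i++)
        result = result * (n - k + i) / i;
    return result;
}

int main(void) {
    printf("%llu\n", binomial(50, 4));         /* 230300, matching the loop count above */
    printf("%llu\n", binomial(52, 5));         /* 2598960 possible 5-card hands */
    return 0;
}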
That's roughly the volume of a hyperpyramid (http://www.physicsinsights.org/pyramids-1.html), i.e. (1/d) * n^d, where d is the dimension (the number of nested loops).
The formula works for real numbers, so you have to adapt it for integers.
For the case d=2 (the hyperpyramid is then a triangle), 1/2 * (n*n) becomes the well-known formula n(n+1)/2 (or n(n-1)/2, depending on whether you include the diagonal or not). I'll let you do the math.
I think the fact that you're not using n every time but I, J, K is not a problem, as you can rewrite each loop as two loops stopping in the middle, so that they all stop at the same number.
The formula might then become (1/d) * ((n/2)^d) * 2 (I'm not sure, but something similar should work).
That's not really the answer to your question, but I hope it will help you find a real one.