Parallelize a for loop with computation - c

I am trying to optimize a C program using OpenMP. I am able to parallelize a simple for loop like:
#pragma omp parallel
#pragma omp for
for (i = 0; i < size; i++)
{
    Y[i] = i * 0.3;
    Z[i] = -i * 0.4;
}
where X, Y and Z are float* of size "size". This works fine, but there is another loop immediately after it:
for (i = 0; i < size; i++)
{
    X[i] += Z[i] * Y[i] * Y[i] * 10.0;
    sum += X[i];
}
printf("Sum =%d\n",sum);
I am not sure how to parallelize the above for loop. I am compiling the program with gcc -fopenmp filename and running the executable with ./a.out; I hope that is enough to see a performance improvement.
I added #pragma omp parallel for reduction(+:sum) at the top of the second loop; it does run faster and produces correct output. I need expert input on how to parallelize the above while avoiding false sharing. Is this directive correct, or is there a better alternative to parallelize it and make it faster?
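For reference, a minimal sketch of what that looks like when both loops share one parallel region (a sketch only, assuming sum is a double; the array types are as described above):
// A minimal sketch, not the original program: assumes sum is a double and
// X, Y, Z are float arrays of length size, as described above.
double sum = 0.0;
#pragma omp parallel
{
    #pragma omp for                    // independent writes, no synchronization needed
    for (int i = 0; i < size; i++) {
        Y[i] = i * 0.3;
        Z[i] = -i * 0.4;
    }
    #pragma omp for reduction(+:sum)   // each thread sums privately, then the copies are combined
    for (int i = 0; i < size; i++) {
        X[i] += Z[i] * Y[i] * Y[i] * 10.0;
        sum += X[i];
    }
}
printf("Sum = %f\n", sum);
With the default static schedule, each thread writes a contiguous block of X, Y and Z, so false sharing can only occur at chunk boundaries, and the reduction gives each thread its own private copy of sum, so there is no false sharing on the accumulator.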

Related

I am having trouble with OpenMP on C

I want to parallelize the for loops, but I can't seem to grasp the concept; every time I try to parallelize them the code still works, but it slows down dramatically.
for(i=0; i<nbodies; ++i){
    for(j=i+1; j<nbodies; ++j) {
        d2 = 0.0;
        for(k=0; k<3; ++k) {
            rij[k] = pos[i][k] - pos[j][k];
            d2 += rij[k]*rij[k];
            if (d2 <= cut2) {
                d = sqrt(d2);
                d3 = d*d2;
                for(k=0; k<3; ++k) {
                    double f = -rij[k]/d3;
                    forces[i][k] += f;
                    forces[j][k] -= f;
                }
                ene += -1.0/d;
            }
        }
    }
}
I tried using synchronization with barrier and critical in some cases, but either nothing happens or the program simply never finishes.
Update: this is the state I'm at right now. It works without crashes, but the calculation times get worse the more threads I add. (Ryzen 5 2600, 6 cores / 12 threads)
#pragma omp parallel shared(d,d2,d3,nbodies,rij,pos,cut2,forces) private(i,j,k) num_threads(n)
{
    clock_t begin = clock();
    #pragma omp for schedule(auto)
    for(i=0; i<nbodies; ++i){
        for(j=i+1; j<nbodies; ++j) {
            d2 = 0.0;
            for(k=0; k<3; ++k) {
                rij[k] = pos[i][k] - pos[j][k];
                d2 += rij[k]*rij[k];
            }
            if (d2 <= cut2) {
                d = sqrt(d2);
                d3 = d*d2;
                #pragma omp parallel for shared(d3) private(k) schedule(auto) num_threads(n)
                for(k=0; k<3; ++k) {
                    double f = -rij[k]/d3;
                    #pragma omp atomic
                    forces[i][k] += f;
                    #pragma omp atomic
                    forces[j][k] -= f;
                }
                ene += -1.0/d;
            }
        }
    }
    clock_t end = clock();
    double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
    #pragma omp single
    printf("Calculation time %lf sec\n",time_spent);
}
I incorporated the timer into the actual parallel code (I think it is a few milliseconds faster this way). I also think I got most of the shared and private variables right. The program outputs the forces to a file.
Using barriers or other synchronization will slow down your code unless the amount of unsynchronized work is larger by a good factor. That is not the case here. You probably need to reformulate your code to remove the synchronization.
You are doing something like an N-body simulation. I've worked out a couple of solutions here: https://pages.tacc.utexas.edu/~eijkhout/pcse/html/omp-examples.html#N-bodyproblems
Also: your d2 loop is a reduction, so you can treat it as one, but it is probably enough for that variable to be private to the i,j iterations.
You should always define your variables in their minimal required scope, especially if performance is an issue (if you do so, your compiler can generate more efficient code). Besides performance, it also helps avoid data races.
I think you have misplaced a curly brace, and the condition in the first for loop should be i<nbodies-1. Variable ene can be summed up using a reduction, and to avoid data races, atomic operations have to be used to update the forces array, so you do not need slow barriers or critical sections. Your code should look something like this (assuming int for indices and double for the calculation):
#pragma omp parallel for reduction(+:ene)
for(int i=0; i<nbodies-1; ++i){
    for(int j=i+1; j<nbodies; ++j) {
        double d2 = 0.0;
        double rij[3];
        for(int k=0; k<3; ++k) {
            rij[k] = pos[i][k] - pos[j][k];
            d2 += rij[k]*rij[k];
        }
        if (d2 <= cut2) {
            double d = sqrt(d2);
            double d3 = d*d2;
            for(int k=0; k<3; ++k) {
                double f = -rij[k]/d3;
                #pragma omp atomic
                forces[i][k] += f;
                #pragma omp atomic
                forces[j][k] -= f;
            }
            ene += -1.0/d;
        }
    }
}
Solved, it turns out all I needed was
#pragma omp parallel for nowait
It doesn't need the "atomic" either.
Strange solution; I don't fully understand how it works, but it does, and the output file has no corrupt results whatsoever.

How can I parallelize two for statements equally between threads using OpenMP?

Let's say I have the following code:
#pragma omp parallel for
for (i = 0; i < array.size; i++ ) {
    int temp = array[i];
    for (p = 0; p < array2.size; p++) {
        array2[p] = array2[p] + temp;
    }
}
How can I divide array2.size between the threads I spawn with the #pragma omp parallel for in the first line? From what I understand, when I write #pragma omp parallel for I spawn several threads such that each thread gets a part of array.size, so the i is never the same between threads. But in this case I also want those same threads to get different parts of array2.size (their p will also never be the same between them), so that I don't have all the threads doing the same calculation in the second for.
I've tried the collapse clause, but it seems that it only works for perfectly nested for statements, since I couldn't get the result I wanted.
Any help is appreciated! Thanks in advance
The problem with your code is that multiple threads will try to modify array2 at the same time (a race condition). This can easily be avoided by reordering the loops. If array2.size doesn't provide enough parallelism, you may apply the collapse clause, as the loops are now in canonical form:
#pragma omp parallel for
for (p = 0; p < array2.size; p++) {
    for (i = 0; i < array.size; i++ ) {
        array2[p] += array[i];
    }
}
You shouldn't expect too much from this, though, as the ratio of loads/stores to computation is very poor. This is without a doubt memory-bound rather than compute-bound.
EDIT: If this is really your problem and not just a minimal example, I would also try the following:
double sum = 0.;
#pragma omp parallel
{
    #pragma omp for reduction(+: sum)
    for (i = 0; i < array.size; i++) {
        sum += array[i];
    }
    /* implicit barrier here: the shared sum now holds the full total */
    #pragma omp for
    for (p = 0; p < array2.size; p++) {
        array2[p] += sum;
    }
}

OpenMP SIMD reduction in array: "error: reduction variable must be shared on entry to this OpenMP pragma"

I am trying to compute the average value over adjacent elements within a matrix, but I am stuck getting OpenMP's vectorization to work. As I understand the second nested for loop, the reduction clause should ensure that no race conditions occur when writing to elements of next. However, when compiling the code (I tried auto-vectorization with both GCC 7.3.0 and ICC, and OpenMP > 4.5) I get the report: "error: reduction variable "next" must be shared on entry to this OpenMP pragma". Why does this occur when variables are shared by default? How can I fix this issue, since adding shared(next) does not seem to help?
// CODE ABOVE (...)
size_t const width = 100;
size_t const height = 100;
float * restrict next = malloc(sizeof(float)*width*height);

// INITIALIZATION OF 'next' (this works fine)
#pragma omp for simd collapse(2)
for(size_t j = 1; j < height-1; j++)
    for(size_t i = 1; i < width-1; i++)
        next[j*width+i] = 0.0f;

// COMPUTE AVERAGE FOR INNER ELEMENTS
#pragma omp for simd collapse(4) reduction(+:next[0:width*height])
for(size_t j = 1; j < height-1; ++j){
    for(size_t i = 1; i < width-1; ++i){
        // compute average over adjacent elements
        for(size_t _j = 0; _j < 3; _j++) {
            for(size_t _i = 0; _i < 3; _i++) {
                next[j*width + i] += (1.0 / 9.0) * next[(j-1 +_j)*width + (i-1 + _i)];
            }
        }
    }
}
The problem is that GCC 7.3.0 does not support
#pragma omp for simd collapse(4) reduction(+:next[0:width*height])
that is, the use of a reduction over an array section in this context.
This feature is supported from GCC 9 onwards:
Since GCC 9, there is initial OpenMP 5 support (essentially C/C++,
only). GCC 10 added some more features, mainly for C/C++ but also for
Fortran.
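As an aside (not part of the original answer): if the intent is a 3x3 box average, a common way to sidestep the array-section reduction entirely, on any compiler, is to read from a separate source buffer so that each output element is written by exactly one iteration. A sketch, assuming a second float buffer curr of size width*height holds the values being averaged:
// Sketch only: curr is an assumed second buffer of size width*height.
// Each next[j*width+i] is written by a single iteration, so no reduction
// clause is needed and the loop parallelizes/vectorizes cleanly.
#pragma omp parallel for simd collapse(2)
for(size_t j = 1; j < height-1; ++j){
    for(size_t i = 1; i < width-1; ++i){
        float avg = 0.0f;
        for(size_t _j = 0; _j < 3; _j++)
            for(size_t _i = 0; _i < 3; _i++)
                avg += (1.0f / 9.0f) * curr[(j-1+_j)*width + (i-1+_i)];
        next[j*width + i] = avg;
    }
}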

Matrix multiplication by vector OpenMP C [duplicate]

This question already has answers here:
Why OpenMP under ubuntu 12.04 is slower than serial version
(3 answers)
Parallelizing matrix times a vector by columns and by rows with OpenMP
(2 answers)
Closed 8 years ago.
I'm trying to write matrix-by-vector multiplication in C (OpenMP), but my program slows down when I add processors:
1 proc - 1.3 s
2 proc - 2.6 s
4 proc - 5.47 s
I tested this on my PC (Core i5) and on our school's cluster, and the result is the same (the program slows down).
Here is my code (the matrix is 10000 x 10000 and the vector is 10000):
double start_time = clock();
#pragma omp parallel private(i) num_threads(4)
{
    tid = omp_get_thread_num();
    world_size = omp_get_num_threads();
    printf("Threads: %d\n",world_size);
    for(y = 0; y < matrix_size ; y++){
        #pragma omp parallel for private(i) shared(results, vector, matrix)
        for(i = 0; i < matrix_size; i++){
            results[y] = results[y] + vector[i]*matrix[i][y];
        }
    }
}
double end_time = clock();
double result_time = (end_time - start_time) / CLOCKS_PER_SEC;
printf("Time: %f\n", result_time);
My question is: is there a mistake somewhere? It seems pretty straightforward to me and should speed up.
I essentially already answered this question in parallelizing-matrix-times-a-vector-by-columns-and-by-rows-with-openmp.
You have a race condition when you write to results[y]. To fix this, and still parallelize the inner loop, you have to make private versions of results[y], fill them in parallel, and then merge them in a critical section.
In the code below I assume you're using double; replace it with float or int or whatever datatype you're actually using (note that your inner loop runs over the first index of matrix[i][y], which is cache unfriendly).
#pragma omp parallel num_threads(4)
{
    int y,i;
    double* results_private = (double*)calloc(matrix_size, sizeof(double));
    for(y = 0; y < matrix_size ; y++) {
        #pragma omp for
        for(i = 0; i < matrix_size; i++) {
            results_private[y] += vector[i]*matrix[i][y];
        }
    }
    #pragma omp critical
    {
        for(y=0; y<matrix_size; y++) results[y] += results_private[y];
    }
    free(results_private);
}
If this is a homework assignment and you want to really impress your instructor, then it's possible to do the merging without a critical section. See this link to get an idea of what to do: fill-histograms-array-reduction-in-parallel-with-openmp-without-using-a-critic, though I can't promise it will be faster.
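As a hedged aside (not from the original answer): with a compiler that supports OpenMP 4.5 array-section reductions, the per-thread buffers and the merge can also be expressed directly in the directive. A sketch, assuming results is a double array of length matrix_size:
// Sketch only: each thread gets a private, zero-initialized copy of
// results[0:matrix_size]; the copies are summed into results at the end.
#pragma omp parallel for reduction(+:results[0:matrix_size])
for(int i = 0; i < matrix_size; i++) {
    for(int y = 0; y < matrix_size; y++) {
        results[y] += vector[i]*matrix[i][y];
    }
}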
I've not done any parallel programming for a while now, nor any Maths for that matter, but don't you want to split the rows of the matrix in parallel, rather than the columns?
What happens if you try this:
double start_time = clock();
#pragma omp parallel private(i) num_threads(4)
{
    tid = omp_get_thread_num();
    world_size = omp_get_num_threads();
    printf("Threads: %d\n",world_size);
    #pragma omp parallel for private(y) shared(results, vector, matrix)
    for(y = 0; y < matrix_size ; y++){
        for(i = 0; i < matrix_size; i++){
            results[y] = results[y] + vector[i]*matrix[i][y];
        }
    }
}
double end_time = clock();
double result_time = (end_time - start_time) / CLOCKS_PER_SEC;
printf("Time: %f\n", result_time);
Also, are you sure everything is OK when compiling and linking with OpenMP?
You have a typical case of cache conflicts.
Consider that a cache line on your CPU is probably 64 bytes long. Having one processor/core write to the first 4 bytes (a float) causes that cache line to be invalidated in every other core's L1/L2 and maybe L3 cache. This is a lot of overhead.
Partition your data better!
#pragma omp parallel for private(i) shared(results, vector, matrix) schedule(static,16)
should do the trick. Increase the chunksize if this does not help.
Another optimisation is to store the result locally before you flush it down to memory.
Also, this is an OpenMP thing, but you don't need to start a new parallel region for the loop (each mention of parallel starts a new team):
#pragma omp parallel default(none) \
                     shared(results, vector, matrix) \
                     firstprivate(matrix_size) \
                     num_threads(4)
{
    int i, y;
    #pragma omp for schedule(static,16)
    for(y = 0; y < matrix_size ; y++){
        double result = 0;
        for(i = 0; i < matrix_size; i++){
            result += vector[i]*matrix[i][y];
        }
        results[y] = result;
    }
}

C OpenMP parallel bubble sort

I have an implementation of a parallel bubble sort algorithm (odd-even transposition sort) in C, using OpenMP. However, after testing it, it is slower than the serial version (by about 10%), although I have a 4-core processor (2 physical x 2 due to Intel Hyper-Threading). I have checked whether the cores are actually used, and I can see each of them at 100% when running the program. Therefore I think I made a mistake in the implementation of the algorithm.
I am using linux with kernel 2.6.38-8-generic.
This is how I compile:
gcc -o bubble-sort bubble-sort.c -Wall -fopenmp or
gcc -o bubble-sort bubble-sort.c -Wall for the serial version
This is how I run it:
./bubble-sort < in_10000 > out_10000
#include <omp.h>
#include <stdio.h>
#include <time.h>
#include <stdlib.h>

int main()
{
    int i, n, tmp, *x, changes;
    int chunk;
    scanf("%d ", &n);
    chunk = n / 4;
    x = (int*) malloc(n * sizeof(int));
    for(i = 0; i < n; ++i)
        scanf("%d ", &x[i]);
    changes = 1;
    int nr = 0;
    while(changes)
    {
        #pragma omp parallel private(tmp)
        {
            nr++;
            changes = 0;
            #pragma omp for \
                reduction(+:changes)
            for(i = 0; i < n - 1; i = i + 2)
            {
                if(x[i] > x[i+1] )
                {
                    tmp = x[i];
                    x[i] = x[i+1];
                    x[i+1] = tmp;
                    ++changes;
                }
            }
            #pragma omp for \
                reduction(+:changes)
            for(i = 1; i < n - 1; i = i + 2)
            {
                if( x[i] > x[i+1] )
                {
                    tmp = x[i];
                    x[i] = x[i+1];
                    x[i+1] = tmp;
                    ++changes;
                }
            }
        }
    }
    return 0;
}
Later edit:
It seems to work well now after I made the changes you suggested. It also scales pretty well (I tested on 8 physical cores too; it took 21 s for a set of 150k numbers, which is far less than on one core). However, if I set the OMP_SCHEDULE environment variable myself, the performance decreases...
You should profile it and check where threads spend time.
One possible reason is that parallel regions are constantly created and destroyed; depending on OpenMP implementation, it could lead to re-creation of the thread pool, though good implementations should probably handle this case.
Some small things to shave off:
- ok seems completely unnecessary; you can just change the loop exit condition to i<n-1;
- the explicit barrier is unnecessary: first, you put it outside the parallel regions, so it makes no sense; and second, OpenMP parallel regions and loops have implicit barriers at the end;
- combine at least the two consecutive parallel regions inside the while loop:
#pragma omp parallel private(tmp)
{
    #pragma omp for bla-bla
    for (i=0; i<n-1; i+=2 ) {...}
    #pragma omp for bla-bla
    for (i=1; i<n-1; i+=2 ) {...}
}
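Going one step further (a sketch of my own, not from the answer above): the parallel region can also be hoisted outside the while loop, so the thread team is created once per sort rather than once per pass. This assumes changes, x, n and tmp keep their roles from the original code:
changes = 1;
#pragma omp parallel private(tmp)
{
    while(changes)
    {
        #pragma omp barrier   /* every thread has read changes before it is reset */
        #pragma omp single
        changes = 0;          /* implicit barrier after the single */

        #pragma omp for reduction(+:changes)
        for(int i = 0; i < n - 1; i += 2)
            if(x[i] > x[i+1]) { tmp = x[i]; x[i] = x[i+1]; x[i+1] = tmp; ++changes; }

        #pragma omp for reduction(+:changes)
        for(int i = 1; i < n - 1; i += 2)
            if(x[i] > x[i+1]) { tmp = x[i]; x[i] = x[i+1]; x[i+1] = tmp; ++changes; }

        /* implicit barrier here: all threads see the reduced changes
           before re-testing the while condition */
    }
}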

Resources