No result using openmp - c

I'm trying to add all the members of an array using OpenMP this way:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main (int argc, char *argv[])
{
    int v[] = {1,2,3,4,5,6,7,8,9};
    int sum = 0;
    #pragma omp parallel private(v, sum)
    {
        #pragma reduction(+: sum)
        {
            for (int i = 0; i < sizeof(v)/sizeof(int); i++){
                sum += v[i];
            }
        }
    }
    printf("%d\n",sum);
}
But when I print sum, the result is 0.

You are very confused about data-sharing attributes and work-sharing in OpenMP. This answer does not attempt to teach them to you properly, but only gives you a concise, specific example.
Your code does not make any sense and does not compile.
You do not need multiple regions or suchlike, and there are only two variables. v, which is defined outside, is read by all threads and must be shared; it implicitly is, because it is defined outside the parallel region. Then there is sum, which is a reduction variable.
Further, you need to apply worksharing (for) to the loop. So in the end it looks like this:
int v[] = {1,2,3,4,5,6,7,8,9};
int sum = 0;
#pragma omp parallel for reduction(+: sum)
for (int i = 0; i < sizeof(v)/sizeof(int); i++){
    sum += v[i];
}
printf("%d\n",sum);
Note there are no private variables in this example. Private variables are very dangerous because they are uninitialized inside the parallel region; simply don't use them explicitly. If you need something local, declare it inside the parallel region.
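If you do need a thread-local helper, a minimal sketch of the declare-it-inside style (the variable name tmp here is purely illustrative):
#pragma omp parallel for reduction(+: sum)
for (int i = 0; i < sizeof(v)/sizeof(int); i++){
    int tmp = v[i];   /* declared inside the loop: local to each iteration, no private clause needed */
    sum += tmp;
}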

Related

Is using pragma omp simd like this correct?

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define pow(x) ((x) * (x))
#define NUM_THREADS 8
#define wmax 1000
#define Nv 2
#define N 5

int b = 0;
float Points[N][Nv] = { {0,1}, {3,4}, {1,2}, {5,1}, {8,9} };
float length[wmax+1] = {0};

float EuclDist(float* Ne, float* Pe) {
    int i;
    float s = 0;
    for (i = 0; i < Nv; i++) {
        s += pow(Ne[i] - Pe[i]);
    }
    return s;
}

void DistanceFinder(float* a[]){
    int i;
    #pragma omp simd
    for (i=1;i<N+1;i++){
        length[b] += EuclDist(&a[i],&a[i-1]);
    }
    //printf(" %f\n", length[b]);
}

void NewRoute(){
    //some irrelevant things
    DistanceFinder(Points);
}

int main(){
    omp_set_num_threads(NUM_THREADS);
    do{
        b+=1;
        NewRoute();
    } while (b<wmax);
}
Trying to parallelize this loop, I tried different things, and this one seems to be the fastest. However, is it correct to use SIMD like that, given that I'm using a previous iteration (i and i - 1)? Weirdly or not, the results I see are correct.
However, is it correct to use SIMD like that?
First, there is a race condition that needs to be fixed, namely the updates of the array element length[b]. Moreover, you are accessing memory outside the array a (iterating from 1 to N + 1), and you are passing &a[i] instead of a[i]. You can fix the race condition by using the OpenMP reduction clause:
void DistanceFinder(float* a[]){
    int i;
    float sum = 0;
    float tmp;
    #pragma omp simd private(tmp) reduction(+:sum)
    for (i=1;i<N;i++){
        tmp = EuclDist(a[i], a[i-1]);
        sum += tmp;
    }
    length[b] += sum;
}
Furthermore, you need to provide a version of EuclDist as follows:
#pragma omp declare simd uniform(Ne, Pe)
float EuclDist(float* Ne, float* Pe) {
    int i;
    float s = 0;
    for (i = 0; i < Nv; i++)
        s += pow(Ne[i] - Pe[i]);
    return s;
}
Because I'm using a previous iteration (i and i - 1).
In your case, it is okay, since the array a is just being read.
Weirdly or not, the results I see are correct.
Very likely, there was no vectorization taking place. Regardless, it would still be undefined behavior due to the aforementioned race condition.
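As a side note, you can ask the compiler to report whether the loop was actually vectorized. With GCC, for instance (flag names vary between compilers and versions, so treat this as a hedged example), something like
gcc -O3 -fopenmp-simd -fopt-info-vec program.c
will print a note for each loop it managed to vectorize.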
You can simplify your code so that it increases the likelihood of the vectorization actually happening, for instance:
void DistanceFinder(float* a[]){
    int i;
    float sum = 0;
    float tmp;
    #pragma omp simd private(tmp) reduction(+:sum)
    for (i=1;i<N;i++){
        tmp = pow(a[i][0] - a[i-1][0]) + pow(a[i][1] - a[i-1][1]);
        sum += tmp;
    }
    length[b] += sum;
}
A further change you can make to improve the performance of your code is to allocate the matrix (that is passed as a parameter to the function DistanceFinder) in such a manner that iterating over its rows (i.e., a[i]) means iterating over contiguous memory addresses.
For instance, you could pass two arrays a1 and a2 representing the first and second columns of the matrix a:
void DistanceFinder(float a1[], float a2[]){
    int i;
    float sum = 0;
    float tmp;
    #pragma omp simd private(tmp) reduction(+:sum)
    for (i=1;i<N;i++){
        tmp = pow(a1[i] - a1[i-1]) + pow(a2[i] - a2[i-1]);
        sum += tmp;
    }
    length[b] += sum;
}
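A minimal sketch of what the call site could look like under that layout (the arrays Xs and Ys are hypothetical stand-ins for the two columns of Points):
/* Hypothetical column-wise storage of the same points. */
float Xs[N] = {0, 3, 1, 5, 8};
float Ys[N] = {1, 4, 2, 1, 9};

void NewRoute(){
    DistanceFinder(Xs, Ys);
}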

With openmp in C, how can I parallelize a for loop that contains a nested comparison function for qsort?

I want to parallelize a for loop which contains a nested comparison function for qsort:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(){
    int i;
    #pragma omp parallel for
    for(i = 0; i < 100; i++){
        int *index = (int *) malloc(sizeof(int)*10);
        double *tmp_array = (double*) malloc(sizeof(double)*10);
        int j;
        for(j=0; j<10; j++){
            tmp_array[j] = rand();
            index[j] = j;
        }
        // QuickSort the index array based on tmp_array:
        int simcmp(const void *a, const void *b){
            int ia = *(int *)a;
            int ib = *(int *)b;
            if ((tmp_array[ia] - tmp_array[ib]) > 1e-12){
                return -1;
            }else{
                return 1;
            }
        }
        qsort(index, 10, sizeof(*index), simcmp);
        free(index);
        free(tmp_array);
    }
    return 0;
}
When I try to compile this, I get the error:
internal compiler error: in get_expr_operands, at tree-ssa-operands.c:881
}
As far as I can tell, this error is due to the nested comparison function. Is there a way to make OpenMP work with this nested comparison function? If not, is there a good way to achieve a similar result without a nested comparison function?
Edit:
I'm using GNU C compiler where nested functions are permitted. The code compiles and runs fine without the pragma statement. I can't define simcmp outside of the for loop because tmp_array would then have to be a global variable, which would mess up the multi-threading. However, if somebody has a suggestion to achieve the same result without a nested function, that would be most welcome.
I realize this has been self-answered, but here are some standard C and OpenMP options. The qsort_r function is a good classic choice, but it's worth noting that qsort_s is part of the C11 standard, and thus is portable wherever C11 is offered (which does not include Windows; Microsoft's compiler doesn't quite offer C99 yet).
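For reference, a sketch of what the C11 Annex K variant could look like, assuming an implementation that actually provides the optional Annex K interfaces (the comparator there receives the context pointer as a trailing argument):
#define __STDC_WANT_LIB_EXT1__ 1
#include <stdlib.h>

/* Comparator for qsort_s: the context pointer arrives as the third argument. */
int simcmp_s(const void *a, const void *b, void *context){
    const double *keys = context;
    int ia = *(const int *)a;
    int ib = *(const int *)b;
    return (keys[ia] - keys[ib]) > 1e-12 ? -1 : 1;
}

/* ... and the call becomes:
 *   qsort_s(index, 10, sizeof(*index), simcmp_s, tmp_array);
 */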
As to doing it in OpenMP without the nested comparison function, still using original qsort, there are two ways that come to mind. First is to use the classic global variable in combination with OpenMP threadprivate:
#include <stdlib.h>

static int *index = NULL;
static double *tmp_array = NULL;
#pragma omp threadprivate(index, tmp_array)

int simcmp(const void *a, const void *b){
    int ia = *(int *)a;
    int ib = *(int *)b;
    double aa = tmp_array[ia];
    double bb = tmp_array[ib];
    if ((aa - bb) > 1e-12){
        return -1;
    }else{
        return 1;
    }
}

int main(){
    int i;
    #pragma omp parallel for
    for(i = 0; i < 100; i++){
        index = (int *) malloc(sizeof(int)*10);
        tmp_array = (double*) malloc(sizeof(double)*10);
        int j;
        for(j=0; j<10; j++){
            tmp_array[j] = rand();
            index[j] = j;
        }
        // QuickSort the index array based on tmp_array:
        qsort(index, 10, sizeof(*index), simcmp);
        free(index);
        free(tmp_array);
    }
    return 0;
}
The version above causes every thread in the parallel region to use a private copy of the global variables index and tmp_array, which takes care of the issue. This is probably the most portable version you can write in standard C and OpenMP, with the only likely incompatible platforms being those that do not implement thread local memory (some microcontrollers, etc.).
If you want to avoid the global variable and still have portability and use OpenMP, then I would recommend using C++11 and the std::sort algorithm with a lambda:
std::sort(index, index+10, [=](const int& a, const int& b){
    // std::sort expects a bool "less-than" predicate rather than a
    // -1/1 comparator; returning -1 or 1 here would both convert to true.
    return (tmp_array[a] - tmp_array[b]) > 1e-12;
});
I solved my problem with qsort_r, which allows you to pass an additional pointer to the comparison function.
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int simcmp(const void *a, const void *b, void *tmp_array){
    int ia = *(int *)a;
    int ib = *(int *)b;
    double aa = ((double *)tmp_array)[ia];
    double bb = ((double *)tmp_array)[ib];
    if ((aa - bb) > 1e-12){
        return -1;
    }else{
        return 1;
    }
}

int main(){
    int i;
    #pragma omp parallel for
    for(i = 0; i < 100; i++){
        int *index = (int *) malloc(sizeof(int)*10);
        double *tmp_array = (double*) malloc(sizeof(double)*10);
        int j;
        for(j=0; j<10; j++){
            tmp_array[j] = rand();
            index[j] = j;
        }
        // QuickSort the index array based on tmp_array:
        qsort_r(index, 10, sizeof(*index), simcmp, tmp_array);
        free(index);
        free(tmp_array);
    }
    return 0;
}
This compiles and runs with no issue. However, it is not completely ideal, as qsort_r is platform- and compiler-dependent. There is a portable version of qsort_r here, where the author summarizes my problem nicely:
If you want to qsort() an array with a comparison operator that takes parameters you need to use global variables to pass those parameters (not possible when writing multithreaded code), or use qsort_r/qsort_s which are not portable (there are separate GNU/BSD/Windows versions and they all take different arguments).
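Another fully portable option is to avoid the out-of-band context altogether and sort an array of small structs that carry both the index and its key, using plain qsort; a sketch (the pair_t type and field names are just illustrative):
#include <stdlib.h>

/* Pair each index with the key it should be sorted by. */
typedef struct {
    int    idx;
    double key;
} pair_t;

int paircmp(const void *a, const void *b){
    const pair_t *pa = a;
    const pair_t *pb = b;
    return (pa->key - pb->key) > 1e-12 ? -1 : 1;
}

/* ... inside the loop:
 *   pair_t pairs[10];
 *   for (j = 0; j < 10; j++){ pairs[j].idx = j; pairs[j].key = tmp_array[j]; }
 *   qsort(pairs, 10, sizeof(*pairs), paircmp);
 * The sorted order of pairs[j].idx matches the original index array. */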

Why am I getting incorrect results from an OpenMP program?

I'm writing a simple example to understand how things work in OpenMP programs.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <omp.h>

int main (int argc, char* argv[]){
    omp_set_num_threads(4);
    int j = 0;
    #pragma omp parallel private(j)
    {
        int i;
        for(i=1;i<2;i++){
            printf("from thread %d : i is equal to %d and j is equal to %d\n", omp_get_thread_num(), i, j);
        }
    }
}
So in this example I should get j == 0 each time; unfortunately, the result is j == 0 three times and j == 32707 one time.
What is wrong with my example?
Use firstprivate(j) rather than private(j) if you want each thread to have a private copy of j whose initial value is the value j had before entering the parallel region.
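In other words, only the pragma line needs to change:
#pragma omp parallel firstprivate(j)   /* each thread's copy of j starts at 0 */
With firstprivate, each thread's copy is initialized from the j defined before the region, whereas private leaves it uninitialized.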

Multithreading in C: mutexes

My code does the following: it creates N threads, each of which increments the global variable counter M times. I am using a mutex in order to ensure that the final value of counter is M*N.
I would like to observe the situation without a mutex and obtain a different value for counter, in order to properly assess the mutex's work. I commented out the mutex, but the results are the same. Should I put the threads to sleep for a random period of time? How should I proceed?
#include <stdio.h>
#include <pthread.h>

#define N 10
#define M 4

pthread_mutex_t mutex;
int counter = 0;

void *thread_routine(void *parameter)
{
    pthread_mutex_lock(&mutex);
    int i;
    for (i=0; i<M; i++)
        counter++;
    pthread_mutex_unlock(&mutex);
    return NULL;
}

int main(void)
{
    pthread_t v[N];
    int i;
    pthread_mutex_init(&mutex,NULL);
    for (i=0; i<N; i++)
    {
        pthread_create(&v[i],NULL,thread_routine,NULL);
    }
    for (i=0; i<N; i++)
    {
        pthread_join(v[i],NULL);
    }
    printf("%d %d\n",counter,N*M);
    if (N*M==counter)
        printf("Success!\n");
    pthread_mutex_destroy(&mutex);
    return 0;
}
I don't know what compiler you used, but g++ in this case may completely eliminate the threads and calculate the final value of counter at compile time.
To prevent that optimization you can make the counter variable volatile:
volatile int counter = 0;
This tells the compiler that the variable can be changed at any time by external resources, so it is forced not to perform any optimizations on that variable that could have side effects. Since an external resource could change the value, the final value might not be N*M, and therefore the value of counter must be calculated at run-time.
Also what WhozCraig stated in his comment will most likely apply in your case. But I think he meant M, not N.
Additionally, regarding your original question: since you read counter only once all threads are joined, it might be worth giving each thread its own counter and summing all the threads' counters when they have finished computing. This way you can compute the final value without any locks or atomic operations, as sketched below.
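A minimal sketch of that idea, assuming the same N and M as above (the local_counts array and the pointer-argument scheme are just illustrative):
#include <stdio.h>
#include <pthread.h>

#define N 10
#define M 4

int local_counts[N];   /* one counter per thread, nothing is shared */

void *thread_routine(void *parameter)
{
    int *my_count = parameter;   /* each thread receives its own slot */
    int i;
    for (i = 0; i < M; i++)
        (*my_count)++;
    return NULL;
}

int main(void)
{
    pthread_t v[N];
    int i, counter = 0;
    for (i = 0; i < N; i++)
        pthread_create(&v[i], NULL, thread_routine, &local_counts[i]);
    for (i = 0; i < N; i++)
        pthread_join(v[i], NULL);
    for (i = 0; i < N; i++)   /* sum the per-thread counters once all threads are done */
        counter += local_counts[i];
    printf("%d %d\n", counter, N*M);
    return 0;
}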
Edit:
Your first test with mutex would look like this
#define N 10
#define M 10000000

pthread_mutex_t mutex;
volatile int counter = 0;

void *thread_routine(void *parameter)
{
    pthread_mutex_lock(&mutex);
    int i;
    for (i=0; i<M; i++)
        counter++;
    pthread_mutex_unlock(&mutex);
    return NULL;
}
and your second test without mutex like this
#define N 10
#define M 10000000

pthread_mutex_t mutex;
volatile int counter = 0;

void *thread_routine(void *parameter)
{
    // pthread_mutex_lock(&mutex);
    int i;
    for (i=0; i<M; i++)
        counter++;
    // pthread_mutex_unlock(&mutex);
    return NULL;
}
The second test will exhibit the expected race conditions when incrementing the counter variable.
Compilation can be done using gcc -O3 test.c -pthread

OpenMP gathering data (join data?) after parallel for

What I am looking for is the best way to gather all the data from the parallel for loops into one variable. OpenMP seems to have a different routine than I am used to seeing, as I started learning OpenMPI first, which has scatter and gather routines.
Calculating PI (embarrassingly parallel routine)
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_STEPS 100
#define CHUNKSIZE 20

int main(int argc, char *argv[])
{
    double step, x, pi, sum = 0.0;
    int i, chunk;
    chunk = CHUNKSIZE;
    step = 1.0/(double)NUM_STEPS;

    #pragma omp parallel shared(chunk) private(i,x,sum,step)
    {
        #pragma omp for schedule(dynamic,chunk)
        for(i = 0; i < NUM_STEPS; i++)
        {
            x = (i+0.5)*step;
            sum = sum + 4.0/(1.0+x*x);
            printf("Thread %d: i = %i sum = %f \n", omp_get_thread_num(), i, sum);
        }
        pi = step * sum;
    }
}
EDIT: It seems that I could use an array sum[NUM_STEPS / CHUNKSIZE] and sum the array into one value, or would it be better to use some sort of blocking routine to sum the product of each iteration?
Add this clause to your #pragma omp parallel ... statement:
reduction(+ : pi)
Then just do pi += step * sum; at the end of the parallel region. (Notice the plus!) OpenMP will then automagically sum up the partial sums for you.
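A sketch of the question's region with that change applied (note that each thread's sum must start at 0 and step must be initialized in every thread, e.g. via firstprivate, which the original private clause did not do):
#pragma omp parallel private(i, x) firstprivate(step) reduction(+ : pi)
{
    double sum = 0.0;   /* per-thread partial sum, declared locally */
    #pragma omp for schedule(dynamic, chunk)
    for (i = 0; i < NUM_STEPS; i++)
    {
        x = (i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }
    pi += step * sum;   /* the per-thread partial results are reduced into pi */
}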
Let's see, I am not quite sure what happens, because I haven't got deterministic behaviour on the finished application, but I have something that resembles π. I removed the #pragma omp parallel shared(chunk) and changed the #pragma omp for schedule(dynamic,chunk) to #pragma omp parallel for schedule(dynamic) reduction(+:sum).
#pragma omp parallel for schedule(dynamic) reduction(+:sum)
This requires some explanation: I removed the schedule's chunk size just to make it all simpler (for me). The part that you are interested in is reduction(+:sum), which is a normal reduce operation with the operator + applied to the variable sum.
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_STEPS 100

int main(int argc, char *argv[])
{
    double step, pi, sum = 0.0;
    int i;
    step = 1.0/(double)NUM_STEPS;

    #pragma omp parallel for schedule(dynamic) reduction(+:sum)
    for(i = 0; i < NUM_STEPS; i++)
    {
        double x = (i+0.5)*step;   /* declared inside the loop so each iteration has its own copy */
        sum += 4.0/(1.0+x*x);
        printf("Thread %d: i = %i sum = %f\n", omp_get_thread_num(), i, sum);
    }
    pi = step * sum;
    printf("pi=%lf\n", pi);
}
