Why is the parallel application taking more time to execute than the single-threaded one? I am using an 8-core machine with Ubuntu 14.04. The code is just my simple way to test omp parallel sections; the aim later is to run two different functions in two different threads at the same time, so I do not want to use #pragma omp parallel for.
The parallel version:
#include <omp.h>

int main()
{
    int k = 0;
    int m = 0;
    omp_set_num_threads(2);

    #pragma omp parallel
    {
        #pragma omp sections
        {
            #pragma omp section
            {
                for( k = 0; k < 1e9; k++ ){};
            }
            #pragma omp section
            {
                for( m = 0; m < 1e9; m++ ){};
            }
        }
    }
    return 0;
}
and the single-threaded version:
int main()
{
    int m = 0;
    int k = 0;
    for( k = 0; k < 1e9; k++ ){};
    for( m = 0; m < 1e9; m++ ){};
    return 0;
}
If the compiler did not optimise the loops away, the parallel code would suffer from false sharing, because m and k are very likely to end up in the same cache line. Make the variables private:
#pragma omp parallel private(k,m)
{
    #pragma omp sections
    {
        #pragma omp section
        {
            for( k = 0; k < 1e9; k++ ){};
        }
        #pragma omp section
        {
            for( m = 0; m < 1e9; m++ ){};
        }
    }
}
At high optimisation levels, the compiler could drop the empty loops altogether. Even then, the parallel version still has the added overhead of spawning the OpenMP worker threads and joining them afterwards, which makes it slower than the sequential version.
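For a meaningful comparison, the loops need to do work the compiler cannot remove, and both versions should be timed. Here is a minimal sketch of such a test (not from the original post): it accumulates into floating-point sums, prints them so the loops cannot be eliminated, and measures the region with omp_get_wtime. The reduction clause gives each thread its own copies of the sums, which also sidesteps the false sharing described above.

#include <stdio.h>
#include <omp.h>

int main()
{
    double sum_k = 0.0, sum_m = 0.0;
    omp_set_num_threads(2);

    double t = omp_get_wtime();
    #pragma omp parallel sections reduction(+:sum_k, sum_m)
    {
        #pragma omp section
        {
            /* real work: a floating-point accumulation the compiler keeps */
            for( long k = 0; k < 1000000000L; k++ ) sum_k += k * 1e-9;
        }
        #pragma omp section
        {
            for( long m = 0; m < 1000000000L; m++ ) sum_m += m * 1e-9;
        }
    }
    t = omp_get_wtime() - t;

    /* printing the sums forces the compiler to keep the loops */
    printf("sum_k = %f, sum_m = %f, time = %f s\n", sum_k, sum_m, t);
    return 0;
}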
In the test code above, the compiler itself optimises the loops away, so you need to change your test code so that the loops do real work. Creating threads also adds an overhead that grows with the number of threads you create.
Also refer to Amdahl's law.
Related
I have a problem parallelizing a for-loop with OpenMP: the result of the parallel for-loop is different from that of the sequential for-loop. How can I make this code parallel and still get the same result as the sequential code?
const long nx = 20;
const long ny = 20;
const long nz = 20;
int i, j, k, a, v;

#pragma omp parallel private(tid_2, i,j,k,a,v) shared(numt_2,nx,ny,nz)
{
    numt_2 = omp_get_num_threads();
    tid_2 = omp_get_thread_num();
    printf("Thread %d Total thread%d\n", tid_2, numt_2);

    #pragma omp parallel for collapse(4) //num_threads(3)
    for (i = 0; i <= nx; i++)
    {
        for (j = 0; j <= ny; j++)
        {
            for (k = 0; k <= nz; k++)
            {
                for (a = 0; a < 19; a++)
                {
                    ff[fineindex(i, j, k, a)] = 0.0;
                    //#pragma omp barrier
                    for (v = 0; v < 19; v++)
                    {
                        ff[fineindex(i, j, k, a)] += Minv2[a][v] * rf[v];
                    }
                }
            }
        }
    }
}
Your outer parallel region makes every thread execute the inner parallel for region, so it is executed multiple times. Assuming there are 8 cores (and hence 8 threads) on your machine, the loop nest is computed 8 times, whereas the sequential version runs it only once.
ff is implicitly shared between the threads of the parallel region, so there can be a data race on ff[fineindex(i, j, k, a)] during the computation. Since all 8 threads work on ff at the same time, two threads may write to the same index of ff, which can lead to an unpredictable result.
To resolve this issue, use omp for instead of omp parallel for for the loops. omp for is just a worksharing construct: it distributes the loop iterations over the threads of the outer parallel region and does not start another parallel region. This way each thread in the outer parallel region handles different loop iterations, as in the sketch below.
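A sketch of what the region could look like after that change, keeping the thread-id printout and assuming ff, rf, Minv2, fineindex, tid_2 and numt_2 are declared as in the question:

#pragma omp parallel private(tid_2)
{
    numt_2 = omp_get_num_threads();
    tid_2 = omp_get_thread_num();
    printf("Thread %d Total thread %d\n", tid_2, numt_2);

    /* worksharing construct: the collapsed iteration space is split
       among the threads of the enclosing region, no nested region */
    #pragma omp for collapse(4)
    for (int i = 0; i <= nx; i++)
        for (int j = 0; j <= ny; j++)
            for (int k = 0; k <= nz; k++)
                for (int a = 0; a < 19; a++)
                {
                    double sum = 0.0;   /* accumulate locally, then store once */
                    for (int v = 0; v < 19; v++)
                        sum += Minv2[a][v] * rf[v];
                    ff[fineindex(i, j, k, a)] = sum;
                }
}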
So this is just a basic program that reads values from a file and then calculates, for each value, the amount of fuel it would take: f = (m/4) - 3. I have it working, but I feel it would run faster if more threads could be doing some of these things at the same time.
Without that critical section I currently get incorrect or changing results. I am wondering if there's a way I can reduce the amount of work in the critical section or otherwise optimize this section further. Thanks in advance for the help!
#pragma omp parallel for reduction(+:fuelUnits) schedule(dynamic)
for (int j = 0; j < count; j++) {
    #pragma omp critical
    {
        i = arr[j];
        //printf("i = %d\n", i);
        mass += i;
        printf("i = %d\n", i);
        fuel2 = fuel(i);
    }
    fuelUnits += fuel2;
    printf("fuel is %d\n", fuel2);
}
What happens if you remove mass += i? Does mass need to be included in your reduction operation as well (e.g. reduction(+:fuelUnits, mass))?
This should work without the critical section:
// somewhere globally:
int fuelUnits = 0;

// your fuel function here:
int fuel(int m) {
    return (m/4) - 3;
}

-- snip --

#pragma omp parallel for reduction(+:fuelUnits) schedule(dynamic)
for (int j = 0; j < count; j++) {
    int i = arr[j];
    fuelUnits += fuel(i);
}
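If mass is also needed after the loop, it can be added to the reduction, as the comment above suggests. A minimal sketch, assuming arr, count and mass are declared as in the question:

#pragma omp parallel for reduction(+:fuelUnits, mass) schedule(dynamic)
for (int j = 0; j < count; j++) {
    int i = arr[j];        // local to each iteration, no shared state
    mass += i;             // reduced across threads
    fuelUnits += fuel(i);  // reduced across threads
}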
I am very new to OpenMP, but I am trying to write a simple program that generates the entries of a matrix in parallel, namely for the N by M matrix A, let A(i,j) = i*j. A minimal example is included below:
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int i, j, N, M;
    N = 20;
    M = 20;

    int* A;
    A = (int*) calloc(N*M, sizeof(int));

    // compute entries of A in parallel
    #pragma omp parallel for shared(A)
    for (i = 0; i < N; ++i){
        for (j = 0; j < M; ++j){
            A[i*M + j] = i*j;
        }
    }

    // print parallel results
    for (i = 0; i < N; ++i){
        for (j = 0; j < M; ++j){
            printf("%d ", A[i*M + j]);
        }
        printf("\n");
    }

    free(A);
    return 0;
}
The results are not always correct. In theory, I am only parallelizing the outer loop, and each iteration of the for loop does not modify the entries that the other iterations modify. But I am not sure how to translate this to OpenMP. When doing a similar procedure for a vector array (i.e. just one for loop), there seems to be no issue, e.g.
#pragma omp parallel for
for (i = 0; i < N; ++i)
{
    v[i] = i*i;
}
Can someone explain to me how to fix this?
The issue in this case is that j is shared between threads, which messes with the control flow of the inner loop. By default, variables declared outside of a parallel region are shared, whereas variables declared inside of a parallel region are private.
Follow the general rule to declare variables as locally as possible. In the for loop this means:
#pragma omp parallel for
for (int i = 0; i < N; ++i) {
    for (int j = 0; j < M; ++j) {
        A[i*M + j] = i*j;
    }
}
This makes reasoning about your code much easier - and OpenMP code mostly correct by default. (Note A is shared by default because it is defined outside).
Alternatively you can manually specify private(i,j) shared(A) - this is more explicit and can help beginners. However it creates redundancy and can also be dangerous: private variables are uninitialized even if they had a valid value outside of the parallel region. Therefore I strongly recommend the implicit default approach unless necessary for advanced usage.
According to, for example, this tutorial:
http://supercomputingblog.com/openmp/tutorial-parallel-for-loops-with-openmp/
declaring variables outside of a parallelized part is dangerous.
This can be defused by explicitly making the loop variable of the inner loop private.
For that, change this
#pragma omp parallel for shared(A)
to
#pragma omp parallel for private(j) shared(A)
The following function reads data from a file in chunks and processes one loaded chunk at a time. To speed this up, I thought of using OpenMP in the for loop so that the work is divided between the threads, as follows:
void read_process(FILE *fp_read, double *centroids, int total) {
    int i, j, c, dim = 16, chunk_size = 10000, num_itr;
    double *buffer = calloc(total * dim, sizeof(double));

    num_itr = total / chunk_size;

    for (c = 0; c < total; ++c) {
        fread(buffer, sizeof(double), chunk_size * dim, fp_read);
        #pragma omp parallel private(i, j)
        {
            #pragma omp for
            for (i = 0; i < chunk_size; i++) {
                for (j = 0; j < dim; j++) {
                    #pragma omp atomic update
                    centroids[j] += buffer[i * dim + j];
                }
            }
        }
    }
    free(buffer);
    fclose(fp_read);
}
Without OpenMP, my code works fine. However, adding the #pragma directives causes the program to stop and print the word Hangup in the terminal without any further explanation of why it hung. Some folks on StackOverflow answered other questions about this error message saying it is probably caused by a race condition, but I don't think that is the case here, because I am using atomic, which serializes the updates. Am I right? Do you see an issue with my code? How can I improve it?
Thank you very much.
What you want to do is an array reduction. If you have a compiler that supports OpenMP 4.5 then you don't need to change your serial code. You can do
#pragma omp parallel for private(j) reduction(+:centroids[:dim])
for(i = 0; i < chunk_size; i++) {
    for(j = 0; j < dim; j++) {
        centroids[j] += buffer[i*dim + j];
    }
}
Otherwise you can do the array reduction by hand. Here is one solution:
#pragma omp parallel private(j)
{
    double tmp[dim];                        /* per-thread partial sums */
    for(j = 0; j < dim; j++) tmp[j] = 0.0;  /* a VLA cannot be initialized with {0} */

    #pragma omp for
    for(i = 0; i < chunk_size; i++) {
        for(j = 0; j < dim; j++) {
            tmp[j] += buffer[i*dim + j];
        }
    }

    #pragma omp critical
    for(int i = 0; i < dim; i++) centroids[i] += tmp[i];
}
Your current solution causes massive false sharing, as all threads write to the same cache lines of centroids. Both of the solutions above fix this problem by giving each thread a private version of centroids.
As long as dim << chunk_size, these are good solutions.
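For completeness, here is a sketch of how the OpenMP 4.5 reduction could be folded back into read_process. It assumes the outer loop is meant to run num_itr = total / chunk_size times (the posted version loops up to total) and that only one chunk needs to be resident at a time, so the buffer is sized for a single chunk:

void read_process(FILE *fp_read, double *centroids, int total) {
    int i, j, c, dim = 16, chunk_size = 10000, num_itr;
    double *buffer = calloc((size_t)chunk_size * dim, sizeof(double));

    num_itr = total / chunk_size;   /* assumes total is a multiple of chunk_size */
    for (c = 0; c < num_itr; ++c) {
        fread(buffer, sizeof(double), (size_t)chunk_size * dim, fp_read);

        /* each thread reduces into its own private copy of centroids[0..dim-1];
           the copies are combined once at the end of the worksharing loop */
        #pragma omp parallel for private(j) reduction(+:centroids[:dim])
        for (i = 0; i < chunk_size; i++)
            for (j = 0; j < dim; j++)
                centroids[j] += buffer[i * dim + j];
    }
    free(buffer);
    fclose(fp_read);
}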
Disclaimer: the following is just a dummy example to quickly understand the problem. If you are thinking about a real-world problem, think of anything in dynamic programming.
The problem:
We have an n*m matrix, and we want to copy elements from the previous row, as in the following code:
for (i = 1; i < n; i++)
    for (j = 0; j < m; j++)
        x[i][j] = x[i-1][j];
Approach:
The outer loop iterations have to be executed in order, so they will be executed sequentially.
The inner loop can be parallelized. We want to minimize the overhead of creating and killing threads, so we would like to create the team of threads just once; however, this seems like an impossible task in OpenMP:
#pragma omp parallel private(j)
{
    for (i = 1; i < n; i++)
    {
        #pragma omp for schedule(dynamic)
        for (j = 0; j < m; j++)
            x[i][j] = x[i-1][j];
    }
}
When we apply the ordered option to the outer loop, the code is executed in a sequential way, so there is no performance gain.
I am looking for a solution to the scenario above, even if I have to use some workaround.
I am adding my actual code. It is actually slower than the sequential version. Please review:
/* load input */
for (i = 1; i <= n; i++)
    scanf ("%d %d", &in[i][W], &in[i][V]);

/* init */
for (i = 0; i <= wc; i++)
    a[0][i] = 0;

/* compute */
#pragma omp parallel private(i,w)
{
    for(i = 1; i <= n; ++i) // 1 000 000
    {
        j = i%2;
        jn = j == 1 ? 0 : 1;

        #pragma omp for
        for(w = 0; w <= in[i][W]; w++) // 1000
            a[j][w] = a[jn][w];

        #pragma omp for
        for(w = in[i][W]+1; w <= wc; w++) // 350 000
            a[j][w] = max(a[jn][w], in[i][V] + a[jn][w-in[i][W]]);
    }
}
As for measuring, I am using something like this:
double t;
t = omp_get_wtime();
// ...
t = omp_get_wtime() - t;
To sum up the parallelization in OpenMP for this particular case: It is not worth it.
Why?
The operations in the inner loops are simple. The code was compiled with -O3, so the max() call was probably inlined, i.e. substituted with the body of the function.
The overhead of the implicit barriers is probably high enough to cancel out the performance gain, and the overall overhead is high enough to make the parallel code even slower than the sequential code was.
I also found out that there is no real performance gain in a construct like this:
#pragma omp parallel private(i,j)
{
    for (i = 1; i < n; i++)
    {
        #pragma omp for
        for (j = 0; j < m; j++)
            x[i][j] = x[i-1][j];
    }
}
because its performance is similar to that of this one:
for (i = 1; i < n; i++)
{
    #pragma omp parallel for private(j)
    for (j = 0; j < m; j++)
        x[i][j] = x[i-1][j];
}
thanks to the built-in thread reuse in GCC's libgomp, according to this article: http://bisqwit.iki.fi/story/howto/openmp/
Since the outer loop cannot be parallelized (without the ordered option), it looks like there is no way to significantly improve the performance of the program in question using OpenMP. If someone feels I did something wrong and that it is possible, I will be glad to see and test the solution.