This might be a trivial question, but I'm encountering some issues with the parallelization of the following section of my code. I hope someone can help clear up any issues (if there are any). By the way, the code runs perfectly in serial. I will start by presenting the code:
#pragma omp parallel for shared(P,Q,WE,CEx,EA,Pn,Wxlim) private(i,j,ij)
for(j=0;j<m;j++)
{
    P[j] = -EA[j]/(CEx[j]+1.0E-20);
    Q[j] = Pn[j]/(CEx[j]+1.0E-20);
    for(i=1;i<(int)Wxlim[j];i++)
    {
        ij = (i*m)+j;
        P[ij] = -EA[ij]/((WE[ij]*P[(ij)-m]) + CEx[ij]+1.0E-20);
        Q[ij] = (Pn[ij]-WE[ij]*Q[(ij)-m])/((WE[ij]*P[(ij)-m]) + CEx[ij]+1.0E-20);
    }
}
The code seems to run fine for a little while, then hits a segmentation fault at some point, and I'm not sure why. I only want the j loop to be parallel; I want the i loop to be run in serial. In other words, for each j I want a single thread to calculate the i loop. As you can see there is a dependency within the i loop, but each i loop as a whole is independent for a given j. That is why I want to parallelize the outer loop and have each inner loop run serially on the thread that owns that j.
For starters, do I have this set up correctly to do as I intend? I should note that m is much larger than the number of threads. And again, the code runs fine in serial, so I know it has nothing to do with the variables themselves.
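For reference, here is a sketch of the same loop written with the index variables declared in C99 style inside the loop, which makes their privacy automatic and removes the need for the private clause. It does not change the arithmetic and is not by itself a fix for the crash, but it rules out accidental sharing of the indices:
#pragma omp parallel for shared(P,Q,WE,CEx,EA,Pn,Wxlim)
for(int j=0;j<m;j++)
{
    /* each thread handles complete j columns; the recurrence in i stays serial */
    P[j] = -EA[j]/(CEx[j]+1.0E-20);
    Q[j] = Pn[j]/(CEx[j]+1.0E-20);
    for(int i=1;i<(int)Wxlim[j];i++)
    {
        int ij = (i*m)+j;   /* ij is local to each iteration, hence private */
        P[ij] = -EA[ij]/((WE[ij]*P[ij-m]) + CEx[ij]+1.0E-20);
        Q[ij] = (Pn[ij]-WE[ij]*Q[ij-m])/((WE[ij]*P[ij-m]) + CEx[ij]+1.0E-20);
    }
}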
I would like to implement OpenMP to parallelize my code. I am starting from a very basic example to understand how it works, but I am missing something...
So, my example looks like this, without parallelization:
int main() {
...
for (i = 0; i < n-1; i++) {
u[i+1] = (1+h)*u[i]; // Euler
v[i+1] = v[i]/(1-h); // implicit Euler
}
...
return 0;
}
I omitted some parts in the "..." because they are not relevant. It works, and if I print the u[] and v[] arrays to a file, I get the expected results.
Now, if I try to parallelize it just by adding:
#include <omp.h>
int main() {
...
omp_set_num_threads(2);
#pragma omp parallel for
for (i = 0; i < n-1; i++) {
u[i+1] = (1+h)*u[i]; // Euler
v[i+1] = v[i]/(1-h); // implicit Euler
}
...
return 0;
}
The code compiles and the program runs, BUT the u[] and v[] arrays are half full of zeros.
If I set omp_set_num_threads( 4 ), I get three quarters of zeros.
If I set omp_set_num_threads( 1 ), I get the expected result.
So it looks like only the first thread is being executed, while the other ones are not...
What am I doing wrong?
OpenMP assumes that each iteration of a loop is independent of the others. When you write this:
for (i = 0; i < n-1; i++) {
u[i+1] = (1+h)*u[i]; // Euler
v[i+1] = v[i]/(1-h); // implicit Euler
}
The iteration i of the loop is modifying iteration i+1. Meanwhile, iteration i+1 might be happening at the same time.
Unless you can make the iterations independent, this isn't a good use-case for parallelism.
And, if you think about what Euler's method does, it should be obvious that the code you're working on cannot be parallelized in this way. Euler's method calculates the state of a system at time t+1 based on the state at time t. Since you cannot know the state at t+1 without first knowing the state at t, there's no way to parallelize across the iterations of Euler's method.
u[i+1] = (1+h)*u[i];
v[i+1] = v[i]/(1-h);
is equivalent to
u[i] = pow((1+h), i)*u[0];
v[i] = v[0]*pow(1.0/(1-h), i);
therefore you can parallelize your code like this:
#pragma omp parallel for
for (int i = 0; i < n; i++) {
u[i] = pow((1+h), i)*u[0];
v[i] = v[0]*pow(1.0/(1-h), i);
}
If you want to mitigate the cost of the pow function, you can call it once per thread rather than once per iteration, like this (since nt << n):
#pragma omp parallel
{
    int nt = omp_get_num_threads();
    int t  = omp_get_thread_num();
    int s  = (t+0)*n/nt;
    int f  = (t+1)*n/nt;
    u[s] = pow((1+h), s)*u[0];
    v[s] = v[0]*pow(1.0/(1-h), s);
    for(int i=s; i<f-1; i++) {
        u[i+1] = (1+h)*u[i];
        v[i+1] = v[i]/(1-h);
    }
}
You can also write your own pow(double, int) function optimized for integer powers.
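For instance, a minimal sketch of such a function (exponentiation by squaring; the name ipow is made up here), assuming a non-negative exponent:
/* integer power by repeated squaring: O(log exp) multiplications */
double ipow(double base, int exp)
{
    double result = 1.0;
    while (exp > 0) {
        if (exp & 1)          /* this bit of the exponent contributes a factor */
            result *= base;
        base *= base;         /* square the base for the next bit */
        exp >>= 1;
    }
    return result;
}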
Note that the relationship I used is not in fact 100% equivalent, because floating-point arithmetic is not associative. That's not usually a problem, but it's something one should be aware of.
Before parallelizing your code you must identify its concurrency, i.e. the set of tasks that can logically happen at the same time, and then figure out a way to make them actually happen in parallel.
As mentioned above, this is not a good example to apply parallelism to, because there is no concurrency in its nature. Attempting to force parallelism onto it leads to wrong results, due to so-called race conditions.
If you just want to learn how OpenMP works, try to come up with examples where you can clearly identify conceptually independent tasks. One of the simplest I can think of is computing the area under a curve by means of integration, as in the sketch below.
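For instance, here is a minimal sketch of that kind of exercise (a midpoint-rule approximation of the integral of x*x over [0,1]; the integrand and interval are arbitrary choices): every iteration is independent, and only the final sum needs coordination, which the reduction clause handles.
#include <stdio.h>
#include <omp.h>

int main(void)
{
    const int n = 1000000;
    const double h = 1.0 / n;
    double sum = 0.0;

    /* each iteration evaluates the integrand at its own midpoint;
       reduction(+:sum) combines the per-thread partial sums safely */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        double x = (i + 0.5) * h;
        sum += x * x;
    }

    printf("area = %f\n", sum * h);   /* expected: about 1/3 */
    return 0;
}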
Welcome to the parallel ( or "just"-concurrent ) plurality of computing realities.
Why?
Any non-sequential schedule of processing the loop will suffer from hidden (i.e. not correctly handled) breaches of data-access and data-value integrity over time.
A pure-[SERIAL] flow of processing is free from such dangers, because the strictly serialised steps impose an order (nothing but one step after another) in which there is no chance of "touching" the same memory location more than once at the same time.
This peace of mind is inadvertently lost once a process goes "just"-[CONCURRENT] or truly [PARALLEL].
Suddenly the effective order is almost random (in the "just"-[CONCURRENT] case) or collapses into a single "immediate" step with no meaningful order at all (in the true-[PARALLEL] case -- like a robot with six degrees of freedom that reaches each trajectory point by driving all six axes in parallel, not one after another in a pure-[SERIAL] manner, and not some-now-some-later in a "just"-[CONCURRENT] manner; were it otherwise, the 3D trajectory of the robot arm would become hardly predictable and collisions on a car assembly line would be frequent).
Solution:
Use either a defensive tool, called atomic operations, or a principled approach: design a lock-free algorithm where possible, or explicitly signal and coordinate the reads and writes (sure, at a cost in extra time and degraded performance), so as to guarantee that values do not get damaged into inconsistent digital trash. Without protective steps that ensure all "old" writes get safely through before any "next" read goes ahead to grab its value, exactly the kind of failure demonstrated above occurs.
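As an illustration of the defensive-tool option, here is a minimal sketch (the histogram filling is an invented example, not code from the question): the atomic directive makes each shared increment one indivisible read-modify-write, so concurrent updates cannot be lost.
#include <omp.h>

int main(void)
{
    enum { N = 1000000 };
    static unsigned char data[N];    /* invented input data */
    int histogram[256] = {0};

    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(i * 7u);

    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        #pragma omp atomic           /* only this single update is serialised */
        histogram[data[i]]++;
    }
    return 0;
}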
Epilogue:
Using a tool like OpenMP on a problem where it cannot bring any advantage results in wasted time and decreased performance: all the tool-related overheads still have to be paid, while there is literally zero net effect from parallelism when the algorithm does not allow any parallelism to be exploited. One ends up paying far more than one gets.
A good place to learn about OpenMP best practices is the material from Lawrence Livermore National Laboratory (indeed very competent) and similar publications on using OpenMP.
I have the following piece of code that I would like to write in OpenMP.
My code abstractly looks like the following
I start by dividing N = 100 iterations equally among p = 10 pieces, and I store the iterations allocated to each piece in a vector:
Nvec[1]={0,1,..,9}
Nvec[2]={10,11,..,19}
Nvec[p]={N-9,..,N}
Then I loop over the pieces:
for(k=0;k<p;k++){            // loop over each piece of Nvec
    for(j=0;j<2;j++){        // here is a nested loop
        for(i=Nvec[k][0];i<Nvec[k][p];i++){
            // then I loop between the first and
            // last value of the array corresponding to piece k
        }
    }
}
Now, as you can see, the code is sequential, with a total of 2*100 = 200 iterations. I wanted to parallelize it using OpenMP, with the absolute condition of keeping the order of the iterations!
I tried the following
#pragma omp parallel for schedule(static) collapse(2)
for(j=0;j<2;j++){
    for(i=0;i<n;i++){
        // loop code here
    }
}
This setting doesn't keep the order of the iterations as in the sequential version.
In the sequential version, each chunk is processed entirely with j=0 and then entirely with j=1.
In my OpenMP version, every thread takes a chunk of iterations and processes it entirely with either j=0 or j=1. With p=10, every worker processes 200/10 = 20 iterations, but the problem is that all of a given worker's iterations have j=0 or all have j=1.
How can I make sure that every thread gets a chunk of iterations, performs the loop code with j=0 on all of its iterations, and then with j=1 on the same chunk of iterations?
EDIT
What I want, exactly, for every chunk of 20 iterations:
worker 1
j:0
i:1--->10
j:1
i:1--->10
worker p
j:0
i:90--->99
j:1
i:90--->99
The OpenMP code above does:
worker 1
j:0
i:1--->20
worker p
j:1
i:80--->99
It's actually simple - just make the outer j-loop non-worksharing:
#pragma omp parallel
for (int j = 0; j < 2; j++) {
#pragma omp for schedule(static)
for (int i = 0; i < 10; i++) {
...
}
}
If you use the static schedule, OpenMP guarantees that each worker will handle the same range of i values for both j=0 and j=1.
Note: Moving the parallel construct to the outer loop is merely an optimization to avoid thread-management overhead. The code works similarly if you just place a parallel for between the two loops, as in the sketch below.
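For completeness, a sketch of that second variant; the implicit barrier at the end of each parallel region guarantees that all j=0 work finishes before any j=1 work starts:
for (int j = 0; j < 2; j++) {
    /* a parallel region is opened and joined for every j, which costs a bit
       more than the version above but gives the same ordering guarantee */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < 10; i++) {
        ...
    }
}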
I am trying to parallelize a loop in my program, so I read up on multi-threading. First I took a look at a POSIX multithreaded programming tutorial; it was so complicated that I tried to do something easier, with OpenMP. I have successfully parallelized my code, but the execution time got worse than in the serial case. Below is a portion of my program. I hope you can tell me what the problem is. Should I specify which variables are shared and which are private, and how can I know the kind of each variable? I have searched many forums and I still don't know what to do.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <omp.h>
#define D 0.215 // magnetic dipolar constant
int main()
{
    int i,j,n,p,NTOT = 1600,Nc = NTOT-1;
    float r[2],spin[2*NTOT],w[2],d;
    double E,F,V,G,dU;
    .
    .
    .
    for(n = 1; n <= Nc; n++){
        fscanf(voisins,"%d%d%f%f%f",&i,&j,&r[0],&r[1],&d);
        V = 0.0;E = 0.0;F = 0.0;
        #pragma omp parallel num_threads(4)
        {
            #pragma omp for schedule(auto)
            for(p = 0;p < 2;p++)
            {
                V += (D/pow(d,3.0))*(spin[2*i-2+p]-w[p])*spin[2*j-2+p];
                E += (spin[2*i-2+p]-w[p])*r[p];
                F += spin[2*j-2+p]*r[p];
            }
        }
        G = -3*(D/pow(d,5.0))*E*F;
        dU += (V+G);
    }
    .
    .
    .
}//End of main()
You are parallelizing a loop with only 2 iterations: p=0 and p=1. The way that OpenMP's omp for works is by splitting up the loop iterations among your threads in the parallel team (which you've defined as 4 threads) and letting them work through their part of the problem in parallel.
With only 2 iterations, 2 of your threads will be sitting idle. On top of that, actually figuring out which threads will work on which part of the problem takes overhead. And if your actual loop doesn't take long (which in this case it clearly doesn't), the overhead will cost more than the benefits you've gained from parallelization.
A better strategy is usually to parallelize the outermost loops with OpenMP whenever possible, in order to both split up the work evenly and reduce the (relative) overhead. Alternatively, you can parallelize at the lowest loop level using OpenMP 4.0's omp simd directive.
Lastly, you are not computing the variables V, E, and F correctly. Because they are summed across iterations, you should declare them all as reduction variables with reduction(+:V,E,F). I would be surprised if you are currently getting the correct answer as is.
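For illustration only (as noted above, this two-iteration loop is still too small to benefit), the reduction clause on the inner loop would look roughly like this, keeping the variable names from the question:
/* per-thread partial sums of V, E and F are combined after the loop,
   so the += updates no longer race with each other */
#pragma omp parallel for reduction(+:V,E,F)
for(p = 0; p < 2; p++)
{
    V += (D/pow(d,3.0))*(spin[2*i-2+p]-w[p])*spin[2*j-2+p];
    E += (spin[2*i-2+p]-w[p])*r[p];
    F += spin[2*j-2+p]*r[p];
}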
(Also as High Performance Mark says: make sure you're timing the wall time execution of your program and not the CPU time execution of your program. This is typically done with omp_get_wtime().)
I've recently been reading up on OpenMP and was trying to parallelize some existing for loops in my program to get a speed-up. However, for some reason I seem to be getting garbage data written to the file. What I mean is that I don't have Points 1,2,3,4 etc. written to my file, I have Points 1,4,7,8 etc. I suspect this is because I am not keeping track of the threads and it just leads to race conditions?
I have been reading as much as I can find about OpenMP, since it seems like a great abstraction for multi-threaded programming. I'd appreciate any pointers to get to the bottom of what I might be doing incorrectly.
Here is what I have been trying to do so far (only the relevant bit of code):
#include <omp.h>
pixelIncrement = Image.rowinc/2;
#pragma omp parallel for
for (int i = 0; i < Image.nrows; i++ )
{
int k =0;
row = Image.data + i * pixelIncrement;
#pragma omp parallel for
for (int j = 0; j < Image.ncols; j++)
{
k++;
disparityThresholdValue = row[j];
// Don't want to save certain points
if ( disparityThresholdValue < threshHold)
{
// Get the data points
x = (int)Image.x[k];
y = (int)Image.y[k];
z = (int)Image.z[k];
grayValue= (int)Image.gray[k];
cloudObject->points[k].x = x;
cloudObject->points[k].y = y;
cloudObject->points[k].z = z;
cloudObject->points[k].grayValue = grayValue;
fprintf( cloudPointsFile, "%f %f %f %d\n", x, y, z, grayValue);
}
}
}
fclose( pointFile );
I did enable OpenMP in my compiler settings (C/C++ -> Language -> OpenMP Support (/openmp)).
Any suggestions as to what might be the problem? I am using a Quadcore processor on Windows XP 32-bit.
Are all points written to the file, but just not sequentially, or is the actual point data messed up?
The first case is expected in parallel programming - once you execute something side by side, you won't be able to guarantee order unless you synchronize the access (at which point you can just leave out the parallelization, since it becomes effectively linear). If you need to rely on order, you can parallelize the calculations but must do the writing in a single thread.
If the points themselves are messed up, check where your variables are declared and whether multiple threads are accessing the same ones.
A few problems here:
#pragma omp parallel for
for (int i = 0; i < Image.nrows; i++ )
{
int k =0;
row = Image.data + i * pixelIncrement;
#pragma omp parallel for
for (int j = 0; j < Image.ncols; j++)
{
k++;
There's no need for the inner parallel for. The outer loop should contain enough work to keep all cores busy.
Also, for the inner loop, k is a shared variable and gets incremented in a non-atomic way. x, y and z are also shared among the inner-loop threads and get overwritten "randomly". Remove the inner directive and see how it goes; a sketch of that change is below.
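A sketch of what that might look like, assuming the point index is meant to run over the whole image (the k = i*ncols + j indexing and the element types are guesses, since the question's intent for k is not entirely clear):
#pragma omp parallel for
for (int i = 0; i < Image.nrows; i++)
{
    // everything used here is declared inside the loop, so each thread
    // gets its own copy and nothing is shared by accident
    const short *row = Image.data + i * pixelIncrement;   // element type is a guess
    for (int j = 0; j < Image.ncols; j++)                  // runs serially within a thread
    {
        int k = i * Image.ncols + j;          // derived index instead of a shared counter
        int disparityThresholdValue = row[j];
        if (disparityThresholdValue < threshHold)
        {
            float x = Image.x[k];
            float y = Image.y[k];
            float z = Image.z[k];
            int grayValue = (int)Image.gray[k];
            cloudObject->points[k].x = x;
            cloudObject->points[k].y = y;
            cloudObject->points[k].z = z;
            cloudObject->points[k].grayValue = grayValue;
            #pragma omp critical   // the shared file must not be written concurrently
            fprintf(cloudPointsFile, "%f %f %f %d\n", x, y, z, grayValue);
        }
    }
}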
When you have a loop with a nested loop, there is no need for a second omp pragma.
The first pragma already parallelizes the outer loop. Remember that this is valid only if the inner loop has to be executed in sequence: you have a sequential incrementation, so you cannot execute the inner loop in a random order. OMP pragmas are a very easy and cool way to parallelize code, but do not overuse them!
More details here -> Parallel Loops with OpenMP
I am a little confused about creating a many_plan by calling fftwf_plan_many_dft_r2c() and executing it with OpenMP. What I am trying to achieve here is to see if explicitly using OpenMP and organizing the FFTW data could work together. (I know I "should" use the multithreaded version of FFTW, but I failed to get the expected speedup from it.)
My code looks like this:
/* I ignore some helper APIs */
#define N 1024*1024 //N is the total size of 1d fft
fftwf_plan p;
float * in;
fftwf_complex *out;
omp_set_num_threads(threadNum); // Suppose threadNum is 2 here
in = fftwf_alloc_real(2*(N/2+1));
std::fill(in,in+2*(N/2+1),1.1f); // just try with a random real floating numbers
out = (fftwf_complex *)&in[0]; // for in-place transformation
/* Problems start from here */
int n[] = {N/threadNum}; // according to the manual, n is the size of each "howmany" transformation
p = fftwf_plan_many_dft_r2c(1, n, threadNum, in, NULL,1 ,1, out, NULL, 1, 1, FFTW_ESTIMATE);
#pragma omp parallel for
for (int i = 0; i < threadNum; i ++)
{
fftwf_execute(p);
// fftwf_execute_dft_r2c(p,in+i*N/threadNum,out+i*N/threadNum);
}
What I got is like this:
If I use fftwf_execute(p), the program runs successfully, but the result seems incorrect. (I compared the result with a version that does not use many_plan and OpenMP.)
If I use fftwf_execute_dft_r2c(), I get a segmentation fault.
Can somebody help me here? How should I partition the data across multiple threads? Or is the approach not correct in the first place?
Thank you in advance.
flyree
Do you properly allocate memory for out? Does this:
out = (fftwf_complex *)&in[0]; // for in-place transformation
do the same as this:
out = (fftw_complex*)fftw_malloc(sizeof(fftw_complex)*numberOfOutputColumns);
You are trying to access 'p' inside your parallel block without specifically telling OpenMP how to use it. It should be:
#pragma omp parallel for shared(p)
If you are going to split the work up across n threads, I would think you'd explicitly want to tell omp to use n threads:
#pragma omp parallel for shared(p) num_threads(n)
Does this code work without multithreading? If you remove the for loop and the OpenMP directives and execute fftwf_execute(p) just once, does it work?
I don't know much about FFTW's many-plans, but it seems like p really describes many transforms, not one single transform. So when you "execute" p, you are executing all of the transforms at once, right? You don't really need to execute p iteratively; see the sketch below.
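In other words, something like this minimal sketch should be enough, assuming the plan was created with howmany = threadNum as in the question:
// the plan already describes all threadNum sub-transforms, so one call
// processes every batch; no OpenMP loop around it is needed
fftwf_execute(p);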
I'm still learning about OpenMP + FFTW, so I could be wrong on these.