openmp: barrier synchronization not working within loop with if condition - c

I have the following code:
#pragma omp parallel shared(a,n) private(i,j,k,x,pid,rows,mymin,mymax)
{
    // nprocs=1;
#ifdef _OPENMP
    nprocs = omp_get_num_threads();
#endif
#ifdef _OPENMP
    pid = omp_get_thread_num();
#endif
    rows = n / nprocs;
    mymin = pid * rows;
    mymax = mymin + rows - 1;
    for (k = 0; k < n; k++) {
        if (k >= mymin && k <= mymax) {
            #pragma omp for schedule(static,rows)
            for (x = k + 1; x < n; x++) {
                a[k][x] = a[k][x] / a[k][k];
            }
            #pragma omp barrier
        }
    }
}
Here I am selecting which thread updates which rows of the matrix based on the if condition. For example, if there are two threads, thread 1 updates the first two rows of matrix 'a' and thread 2 updates the other two.
Once a row is selected, I divide the iterations over its columns by parallelizing the inner loop (which starts at x=k+1) among the threads. I also put a barrier after the inner for loop so that the threads synchronize after every column value of a single row has been updated.
But the problem is that I am not getting properly synchronized values. In the final matrix, some rows contain values updated by thread 0 and some by the other thread, but not all of them.

Using omp barrier here is useless, since there is an implicit barrier at the end of an omp for construct unless a nowait clause is specified.
On the other hand, you don't need to manually specify how to decompose the work among the threads, and the way you decompose it is not correct.
What you are trying to do can in fact be written as follows.
#pragma omp parallel for shared(a,n) private(k,x)
for (k = 0; k < n; k++) {
    for (x = k + 1; x < n; x++) {
        a[k][x] = a[k][x] / a[k][k];
    }
}
Since the workload is not balanced across different values of k, you may want to use a schedule(dynamic, ...) clause as well. Please refer to the OpenMP documentation for more info.
http://msdn.microsoft.com/en-us/library/b5b5b6eb.aspx
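For illustration, a minimal sketch of the same loop with a dynamic schedule might look like this (the chunk size of 4 is only an illustrative guess, to be tuned for your matrix size):
#pragma omp parallel for shared(a,n) private(k,x) schedule(dynamic, 4)
for (k = 0; k < n; k++) {
    // rows with larger k have fewer columns left to update,
    // so a dynamic schedule rebalances the shrinking workload
    for (x = k + 1; x < n; x++) {
        a[k][x] = a[k][x] / a[k][k];
    }
}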

Related

How to run a static parallel for loop without the main thread

I want to execute a function with multiple threads, without using the main thread. So this is what I want:
#pragma omp parallel num_threads(9)
{
    // do something
    #pragma omp for schedule(static,1)
    for (int i = 0; i < 10; i++)
        func(i); // random stuff
}
So I want func() to be executed by just 8 threads, without the main thread. Is that possible somehow?

So I want func() to be executed by just 8 threads, without the main thread. Is that possible somehow?
Yes, you can do it. However, you will have to implement the functionality of
#pragma omp for schedule(static,1)
yourself, since explicitly using that clause makes the compiler divide the iterations of the loop among all threads in the team, including the master thread of that team, which in your code example is also the main thread. The code could look like the following:
#pragma omp parallel num_threads(9)
{
    // do something
    int thread_id = omp_get_thread_num();
    int total_threads = omp_get_num_threads();
    if (thread_id != 0) // all threads but the master thread
    {
        thread_id--;                       // shift all the ids
        total_threads = total_threads - 1; // one thread fewer shares the work
        for (int i = thread_id; i < 10; i += total_threads)
            func(i); // random stuff
    }
    #pragma omp barrier
}
First, we ensure that all threads except the master execute the loop to be parallelized (i.e., if(thread_id != 0)); then we divide the iterations of the loop among the remaining threads (i.e., for(int i = thread_id; i < 10; i += total_threads)); and finally we ensure that all threads wait for each other at the end of the parallel region (i.e., #pragma omp barrier).
If it isn't important which thread skips the loop, another option is to combine sections with the loop. This means nested parallelism, which one should be very careful with, but it should work:
#pragma omp parallel sections num_threads(2)
{
    #pragma omp section
    { /* work for one thread */ }
    #pragma omp section
    {
        #pragma omp parallel for num_threads(8) schedule(static, 1)
        for (int i = 0; i < N; ++i) { /* ... */ }
    }
}
The main problem here is that one of those sections will most likely take much longer than the other, meaning that in the worst case (loop faster than the first section) all but one thread do nothing most of the time.
If you really need the master thread to be outside the parallel region, this might work (not tested):
#pragma omp parallel num_threads(2)
{
    #pragma omp master
    { /* work for master thread, other thread is NOT waiting */ }
    #pragma omp single
    {
        #pragma omp parallel for num_threads(8) schedule(static, 1)
        for (int i = 0; i < N; ++i) { /* ... */ }
    }
}
There is no guarantee that the master thread won't be computing the single region as well, but if your cores aren't over-occupied it should at least be unlikely. One could even argue that if the second thread from the outer parallel region doesn't reach the single region in time, it is better that the master thread also has a chance of going in there, even if that means the second thread doesn't get anything to do.
As the single region should only have an implicit barrier at its end, while the master region doesn't contain any implicit barriers, they should potentially be executed in parallel as long as the master region comes before the single region. This assumes that the single region is well implemented, such that every thread has a chance of computing it; I don't think this is guaranteed by the standard.
EDIT:
These solutions require nested parallelism to work, which is disabled by default in most implementations. It can be activated via the environment variable OMP_NESTED or by calling omp_set_nested().
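For example, a minimal sketch of enabling nested parallelism programmatically (omp_set_nested() is part of the standard OpenMP API):
#include <omp.h>

int main(void)
{
    omp_set_nested(1); // allow inner parallel regions to spawn their own teams
    // ... parallel regions from the examples above ...
    return 0;
}
Alternatively, set OMP_NESTED=TRUE in the environment before running the program.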

Is there a way to cancel from inside ordered clause?

I'm developing a program that calculates a certain number of prime numbers using multiple threads. Now I have run into the problem of exiting the threads after said number of primes has been found.
I've tried #pragma omp cancel for, but I cannot use it inside an ordered clause. Is there another way to "break" the loop?
void get_primes(prime_type start, prime_type end) {
    #pragma omp parallel for ordered schedule(dynamic) shared(prime_counter)
    for (candidate = start; candidate <= end; candidate += 2) {
        if (is_prime(candidate)) {
            #pragma omp ordered
            {
                primes[prime_counter] = candidate;
                prime_counter++;
                if (prime_counter >= max_primes) {
                    #pragma omp cancel for
                }
                #pragma omp cancellation point for
            }
        }
    }
}
I want to immediately "break" the loop when I've found the desired number of primes and if I'm not mistaken that must be done inside the ordered clause.
No. It is not possible to cancel an ordered loop.
A loop construct that is canceled must not have an ordered clause.
(cf. 2.14.1 of the OpenMP standard)
One workaround to emulate cancellation is to add a skip at the beginning of the loop, e.g.
#pragma omp parallel for ordered schedule(dynamic) shared(prime_counter)
for (candidate = start; candidate <= end; candidate += 2) {
    if (prime_counter >= max_primes) {
        continue;
    }
    if (is_prime(candidate)) {
However, that is not yet a thread-safe access to prime_counter. In order to avoid race conditions, you must do something along the lines of:
int local_prime_counter;
#pragma omp atomic read
local_prime_counter = prime_counter;
if (local_prime_counter >= max_primes)
    ...

#pragma omp atomic update
prime_counter++;
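Putting both pieces together, a sketch of the whole workaround might look like this (untested; variable names follow the question, and the caveat below about the conditional ordered construct still applies):
#pragma omp parallel for ordered schedule(dynamic) shared(prime_counter)
for (candidate = start; candidate <= end; candidate += 2) {
    int local_prime_counter; // private snapshot of the shared counter
    #pragma omp atomic read
    local_prime_counter = prime_counter;
    if (local_prime_counter >= max_primes)
        continue; // emulate cancellation by skipping the rest of the body
    if (is_prime(candidate)) {
        #pragma omp ordered
        {
            if (prime_counter < max_primes) { // re-check; writes happen only here
                primes[prime_counter] = candidate;
                #pragma omp atomic update
                prime_counter++;
            }
        }
    }
}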
P.S. I'm not quite 100% sure whether it is standard-conforming to have a conditionally executed ordered construct.

Counting sort using OpenMP

Another question about OpenMP...
I'm trying to speed up counting sort with OpenMP, but my code runs fastest on 1 thread and slows down as I add threads (I've got 4 cores). The results are correct.
I'm parallelizing only the loop in which the counters are incremented; the rest is computed sequentially (is that OK?). Here I try making the increment an atomic operation. I also tried a version in which every thread had its own "counters" table, but it was even slower (a sketch of that variant appears below).
#pragma omp parallel for private(i) num_threads(4) default(none) shared(counters, table, possible_values, table_size)
for (i = 0; i < table_size; i++) {
    #pragma omp atomic
    counters[(int)(table[i]*100)]++;
}
table - contains the unsorted values
possible_values - 100 (I've got numbers from 0 to 0.99)
table_size - the size of table
How can I speed things up?
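For reference, a sketch of the per-thread counters variant mentioned above might look like the following (the function name and the merge step are assumptions; the question reports this was even slower in practice):
void histogram_counts(const double *table, int table_size, int *counters)
{
    #pragma omp parallel num_threads(4) default(none) shared(counters, table, table_size)
    {
        int local[100] = {0}; // private histogram: no atomics needed while counting
        #pragma omp for
        for (int i = 0; i < table_size; i++)
            local[(int)(table[i] * 100)]++;
        #pragma omp critical // merge step: only 100 additions per thread
        for (int v = 0; v < 100; v++)
            counters[v] += local[v];
    }
}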

How to nest parallel loops in a sequential loop with OpenMP

I am currently working on a matrix computation with OpenMP. I have several loops in my code, and instead of calling #pragma omp parallel for [...] for each loop (which creates all the threads and destroys them right after), I would like to create all of them at the beginning and delete them at the end of the program, in order to avoid overhead.
I want something like:
#pragma omp parallel
{
    #pragma omp for [...]
    for (...)
    #pragma omp for [...]
    for (...)
}
The problem is that some parts have to be executed by only one thread, but they sit inside a loop that contains loops which have to be executed in parallel... This is how it looks:
// has to be executed by only one thread
int a = 0, b = 0, c = 0;
for (a; a < 5; a++)
{
    // some stuff
    // loops which have to be parallelized
    #pragma omp parallel for private(b,c) schedule(static) collapse(2)
    for (b = 0; b < 8; b++)
        for (c = 0; c < 10; c++)
        {
            // some other stuff
        }
    // end of the parallel zone
    // stuff to be executed by only one thread
}
(The loop bounds are quite small in my example; in my program the number of iterations can go up to 20,000...)
One of my first ideas was to do something like this:
// has to be executed by only one thread
#pragma omp parallel // creating all the threads at the beginning
{
    #pragma omp master // or single
    {
        int a = 0, b = 0, c = 0;
        for (a; a < 5; a++)
        {
            // some stuff
            // loops which have to be parallelized
            #pragma omp for private(b,c) schedule(static) collapse(2)
            for (b = 0; b < 8; b++)
                for (c = 0; c < 10; c++)
                {
                    // some other stuff
                }
            // end of the parallel zone
            // stuff to be executed by only one thread
        }
    }
} // deleting all the threads
It doesn't compile; I get this error from gcc: "work-sharing region may not be closely nested inside of work-sharing, critical, ordered, master or explicit task region".
I know it surely comes from the "wrong" nesting, but I can't understand why it doesn't work. Do I need to add a barrier before the parallel zone? I am a bit lost and don't know how to solve it.
Thank you in advance for your help.
Cheers.
Most OpenMP runtimes don't "create all the threads and destroy them right after". The threads are created at the beginning of the first OpenMP section and destroyed when the program terminates (at least that's how Intel's OpenMP implementation does it). There's no performance advantage to using one big parallel region instead of several smaller ones.
Intel's runtime (which is open source and can be found here) has options to control what threads do when they run out of work. By default they'll spin for a while (in case the program immediately starts a new parallel section); then they'll put themselves to sleep. If they do sleep, it will take a bit longer to start them up for the next parallel section, but this depends on the time between regions, not on the syntax.
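(As an aside, and as far as I know: with Intel's runtime this spin-then-sleep behaviour can be tuned through the KMP_BLOCKTIME environment variable, which is an Intel-specific extension rather than part of the OpenMP standard.)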
In the last of your code outlines you declare a parallel region, use a master directive inside it to ensure that only the master thread executes a block, and inside the master block attempt to parallelize a loop across all threads. You claim to know that the compiler errors arise from incorrect nesting, but wonder why it doesn't work.
It doesn't work because distributing work to multiple threads within a region of code that only one thread will execute doesn't make any sense.
Your first pseudo-code is better, but you probably want to extend it like this:
#pragma omp parallel
{
    #pragma omp for [...]
    for (...)
    #pragma omp single
    { ... }
    #pragma omp for [...]
    for (...)
}
The single directive ensures that the block of code it encloses is executed by only one thread. Unlike the master directive, single also implies a barrier at exit; you can change this behaviour with the nowait clause.
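Applied to the loop structure from the question, that pattern might look like the following sketch (untested; the key point is that every thread executes the outer sequential loop, so all of them reach the work-sharing constructs inside it):
#pragma omp parallel
{
    for (int a = 0; a < 5; a++) // all threads run the outer loop in step
    {
        #pragma omp single
        {
            // stuff to be executed by only one thread
        } // implicit barrier keeps the threads together
        #pragma omp for schedule(static) collapse(2)
        for (int b = 0; b < 8; b++)
            for (int c = 0; c < 10; c++)
            {
                // some other stuff
            }
    }
}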

Openmp: increase for loop iteration number

I have this parallel for loop
struct p
{
    int n;
    double *l;
} p;

#pragma omp parallel for default(none) private(i) shared(p)
for (i = 0; i < p.n; ++i)
{
    DoSomething(p, i);
}
Now, it is possible that inside DoSomething() p.n is increased, because new elements are added to p.l. I'd like to process these elements in parallel as well. The OpenMP manual states that parallel for can't be used with lists, so DoSomething() adds the new elements to another list, which is processed sequentially and then joined back with p.l. I don't like this workaround. Does anyone know a cleaner way to do this?
A construct to support dynamic execution was added in OpenMP 3.0: the task construct. Tasks are added to a queue and then executed as concurrently as possible. A sample code would look like this:
#pragma omp parallel private(i)
{
    #pragma omp single
    for (i = 0; i < p.n; ++i)
    {
        #pragma omp task
        DoSomething(p, i);
    }
}
This will spawn a new parallel region. One of the threads will execute the for loop and create a new OpenMP task for each value of i. Each DoSomething() call will be converted to a task that will later execute inside an idle thread. There is a problem though: if one of the tasks adds new values to p.l, this might happen after the creator thread has already exited the for loop. It can be fixed using task synchronisation constructs and an outer loop, like this:
#pragma omp single
{
    i = 0;
    while (i < p.n)
    {
        for (; i < p.n; ++i)
        {
            #pragma omp task
            DoSomething(p, i);
        }
        #pragma omp taskwait
        #pragma omp flush
    }
}
The taskwait construct makes the thread wait until all queued tasks have executed. If new elements were added to the list, the condition of the while loop becomes true again and a new round of task creation happens. The flush construct is supposed to synchronise the memory view between threads, e.g. to update register-optimised copies of variables with the value from shared storage.
OpenMP 3.0 is supported by all modern C compilers except MSVC, which is stuck at OpenMP 2.0.
