OpenMP with 1 thread slower than sequential version - c

I have implemented the knapsack problem using OpenMP (gcc version 4.6.3):
#define MAX(x,y) ((x)>(y) ? (x) : (y))
#define table(i,j) table[(i)*(C+1)+(j)]
for(i=1; i<=N; ++i) {
#pragma omp parallel for
for(j=1; j<=C; ++j) {
if(weights[i]>j) {
table(i,j) = table(i-1,j);
}else {
table(i,j) = MAX(profits[i]+table(i-1,j-weights[i]), table(i-1,j));
}
}
}
execution time for the sequential program = 1s
execution time for the OpenMP version with 1 thread = 1.7s (overhead = 40%)
I used the same compiler optimization flags (-O3) in both cases.
Can someone explain the reason behind this behavior?
Thanks.

Enabling OpenMP inhibits certain compiler optimisations, e.g. it could prevent loops from being vectorised or shared variables from being kept in registers. Therefore OpenMP-enabled code is usually slower than the serial code, and one has to utilise the available parallelism to offset this.
That being said, your code contains a parallel region nested inside the outer loop. This means that the overhead of entering and exiting the parallel region is paid N times. This only makes sense if N is relatively small and C is significantly larger (like orders of magnitude larger) than N, so that the work being done inside the region greatly outweighs the OpenMP overhead.
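If C really is much larger than N, one way to pay the team start-up cost only once is to hoist the parallel region outside the outer loop and keep just the work-sharing construct inside. This is only a sketch using the same variables as above: every thread executes the outer loop redundantly, the inner omp for shares out the j iterations, and the implicit barrier at the end of the omp for is what preserves the row i-1 -> row i dependency.
#pragma omp parallel private(i)
for(i=1; i<=N; ++i) {
    #pragma omp for
    for(j=1; j<=C; ++j) {
        if(weights[i]>j) {
            table(i,j) = table(i-1,j);
        } else {
            table(i,j) = MAX(profits[i]+table(i-1,j-weights[i]), table(i-1,j));
        }
    }   /* implicit barrier: row i is finished before any thread moves on to row i+1 */
}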

Related

OpenMP low performance

I am trying to parallelize a loop in my program, so I looked into multi-threading. I first took a look at a POSIX multithreaded programming tutorial, but it was too complicated, so I tried something easier with OpenMP. I have successfully parallelized my code, but the execution time is worse than in the serial case. Below is a portion of my program. Can you tell me what the problem is? Should I specify which variables are shared and which are private, and how can I tell the kind of each variable? I ask because I have searched in many forums and I still don't know what to do.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <omp.h>
#define D 0.215 // magnetic dipolar constant
int main()
{
int i,j,n,p,NTOT = 1600,Nc = NTOT-1;
float r[2],spin[2*NTOT],w[2],d;
double E,F,V,G,dU;
.
.
.
for(n = 1; n <= Nc; n++){
fscanf(voisins,"%d%d%f%f%f",&i,&j,&r[0],&r[1],&d);
V = 0.0;E = 0.0;F = 0.0;
#pragma omp parallel num_threads(4)
{
#pragma omp for schedule(auto)
for(p = 0;p < 2;p++)
{
V += (D/pow(d,3.0))*(spin[2*i-2+p]-w[p])*spin[2*j-2+p];
E += (spin[2*i-2+p]-w[p])*r[p];
F += spin[2*j-2+p]*r[p];
}
}
G = -3*(D/pow(d,5.0))*E*F;
dU += (V+G);
}
.
.
.
}//End of main()
You are parallelizing a loop with only 2 iterations: p=0 and p=1. The way that OpenMP's omp for works is by splitting up the loop iterations among your threads in the parallel team (which you've defined as 4 threads) and letting them work through their part of the problem in parallel.
With only 2 iterations, 2 of your threads will be sitting idle. On top of that, actually figuring out which threads will work on which part of the problem takes overhead. And if your actual loop doesn't take long (which in this case it clearly doesn't), the overhead will cost more than the benefits you've gained from parallelization.
A better strategy is usually to parallelize the outermost loops with OpenMP whenever possible in order to solve both the problems of splitting up the work evenly and reducing the (relative) overhead. Alternatively, you can parallelize at the lowest loop level using OpenMP 4.0's omp simd command.
Lastly, you are not computing the variables V, E, and F correctly. Because they are summed across iterations, you should declare them all as reduction variables, e.g. reduction(+:V,E,F). I would be surprised if you are currently getting the correct answer as is.
(Also as High Performance Mark says: make sure you're timing the wall time execution of your program and not the CPU time execution of your program. This is typically done with omp_get_wtime().)
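For reference, the reduction form of the inner loop would look like the sketch below. With only two iterations it is still better left serial (or folded into the outer loop, as suggested above), but the reduction clause is what makes the sums correct whenever the loop is parallelised:
V = 0.0; E = 0.0; F = 0.0;
#pragma omp parallel for reduction(+:V,E,F) num_threads(2)
for(p = 0; p < 2; p++)
{
    V += (D/pow(d,3.0))*(spin[2*i-2+p]-w[p])*spin[2*j-2+p];
    E += (spin[2*i-2+p]-w[p])*r[p];
    F += spin[2*j-2+p]*r[p];
}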

Do virtual cores contribute to performance when parallelizing a matrix multiplication?

I have an O(n^3) matrix multiplication function in C.
void matrixMultiplication(int N, double **A, double **B, double **C, int threadCount) {
int i = 0, j = 0, k = 0, tid;
#pragma omp parallel num_threads(4) shared(N, A, B, C, threadCount) private(i, j, k, tid)
{
tid = omp_get_thread_num();
#pragma omp for
for (i = 1; i < N; i++)
{
printf("Thread %d starting row %d\n", tid, i);
for (j = 0; j < N; j++)
{
for (k = 0; k < N; k++)
{
C[i][j] = C[i][j] + A[i][k] * B[k][j];
}
}
}
}
return;
}
I am using OpenMP to parallelize this function by splitting up the multiplications. I am performing this computation on square matrices of size N = 3000 with a 1.8 GHz Intel Core i5 processor.
This processor has two physical cores and two virtual cores. I noticed the following performances for my computation
1 thread: 526.06s
2 threads: 264.531s
3 threads: 285.195s
4 threads: 279.914s
I had expected my gains to continue until I set the number of threads equal to four. However, this obviously did not occur.
Why did this happen? Is it because the performance of a core is equal to the sum of its physical and virtual cores?
Using more than one hardware thread per core can help or hurt, depending on circumstances.
It can help if one hardware thread stalls because of a cache miss, and the other hardware thread can keep going and keep the ALU busy.
It can hurt if each hardware thread forces evictions of data needed by the other thread. That is, the threads destructively interfere with each other.
One way to address the problem is to write the kernel in a way such that each thread needs only half the cache. For example, blocked matrix multiplication can be used to minimize the cache footprint of a matrix multiplication.
Another way is to write the algorithm in a way such that both threads operate on the same data at the same time, so they help each other bring data into cache (constructive interference). This approach is admittedly hard to do with OpenMP unless the implementation has good support for nested parallelism.
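A hedged sketch of the blocking idea follows. The tile size BS is a tuning parameter, not a value from the question; the matrices use the same double** layout as above, and C is assumed to start zeroed, as in the original code. Each thread owns a disjoint block of rows of C, so no synchronization is needed, and the three BS x BS tiles in flight are small enough to stay in cache.
#define BS 64   /* tile size: tune so a few BS x BS tiles fit in L1/L2 */

void matrixMultiplicationBlocked(int N, double **A, double **B, double **C)
{
    #pragma omp parallel for schedule(static)
    for (int ii = 0; ii < N; ii += BS)              /* rows of C: disjoint per thread */
        for (int kk = 0; kk < N; kk += BS)
            for (int jj = 0; jj < N; jj += BS)
                for (int i = ii; i < ii + BS && i < N; i++)
                    for (int k = kk; k < kk + BS && k < N; k++) {
                        double aik = A[i][k];       /* reused across the whole j tile */
                        for (int j = jj; j < jj + BS && j < N; j++)
                            C[i][j] += aik * B[k][j];
                    }
}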
I guess that the bottleneck is the memory (or L3 CPU cache) bandwidth. Arithmetic is quite cheap these days.
If you can afford it, try to benchmark the same code with the same data on some more powerful processor (e.g. some socket 2013 i7).
Remember that on today's processors, a cache miss lasts as long as several hundred instructions (or cycles): RAM is very slow w.r.t. cache or CPU.
BTW, if you have a GPGPU you could play with OpenCL.
Also, it is probable that linear algebra packages like LAPACK (or some other numerical libraries) are more efficient than your naive matrix multiplication.
You could also consider using __builtin_prefetch (see this)
BTW, numerical computation is hard. I am not an expert at all, but I have met people who have worked in it for dozens of years (often after a PhD in the field).
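As a concrete illustration of the library route, here is a sketch only: it assumes a CBLAS implementation such as OpenBLAS or MKL is installed and linked (e.g. with -lopenblas), and that the matrices are stored as contiguous flat arrays rather than as double** row pointers.
#include <cblas.h>

/* C = A * B for row-major N x N matrices stored as flat arrays.
 * The dgemm kernel does the blocking, vectorisation and (in most
 * builds) the threading internally. */
void matmul_blas(int N, const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                N, N, N,
                1.0, A, N,
                     B, N,
                0.0, C, N);
}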

Is it possible to evaluate OpenMP overhead cost?

I am trying to optimize a code about image processing with OpenMP. I am still learning it and I came to the classic parallel for loop which is slower than my single threaded implementation.
The for loop I tried to parallelize has only 50 iterations, and I saw that this could explain why using OpenMP in this case could be useless due to overhead operations.
But is it possible to evaluate these costs? What are the cost differences between the shared / private / firstprivate (...) clauses?
This is the for loop I want to parallelize:
#pragma omp parallel for \
shared(img_in, img_height, img_width, w, update_gauss, MoGInitparameters, nb_motion_bloc) \
firstprivate(img_fg, gaussStruct) \
private(ContrastHisto, j, x, y) \
schedule(dynamic, 1) \
num_threads(max_threads)
for(i = 0; i<h; i++){ // h=50
for(j = 0; j<w; j++){
ComputeContrastHistogram_generic1D_rect(img_in, H_DESC_STEP*i, W_DESC_STEP*j, ContrastHisto, H_DESC_SIZE, W_DESC_SIZE, 2, img_height, img_width);
x = i;
y = w - 1 - j;
img_fg->data[y][x] = 1-MatchMoG_GaussianInt(ContrastHisto, gaussStruct->gauss[i*w+j], update_gauss, MoGInitparameters);
#pragma omp critical
{
nb_motion_bloc += img_fg->data[y][x];
nb_motion_bloc += img_fg->data[y][x];
}
}
}
Maybe I am doing some mistakes but if that's the case please tell me why !!
Several points that don't address your specific questions.
Try to avoid #pragma omp critical. In this case, you can remove it altogether by adding a reduction(+:nb_motion_bloc) clause to the omp parallel for line and using nb_motion_bloc += 2*img_fg->data[y][x]; (see the sketch after these points).
Depending on how much work each iteration has to do, short loops incur (much) more overhead than they're worth.
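The reduction version of the first point would look roughly like this sketch. It keeps the question's other clauses; note that nb_motion_bloc must be dropped from the shared list once it is a reduction variable, and the body of the loop is elided since it is unchanged:
#pragma omp parallel for \
        shared(img_in, img_height, img_width, w, update_gauss, MoGInitparameters) \
        firstprivate(img_fg, gaussStruct) \
        private(ContrastHisto, j, x, y) \
        reduction(+:nb_motion_bloc) \
        schedule(dynamic, 1) \
        num_threads(max_threads)
for(i = 0; i < h; i++){
    for(j = 0; j < w; j++){
        /* ... same body as in the question ... */
        nb_motion_bloc += 2*img_fg->data[y][x];   /* no critical section needed */
    }
}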
Now to the questions. If you don't change any of the variables, don't bother classifying them as shared/private/firstprivate. If they are supposed to be used by each thread and then discarded, you can use a construct like
#pragma omp parallel
{
int x, y;
#pragma omp for
for(i = 0; i<h; i++)
{
...
}
}
If the workload is balanced, then consider using schedule(static).
As to the differences between shared/private/firstprivate, see this and this question. From Wikipedia:
shared: the data within a parallel region is shared, which means visible and accessible by all threads simultaneously. By default, all variables in the work sharing region are shared except the loop iteration counter.
private: the data within a parallel region is private to each thread, which means each thread will have a local copy and use it as a temporary variable. A private variable is not initialized and the value is not maintained for use outside the parallel region. By default, the loop iteration counters in the OpenMP loop constructs are private.
default: allows the programmer to state that the default data scoping within a parallel region will be either shared, or none for C/C++, or shared, firstprivate, private, or none for Fortran. The none option forces the programmer to declare each variable in the parallel region using the data sharing attribute clauses.
firstprivate: like private except initialized to original value.
lastprivate: like private except original value is updated after construct.
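A small self-contained example (not taken from the question, just an illustration) of how these clauses behave:
#include <stdio.h>
#include <omp.h>

int main(void)
{
    int x = 100, y = 100, last = 0;

    #pragma omp parallel for private(x) firstprivate(y) lastprivate(last)
    for (int i = 0; i < 8; i++) {
        x = i;      /* private: uninitialised on entry, discarded on exit */
        y += i;     /* firstprivate: each thread's copy starts at 100 */
        last = i;   /* lastprivate: the value from i == 7 survives the loop */
    }

    /* x and y still hold 100 here; last is 7 */
    printf("x=%d y=%d last=%d\n", x, y, last);
    return 0;
}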

Why doesn't this code scale linearly?

I wrote this SOR solver code. Don't worry too much about what this algorithm does; it is not the concern here. But just for the sake of completeness: it may solve a linear system of equations, depending on how well conditioned the system is.
I run it with an ill-conditioned sparse matrix of 2097152 rows (that never converges), with at most 7 non-zero columns per row.
Translating: the outer do-while loop will perform 10000 iterations (the value I pass as max_iters), the middle for will perform 2097152 iterations, split in chunks of work_line, divided among the OpenMP threads. The innermost for loop will have 7 iterations, except in very few cases (less than 1%) where it can be less.
There is a data dependency among the threads in the values of the sol array. Each iteration of the middle for updates one element but reads up to 6 other elements of the array. Since SOR is not an exact algorithm, a read can see either the previous or the current value at that position (if you are familiar with solvers, this is a Gauss-Seidel that tolerates Jacobi behavior in some places for the sake of parallelism).
typedef struct{
size_t size;
unsigned int *col_buffer;
unsigned int *row_jumper;
real *elements;
} Mat;
int work_line;
// Assumes there are no null elements on main diagonal
unsigned int solve(const Mat* matrix, const real *rhs, real *sol, real sor_omega, unsigned int max_iters, real tolerance)
{
real *coefs = matrix->elements;
unsigned int *cols = matrix->col_buffer;
unsigned int *rows = matrix->row_jumper;
int size = matrix->size;
real compl_omega = 1.0 - sor_omega;
unsigned int count = 0;
bool done;
do {
done = true;
#pragma omp parallel shared(done)
{
bool tdone = true;
#pragma omp for nowait schedule(dynamic, work_line)
for(int i = 0; i < size; ++i) {
real new_val = rhs[i];
real diagonal;
real residual;
unsigned int end = rows[i+1];
for(int j = rows[i]; j < end; ++j) {
unsigned int col = cols[j];
if(col != i) {
real tmp;
#pragma omp atomic read
tmp = sol[col];
new_val -= coefs[j] * tmp;
} else {
diagonal = coefs[j];
}
}
residual = fabs(new_val - diagonal * sol[i]);
if(residual > tolerance) {
tdone = false;
}
new_val = sor_omega * new_val / diagonal + compl_omega * sol[i];
#pragma omp atomic write
sol[i] = new_val;
}
#pragma omp atomic update
done &= tdone;
}
} while(++count < max_iters && !done);
return count;
}
As you can see, there is no lock inside the parallel region, so, going by what we are always taught, it is the kind of 100% parallel problem. That is not what I see in practice.
All my tests were run on an Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz, 2 processors, 10 cores each, hyper-threading enabled, summing up to 40 logical cores.
On my first set runs, work_line was fixed on 2048, and the number of threads varied from 1 to 40 (40 runs in total). This is the graph with the execution time of each run (seconds x number of threads):
The surprise was the logarithmic curve, so I thought that since the work line was so large, the shared caches were not very well used, so I dug up this virtual file /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size that told me this processor's L1 cache synchronizes updates in groups of 64 bytes (8 doubles in the array sol). So I set the work_line to 8:
Then I thought 8 was too low to avoid NUMA stalls and set work_line to 16:
While running the above, I thought "Who am I to predict what work_line is good? Lets just see...", and scheduled to run every work_line from 8 to 2048, steps of 8 (i.e. every multiple of the cache line, from 1 to 256). The results for 20 and 40 threads (seconds x size of the split of the middle for loop, divided among the threads):
I believe the cases with a low work_line suffer badly from cache synchronization, while a bigger work_line offers no benefit beyond a certain number of threads (I assume because the memory pathway is the bottleneck). It is very sad that a problem that seems 100% parallel presents such bad behavior on a real machine. So, before I become convinced that multi-core systems are a very well sold lie, I am asking you here first:
How can I make this code scale linearly to the number of cores? What am I missing? Is there something in the problem that makes it not as good as it seems at first?
Update
Following suggestions, I tested both static and dynamic scheduling, but with the atomic read/write on the array sol removed. For reference, the blue and orange lines are the same as in the previous graph (just up to work_line = 248). The yellow and green lines are the new ones. For what I could see: static makes a significant difference for a low work_line, but after 96 the benefits of dynamic outweigh its overhead, making it faster. The atomic operations make no difference at all.
The sparse matrix vector multiplication is memory bound (see here) and it could be shown with a simple roofline model. Memory bound problems benefit from higher memory bandwidth of multisocket NUMA systems but only if the data initialisation is done in such a way that the data is distributed among the two NUMA domains. I have some reasons to believe that you are loading the matrix in serial and therefore all its memory is allocated on a single NUMA node. In that case you won't benefit from the double memory bandwidth available on a dual-socket system and it really doesn't matter if you use schedule(dynamic) or schedule(static). What you could do is enable memory interleaving NUMA policy in order to have the memory allocation spread among both NUMA nodes. Thus each thread would end up with 50% local memory access and 50% remote memory access instead of having all threads on the second CPU being hit by 100% remote memory access. The easiest way to enable the policy is by using numactl:
$ OMP_NUM_THREADS=... OMP_PROC_BIND=1 numactl --interleave=all ./program ...
OMP_PROC_BIND=1 enables thread pinning and should improve the performance a bit.
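For completeness, the "data initialisation distributed among the NUMA domains" alternative mentioned above is usually done with a parallel first-touch loop, sketched below using the names from the question. It only pays off if the solver loop uses a matching static schedule and pinned threads; with schedule(dynamic) the mapping drifts, which is why the interleave policy is the simpler fix here.
/* First-touch sketch: a page is placed on the NUMA node of the thread that
 * first writes it, so filling the arrays in parallel with a static schedule
 * spreads them over both sockets before the solver runs. */
#pragma omp parallel for schedule(static)
for(int i = 0; i < size; ++i) {
    sol[i] = 0.0;   /* or whatever the initial guess is */
    /* fill rhs[i], and the row i segment of coefs/cols, here as well,
       so the matrix pages follow the same distribution */
}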
I would also like to point out that this:
done = true;
#pragma omp parallel shared(done)
{
bool tdone = true;
// ...
#pragma omp atomic update
done &= tdone;
}
is probably a not very efficient re-implementation of:
done = true;
#pragma omp parallel reduction(&:done)
{
// ...
if(residual > tolerance) {
done = false;
}
// ...
}
There won't be a notable performance difference between the two implementations because of the amount of work done in the inner loop, but it is still not a good idea to reimplement existing OpenMP primitives, for the sake of portability and readability.
Try running the IPCM (Intel Performance Counter Monitor). You can watch memory bandwidth, and see if it maxes out with more cores. My gut feeling is that you are memory bandwidth limited.
As a quick back of the envelope calculation, I find that uncached read bandwidth is about 10 GB/s on a Xeon. If your clock is 2.5 GHz, that's one 32 bit word per clock cycle. Your inner loop is basically just a multiple-add operation whose cycles you can count on one hand, plus a few cycles for the loop overhead. It doesn't surprise me that after 10 threads, you don't get any performance gain.
Your inner loop has an omp atomic read, and your middle loop has an omp atomic write to a location that could be the same one read by one of the reads. OpenMP is obligated to ensure that atomic writes and reads of the same location are serialized, so in fact it probably does need to introduce a lock, even though there isn't any explicit one.
It might even need to lock the whole sol array unless it can somehow figure out which reads might conflict with which writes, and really, OpenMP implementations aren't necessarily all that smart.
No code scales absolutely linearly, but rest assured that there are many codes that do scale much closer to linearly than yours does.
I suspect you are having caching issues. When one thread updates a value in the sol array, it invalidates the caches on other CPUs that are storing that same cache line. This forces the caches to be updated, which then leads to the CPUs stalling.
Even if you don't have an explicit mutex lock in your code, you have one shared resource between your processes: the memory and its bus. You don't see this in your code because it is the hardware that takes care of handling all the different requests from the CPUs, but nevertheless, it is a shared resource.
So, whenever one of your processes writes to memory, that memory location will have to be reloaded from main memory by all other processes that use it, and they all have to use the same memory bus to do so. The memory bus saturates, and you have no more performance gain from additional CPU cores that only serve to worsen the situation.

OpenMP Parallel for-loop showing little performance increase

I am in the process of learning how to use OpenMP in C, and as a HelloWorld exercise I am writing a program to count primes. I then parallelise this as follows:
int numprimes = 0;
#pragma omp parallel for reduction (+:numprimes)
for (i = 1; i <= n; i++)
{
if (is_prime(i) == true)
numprimes ++;
}
I compile this code using gcc -g -Wall -fopenmp -o primes primes.c -lm (-lm for the math.h functions I am using). Then I run this code on an Intel® Core™2 Duo CPU E8400 @ 3.00GHz × 2, and as expected, the performance is better than for a serial program.
The problem, however, comes when I try to run this on a much more powerful machine. (I have also tried to manually set the number of threads to use with num_threads, but this did not change anything.) Counting all the primes up to 10 000 000 gives me the following times (using time):
8-core machine:
real 0m8.230s
user 0m50.425s
sys 0m0.004s
dual-core machine:
real 0m10.846s
user 0m17.233s
sys 0m0.004s
And this pattern continues for counting more primes: the machine with more cores shows a slight performance increase, but not as much as I would expect for having so many more cores available. (I would expect 4 times more cores to imply almost 4 times less running time?)
Counting primes up to 50 000 000:
8-core machine:
real 1m29.056s
user 8m11.695s
sys 0m0.017s
dual-core machine:
real 1m51.119s
user 2m50.519s
sys 0m0.060s
If anyone can clarify this for me, it would be much appreciated.
EDIT
This is my prime-checking function.
static int is_prime(int n)
{
/* handle special cases */
if (n == 0) return 0;
else if (n == 1) return 0;
else if (n == 2) return 1;
int i;
for(i=2;i<=(int)(sqrt((double) n));i++)
if (n%i==0) return 0;
return 1;
}
This performance is happening because:
is_prime(i) takes longer the higher i gets, and
Your OpenMP implementation uses static scheduling by default for parallel for constructs without the schedule clause, i.e. it chops the for loop into equal sized contiguous chunks.
In other words, the highest-numbered thread is doing all of the hardest operations.
Explicitly selecting a more appropriate scheduling type with the schedule clause allows you to divide work among the threads fairly.
This version will divide the work better:
int numprimes = 0;
#pragma omp parallel for schedule(dynamic, 1) reduction(+:numprimes)
for (i = 1; i <= n; i++)
{
if (is_prime(i) == true)
numprimes ++;
}
Information on scheduling syntax is available via MSDN and Wikipedia.
schedule(dynamic, 1) may not be optimal, as High Performance Mark notes in his answer. There is a more in-depth discussion of scheduling granularity in this OpenMP whitepaper.
Thanks also to Jens Gustedt and Mahmoud Fayez for contributing to this answer.
The reason for the apparently poor scaling of your program is, as @naroom has suggested, the variability in the run time of each call to your is_prime function. The run time does not simply increase with the value of i. Your code shows that the test terminates as soon as the first factor of i is found, so the longest run times will be for numbers with few (and large) factors, including the prime numbers themselves.
As you've already been told, the default schedule for your parallelisation will parcel out the iterations of the master loop a chunk at a time to the available threads. For your case of 5*10^7 integers to test and 8 cores to use, the first thread will get the integers 1..6250000 to test, the second will get 6250001..12500000 and so on. This will lead to a severely unbalanced load across the threads because, of course, the prime numbers are not uniformly distributed.
Rather than using the default scheduling you should experiment with dynamic scheduling. The following statement tells the run-time to parcel out the iterations of your master loop m iterations at a time to the threads in your computation:
#pragma omp parallel for schedule(dynamic,m)
Once a thread has finished its m iterations it will be given m more to work on. The trick for you is to find the sweet spot for m. Too small and your computation will be dominated by the work that the run time does in parcelling out iterations, too large and your computation will revert to the unbalanced loads that you have already seen.
Take heart though, you will learn some useful lessons about the costs, and benefits, of parallel computation by working through all of this.
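To find that sweet spot empirically, one could time the loop for a handful of chunk sizes with omp_get_wtime() (wall time, as mentioned earlier). This is only a sketch: it assumes the is_prime() function from the question is defined in the same file, and the candidate chunk sizes are arbitrary.
#include <stdio.h>
#include <omp.h>

static int is_prime(int n);   /* the poster's function, defined elsewhere in the file */

static int count_primes(int n, int chunk)
{
    int numprimes = 0;
    /* chunk is a runtime value; the schedule clause accepts an expression */
    #pragma omp parallel for schedule(dynamic, chunk) reduction(+:numprimes)
    for (int i = 1; i <= n; i++)
        if (is_prime(i))
            numprimes++;
    return numprimes;
}

int main(void)
{
    const int n = 10000000;
    const int chunks[] = {1, 16, 256, 4096, 65536};
    for (int c = 0; c < 5; c++) {
        double t0 = omp_get_wtime();
        int p = count_primes(n, chunks[c]);
        double t1 = omp_get_wtime();
        printf("chunk %6d: %d primes, %.3f s wall time\n", chunks[c], p, t1 - t0);
    }
    return 0;
}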
I think your code needs to use dynamic scheduling so that each thread can consume a different number of iterations, since your iterations have different workloads; the current static split gives every thread an equal number of iterations, which won't help in your case. Please try this out:
int numprimes = 0;
#pragma omp parallel for reduction (+:numprimes) schedule(dynamic,1)
for (i = 1; i <= n; i++){
if (is_prime(i) == true)
++numprimes;
}
