OpenMP parallelize a for loop - C

I want to use OpenMP to speed up the calculation of the arrays K11, K22, K33. In this code, I calculate the interaction force between 100 particles. The input arrays K11, K22, K33 have length 100*99 and are filled with zeros. The array sarray contains the separations between the particles. The function trapzd integrates the function K_i, which is defined in a header file.
#include <stdio.h>
#include <math.h>
#include "FuncInteraction.h"

void ForceVel(double *sarray, double *K11, double *K22, double *K33, int NTrheads) {
    double rr, ll;
    rr = 0;
    ll = 0;
    int pp;
    #pragma omp parallel for private(pp,rr,ll) num_threads(NTrheads)
    for (pp = 0; pp < 100*99; pp++)
    {
        rr = 50/sarray[pp];
        ll = 5/sarray[pp];
        K11[pp] = 2*trapzd(K1, rr, ll, 0, 12000, 20);
        K22[pp] = 2*trapzd(K2, rr, ll, 0, 10000, 20);
        K33[pp] = 2*trapzd(K3, rr, ll, 0, 10000, 20);
    }
}
When I execute this code, I observe that it runs on a single core, independent of the value of NTrheads. I would expect to be able to run that loop on more than one core, considering that the calculation of K11, K22 and K33 takes more than 1 s.
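A minimal sketch (assuming GCC or a compatible compiler) for checking whether OpenMP is actually enabled at run time: if the snippet below prints 1 even though 4 threads are requested, OpenMP support was most likely not switched on at compile time (for gcc, the -fopenmp flag).

#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel num_threads(4)
    {
        /* only one thread prints the size of the thread team */
        #pragma omp single
        printf("parallel region runs with %d thread(s)\n", omp_get_num_threads());
    }
    return 0;
}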

Related

How to write C code for a long signal and long kernel convolution

I would like to do a linear convolution of a signal of length 4000*270 with a kernel of length 16000. The signal is not fixed, while the kernel is fixed. This needs to be repeated many times for my purposes, so I want to make it as fast as possible. I can implement this convolution in either R or C.
At first, I tried doing the convolution in R, but the speed did not satisfy my needs. I tried doing it by iteration and it was too slow. I also tried doing it using FFT, but because both the signal and the kernel are long, FFT didn't improve the speed much.
Then I decided to do the convolution iteratively in C. But C did not seem able to handle that amount of calculation and reported errors very often. Even when it worked, it was still very slow. I also tried doing FFT convolution in C, but the program always crashed.
I found this code through a friend of mine and I'm not sure about the original source; I will delete it if there is a copyright issue. This is the C code I used for doing the FFT, but the program cannot handle a vector of length 2097152 (the smallest power of 2 greater than or equal to the signal vector length).
#include <math.h>   /* for cos() and sin() */

#define q 3 /* for 2^3 points */
#define N 2097152 /* N-point FFT, iFFT */

typedef float real;
typedef struct { real Re; real Im; } complex;

#ifndef PI
# define PI 3.14159265358979323846264338327950288
#endif

void fft( complex *v, int n, complex *tmp )
{
    if(n>1) { /* otherwise, do nothing and return */
        int k, m;
        complex z, w, *vo, *ve;
        ve = tmp;
        vo = tmp+n/2;
        for(k=0; k<n/2; k++) {
            ve[k] = v[2*k];
            vo[k] = v[2*k+1];
        }
        fft( ve, n/2, v ); /* FFT on even-indexed elements of v[] */
        fft( vo, n/2, v ); /* FFT on odd-indexed elements of v[] */
        for(m=0; m<n/2; m++) {
            w.Re = cos(2*PI*m/(double)n);
            w.Im = -sin(2*PI*m/(double)n);
            z.Re = w.Re*vo[m].Re - w.Im*vo[m].Im; /* Re(w*vo[m]) */
            z.Im = w.Re*vo[m].Im + w.Im*vo[m].Re; /* Im(w*vo[m]) */
            v[ m ].Re = ve[m].Re + z.Re;
            v[ m ].Im = ve[m].Im + z.Im;
            v[m+n/2].Re = ve[m].Re - z.Re;
            v[m+n/2].Im = ve[m].Im - z.Im;
        }
    }
    return;
}
void ifft( complex *v, int n, complex *tmp )
{
    if(n>1) { /* otherwise, do nothing and return */
        int k, m;
        complex z, w, *vo, *ve;
        ve = tmp;
        vo = tmp+n/2;
        for(k=0; k<n/2; k++) {
            ve[k] = v[2*k];
            vo[k] = v[2*k+1];
        }
        ifft( ve, n/2, v ); /* iFFT on even-indexed elements of v[] */
        ifft( vo, n/2, v ); /* iFFT on odd-indexed elements of v[] */
        for(m=0; m<n/2; m++) {
            w.Re = cos(2*PI*m/(double)n);
            w.Im = sin(2*PI*m/(double)n);
            z.Re = w.Re*vo[m].Re - w.Im*vo[m].Im; /* Re(w*vo[m]) */
            z.Im = w.Re*vo[m].Im + w.Im*vo[m].Re; /* Im(w*vo[m]) */
            v[ m ].Re = ve[m].Re + z.Re;
            v[ m ].Im = ve[m].Im + z.Im;
            v[m+n/2].Re = ve[m].Re - z.Re;
            v[m+n/2].Im = ve[m].Im - z.Im;
        }
    }
    return;
}
I found this page talking about long signal convolution https://ccrma.stanford.edu/~jos/sasp/Convolving_Long_Signals.html
But I'm not sure how to use the idea in it. Any thoughts would be truly appreciated and I'm ready to provide more information about my question.
The most common efficient long FIR filter method is to use FFT/IFFT overlap-add (or overlap-save) fast convolution, as per the CCRMA paper you referenced. Just chop your data into shorter blocks more suitable for your FFT library and processor data cache sizes, zero-pad by at least the filter kernel length, FFT filter, and sequentially overlap-add the remainder/tails after each IFFT.
Huge long FFTs will most likely thrash your processor's caches, which will likely dominate over any algorithmic O(N log N) speedup.
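As an illustration of the block-wise idea, here is a minimal overlap-add sketch in plain C. For clarity it convolves each block directly; in a real implementation the per-block convolution would be replaced by the zero-padded FFT / multiply / IFFT step described above. The function names and the block length parameter L are just placeholders for this sketch.

#include <stdlib.h>
#include <string.h>

/* Direct convolution of one block of x with the kernel h.
   In the fast-convolution version this is where the zero-padded FFT,
   pointwise multiply and IFFT would go. */
static void conv_block(const float *x, int nx, const float *h, int nh, float *y)
{
    for (int i = 0; i < nx + nh - 1; i++)
        y[i] = 0.0f;
    for (int i = 0; i < nx; i++)
        for (int j = 0; j < nh; j++)
            y[i + j] += x[i] * h[j];
}

/* Overlap-add: split x into blocks of length L, convolve each block with h,
   and add each (len + nh - 1)-sample partial result into y at the block offset,
   so the tails of consecutive blocks overlap and add.
   y must hold nx + nh - 1 samples. */
void overlap_add(const float *x, int nx, const float *h, int nh, float *y, int L)
{
    float *tmp = malloc((size_t)(L + nh - 1) * sizeof(float));
    memset(y, 0, (size_t)(nx + nh - 1) * sizeof(float));
    for (int start = 0; start < nx; start += L) {
        int len = (start + L <= nx) ? L : nx - start;
        conv_block(x + start, len, h, nh, tmp);
        for (int i = 0; i < len + nh - 1; i++)
            y[start + i] += tmp[i];
    }
    free(tmp);
}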

Parallelization for Monte Carlo pi approximation

I am writing a C program to parallelize a pi approximation with OpenMP. I think my code works fine and produces convincing output. I am running it with 4 threads now. What I am not sure about is whether this code is vulnerable to a race condition, and if it is, how do I coordinate the thread actions in this code?
the code looks as follows:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <math.h>
#include <omp.h>

double sample_interval(double a, double b) {
    double x = ((double) rand())/((double) RAND_MAX);
    return (b-a)*x + a;
}

int main (int argc, char **argv) {
    int N = atoi( argv[1] ); // convert command-line input to N = number of points
    int i;
    int NumThreads = 4;
    const double pi = 3.141592653589793;
    double x, y, z;
    double counter = 0;
    #pragma omp parallel firstprivate(x, y, z, i) reduction(+:counter) num_threads(NumThreads)
    {
        srand(time(NULL));
        for (int i=0; i < N; ++i)
        {
            x = sample_interval(-1.,1.);
            y = sample_interval(-1.,1.);
            z = ((x*x)+(y*y));
            if (z <= 1)
            {
                counter++;
            }
        }
    }
    double approx_pi = 4.0 * counter / (double)N;
    printf("%i %1.6e %1.6e\n ", N, 4.0 * counter / (double)N, fabs(4.0 * counter / (double)N - pi) / pi);
    return 0;
}
Also, I was wondering whether the seed for the random number generator should be set inside or outside the parallel region. My output looks like this:
10 3.600000e+00 1.459156e-01
100 3.160000e+00 5.859240e-03
1000 3.108000e+00 1.069287e-02
10000 3.142400e+00 2.569863e-04
100000 3.144120e+00 8.044793e-04
1000000 3.142628e+00 3.295610e-04
10000000 3.141379e+00 6.794439e-05
100000000 3.141467e+00 3.994585e-05
1000000000 3.141686e+00 2.971945e-05
Which looks OK for now. Your suggestions on the race condition and seed placement are most welcome.
There are a few problems in your code that I can see. The main one, from my standpoint, is that it isn't parallelized. Or more precisely, you didn't enable the parallelism you introduced with OpenMP when compiling it. Here is how one can see that:
The way the code is parallelized, the main for loop should be executed in full by all the threads (there is no worksharing here, no #pragma omp parallel for, only a #pragma omp parallel). Therefore, considering you set the number of threads to 4, the global number of iterations should be 4*N. Thus, your output should slowly converge towards 4*Pi, not towards Pi.
Indeed, I tried your code on my laptop, compiled it with OpenMP support, and that is pretty much what I get. However, when I don't enable OpenMP, I get an output similar to yours. So in conclusion, you need to:
Enable OpenMP at compilation time to get a parallel version of your code.
Divide your result by NumThreads to get a "valid" approximation of Pi (or distribute your loop over N with a #pragma omp for, for example; see the sketch below)
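For reference, a sketch of that second option: keep the reduction but let OpenMP share the N iterations among the threads with #pragma omp for, so the total number of samples stays N and no division by NumThreads is needed. This drops into the main() of the question as-is (it still calls the original sample_interval(), so the rand() thread-safety problem discussed below is not fixed here).

#pragma omp parallel reduction(+:counter) num_threads(NumThreads)
{
    /* the N iterations are split across the threads instead of repeated by each */
    #pragma omp for
    for (int i = 0; i < N; ++i) {
        double x = sample_interval(-1., 1.);
        double y = sample_interval(-1., 1.);
        if (x*x + y*y <= 1.0)
            counter++;
    }
}
double approx_pi = 4.0 * counter / (double)N;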
But that is if / when your code is correct elsewhere, which it isn't yet.
As BitTickler already hinted, rand() isn't thread-safe. So you have to go for another random number generator, one that allows you to privatize its state. That could be rand_r() for example. That said, this still has quite a few issues:
rand() / rand_r() is a terrible RNG in terms of randomness and periodicity. As you increase your number of tries, you'll rapidly exceed the period of the RNG and repeat the same sequence over and over again. You need something more robust to do anything remotely serious.
Even with a "good" RNG, the parallelism aspect can be an issue in the sense that you want your parallel sequences to be uncorrelated with each other. And just using a different seed value per thread doesn't guarantee that (although with a wide-enough RNG, you have a bit of headroom for that).
Anyway, bottom line is:
Use a better thread-safe RNG (I find drand48_r() or random_r() to be OK for toy codes on Linux)
Initialize its state per thread, based on the thread id for example, while keeping in mind that this won't ensure a proper decorrelation of the random series in some circumstances (and the more often you call the functions, the more likely you are to eventually have overlapping series).
With this done (along with a few minor fixes), your code becomes, for example, the following:
#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <math.h>
#include <omp.h>

typedef struct drand48_data RNGstate;

double sample_interval(double a, double b, RNGstate *state) {
    double x;
    drand48_r(state, &x);
    return (b-a)*x + a;
}

int main (int argc, char **argv) {
    int N = atoi( argv[1] ); // convert command-line input to N = number of points
    int NumThreads = 4;
    const double pi = 3.141592653589793;
    double x, y, z;
    double counter = 0;
    time_t ctime = time(NULL);
    #pragma omp parallel private(x, y, z) reduction(+:counter) num_threads(NumThreads)
    {
        RNGstate state;
        srand48_r(ctime+omp_get_thread_num(), &state);
        for (int i=0; i < N; ++i) {
            x = sample_interval(-1, 1, &state);
            y = sample_interval(-1, 1, &state);
            z = ((x*x)+(y*y));
            if (z <= 1) {
                counter++;
            }
        }
    }
    double approx_pi = 4.0 * counter / (NumThreads * N);
    printf("%i %1.6e %1.6e\n ", N, approx_pi, fabs(approx_pi - pi) / pi);
    return 0;
}
Which I compile like this:
gcc -std=gnu99 -fopenmp -O3 -Wall pi.c -o pi_omp

How to make a Gaussian packet move in a numerical simulation of a square barrier in C

I am trying to use Gaussian packets to study the transmission probability via the Trotter-Suzuki formula and the fast Fourier transform (FFT) when confronted with a square barrier, just as done in this Quantum Python article. But I need to implement it in C. In principle, the wave function should retain its shape before the collision with the square barrier. But I found that the wave function flattens dramatically with time before colliding with the square barrier. Can anybody find the problem in the following code?
Here, two files - result and psi.txt - are created to store the initial and evolved wave functions. The first two values on each line are the x coordinate and the probability density of the wave function at that x. The third value on each line in the file result is the square barrier profile. The FFT I use is shown in this C program.
#include <stdio.h>
#include <math.h>

#define h_bar 1.0
#define pi 3.1415926535897932385E0
#define m0 1.0

typedef double real;
typedef struct { real Re; real Im; } complex;

extern void fft(complex x[], int N, int flag);

complex complex_product(complex x, real y_power, real y_scale)
{   //x*exp(i*y_power)*y_scale
    real Re, Im;
    Re = (x.Re*cos(y_power)-x.Im*sin(y_power))*y_scale;
    Im = (x.Re*sin(y_power)+x.Im*cos(y_power))*y_scale;
    x.Re = Re; x.Im = Im;
    return x;
}

real potential(real x, real a)
{
    return (x<0 || x>=a) ? 0 : 1;
}

void main()
{
    int t_steps=20, i, N=pow(2,10), m, n;
    complex psi[N];
    real x0=-2, p0=1, k0=p0/h_bar, x[N], k[N], V[N];
    real sigma=0.5, a=0.1, x_lower=-5, x_upper=5;
    real dt=1, dx=(x_upper-x_lower)/N, dk=2*pi/(dx*N);
    FILE *file;
    file = fopen("result", "w");
    //initialize
    for (n=0; n<N; n++)
    {
        x[n] = x_lower+n*dx;
        k[n] = k0+(n-N*0.5)*dk;
        V[n] = potential(x[n], a);
        psi[n].Re = exp(-pow((x[n]-x0)/sigma, 2)/2)*cos(p0*(x[n]-x0)/h_bar);
        psi[n].Im = exp(-pow((x[n]-x0)/sigma, 2)/2)*sin(p0*(x[n]-x0)/h_bar);
    }
    for (m=0; m<N; m++)
        fprintf(file, "%g %g %g\n", x[m], psi[m].Re*psi[m].Re+psi[m].Im*psi[m].Im, V[m]);
    fclose(file);
    for (i=0; i<t_steps; i++)
    {
        printf("t_steps=%d\n", i);
        for (n=0; n<N; n++)
        {
            psi[n]=complex_product(psi[n], -V[n]*dt/h_bar, 1);
            psi[n]=complex_product(psi[n], -k[0]*x[n], dx/sqrt(2*pi));//x--->x_mod
        }
        fft(psi, N, 1);//psi: x_mod--->k_mod
        for (m=0; m<N; m++)
        {
            psi[m]=complex_product(psi[m], -m*dk*x[0], 1);//k_mod--->k
            psi[m]=complex_product(psi[m], -h_bar*k[m]*k[m]*dt/(2*m0), 1./N);
            psi[m]=complex_product(psi[m], m*dk*x[0], 1);//k--->k_mod
        }
        fft(psi, N, -1);
        for (n=0; n<N; n++)
            psi[n] = complex_product(psi[n], k[0]*x[n], sqrt(2*pi)/dx);//x_mod--->x
    }
    file = fopen("psi.txt", "w");
    for (m=0; m<N; m++)
        fprintf(file, "%g %g 0\n", x[m], pow((psi[m]).Re, 2)+pow((psi[m]).Im, 2));
    fclose(file);
}
I use the following Python code to plot the initial and final evolved wave functions:
call: `>>> python plot.py result psi.txt`
import matplotlib.pyplot as plt
from sys import argv

for filename in argv[1:]:
    print filename
    f = open(filename, 'r')
    lines = [line.strip(" \n").split(" ") for line in f]
    x = [float(line[0]) for line in lines]
    y = [float(line[2]) for line in lines]
    psi = [float(line[1]) for line in lines]
    print "x=%g, max=%g" % (x[psi.index(max(psi))], max(psi))
    plt.plot(x, y, x, psi)
    #plt.xlim([-1.0e-10, 1.0e-10])
    plt.ylim([0, 3])
    plt.show()
Your code is almost correct, apart from the fact that you are missing the initial/final half-step in the real domain and some unnecessary operations (k_mod -> k and back), but the main problem is that your initial conditions are chosen really badly. The time evolution of a Gaussian wavepacket results in the uncertainty spreading out quadratically in time:
sigma(t) = sigma(0) * sqrt{ 1 + [ h_bar*t / (2*m0*sigma(0)^2) ]^2 }
Given your choice of particle mass (m0 = 1) and initial wavepacket width (sigma = 0.5), the term in the braces equals 1 + 4t^2. After one timestep, the wavepacket is already significantly wider than it was initially, and after another timestep it becomes wider than the entire simulation box. The periodicity implied by the use of the FFT results in spatial and frequency aliasing, which together with the overly large timestep is why your final wavefunction looks that strange.
I would advise that you try to replicate exactly the conditions of the Python program, including the fact that the entire system is in a deep potential well (V_border -> +infinity).
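For example, a rough way to mimic that deep well in the C code is to make the potential return a very large value near the box edges; the wall width and height below are arbitrary illustration values, not taken from the Python program.

typedef double real;   /* same typedef as in the question's code */

/* Square barrier as before, plus high "walls" near the edges of the box
   to approximate V_border -> +infinity. The wall width (0.5) and height
   (1.0e6) are placeholder values for this sketch. */
real potential_with_walls(real x, real a, real x_lower, real x_upper)
{
    if (x < x_lower + 0.5 || x > x_upper - 0.5)
        return 1.0e6;
    return (x < 0 || x >= a) ? 0 : 1;
}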
The variable i is uninitialised here:
k[n] = k0+(i-N*0.5)*dk;

How to measure overall performance of parallel programs (with papi)

I asked myself what would be the best way to measure the performance (in flops) of a parallel program. I read about papi_flops. This seems to work fine for a serial program. But I don't know how I can measure the overall performance of a parallel program.
I would like to measure the performance of a BLAS/LAPACK function, in my example below gemm. But I also want to measure other functions, especially functions where the number of operations is not known. (In the case of gemm the ops are known (ops(gemm) = 2*n^3), so I could calculate the performance from the number of operations and the execution time.) The library (I am using Intel MKL) spawns the threads automatically, so I can't measure the performance of each thread individually and then reduce it.
This is my example:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "mkl.h"
#include "omp.h"
#include "papi.h"

int main(int argc, char *argv[] )
{
    int i, j, l, k, n, m, idx, iter;
    int mat, mat_min, mat_max;
    int threads;
    double *A, *B, *C;
    double alpha = 1.0, beta = 0.0;
    float rtime1, rtime2, ptime1, ptime2, mflops;
    long long flpops;

    #pragma omp parallel
    {
        #pragma omp master
        threads = omp_get_num_threads();
    }

    if(argc < 4){
        printf("pass me 3 arguments!\n");
        return( -1 );
    }
    else
    {
        mat_min = atoi(argv[1]);
        mat_max = atoi(argv[2]);
        iter = atoi(argv[3]);
    }

    m = mat_max; n = mat_max; k = mat_max;
    printf (" Initializing data for matrix multiplication C=A*B for matrix \n"
            " A(%ix%i) and matrix B(%ix%i)\n\n", m, k, k, n);
    A = (double *) malloc( m*k * sizeof(double) );
    B = (double *) malloc( k*n * sizeof(double) );
    C = (double *) malloc( m*n * sizeof(double) );

    printf (" Initializing matrix data \n\n");
    for (i = 0; i < (m*k); i++)
        A[i] = (double)(i+1);
    for (i = 0; i < (k*n); i++)
        B[i] = (double)(-i-1);
    memset(C, 0, m*n*sizeof(double));

    // actual measurement
    for(mat = mat_min; mat <= mat_max; mat += 5)
    {
        m = mat; n = mat; k = mat;
        for( idx = -1; idx < iter; idx++ ){
            PAPI_flops( &rtime1, &ptime1, &flpops, &mflops );
            cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                        m, n, k, alpha, A, k, B, n, beta, C, n);
            PAPI_flops( &rtime2, &ptime2, &flpops, &mflops );
        }
        printf("%d threads: %d in %f sec, %f MFLOPS\n", threads, mat, rtime2-rtime1, mflops);
        fflush(stdout);
    }
    printf("Done\n");
    fflush(stdout);

    free(A);
    free(B);
    free(C);
    return 0;
}
This is one output (for matrix size 200):
1 threads: 200 in 0.001459 sec, 5570.258789 MFLOPS
2 threads: 200 in 0.000785 sec, 5254.993652 MFLOPS
4 threads: 200 in 0.000423 sec, 4919.640137 MFLOPS
8 threads: 200 in 0.000264 sec, 3894.036865 MFLOPS
We can see from the execution time that the function gemm scales. But the flops that I am measuring are only the performance of thread 0.
My question is: How can I measure the overall performance? I am grateful for any input.
First, I'm just curious - why do you need the FLOPS? Don't you just care how much time is taken? Or maybe the time taken compared to other BLAS libraries?
PAPI is thread-based, so it's not much help on its own here.
What I would do is measure around the function call and see how the time changes with the number of threads it spawns. It should not spawn more threads than physical cores (HT is no good here). Then, if the matrix is big enough and the machine is not loaded, the time should simply divide by the number of threads. E.g., 10 seconds over 4 cores should become 2.5 seconds.
Other than that, there are 2 things you can do to really measure it:
1. Use whatever you use now, but inject your start/end measurement code around the BLAS code. One way to do that (on Linux) is by pre-loading a lib that defines pthread_create and using your own functions that call the originals but do some extra measurements. Another way is to override the function pointer when the process is already running (= trampoline). On Linux it's in the GOT/PLT and on Windows it's more complicated - look for a library.
2. Use oprofile, or some other profiler, to report the number of instructions executed in the time you care about. Or better yet, report the number of floating point instructions executed. A little problem with this is that SSE instructions multiply or add 2 or more doubles at a time, so you'd have to account for that. I guess you can assume they always use the maximum possible operands.
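A minimal sketch of the simple wall-clock approach for gemm, where the operation count 2*m*n*k is known, using omp_get_wtime() around the MKL call; the matrix size and initialization values here are just placeholders for illustration.

#include <stdio.h>
#include <stdlib.h>
#include "mkl.h"
#include "omp.h"

int main(void)
{
    int n = 2000;   /* placeholder matrix size */
    double *A = malloc((size_t)n*n*sizeof(double));
    double *B = malloc((size_t)n*n*sizeof(double));
    double *C = malloc((size_t)n*n*sizeof(double));
    for (int i = 0; i < n*n; i++) { A[i] = 1.0; B[i] = 2.0; C[i] = 0.0; }

    double t0 = omp_get_wtime();
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    double t1 = omp_get_wtime();

    /* 2*n^3 floating point operations divided by the wall-clock time of the
       call, i.e. the aggregate rate of all threads MKL used internally */
    printf("n=%d: %f s, %f GFLOPS\n", n, t1 - t0, 2.0*n*n*n/(t1 - t0)/1e9);

    free(A); free(B); free(C);
    return 0;
}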

MPI in C: parallelisation of a custom function running in serial

I am a beginner MPI user and I may have made some mistakes in the parallel code for my calculation.
I need to compute an iterative estimation on a large data set, and I want to calculate it in parallel using MPI in C.
I made a standard (ANSI) C function ('myFunc') to estimate an element of the output dataset ('param_2') based on the input parameters ('param_1', 'param_3', 'table_1', 'table_2', 'table_3') and the estimate from the previous iteration ('param_2'). The calculation could be done in parallel if we partition the new estimate ('param_2') into chunks.
When I did some profiling on the code, I realised that the calculation starts at almost the same time on each node (thread), but finishes in a serial fashion, one after another (with a fixed time interval between them).
It looks like they are using some shared resource or something like that... I tried to eliminate all concurrency between the threads, but I am afraid I do not have enough experience with MPI to solve the problem.
I thought every MPI thread has its own 'copy' of the declared variables and uses them independently of the others, so I do not understand why the threads wait for each other to finish the calculation when they each have their own copy of the parameters...
Here is a simplified version of the code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>

#define X 131
#define Y 131
#define Z 150
#define MASTER 0

float table_1[31][8];
float table_2[31][4];
float table_3[31][2];

int main(int argc, char* argv[]) {
    float *param_1;
    float *param_2;
    float param_3;
    float *chunk;
    int file_length = X*Y*Z;
    int taskid, numtasks, chunk_size, it, i;   /* declarations missing from the original excerpt */
    float myFunc(int i, float *param_1, float *param_2, float param_3);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    chunk_size = ceil(file_length / numtasks);

    /* Allocate memory for the input parameters */
    param_1 = malloc(file_length*sizeof(float));
    param_2 = malloc(file_length*sizeof(float));
    chunk   = malloc(chunk_size*sizeof(float));   /* allocation missing from the original excerpt */

    if( taskid == MASTER) {
        /* Read parameters from file (table_1, table_2, table_3, param_1) */
    }
    MPI_Bcast(table_1, 31*8, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
    MPI_Bcast(table_2, 31*4, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
    MPI_Bcast(table_3, 31*2, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
    MPI_Bcast(param_1, file_length, MPI_FLOAT, MASTER, MPI_COMM_WORLD);

    for(it = 0; it < 10; it++) {
        for(i = 0; i < chunk_size; i++) {
            chunk[i] = myFunc((taskid*chunk_size)+i, param_1, param_2, param_3);
        }
        MPI_Gather(chunk, chunk_size, MPI_FLOAT, param_2, chunk_size, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
        MPI_Bcast(param_2, file_length, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    free(...);
    return 0;
}

float myFunc(int i, float *param_1, float *param_2, float param_3) {
    /* Uses the global tables (table_1, table_2, table_3) and some locally declared variables */
    /* No MPI functions here, only math functions */
}
If you have a solution, advice, or a comment, please be kind and share it with me; I would be grateful, thank you!

Resources