Segfault when running OpenMPI (C)

I am currently working on a project where I need to implement a parallel FFT algorithm using OpenMPI. I have code that compiles, but when I run it over the cluster I get segmentation faults.
I have my hunches about where things are going wrong, but I don't think I have enough of an understanding of pointers and references to be able to make an efficient fix.
The first chunk that could be going wrong is the passing of the arrays to the helper functions. I believe that either my looping is inconsistent, or I am not understanding how to pass these pointers and get back what I need.
The second possible spot would be within the actual MPI_Send/Recv calls. I am sending a type that is not supported by the built-in MPI C datatypes, so I am using the MPI_BYTE type to send the raw data instead. Is this a viable option, or should I be looking into an alternative to this method?
/* function declarations */
double complex get_block(double complex c[], int start, int stop);
double complex put_block(double complex from[], double complex to[],
                         int start, int stop);

void main(int argc, char **argv)
{
    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    double complex c[N/p];
    int myid;
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    //printf("My id is %d\n",myid);
    MPI_Status status;
    int i;
    for(i=0;i<N/p;i++){
        c[i] = 1.0 + 1.0*I;
    }
    int j = log(p)/log(2) + 1;
    double q;
    double complex z;
    double complex w = exp(-2*PI*I/N);
    double complex block[N/(2*p)]; // half the size of chunk c
    int e,l,t,k,m,rank,plus,minus;
    int temp = (log(N)-log(p))/log(2);
    //printf("temp = %d", temp);

    for(e = 0; e < (log(p)/log(2)); e++){
        /* loop constants */
        t = pow(2,e); l = pow(2,e+temp);
        q = n/2*l; z = cpow(w,(complex)q);
        j = j-1; int v = pow(2,j);
        if(e != 0){
            plus = (myid + p/v)%p;
            minus = (myid - p/v)%p;
        } else {
            plus = myid + p/v;
            minus = myid - p/v;
        }
        if(myid%t == myid%(2*t)){
            MPI_Recv((char*)&c,
                     sizeof(c),
                     MPI_BYTE,
                     plus,
                     MPI_ANY_TAG,
                     MPI_COMM_WORLD,
                     &status);
            /* transform */
            for(k = 0; k < N/p; k++){
                m = (myid * N/p + k)%l;
                c[k] = c[k] + c[k+N/v] * cpow(z,m);
                c[k+N/v] = c[k] - c[k + N/v] * cpow(z,m);
                printf("(k,k+N/v) = (%d,%d)\n",k,k+N/v);
            }*/
            printf("\n\n");
            /* end transform */
            *block = get_block(c, N/v, N/v + N/p + 1);
            MPI_Send((char*)&block,
                     sizeof(block),
                     MPI_BYTE,
                     plus,
                     1,
                     MPI_COMM_WORLD);
        } else {
            // send data of this PE to the (i- p/v)th PE
            MPI_Send((char*)&c,
                     sizeof(c),
                     MPI_BYTE,
                     minus,
                     1,
                     MPI_COMM_WORLD);
            // after the transformation, receive data from (i-p/v)th PE
            // and store them in c:
            MPI_Recv((char*)&block,
                     sizeof(block),
                     MPI_BYTE,
                     minus,
                     MPI_ANY_TAG,
                     MPI_COMM_WORLD,
                     &status);
            *c = put_block(block, c, N/v, N/v + N/p - 1);
            //printf("Process %d send/receive %d\n",myid, plus);
        }
    }

    /* shut down MPI */
    MPI_Finalize();
}

/* helper functions */
double complex get_block(double complex *c, int start, int stop)
{
    double complex block[stop - start + 1];
    //printf("%d = %d\n",sizeof(block)/sizeof(double complex), sizeof(&c)/sizeof(double complex));
    int j = 0;
    int i;
    for(i = start; i < stop+1; i++){
        block[j] = c[i];
        j = j+1;
    }
    return *block;
}

double complex put_block(double complex from[], double complex to[], int start, int stop)
{
    int j = 0;
    int i;
    for(i = start; i<stop+1; i++){
        to[i] = from[j];
        j = j+1;
    }
    return *to;
}
I would really appreciate any feedback!

You are using arrays / pointers to arrays in the wrong way. For example, you declare an array as double complex block[N], which is fine (although uncommon; in most cases it is better to use malloc), and then you receive into it via MPI_Recv(&block). However, "block" is already a pointer to that array, so by writing "&block" you are passing the pointer of the pointer to MPI_Recv. That's not what it expects. If you want to use the "&" notation, you have to write &block[0], which gives you the pointer to the first element of the block array.
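A minimal sketch of that receive, following the suggestion above and assuming N, p, plus and status are declared as in the question (illustrative only, not the poster's original code):

double complex block[N/(2*p)];   /* local buffer, as in the question */

MPI_Recv(&block[0],       /* or simply `block`: a pointer to the first element */
         sizeof(block),   /* raw size in bytes, matching MPI_BYTE */
         MPI_BYTE,
         plus,
         MPI_ANY_TAG,
         MPI_COMM_WORLD,
         &status);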

Have you tried debugging your code? This can be a pain in a parallel setting, but it can tell you exactly where it is failing and usually also why.
If you're using Linux or OS X, you could run your code as follows on the command line:
mpirun -np 4 xterm -e gdb -ex run --args ./yourprog yourargs
where I'm assuming yourprog is the name of your program and yourargs are any command-line arguments you want to pass.
What this command will do is launch four xterm windows. Each xterm will in turn launch gdb as specified by the option -e. gdb will then execute the command run as specified by the option -ex and launch your executable with the given options, as specified by --args.
What you get are four xterm windows running four instances of your program in parallel with MPI. If any of the instances crashes, gdb will tell you where and why.

Related

Array of Structures in C with MPI

I am having a problem: I have defined a struct, and I want an array of structures holding this information (the processor name and the computation time for that processor). This is part of my code:
struct stru
{
double arr_time[50];
char pname[50];
};
int main (int argc, char *argv[])
{
struct stru all_info[50];
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&process_id);
MPI_Comm_size(MPI_COMM_WORLD,&num_of_processes);
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
if (process_id == 0)
{ //do somthing
}
if (process_id > 0)
{
double start = MPI_Wtime();
for (k=0; k<array_size; k++)
for (i=0; i<rows; i++)
{
c[i][k]=0.0;
for (j=0; j<array_size; j++)
c[i][k] = c[i][k] + a[i][j] * b[j][k];
}
end_time = MPI_Wtime() - start;
all_info[i].arr_time[i] = end_time;
for (int i=1 ;i <= numworkers ;i++)
strcpy( all_info[i].pname, processor_name);
printf(" time = %f for processor %s
\n",all_info[i].arr_time, all_info[i].pname);
}
MPI_Gather( &end_time, 1, MPI_DOUBLE, &all_info[i].arr_time, 1,
MPI_DOUBLE, 0, MPI_COMM_WORLD);
if (process_id == 0){
for(i = 1; i <= numworkers; i++ )
{
printf(" time %f for processor %s
\n",all_info[i].arr_time , all_info[i].pname);
} }
I get no result if I print it inside if (process_id == 0)!!!
The output is
time 0.000000 for processor
time 0.000000 for processor
time 0.000000 for processor
and the time is only printed if I print inside if (process_id > 0).
In fact, I don't know how to use a structure with MPI. Can anyone give me advice on how to build an array of structures that holds each processor's name and its time?
Thank you in advance for your time.
At this line:
processor_name[MPI_MAX_PROCESSOR_NAME];
you start using the array variable processor_name without defining it anywhere.
You're missing something like all_info[i]. in front of it, as you have a bit lower:
all_info[i].processor_name;
Then, to store a string, your processor_name needs memory. A single char is just one byte (i.e. one letter). So, assuming these names are never longer than 255 characters, you'd get:
struct stru
{
double end_time;
char processor_name[256];
};
There are so many basic things wrong in your code and your questions seem to indicate that you lack basic understanding of C programming. Therefore my advice would be to take more time studying this language.
The error occurs here because you have not defined anything named processor_name.
If I understand what you're trying to do correctly, it seems like you were trying to access a member of the structures. To do that, you need to use the . operator. You might also need to define the array as
struct stru all_info[MPI_MAX_PROCESSOR_NAME];
instead of
struct stru all_info[50];
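Neither answer above shows how to hand such a struct to MPI directly. As a hedged sketch (an editorial addition, not part of the original answers), one way is to describe the two-field struct from the first answer with MPI_Type_create_struct, so that a whole struct stru can be sent or gathered in one call:

#include <mpi.h>
#include <stddef.h>   /* offsetof */

struct stru {
    double end_time;
    char   processor_name[256];
};

/* Build an MPI datatype that matches struct stru (sketch only). */
static MPI_Datatype make_stru_type(void)
{
    int          blocklens[2] = { 1, 256 };
    MPI_Aint     displs[2]    = { offsetof(struct stru, end_time),
                                  offsetof(struct stru, processor_name) };
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_CHAR };
    MPI_Datatype stru_type;

    MPI_Type_create_struct(2, blocklens, displs, types, &stru_type);
    MPI_Type_commit(&stru_type);
    return stru_type;
}

Each rank could then fill one such struct with its MPI_Wtime() difference and MPI_Get_processor_name() result and gather it to rank 0 with, e.g., MPI_Gather(&my_info, 1, stru_type, all_info, 1, stru_type, 0, MPI_COMM_WORLD), where my_info is an illustrative local variable of type struct stru.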

MPI_Scatter issues (C)

My assignment is to parallelize some provided sequential code that takes an array of numbers and merges it with another array in order to come up with a sorted list. The first step is to initialize the array on each processor, then fill it with values on the root process. Next, I'm supposed to scatter the array out to the other processors so each processor has a chunk of the data for sorting, sort these small lists locally, and then gather them back up at the root process. My problem is that no matter what I try, the scatter won't actually send any data to the other processors. The array on these processors is always full of 0's, whereas the root processor does contain a list of numbers. Anybody care to take a look at my code and tell me what I'm missing?
The original sequential code
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
/**
* Prints a vector without decimal places...
*/
void print_vector(int n, double vector[]) {
int i;
printf("[%.0f", vector[0]);
for (i = 1; i < n; i++) {
printf(", %.0f", vector[i]);
}
printf("]\n");
}
/**
* Just checks sequentially if everything is in ascending order.
*
* Note: we don't care about stability in this sort since we
* have no data attached to the double value, see
* http://en.wikipedia.org/wiki/Sorting_algorithm#Stability
* for more detail.
*/
void test_correctness(int n, double v[]) {
int i;
for (i = 1; i < n; i++) {
if (v[i] < v[i-1]) {
printf("Correctness test found error at %d: %.4f is not < %.4f but appears before it\n", i, v[i-1], v[i]);
}
}
}
/**
* Initialize random vector.
*
* You may not parallelize this (even though it could be done).
*/
void init_random_vector(int n, double v[]) {
int i, j;
for (i = 0; i < n; i++) {
v[i] = rand() % n;
}
}
double *R;
/**
* Merges two arrays, left and right, and leaves result in R
*/
void merge(double *left_array, double *right_array, int leftCount, int rightCount) {
int i,j,k;
// i - to mark the index of left aubarray (left_array)
// j - to mark the index of right sub-raay (right_array)
// k - to mark the index of merged subarray (R)
i = 0; j = 0; k =0;
while (i < leftCount && j < rightCount) {
if(left_array[i] < right_array[j])
R[k++] = left_array[i++];
else
R[k++] = right_array[j++];
}
while (i < leftCount)
R[k++] = left_array[i++];
while (j < rightCount)
R[k++] = right_array[j++];
}
/**
* Recursively merges an array of n values using group sizes of s.
* For example, given an array of 128 values and starting s value of 16
* will result in 8 groups of 16 merging into 4 groups of 32, then recursively
* calling merge_all which merges them into 2 groups of 64, then once more
* recursively into 1 group of 128.
*/
void merge_all(double *v, int n, int s) {
if (s < n) {
int i;
for (i = 0; i < n; i += 2*s) {
merge(v+i, v+i+s, s, s);
// result is in R starting at index 0
memcpy(v+i, R, 2*s*sizeof(double));
}
merge_all(v, n, 2*s);
}
}
void merge_sort(int n, double *v) {
merge_all(v, n, 1);
}
int main(int argc, char *argv[]) {
if (argc < 2) {
// () means optional
printf("usage: %s n (seed)\n", argv[0]);
return 0;
}
int n = atoi(argv[1]);
int seed = 0;
if (argc > 2) {
seed = atoi(argv[2]);
}
srand(seed);
int mpi_p, mpi_rank;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &mpi_p);
MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);
// init the temporary array used later for merge
R = malloc(sizeof(double)*n);
// all MPI processes will allocate a vector of length n, but only
// process 0 will initialize the values. You must distribute the
// values to processes. You will also likely need to allocate
// additional memory in your processes, make sure to clean it up
// by adding a correct free at the end of each process.
double *v = malloc(sizeof(double)*n);
if (mpi_rank == 0) {
init_random_vector(n, v);
}
double start = MPI_Wtime();
// do all the work ourselves! (you should make a better algorithm here!)
if (mpi_rank == 0) {
merge_sort(n, v);
}
double end = MPI_Wtime();
if (mpi_rank == 0) {
printf("Total time to solve with %d MPI Processes was %.6f\n", mpi_p, (end-start));
test_correctness(n, v);
}
MPI_Finalize();
free(v);
free(R);
return 0;
}
The parts I edited
double start = MPI_Wtime();
// do all the work ourselves! (you should make a better algorithm here!)
if (mpi_rank == 0)
{
//Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
MPI_Scatter(&v, sizeOf, MPI_DOUBLE, &v, sizeOf, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
if (mpi_rank > 0)
{
//merge_sort(sizeOf, R);
printf("%f :v[0]", v[0]); printf("%s", "\n");
printf("%f :v[1]", v[1]); printf("%s", "\n");
printf("%f :v[2]", v[0]); printf("%s", "\n");
}
if (mpi_rank == 0)
{
//Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
//MPI_Gather(&v, sizeOf, MPI_DOUBLE, test, sizeOf, MPI_DOUBLE, 0, MPI_COMM_WORLD);
//merge(v, test, n, sizeOf);
merge_sort(n, v);
}
double end = MPI_Wtime();
The code should output three values from each processor that isn't the root, but it just gives me 0's. I tried many different parameters inside the scatter function call, to no avail. Almost everything I tried produces the same output found below. Sample output:
0.000000 :v[0]
0.000000 :v[1]
0.000000 :v[2]
Total time to solve with 2 MPI Processes was 0.000052
EDIT: Calling scatter from every process is also something I tried, as it sounds like this is the way you're supposed to call it instead of just calling it from the root. However, then I get a bunch of errors instead of any output at all. Errors look like this:
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: (128)
Failing at address: (nil)
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: (128)
Failing at address: (nil)
Something is clearly wrong here. Not sure what I'm doing wrong though.
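For reference, here is a hedged sketch of the collective calling pattern the EDIT refers to: every rank calls MPI_Scatter, the root supplies the full vector as the send buffer, and every rank receives into its own smaller buffer. This is an illustration only (local_n and local_v are made-up names, and it assumes n divides evenly by mpi_p), not the assignment's reference solution:

/* Sketch only: all ranks participate in the collective calls. */
int local_n = n / mpi_p;                          /* elements per rank */
double *local_v = malloc(local_n * sizeof(double));

MPI_Scatter(v,       local_n, MPI_DOUBLE,         /* send buffer is only read on root */
            local_v, local_n, MPI_DOUBLE,
            0, MPI_COMM_WORLD);

/* each rank would sort local_v here */

MPI_Gather(local_v, local_n, MPI_DOUBLE,
           v,       local_n, MPI_DOUBLE,
           0, MPI_COMM_WORLD);
free(local_v);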

MPI runtime error: either a Scatterv count error, a segmentation fault, or it gets stuck

/*
Matricefilenames:
small matrix A.bin of dimension 100 × 50
small matrix B.bin of dimension 50 × 100
large matrix A.bin of dimension 1000 × 500
large matrix B.bin of dimension 500 × 1000
An MPI program should be implemented such that it can
• accept two file names at run-time,
• let process 0 read the A and B matrices from the two data files,
• let process 0 distribute the pieces of A and B to all the other processes,
• involve all the processes to carry out the chosen parallel algorithm
for matrix multiplication C = A * B ,
• let process 0 gather, from all the other processes, the different pieces
of C ,
• let process 0 write out the entire C matrix to a data file.
*/
int main(int argc, char *argv[]) {
printf("Oblig 2 \n");
double **matrixa;
double **matrixb;
int ma,na,my_ma,my_na;
int mb,nb,my_mb,my_nb;
int i,j,k;
int myrank,numprocs;
int konstanta,konstantb;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&myrank);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
if(myrank==0) {
read_matrix_binaryformat ("small_matrix_A.bin", &matrixa, &ma, &na);
read_matrix_binaryformat ("small_matrix_B.bin", &matrixb, &mb, &nb);
}
//mpi broadcast
MPI_Bcast(&ma,1,MPI_INT,0,MPI_COMM_WORLD);
MPI_Bcast(&mb,1,MPI_INT,0,MPI_COMM_WORLD);
MPI_Bcast(&na,1,MPI_INT,0,MPI_COMM_WORLD);
MPI_Bcast(&nb,1,MPI_INT,0,MPI_COMM_WORLD);
fflush(stdout);
int resta = ma % numprocs; // remainder: how many processes get the larger chunk
//int restb = mb % numprocs;
if (myrank == 0) {
printf("ma : %d",ma);
fflush(stdout);
printf("mb : %d",mb);
fflush(stdout);
}
MPI_Barrier(MPI_COMM_WORLD);
if (resta == 0) {
my_ma = ma / numprocs;
printf("null rest\n ");
fflush(stdout);
} else {
if (myrank < resta) {
my_ma = ma / numprocs + 1; // remember the + 1
} else {
my_ma = ma / numprocs; // integer division gives the lower value!
}
}
my_na = na;
my_nb = nb;
double **myblock = malloc(my_ma*sizeof(double*));
for(i=0;i<na;i++) {
myblock[i] = malloc(my_na*sizeof(double));
}
//send_cnt for scatterv
//________________________________________________________________________________________________________________________________________________
int* send_cnta = (int*)malloc(numprocs*sizeof(int)); // array with the number of elements sent to each process: array[i] = element count for process i
int tot_elemsa = my_ma*my_na;
MPI_Allgather(&tot_elemsa,1,MPI_INT,&send_cnta[0],1,MPI_INT,MPI_COMM_WORLD); // arrays in C must be passed as &array[0]
//send_disp for scatterv
//__________________________________________________________________________________
int* send_dispa = (int*)malloc(numprocs*sizeof(int)); // why do we need disp?
// int* send_dispb = (int*)malloc(numprocs*sizeof(int));
// disp: where in imagechars the first element for each process should go
fflush(stdout);
if(resta==0) {
send_dispa[myrank]=myrank*my_ma*my_na;
} else if(myrank<=resta) {
if(myrank<resta) {
send_dispa[myrank]=myrank*my_ma*my_na;
} else {//my_rank == rest
send_dispa[myrank]=myrank*(my_ma+1)*my_na;
konstanta=myrank*(my_ma+1)*my_na;
}
}
MPI_Bcast(&konstanta,1,MPI_INT,resta,MPI_COMM_WORLD);
if (myrank>resta){
send_dispa[myrank]=((myrank-resta)*(my_ma*my_na))+konstanta;
}
MPI_Allgather(&send_dispa[myrank],1,MPI_INT,&send_dispa[0],1,MPI_INT,MPI_COMM_WORLD);
//___________________________________________________________________________________
printf("print2: %d" , myrank);
fflush(stdout);
//recv_buffer for scatterv
double *recv_buffera=malloc((my_ma*my_na)*sizeof(double));
MPI_Scatterv(&matrixa[0], &send_cnta[0], &send_dispa[0], MPI_UNSIGNED_CHAR, &recv_buffera[0], my_ma*my_na, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
for(i=0; i<my_ma; i++) {
for(j=0; j<my_na; j++) {
myblock[i][j]=recv_buffera[i*my_na + j];
}
}
MPI_Finalize();
return 0;
}
OLD: I get three types of errors: a Scatterv count error, segmentation fault 11, or the processes just get stuck. It seems to be random which error I get. I run the code with 2 processes each time. When it gets stuck, it gets stuck before the printf("print2: %d", myrank);. When my friend runs the code on his own computer, also with two processes, he does not get past the first MPI_Bcast; nothing is printed out when he runs it. Here is a link to the errors I get: http://justpaste.it/zs0
UPDATED PROBLEM: Now I only get a segmentation fault after the printf("print2: %d", myrank);, before the Scatterv call. I get the segmentation fault even if I remove all the code after the printf statement, but only if I run the code with more than two processes.
I'm having a little difficulty tracing what you were trying to do. I think you're making the scatterv call more complicated than it needs to be though. Here's a snippet I had from a similar assignment this year. Hopefully it's a clearer example of how scatterv works.
/*********************************************************************
* Scatter A to All Processes
* - Using Scatterv for versatility.
*********************************************************************/
int *send_counts; // Send Counts
int *displacements; // Send Offsets
int chunk; // Number of Rows per Process (- Root)
int chunk_size; // Number of Doubles per Chunk
int remainder; // Number of Rows for Root Process
double * rbuffer; // Receive Buffer
// Do Some Math
chunk = m / (p - 1);
remainder = m % (p - 1);
chunk_size = chunk * n;
// Setup Send Counts
send_counts = malloc(p * sizeof(int));
send_counts[0] = remainder * n;
for (i = 1; i < p; i++)
send_counts[i] = chunk_size;
// Setup Displacements
displacements = malloc(p * sizeof(int));
displacements[0] = 0;
for (i = 1; i < p; i++)
displacements[i] = (remainder * n) + ((i - 1) * chunk_size);
// Allocate Receive Buffer
rbuffer = malloc(send_counts[my_rank] * sizeof(double));
// Scatter A Over All Processes!
MPI_Scatterv(A, // A
send_counts, // Array of counts [int]
displacements, // Array of displacements [int]
MPI_DOUBLE, // Sent Data Type
rbuffer, // Receive Buffer
send_counts[my_rank], // Receive Count - Per Process
MPI_DOUBLE, // Received Data Type
root, // Root
comm); // Comm World
MPI_Barrier(comm);
Also, this causes a segfault on my machine, with no MPI involved... I'm pretty sure it's the way myblock is being allocated. You should do what @Hristo suggested in the comments: allocate both matrices and the resultant matrix as flat arrays. That would eliminate the use of double pointers and make your life a whole lot simpler.
#include <stdio.h>
#include <stdlib.h>
void main ()
{
int na = 5;
int my_ma = 5;
int my_na = 5;
int i;
int j;
double **myblock = malloc(my_ma*sizeof(double*));
for(i=0;i<na;i++) {
myblock = malloc(my_na*sizeof(double));
}
unsigned char *recv_buffera=malloc((my_ma*my_na)*sizeof(unsigned char));
for(i=0; i<my_ma; i++) {
for(j=0; j<my_na; j++) {
myblock[i][j]=(float)recv_buffera[i*my_na + j];
}
}
}
Try allocating more like this:
// Allocate A, b, and y. Generate random A and b
double *buff=0;
if (my_rank==0)
{
int A_size = m*n, b_size = n, y_size = m;
int size = (A_size+b_size+y_size)*sizeof(double);
buff = (double*)malloc(size);
if (buff==NULL)
{
printf("Process %d failed to allocate %d bytes\n", my_rank, size);
MPI_Abort(comm,-1);
return 1;
}
// Set pointers
A = buff; b = A+m*n; y = b+n;
// Generate matrix and vector
genMatrix(m, n, A);
genVector(n, b);
}

Sending distributed chunks of a 2D array to the root process in MPI

I have a 2D array which is distributed across an MPI process grid (3 x 2 processes in this example). The values of the array are generated within the process to which that chunk of the array is distributed, and I want to gather all of those chunks together at the root process to display them.
So far, I have the code below. This generates a cartesian communicator, finds out the coordinates of the MPI process and works out how much of the array it should get based on that (as the array need not be a multiple of the cartesian grid size). I then create a new MPI derived datatype which will send the whole of that process's subarray as one item (that is, the stride, blocklength and count are different for each process, as each process has different sized arrays). However, when I come to gather the data together with MPI_Gather, I get a segmentation fault.
I think this is because I shouldn't be using the same datatype for sending and receiving in the MPI_Gather call. The data type is fine for sending the data, as it has the right count, stride and blocklength, but when it gets to the other end it'll need a very different derived datatype. I'm not sure how to calculate the parameters for this datatype - does anyone have any ideas?
Also, if I'm approaching this from completely the wrong angle then please let me know!
#include<stdio.h>
#include<array_alloc.h>
#include<math.h>
#include<mpi.h>
int main(int argc, char ** argv)
{
int size, rank;
int dim_size[2];
int periods[2];
int A = 2;
int B = 3;
MPI_Comm cart_comm;
MPI_Datatype block_type;
int coords[2];
float **array;
float **whole_array;
int n = 10;
int rows_per_core;
int cols_per_core;
int i, j;
int x_start, x_finish;
int y_start, y_finish;
/* Initialise MPI */
MPI_Init(&argc, &argv);
/* Get the rank for this process, and the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
{
/* If we're the master process */
whole_array = alloc_2d_float(n, n);
/* Initialise whole array to silly values */
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
whole_array[i][j] = 9999.99;
}
}
for (j = 0; j < n; j ++)
{
for (i = 0; i < n; i++)
{
printf("%f ", whole_array[j][i]);
}
printf("\n");
}
}
/* Create the cartesian communicator */
dim_size[0] = B;
dim_size[1] = A;
periods[0] = 1;
periods[1] = 1;
MPI_Cart_create(MPI_COMM_WORLD, 2, dim_size, periods, 1, &cart_comm);
/* Get our co-ordinates within that communicator */
MPI_Cart_coords(cart_comm, rank, 2, coords);
rows_per_core = ceil(n / (float) A);
cols_per_core = ceil(n / (float) B);
if (coords[0] == (B - 1))
{
/* We're at the far end of a row */
cols_per_core = n - (cols_per_core * (B - 1));
}
if (coords[1] == (A - 1))
{
/* We're at the bottom of a col */
rows_per_core = n - (rows_per_core * (A - 1));
}
printf("X: %d, Y: %d, RpC: %d, CpC: %d\n", coords[0], coords[1], rows_per_core, cols_per_core);
MPI_Type_vector(rows_per_core, cols_per_core, cols_per_core + 1, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
array = alloc_2d_float(rows_per_core, cols_per_core);
if (array == NULL)
{
printf("Problem with array allocation.\nExiting\n");
return 1;
}
for (j = 0; j < rows_per_core; j++)
{
for (i = 0; i < cols_per_core; i++)
{
array[j][i] = (float) (i + 1);
}
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Gather(array, 1, block_type, whole_array, 1, block_type, 0, MPI_COMM_WORLD);
/*
if (rank == 0)
{
for (j = 0; j < n; j ++)
{
for (i = 0; i < n; i++)
{
printf("%f ", whole_array[j][i]);
}
printf("\n");
}
}
*/
/* Close down the MPI environment */
MPI_Finalize();
}
The 2D array allocation routine I have used above is implemented as:
float **alloc_2d_float( int ndim1, int ndim2 ) {
float **array2 = malloc( ndim1 * sizeof( float * ) );
int i;
if( array2 != NULL ){
array2[0] = malloc( ndim1 * ndim2 * sizeof( float ) );
if( array2[ 0 ] != NULL ) {
for( i = 1; i < ndim1; i++ )
array2[i] = array2[0] + i * ndim2;
}
else {
free( array2 );
array2 = NULL;
}
}
return array2;
}
This is a tricky one. You're on the right track, and yes, you will need different types for sending and receiving.
The sending part is easy -- if you're sending the whole subarray array, then you don't even need the vector type; you can send the entire (rows_per_core)*(cols_per_core) contiguous floats starting at &(array[0][0]) (or array[0], if you prefer).
It's the receiving that's the tricky part, as you've gathered. Let's start with the simplest case -- assuming that everything divides evenly so all the blocks have the same size. Then you can use the very helpful MPI_Type_create_subarray (you could always cobble this together with vector types, but for higher-dimensional arrays this becomes tedious, as you need to create one intermediate type for each dimension of the array except the last...).
Also, rather than hardcoding the decomposition, you can use the also-helpful MPI_Dims_create to create an as-square-as-possible decomposition of your ranks. Note
that this doesn't necessarily have anything to do with MPI_Cart_create, although you can use it for the requested dimensions. I'm going to skip the cart_create stuff here, not because it's not useful, but because I want to focus on the gather stuff.
So if everyone has the same size of array, then root is receiving the same data type from everyone, and one can use a very simple subarray type to get their data:
MPI_Type_create_subarray(2, whole_array_size, sub_array_size, starts,
MPI_ORDER_C, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
where sub_array_size[] = {rows_per_core, cols_per_core}, whole_array_size[] = {n,n}, and for here, starts[] = {0,0} -- e.g., we'll just assume that everything starts at the start.
The reason for this is that we can then use Gatherv to explicitly set the displacements into the array:
for (int i=0; i<size; i++) {
counts[i] = 1; /* one block_type per rank */
int row = (i % A);
int col = (i / A);
/* displacement into the whole_array */
disps[i] = (col*cols_per_core + row*(rows_per_core)*n);
}
MPI_Gatherv(array[0], rows_per_core*cols_per_core, MPI_FLOAT,
recvptr, counts, disps, resized_type, 0, MPI_COMM_WORLD);
So now everyone sends their data in one chunk, and it's received into the right part of the array. For this to work, I've resized the type so that its extent is just one float, so the displacements can be calculated in that unit:
MPI_Type_create_resized(block_type, 0, 1*sizeof(float), &resized_type);
MPI_Type_commit(&resized_type);
The whole code is below:
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<mpi.h>
float **alloc_2d_float( int ndim1, int ndim2 ) {
float **array2 = malloc( ndim1 * sizeof( float * ) );
int i;
if( array2 != NULL ){
array2[0] = malloc( ndim1 * ndim2 * sizeof( float ) );
if( array2[ 0 ] != NULL ) {
for( i = 1; i < ndim1; i++ )
array2[i] = array2[0] + i * ndim2;
}
else {
free( array2 );
array2 = NULL;
}
}
return array2;
}
void free_2d_float( float **array ) {
if (array != NULL) {
free(array[0]);
free(array);
}
return;
}
void init_array2d(float **array, int ndim1, int ndim2, float data) {
for (int i=0; i<ndim1; i++)
for (int j=0; j<ndim2; j++)
array[i][j] = data;
return;
}
void print_array2d(float **array, int ndim1, int ndim2) {
for (int i=0; i<ndim1; i++) {
for (int j=0; j<ndim2; j++) {
printf("%6.2f ", array[i][j]);
}
printf("\n");
}
return;
}
int main(int argc, char ** argv)
{
int size, rank;
int dim_size[2];
int periods[2];
MPI_Datatype block_type, resized_type;
float **array;
float **whole_array;
float *recvptr;
int *counts, *disps;
int n = 10;
int rows_per_core;
int cols_per_core;
int i, j;
int whole_array_size[2];
int sub_array_size[2];
int starts[2];
int A, B;
/* Initialise MPI */
MPI_Init(&argc, &argv);
/* Get the rank for this process, and the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
{
/* If we're the master process */
whole_array = alloc_2d_float(n, n);
recvptr = &(whole_array[0][0]);
/* Initialise whole array to silly values */
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
whole_array[i][j] = 9999.99;
}
}
print_array2d(whole_array, n, n);
puts("\n\n");
}
/* Create the cartesian communicator */
MPI_Dims_create(size, 2, dim_size);
A = dim_size[1];
B = dim_size[0];
periods[0] = 1;
periods[1] = 1;
rows_per_core = ceil(n / (float) A);
cols_per_core = ceil(n / (float) B);
if (rows_per_core*A != n) {
if (rank == 0) fprintf(stderr,"Aborting: rows %d don't divide by %d evenly\n", n, A);
MPI_Abort(MPI_COMM_WORLD,1);
}
if (cols_per_core*B != n) {
if (rank == 0) fprintf(stderr,"Aborting: cols %d don't divide by %d evenly\n", n, B);
MPI_Abort(MPI_COMM_WORLD,2);
}
array = alloc_2d_float(rows_per_core, cols_per_core);
printf("%d, RpC: %d, CpC: %d\n", rank, rows_per_core, cols_per_core);
whole_array_size[0] = n;
sub_array_size [0] = rows_per_core;
whole_array_size[1] = n;
sub_array_size [1] = cols_per_core;
starts[0] = 0; starts[1] = 0;
MPI_Type_create_subarray(2, whole_array_size, sub_array_size, starts,
MPI_ORDER_C, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
MPI_Type_create_resized(block_type, 0, 1*sizeof(float), &resized_type);
MPI_Type_commit(&resized_type);
if (array == NULL)
{
printf("Problem with array allocation.\nExiting\n");
MPI_Abort(MPI_COMM_WORLD,3);
}
init_array2d(array,rows_per_core,cols_per_core,(float)rank);
counts = (int *)malloc(size * sizeof(int));
disps = (int *)malloc(size * sizeof(int));
/* note -- we're just using MPI_COMM_WORLD rank here to
* determine location, not the cart_comm for now... */
for (int i=0; i<size; i++) {
counts[i] = 1; /* one block_type per rank */
int row = (i % A);
int col = (i / A);
/* displacement into the whole_array */
disps[i] = (col*cols_per_core + row*(rows_per_core)*n);
}
MPI_Gatherv(array[0], rows_per_core*cols_per_core, MPI_FLOAT,
recvptr, counts, disps, resized_type, 0, MPI_COMM_WORLD);
free_2d_float(array);
if (rank == 0) print_array2d(whole_array, n, n);
if (rank == 0) free_2d_float(whole_array);
MPI_Finalize();
}
Minor thing -- you don't need the barrier before the gather. In fact, you hardly ever really need a barrier; they're expensive operations for a few reasons, and can hide problems -- my rule of thumb is to never, ever, use barriers unless you know exactly why the rule needs to be broken in a given case. In this case in particular, the collective gather routine does exactly the same synchronization as the barrier, so just use that.
Now, moving onto the harder stuff. If things don't divide evenly, you have a few options. The simplest, though not necessarily the best, is just to pad the array so that it does divide evenly, even if just for this operation.
If you can arrange it so that the number of columns does divide evenly, even if the number of rows doesn't, then you can still use the gatherv and create a vector type for each part of the row, and gatherv the appropriate number of rows from each processor. That would work fine.
If you definitely have the case where neither can be counted on to divide, and you can't pad data for sending, then there are three sub-options I can see:
As susterpatt suggests, do point-to-point. For small numbers of tasks, this is fine, but as it gets larger, this will be significantly less efficient than the collective operations.
Create a communicator consisting of all the processors not on the outer edges, and use exactly the code above to gather their data; then point-to-point the edge tasks' data.
Don't gather to process 0 at all; use the Distributed array type to describe the layout of the array, and use MPI-IO to write all the data to a file; once that's done, you can have process zero display the data in some way if you like.
It looks like the first argument to your MPI_Gather call should probably be array[0], and not array.
Also, if you need to get different amounts of data from each rank, you might be better off using MPI_Gatherv.
Finally, note that gathering all your data in one place to do output is not scalable in many circumstances. As the amount of data grows, eventually it will exceed the memory available to rank 0. You might be much better off distributing the output work (if you are writing to a file, using MPI-IO or other library calls) or doing point-to-point sends to rank 0 one at a time, to limit the total memory consumption.
On the other hand, I would not recommend coordinating each of your ranks printing to standard output, one after another, because some major MPI implementations don't guarantee that standard output will be produced in order. Cray's MPI, in particular, jumbles up standard output pretty thoroughly if multiple ranks print.
According to this (emphasis mine):
The type-matching conditions for the collective operations are more strict than the corresponding conditions between sender and receiver in point-to-point. Namely, for collective operations, the amount of data sent must exactly match the amount of data specified by the receiver. Distinct type maps between sender and receiver are still allowed.
Sounds to me like you have two options:
Pad smaller submatrices so that all processes send the same amount of data, then crop the matrix back to its original size after the Gather. If you're feeling adventurous, you might try defining the receiving typemap so that paddings are automatically overwritten during the Gather operation, thus eliminating the need for the crop afterwards. This could get a bit complicated though.
Fall back to point-to-point communication. Much more straightforward, but possibly higher communication costs.
Personally, I'd go with option 2.
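A hedged sketch of what option 2 could look like here (an illustration, not from the answer itself; it reuses the names array, whole_array, rank, size, A, rows_per_core and cols_per_core from the code above, needs <stdlib.h> and <string.h>, and assumes, as in the full Gatherv example, that all blocks have the same shape):

/* Point-to-point gather, sketch only: each non-root rank sends its block,
 * and rank 0 copies each received block row by row into whole_array.
 * Rank 0's own block still has to be copied into whole_array separately. */
if (rank != 0) {
    MPI_Send(&(array[0][0]), rows_per_core * cols_per_core, MPI_FLOAT,
             0, 0, MPI_COMM_WORLD);
} else {
    for (int src = 1; src < size; src++) {
        int row0 = (src % A) * rows_per_core;   /* same grid layout as disps[] above */
        int col0 = (src / A) * cols_per_core;
        float *tmp = malloc(rows_per_core * cols_per_core * sizeof(float));
        MPI_Recv(tmp, rows_per_core * cols_per_core, MPI_FLOAT,
                 src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (int r = 0; r < rows_per_core; r++)
            memcpy(&whole_array[row0 + r][col0], &tmp[r * cols_per_core],
                   cols_per_core * sizeof(float));
        free(tmp);
    }
}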

From OpenMP to MPI

I just wonder how to convert the following OpenMP program to an MPI program:
#include <omp.h>
#define CHUNKSIZE 100
#define N 1000
int main (int argc, char *argv[])
{
int i, chunk;
float a[N], b[N], c[N];
/* Some initializations */
for (i=0; i < N; i++)
a[i] = b[i] = i * 1.0;
chunk = CHUNKSIZE;
#pragma omp parallel shared(a,b,c,chunk) private(i)
{
#pragma omp for schedule(dynamic,chunk) nowait
for (i=0; i < N; i++)
c[i] = a[i] + b[i];
} /* end of parallel section */
return 0;
}
I have a similar program that I would like to run on a cluster and the program is using OpenMP.
Thanks!
UPDATE:
In the following toy code, I want to limit the parallel part within function f():
#include "mpi.h"
#include <stdio.h>
#include <string.h>
void f();
int main(int argc, char **argv)
{
printf("%s\n", "Start running!");
f();
printf("%s\n", "End running!");
return 0;
}
void f()
{
char idstr[32]; char buff[128];
int numprocs; int myid; int i;
MPI_Status stat;
printf("Entering function f().\n");
MPI_Init(NULL, NULL);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
if(myid == 0)
{
printf("WE have %d processors\n", numprocs);
for(i=1;i<numprocs;i++)
{
sprintf(buff, "Hello %d", i);
MPI_Send(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD); }
for(i=1;i<numprocs;i++)
{
MPI_Recv(buff, 128, MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
printf("%s\n", buff);
}
}
else
{
MPI_Recv(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
sprintf(idstr, " Processor %d ", myid);
strcat(buff, idstr);
strcat(buff, "reporting for duty\n");
MPI_Send(buff, 128, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
}
MPI_Finalize();
printf("Leaving function f().\n");
}
However, the running output is not expected. The printf parts before and after the parallel part have been executed by every process, not just the main process:
$ mpirun -np 3 ex2
Start running!
Entering function f().
Start running!
Entering function f().
Start running!
Entering function f().
WE have 3 processors
Hello 1 Processor 1 reporting for duty
Hello 2 Processor 2 reporting for duty
Leaving function f().
End running!
Leaving function f().
End running!
Leaving function f().
End running!
So it seems to me that the parallel part is not limited to the region between MPI_Init() and MPI_Finalize().
To answer your update:
When using MPI, the same program is run by each processor. In order to restrict parallel parts you will need to use a statement like:
if (rank == 0) { ...serial work... }
This will ensure that only one processor does the work inside this block.
You can see how this works in the example program you posted, inside f(), there is the if(myid == 0) statement. This block of statements will then only be executed by process 0, all other processes go straight to the else and receive their messages, before sending them back.
With regard to MPI_Init and MPI_Finalize -- MPI_Init initialises the MPI environment. Once you have called this method you can use the other MPI methods like Send and Recv. Once you have finished using MPI methods, MPI_Finalize will free up the resources etc, but the program will keep running. For example, you could call MPI_Finalize before performing some I/O that was going to take a long time. These methods do not delimit the parallel portion of the code, merely where you can use other MPI calls.
Hope this helps.
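To make that concrete, here is a minimal sketch (illustrative only) of the pattern described above:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    /* Every process runs the whole program; MPI_Init/MPI_Finalize do not
     * mark a "parallel region" the way an OpenMP pragma does. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        printf("Only rank 0 does this serial work.\n");
    }

    printf("Every rank executes this line.\n");

    MPI_Finalize();
    return 0;
}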
You just need to assign a portion of the arrays (a, b, c) to each process. Something like this:
#include <mpi.h>
#define N 1000
int main(int argc, char *argv[])
{
int i, myrank, myfirstindex, mylastindex, procnum;
float a[N], b[N], c[N];
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &procnum);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
/* Dynamic assignment of chunks,
* depending on number of processes
*/
if (myrank == 0)
myfirstindex = 0;
else if (myrank < N % procnum)
myfirstindex = myrank * (N / procnum + 1);
else
myfirstindex = N % procnum + myrank * (N / procnum);
if (myrank == procnum - 1)
mylastindex = N - 1;
else if (myrank < N % procnum)
mylastindex = myfirstindex + N / procnum + 1;
else
mylastindex = myfirstindex + N / procnum;
// Initializations
for(i = myfirstindex; i < mylastindex; i++)
a[i] = b[i] = i * 1.0;
// Computations
for(i = myfirstindex; i < mylastindex; i++)
c[i] = a[i] + b[i];
MPI_Finalize();
}
You can try the proprietary Intel Cluster OpenMP. It will run OpenMP programs on a cluster.
Yes, it simulates a shared-memory computer on distributed-memory clusters, using "software distributed shared memory": http://en.wikipedia.org/wiki/Distributed_shared_memory
It is easy to use and included in the Intel C++ Compiler (9.1+), but it works only on 64-bit processors.
