In MPI (MPICH) I am trying to use windows. I have a 3D grid topology and an additional communicator i_comm.
MPI_Comm cartcomm;
int periods[3]={1,1,1}, reorder=0, coords[3];
int dims[3]={mesh, mesh, mesh}; //mesh is the size of each dimension
MPI_Dims_create(size, 3, dims);
MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods,reorder, &cartcomm);
MPI_Cart_coords(cartcomm, my_rank, 3, coords);
MPI_Comm i_comm;
int i_remain_dims[3] = {false, true, false};
MPI_Cart_sub(cartcomm, i_remain_dims, &i_comm);
int i_rank;
MPI_Comm_rank(i_comm, &i_rank);
MPI_Win win_PB;
int * PA = (int *) malloc(r*r*sizeof(int)); //r is input size
int * PB = (int *) malloc(r*r*sizeof(int));
/* arrays are initialized*/
Then I create the window and afterwards try to use the get function:
if(i_rank == 0){
    MPI_Win_create(PB, r*r*sizeof(int), sizeof(int), MPI_INFO_NULL, i_comm, &win_PB);
}
else{
    MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, i_comm, &win_PB);
}

MPI_Win_fence(0, win_PB);

if(i_rank != 0){
    MPI_Get(PB, r*r*sizeof(int), MPI_INT, 0, 0, r*r*sizeof(int), MPI_INT, win_PB);
}

MPI_Win_fence(0, win_PB);
With this code I get a long list of errors:
[ana:24006] *** Process received signal ***
[ana:24006] Signal: Segmentation fault (11)
[ana:24006] Signal code: Address not mapped (1)
[ana:24006] Failing at address: 0xa8
Also, without MPI_Win_fence, the get call fails with MPI_ERR_RMA_SYNC: error executing rma sync, which I am not sure is normal.
What I observed is that if I declare the arrays in the reverse order, then it works fine:
int * PB = (int *) malloc(r*r*sizeof(int));
int * PA = (int *) malloc(r*r*sizeof(int));
The problem is that I will need to create another communicator and another window for the PA buffer, so just switching the order of those lines does not help in the end.
I would greatly appreciate any help in figuring out what I am doing wrong.
What I need to do is distribute a given number of strings to all the nodes in a network.
I'm using MPI in C.
I had a lot of trouble doing this using dynamic allocation and an array of pointers, and I came to the conclusion that I have to use MPI_Datatypes.
So my new idea is to use a struct like this one:
typedef struct {
    char data[10];
} Payload;
and building a new MPI_Datatype based on this struct so that, hopefully, I can Scatter an array of structs.
I wrote some code in an attempt to do so (assume comm_sz, the number of nodes, is 3):
MPI_Datatype MPI_Payload, tmp_type;
int array_of_blocklengths[] = {10};
MPI_Aint array_of_displacements[] = {0};
MPI_Datatype array_of_types[] = {MPI_CHAR};
MPI_Aint lb, extent;
MPI_Type_create_struct(1, array_of_blocklengths, array_of_displacements, array_of_types, &tmp_type);
MPI_Type_get_extent(tmp_type, &lb, &extent);
MPI_Type_create_resized(tmp_type, lb, extent, &MPI_Payload);
MPI_Type_commit(&MPI_Payload);
int n = 3; //total number of packets
int local_n = n/comm_sz; //number of packets for single node
Payload *local_buff = malloc(local_n*sizeof(Payload));
Payload *a = NULL;
if (my_rank == 0) {
    a = malloc(n*sizeof(Payload));
    char msg[10];
    for (int i = 0; i < n; i++) {
        printf("Enter a max 9 character length string\n");
        scanf("%s", msg);
        strcpy(a[i].data, msg);
    }
    //MPI_Send(&a[1], 1, MPI_Payload, 1, 0, MPI_COMM_WORLD);
    //MPI_Send(&a[2], 1, MPI_Payload, 2, 0, MPI_COMM_WORLD);
    MPI_Scatter(a, n, MPI_Payload, local_buff, local_n, MPI_Payload, 0, MPI_COMM_WORLD);
    free(a);
}
else {
    //MPI_Recv(&local_buff[0], 1, MPI_Payload, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Scatter(a, n, MPI_Payload, local_buff, local_n, MPI_Payload, 0, MPI_COMM_WORLD);
    printf("Hi from process %d, here's the string:\n", my_rank);
    printf("%s\n", local_buff[0].data);
}
I've read here: Trouble Understanding MPI_Type_create_struct how to resize my struct in order to send an array of them, but unfortunately this code yields the following:
[node1:2267] *** An error occurred in MPI_Scatter
[node1:2267] *** reported by process [1136394241,0]
[node1:2267] *** on communicator MPI_COMM_WORLD
[node1:2267] *** MPI_ERR_TRUNCATE: message truncated
[node1:2267] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[node1:2267] ***    and potentially your MPI job)
You can see in the code that I commented out some lines: I tried to send the structures with Sends and Recvs, and in that case it works. I can't figure out why it does not work with Scatter. Could you please help me?
Thank you.
I am attempting to write a parallel program that merge sorts two arrays sent between separate processes. In this simplified version, where I am trying to get the communication to work, I simply want to send one array (four unsigned integers long) from process 0 to process 1, then print both the local and received arrays in process 1. Here is the code for this (load_and_distribute simply fills the arrays, and I have checked that both processes do indeed hold four unsigned integers).
int
main(int argc, char ** argv)
{
    int ret;
    unsigned int ln, tn;
    unsigned int * lvals;
    int rank, size;

    ret = MPI_Init(&argc, &argv);
    assert(MPI_SUCCESS == ret);

    /* get information about MPI environment */
    ret = MPI_Comm_size(MPI_COMM_WORLD, &size);
    assert(MPI_SUCCESS == ret);
    ret = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    assert(MPI_SUCCESS == ret);

    load_and_distribute(argv[1], &ln, &lvals);

    unsigned int rn;
    unsigned int * rvals;
    rvals = malloc(4*sizeof(*rvals));

    if(rank == 0){
        MPI_Send(&lvals, 4, MPI_UNSIGNED, 1, 0, MPI_COMM_WORLD);
    }
    else if (rank == 1){
        rvals[0] = 4;
        MPI_Recv(&rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("lvals = %d %d %d %d\n",lvals[0],lvals[1],lvals[2],lvals[3]);
        printf("rvals = %d %d %d %d\n",rvals[0],rvals[1],rvals[2],rvals[3]);
    }

    ret = MPI_Finalize();
    assert(MPI_SUCCESS == ret);

    return EXIT_SUCCESS;
}
The send and receive seem to go through without complaint, but when it attempts to print the rvals values, I arrive at this output, and I am unsure why.
[hpc5:04355] *** Process received signal ***
[hpc5:04355] Signal: Segmentation fault (11)
[hpc5:04355] Signal code: Address not mapped (1)
[hpc5:04355] Failing at address: 0xe0c4ac
[hpc5:04355] [ 0] /lib64/libpthread.so.0(+0xf370)[0x7f2a8d23c370]
[hpc5:04355] [ 1] ./hms_mpi[0x40165d]
[hpc5:04355] [ 2] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f2a8ce8db35]
[hpc5:04355] [ 3] ./hms_mpi[0x400c29]
[hpc5:04355] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 4355 on node hpc5 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
The correct buffers for MPI_Send() and MPI_Recv() are lvals and rvals (i.e. do not take their addresses with the & operator).
Remove the & in your MPI_Send and MPI_Recv:
MPI_Send(lvals, 4, MPI_UNSIGNED, 1, 0, MPI_COMM_WORLD);
MPI_Recv(rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
It works like this:
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
buf: initial address of send buffer (choice)
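To make the distinction concrete, here is a short illustrative snippet (not part of the original program): lvals and rvals are already pointers to the data, so applying & hands MPI the address of the pointer variable itself instead of the address of the integers.
unsigned int *rvals = malloc(4 * sizeof(*rvals));

/* correct: rvals already evaluates to the address of the first element */
MPI_Recv(rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* wrong: &rvals is the address of the pointer variable, so the received
   integers are written over the pointer and the memory next to it */
/* MPI_Recv(&rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE); */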
I have the following code which works:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h> /* for rand() */
int main(int argc, char** argv) {
    int world_rank, world_size;
    MPI_Init(NULL, NULL);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int n = 10000;
    int ni, i;
    double t[n];
    int x[n];

    int buf[n];
    int buf_size = n*sizeof(int);
    MPI_Buffer_attach(buf, buf_size);

    if (world_rank == 0) {
        for (ni = 0; ni < n; ++ni) {
            int msg_size = ni;
            int msg[msg_size];
            for (i = 0; i < msg_size; ++i) {
                msg[i] = rand();
            }

            double time0 = MPI_Wtime();
            MPI_Bsend(&msg, msg_size, MPI_INT, 1, 0, MPI_COMM_WORLD);
            t[ni] = MPI_Wtime() - time0;
            x[ni] = msg_size;

            MPI_Barrier(MPI_COMM_WORLD);
            printf("P0 sent msg with size %d\n", msg_size);
        }
    }
    else if (world_rank == 1) {
        for (ni = 0; ni < n; ++ni) {
            int msg_size = ni;
            int msg[msg_size];
            MPI_Request request;

            MPI_Barrier(MPI_COMM_WORLD);
            MPI_Irecv(&msg, msg_size, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
            MPI_Wait(&request, MPI_STATUS_IGNORE);
            printf("P1 received msg with size %d\n", msg_size);
        }
    }

    MPI_Buffer_detach(&buf, &buf_size);
    MPI_Finalize();
}
As soon as I remove the print statements, the program crashes with MPI_ERR_BUFFER: invalid buffer pointer. If I remove only one of the print statements, the other print statements are still executed, so I believe it crashes at the end of the program. I don't see why it crashes, and the fact that it does not crash when the print statements are present is beyond my understanding...
Would anybody have a clue what is going on here?
You are simply not providing enough buffer space to MPI. In buffered mode, all ongoing messages are stored in the attached buffer space, which is used as a ring buffer. In your code, several messages may need to be buffered at the same time, regardless of the printf. Note that not even 2*n*sizeof(int) would be enough buffer space: the barriers do not guarantee that the buffer space is locally freed even though the corresponding receive has completed. Since the messages have sizes 0, 1, ..., n-1 ints, in the worst case all of them are buffered at once, so you would have to provide (n*(n-1)/2)*sizeof(int) bytes to be sure, or something in between and hope.
Bottom line: Don't use buffered mode.
Generally, use standard blocking send calls and write the application such that it doesn't deadlock. Tune the MPI implementation such that small messages are sent eagerly, regardless of the receiver, to avoid wait times on late receivers.
If you want to overlap communication and computation, use nonblocking messages - providing proper memory for each communication.
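To illustrate that last point, here is a minimal sketch of what the sender's inner loop from the question could look like with a nonblocking send in place of MPI_Bsend (msg_size and the message contents are taken from the question; where the overlapping computation goes is only indicative):
int msg[msg_size];
for (i = 0; i < msg_size; ++i) {
    msg[i] = rand();
}

MPI_Request request;
MPI_Isend(msg, msg_size, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);

/* ... computation that can overlap with the transfer; msg must not be
   modified or freed before the send completes ... */

MPI_Wait(&request, MPI_STATUS_IGNORE);  /* msg may be reused after this */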
I am using an example code from an MPI book [will give the name shortly].
What it does is the following:
a) It creates two communicators: world = MPI_COMM_WORLD, containing all the processes, and workers, which excludes the random-number server (the last-rank process).
b) The server generates random numbers and serves them to the workers on request.
c) The workers separately count the number of samples falling inside and outside a unit circle inscribed in a unit square.
d) Once a sufficient level of accuracy is reached, the inside and outside counts are Allreduced to compute the value of pi from their ratio.
The code compiles fine. However, when running with the following command (actually with any value of n):
>mpiexec -n 2 apple.exe 0.0001
I get the following errors:
Fatal error in MPI_Allreduce: Invalid communicator, error stack:
MPI_Allreduce(855): MPI_Allreduce(sbuf=000000000022EDCC, rbuf=000000000022EDDC,
count=1, MPI_INT, MPI_SUM, MPI_COMM_NULL) failed
MPI_Allreduce(780): Null communicator
pi = 0.00000000000000000000
job aborted:
rank: node: exit code[: error message]
0: PC: 1: process 0 exited without calling finalize
1: PC: 123
Edit: (Removed: when I remove either one of the two MPI_Allreduce() calls, it runs without any runtime errors, albeit with a wrong answer.)
Code:
#include <mpi.h>
#include <mpe.h>
#include <stdio.h>  /* printf, sscanf, getchar */
#include <stdlib.h> /* rand, srand */
#include <limits.h> /* INT_MAX */
#include <math.h>   /* fabs */
#include <time.h>   /* time */
#define CHUNKSIZE 1000
/* message tags */
#define REQUEST 1
#define REPLY 2
int main(int argc, char *argv[])
{
    int iter;
    int in, out, i, iters, max, ix, iy, ranks[1], done, temp;
    double x, y, Pi, error, epsilon;
    int numprocs, myid, server, totalin, totalout, workerid;
    int rands[CHUNKSIZE], request;
    MPI_Comm world, workers;
    MPI_Group world_group, worker_group;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    world = MPI_COMM_WORLD;
    MPI_Comm_size(world, &numprocs);
    MPI_Comm_rank(world, &myid);
    server = numprocs-1; /* last proc is server */

    if(myid==0) sscanf(argv[1], "%lf", &epsilon);
    MPI_Bcast(&epsilon, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Comm_group(world, &world_group);
    ranks[0] = server;
    MPI_Group_excl(world_group, 1, ranks, &worker_group);
    MPI_Comm_create(world, worker_group, &workers);
    MPI_Group_free(&worker_group);

    if(myid==server) /* I am the rand server */
    {
        srand(time(NULL));
        do
        {
            MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE, REQUEST, world, &status);
            if(request)
            {
                for(i=0; i<CHUNKSIZE;)
                {
                    rands[i] = rand();
                    if(rands[i]<=INT_MAX) ++i;
                }
                MPI_Send(rands, CHUNKSIZE, MPI_INT, status.MPI_SOURCE, REPLY, world);
            }
        }
        while(request>0);
    }
    else /* I am a worker process */
    {
        request = 1;
        done = in = out = 0;
        max = INT_MAX; /* max int, for normalization */
        MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
        MPI_Comm_rank(workers, &workerid);
        iter = 0;
        while(!done)
        {
            ++iter;
            request = 1;
            MPI_Recv(rands, CHUNKSIZE, MPI_INT, server, REPLY, world, &status);
            for(i=0; i<CHUNKSIZE;)
            {
                x = (((double) rands[i++])/max)*2-1;
                y = (((double) rands[i++])/max)*2-1;
                if(x*x+y*y<1.0) ++in;
                else ++out;
            }
            /* ** see error here ** */
            MPI_Allreduce(&in, &totalin, 1, MPI_INT, MPI_SUM, workers);
            MPI_Allreduce(&out, &totalout, 1, MPI_INT, MPI_SUM, workers);
            /* only one of the above two MPI_Allreduce() functions working */
            Pi = (4.0*totalin)/(totalin+totalout);
            error = fabs(Pi-3.141592653589793238462643);
            done = (error<epsilon||(totalin+totalout)>1000000);
            request = (done)?0:1;
            if(myid==0)
            {
                printf("\rpi = %23.20f", Pi);
                MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
            }
            else
            {
                if(request)
                    MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
            }
            MPI_Comm_free(&workers);
        }
    }

    if(myid==0)
    {
        printf("\npoints: %d\nin: %d, out: %d, <ret> to exit\n", totalin+totalout, totalin, totalout);
        getchar();
    }
    MPI_Finalize();
}
What is the error here? Am I missing something? Any help or pointer will be highly appreciated.
You are freeing the workers communicator before you are done using it. Move the MPI_Comm_free(&workers) call after the while(!done) { ... } loop.
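In outline, the worker branch then becomes (a sketch showing only the relevant lines):
while(!done)
{
    /* ... receive random numbers from the server, count in/out ... */
    MPI_Allreduce(&in, &totalin, 1, MPI_INT, MPI_SUM, workers);
    MPI_Allreduce(&out, &totalout, 1, MPI_INT, MPI_SUM, workers);
    /* ... compute Pi, send the next request to the server ... */
}
MPI_Comm_free(&workers); /* free the communicator only after its last use */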
I have encountered a problem using MPI_Gather to gather indexed integers into a vector of integers. When I try to gather the integers without creating a new receive type, I get an MPI_ERR_TRUNCATE error.
*** An error occurred in MPI_Gather
*** on communicator MPI_COMM_WORLD
*** MPI_ERR_TRUNCATE: message truncated
*** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
The minimal example replicating the issue is:
#include <stdlib.h>
#include "mpi.h"
int i, comm_rank, comm_size, err;
int *send_data, *recv_data;
int *blocklengths, *displacements;
MPI_Datatype send_type;
int main ( int argc, char *argv[] ){
    MPI_Init ( &argc, &argv );
    MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_size);

    unsigned int block = 1000;
    unsigned int count = 1000;

    send_data = malloc(sizeof(int)*block*count);
    for (i=0; i<block*count; ++i) send_data[i] = i;

    recv_data = 0;
    if(comm_rank==0) recv_data = malloc(sizeof(int)*block*count*comm_size);

    blocklengths = malloc(sizeof(int)*count);
    displacements = malloc(sizeof(int)*count);
    for (i=0; i<count; ++i) {
        blocklengths[i] = block;
        displacements[i] = i*block;
    }
    MPI_Type_indexed(count, blocklengths, displacements, MPI_INT, &send_type);
    MPI_Type_commit(&send_type);

    err = MPI_Gather((void*)send_data, 1, send_type, (void*)recv_data, block*count, MPI_INT, 0, MPI_COMM_WORLD);
    if (err) MPI_Abort(MPI_COMM_WORLD, err);

    free(send_data);
    free(recv_data);
    free(blocklengths);
    free(displacements);

    MPI_Finalize ( );
    return 0;
}
I noticed that this error does not occur when the data transfer size is less than 6 KB.
I found a workaround using MPI_Type_contiguous, although it seems to add extra overhead to my code:
MPI_Type_contiguous(block*count, MPI_INT, &recv_type);
MPI_Type_commit(&recv_type);
err = MPI_Gather((void*)send_data, 1, send_type, (void*)recv_data, 1, recv_type, 0, MPI_COMM_WORLD);
I have verified that the error occurs in Open MPI v1.6 and v1.8.
Could anyone explain the source of this issue?