MPI code error check - c

I have a piece of code that, unfortunately, I couldn't run, but I was trying to work out whether it contains a logical error or whether something is missing. Here is the code:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int numtasks, rank, dest, source, rc, count, tag = 1;
    char inmsg, outmsg = 'x';
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        dest = 1;
        source = 1;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    }
    else if (rank == 1) {
        dest = 0;
        source = 0;
        rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    }

    rc = MPI_Get_count(&Stat, MPI_CHAR, &count);
    printf("Task %d: Received %d char(s) from task %d with tag %d \n",
           rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG);

    MPI_Finalize();
    return 0;
}
Also, is it allowed to store the return value of MPI_Send and MPI_Recv in a variable, as has been done here with rc?

Your code is wrong. It contains a deadlock, which means that it can hang forever or misbehave otherwise. MPI_Send is a blocking operation - it may block until the respective MPI_Recv is called. So both processes will be stuck at their respective MPI_Send operation before MPI_Recv is called. Use MPI_Sendrecv instead.
Note that due to optimizations, MPI may instead choose to send the data immediately for small messages, so the code may complete even though it is wrong. Do not rely on that!
Normally, you don't have to check MPI return codes, as errors are fatal in MPI by default. In particular, don't assign the return code without checking it for MPI_SUCCESS.
Note that you can easily install MPI on any system, e.g. OpenMPI is available for most Linux distributions. There is no reason not to play around with MPI on a normal desktop system.
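For reference, here is a minimal sketch (using the variable names from the question and assuming exactly two ranks) of how the exchange could be written with MPI_Sendrecv, so that neither rank depends on MPI buffering the message:

    /* Sketch: combined send/receive avoids the Send/Send ordering problem. */
    int peer = (rank == 0) ? 1 : 0;
    MPI_Sendrecv(&outmsg, 1, MPI_CHAR, peer, tag,   /* what we send */
                 &inmsg,  1, MPI_CHAR, peer, tag,   /* what we receive */
                 MPI_COMM_WORLD, &Stat);

MPI_Sendrecv performs the send and the receive as one operation, so the library can always make progress even when both ranks reach the call at the same time.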

Related

MPI_Isend/MPI_Irecv issue

I am writing an MPI program that has the first instance working as a master, sending and receiving results from its workers.
The receive function does something like this:
struct result *check_for_message(void) {
    ...
    static unsigned int message_size;
    static char *buffer;
    static bool started_reception = false;
    static MPI_Request req;

    if (!started_reception) {
        MPI_Irecv(&message_size, 1, MPI_INT, MPI_ANY_SOURCE, SIZE_TAG,
                  MPI_COMM_WORLD, &req);
        started_reception = true;
    } else {
        int flag = 0;
        MPI_Status status;
        MPI_Test(&req, &flag, &status);
        if (flag == 1) {
            started_reception = false;
            buffer = calloc(message_size + 1, sizeof(char));
            DIE_IF_NULL(buffer); // printf + MPI_Finalize + exit
            MPI_Request content_req;
            MPI_Irecv(buffer, MAX_MSG_SIZE, MPI_CHAR, status.MPI_SOURCE, CONTENT_TAG,
                      MPI_COMM_WORLD, &content_req);
            MPI_Wait(&content_req, MPI_STATUS_IGNORE);
            ret = process_request(buffer);
            free(buffer);
        }
    }
    ...
}
The send function does something like this:
MPI_Request size_req;
MPI_Request content_req;
MPI_Isend(&size, 1, MPI_INT, dest, SIZE_TAG, MPI_COMM_WORLD, &size_req);
MPI_Wait(&size_req, MPI_STATUS_IGNORE);
MPI_Isend(buf, size, MPI_CHAR, dest, CONTENT_TAG, MPI_COMM_WORLD,
&content_req);
MPI_Wait(&content_req, MPI_STATUS_IGNORE);
I noticed that if I remove the MPI_Wait in the sending function, the execution often blocks, or some signal stops the execution of an instance (judging by the output, I think it was a SIGSEGV coming from a free error).
When I add the MPI_Wait it always seems to run perfectly. Could it be something related to the order in which the two sends complete? Aren't they supposed to complete in order?
I run the program locally with -n 16 but have also tested with -n 128. The messages that I send are above 50 chars (90% of the time), some being even > 300 chars.
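For context, MPI only guarantees that a buffer handed to MPI_Isend may be reused or freed once the request has completed (via MPI_Wait or a successful MPI_Test), and point-to-point messages from one rank to another on the same communicator are non-overtaking. A minimal sketch of the usual pattern, reusing the names from the snippet above:

    /* Sketch: keep both buffers alive and untouched until their requests complete. */
    MPI_Request reqs[2];
    MPI_Isend(&size, 1, MPI_INT, dest, SIZE_TAG, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(buf, size, MPI_CHAR, dest, CONTENT_TAG, MPI_COMM_WORLD, &reqs[1]);
    /* size and buf must not be modified or freed before this returns */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);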

MPI status gets wrong sender

I am solving a load-balancing problem using MPI: a master process sends tasks to the slave processes and collects results as they finish their computation and send their work back.
Since I want to improve performance as much as possible, I use non-blocking communications: the master sends several tasks and then waits until one process sends back its response, so that the master can send additional work to it, and so on.
I use MPI_Waitany() since I don't know in advance which slave process responds first; then I get the sender from the status so I can send the new job to it.
My problem is that sometimes the sender I get is wrong (a rank not in MPI_COMM_WORLD) and the program crashes; other times it works fine.
Here's the code. Thanks!
//master
if (rank == 0) {
    int N_chunks = 10;
    MPI_Request request[N_chunks];
    MPI_Status status[N_chunks];
    int N_computed = 0;
    int dest, index_completed;

    //initialize array of my data structure
    vec send[N_chunks];
    vec recv[N_chunks];

    //send one job to each process in communicator
    for (int i = 1; i < size; i++) {
        MPI_Send(&send[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD,
                  &request[N_computed]);
        N_computed++;
    }

    // loop
    while (N_computed < N_chunks) {
        //get processed messages
        MPI_Waitany(N_computed, request, &index_completed, status);
        //get sender ID dest
        dest = status[index_completed].MPI_SOURCE;
        //send a new job to that process
        MPI_Send(&send[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD,
                  &request[N_computed]);
        N_computed++;
    }
    MPI_Waitall(N_computed, request, status);

    //close all processes
    printf("End master\n");
}
You are not using MPI_Waitany() correctly.
It should be
MPI_Status status;
MPI_Waitany(N_computed,request,&index_completed,&status);
dest = status.MPI_SOURCE;
Notes:
you need an extra loop to MPI_Wait() the last size - 1 requests
you can revamp your algorithm to use MPI_Request request[size-1] and hence save some memory (see the sketch below)
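One way to read the second note, as a sketch only (it assumes, like the original code, that N_chunks >= size - 1): keep exactly one outstanding receive per worker, indexed by rank, so the request array never needs more than size - 1 entries.

    /* Sketch (not the poster's code): one pending receive per worker.
       request[i-1] belongs to worker rank i. */
    MPI_Request request[size - 1];
    int N_computed = 0;

    for (int i = 1; i < size; i++) {
        MPI_Send(&send[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD, &request[i - 1]);
        N_computed++;
    }
    while (N_computed < N_chunks) {
        MPI_Status status;
        int idx;
        MPI_Waitany(size - 1, request, &idx, &status);   /* idx corresponds to worker rank idx+1 */
        MPI_Send(&send[N_computed], 1, mpi_vec_type, idx + 1, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, idx + 1, tag, MPI_COMM_WORLD, &request[idx]);
        N_computed++;
    }
    MPI_Waitall(size - 1, request, MPI_STATUSES_IGNORE); /* drain the last size-1 receives */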
I forgot to add a line in the initial post in which I wait for all the pending requests. By the way, if I initialize a new status every time I do Waitany, the master process crashes; I need to track which processes are still pending in order to wait the proper number of times.
Thanks, by the way.
EDIT: now it works, even if I find it not very smart; would it be possible to initialize an array of MPI_Status at the beginning instead of declaring one each time before a wait?
//master
if (rank == 0) {
    int N_chunks = 10;
    MPI_Request request[size-1];
    int N_computed = 0;
    int dest;
    int index_completed;

    //initialize array of vec
    vec send[N_chunks];
    vec recv[N_chunks];

    //initial case
    for (int i = 1; i < size; i++) {
        MPI_Send(&send[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD, &request[N_computed]);
        N_computed++;
    }

    // loop
    while (N_computed < N_chunks) {
        MPI_Status status;
        //get processed messages
        MPI_Waitany(N_computed, request, &index_completed, &status);
        //get sender ID dest
        dest = status.MPI_SOURCE;
        MPI_Send(&send[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD);
        MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD, &request[N_computed]);
        N_computed++;
    }

    //wait for the other processes to send back their load
    for (int i = 0; i < size-1; i++) {
        MPI_Status status;
        MPI_Waitany(N_computed, request, &index_completed, &status);
    }

    //end
    printf("Ms finish\n");
}

MPI Debugging, Segmentation fault?

EDIT: My question is similar to C, Open MPI: segmentation fault from call to MPI_Finalize(). The segfault does not always happen, especially with low numbers of processes, so if you answer that one instead, that would be great; either way...
I was hoping to get some help debugging the following code:
int main(){
    long* my_local;
    long n, s, f;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 0) {
        /* Get size n from user */
        printf("Total processes: %d\n", comm_sz);
        printf("Number of keys to be sorted? ");
        fflush(stdout);
        scanf("%ld", &n);
        /* Broadcast size n to other processes */
        MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);
        /* Create n/comm_sz keys
           NOTE! some processes will have 1 extra key if n%comm_sz != 0 */
        create_Keys(&my_local, my_rank, comm_sz, n, &s, &f);
    }
    if (my_rank != 0) {
        /* Receive n from process 0 */
        MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);
        /* Create n/comm_sz keys */
        create_Keys(&my_local, my_rank, comm_sz, n, &s, &f);
    }

    /* The offending function, f is a long set to num elements of my_local */
    Odd_Even_Tsort(&my_local, my_rank, f, comm_sz);

    printf("Process %d completed the function", my_rank);
    MPI_Finalize();
    return 0;
}
void Odd_Even_Tsort(long** my_local, int my_rank, long my_size, int comm_sz)
{
    long nochange = 1;
    long phase = 0;
    long complete = 1;
    MPI_Status Stat;
    long your_size = 1;
    long* recv_buf = malloc(sizeof(long)*(my_size+1));

    printf("rank %d has size %ld\n", my_rank, my_size);

    while (complete != 0) {
        if ((phase % 2) == 0) {
            if (((my_rank % 2) == 0) && my_rank < comm_sz-1) {
                /* Send right */
                MPI_Send(&my_size, 1, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD);
                MPI_Send(*my_local, my_size, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD);
                MPI_Recv(&your_size, 1, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD, &Stat);
                MPI_Recv(&recv_buf, your_size, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD, &Stat);
            }
            if (((my_rank % 2) == 1) && my_rank < comm_sz) {
                /* Send left */
                MPI_Recv(&your_size, 1, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
                MPI_Recv(&recv_buf, your_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
                MPI_Send(&my_size, 1, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD);
                MPI_Send(*my_local, my_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD);
            }
        }
        phase++;
        complete = 0;
    }

    printf("Done!\n");
    fflush(stdout);
}
And the Error I'm getting is:
[ubuntu:04968] *** Process received signal ***
[ubuntu:04968] Signal: Segmentation fault (11)
[ubuntu:04968] Signal code: Address not mapped (1)
[ubuntu:04968] Failing at address: 0xb
--------------------------------------------------------------------------
mpiexec noticed that process rank 1 with PID 4968 on node ubuntu exited on signal 11 (Segmentation fault).
The reason I'm baffled is that the print statements after the function are still displayed, but if I comment out the function, no errors. So, where the heap am I getting a Segmentation fault?? I'm getting the error with mpiexec -n 2 ./a.out and an 'n' size bigger than 9.
If you actually wanted the entire runnable code, let me know. Really I was hoping not so much for the precise answer but more how to use the gdb/valgrind tools to debug this problem and others like it (and how to read their output).
(And yes, I realize the 'sort' function isn't sorting yet).
The problem here is simple, yet difficult to see unless you use a debugger or print out exhaustive debugging information:
Look at the code where MPI_Recv is called. recv_buf is already a pointer to the allocated buffer, so it should be supplied as the argument itself rather than &recv_buf; passing &recv_buf makes MPI write the incoming longs over the pointer variable (and whatever sits next to it on the stack) instead of into the buffer.
MPI_Recv(recv_buf, your_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
The rest seems OK.

Unclear behaviour in simple MPI send/receive program

I've been having a bug in my code for some time and have not yet been able to figure out how to solve it.
What I'm trying to achieve is easy enough: every worker node (i.e. every node with rank != 0) gets a row (represented by a 1-dimensional array) in a square structure that involves some computation. Once the computation is done, this row gets sent back to the master.
For testing purposes, there is no computation involved. All that's happening is:
the master sends a row number to a worker, and the worker uses the row number to calculate the corresponding values
the worker sends the array with the result values back
Now, my issue is this:
everything works as expected up to a certain number of elements in a row (size = 1006) and a number of workers > 1
if the number of elements in a row exceeds 1006, the workers fail to shut down and the program does not terminate
this only occurs if I try to send the array back to the master. If I simply send back an INT, then everything is OK (see the commented-out lines in doMasterTasks() and doWorkerTasks())
Based on the last bullet point, I assume that there must be some race condition which only surfaces when the array to be sent back to the master reaches a certain size.
Do you have any idea what the issue could be?
Compile the following code with: mpicc -O2 -std=c99 -o simple
Run the executable like so: mpirun -np 3 simple <size> (e.g. 1006 or 1007)
Here's the code:
#include "mpi.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#define MASTER_RANK 0
#define TAG_RESULT 1
#define TAG_ROW 2
#define TAG_FINISHOFF 3
int mpi_call_result, my_rank, dimension, np;
// forward declarations
void doInitWork(int argc, char **argv);
void doMasterTasks(int argc, char **argv);
void doWorkerTasks(void);
void finalize();
void quit(const char *msg, int mpi_call_result);
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
void doMasterTasks(int argc, char **argv) {
printf("Starting to distribute work...\n");
int size = dimension;
int * dataBuffer = (int *) malloc(sizeof(int) * size);
int currentRow = 0;
int receivedRow = -1;
int rowsLeft = dimension;
MPI_Status status;
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
// MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
MPI_Recv(&receivedRow, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
shutdownWorkers();
free(dataBuffer);
}
void doWorkerTasks() {
printf("Worker %d started\n", my_rank);
// send the processed row back as the first element in the colours array.
int size = dimension;
int * data = (int *) malloc(sizeof(int) * size);
memset(data, 0, sizeof(size));
int processingRow = -1;
MPI_Status status;
for (;;) {
MPI_Recv(&processingRow, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (status.MPI_TAG == TAG_FINISHOFF) {
printf("Finish-OFF tag received!\n");
break;
} else {
// MPI_Send(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
MPI_Send(&processingRow, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
}
}
printf("Slave %d finished work\n", my_rank);
free(data);
}
int main(int argc, char **argv) {
if (argc == 2) {
sscanf(argv[1], "%d", &dimension);
} else {
dimension = 1000;
}
doInitWork(argc, argv);
if (my_rank == MASTER_RANK) {
doMasterTasks(argc, argv);
} else {
doWorkerTasks();
}
finalize();
}
void quit(const char *msg, int mpi_call_result) {
printf("\n%s\n", msg);
MPI_Abort(MPI_COMM_WORLD, mpi_call_result);
exit(mpi_call_result);
}
void finalize() {
mpi_call_result = MPI_Finalize();
if (mpi_call_result != 0) {
quit("Finalizing the MPI system failed, aborting now...", mpi_call_result);
}
}
void doInitWork(int argc, char **argv) {
mpi_call_result = MPI_Init(&argc, &argv);
if (mpi_call_result != 0) {
quit("Error while initializing the system. Aborting now...\n", mpi_call_result);
}
MPI_Comm_size(MPI_COMM_WORLD, &np);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
}
Any help is greatly appreciated!
Best,
Chris
If you take a look at your doWorkerTasks, you see that the workers send exactly as many data messages as they receive (and they receive one more, to shut them down).
But your master code:
for (int i = 1; i < np; i++) {
    MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
    rowsLeft--;
    currentRow++;
}

for (;;) {
    MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
    if (rowsLeft == 0)
        break;
    MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
    rowsLeft--;
    currentRow++;
}
sends np-2 more data messages than it receives. In particular, it only keeps receiving data until it has no more to send, even though there should be np-2 more data messages outstanding. Changing the code to the following:
int rowsLeftToSend = dimension;
int rowsLeftToReceive = dimension;

for (int i = 1; i < np; i++) {
    MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
    rowsLeftToSend--;
    currentRow++;
}

while (rowsLeftToReceive > 0) {
    MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
    rowsLeftToReceive--;

    if (rowsLeftToSend > 0) {
        if (currentRow > 1004)
            printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
        MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
        rowsLeftToSend--;
        currentRow++;
    }
}
Now works.
Why the code doesn't deadlock (note this is deadlock, not a race condition; this is a more common parallel error in distributed computing) for smaller message sizes is a subtle detail of how most MPI implementations work. Generally, MPI implementations just "shove" small messages down the pipe whether or not the receiver is ready for them, but larger messages (since they take more storage resources on the receiving end) need some handshaking between the sender and the receiver. (If you want to find out more, search for eager vs rendezvous protocols).
So for the small message case (less than 1006 ints in this case, and 1 int definitely works, too) the worker nodes did their send whether or not the master was receiving them. If the master had called MPI_Recv(), the messages would have been there already and it would have returned immediately. But it didn't, so there were pending messages on the master side; but it didn't matter. The master sent out its kill messages, and everyone exited.
But for larger messages, the remaining send()s need the receiver participating before they can complete, and since the receiver never does, the remaining workers hang.
Note that even for the small message case where there was no deadlock, the code didn't work properly - there was missing computed data.
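As an aside, a common way to expose this class of bug without depending on the eager limit is to temporarily switch the workers' data sends to MPI_Ssend, a synchronous send that only completes once the matching receive has started; with it, the broken version hangs even for tiny messages. A sketch only, based on the commented-out send in doWorkerTasks:

    /* Debugging sketch: MPI_Ssend cannot complete until the master has
       posted the matching receive, so buffering can no longer hide a
       missing MPI_Recv. Switch back to MPI_Send once the logic is fixed. */
    MPI_Ssend(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);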
Update: There was a similar problem in your shutdownWorkers:
void shutdownWorkers() {
    printf("All work has been done, shutting down clients now.\n");
    for (int i = 0; i < np; i++) {
        MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
    }
}
Here you are sending to all processes, including rank 0, the one doing the sending. In principle, that MPI_Send should deadlock, as it is a blocking send and there isn't a matching receive already posted. You could post a non-blocking receive beforehand to avoid this, but that's unnecessary -- rank 0 doesn't need to let itself know to end. So just change the loop to
for (int i = 1; i < np; i++)
tl;dr - your code deadlocked because the master wasn't receiving enough messages from the workers; it happened to work for small message sizes because of an implementation detail common to most MPI libraries.

A problem about using MPI_Send

I'm learning MPI_Send, but I'm confused about this routine. I wrote a simple ping-pong program in which the rank-0 node sends a message to the rank-1 node, and then the latter returns a message to the former.
if (rank == 0) {         /* Send Ping, Receive Pong */
    dest = 2;
    source = 2;
    rc = MPI_Send(pingmsg, strlen(pingmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    rc = MPI_Recv(buff, strlen(pongmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    printf("Rank0 Sent: %s & Received: %s\n", pingmsg, buff);
}
else if (rank == 2) {    /* Receive Ping, Send Pong */
    dest = 0;
    source = 0;
    rc = MPI_Recv(buff, strlen(pingmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    printf("Rank1 received: %s & Sending: %s\n", buff, pongmsg);
    rc = MPI_Send(pongmsg, strlen(pongmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}
I run this program in a 3-node environment. However, the system displays:
Fatal error in MPI_Send: Other MPI error, error stack:
MPI_Send(173)..............: MPI_Send(buf=0xbffffb90, count=10, MPI_CHAR, dest=2, tag=1, MPI_COMM_WORLD) failed
MPID_nem_tcp_connpoll(1811): Communication error with rank 2: Unknown error 4294967295
I'm wondering why I can send a message from the rank-0 node to the rank-1 node, but an error occurs when I change it to send from the rank-0 node to the rank-2 node? Thanks.
Have you actually checked whether strlen(pingmsg) is the same in both the MPI_Send and the MPI_Recv?
The amount of data sent with MPI_Send should be less than or equal to the amount of data the matching MPI_Recv is prepared to receive, or else it will lead to an error.
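If the two sides cannot easily agree on the exact length in advance, a common pattern is to receive into a buffer at least as large as the longest expected message and then ask the status how much actually arrived. A minimal sketch, with a hypothetical buffer size of 128:

    /* Sketch: the receive count is an upper bound, not an exact length. */
    char buff[128];              /* hypothetical size; must hold the longest ping/pong message */
    MPI_Status Stat;
    int received;
    MPI_Recv(buff, (int) sizeof(buff), MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    MPI_Get_count(&Stat, MPI_CHAR, &received);   /* actual number of chars delivered */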

Resources