Receiving an array allocated with malloc in MPI - c

EDIT: There is no problem with this code in particular. I created a reduced version of my code and this part works perfectly. I still don't understand why it is not working in my full program, since I have everything commented out except this, but that is probably too specific to my case. Sorry, wrong question.
(I have added the exact error I get at the bottom.)
I'm trying to parallelize a C program.
I'm encountering errors when I try to pass an array allocated with malloc from the master process to the other processes, or rather, when I try to receive it.
This is the piece of code I'm having trouble with:
if (rank == 0)
{
int *data=(int *) malloc(size*sizeof(int));
int error_code = MPI_Send(data, size, MPI_INT, 1, 1, MPI_COMM_WORLD);
if (error_code != MPI_SUCCESS) {
char error_string[BUFSIZ];
int length_of_error_string;
MPI_Error_string(error_code, error_string, &length_of_error_string);
printf("%3d: %s\n", rank, error_string);
}
printf("Data sent.");
}
else if (rank == 1)
{
int *data=(int *) malloc(size*sizeof(int));
int error_code = MPI_Recv(data, size, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
if (error_code != MPI_SUCCESS) {
char error_string[BUFSIZ];
int length_of_error_string;
MPI_Error_string(error_code, error_string, &length_of_error_string);
printf("%3d: %s\n", rank, error_string);
}
printf("Received.");
}
"Data sent." is printed, followed by a segmentation fault (with memory dump) caused by the second process and "Received" is never printed.
I think I'm not receiving well the data. But I tried several possibilities, I think I have to pass the address of the variable and not just the pointer to the first position, so I thought this was the correct way, but it is not working.
From the error codes nothing gets printed.
Does anyone know what's causing the error and what was my mistake?
Thanks!
EDIT:
This is the exact error:
*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
*** End of error message ***
EDIT 2:
This code works:
int main(int argc, char* argv[])
{
int size_x = 12;
int size_y = 12;
int rank, size, length;
char nodename[BUFSIZ];
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&size);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Get_processor_name(nodename, &length);
MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
if (rank == 0)
{
int *data=malloc(size*sizeof(int));
int error_code = MPI_Send(data, size, MPI_INT, 1, 1, MPI_COMM_WORLD);
if (error_code != MPI_SUCCESS)
{
char error_string[BUFSIZ];
int length_of_error_string;
MPI_Error_string(error_code, error_string, &length_of_error_string);
printf("%3d: %s\n", rank, error_string);
}
printf("Data sent.");
}
else if (rank > 0)
{
int *data=malloc(size*sizeof(int));
int error_code = MPI_Recv(data, size, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
if (error_code != MPI_SUCCESS)
{
char error_string[BUFSIZ];
int length_of_error_string;
MPI_Error_string(error_code, error_string, &length_of_error_string);
printf("%3d: %s\n", rank, error_string);
}
printf("Received.");
}
MPI_Finalize();
return 0;
}

I found the problem: it was not the MPI calls. There was a bug in a previous function (I forgot an argument in a "printf") which I hadn't noticed, and that broke the whole program. Tricky MPI...
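For anyone curious how a stray printf can take down an MPI rank: a format specifier without a matching argument is undefined behavior, and printf will read whatever garbage happens to sit in that argument slot. A minimal illustration, not taken from my actual code:

#include <stdio.h>

int main(void)
{
    int rank = 0;
    /* Undefined behavior: "%s" has no matching argument, so printf
       dereferences whatever garbage is in that slot and can easily
       segfault -- which then looks like an MPI problem. */
    printf("process %d running on %s\n", rank);
    return 0;
}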

Related

Receiving an array with MPI

I am attempting to make a parallel program that merge sorts two arrays that are being sent to each other from separate processes. In this simplified version, where I am attempting to get the communication to work, I wish to simply send one array (length of four unsigned integers) from process 0 to process 1, then print both the local and received arrays in process 1. Here is the code for this. (Load_and_distribute simply fills the arrays, and I have checked to ensure that both processes do indeed have four unsigned integers within).
int
main(int argc, char ** argv)
{
int ret;
unsigned int ln, tn;
unsigned int * lvals;
int rank, size;
ret = MPI_Init(&argc, &argv);
assert(MPI_SUCCESS == ret);
/* get information about MPI environment */
ret = MPI_Comm_size(MPI_COMM_WORLD, &size);
assert(MPI_SUCCESS == ret);
ret = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
assert(MPI_SUCCESS == ret);
load_and_distribute(argv[1], &ln, &lvals);
unsigned int rn;
unsigned int * rvals;
rvals = malloc(4*sizeof(*rvals));
if(rank == 0){
MPI_Send(&lvals, 4, MPI_UNSIGNED, 1, 0, MPI_COMM_WORLD);
}
else if (rank == 1){
rvals[0] = 4;
MPI_Recv(&rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("lvals = %d %d %d %d\n",lvals[0],lvals[1],lvals[2],lvals[3]);
printf("rvals = %d %d %d %d\n",rvals[0],rvals[1],rvals[2],rvals[3]);
}
ret = MPI_Finalize();
assert(MPI_SUCCESS == ret);
return EXIT_SUCCESS;
}
The send and receive seem to go through without a hitch, but when the program attempts to print the rvals values, I get the following output, and I am unsure why.
[hpc5:04355] *** Process received signal ***
[hpc5:04355] Signal: Segmentation fault (11)
[hpc5:04355] Signal code: Address not mapped (1)
[hpc5:04355] Failing at address: 0xe0c4ac
[hpc5:04355] [ 0] /lib64/libpthread.so.0(+0xf370)[0x7f2a8d23c370]
[hpc5:04355] [ 1] ./hms_mpi[0x40165d]
[hpc5:04355] [ 2] /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f2a8ce8db35]
[hpc5:04355] [ 3] ./hms_mpi[0x400c29]
[hpc5:04355] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 4355 on node hpc5 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
The correct buffers for MPI_Send() and MPI_Recv() are lvals and rvals (i.e. do not take their address with the & operator).
Remove & in your MPI_Send and MPI_Recv:
MPI_Send(lvals, 4, MPI_UNSIGNED, 1, 0, MPI_COMM_WORLD);
MPI_Recv(rvals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
For reference, the relevant signature is:
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
buf: initial address of send buffer (choice)
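For completeness, here is a minimal self-contained sketch of the corrected exchange (two processes assumed; load_and_distribute is left out and the buffer is simply filled in place, so treat it as an illustration rather than your exact program):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* vals is already the address of the first element, which is what
       MPI wants; &vals would be the address of the pointer variable
       itself (type unsigned int **), so receiving into it overwrites
       the pointer and the stack around it. */
    unsigned int *vals = malloc(4 * sizeof *vals);

    if (rank == 0) {
        for (int i = 0; i < 4; ++i) vals[i] = i + 1;
        MPI_Send(vals, 4, MPI_UNSIGNED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(vals, 4, MPI_UNSIGNED, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rvals = %u %u %u %u\n", vals[0], vals[1], vals[2], vals[3]);
    }

    free(vals);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and run with mpirun -np 2, rank 1 should print 1 2 3 4.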

MPI_ERR_BUFFER from MPI_Bsend after removing following print statement?

I have the following code which works:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int world_rank, world_size;
MPI_Init(NULL, NULL);
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int n = 10000;
int ni, i;
double t[n];
int x[n];
int buf[n];
int buf_size = n*sizeof(int);
MPI_Buffer_attach(buf, buf_size);
if (world_rank == 0) {
for (ni = 0; ni < n; ++ni) {
int msg_size = ni;
int msg[msg_size];
for (i = 0; i < msg_size; ++i) {
msg[i] = rand();
}
double time0 = MPI_Wtime();
MPI_Bsend(&msg, msg_size, MPI_INT, 1, 0, MPI_COMM_WORLD);
t[ni] = MPI_Wtime() - time0;
x[ni] = msg_size;
MPI_Barrier(MPI_COMM_WORLD);
printf("P0 sent msg with size %d\n", msg_size);
}
}
else if (world_rank == 1) {
for (ni = 0; ni < n; ++ni) {
int msg_size = ni;
int msg[msg_size];
MPI_Request request;
MPI_Barrier(MPI_COMM_WORLD);
MPI_Irecv(&msg, msg_size, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
printf("P1 received msg with size %d\n", msg_size);
}
}
MPI_Buffer_detach(&buf, &buf_size);
MPI_Finalize();
}
As soon as I remove the print statements, the program crashes, telling me there is an MPI_ERR_BUFFER: invalid buffer pointer. If I remove only one of the print statements, the other print statements are still executed, so I believe it crashes at the end of the program. I don't see why it crashes, and the fact that it does not crash when the print statements are present is beyond me...
Would anybody have a clue what is going on here?
You are simply not providing enough buffer space to MPI. In buffered mode, all ongoing messages are stored in the buffer space which is used as a ring buffer. In your code, there can be multiple messages that need to be buffered, regardless of the printf. Note that not even 2*n*sizeof(int) would be enough buffer space - the barriers do not provide a guarantee that the buffer is locally freed even though the corresponding receive is completed. You would have to provide (n*(n-1)/2)*sizeof(int) memory to be sure, or something in-between and hope.
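If you did insist on staying in buffered mode, the attached buffer would have to be sized for that worst case, message by message, including MPI_BSEND_OVERHEAD for each one. A rough sketch of the accounting (with a smaller n than the 10000 in your code, purely to keep the example small):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Worst case: all n messages of 0..n-1 ints may still be parked in
       the attached buffer at once, so budget each of them plus the
       per-message MPI_BSEND_OVERHEAD. */
    int n = 1000, total = 0;
    for (int ni = 0; ni < n; ++ni) {
        int bytes;
        MPI_Pack_size(ni, MPI_INT, MPI_COMM_WORLD, &bytes);
        total += bytes + MPI_BSEND_OVERHEAD;
    }

    void *buf = malloc(total);
    MPI_Buffer_attach(buf, total);
    printf("attached %d bytes for buffered sends\n", total);

    /* ... the MPI_Bsend loop would go here ... */

    void *detached;
    int detached_size;
    MPI_Buffer_detach(&detached, &detached_size);
    free(detached);
    MPI_Finalize();
    return 0;
}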
Bottom line: Don't use buffered mode.
Generally, use standard blocking send calls and write the application such that it doesn't deadlock. Tune the MPI implementation so that small messages are sent eagerly, regardless of the receiver, to avoid wait times on late receivers.
If you want to overlap communication and computation, use nonblocking messages - providing proper memory for each communication.
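A minimal sketch of that nonblocking pattern (two ranks assumed, a single message and no timing; each transfer keeps its own buffer alive until MPI_Wait completes):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, n = 1000;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *msg = malloc(n * sizeof *msg);
    MPI_Request request;

    if (rank == 0) {
        for (int i = 0; i < n; ++i) msg[i] = rand();
        MPI_Isend(msg, n, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... computation can overlap with the transfer here ... */
        MPI_Wait(&request, MPI_STATUS_IGNORE);   /* msg reusable only after this */
    } else if (rank == 1) {
        MPI_Irecv(msg, n, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, MPI_STATUS_IGNORE);
    }

    free(msg);
    MPI_Finalize();
    return 0;
}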

Using thread creates with pthread_create to call the function MPI_Finalize in a MPI application write in C

First, please note that I am French and my English is not very good.
I am working on an MPI application, I have run into some problems, and I hope that somebody can help me.
As the title says, I am trying to use a thread that listens for when I have to kill my application and then calls the MPI_Finalize function.
However, my application does not finish correctly.
More precisely, I obtain the following message:
[XPS-2720:27441] *** Process received signal ***
[XPS-2720:27441] Signal: Segmentation fault (11)
[XPS-2720:27441] Signal code: Address not mapped (1)
[XPS-2720:27441] Failing at address: 0x7f14077a3b6d
[XPS-2720:27440] *** Process received signal ***
[XPS-2720:27440] Signal: Segmentation fault (11)
[XPS-2720:27440] Signal code: Address not mapped (1)
[XPS-2720:27440] Failing at address: 0x7fb11d07bb6d
mpirun noticed that process rank 1 with PID 27440 on node lagniez-XPS-2720 exited on signal 11 (Segmentation fault).
My slave code is:
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <pthread.h>
#include <cassert>
#define send_data_tag 1664
#define send_kill_tag 666
void *finilizeMPICom(void *intercomm)
{
printf("the finilizeMPICom was called\n");
MPI_Comm parentcomm = * ((MPI_Comm *) intercomm);
MPI_Status status;
int res;
// sleep(10);
MPI_Recv(&res, 1, MPI_INT, 0, send_kill_tag, parentcomm, &status);
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("we receive something %d -- %d\n", rank, res);
MPI_Finalize();
exit(0);
}// finilizeMPICom
int main( int argc, char *argv[])
{
int numtasks, rank, len, rc;
char hostname[MPI_MAX_PROCESSOR_NAME];
int provided, claimed;
rc = MPI_Init_thread(0, 0, MPI_THREAD_MULTIPLE, &provided);
MPI_Query_thread( &claimed );
if (rc != MPI_SUCCESS || provided != 3)
{
printf ("Error starting MPI program. Terminating.\n");
MPI_Abort(MPI_COMM_WORLD, rc);
}
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Comm parentcomm;
MPI_Comm_get_parent(&parentcomm);
/* create a second thread to listen when we have to kill the program */
pthread_t properlyKill;
if(pthread_create(&properlyKill, NULL, finilizeMPICom, (void *) &parentcomm))
{
fprintf(stderr, "Error creating thread\n");
return 0;
}
assert(parentcomm != MPI_COMM_NULL);
MPI_Status status;
int root_process, ierr, num_rows_to_receive;
int mode;
MPI_Recv( &mode, 1, MPI_INT, 0, send_data_tag, parentcomm, &status);
printf("c The solver works in the mode %d\n", mode);
printf("I sent a message %d\n", rank);
// if(rank != 1) sleep(100);
int res = 1;
MPI_Send(&res, 1, MPI_INT, 0, send_data_tag, parentcomm);
printf("we want to listen for somethiing %d\n", rank);
int rescc = 1;
MPI_Recv(&rescc, 1, MPI_INT, 0, send_data_tag, parentcomm, &status);
printf("I received the message %d %d\n", rescc, rank);
if(rescc == 1000)
{
printf("~~~~~~~~>>> I print the solution %d\n", rank);
int res3 = 1001;
MPI_Send(&res3, 1, MPI_INT, 0, send_data_tag, parentcomm);
}
else printf("I do not understand %d\n", rank);
printf("I wait the thread to kill the programm %d\n", rank);
pthread_join(properlyKill, (void**)&(res));
return 0;
}
For the master I have:
int main(int argc, char **argv)
{
Parser *p = new Parser("slave.xml");
MPI_Init(&argc, &argv);
if(p->method == "concurrent")
{
ConcurrentManager cc(p->instance, p->solvers);
cc.run();
}
else
{
cerr << "c The only available methods are: concurrent, eps (Embarrassingly Parallel Search) or tree" << endl;
exit(1);
}
delete(p);
MPI_Finalize();
exit(0);
}// main
/**
Create a concurrent manager (means init the data structures to run
the solvers).
#param[in] _instance, the benchmark path
#param[in] _solvers, the set of solvers that will be ran
*/
ConcurrentManager::ConcurrentManager(string _instance, vector<Solver> &_solvers) :
instance(_instance), solvers(_solvers)
{
cout << "c\nc Concurrent manager called" << endl;
nbSolvers = _solvers.size();
np = new int[nbSolvers];
cmds = new char*[nbSolvers];
arrayOfArgs = new char **[nbSolvers];
infos = new MPI_Info[nbSolvers];
for(int i = 0 ; i<nbSolvers ; i++)
{
np[i] = solvers[i].npernode;
cmds[i] = new char[(solvers[i].executablePath).size() + 1];
strcpy(cmds[i], (solvers[i].executablePath).c_str());
arrayOfArgs[i] = new char *[(solvers[i].options).size() + 1];
for(unsigned int j = 0 ; j<(solvers[i].options).size() ; j++)
{
arrayOfArgs[i][j] = new char[(solvers[i].options[j]).size() + 1];
strcpy(arrayOfArgs[i][j], (solvers[i].options[j]).c_str());
}
arrayOfArgs[i][(solvers[i].options).size()] = NULL;
MPI_Info_create(&infos[i]);
char hostname[solvers[i].hostname.size()];
strcpy(hostname, solvers[i].hostname.c_str());
MPI_Info_set(infos[i], "host", hostname);
}
sizeComm = 0;
}// constructor
/**
Wait that at least one process finish and return the code
SOLUTION_FOUND.
#param[in] intercomm, the communicator
*/
void ConcurrentManager::waitForSolution(MPI_Comm &intercomm)
{
MPI_Status arrayStatus[sizeComm], status;
MPI_Request request[sizeComm];
int val[sizeComm], flag;
for(int i = 0 ; i<sizeComm ; i++) MPI_Irecv(&val[i], 1, MPI_INT, i, TAG_MSG, intercomm, &request[i]);
bool solutionFound = false;
while(!solutionFound)
{
for(int i = 0 ; i<sizeComm ; i++)
{
MPI_Test(&request[i], &flag, &arrayStatus[i]);
if(flag)
{
printf("---------------------> %d reveived %d\n", i , val[i]);
if(val[i] == SOLUTION_FOUND)
{
int msg = PRINT_SOLUTION;
MPI_Send(&msg, 1, MPI_INT, i, TAG_MSG, intercomm); // ask to print the solution
int msgJobFinished;
MPI_Recv(&msgJobFinished, 1, MPI_INT, i, TAG_MSG, intercomm, &status); // wait the answer
assert(msgJobFinished == JOB_FINISHED);
cout << "I am going to kill everybody" << endl;
int msgKill[sizeComm];
for(int j = 0 ; j<sizeComm ; j++)
{
msgKill[i] = STOP_AT_ONCE;
MPI_Send(&msgKill[i], 1, MPI_INT, j, TAG_KILL, intercomm);
}
solutionFound = true;
break;
} else
{
printf("restart the communication for %d\n", i);
MPI_Irecv(&val[i], 1, MPI_INT, i, TAG_MSG, intercomm, &request[i]);
}
}
}
}
}// waitForSolution
/**
Run the solver.
*/
void ConcurrentManager::run()
{
MPI_Comm intercomm;
int errcodes[solvers.size()];
MPI_Comm_spawn_multiple(nbSolvers, cmds, arrayOfArgs, np, infos, 0, MPI_COMM_WORLD, &intercomm, errcodes);
MPI_Comm_remote_size(intercomm, &sizeComm);
cout << "c Solvers are now running: " << sizeComm << endl;
int msg = CONCU_MODE;
for(int i = 0 ; i<sizeComm ; i++) MPI_Send(&msg, 1, MPI_INT, i, TAG_MSG, intercomm); // init the working mode
waitForSolution(intercomm);
}// run
I know that I have posted a lot of code :(
But I do not know where the problem is.
Please help me :)
Best regards.
The MPI documentation for how MPI interacts with threads demands that the call to MPI_Finalize() be performed by the main thread -- that is, the same one that initialized MPI. In your case, that happens also to be your process's initial thread.
In order to satisfy MPI's requirements, you could reorganize your application so that the initial thread is the one that waits for a kill signal and then shuts down MPI. The other work it currently does would then need to be moved to a different thread.
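A rough sketch of that reorganization, using hypothetical names and only the kill tag from your code (the solver protocol itself is elided): the initial thread, the one that called MPI_Init_thread, blocks on the kill message and finally calls MPI_Finalize, while the solving moves into the pthread.

#include <mpi.h>
#include <pthread.h>
#include <stdlib.h>

#define send_kill_tag 666

/* Worker thread: runs the MPI_Recv/MPI_Send solver protocol that used to
   live in main(), and returns when that protocol is finished. */
static void *solverLoop(void *arg)
{
    MPI_Comm parentcomm = *(MPI_Comm *) arg;
    /* ... solver exchanges with the master over parentcomm go here ... */
    (void) parentcomm;
    return NULL;
}

int main(int argc, char *argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) MPI_Abort(MPI_COMM_WORLD, 1);

    /* Assumes this rank was spawned by the master, so the parent
       communicator is not MPI_COMM_NULL. */
    MPI_Comm parentcomm;
    MPI_Comm_get_parent(&parentcomm);

    pthread_t solver;
    pthread_create(&solver, NULL, solverLoop, &parentcomm);

    /* The initial thread waits for the kill message, so it is also the
       thread that shuts MPI down, as the standard requires. */
    int res;
    MPI_Recv(&res, 1, MPI_INT, 0, send_kill_tag, parentcomm, MPI_STATUS_IGNORE);

    pthread_join(solver, NULL);   /* the master's protocol must let the solver finish */
    MPI_Finalize();
    return 0;
}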

MPI Debugging, Segmentation fault?

EDIT: My question is similar to C, Open MPI: segmentation fault from call to MPI_Finalize(). The segfault does not always happen, especially with low numbers of processes, so if you answer that one instead, that would be great; either way...
I was hoping to get some help debugging the following code:
int main(){
long* my_local;
long n, s, f;
MPI_Init(NULL, NULL);
MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
if(my_rank == 0){
/* Get size n from user */
printf("Total processes: %d\n", comm_sz);
printf("Number of keys to be sorted? ");
fflush(stdout);
scanf("%ld", &n);
/* Broadcast size n to other processes */
MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);
/* Create n/comm_sz keys
NOTE! some processes will have 1 extra key if
n%comm_sz != 0 */
create_Keys(&my_local, my_rank, comm_sz, n, &s, &f);
}
if(my_rank != 0){
/* Receive n from process 0 */
MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);
/* Create n/comm_sz keys */
create_Keys(&my_local, my_rank, comm_sz, n, &s, &f);
}
/* The offending function, f is a long set to num elements of my_local*/
Odd_Even_Tsort(&my_local, my_rank, f, comm_sz);
printf("Process %d completed the function", my_rank);
MPI_Finalize();
return 0;
}
void Odd_Even_Tsort(long** my_local, int my_rank, long my_size, int comm_sz)
{
long nochange = 1;
long phase = 0;
long complete = 1;
MPI_Status Stat;
long your_size = 1;
long* recv_buf = malloc(sizeof(long)*(my_size+1));
printf("rank %d has size %ld\n", my_rank, my_size);
while (complete!=0){
if((phase%2)==0){
if( ((my_rank%2)==0) && my_rank < comm_sz-1){
/* Send right */
MPI_Send(&my_size, 1, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD);
MPI_Send(*my_local, my_size, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD);
MPI_Recv(&your_size, 1, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD, &Stat);
MPI_Recv(&recv_buf, your_size, MPI_LONG, my_rank+1, 0, MPI_COMM_WORLD, &Stat);
}
if( ((my_rank%2)==1) && my_rank < comm_sz){
/* Send left */
MPI_Recv(&your_size, 1, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
MPI_Recv(&recv_buf, your_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
MPI_Send(&my_size, 1, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD);
MPI_Send(*my_local, my_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD);
}
}
phase ++;
complete = 0;
}
printf("Done!\n");
fflush(stdout);
}
And the Error I'm getting is:
[ubuntu:04968] *** Process received signal ***
[ubuntu:04968] Signal: Segmentation fault (11)
[ubuntu:04968] Signal code: Address not mapped (1)
[ubuntu:04968] Failing at address: 0xb
--------------------------------------------------------------------------
mpiexec noticed that process rank 1 with PID 4968 on node ubuntu exited on signal 11 (Segmentation fault).
The reason I'm baffled is that the print statements after the function are still displayed, but if I comment out the function there are no errors. So, where the heap am I getting a segmentation fault?? I'm getting the error with mpiexec -n 2 ./a.out and an 'n' size bigger than 9.
If you actually want the entire runnable code, let me know. Really, I was hoping not so much for the precise answer as for how to use the gdb/valgrind tools to debug this problem and others like it (and how to read their output).
(And yes, I realize the 'sort' function isn't sorting yet).
The problem here is simple, yet difficult to see unless you use a debugger or print out exhaustive debugging information:
Look at the code where MPI_Recv is called. The recv_buf variable should be supplied as an argument instead of &recv_buf.
MPI_Recv(recv_buf, your_size, MPI_LONG, my_rank-1, 0, MPI_COMM_WORLD, &Stat);
The rest seems ok.
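As for the tooling part of the question: with Open MPI you can usually run every rank under valgrind, or attach a debugger per rank, by wrapping the executable on the mpiexec command line (both lines below assume Open MPI and, for the second one, a working X display):

mpiexec -n 2 valgrind ./a.out
mpiexec -n 2 xterm -e gdb ./a.out

Either tool shows a backtrace at the point of the crash, which is usually enough to spot this kind of pointer mix-up.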

Unclear behaviour in simple MPI send/receive program

I've been having a bug in my code for some time and could not figure out yet how to solve it.
What I'm trying to achieve is easy enough: every worker node (i.e. node with rank != 0) gets a row (represented by a 1-dimensional array) in a square structure that involves some computation. Once the computation is done, this row gets sent back to the master.
For testing purposes, there is no computation involved. All that's happening is:
master sends row number to worker, worker uses the row number to calculate the according values
worker sends the array with the result values back
Now, my issue is this:
all works as expected up to a certain size for the number of elements in a row (size = 1006) and number of workers > 1
if the elements in a row exceed 1006, workers fail to shut down and the program does not terminate
this only occurs if I try to send the array back to the master. If I simply send back an INT, then everything is OK (see commented out line in doMasterTasks() and doWorkerTasks())
Based on the last bullet point, I assume that there must be some race-condition which only surfaces when the array to be sent back to the master reaches a certain size.
Do you have any idea what the issue could be?
Compile the following code with: mpicc -O2 -std=c99 -o simple
Run the executable like so: mpirun -np 3 simple <size> (e.g. 1006 or 1007)
Here's the code:
#include "mpi.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#define MASTER_RANK 0
#define TAG_RESULT 1
#define TAG_ROW 2
#define TAG_FINISHOFF 3
int mpi_call_result, my_rank, dimension, np;
// forward declarations
void doInitWork(int argc, char **argv);
void doMasterTasks(int argc, char **argv);
void doWorkerTasks(void);
void finalize();
void quit(const char *msg, int mpi_call_result);
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
void doMasterTasks(int argc, char **argv) {
printf("Starting to distribute work...\n");
int size = dimension;
int * dataBuffer = (int *) malloc(sizeof(int) * size);
int currentRow = 0;
int receivedRow = -1;
int rowsLeft = dimension;
MPI_Status status;
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
// MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
MPI_Recv(&receivedRow, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
shutdownWorkers();
free(dataBuffer);
}
void doWorkerTasks() {
printf("Worker %d started\n", my_rank);
// send the processed row back as the first element in the colours array.
int size = dimension;
int * data = (int *) malloc(sizeof(int) * size);
memset(data, 0, sizeof(size));
int processingRow = -1;
MPI_Status status;
for (;;) {
MPI_Recv(&processingRow, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (status.MPI_TAG == TAG_FINISHOFF) {
printf("Finish-OFF tag received!\n");
break;
} else {
// MPI_Send(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
MPI_Send(&processingRow, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
}
}
printf("Slave %d finished work\n", my_rank);
free(data);
}
int main(int argc, char **argv) {
if (argc == 2) {
sscanf(argv[1], "%d", &dimension);
} else {
dimension = 1000;
}
doInitWork(argc, argv);
if (my_rank == MASTER_RANK) {
doMasterTasks(argc, argv);
} else {
doWorkerTasks();
}
finalize();
}
void quit(const char *msg, int mpi_call_result) {
printf("\n%s\n", msg);
MPI_Abort(MPI_COMM_WORLD, mpi_call_result);
exit(mpi_call_result);
}
void finalize() {
mpi_call_result = MPI_Finalize();
if (mpi_call_result != 0) {
quit("Finalizing the MPI system failed, aborting now...", mpi_call_result);
}
}
void doInitWork(int argc, char **argv) {
mpi_call_result = MPI_Init(&argc, &argv);
if (mpi_call_result != 0) {
quit("Error while initializing the system. Aborting now...\n", mpi_call_result);
}
MPI_Comm_size(MPI_COMM_WORLD, &np);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
}
Any help is greatly appreciated!
Best,
Chris
If you take a look at your doWorkerTasks, you see that they send exactly as many data messages as they receive (and they receive one more to shut them down).
But your master code:
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
sends np-2 more data messages than it receives. In particular, it only keeps receiving data until it has no more to send, even though there should be np-2 more data messages outstanding. Changing the code to the following:
int rowsLeftToSend= dimension;
int rowsLeftToReceive = dimension;
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeftToSend--;
currentRow++;
}
while (rowsLeftToReceive > 0) {
MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
rowsLeftToReceive--;
if (rowsLeftToSend> 0) {
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeftToSend--;
currentRow++;
}
}
Now works.
Why the code doesn't deadlock (note this is deadlock, not a race condition; this is a more common parallel error in distributed computing) for smaller message sizes is a subtle detail of how most MPI implementations work. Generally, MPI implementations just "shove" small messages down the pipe whether or not the receiver is ready for them, but larger messages (since they take more storage resources on the receiving end) need some handshaking between the sender and the receiver. (If you want to find out more, search for eager vs rendezvous protocols).
So for the small message case (less than 1006 ints in this case, and 1 int definitely works, too) the worker nodes did their send whether or not the master was receiving them. If the master had called MPI_Recv(), the messages would have been there already and it would have returned immediately. But it didn't, so there were pending messages on the master side; but it didn't matter. The master sent out its kill messages, and everyone exited.
But for larger messages, the remaining send()s have to have the receiver participating to clear, and since the receiver never does, the remaining workers hang.
Note that even for the small message case where there was no deadlock, the code didn't work properly - there was missing computed data.
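If you want to see the eager/rendezvous cutoff in isolation, here is a tiny self-contained sketch (an illustration only; the cutoff value depends on the MPI implementation and transport): both ranks send before they receive, which is incorrect by the standard, yet it typically "works" below the eager limit and hangs above it.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, n = (argc > 1) ? atoi(argv[1]) : 10;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *sendbuf = calloc(n, sizeof *sendbuf);
    int *recvbuf = calloc(n, sizeof *recvbuf);
    int other = 1 - rank;   /* run with exactly two ranks */

    /* Both ranks send first.  Below the eager limit the messages are
       shoved to the receiver and the sends return immediately; above it
       each send waits for a matching receive that is never posted, and
       both ranks hang -- the same effect as the missing receives above. */
    MPI_Send(sendbuf, n, MPI_INT, other, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, n, MPI_INT, other, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d finished with n = %d\n", rank, n);
    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Run it with mpirun -np 2 and increase n until the prints stop appearing.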
Update: There was a similar problem in your shutdownWorkers:
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
Here you are sending to all processes, including rank 0, the one doing the sending. In principle, that MPI_Send should deadlock, as it is a blocking send and there isn't a matching receive already posted. You could post a non-blocking receive beforehand to avoid this, but that's unnecessary -- rank 0 doesn't need to let itself know to end. So just change the loop to
for (int i = 1; i < np; i++)
tl;dr - your code deadlocked because the master wasn't receiving enough messages from the workers; it happened to work for small message sizes because of an implementation detail common to most MPI libraries.
