It's a master/slave situation. How can I make the master process check, in a non-blocking way, for a message transmitted to it? If at the point of checking no message has been transmitted to the master, it continues with its iterations. However, if there is a message, it processes the message and then continues with the iterations. See the comment inside /* */:
int main(int argc, char *argv[])
{
int numprocs,
rank;
MPI_Request request;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
if(rank == 0) // the searching process
{
for (int i=0; i < 4000000; i++)
{
// do some stuff here; does not matter what
/* see if any message has been transmitted to me at this point
without blocking the process; if by this time one happens to have
been transmitted, do something and then continue with the for
iterations; or just continue with the for iterations and maybe
next time there will be a message telling me to do something */
}
}
else
{
int flag = 1;
while(flag)
{
// something done that at some point changes flag
}
// send a message to process with rank 0 and don't get stuck here
MPI_Isend(12, 1, MPI_INT, 0, 100, MPI_COMM_WORLD, &request);
// some other stuff done
// wait for message to be transmitted
MPI_Wait(&request, &status);
}
MPI_Finalize();
return 0;
}
One solution is to use MPI_Iprobe() to test whether a message is waiting.
On this line, pass a pointer instead of the literal 12:
MPI_Isend(12, 1, MPI_INT, 0, 100, MPI_COMM_WORLD, &request);
Add flag=0 here:
while(flag!=0)
{
// something done that at some point changes flag
}
Here goes the code:
#include "mpi.h"
#include "stdio.h"
int main(int argc, char *argv[])
{
int numprocs,
rank;
MPI_Request request;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
if(rank == 0) // the searching process
{
int i;
for (i=0; i < 4000000; i++)
{
// do some stuff here; does not matter what
//printf("I am still running!\n");
int flag;
MPI_Iprobe(MPI_ANY_SOURCE,100,MPI_COMM_WORLD,&flag,&status);
if(flag!=0){
int value;
MPI_Recv(&value, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG, MPI_COMM_WORLD, &status);
printf("I (0) received %d \n",value);
}
/* see if any message has been transmitted to me at this point
without blocking the process; if by this time one happens to have
been transmitted, do something and then continue with the for
iterations; or just continue with the for iterations and maybe
next time there will be a message telling me to do something */
}
}
else
{
int i;
for(i=0;i<42;i++){
int flag = 1;
while(flag!=0)
{
// something done that at some point changes flag
flag=0;
}
int bla=1000*rank+i;
// send a message to process with rank 0 and don't get stuck here
MPI_Isend(&bla, 1, MPI_INT, 0, 100, MPI_COMM_WORLD, &request);
// some other stuff done
printf("I (%d) do something\n",rank);
// wait for message to be transmitted
MPI_Wait(&request, &status);
}
}
MPI_Finalize();
return 0;
}
Bye,
Francis
The non-blocking test for available messages is done using the MPI_Iprobe call. In your case it would look like:
int available;
MPI_Status status;
if(rank == 0) // the searching process
{
for (int i=0; i < 4000000; i++)
{
// do some stuff here; does not matter what
/* see if any message has been transmitted to me at this point
without blocking the process; if by this time one happens to have
been transmitted, do something and then continue with the for
iterations; or just continue with the for iterations and maybe
next time there will be a message telling me to do something */
// Tag value 100 matches the value used in the send operation
MPI_Iprobe(MPI_ANY_SOURCE, 100, MPI_COMM_WORLD, &available, &status);
if (available)
{
// Message source rank is now available in status.MPI_SOURCE
// Receive the message
MPI_Recv(..., status.MPI_SOURCE, status.MPI_TAG, MPI_COMM_WORLD, &status);
}
}
}
MPI_ANY_SOURCE is used as a wildcard rank, i.e. it instructs MPI_Iprobe to check for messages from any source. If a matching send was posted, then available will be set to true, otherwise it will be set to false. The actual source of the message is also written to the MPI_SOURCE field of the status object. If the available flag indicates the availability of a matching message, one should then post a receive operation in order to receive it. It is important that the rank and the tag are explicitly specified in the receive operation, otherwise a different message could get received instead.
You could also use persistent communication requests. These behave very much like non-blocking operations, with the important difference that they can be restarted multiple times. The same code with a persistent request would look like:
if(rank == 0) // the searching process
{
MPI_Request req;
MPI_Status status;
int completed;
// Prepare the persistent connection request
MPI_Recv_init(buffer, buf_size, buf_type,
MPI_ANY_SOURCE, 100, MPI_COMM_WORLD, &req);
// Make the request active
MPI_Start(&req);
for (int i=0; i < 4000000; i++)
{
// do some stuff here; does not matter what
/* see if any message has been transmitted to me at this point
without blocking the process; if at this time it happens to be
transmitted, do something and than continue with for iternations;
or just continue with for iterations and maybe next time will
have a message which sends me to do something */
// Non-blocking Test for request completion
MPI_Test(&req, &completed, &status);
if (completed)
{
// Message is now in buffer
// Process the message
// ...
// Activate the request again
MPI_Start(&req);
}
}
// Cancel and free the request
MPI_Cancel(&req);
MPI_Request_free(&req);
}
Persistent operations have a slight performance edge over the non-persistent ones shown in the previous code sample. It is important that buffer is not accessed while the request is active, i.e. after the call to MPI_Start and before MPI_Test signals completion. Persistent send/receive operations also match non-persistent receive/send operations; therefore it is not necessary to change the code of the workers, and they can still use MPI_Isend.
I am solving a load-balancing problem using MPI: a master process sends tasks to the slave processes and collects results as they compute and send back their work.
Since I want to improve performance as much as possible, I use non-blocking communication: the master sends several tasks and then waits until one process sends back its response, so that the master can send additional work to it, and so on.
I use MPI_Waitany() since I don't know in advance which slave process responds first; then I get the sender from the status so I can send the new job to it.
My problem is that sometimes the sender I get is wrong (a rank not in MPI_COMM_WORLD) and the program crashes; other times it works fine.
Here's the code. Thanks!
//master
if (rank == 0) {
int N_chunks = 10;
MPI_Request request[N_chunks];
MPI_Status status[N_chunks];
int N_computed = 0;
int dest,index_completed;
//initialize array of my data structure
vec send[N_chunks];
vec recv[N_chunks];
//send one job to each process in communicator
for(int i=1;i<size;i++){
MPI_Send( &send[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD);
MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD,
&request[N_computed]);
N_computed++;
}
// loop
while (N_computed < N_chunks){
//get processed messages
MPI_Waitany(N_computed,request,&index_completed,status);
//get sender ID dest
dest = status[index_completed].MPI_SOURCE;
//send a new job to that process
MPI_Send( &send[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD);
MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD,
&request[N_computed]);
N_computed++;
}
MPI_Waitall(N_computed,request,status);
//close all process
printf("End master\n");
}
You are not using MPI_Waitany() correctly.
It should be
MPI_Status status;
MPI_Waitany(N_computed,request,&index_completed,&status);
dest = status.MPI_SOURCE;
Note:
you need an extra loop to MPI_Wait() the last size - 1 requests
you can revamp your algorithm and use MPI_Request request[size-1]; and hence save some memory
I forgot to add a line in the initial post in which I wait for all the pending requests. By the way, if I initialize a new status every time I do Waitany, the master process crashes; I need to track which processes are still pending in order to wait the proper number of times.
Thanks, by the way.
EDIT: now it works, even if I find it not very smart; would it be possible to initialize an array of MPI_Status at the beginning instead of doing it each time before a wait?
//master
if (rank == 0) {
int N_chunks = 10;
MPI_Request request[size-1];
int N_computed = 0;
int dest;
int index_completed;
//initialize array of vec
vec send[N_chunks];
vec recv[N_chunks];
//initial case
for(int i=1;i<size;i++){
MPI_Send(&send[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD);
MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, i, tag, MPI_COMM_WORLD, &request[N_computed]);
N_computed++;
}
// loop
while (N_computed < N_chunks){
MPI_Status status;
//get processed messages
MPI_Waitany(N_computed,request,&index_completed,&status);
//get sender ID dest
dest = status.MPI_SOURCE;
MPI_Send(&send[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD);
MPI_Irecv(&recv[N_computed], 1, mpi_vec_type, dest, tag, MPI_COMM_WORLD, &request[N_computed]);
N_computed++;
}
//wait other process to send back their load
for(int i=0;i<size-1;i++){
MPI_Status status;
MPI_Waitany(N_computed, request, &index_completed,&status);
}
//end
printf("Ms finish\n");
}
I have the following code which works:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
int world_rank, world_size;
MPI_Init(NULL, NULL);
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int n = 10000;
int ni, i;
double t[n];
int x[n];
int buf[n];
int buf_size = n*sizeof(int);
MPI_Buffer_attach(buf, buf_size);
if (world_rank == 0) {
for (ni = 0; ni < n; ++ni) {
int msg_size = ni;
int msg[msg_size];
for (i = 0; i < msg_size; ++i) {
msg[i] = rand();
}
double time0 = MPI_Wtime();
MPI_Bsend(&msg, msg_size, MPI_INT, 1, 0, MPI_COMM_WORLD);
t[ni] = MPI_Wtime() - time0;
x[ni] = msg_size;
MPI_Barrier(MPI_COMM_WORLD);
printf("P0 sent msg with size %d\n", msg_size);
}
}
else if (world_rank == 1) {
for (ni = 0; ni < n; ++ni) {
int msg_size = ni;
int msg[msg_size];
MPI_Request request;
MPI_Barrier(MPI_COMM_WORLD);
MPI_Irecv(&msg, msg_size, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);
printf("P1 received msg with size %d\n", msg_size);
}
}
MPI_Buffer_detach(&buf, &buf_size);
MPI_Finalize();
}
As soon as I remove the print statements, the program crashes, telling me there is an MPI_ERR_BUFFER: invalid buffer pointer. If I remove only one of the print statements, the other print statements are still executed, so I believe it crashes at the end of the program. I don't see why it crashes, and the fact that it does not crash when I am using the print statements goes beyond my logic...
Would anybody have a clue what is going on here?
You are simply not providing enough buffer space to MPI. In buffered mode, all ongoing messages are stored in the buffer space, which is used as a ring buffer. In your code, there can be multiple messages that need to be buffered, regardless of the printf. Note that not even 2*n*sizeof(int) would be enough buffer space - the barriers do not guarantee that the buffer is locally freed even though the corresponding receive has completed. You would have to provide (n*(n-1)/2)*sizeof(int) bytes to be sure, or something in between and hope.
Bottom line: Don't use buffered mode.
Generally, use standard blocking send calls and write the application such that it doesn't deadlock. Tune the MPI implementation so that it sends small messages eagerly, regardless of the receiver's state, to avoid wait times on late receivers.
If you want to overlap communication and computation, use nonblocking messages - providing proper memory for each communication.
I am trying to understand the difference between blocking and non-blocking message passing mechanisms in parallel processing using MPI. Suppose we have the following blocking code:
#include <stdio.h>
#include <string.h>
#include "mpi.h"
int main (int argc, char* argv[]) {
const int maximum_message_length = 100;
const int rank_0= 0;
char message[maximum_message_length+1];
MPI_Status status; /* Info about receive status */
int my_rank; /* This process ID */
int num_procs; /* Number of processes in run */
int source; /* Process ID to receive from */
int destination; /* Process ID to send to */
int tag = 0; /* Message ID */
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
/* clients processes */
if (my_rank != rank_0) {
sprintf(message, "Hello world from process# %d", my_rank);
MPI_Send(message, strlen(message) + 1, MPI_CHAR, rank_0, tag, MPI_COMM_WORLD);
} else {
/* rank 0 process */
for (source = 0; source < num_procs; source++) {
if (source != rank_0) {
MPI_Recv(message, maximum_message_length + 1, MPI_CHAR, source, tag,
MPI_COMM_WORLD,&status);
fprintf(stderr, "%s\n", message);
}
}
}
MPI_Finalize();
}
Each processor executes its task and sends it back to rank_0 (the receiver). rank_0 runs a loop over processes 1 to n-1 and prints their messages sequentially (step i in the loop may not proceed if the current client hasn't sent its task yet). How do I modify this code to achieve the non-blocking mechanism using MPI_Isend and MPI_Irecv? Do I need to remove the loop in the receiver part (rank_0) and explicitly state MPI_Irecv(..) for each client, i.e.
MPI_Irecv(message, maximum_message_length + 1, MPI_CHAR, source, tag,
MPI_COMM_WORLD,&status);
Thank you.
What you do with non-blocking communication is post the communication and then immediately proceed with your program to do other stuff, which again might be posting more communication. In particular, you can post all receives at once, and wait on them to complete only later on.
This is what you typically would do in your scenario here.
Note however, that this specific setup is a bad example, as it basically just reimplements an MPI_Gather!
Here is how you typically would go about the non-blocking communication in your setup. First, you need some storage for all the messages to end up in, and also a list of request handles to keep track of the non-blocking communication requests, thus your first part of the code needs to be changed accordingly:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "mpi.h"
int main (int argc, char* argv[]) {
const int maximum_message_length = 100;
const int server_rank = 0;
char message[maximum_message_length+1];
char *allmessages;
MPI_Status *status; /* Info about receive status */
MPI_Request *req; /* Non-Blocking Requests */
int my_rank; /* This process ID */
int num_procs; /* Number of processes in run */
int source; /* Process ID to receive from */
int tag = 0; /* Message ID */
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
/* clients processes */
if (my_rank != server_rank) {
sprintf(message, "Hello world from process# %d", my_rank);
MPI_Send(message, maximum_message_length + 1, MPI_CHAR, server_rank,
tag, MPI_COMM_WORLD);
} else {
No need for non-blocking sends here. Now we go on and receive all these messages on server_rank. We need to loop over all of them and store a request handle for each of them:
/* rank 0 process */
allmessages = malloc((maximum_message_length+1)*num_procs);
status = malloc(sizeof(MPI_Status)*num_procs);
req = malloc(sizeof(MPI_Request)*num_procs);
for (source = 0; source < num_procs; source++) {
req[source] = MPI_REQUEST_NULL;
if (source != server_rank) {
/* Post non-blocking receive for source */
MPI_Irecv(allmessages+(source*(maximum_message_length+1)),
maximum_message_length + 1, MPI_CHAR, source, tag,
MPI_COMM_WORLD, req+source);
/* Proceed without waiting on the receive */
/* (posting further receives */
}
}
/* Wait on all communications to complete */
MPI_Waitall(num_procs, req, status);
/* Print the messages in order to the screen */
for (source = 0; source < num_procs; source++) {
if (source != server_rank) {
fprintf(stderr, "%s\n",
allmessages+(source*(maximum_message_length+1)));
}
}
}
MPI_Finalize();
}
After posting the non-blocking receives, we need to wait on all of them to complete in order to print the messages in the correct order. To do this, an MPI_Waitall is used, which allows us to block until all request handles are satisfied. Note that I include the server_rank here for simplicity, but set its request to MPI_REQUEST_NULL initially, so it will be ignored.
If you do not care about the order, you could process the communications as soon as they become available, by looping over the requests and employing MPI_Waitany. That would return as soon as any communication is completed and you could act on the corresponding data.
With MPI_Gather that code would look like this:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include "mpi.h"
int main (int argc, char* argv[]) {
const int maximum_message_length = 100;
const int server_rank = 0;
char message[maximum_message_length+1];
char *allmessages;
int my_rank; /* This process ID */
int num_procs; /* Number of processes in run */
int source; /* Process ID to receive from */
int tag = 0; /* Message ID */
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
if (my_rank == server_rank) {
allmessages = malloc((maximum_message_length+1)*num_procs);
}
sprintf(message, "Hello world from process# %d", my_rank);
MPI_Gather(message, (maximum_message_length+1), MPI_CHAR,
allmessages, (maximum_message_length+1), MPI_CHAR,
server_rank, MPI_COMM_WORLD);
if (my_rank == server_rank) {
/* Print the messages in order to the screen */
for (source = 0; source < num_procs; source++) {
if (source != server_rank) {
fprintf(stderr, "%s\n",
allmessages+(source*(maximum_message_length+1)));
}
}
}
MPI_Finalize();
}
And with MPI-3 you can even use a non-blocking MPI_Igather.
If you don't care about the ordering, the last part (starting with MPI_Waitall) could be done with MPI_Waitany like this:
for (i = 0; i < num_procs-1; i++) {
/* Wait on any next communication to complete */
MPI_Waitany(num_procs, req, &source, status);
fprintf(stderr, "%s\n",
allmessages+(source*(maximum_message_length+1)));
}
I am using MPI to distribute images to different processes so that:
Process 0 distributes images to the different processes.
Processes other than 0 process the image and then send the result back to process 0.
Process 0 tries to keep every process busy: as soon as one finishes its job with an image, it is assigned another image to process. The code follows:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include "mpi.h"
#define MAXPROC 16 /* Max number of processes */
#define TOTAL_FILES 7
int main(int argc, char* argv[]) {
int i, nprocs, tprocs, me, index;
const int tag = 42; /* Tag value for communication */
MPI_Request recv_req[MAXPROC]; /* Request objects for non-blocking receive */
MPI_Request send_req[MAXPROC]; /* Request objects for non-blocking send */
MPI_Status status; /* Status object for non-blocking receive */
char myname[MPI_MAX_PROCESSOR_NAME]; /* Local host name string */
char hostname[MAXPROC][MPI_MAX_PROCESSOR_NAME]; /* Received host names */
int namelen;
MPI_Init(&argc, &argv); /* Initialize MPI */
MPI_Comm_size(MPI_COMM_WORLD, &nprocs); /* Get nr of processes */
MPI_Comm_rank(MPI_COMM_WORLD, &me); /* Get own identifier */
MPI_Get_processor_name(myname, &namelen); /* Get host name */
myname[namelen++] = (char)0; /* Terminating null byte */
/* First check that we have at least 2 and at most MAXPROC processes */
if (nprocs<2 || nprocs>MAXPROC) {
if (me == 0) {
printf("You have to use at least 2 and at most %d processes\n", MAXPROC);
}
MPI_Finalize(); exit(0);
}
/* if TOTAL_FILES < nprocs then use only TOTAL_FILES + 1 procs */
tprocs = (TOTAL_FILES < nprocs) ? TOTAL_FILES + 1 : nprocs;
int done = -1;
if (me == 0) { /* Process 0 does this */
int send_counter = 0, received_counter;
for (i=1; i<tprocs; i++) {
MPI_Isend(&send_counter, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
++send_counter;
/* Receive a message from all other processes */
MPI_Irecv (hostname[i], namelen, MPI_CHAR, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[i]);
}
for (received_counter = 0; received_counter < TOTAL_FILES; received_counter++){
/* Wait until at least one message has been received from any process other than 0*/
MPI_Waitany(tprocs-1, &recv_req[1], &index, &status);
if (index == MPI_UNDEFINED) perror("Errorrrrrrr");
printf("Received a message from process %d on %s\n", status.MPI_SOURCE, hostname[index+1]);
if (send_counter < TOTAL_FILES){ /* if there are still images left to process */
MPI_Isend(&send_counter, 1, MPI_INT, status.MPI_SOURCE, tag, MPI_COMM_WORLD, &send_req[status.MPI_SOURCE]);
++send_counter;
MPI_Irecv (hostname[status.MPI_SOURCE], namelen, MPI_CHAR, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &recv_req[status.MPI_SOURCE]);
}
}
for (i=1; i<tprocs; i++) {
MPI_Isend(&done, 1, MPI_INT, i, tag, MPI_COMM_WORLD, &send_req[i]);
}
} else if (me < tprocs) { /* all other processes do this */
int y;
MPI_Recv(&y, 1, MPI_INT, 0,tag,MPI_COMM_WORLD,&status);
while (y != -1) {
printf("Process %d: Received image %d\n", me, y);
sleep(me%3+1); /* Let the processes sleep for 1-3 seconds */
/* Send own identifier back to process 0 */
MPI_Send (myname, namelen, MPI_CHAR, 0, tag, MPI_COMM_WORLD);
MPI_Recv(&y, 1, MPI_INT, 0,tag,MPI_COMM_WORLD,&status);
}
}
MPI_Finalize();
exit(0);
}
which is based on this example.
Right now I'm getting a segmentation fault, and I'm not sure why. I'm fairly new to MPI, but I can't see a mistake in the code above. It only happens with certain numbers of processes, for example when TOTAL_FILES = 7 and the code is run with 5, 6 or 7 processes. It works fine with 9 processes or more.
The entire code can be found here. Trying it with 6 processes causes the mentioned error.
To compile and execute :
mpicc -Wall sscce.c -o sscce -lm
mpirun -np 6 sscce
It's not MPI_Waitany that is causing the segmentation fault; it is the way you handle the case when all requests in recv_req[] are completed (i.e. index == MPI_UNDEFINED). perror() does not stop the code, so it continues and then segfaults in the printf statement while trying to access hostname[index+1]. The reason all requests in the array get completed is that, due to the use of MPI_ANY_SOURCE in the receive call, the rank of the sender is not guaranteed to be equal to the index of the request in recv_req[] - simply compare index and status.MPI_SOURCE after MPI_Waitany returns to see it for yourself. Therefore the subsequent calls to MPI_Irecv will, with high probability, overwrite still-uncompleted requests, and thus the number of requests that can be completed by MPI_Waitany is less than the actual number of results expected.
Also note that you never wait for the send requests to complete. You are lucky that the Open MPI implementation uses an eager protocol to send small messages, so those get sent even though MPI_Wait(any|all) or MPI_Test(any|all) is never called on the started send requests.
I've been having a bug in my code for some time and could not figure out yet how to solve it.
What I'm trying to achieve is easy enough: every worker node (i.e. a node with rank != 0) gets a row (represented by a 1-dimensional array) in a square structure that involves some computation. Once the computation is done, this row gets sent back to the master.
For testing purposes, there is no computation involved. All that's happening is:
master sends a row number to a worker; the worker uses the row number to calculate the corresponding values
the worker sends the array with the result values back
Now, my issue is this:
all works as expected up to a certain number of elements in a row (size = 1006) and a number of workers > 1
if the elements in a row exceed 1006, the workers fail to shut down and the program does not terminate
this only occurs if I try to send the array back to the master. If I simply send back an INT, then everything is OK (see the commented-out lines in doMasterTasks() and doWorkerTasks())
Based on the last bullet point, I assume that there must be some race condition which only surfaces when the array to be sent back to the master reaches a certain size.
Do you have any idea what the issue could be?
Compile the following code with: mpicc -O2 -std=c99 -o simple
Run the executable like so: mpirun -np 3 simple <size> (e.g. 1006 or 1007)
Here's the code:
#include "mpi.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#define MASTER_RANK 0
#define TAG_RESULT 1
#define TAG_ROW 2
#define TAG_FINISHOFF 3
int mpi_call_result, my_rank, dimension, np;
// forward declarations
void doInitWork(int argc, char **argv);
void doMasterTasks(int argc, char **argv);
void doWorkerTasks(void);
void finalize();
void quit(const char *msg, int mpi_call_result);
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
void doMasterTasks(int argc, char **argv) {
printf("Starting to distribute work...\n");
int size = dimension;
int * dataBuffer = (int *) malloc(sizeof(int) * size);
int currentRow = 0;
int receivedRow = -1;
int rowsLeft = dimension;
MPI_Status status;
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
// MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
MPI_Recv(&receivedRow, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
shutdownWorkers();
free(dataBuffer);
}
void doWorkerTasks() {
printf("Worker %d started\n", my_rank);
// send the processed row back as the first element in the colours array.
int size = dimension;
int * data = (int *) malloc(sizeof(int) * size);
memset(data, 0, sizeof(int) * size);
int processingRow = -1;
MPI_Status status;
for (;;) {
MPI_Recv(&processingRow, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (status.MPI_TAG == TAG_FINISHOFF) {
printf("Finish-OFF tag received!\n");
break;
} else {
// MPI_Send(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
MPI_Send(&processingRow, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
}
}
printf("Slave %d finished work\n", my_rank);
free(data);
}
int main(int argc, char **argv) {
if (argc == 2) {
sscanf(argv[1], "%d", &dimension);
} else {
dimension = 1000;
}
doInitWork(argc, argv);
if (my_rank == MASTER_RANK) {
doMasterTasks(argc, argv);
} else {
doWorkerTasks();
}
finalize();
}
void quit(const char *msg, int mpi_call_result) {
printf("\n%s\n", msg);
MPI_Abort(MPI_COMM_WORLD, mpi_call_result);
exit(mpi_call_result);
}
void finalize() {
mpi_call_result = MPI_Finalize();
if (mpi_call_result != 0) {
quit("Finalizing the MPI system failed, aborting now...", mpi_call_result);
}
}
void doInitWork(int argc, char **argv) {
mpi_call_result = MPI_Init(&argc, &argv);
if (mpi_call_result != 0) {
quit("Error while initializing the system. Aborting now...\n", mpi_call_result);
}
MPI_Comm_size(MPI_COMM_WORLD, &np);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
}
Any help is greatly appreciated!
Best,
Chris
If you take a look at your doWorkerTasks, you see that the workers send exactly as many data messages as they receive (and they receive one more, to shut them down).
But your master code:
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
sends np-2 more data messages than it receives. In particular, it only keeps receiving data until it has no more to send, even though there should be np-2 more data messages outstanding. Changing the code to the following:
int rowsLeftToSend= dimension;
int rowsLeftToReceive = dimension;
for (int i = 1; i < np; i++) {
MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeftToSend--;
currentRow++;
}
while (rowsLeftToReceive > 0) {
MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
rowsLeftToReceive--;
if (rowsLeftToSend> 0) {
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeftToSend--;
currentRow++;
}
}
Now works.
Why the code doesn't deadlock (note this is deadlock, not a race condition; this is a more common parallel error in distributed computing) for smaller message sizes is a subtle detail of how most MPI implementations work. Generally, MPI implementations just "shove" small messages down the pipe whether or not the receiver is ready for them, but larger messages (since they take more storage resources on the receiving end) need some handshaking between the sender and the receiver. (If you want to find out more, search for eager vs rendezvous protocols).
So for the small message case (less than 1006 ints in this case, and 1 int definitely works, too) the worker nodes did their send whether or not the master was receiving them. If the master had called MPI_Recv(), the messages would have been there already and it would have returned immediately. But it didn't, so there were pending messages on the master side; but it didn't matter. The master sent out its kill messages, and everyone exited.
But for larger messages, the remaining send()s need the receiver participating to clear, and since the receiver never does, the remaining workers hang.
Note that even for the small message case where there was no deadlock, the code didn't work properly - there was missing computed data.
Update: There was a similar problem in your shutdownWorkers:
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
Here you are sending to all processes, including rank 0, the one doing the sending. In principle, that MPI_Send should deadlock, as it is a blocking send and there isn't a matching receive already posted. You could post a non-blocking receive beforehand to avoid this, but that's unnecessary -- rank 0 doesn't need to let itself know to end. So just change the loop to
for (int i = 1; i < np; i++)
tl;dr - your code deadlocked because the master wasn't receiving enough messages from the workers; it happened to work for small message sizes because of an implementation detail common to most MPI libraries.