So, somehow MPI_Probe receives the same message even though it was only sent once.
I execute the program with only 2 processes, of which process 1 sends two messages: one to retrieve a task and another to send the result. I send the messages with different tags to differentiate the two.
So what's supposed to happen is the following:
Process 0 waits for a task request
Process 1 sends a task request indicated by tag=0
Process 0 sends the task
Process 1 does the task and sends the results back to process 0 indicated by tag=1.
-- This is the part where the first problem occurs: process 0 still receives tag=0, as shown by the printf.
Process 0 receives the result and enters the else-if block where tag==1 -- it does not.
Process 1 breaks the while loop - DEBUGGING PURPOSE
Process 0 is supposed to be blocked by MPI_Probe but is not; instead it continues to run and always shows that the received tag is still 0.
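For reference, a minimal sketch of the probe/receive exchange I'm aiming for (task_buf, task_len and result_buf are placeholder names, not from my code):

MPI_Status status;
MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if(status.MPI_TAG == 0) {           /* task request */
    int request;
    /* consume the probed request message, then reply with a task */
    MPI_Recv(&request, 1, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD, &status);
    MPI_Send(task_buf, task_len, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
} else if(status.MPI_TAG == 1) {    /* result */
    int count;
    MPI_Get_count(&status, MPI_INT, &count);
    MPI_Recv(result_buf, count, MPI_INT, status.MPI_SOURCE, 1, MPI_COMM_WORLD, &status);
}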
The code is still messy and inefficient. I just want a minimal working program to build upon and optimize. But still, any tip is appreciated!
The code:
if(rank == 0) {
    struct stack* idle_stack = init_stack(env_size-1);
    struct sudoku_stack* sudoku_stack_ptr = init_sudoku_stack(256);
    push_sudoku(sudoku_stack_ptr, sudoku);
    int it=0;
    while(1) {
        printf("ITERATION %d\n", it);
        int idle_stack_size = stack_size(idle_stack);
        int _sudoku_stack_empty = sudoku_stack_empty(sudoku_stack_ptr);
        MPI_Probe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        printf("TAG: %d\n", status.MPI_TAG);
        // So this part is supposed to be entered once
        if(status.MPI_TAG == 0) {
            if(!sudoku_stack_empty(sudoku_stack_ptr)) {
                printf("SENDING TASK\n");
                int *next_sudoku = pop_sudoku(sudoku_stack_ptr);
                MPI_Send(next_sudoku, v_size, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
            } else {
                // But since the Tag stays 0, it is called multiple times until
                // a stack overflow occurs
                printf("PUSHING TO IDLE STACK\n");
                push(idle_stack, status.MPI_SOURCE);
            }
        } else if(status.MPI_TAG == 1) {
            // This part should actually be entered by the second received message
            printf("RECEIVING SOLUTION\n");
            int count;
            MPI_Get_count(&status, MPI_INT, &count);
            int* recv_sudokus = (int*)malloc(count * sizeof(int));
            MPI_Recv(recv_sudokus, count, MPI_INT, status.MPI_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            for(int i = 0; i < count; i+=v_size) {
                printf("%d ", recv_sudokus[i]);
                if((i+1) % m_size == 0){
                    printf("\n");
                }
            }
            // DEBUG - EXIT PROGRAM
            teardown(sudoku_stack_ptr, idle_stack);
            break;
            push_sudoku(sudoku_stack_ptr, recv_sudokus);
        } else if(status.MPI_TAG == 2) {
            //int* solved_sudoku = (int*)malloc(v_size * sizeof(int));
            //MPI_Recv(solved_sudoku, v_size, MPI_INT, status.MPI_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            //TODO
        }
        it++;
    }
} else {
    int* sudoku = (int*)malloc(sizeof(int)*v_size);
    int* possible_sudokus = (int*)malloc(sizeof(int)*m_size*v_size);
    while(1) {
        // Send task request
        printf("REQUESTING TASK\n");
        int i = 0;
        MPI_Send(&i, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        // Wait for and receive task
        printf("RECEIVING TASK\n");
        MPI_Recv(sudoku, v_size, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("CALCULATING\n");
        int index = 0;
        for(int i = 1; i <= m_size; i++) {
            int is_safe_res = is_safe_first_empty_cell(sudoku, i);
            if(is_safe_res) {
                int* sudoku_cp = (int*)malloc(sizeof(int)*v_size);
                memcpy(sudoku_cp, sudoku, sizeof(int)*v_size);
                insert_to_first_empty_cell(sudoku_cp, i);
                memcpy(&possible_sudokus[index], sudoku_cp, sizeof(int)*v_size);
                index+=v_size;
                free(sudoku_cp);
            }
        }
        printf("SENDING\n");
        MPI_Send(possible_sudokus, index*v_size, MPI_INT, 0, 1, MPI_COMM_WORLD);
        break;
    }
}
I am writing an MPI program in which the first instance works as a master, sending to and receiving results from its workers.
The receive function does something like this:
struct result *check_for_message(void) {
    ...
    static unsigned int message_size;
    static char *buffer;
    static bool started_reception = false;
    static MPI_Request req;
    if (!started_reception) {
        MPI_Irecv(&message_size, 1, MPI_INT, MPI_ANY_SOURCE, SIZE_TAG,
                  MPI_COMM_WORLD, &req);
        started_reception = true;
    } else {
        int flag = 0;
        MPI_Status status;
        MPI_Test(&req, &flag, &status);
        if (flag == 1) {
            started_reception = false;
            buffer = calloc(message_size + 1, sizeof(char));
            DIE_IF_NULL(buffer); // printf + MPI_Finalize + exit
            MPI_Request content_req;
            MPI_Irecv(buffer, MAX_MSG_SIZE, MPI_CHAR, status.MPI_SOURCE, CONTENT_TAG,
                      MPI_COMM_WORLD, &content_req);
            MPI_Wait(&content_req, MPI_STATUS_IGNORE);
            ret = process_request(buffer);
            free(buffer);
        }
    }
    ...
}
The send function does something like this:
MPI_Request size_req;
MPI_Request content_req;
MPI_Isend(&size, 1, MPI_INT, dest, SIZE_TAG, MPI_COMM_WORLD, &size_req);
MPI_Wait(&size_req, MPI_STATUS_IGNORE);
MPI_Isend(buf, size, MPI_CHAR, dest, CONTENT_TAG, MPI_COMM_WORLD, &content_req);
MPI_Wait(&content_req, MPI_STATUS_IGNORE);
I noticed that if I remove the MPI_Wait in the sending function, the execution often blocks, or some kind of signal stops one of the instances (judging from the output, I think it was a SIGSEGV from a free error). When I add the MPI_Wait it always seems to run perfectly. Could it be something related to the order in which the two sends are performed? Aren't they supposed to be in order?
I run the program locally with -n 16 but have also tested with -n 128. The messages that I send are above 50 chars most of the time (90% of them), some being even longer than 300 chars.
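For context, two MPI facts seem relevant here: sends from the same source to the same destination on the same communicator are non-overtaking, i.e. they match receives in the order they were posted; and the buffer handed to MPI_Isend must not be modified or freed until the request completes. A sketch of the send side under those rules (same names as above):

MPI_Request reqs[2];
MPI_Isend(&size, 1, MPI_INT, dest, SIZE_TAG, MPI_COMM_WORLD, &reqs[0]);
MPI_Isend(buf, size, MPI_CHAR, dest, CONTENT_TAG, MPI_COMM_WORLD, &reqs[1]);
/* size and buf must stay untouched until both requests complete */
MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);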
I am using an example code from an MPI book [will give the name shortly].
What it does is the following:
a) It creates two communicators: world = MPI_COMM_WORLD, containing all the processes, and workers, which excludes the random number generator server (the last rank).
b) The server generates random numbers and serves them to the workers on request.
c) Each worker counts, separately, the number of samples falling inside and outside a unit circle inscribed in a unit square.
d) Once a sufficient level of accuracy is reached, the inside and outside counts are Allreduced to compute the value of PI from their ratio.
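In other words, with in samples inside the circle and out samples outside, the estimate is Pi ≈ 4.0*in/(in+out), which is exactly what the worker code below computes after the two Allreduce calls.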
The code compiles well. However, when running with the following command (actually with any value of n)
>mpiexec -n 2 apple.exe 0.0001
I get the following errors:
Fatal error in MPI_Allreduce: Invalid communicator, error stack:
MPI_Allreduce(855): MPI_Allreduce(sbuf=000000000022EDCC, rbuf=000000000022EDDC,
count=1, MPI_INT, MPI_SUM, MPI_COMM_NULL) failed
MPI_Allreduce(780): Null communicator
pi = 0.00000000000000000000
job aborted:
rank: node: exit code[: error message]
0: PC: 1: process 0 exited without calling finalize
1: PC: 123
Edit: (removed: "But when I remove either one of the two MPI_Allreduce() functions, it runs without any runtime errors, albeit with a wrong answer.")
Code:
#include <mpi.h>
#include <mpe.h>
#include <stdio.h>   /* printf, getchar */
#include <stdlib.h>  /* rand, srand */
#include <limits.h>  /* INT_MAX */
#include <math.h>    /* fabs */
#include <time.h>    /* time */
#define CHUNKSIZE 1000
/* message tags */
#define REQUEST 1
#define REPLY 2
int main(int argc, char *argv[])
{
    int iter;
    int in, out, i, iters, max, ix, iy, ranks[1], done, temp;
    double x, y, Pi, error, epsilon;
    int numprocs, myid, server, totalin, totalout, workerid;
    int rands[CHUNKSIZE], request;
    MPI_Comm world, workers;
    MPI_Group world_group, worker_group;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    world = MPI_COMM_WORLD;
    MPI_Comm_size(world, &numprocs);
    MPI_Comm_rank(world, &myid);
    server = numprocs-1;    /* last proc is server */
    if(myid==0) sscanf(argv[1], "%lf", &epsilon);
    MPI_Bcast(&epsilon, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Comm_group(world, &world_group);
    ranks[0] = server;
    MPI_Group_excl(world_group, 1, ranks, &worker_group);
    MPI_Comm_create(world, worker_group, &workers);
    MPI_Group_free(&worker_group);
    if(myid==server)    /* I am the rand server */
    {
        srand(time(NULL));
        do
        {
            MPI_Recv(&request, 1, MPI_INT, MPI_ANY_SOURCE, REQUEST, world, &status);
            if(request)
            {
                for(i=0; i<CHUNKSIZE;)
                {
                    rands[i] = rand();
                    if(rands[i]<=INT_MAX) ++i;
                }
                MPI_Send(rands, CHUNKSIZE, MPI_INT, status.MPI_SOURCE, REPLY, world);
            }
        }
        while(request>0);
    }
    else    /* I am a worker process */
    {
        request = 1;
        done = in = out = 0;
        max = INT_MAX;    /* max int, for normalization */
        MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
        MPI_Comm_rank(workers, &workerid);
        iter = 0;
        while(!done)
        {
            ++iter;
            request = 1;
            MPI_Recv(rands, CHUNKSIZE, MPI_INT, server, REPLY, world, &status);
            for(i=0; i<CHUNKSIZE;)
            {
                x = (((double) rands[i++])/max)*2-1;
                y = (((double) rands[i++])/max)*2-1;
                if(x*x+y*y<1.0) ++in;
                else ++out;
            }
            /* ** see error here ** */
            MPI_Allreduce(&in, &totalin, 1, MPI_INT, MPI_SUM, workers);
            MPI_Allreduce(&out, &totalout, 1, MPI_INT, MPI_SUM, workers);
            /* only one of the above two MPI_Allreduce() functions working */
            Pi = (4.0*totalin)/(totalin+totalout);
            error = fabs(Pi-3.141592653589793238462643);
            done = (error<epsilon || (totalin+totalout)>1000000);
            request = (done) ? 0 : 1;
            if(myid==0)
            {
                printf("\rpi = %23.20f", Pi);
                MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
            }
            else
            {
                if(request)
                    MPI_Send(&request, 1, MPI_INT, server, REQUEST, world);
            }
            MPI_Comm_free(&workers);
        }
    }
    if(myid==0)
    {
        printf("\npoints: %d\nin: %d, out: %d, <ret> to exit\n", totalin+totalout, totalin, totalout);
        getchar();
    }
    MPI_Finalize();
}
What is the error here? Am I missing something? Any help or pointers will be highly appreciated.
You are freeing the workers communicator before you are done using it. Move the MPI_Comm_free(&workers) call after the while(!done) { ... } loop.
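In other words, the end of the worker branch becomes (a sketch, eliding the loop body):

while(!done)
{
    /* ... receive rands, count in/out, Allreduce over workers ... */
}
MPI_Comm_free(&workers);   /* moved here: free only after the last
                              MPI_Allreduce on this communicator */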
I've got many slave nodes which might or might not send messages to the master node, so currently there's no way for the master node to know how many MPI_Recv calls to expect. The slave nodes have to send the minimum number of messages to the master node for efficiency reasons.
I managed to find a cool trick, which sends an additional "done" message once no more messages are expected. Unfortunately, it doesn't seem to work in my case, where the number of senders is variable. Any idea on how to go about this? Thanks!
if(rank == 0){ //MASTER NODE
    while (1) {
        MPI_Recv(&buffer, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == DONE) break;
        /* Do stuff */
    }
}else{ //MANY SLAVE NODES
    if(some conditions){
        MPI_Send(&buffer, 64, MPI_INT, root, 1, MPI_COMM_WORLD);
    }
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Send(NULL, 1, MPI_INT, root, DONE, MPI_COMM_WORLD);
This is not working; the program still seems to be waiting on an MPI_Recv.
A simpler and more elegant option would be to use the MPI_IBARRIER. Have each worker call all of the sends that it needs to and then call MPI_IBARRIER when it's done. On the master, you can loop on both an MPI_IRECV on MPI_ANY_SOURCE and an MPI_IBARRIER. When the MPI_IBARRIER is done, you know that everyone has finished and you can cancel the MPI_IRECV and move on. The pseudocode would look something like this:
if (master) {
    /* Start the barrier. Each process will join when it's done. */
    MPI_Ibarrier(MPI_COMM_WORLD, &requests[0]);
    do {
        /* Do the work */
        MPI_Irecv(..., MPI_ANY_SOURCE, &requests[1]);
        /* If the index that finished is 1, we received a message.
         * Otherwise, we finished the barrier and we're done. */
        MPI_Waitany(2, requests, &index, MPI_STATUSES_IGNORE);
    } while (index == 1);
    /* If we're done, we should cancel the receive request and move on. */
    MPI_Cancel(&requests[1]);
} else {
    /* Keep sending work back to the master until we're done. */
    while( ...work is to be done... ) {
        MPI_Send(...);
    }
    /* When we finish, join the Ibarrier. Note that
     * you can't use an MPI_Barrier here because it
     * has to match with the MPI_Ibarrier above. */
    MPI_Ibarrier(MPI_COMM_WORLD, &request);
    MPI_Wait(&request, MPI_STATUS_IGNORE);
}
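One small addition to this sketch: MPI_Cancel only requests cancellation, so the cancelled receive should still be completed (with MPI_Wait, MPI_Test or MPI_Request_free) before moving on:

MPI_Cancel(&requests[1]);
MPI_Wait(&requests[1], MPI_STATUS_IGNORE);   /* completes the cancelled receive */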
1- You called MPI_Barrier in the wrong place; it should be called after the MPI_Send.
2- The root will exit its loop when it has received DONE from all other ranks (size - 1).
The code after some modifications:
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char** argv)
{
    MPI_Init(NULL, NULL);
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Status status;
    int DONE = 888;
    int buffer = 77;
    int root = 0;
    printf("here is rank %d with size=%d\n", rank, size); fflush(stdout);
    int num_of_DONE = 0;
    if(rank == 0){ //MASTER NODE
        while (1) {
            MPI_Recv(&buffer, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            printf("root recev %d from %d with tag = %d\n", buffer, status.MPI_SOURCE, status.MPI_TAG); fflush(stdout);
            if (status.MPI_TAG == DONE)
                num_of_DONE++;
            printf("num_of_DONE=%d\n", num_of_DONE); fflush(stdout);
            if(num_of_DONE == size - 1)
                break;
            /* Do stuff */
        }
    }else{ //MANY SLAVE NODES
        if(1){
            buffer = 66;
            MPI_Send(&buffer, 1, MPI_INT, root, 1, MPI_COMM_WORLD);
            printf("rank %d sent data.\n", rank); fflush(stdout);
        }
    }
    if(rank != 0)
    {
        buffer = 55;
        MPI_Send(&buffer, 1, MPI_INT, root, DONE, MPI_COMM_WORLD);
    }
    MPI_Barrier(MPI_COMM_WORLD);
    printf("rank %d done.\n", rank); fflush(stdout);
    MPI_Finalize();
    return 0;
}
output:
hosam#hosamPPc:~/Desktop$ mpicc -o aa aa.c
hosam#hosamPPc:~/Desktop$ mpirun -n 3 ./aa
here is rank 2 with size=3
here is rank 0 with size=3
rank 2 sent data.
here is rank 1 with size=3
rank 1 sent data.
root recev 66 from 1 with tag = 1
num_of_DONE=0
root recev 66 from 2 with tag = 1
num_of_DONE=0
root recev 55 from 2 with tag = 888
num_of_DONE=1
root recev 55 from 1 with tag = 888
num_of_DONE=2
rank 0 done.
rank 1 done.
rank 2 done.
I have developed a given simple MPI program in which process 0 sends a message to process 1 and receives a message from process p-1. The code follows.
In the skeleton given to me,
char *message;
message = (char*)malloc(msg_size);
is confusing me. To check the correctness of the program, I am trying to look at the value of the message that is sent or received. So should it be a hexadecimal value?
int main(int argc, char **argv)
{
    double startwtime, endwtime;
    float elapsed_time, bandwidth;
    int my_id, next_id;    /* process id-s */
    int p;                 /* number of processes */
    char* message;         /* storage for the message */
    int i, k, max_msgs, msg_size, v;
    MPI_Status status;     /* return status for receive */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    if (argc < 3)
    {
        fprintf(stderr, "need msg count and msg size as params\n");
        goto EXIT;
    }
    if ((sscanf(argv[1], "%d", &max_msgs) < 1) ||
        (sscanf(argv[2], "%d", &msg_size) < 1))
    {
        fprintf(stderr, "need msg count and msg size as params\n");
        goto EXIT;
    }
    message = (char*)malloc(msg_size);
    if (argc > 3) v = 1; else v = 0;    /* are we in verbose mode */
    /* don't start timer until everybody is ok */
    MPI_Barrier(MPI_COMM_WORLD);
    int t = 0;
    if (my_id == 0) {
        startwtime = MPI_Wtime();
        // do max_msgs times:
        //   send message of size msg_size chars to process 1
        //   receive message of size msg_size chars from process p-1
        while (t < max_msgs) {
            MPI_Send((char *) message, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv((char *) message, msg_size, MPI_CHAR, p-1, 0, MPI_COMM_WORLD, &status);
            t++;
        }
        MPI_Barrier(MPI_COMM_WORLD);
        endwtime = MPI_Wtime();
        elapsed_time = endwtime - startwtime;
        bandwidth = 2.0 * max_msgs * msg_size / (elapsed_time);
        printf("Number, size of messages: %3d , %3d \n", max_msgs, msg_size);
        fflush(stdout);
        printf("Wallclock time = %f seconds\n", elapsed_time);
        fflush(stdout);
        printf("Bandwidth = %f bytes per second\n", bandwidth);
        fflush(stdout);
    } else if (my_id == p-1) {
        // do max_msgs times:
        //   receive message of size msg_size from process to the left
        //   send message of size msg_size to process to the right (p-1 sends to 0)
        while (t < max_msgs) {
            MPI_Send((char *) message, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            MPI_Recv((char *) message, msg_size, MPI_CHAR, my_id-1, 0, MPI_COMM_WORLD, &status);
            t++;
        }
    } else {
        while (t < max_msgs) {
            MPI_Send((char *) message, msg_size, MPI_CHAR, my_id+1, 0, MPI_COMM_WORLD);
            MPI_Recv((char *) message, msg_size, MPI_CHAR, my_id-1, 0, MPI_COMM_WORLD, &status);
            t++;
        }
    }
    MPI_Barrier(MPI_COMM_WORLD);
EXIT:
    MPI_Finalize();
    return 0;
}
I am not completely sure if this is what you mean, but I will try.
From what I understand, you want to know what the message being sent is. Well, in the code you provide, memory is assigned to the message, but no real "readable" message is specified, in this line:
message = (char*)malloc(msg_size);
malloc reserves memory for the message so that anyone can write to it; however, it doesn't provide any initial value. Sometimes the memory contains other information that was previously stored and freed, so the message being sent is whatever "garbage" was left there. This is probably what you are seeing as hexadecimal (I hope I understand this right).
The type of value in this case is char (specified as MPI_CHAR in the MPI_Send and MPI_Recv functions). Here you can find more data types for MPI.
I would suggest assigning a value to the message using my_id and next_id, so you know who is sending to whom.
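For instance (a sketch; the exact text is up to you), filling the buffer right after the malloc makes each message human-readable:

/* assumes msg_size is large enough; snprintf truncates safely otherwise */
snprintf(message, msg_size, "from rank %d", my_id);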
I've been having a bug in my code for some time and have not yet figured out how to solve it.
What I'm trying to achieve is easy enough: every worker node (i.e. node with rank != 0) gets a row (represented by a 1-dimensional array) in a square structure that involves some computation. Once the computation is done, this row gets sent back to the master.
For testing purposes, there is no computation involved. All that's happening is:
master sends row number to worker, worker uses the row number to calculate the according values
worker sends the array with the result values back
Now, my issue is this:
everything works as expected up to a certain number of elements per row (size = 1006), with more than 1 worker
if the elements in a row exceed 1006, workers fail to shut down and the program does not terminate
this only occurs if I try to send the array back to the master; if I simply send back an INT, then everything is OK (see the commented-out lines in doMasterTasks() and doWorkerTasks())
Based on the last bullet point, I assume that there must be some race condition which only surfaces when the array being sent back to the master reaches a certain size.
Do you have any idea what the issue could be?
Compile the following code with: mpicc -O2 -std=c99 -o simple
Run the executable like so: mpirun -np 3 simple <size> (e.g. 1006 or 1007)
Here's the code:
#include "mpi.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#define MASTER_RANK 0
#define TAG_RESULT 1
#define TAG_ROW 2
#define TAG_FINISHOFF 3
int mpi_call_result, my_rank, dimension, np;
// forward declarations
void doInitWork(int argc, char **argv);
void doMasterTasks(int argc, char **argv);
void doWorkerTasks(void);
void finalize();
void quit(const char *msg, int mpi_call_result);
void shutdownWorkers() {
printf("All work has been done, shutting down clients now.\n");
for (int i = 0; i < np; i++) {
MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
}
}
void doMasterTasks(int argc, char **argv) {
printf("Starting to distribute work...\n");
int size = dimension;
int * dataBuffer = (int *) malloc(sizeof(int) * size);
int currentRow = 0;
int receivedRow = -1;
int rowsLeft = dimension;
MPI_Status status;
for (int i = 1; i < np; i++) {
MPI_Send(¤tRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
for (;;) {
// MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
MPI_Recv(&receivedRow, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (rowsLeft == 0)
break;
if (currentRow > 1004)
printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
MPI_Send(¤tRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
rowsLeft--;
currentRow++;
}
shutdownWorkers();
free(dataBuffer);
}
void doWorkerTasks() {
printf("Worker %d started\n", my_rank);
// send the processed row back as the first element in the colours array.
int size = dimension;
int * data = (int *) malloc(sizeof(int) * size);
memset(data, 0, sizeof(size));
int processingRow = -1;
MPI_Status status;
for (;;) {
MPI_Recv(&processingRow, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
if (status.MPI_TAG == TAG_FINISHOFF) {
printf("Finish-OFF tag received!\n");
break;
} else {
// MPI_Send(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
MPI_Send(&processingRow, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
}
}
printf("Slave %d finished work\n", my_rank);
free(data);
}
int main(int argc, char **argv) {
if (argc == 2) {
sscanf(argv[1], "%d", &dimension);
} else {
dimension = 1000;
}
doInitWork(argc, argv);
if (my_rank == MASTER_RANK) {
doMasterTasks(argc, argv);
} else {
doWorkerTasks();
}
finalize();
}
void quit(const char *msg, int mpi_call_result) {
printf("\n%s\n", msg);
MPI_Abort(MPI_COMM_WORLD, mpi_call_result);
exit(mpi_call_result);
}
void finalize() {
mpi_call_result = MPI_Finalize();
if (mpi_call_result != 0) {
quit("Finalizing the MPI system failed, aborting now...", mpi_call_result);
}
}
void doInitWork(int argc, char **argv) {
mpi_call_result = MPI_Init(&argc, &argv);
if (mpi_call_result != 0) {
quit("Error while initializing the system. Aborting now...\n", mpi_call_result);
}
MPI_Comm_size(MPI_COMM_WORLD, &np);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
}
Any help is greatly appreciated!
Best,
Chris
If you take a look at your doWorkerTasks, you see that the workers send exactly as many data messages as they receive (and they receive one more, to shut them down).
But your master code:
for (int i = 1; i < np; i++) {
    MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
    rowsLeft--;
    currentRow++;
}

for (;;) {
    MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
    if (rowsLeft == 0)
        break;
    MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
    rowsLeft--;
    currentRow++;
}
sends np-2 more data messages than it receives. In particular, it only keeps receiving data until it has no more to send, even though there should be np-2 more data messages outstanding. Changing the code to the following:
int rowsLeftToSend = dimension;
int rowsLeftToReceive = dimension;

for (int i = 1; i < np; i++) {
    MPI_Send(&currentRow, 1, MPI_INT, i, TAG_ROW, MPI_COMM_WORLD);
    rowsLeftToSend--;
    currentRow++;
}

while (rowsLeftToReceive > 0) {
    MPI_Recv(dataBuffer, size, MPI_INT, MPI_ANY_SOURCE, TAG_RESULT, MPI_COMM_WORLD, &status);
    rowsLeftToReceive--;
    if (rowsLeftToSend > 0) {
        if (currentRow > 1004)
            printf("Sending row %d to worker %d\n", currentRow, status.MPI_SOURCE);
        MPI_Send(&currentRow, 1, MPI_INT, status.MPI_SOURCE, TAG_ROW, MPI_COMM_WORLD);
        rowsLeftToSend--;
        currentRow++;
    }
}
Now works.
Why the code doesn't deadlock (note this is deadlock, not a race condition; this is a more common parallel error in distributed computing) for smaller message sizes is a subtle detail of how most MPI implementations work. Generally, MPI implementations just "shove" small messages down the pipe whether or not the receiver is ready for them, but larger messages (since they take more storage resources on the receiving end) need some handshaking between the sender and the receiver. (If you want to find out more, search for eager vs rendezvous protocols).
So for the small message case (less than 1006 ints in this case, and 1 int definitely works, too) the worker nodes did their send whether or not the master was receiving them. If the master had called MPI_Recv(), the messages would have been there already and it would have returned immediately. But it didn't, so there were pending messages on the master side; but it didn't matter. The master sent out its kill messages, and everyone exited.
But for larger messages, the remaining sends need the receiver to participate in order to complete, and since the receiver never does, the remaining workers hang.
Note that even for the small message case where there was no deadlock, the code didn't work properly - there was missing computed data.
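A practical way to confirm this kind of deadlock, given the eager/rendezvous behavior described above, is to temporarily swap the workers' MPI_Send for MPI_Ssend, a synchronous send that only completes once the matching receive has started. If the program then hangs at every message size, the missing receives are confirmed:

/* drop-in MPI_Send replacement for debugging: never completes eagerly,
 * it waits for the matching receive to start */
MPI_Ssend(data, size, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);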
Update: There was a similar problem in your shutdownWorkers:
void shutdownWorkers() {
    printf("All work has been done, shutting down clients now.\n");
    for (int i = 0; i < np; i++) {
        MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
    }
}
Here you are sending to all processes, including rank 0, the one doing the sending. In principle that MPI_Send should deadlock, as it is a blocking send and there isn't a matching receive already posted. You could post a non-blocking receive beforehand to avoid this, but that's unnecessary -- rank 0 doesn't need to let itself know to end. So just change the loop to
for (int i = 1; i < np; i++)
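so that, in context, the corrected function reads:

void shutdownWorkers() {
    printf("All work has been done, shutting down clients now.\n");
    for (int i = 1; i < np; i++) {   /* skip rank 0, the master itself */
        MPI_Send(0, 0, MPI_INT, i, TAG_FINISHOFF, MPI_COMM_WORLD);
    }
}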
tl;dr - your code deadlocked because the master wasn't receiving enough messages from the workers; it happened to work for small message sizes because of an implementation detail common to most MPI libraries.