Using collective communication in MPI to receive results - C

I am receiving results from multiple processes in a for-loop; the results are different parts of a vector.
for (int d = 1; d <= num; d++)
{
    MPI_Recv(&result, 1, MPI_DOUBLE, d, mtype, MPI_COMM_WORLD, &status);
}
So ideally I would create a vector, something like int v[num], and put all the values returned by the Recv calls into it. But with MPI_Recv this has to happen in a for-loop. Is there a way to use a collective communication routine such as MPI_Reduce to do this and avoid the for-loop?
For the root process with rank == 0 I added these lines:
double *c = malloc(N * sizeof(double));
MPI_Gather(NULL, 0, MPI_DOUBLE,
           c, total_rows + 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
For all other processes the following line is executed:
MPI_Gather(result, total_rows, MPI_DOUBLE,
           NULL, total_rows + 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
where result is an array of total_rows doubles.
But something is wrong with this: my code is not working properly. Each of the worker processes stores a part of the vector, and the root process needs to assemble the complete vector.
Any suggestions on the usage of MPI_Gather?
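One likely culprit: in MPI_Gather every rank, the root included, must contribute exactly recvcount elements, so a root sending 0 elements while the recvcount is total_rows + 1 cannot match. When contributions differ per rank, MPI_Gatherv is the collective that expresses this. A minimal sketch, assuming the root contributes nothing and each other rank sends total_rows doubles (counts, displs, and num_procs are illustrative names, not from the original code):

int num_procs, rank;
MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

// per-rank contribution: 0 for the root, total_rows for everyone else
int *counts = malloc(num_procs * sizeof(int));
int *displs = malloc(num_procs * sizeof(int));
for (int i = 0; i < num_procs; i++) {
    counts[i] = (i == 0) ? 0 : total_rows;
    displs[i] = (i == 0) ? 0 : (i - 1) * total_rows;
}

// only the root needs the receive buffer
double *c = (rank == 0)
          ? malloc((num_procs - 1) * total_rows * sizeof(double))
          : NULL;

// result is the per-rank piece from the question; the root sends nothing
MPI_Gatherv((rank == 0) ? NULL : result, counts[rank], MPI_DOUBLE,
            c, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);

If every rank, root included, can contribute the same total_rows elements, plain MPI_Gather with sendcount equal to recvcount (both total_rows) is simpler.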

Related

MPI dynamic communication between 2 Processes

I have a process that manages a list of 2D arrays and passes a different 2D array to each process. There may not be enough processes for all the 2D arrays, so after a non-zero process finishes working with its first received array, it needs to ask process 0, which manages the list, whether any arrays are left. I don't know how to implement this.
(...)
if (rank == 0)
{
    // check if we have elements of the 2d array list left
    while (ptr != NULL)
    {
        MPI_Status status;
        int sig;
        // wait for a process to ask for a 2d array
        MPI_Recv(&sig, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        // send it to them
        MPI_Send(&ptr->grid.sudoku, N * N, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
        ptr = ptr->next;
    }
    // free memory
    delete_grids(list);
    elementsAvailable = 0;
}
// rank != 0
else
{
    lookForSolution(recvGrid(), rank); // recvGrid calls MPI_Recv and passes
                                       // the given array to a function to calculate something
}
MPI_Bcast(&elementsAvailable, 1, MPI_INT, 0, MPI_COMM_WORLD); // Here I thought I would make an MPI_Bcast
// to tell the other processes that process 0 has no arrays left, but if I put it here
// the next if statement will never be reached in the first place
if (rank != 0 && elementsAvailable == 1)
{
    // tell process 0 we are ready for a new 2d array
    int sig = 1;
    MPI_Send(&sig, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    lookForSolution(recvGrid(), rank);
}
(...)
The simplest solution is to let process zero send data to another process and then post an MPI_Irecv for the result. While all the processes have work, the manager does an MPI_Waitany to see if any process is returning results; if so, it accepts the result and sends new work. When the manager runs out of work, it sends a special message to the workers, and they quit.
From here on it's an exercise for the reader.
If you also want the manager to participate in the work, it becomes a little more complicated; you could then solve the whole thing with one-sided communication, creating in effect a shared object holding all the work.
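A minimal sketch of that manager loop, assuming one double of result per task; the helpers send_work, work_remaining, and accept_result, and the tags WORK_TAG and STOP_TAG, are hypothetical names invented for this sketch:

enum { WORK_TAG = 1, STOP_TAG = 2 };   // made-up tag values
MPI_Request reqs[num_workers];         // one outstanding receive per worker
double results[num_workers];

// hand every worker an initial task and post a receive for its result
for (int w = 0; w < num_workers; w++) {
    send_work(w + 1);                                   // hypothetical helper
    MPI_Irecv(&results[w], 1, MPI_DOUBLE, w + 1, WORK_TAG,
              MPI_COMM_WORLD, &reqs[w]);
}

// while work remains: accept whichever result arrives first, send new work
while (work_remaining()) {                              // hypothetical helper
    int idx;
    MPI_Waitany(num_workers, reqs, &idx, MPI_STATUS_IGNORE);
    accept_result(results[idx]);                        // hypothetical helper
    send_work(idx + 1);
    MPI_Irecv(&results[idx], 1, MPI_DOUBLE, idx + 1, WORK_TAG,
              MPI_COMM_WORLD, &reqs[idx]);
}

// collect the last outstanding results, then tell the workers to quit
MPI_Waitall(num_workers, reqs, MPI_STATUSES_IGNORE);
for (int w = 0; w < num_workers; w++)
    accept_result(results[w]);
for (int w = 1; w <= num_workers; w++)
    MPI_Send(NULL, 0, MPI_INT, w, STOP_TAG, MPI_COMM_WORLD);

Workers would then distinguish a STOP_TAG message from new work via the status of their receive.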

What is the use of status of MPI_Isend obtained using MPI_Wait?

Case 1: What is the use of the status obtained using MPI_Wait()?
if (rank == 0)
    MPI_Isend(&buffer0, count, MPI_INT, 1, 0, MPI_COMM_WORLD, &request0);
if (rank == 1)
    MPI_Recv(&buffer1, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
if (rank == 0)
    MPI_Wait(&request0, &status);
// Can I use status here to do something?
MPI_Finalize();
Case 2: The use of status is clear here (just added for comparison).
if (rank == 0)
    MPI_Ssend(&buffer0, count, MPI_INT, 1, 0, MPI_COMM_WORLD);
if (rank == 1)
{
    MPI_Irecv(&buffer1, count, MPI_INT, 0, 0, MPI_COMM_WORLD, &request1);
    MPI_Wait(&request1, &status);
    printf("The source is %d\n", status.MPI_SOURCE);
}
MPI_Finalize();
Generally, MPI_Status is used to get the following properties of a received message:
- the rank of the sender (status.MPI_SOURCE), particularly relevant when MPI_ANY_SOURCE was used;
- the tag of the message (status.MPI_TAG), particularly relevant when MPI_ANY_TAG was used;
- the element count that was actually sent, which may differ from the size of the posted receive buffer; it is retrieved with MPI_Get_count.
For send operations, you can pass the status to MPI_Test_cancelled. Further, for functions that return multiple statuses, such as MPI_Waitall, you can inspect status[i].MPI_ERROR in the case of errors; the wait function itself returns MPI_ERR_IN_STATUS in that case.
If you do not need any of those, you may pass MPI_STATUS_IGNORE instead of an MPI_Status*.
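A small example of inspecting the status after a wildcard receive (the buffer size of 100 and MPI_INT are arbitrary choices for illustration):

MPI_Status status;
int buf[100], count;
MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_INT, &count);  // actual number of ints received
printf("got %d ints from rank %d with tag %d\n",
       count, status.MPI_SOURCE, status.MPI_TAG);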

MPI hang on persistent calls

I am trying to implement some form of persistent communication. Somehow the following code keeps hanging - I guess I must have introduced a deadlock but can't really wrap my head around it...
MPI_Request r[4];
[...]
MPI_Send_init(&Arr[1][1], 1, MPI_DOUBLE, 1, A, MPI_COMM_WORLD, &r[0]);
MPI_Recv_init(&Arr[1][0], 1, MPI_DOUBLE, 0, A, MPI_COMM_WORLD, &r[1]);
MPI_Send_init(&Arr[2][1], 1, MPI_DOUBLE, 0, B, MPI_COMM_WORLD, &r[2]);
MPI_Recv_init(&Arr[2][0], 1, MPI_DOUBLE, 1, B, MPI_COMM_WORLD, &r[3]);
[...]
MPI_Startall(4, r);
MPI_Waitall(4, r, MPI_STATUSES_IGNORE);
I think this is perfect material for a deadlock - what would be the remedy here if I want to initialize these sends/receives and just start them later, all at once, with Startall and Waitall?
EDIT: So if I do
MPI_Start(&r[0]);
MPI_Wait(&r[0], &status);
then it does not hang. Invoking something like:
for (int k = 0; k < 1; k++)
{
    MPI_Start(&r[k]);
    MPI_Wait(&r[k], &status);
}
fails and hangs, if that helps.
Your tags do not match. For example, rank 0 receives from itself with tag A, but it sends to itself with tag B.
I have to admit I'm not familiar with the concept of MPI requests and MPI_Send_init/MPI_Recv_init. However, I could reproduce the deadlock with simple sends and receives. This is the code (it has a deadlock):
double someVal = 3.5;
const int MY_FIRST_TAG = 42;
MPI_Send(&someVal, 1, MPI_DOUBLE, 1, MY_FIRST_TAG, MPI_COMM_WORLD);
MPI_Recv(&someVal, 1, MPI_DOUBLE, 0, MY_FIRST_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Even if you run it with only two processes, the problem is the following: both process 0 and process 1 send a message to process 1. Then both processes want to receive a message from process 0. Process 1 can, because process 0 actually sent a message to process 1, but nobody sent a message to process 0. Consequently, that process will wait there forever.
How to fix: You need to specify that only process 0 sends to process 1 and only process 1 is supposed to receive from process 0. You can simply do it with:
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
    MPI_Send(&someVal, 1, MPI_DOUBLE, 1, MY_FIRST_TAG, MPI_COMM_WORLD);
else // assumption: only two processes
    MPI_Recv(&someVal, 1, MPI_DOUBLE, 0, MY_FIRST_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
I'm not 100% sure how to translate this to the concept of requests and MPI_Send_init/MPI_Recv_init, but maybe this helps you nonetheless.
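Building on that and on the tag observation above, a hedged sketch of the same rank-dependent pairing with persistent requests, assuming exactly two ranks and the Arr, A, and B names from the question:

MPI_Request r[2];
if (rank == 0)
{
    // rank 0 sends with tag A and receives with tag B
    MPI_Send_init(&Arr[1][1], 1, MPI_DOUBLE, 1, A, MPI_COMM_WORLD, &r[0]);
    MPI_Recv_init(&Arr[2][0], 1, MPI_DOUBLE, 1, B, MPI_COMM_WORLD, &r[1]);
}
else
{
    // rank 1 mirrors that: receive tag A, send tag B
    MPI_Recv_init(&Arr[1][0], 1, MPI_DOUBLE, 0, A, MPI_COMM_WORLD, &r[0]);
    MPI_Send_init(&Arr[2][1], 1, MPI_DOUBLE, 0, B, MPI_COMM_WORLD, &r[1]);
}
MPI_Startall(2, r);
MPI_Waitall(2, r, MPI_STATUSES_IGNORE);

Here every send has a matching receive on the other rank with the same tag, which is what the original symmetric code lacked.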

Looking for an MPI routine

So just today I started messing around with the MPI library in C, I've tried it out some, and I have now found myself in a situation where I need the following:
A routine that will send a message to one random process sitting in a blocking receive, while leaving the others blocked.
Does such a routine exist? If not, how can something like this be accomplished?
No, such a routine does not exist. However, you can easily build one using the routines available in the MPI standard. For example, if you want a routine that sends to a random process other than the current one, you can write the following:
int MPI_SendRand(void *data, unsigned size, int tag, MPI_Comm comm) {
    // one process sends to a randomly chosen other process
    int comm_size, my_rank, dst;
    MPI_Comm_rank(comm, &my_rank);
    MPI_Comm_size(comm, &comm_size);
    // pick a random rank in [0, comm_size), excluding my_rank
    do {
        dst = rand() % comm_size;
    } while (dst == my_rank);
    // note: MPI_Send takes no status argument; the data is sent as raw bytes
    return MPI_Send(data, size, MPI_BYTE, dst, tag, comm);
}
It can be used as follows:
if (rank == master) {
    MPI_SendRand(some_data, some_size, 0, MPI_COMM_WORLD);
} else {
    // the rest wait
    MPI_Recv(some_buff, some_size, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // do work...
}

Can I write from different MPI_Irecv into the same buffer/array at different index positions?

MPI_Request request[3];
MPI_Irecv(&myArr[0],  5, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &request[0]);
MPI_Irecv(&myArr[5],  5, MPI_INT, 2, MPI_ANY_TAG, MPI_COMM_WORLD, &request[1]);
MPI_Irecv(&myArr[10], 5, MPI_INT, 3, MPI_ANY_TAG, MPI_COMM_WORLD, &request[2]);
Does C/MPI allow you to write into different areas of the same array from non-blocking receives? The above code shows roughly what I would like to achieve.
Yes. You aren't allowed to read or modify the buffer of a non-blocking communication request until the communication is done; but as far as MPI is concerned, non-overlapping regions of the same array are completely different buffers.
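For completeness, a quick sketch of finishing those requests before reading the buffer, assuming they were stored in request[0..2] as above:

MPI_Waitall(3, request, MPI_STATUSES_IGNORE);
// all three transfers are complete; myArr[0..14] is now safe to read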
