MPI hang on persistent calls - C

I am trying to implement some form of persistent communication. Somehow the following code keeps hanging - I guess I must have introduced a deadlock, but I can't really wrap my head around it...
MPI_Request r[4];
[...]
MPI_Send_init(&Arr[1][1], 1, MPI_DOUBLE, 1, A, MPI_COMM_WORLD, &r[0]);
MPI_Recv_init(&Arr[1][0], 1, MPI_DOUBLE, 0, A, MPI_COMM_WORLD, &r[1]);
MPI_Send_init(&Arr[2][1], 1, MPI_DOUBLE, 0, B, MPI_COMM_WORLD, &r[2]);
MPI_Recv_init(&Arr[2][0], 1, MPI_DOUBLE, 1, B, MPI_COMM_WORLD, &r[3]);
[...]
MPI_Startall(4, r);
MPI_Waitall(4, r, MPI_STATUSES_IGNORE);
I think this is perfect material for a deadlock - what would be the remedy here if I want to init these send/receive messages and just start them all later with Startall and Waitall?
EDIT: So if I do
MPI_Start(&r[0]);
MPI_Wait(&r[0], &status);
Then it does not hang. Invoking something like:
for (int k=0; k<4; k++)
{
    MPI_Start(&r[k]);
    MPI_Wait(&r[k], &status);
}
fails and hangs, if that helps.

Your tags do not match.
For example, rank 0 receives from itself with tag A,
but it sends to itself with tag B.
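A minimal sketch of one way to fix it, assuming exactly two ranks, that Arr is allocated as in the question, and that A and B are the tag constants already defined there: let tag A always mark the message going from rank 0 to rank 1 and tag B the message going back, and derive the peer from the rank so the same code runs on both processes.
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
int peer = 1 - rank;   /* the other of the two ranks */

MPI_Request r[2];
/* tag A: rank 0 -> rank 1, tag B: rank 1 -> rank 0, so every send has a matching receive */
MPI_Send_init(&Arr[1][1], 1, MPI_DOUBLE, peer, rank == 0 ? A : B, MPI_COMM_WORLD, &r[0]);
MPI_Recv_init(&Arr[1][0], 1, MPI_DOUBLE, peer, rank == 0 ? B : A, MPI_COMM_WORLD, &r[1]);

MPI_Startall(2, r);
MPI_Waitall(2, r, MPI_STATUSES_IGNORE);

MPI_Request_free(&r[0]);
MPI_Request_free(&r[1]);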

I have to admit I'm not familiar with the concept of MPI requests and MPI_Send_init/MPI_Recv_init. However, I could reproduce the deadlock with simple sends and receives. This is the code (it has a deadlock):
double someVal = 3.5;
const int MY_FIRST_TAG = 42;
MPI_Send(&someVal, 1, MPI_DOUBLE, 1, MY_FIRST_TAG, MPI_COMM_WORLD);
MPI_Recv(&someVal, 1, MPI_DOUBLE, 0, MY_FIRST_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
Even if you run it with only two processes, the problem is the following: both process 0 and process 1 send a message to process 1. Then both processes want to receive a message from process 0. Process 1 can, because process 0 actually sent a message to process 1, but nobody sent a message to process 0. Consequently, that process will wait there forever.
How to fix: You need to specify that only process 0 sends to process 1 and only process 1 is supposed to receive from process 0. You can do it simply with:
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
MPI_Send(&someVal, 1, MPI_DOUBLE, 1, MY_FIRST_TAG, MPI_COMM_WORLD);
else // Assumption: Only two processes
MPI_Recv(&someVal, 1, MPI_DOUBLE, 0, MY_FIRST_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
I'm not 100% sure how to translate this to the concept of requests and MPI_Send/Recv_init but maybe this helps you nonetheless.
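For what it's worth, here is a rough sketch of that same if/else fix expressed with persistent requests, reusing someVal and MY_FIRST_TAG from the snippet above and again assuming only two processes; each rank creates only the request that applies to it, so every send has exactly one matching receive.
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

MPI_Request r;
if (rank == 0)   /* only rank 0 sends */
    MPI_Send_init(&someVal, 1, MPI_DOUBLE, 1, MY_FIRST_TAG, MPI_COMM_WORLD, &r);
else             /* assumption: only two processes, so rank 1 receives */
    MPI_Recv_init(&someVal, 1, MPI_DOUBLE, 0, MY_FIRST_TAG, MPI_COMM_WORLD, &r);

MPI_Start(&r);                     /* can be repeated for every new transfer */
MPI_Wait(&r, MPI_STATUS_IGNORE);
MPI_Request_free(&r);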

Related

Using collective communication in MPI to receive results

I am receiving results from multiple processes in a for-loop. The results are various parts of a vector.
for (int d = 1; d <= num; d++)
{
    MPI_Recv(&result, 1, MPI_DOUBLE, d, mtype, MPI_COMM_WORLD, &status);
}
So ideally I would create a vector, something like int v[num], and put all the values returned by the Recv calls into it. But with MPI_Recv this has to be done in a for-loop. Is there a way to use a collective communication call such as MPI_Reduce to do this and avoid the for-loop?
For the root process with rank 0 I added this call:
double *c = malloc(N * sizeof(double));
MPI_Gather(NULL, 0, MPI_DOUBLE,
c, total_rows + 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
For all other processes the following line is executed:
MPI_Gather(result, total_rows, MPI_DOUBLE,
NULL, total_rows + 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
where result is a double array with total_rows elements.
But there is something wrong with this, as my code is not working properly. Each of the worker processes stores a part of the vector, and the root process needs to assemble the complete vector.
Any suggestions on the usage of MPI_Gather?
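One possible direction, sketched under the setup described above (ranks 1..size-1 each hold total_rows elements of the vector, rank 0 holds none and only collects): MPI_Gatherv lets every rank contribute a different count, so the root can contribute zero elements. result and total_rows are taken from the question; everything else is illustrative.
int size, rank;
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

if (rank == 0) {
    /* receive counts and offsets: rank 0 contributes nothing,
       ranks 1..size-1 contribute total_rows elements each */
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    counts[0] = 0;
    displs[0] = 0;
    for (int r = 1; r < size; r++) {
        counts[r] = total_rows;
        displs[r] = (r - 1) * total_rows;
    }
    double *c = malloc((size - 1) * total_rows * sizeof(double));  /* the complete vector */
    MPI_Gatherv(NULL, 0, MPI_DOUBLE,
                c, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);
} else {
    /* each worker sends its total_rows-element piece; the receive arguments
       are ignored on non-root ranks */
    MPI_Gatherv(result, total_rows, MPI_DOUBLE,
                NULL, NULL, NULL, MPI_DOUBLE, 0, MPI_COMM_WORLD);
}
If rank 0 computes a chunk of the vector as well, a plain MPI_Gather with the same sendcount and recvcount (both total_rows) on every rank is simpler. Note that MPI_Reduce is not the right collective here, since it combines values element-wise rather than concatenating them.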

MPI dynamic communication between 2 Processes

I have a process that manages a list of 2d arrays and passes a different 2d array to each process. There is a possibility that I do not have enough processes for all the 2d arrays, so after any process other than 0 finishes working with its first received array, it needs to ask process 0 (which manages the list of 2d arrays) whether there are any arrays left. I don't know how to implement this.
(...)
if (rank == 0)
{
    // check if we have elements in the 2d array list left
    while (ptr != NULL)
    {
        MPI_Status status;
        int sig;
        // Wait for a process to ask for a 2d array
        MPI_Recv(&sig, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
        // send it to them
        MPI_Send(&ptr->grid.sudoku, N * N, MPI_INT, status.MPI_SOURCE, 0, MPI_COMM_WORLD);
        ptr = ptr->next;
    }
    // free memory
    delete_grids(list);
    elementsAvailable = 0;
}
// rank != 0
else
{
    lookForSolution(recvGrid(), rank); // recvGrid calls MPI_Recv and passes
                                       // the given array to a function to calculate something
}
MPI_Bcast(&sudokusAvailable, 1, MPI_INT, 0, MPI_COMM_WORLD); // Here I thought I would make an MPI_Bcast
// to tell the other processes that process 0 has no arrays left, but if I put it here
// the next if statement will never be reached in the first place
if (rank != 0 && elementsAvailable == 1)
{
    MPI_Status status;
    // Tell process 0 that we are ready for a new 2d array
    MPI_Send(1, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    lookForSolution(recvGrid(), rank);
}
(...)
The simplest solution is to let process zero send data to another process and then post an MPI_Irecv for the result. Once all the processes have work, the manager does an MPI_Waitany to see whether any process is returning a result; if so, it accepts the result and sends that process new work. If the manager runs out of work, it sends a special message to the workers, and they quit.
From here on it is an exercise for the reader.
If you want the manager also to participate in the work, it becomes a little more complicated, and you could solve the whole thing with one-sided communication, creating in effect a shared object with all the work.
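A rough sketch of that manager/worker pattern, under stated assumptions: the tags, ITEM_LEN, NUM_ITEMS and do_work() are made-up placeholders rather than the question's sudoku grids, and there are assumed to be at least as many work items as workers.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WORK_TAG  1
#define STOP_TAG  2
#define ITEM_LEN  4                     /* size of one work item */
#define NUM_ITEMS 10                    /* number of work items the manager owns */

static double do_work(const double *item)   /* stand-in for the real computation */
{
    double s = 0.0;
    for (int i = 0; i < ITEM_LEN; i++) s += item[i];
    return s;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                            /* manager */
        double items[NUM_ITEMS][ITEM_LEN];
        for (int i = 0; i < NUM_ITEMS; i++)
            for (int j = 0; j < ITEM_LEN; j++)
                items[i][j] = i + j;            /* fill with dummy work */

        int nworkers = size - 1;
        double *results = malloc(nworkers * sizeof(double));
        MPI_Request *reqs = malloc(nworkers * sizeof(MPI_Request));
        int next = 0;

        /* hand out the first item to every worker and post a receive for its result */
        for (int w = 1; w <= nworkers; w++, next++) {
            MPI_Send(items[next], ITEM_LEN, MPI_DOUBLE, w, WORK_TAG, MPI_COMM_WORLD);
            MPI_Irecv(&results[w - 1], 1, MPI_DOUBLE, w, WORK_TAG, MPI_COMM_WORLD, &reqs[w - 1]);
        }

        int outstanding = nworkers;
        while (outstanding > 0) {
            int idx;
            MPI_Waitany(nworkers, reqs, &idx, MPI_STATUS_IGNORE);   /* some worker finished */
            int worker = idx + 1;
            printf("result from rank %d: %g\n", worker, results[idx]);
            if (next < NUM_ITEMS) {             /* more work: send it and repost the receive */
                MPI_Send(items[next++], ITEM_LEN, MPI_DOUBLE, worker, WORK_TAG, MPI_COMM_WORLD);
                MPI_Irecv(&results[idx], 1, MPI_DOUBLE, worker, WORK_TAG, MPI_COMM_WORLD, &reqs[idx]);
            } else {                            /* out of work: empty message, the tag tells the worker to quit */
                MPI_Send(items[0], 0, MPI_DOUBLE, worker, STOP_TAG, MPI_COMM_WORLD);
                outstanding--;
            }
        }
        free(results); free(reqs);
    } else {                                    /* worker */
        double item[ITEM_LEN], answer;
        MPI_Status status;
        for (;;) {
            MPI_Recv(item, ITEM_LEN, MPI_DOUBLE, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            if (status.MPI_TAG == STOP_TAG)     /* special message: no work left */
                break;
            answer = do_work(item);
            MPI_Send(&answer, 1, MPI_DOUBLE, 0, WORK_TAG, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}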

What is the use of the status of an MPI_Isend obtained using MPI_Wait?

Case 1: What is the use of the status obtained using MPI_Wait()?
if(rank==0)
    MPI_Isend(&buffer0, count, MPI_INT, 1, 0, MPI_COMM_WORLD, &request0);
if(rank==1)
    MPI_Recv(&buffer1, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
if(rank==0)
    MPI_Wait(&request0, &status);
// Can I use status here to do something?
MPI_Finalize();
Case 2: The use of status is clear here (just added for comparison).
if(rank==0)
    MPI_Ssend(&buffer0, count, MPI_INT, 1, 0, MPI_COMM_WORLD);
if(rank==1)
    MPI_Irecv(&buffer1, count, MPI_INT, 0, 0, MPI_COMM_WORLD, &request1);
if(rank==1)
    MPI_Wait(&request1, &status);
printf("The source is %d", status.MPI_SOURCE);
MPI_Finalize();
Generally, MPI_Status is used to get the following properties of received messages:
the rank of the sender (status.MPI_SOURCE), particularly relevant when MPI_ANY_SOURCE was used;
the tag of the message (status.MPI_TAG), particularly relevant when MPI_ANY_TAG was used;
the element count that was actually sent, which may differ from the size of the posted receive buffer, obtained with MPI_Get_count.
For send operations, you can use the status to test for cancellation with MPI_Test_cancelled. Further, for functions that return multiple statuses, such as MPI_Waitall, you can inspect status[i].MPI_ERROR in the case of errors; the wait function itself will return MPI_ERR_IN_STATUS in that case.
If you do not need any of those, you may pass MPI_STATUS_IGNORE instead of a MPI_Status*.
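As a small illustration for the receive side in case 2 (buffer1, count and request1 as in the question), the status filled in by MPI_Wait can be queried like this:
MPI_Status status;
MPI_Wait(&request1, &status);                 /* complete the MPI_Irecv */

int received;
MPI_Get_count(&status, MPI_INT, &received);   /* number of ints actually delivered */
printf("received %d ints from rank %d with tag %d\n",
       received, status.MPI_SOURCE, status.MPI_TAG);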

C - MPI child processes receiving incorrect string

I am making an MPI password cracker that uses a brute-force approach to crack a SHA512 hash. I have code that works fine with 1 password and multiple processes, or multiple passwords and 1 process, but when I use multiple passwords and multiple processes I get the following error:
[ubuntu:2341] *** An error occurred in MPI_Recv
[ubuntu:2341] *** reported by process [191954945,1]
[ubuntu:2341] *** on communicator MPI_COMM_WORLD
[ubuntu:2341] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:2341] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[ubuntu:2341] *** and potentially your MPI job)
I believe this is caused by process rank 1 receiving the string "/" instead of the password hash, but I am not sure why.
I have also noticed something strange with my code. I have the following loop in process rank 0:
while(!done){
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
    if(done==1) {
        for(i=1;i<size;i++){
            if(i!=status.MPI_SOURCE){
                printf("sending done to process %d\n", i);
                MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
            }
        }
    }
}
This keeps looping, waiting for one of the child processes to alert it that it has found the password. Let's say I am running 2 processes (excluding the base process) and process 2 finds the password; the output will then be:
sending done to process 1
sending done to process 1
It should only be sending that once, or at the very least, if it is sending it twice, surely one of those values should be 2 rather than both being 1?
The main bulk of my code is as follows:
Process 0 :
while(!feof(f)) {
    fscanf(f, "%s\n", buffer);
    int done = 0;
    int i, sm;
    // length of the word (we know it should be 92 though)
    length = strlen(buffer);
    // Send the password to every process except process 0
    for (sm=1;sm<size;sm++) {
        MPI_Send(&length, 1, MPI_INT, sm, 0, MPI_COMM_WORLD);
        MPI_Send(buffer, length+1, MPI_CHAR, sm, 0, MPI_COMM_WORLD);
    }
    // While the passwords are busy cracking - keep probing.
    while(!done){
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
        if(done==1) {
            for(i=1;i<size;i++){
                if(i!=status.MPI_SOURCE){
                    printf("sending done to process %d\n", i);
                    MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
                }
            }
        }
    }
}
This loops through the file, grabs a new password, and sends the string to the child processes, at which point they receive it:
MPI_Recv(&length, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("string to be recieived has %d characters\n", length);
MPI_Recv(buffer, length+1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("Process %d received %s\n", rank, buffer);
The child processes crack the password, then repeat with the next one (assuming there is one; currently it's an infinite loop, but I want to sort it out with 2 passwords before I fix that).
All processes receive the correct password the first time; it's when they grab the second password that only 1 process has the correct one and the rest receive a "/" character.
Alright, typical case of me getting worked up and overlooking something simple.
I'll leave this question up just in case anyone else happens to have the same issue.
I was forgetting to also receive the solution after probing for it.
It was never fully clear to me how probe differs from receive, but I gather that probe just flags that a message has arrived; to actually take it out of the "queue" you then need to collect it with a receive.
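For anyone hitting the same thing, a sketch of the probe-then-receive pattern, assuming the worker signals with a single int (adjust the count and datatype to whatever your worker actually sends):
int flag = 0;
MPI_Status status;
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
    int found;
    /* the probed message is still in the queue: pull it out with a matching
       receive so it cannot later be matched by an unrelated MPI_Recv
       (which is what produced the MPI_ERR_TRUNCATE above) */
    MPI_Recv(&found, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* ... now tell the other workers to stop ... */
}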

Can I write from different MPI_Irecv into the same buffer/array at different index positions?

MPI_Irecv(&myArr[0], 5, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, request);
MPI_Irecv(&myArr[5], 5, MPI_INT, 2, MPI_ANY_TAG, MPI_COMM_WORLD, request);
MPI_Irecv(&myArr[10], 5, MPI_INT, 3, MPI_ANY_TAG, MPI_COMM_WORLD, request);
Hi, does C/MPI allow you to write into different areas of the same array from an MPI non-blocking receive? The above code shows roughly what I would like to achieve.
Yes. You aren't allowed to read or modify the buffer of a non-blocking communication request until the communication is done; but as far as MPI is concerned, non-overlapping regions of the same array are completely different buffers.
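For completeness, a sketch of that pattern with one request handle per receive, using the ranks and counts from the snippet above:
int myArr[15];
MPI_Request requests[3];

MPI_Irecv(&myArr[0],  5, MPI_INT, 1, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[0]);
MPI_Irecv(&myArr[5],  5, MPI_INT, 2, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[1]);
MPI_Irecv(&myArr[10], 5, MPI_INT, 3, MPI_ANY_TAG, MPI_COMM_WORLD, &requests[2]);

/* do not read or write myArr until all three receives have completed */
MPI_Waitall(3, requests, MPI_STATUSES_IGNORE);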
