How do I print the log in order in MPI - C

What I have
I have a C-program using MPI, and it uses 4 processes: 1 vehicle(taskid=0) and 3 passengers.
Vehicle can accommodate 2 passengers at a time.
3 customers keep coming back to get a ride.
For Vehicle, I have:
int passengers[C] = {0};
while (1) {
    MPI_Recv(&pid, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
    // put pid in passengers[totalNumberArrived]
    if (totalNumberArrived == 2) {
        printf("vehicle left...\n");
        sleep(5);
        printf("vehicle came back...\n");
        for (i = 0; i < 2; i++)
            MPI_Send(&passengers[i], 1, MPI_INT, passengers[i], 1, MPI_COMM_WORLD);
        totalNumberArrived = 0;
    }
    if (done) // omitting the details here
        break;
}
And, for each passenger, I have:
for (i = 0; i < NumOfRound; i++) {
    sleep(X);
    printf("%d is sending a msg\n", taskid);
    MPI_Send(&taskid, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    MPI_Recv(&pid, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &pstatus);
    printf("%d received %d from %d\n", taskid, pid, pstatus.MPI_SOURCE);
}
Issue
If the vehicle left for a ride with taskid 1 and 3, I expect to see this kind of output:
vehicle left...
vehicle came back...
1 is sending a msg
3 is sending a msg (this could be before 1's msg though)
but I sometimes get
vehicle left...
1 is sending a msg
3 is sending a msg
vehicle came back...
which looks like the passenger is not blocked until the vehicle comes back.
I thought MPI_Recv blocks the task until it gets a message from the vehicle, so I researched and read that MPI_Recv does block, but that this kind of issue occurs because printf does not necessarily print in order. I also read that some people recommend flushing, but that in some cases flushing doesn't help.
I'm not sure what I should do in my case. Is it really just a matter of printf ordering?
I've also read this: Ordering Output in MPI
and wonder if I should add a master thread and let it be the central controller for the vehicle and the passengers?

You can't rely on printed output being ordered between processes. The only thing you can count on is that output will be ordered per process. Therefore, if for some reason it's critical that you print things to STDOUT/STDERR in order, you need to aggregate the output to one process first.
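As a rough sketch of that aggregation idea (my own illustration, not part of the original answer; LOG_TAG and LOG_LEN are made-up names), each passenger could send its formatted log line to rank 0 and let rank 0 do all of the printing:
#include <mpi.h>
#include <stdio.h>

#define LOG_TAG 99   /* hypothetical tag reserved for log traffic */
#define LOG_LEN 128  /* hypothetical fixed line length */

/* Called by a passenger instead of printf. */
void log_to_root(const char *line)
{
    char buf[LOG_LEN] = {0};
    snprintf(buf, LOG_LEN, "%s", line);
    MPI_Send(buf, LOG_LEN, MPI_CHAR, 0, LOG_TAG, MPI_COMM_WORLD);
}

/* Called by rank 0: prints one pending log line in arrival order. */
void print_one_log(void)
{
    char buf[LOG_LEN];
    MPI_Status status;
    MPI_Recv(buf, LOG_LEN, MPI_CHAR, MPI_ANY_SOURCE, LOG_TAG, MPI_COMM_WORLD, &status);
    printf("[rank %d] %s\n", status.MPI_SOURCE, buf);
}
Rank 0 still has to know how many log lines to expect (for example, one per passenger per round); otherwise it cannot tell when to stop receiving.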

Related

mpi send and recv all processes to one another

I am trying to send data between all processes where I have an array on each process such as
int local_data[] = {0*rank,1*rank,2*rank,3*rank};
I have a corresponding flag array where each value in that array points to which process I should be sending this value, for example:
int part[] = {0,1,3,2};
so this means local_data[0] should go to process with rank 0
local_data[2] should go to process with rank 3 and so on.
The values in the flag array change from one process to another (all within the range 0 to P-1, where P is the total number of processes available).
Using this, what I am currently doing is:
for (int i = 0; i < local_len; i++) {
    if (part[i] != rank) {
        MPI_Send(&local_data[i], 1, MPI_INT, part[i], 0, MPI_COMM_WORLD);
        MPI_Recv(&temp, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &status);
        recvbuf[j] = temp;
        j++;
    }
    else {
        recvbuf[j] = local_data[i];
        j++;
    }
}
where I only send and receive data if part[i] != rank, to avoid sending to and receiving from the same process.
recvbuf is the array I receive the values in for each process. It can be longer than the initial local_data length.
I also tried
MPI_Sendrecv(&local_data[i], 1,MPI_INT, part[i], 0, &temp, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &status);
The program gets stuck both ways.
How do I go about solving this?
Is the All-to-All collective the way to go here?
Your basic problem is that your send call goes to a dynamically determined target, but there is no corresponding logic to determine which processes need to do a receive at all, and if so, from where.
If the logic of your application implies that everyone will send to everyone, then you can use MPI_Alltoall.
If everyone sends to some, but you know that you will receive exactly four messages, then you can combine MPI_Isend for the sends with MPI_Recv from MPI_ANY_SOURCE. Note that you need Isend because, strictly speaking, your code will otherwise deadlock; it may work if your MPI has an "eager mode" for small messages.
If the number of sends and the targets are entirely random, then you need something like MPI_Ibarrier to detect that all is over and done.
But I suspect you're leaving out major information here. Why is the length of local_data 4? Is the part array a permutation? Et cetera.
Following @GillesGouaillardet's advice, I used MPI_Alltoallv to solve this problem.
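For reference, here is a rough sketch of the MPI_Alltoallv approach (my own reconstruction under the assumptions in the question, not the poster's actual code): count how many local values go to each rank, let every rank learn its receive counts via MPI_Alltoall, pack the values by destination, and exchange everything in one call.
#include <mpi.h>
#include <stdlib.h>

/* local_data[i] must be delivered to rank part[i]; local_len entries in total. */
void exchange(const int *local_data, const int *part, int local_len,
              int **recvbuf_out, int *recvlen_out, MPI_Comm comm)
{
    int nprocs;
    MPI_Comm_size(comm, &nprocs);

    int *sendcounts = calloc(nprocs, sizeof(int));
    int *recvcounts = calloc(nprocs, sizeof(int));
    int *sdispls    = calloc(nprocs, sizeof(int));
    int *rdispls    = calloc(nprocs, sizeof(int));

    /* How many of my values go to each destination rank. */
    for (int i = 0; i < local_len; i++)
        sendcounts[part[i]]++;

    /* Every rank tells every other rank how many values to expect. */
    MPI_Alltoall(sendcounts, 1, MPI_INT, recvcounts, 1, MPI_INT, comm);

    int recvlen = 0;
    for (int p = 0; p < nprocs; p++) {
        sdispls[p] = (p == 0) ? 0 : sdispls[p-1] + sendcounts[p-1];
        rdispls[p] = (p == 0) ? 0 : rdispls[p-1] + recvcounts[p-1];
        recvlen += recvcounts[p];
    }

    /* Pack values with the same destination contiguously. */
    int *packed = malloc(local_len * sizeof(int));
    int *filled = calloc(nprocs, sizeof(int));
    for (int i = 0; i < local_len; i++)
        packed[sdispls[part[i]] + filled[part[i]]++] = local_data[i];

    int *recvbuf = malloc(recvlen * sizeof(int));
    MPI_Alltoallv(packed, sendcounts, sdispls, MPI_INT,
                  recvbuf, recvcounts, rdispls, MPI_INT, comm);

    *recvbuf_out = recvbuf;
    *recvlen_out = recvlen;
    free(sendcounts); free(recvcounts); free(sdispls); free(rdispls);
    free(packed); free(filled);
}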

MPI Ring Leader Election returns segmentation fault

This is what I am trying to achieve.
Blue is the message.
Yellow is when the specific node changes the leader known to it.
Green is the final election of each node.
The code seems correct to me, but it always gets stuck inside the while loop no matter what I try. For a small number of nodes it returns a segmentation fault after a while at runtime.
election_status=0;
firstmsg[0]=world_rank; // self rank
firstmsg[1]=0; // counter for hops
chief=world_rank; // each node declares himself as leader
counter=0; // message counter for each node
// each node sends the first message to the next one
MPI_Send(&firstmsg, 2, MPI_INT, (world_rank+1)%world_size, 1, MPI_COMM_WORLD);
printf("Sent ID with counter to the right node [%d -> %d]\n",world_rank, (world_rank+1)%world_size);
while (election_status == 0) {
    // EDIT: Split MPI_Recv for rank 0 and rest
    if (world_rank == 0) {
        MPI_Recv(&incoming, 2, MPI_INT, world_size-1, 1, MPI_COMM_WORLD, &status);
    }
    else {
        MPI_Recv(&incoming, 2, MPI_INT, (world_rank-1)%world_size, 1, MPI_COMM_WORLD, &status);
    }
    counter = counter + 1;
    if (incoming[0] < chief) {
        chief = incoming[0];
    }
    incoming[1] = incoming[1] + 1;
    // if message is my own and hopped same times as counter
    if (incoming[0] == world_rank && incoming[1] == counter) {
        printf("Node %d declares node %d a leader.\n", world_rank, chief);
        election_status = 1;
    }
    // sends the incremented message to the next node
    MPI_Send(&incoming, 2, MPI_INT, (world_rank+1)%world_size, 1, MPI_COMM_WORLD);
}
MPI_Finalize();
In order to determine the minimum of some value across all ranks, and make the result known to all ranks, use MPI_Allreduce!
MPI_Send is blocking. It can block forever until a matching receive is posted. Your program deadlocks on the first call to MPI_Send - and on any successive one, should the first happen to complete by coincidence. To avoid that specifically, use MPI_Sendrecv.
(world_rank-1)%world_size will produce -1 for world_rank == 0. Using -1 as a rank number is not valid. It might coincidentally be MPI_ANY_SOURCE.
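As a minimal sketch of the MPI_Allreduce suggestion (my own illustration, not part of the original answer): every rank contributes its own rank, and all ranks learn the minimum, which is the elected leader.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, chief;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Every rank proposes itself; the smallest rank wins everywhere. */
    MPI_Allreduce(&world_rank, &chief, 1, MPI_INT, MPI_MIN, MPI_COMM_WORLD);

    printf("Node %d declares node %d a leader.\n", world_rank, chief);
    MPI_Finalize();
    return 0;
}
If the ring algorithm itself is the point of the exercise, the deadlock can instead be avoided by replacing the MPI_Send/MPI_Recv pair inside the loop with a single MPI_Sendrecv, as suggested above.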

C - MPI child processes receiving incorrect string

I am making an MPI password cracker that uses a brute-force approach to crack a SHA512 hash. I have code that works fine with 1 password and multiple processes, or multiple passwords and 1 process, but when I do multiple passwords and multiple processes I get the following error:
[ubuntu:2341] *** An error occurred in MPI_Recv
[ubuntu:2341] *** reported by process [191954945,1]
[ubuntu:2341] *** on communicator MPI_COMM_WORLD
[ubuntu:2341] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:2341] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[ubuntu:2341] *** and potentially your MPI job)
I believe this is caused by process rank #1 receiving a string "/" instead of the password hash.
The issue is I am not sure why.
I have also noticed something strange with my code, I have the following loop in process rank 0:
while(!done){
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
    if(done==1) {
        for(i=1;i<size;i++){
            if(i!=status.MPI_SOURCE){
                printf("sending done to process %d\n", i);
                MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
            }
        }
    }
}
Which keeps looping, waiting for one of the child processes to alert it that it has found the password. Let's say I am running 2 processes (excluding the base process) and process 2 finds the password; the output will then be:
sending done to process 1
sending done to process 1
when it should only send that once; or, at the very least, if it is sending it twice, surely one of those values should be 2 rather than both of them being 1?
The main bulk of my code is as follows:
Process 0 :
while(!feof(f)) {
    fscanf(f, "%s\n", buffer);
    int done = 0;
    int i, sm;
    // lengths of the word (we know it should be 92 though)
    length = strlen(buffer);
    // Send the password to every process except process 0
    for (sm=1;sm<size;sm++) {
        MPI_Send(&length, 1, MPI_INT, sm, 0, MPI_COMM_WORLD);
        MPI_Send(buffer, length+1, MPI_CHAR, sm, 0, MPI_COMM_WORLD);
    }
    // While the passwords are busy cracking - Keep probing.
    while(!done){
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
        if(done==1) {
            for(i=1;i<size;i++){
                if(i!=status.MPI_SOURCE){
                    printf("sending done to process %d\n", i);
                    MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
                }
            }
        }
    }
}
Which loops through the file, grabs a new password, sends the string to the child processes, at which point they receive it:
MPI_Recv(&length, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("string to be recieived has %d characters\n", length);
MPI_Recv(buffer, length+1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("Process %d received %s\n", rank, buffer);
The child processes crack the password, then repeat with the next one (assuming there is one; currently it's an infinite loop, but I want to get it working with 2 passwords before I fix that).
All processes receive the correct password the first time; it's when they grab the second password that only one process has the correct one and the rest receive a "/" character.
Alright, typical case of me getting worked up and overlooking something simple.
I'll leave this question up in case anyone else happens to have the same issue.
I was forgetting to also receive the solution after probing for it.
It was never fully clear to me how probe differs from receive, but it seems probe only flags that a message has arrived; to actually take it out of the "queue" you then need to collect it with a receive.
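A small sketch of that probe-then-receive pattern (my own illustration of the fix described above; PASSWORD_LEN and found are made-up names): after MPI_Iprobe reports a pending message, post a matching MPI_Recv so the message is actually taken out of the queue.
int flag = 0;
MPI_Status status;

MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
    char found[PASSWORD_LEN];   /* hypothetical buffer for the cracked password */
    /* Actually consume the message the probe reported. */
    MPI_Recv(found, PASSWORD_LEN, MPI_CHAR, status.MPI_SOURCE,
             status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* ...then tell the other workers to stop... */
}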

MPI_Scatter usage

I am trying to modify my program so the code can look better. Right now I am using MPI_Send and MPI_Recv, but I am trying to make it work with MPI_Scatter. I have an array called All_vals and I am trying to send Sub_arr_len equal parts to every slave.
So after I first find the number of values I should send to every slave, I have to send and receive the number of values, and then send and receive the values themselves. I am trying to change those send/recv calls to MPI_Scatter, and to handle the case where the number of values can't be divided into equal parts, like when I have 20 numbers but 3 processes. The slaves should each get 20/3 = 6 values, and the master should keep the remaining 20 - 2*6 = 8.
Here's the part I am trying to edit:
int master(int argc, char **argv){
    ...
    for (i=0; i<ntasks; i++)
    {
        Sub_arr_len[i] = Num_vals/ntasks; //Finding the number of values I should send to every process
        printf("\n\nIn range %d there are %d\n\n", i, Sub_arr_len[i]);
    }
    for (i=1; i<ntasks; i++) {
        Sub_len = Sub_arr_len[i];
        MPI_Isend(&Sub_len, 1, MPI_INTEGER, i, i, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
        Sub_arr_start += Sub_arr_len[i-1];
        printf("\r\nSub_arr_start = %d ", Sub_arr_start);
        MPI_Isend(&All_vals[Sub_arr_start], Sub_len, MPI_INTEGER, i, i, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);
    }
    ...
}
int slave(){
    MPI_Irecv(&Sub_arr_len, 1, MPI_INTEGER, 0, myrank, MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);
    printf("\r\nSLAVE:Sub_arr_len = %d\n\n", Sub_arr_len);
    All_vals = (int*) malloc(Sub_arr_len * sizeof(MPI_INTEGER));
    MPI_Irecv(&All_vals[0], Sub_arr_len, MPI_INTEGER, 0, myrank, MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);
}
I am trying to make the scatter thing, but I am doing something wrong, so I would love if someone help me build it.
With MPI_Scatter, each rank involved must receive the same number of elements, including the root. Generally the root process is a normal participant in the scatter operation and "receives" its own chunk. This means you also need to specify a receive buffer for the root process. You basically have the following options:
Use MPI_IN_PLACE as recvbuf on the root; by doing that, the root will not send anything to itself. Combine that with scattering only a tail of the original send buffer from the root, chosen such that this tail has a number of elements divisible by the number of processes. E.g., for 20 elements, scatter &All_vals[2], which has a total of 18 elements, 6 for each process (including the root again). The root can then use All_vals[0-7].
Pad the send buffer with a few elements that don't do anything, such that its length is divisible by the number of processes.
Use MPI_Scatterv to send unequal numbers of elements to each rank. Here is a good example of how to properly set up the send counts and displacements.
The last option has some distinct advantages: it creates the least load imbalance among processes and allows all processes to use the same code. If applicable, the second option is very simple and still gives a good load balance.
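As an illustration of the MPI_Scatterv option (a sketch under the 20-values / 3-processes assumption from the question, not the poster's code), the root keeps the remainder and each slave gets Num_vals/ntasks values, so the counts come out as 8, 6, 6:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, ntasks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    int Num_vals = 20;
    int *All_vals = NULL;
    int *sendcounts = malloc(ntasks * sizeof(int));
    int *displs = malloc(ntasks * sizeof(int));

    /* Slaves get Num_vals/ntasks each; the master keeps the rest (8, 6, 6 for 20 and 3). */
    int base = Num_vals / ntasks;
    for (int i = 0, offset = 0; i < ntasks; i++) {
        sendcounts[i] = (i == 0) ? Num_vals - (ntasks - 1) * base : base;
        displs[i] = offset;
        offset += sendcounts[i];
    }

    if (rank == 0) {
        All_vals = malloc(Num_vals * sizeof(int));
        for (int i = 0; i < Num_vals; i++)
            All_vals[i] = i;
    }

    int my_count = sendcounts[rank];
    int *my_vals = malloc(my_count * sizeof(int));

    /* sendcounts/displs are only read on the root, but every rank computes them
       here so it knows its own receive count. */
    MPI_Scatterv(All_vals, sendcounts, displs, MPI_INT,
                 my_vals, my_count, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d got %d values\n", rank, my_count);
    MPI_Finalize();
    return 0;
}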

MPI - Message Truncation in MPI_Recv

I am having problems in one of my projects related to MPI development. I am working on the implementation of an RNA parsing algorithm using MPI, in which I start parsing an input string based on some parsing rules and a parsing table (containing different states and related actions) with a master node. In the parsing table there are multiple actions for each state that can be done in parallel, so I have to distribute these actions among different processes. To do that, I send the current state and parsing info (the current parsing stack) to the nodes, using a separate thread to receive actions from other nodes while the main thread is busy parsing based on the received actions. Following are the code snippets of the sender and receiver:
Sender Code:
StackFlush(&snd_stack);
StackPush(&snd_stack, state_index);
StackPush(&snd_stack, current_ch);
StackPush(&snd_stack, actions_to_skip);
elements_in_stack = stack.top + 1;
for (int a = elements_in_stack-1; a >= 0; a--)
    StackPush(&snd_stack, stack.contents[a]);
StackPush(&snd_stack, elements_in_stack);
elements_in_stack = parse_tree.top + 1;
for (int a = elements_in_stack-1; a >= 0; a--)
    StackPush(&snd_stack, parse_tree.contents[a]);
StackPush(&snd_stack, elements_in_stack);
elements_in_stack = snd_stack.top+1;
MPI_Send(&elements_in_stack, 1, MPI_INT, (myrank + actions_to_skip) % mysize, MSG_ACTION_STACK_COUNT, MPI_COMM_WORLD);
MPI_Send(&snd_stack.contents[0], elements_in_stack, MPI_CHAR, (myrank + actions_to_skip) % mysize, MSG_ACTION_STACK, MPI_COMM_WORLD);
Receiver Code:
MPI_Recv(&e_count, 1, MPI_INT, MPI_ANY_SOURCE, MSG_ACTION_STACK_COUNT, MPI_COMM_WORLD, &status);
if (e_count == 0) {
    break;
}
while((bt_stack.top + e_count) >= bt_stack.maxSize - 1){usleep(500);}
pthread_mutex_lock(&mutex_bt_stack); //using mutex for accessing shared data among threads
MPI_Recv(&bt_stack.contents[bt_stack.top + 1], e_count, MPI_CHAR, status.MPI_SOURCE, MSG_ACTION_STACK, MPI_COMM_WORLD, &status);
bt_stack.top += e_count;
pthread_mutex_unlock(&mutex_bt_stack);
The program runs fine for small inputs with little communication, but as we increase the input size the communication increases as well, so the receiver gets many requests while it is still processing a few, and then it crashes with the following errors:
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(186) ……………………………………: MPI_Recv(buf=0x5b8d7b1, count=19, MPI_CHAR, src=3, tag=1, MPI_COMM_WORLD, status=0x41732100) failed
MPIDI_CH3U_Request_unpack_uebuf(625): Message truncated; 21 bytes received but buffer size is 19
Rank 0 in job 73 hpc081_56549 caused collective abort of all ranks exit status of rank 0: killed by signal 9.
I have also tried this using non-blocking MPI calls, but I still get similar errors.
I don't know what the rest of the code looks like, but here's an idea. Since there is a break I'm assuming the receiver code is part of a loop or a switch statement. If that's the case, there is a mismatch between sends and receives when the element count becomes 0:
The sender will send the element count and a zero-length message (the MPI_Send(&snd_stack.contents... line).
There will be no matching receive for this second message because the receiver breaks out of the loop.
The zero-length message will then match something else, possibly causing the error you are seeing down the line.
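A sketch of one way to keep the sends and receives matched in that situation (my own illustration, not from the answer; the mutex and back-pressure loop are omitted for brevity): have the receiver post the companion receive even when the count is zero, so the zero-length message cannot match an unrelated receive later on.
MPI_Recv(&e_count, 1, MPI_INT, MPI_ANY_SOURCE, MSG_ACTION_STACK_COUNT,
         MPI_COMM_WORLD, &status);
/* Always consume the second message from the same source, even if it is empty. */
MPI_Recv(&bt_stack.contents[bt_stack.top + 1], e_count, MPI_CHAR,
         status.MPI_SOURCE, MSG_ACTION_STACK, MPI_COMM_WORLD, &status);
if (e_count == 0) {
    break;   /* nothing was appended; leave the loop as before */
}
bt_stack.top += e_count;
Alternatively, the sender can simply skip the second MPI_Send when the element count is zero, as long as the receiver's break stays where it is; the important thing is that both sides agree on the protocol.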

Resources