I am having problems in one of my projects related to MPI development. I am working on an implementation of an RNA parsing algorithm using MPI, in which a master node starts parsing an input string based on some parsing rules and a parsing table (which contains different states and related actions). In the parsing table there are multiple actions for each state that can be performed in parallel, so I have to distribute these actions among different processes. To do that, I send the current state and parsing info (the current parsing stack) to the nodes, using a separate thread to receive actions from other nodes while the main thread is busy parsing based on the received actions. Following are the code snippets of the sender and receiver:
Sender Code:
StackFlush(&snd_stack);
StackPush(&snd_stack, state_index);
StackPush(&snd_stack, current_ch);
StackPush(&snd_stack, actions_to_skip);
elements_in_stack = stack.top + 1;
for(int a = elements_in_stack - 1; a >= 0; a--)
    StackPush(&snd_stack, stack.contents[a]);
StackPush(&snd_stack, elements_in_stack);
elements_in_stack = parse_tree.top + 1;
for(int a = elements_in_stack - 1; a >= 0; a--)
    StackPush(&snd_stack, parse_tree.contents[a]);
StackPush(&snd_stack, elements_in_stack);
elements_in_stack = snd_stack.top+1;
MPI_Send(&elements_in_stack, 1, MPI_INT, (myrank + actions_to_skip) % mysize, MSG_ACTION_STACK_COUNT, MPI_COMM_WORLD);
MPI_Send(&snd_stack.contents[0], elements_in_stack, MPI_CHAR, (myrank + actions_to_skip) % mysize, MSG_ACTION_STACK, MPI_COMM_WORLD);
Receiver Code:
MPI_Recv(&e_count, 1, MPI_INT, MPI_ANY_SOURCE, MSG_ACTION_STACK_COUNT, MPI_COMM_WORLD, &status);
if(e_count == 0){
    break;
}
while((bt_stack.top + e_count) >= bt_stack.maxSize - 1){usleep(500);}
pthread_mutex_lock(&mutex_bt_stack); //using mutex for accessing shared data among threads
MPI_Recv(&bt_stack.contents[bt_stack.top + 1], e_count, MPI_CHAR, status.MPI_SOURCE, MSG_ACTION_STACK, MPI_COMM_WORLD, &status);
bt_stack.top += e_count;
pthread_mutex_unlock(&mutex_bt_stack);
The program runs fine for small inputs with little communication, but as the input size grows the amount of communication grows with it, so the receiver gets many requests while it is still processing only a few; it then crashes with the following errors:
Fatal error in MPI_Recv: Message truncated, error stack:
MPI_Recv(186)...................................: MPI_Recv(buf=0x5b8d7b1, count=19, MPI_CHAR, src=3, tag=1, MPI_COMM_WORLD, status=0x41732100) failed
MPIDI_CH3U_Request_unpack_uebuf(625): Message truncated; 21 bytes received but buffer size is 19
Rank 0 in job 73 hpc081_56549 caused collective abort of all ranks exit status of rank 0: killed by signal 9.
I have also tried this using non-blocking MPI calls, but I still get similar errors.
I don't know what the rest of the code looks like, but here's an idea. Since there is a break I'm assuming the receiver code is part of a loop or a switch statement. If that's the case, there is a mismatch between sends and receives when the element count becomes 0:
The sender will send the element count and a zero-length message (the MPI_Send(&snd_stack.contents... line).
There will be no matching receive for this second message because the receiver breaks out of the loop.
The zero-length message will then match something else, possibly causing the error you are seeing down the line.
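One way to keep the messages paired, as a minimal sketch based only on the snippets above and reusing the question's variable names, is to receive the payload unconditionally, even when it is zero-length, before deciding whether to break (alternatively, the sender could skip the second MPI_Send whenever the count is zero, as long as the receiver then skips the matching receive as well):
MPI_Recv(&e_count, 1, MPI_INT, MPI_ANY_SOURCE, MSG_ACTION_STACK_COUNT, MPI_COMM_WORLD, &status);
// always consume the second message from the same source, even when it is
// zero-length, so it cannot be matched later by an unrelated receive
while((bt_stack.top + e_count) >= bt_stack.maxSize - 1){usleep(500);}
pthread_mutex_lock(&mutex_bt_stack);
MPI_Recv(&bt_stack.contents[bt_stack.top + 1], e_count, MPI_CHAR, status.MPI_SOURCE, MSG_ACTION_STACK, MPI_COMM_WORLD, &status);
bt_stack.top += e_count;
pthread_mutex_unlock(&mutex_bt_stack);
if(e_count == 0){
    break; // exit only after the zero-length message has been drained
}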
Related
I am new to MPI programming and I am trying to create a program that would perform 2-way communication between processes in a ring.
I was getting memory-leak errors at the MPI_Finalize() statement. Later I found out that I could use the -fsanitize=address -fno-omit-frame-pointer flags to help me track down where the leaks could be.
Now I get a very bizarre (at least for me) error.
Here's my code:
MPI_Request request_s1, request_s2, request_r1, request_r2;
// receiving 2 elems from the left neighbor, which i shall be needing
if (0 > MPI_Irecv(lefties, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &request_r1)) {
return 2;
}
// receiving 2 elems from my right neighbor which i will be appending at the end of my input
if (0 > MPI_Irecv(righties, EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &request_r2)) {
return 2;
}
// sending the first 2 elems which will be required by the left neighbor
if (0 > MPI_Isend(my_output_buffer, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &request_s1)) {
return 2;
}
// sending the last 2 elems to my right neighbor
if (0 > MPI_Isend(&my_output_buffer[displacement - EXTENT], EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &request_s2)) {
return 2;
}
MPI_Wait(&request_r2, MPI_STATUS_IGNORE);
MPI_Wait(&request_r1, MPI_STATUS_IGNORE);
The error I get is
[my_machine:18353] *** An error occurred in MPI_Wait
[my_machine:18359] *** reported by process [204079105,1]
[my_machine:18359] *** on communicator MPI_COMM_WORLD
[my_machine:18359] *** MPI_ERR_TRUNCATE: message truncated
[my_machine:18359] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[my_machine:18359] *** and potentially your MPI job)
[my_machine:18353] 1 more process has sent help message help-mpi-btl-base.txt / btl:no-nics
[my_machine:18353] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
and I have no clue how to progress from here.
You seem to be reusing your request variables. Don't. If one is created, you have to wait for it.
It wouldn't hurt to initialize the request variables with MPI_REQUEST_NULL, in case you're waiting for a request that was not created.
The 0>MPI_whatever idiom is strange. Instead: MPI_SUCCESS!=MPI_Whatever.
But even that may not work, because the default is that routines do not return on error but abort the program.
And it may be something else entirely which I can't tell without seeing the rest of the code.
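A minimal sketch of the pattern being suggested, keeping the variable names from the question (lefties, righties, my_output_buffer, EXTENT, displacement, my_left, my_right) and assuming the exchange happens once per iteration: one request per operation, initialized to MPI_REQUEST_NULL, and a single MPI_Waitall so that the send requests are completed too rather than being leaked or reused.
MPI_Request requests[4] = {MPI_REQUEST_NULL, MPI_REQUEST_NULL, MPI_REQUEST_NULL, MPI_REQUEST_NULL};

// post both receives and both sends, each with its own request
if (MPI_SUCCESS != MPI_Irecv(lefties, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &requests[0])) return 2;
if (MPI_SUCCESS != MPI_Irecv(righties, EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &requests[1])) return 2;
if (MPI_SUCCESS != MPI_Isend(my_output_buffer, EXTENT, MPI_DOUBLE, my_left, 1, MPI_COMM_WORLD, &requests[2])) return 2;
if (MPI_SUCCESS != MPI_Isend(&my_output_buffer[displacement - EXTENT], EXTENT, MPI_DOUBLE, my_right, 1, MPI_COMM_WORLD, &requests[3])) return 2;

// complete all four operations; no request is left dangling for the next round
MPI_Waitall(4, requests, MPI_STATUSES_IGNORE);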
I am trying to send data between all processes where I have an array on each process such as
int local_data[] = {0*rank,1*rank,2*rank,3*rank};
I have a corresponding flag array where each value in that array points to which process I should be sending this value, for example:
int part[] = {0,1,3,2};
so this means local_data[0] should go to process with rank 0
local_data[2] should go to process with rank 3 and so on.
The values in the flag array change from one process to another (all within the range 0 to P-1, where P is the total number of processes available).
Using this, what I am currently doing is:
for(int i=0; i<local_len; i++){
    if(part[i] != rank){
        MPI_Send(&local_data[i], 1, MPI_INT, part[i], 0, MPI_COMM_WORLD);
        MPI_Recv(&temp, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &status);
        recvbuf[j] = temp;
        j++;
    }
    else{
        recvbuf[j] = local_data[i];
        j++;
    }
}
where I am only sending and receiving data if part[i] != rank, to avoid sending to and receiving from the same process.
recvbuf is the array I receive the values in for each process. It can be longer than the initial local_data length.
I also tried
MPI_Sendrecv(&local_data[i], 1,MPI_INT, part[i], 0, &temp, 1, MPI_INT, rank, 0, MPI_COMM_WORLD, &status);
The program gets stuck both ways.
How do I go about solving this?
Is the All-to-All collective the way to go here?
Your basic problem is that your send call goes to a dynamically determined target, but there is no corresponding logic to determine which processes need to do a receive at all, and if so, from where.
If the logic of your application implies that everyone will send to everyone, then you can use MPI_Alltoall.
If everyone sends to some, but you know that you will receive exactly four messages, then you can combine MPI_Isend for the sends and MPI_Recv from ANY_SOURCE. Note that you need Isend because your code will deadlock, strictly speaking. It may work if your MPI has an "eager mode" for small messages.
If the number of sends and the targets are entirely random, then you need something like MPI_Ibarrier to detect that all is over and done.
But I suspect you're leaving out major information here. Why is the length of local_data 4? Is the part array a permutation? Et cetera.
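For illustration of the second option only (this is not the original code; expected_msgs is a hypothetical count that each rank would somehow have to know), the sends become non-blocking and the receives take whatever arrives from MPI_ANY_SOURCE:
MPI_Request reqs[local_len];   // at most one request per local element
int nsend = 0, j = 0;

for (int i = 0; i < local_len; i++) {
    if (part[i] != rank) {
        MPI_Isend(&local_data[i], 1, MPI_INT, part[i], 0, MPI_COMM_WORLD, &reqs[nsend++]);
    } else {
        recvbuf[j++] = local_data[i];   // keep my own element locally
    }
}

// expected_msgs: how many elements the other ranks send to this rank
for (int m = 0; m < expected_msgs; m++) {
    MPI_Recv(&recvbuf[j++], 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
}

MPI_Waitall(nsend, reqs, MPI_STATUSES_IGNORE);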
Following #GillesGouaillardet's advice, I used MPI_Alltoallv to solve this problem.
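For reference, a sketch of how MPI_Alltoallv could be set up for this case. The arrays sendcounts, sdispls, recvcounts, rdispls, offset, packed and the size P are introduced here for illustration; local_data, part, local_len, recvbuf and rank are from the question. The elements are first packed by destination rank, the counts are exchanged with MPI_Alltoall, and then the data itself moves in one call (elements destined for the calling rank are handled automatically, so no special case is needed):
int P;
MPI_Comm_size(MPI_COMM_WORLD, &P);

int sendcounts[P], sdispls[P], recvcounts[P], rdispls[P], offset[P];
int packed[local_len];                       // local_data reordered by destination

for (int p = 0; p < P; p++) sendcounts[p] = 0;
for (int i = 0; i < local_len; i++) sendcounts[part[i]]++;

sdispls[0] = 0;
for (int p = 1; p < P; p++) sdispls[p] = sdispls[p - 1] + sendcounts[p - 1];

for (int p = 0; p < P; p++) offset[p] = sdispls[p];
for (int i = 0; i < local_len; i++) packed[offset[part[i]]++] = local_data[i];

// every rank tells every other rank how many elements to expect
MPI_Alltoall(sendcounts, 1, MPI_INT, recvcounts, 1, MPI_INT, MPI_COMM_WORLD);

rdispls[0] = 0;
for (int p = 1; p < P; p++) rdispls[p] = rdispls[p - 1] + recvcounts[p - 1];

MPI_Alltoallv(packed, sendcounts, sdispls, MPI_INT,
              recvbuf, recvcounts, rdispls, MPI_INT, MPI_COMM_WORLD);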
I have a master process and several slave processes. I want every slave process to send back one integer to the master, so I guess I should gather them using MPI_Gather. But somehow it doesn't work, and I started to think that MPI_Gather is incompatible with MPI_Send.
The relevant lines of code look like this:
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &process_id);
MPI_Comm_size(MPI_COMM_WORLD, &process_count);
int full_word_count = 0;
int* receiving_buffer = (int*)malloc(sizeof(int) * 100);
if (process_id == 0)
{
// Some Master code here ...
MPI_Gather(&full_word_count, 1, MPI_INT, receiving_buffer, 1, MPI_INT, 0, MPI_COMM_WORLD);
// ...
}
else
{
// Some Slave code here ...
MPI_Send(&full_word_count, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
//...
}
MPI_Finalize();
I also know that I used "1" for MPI_Gather because I only tried to run it with two processes, so process 1 would send and process 0 would gather; of course, for more processes I would have to modify it using ranks. But my main question here is whether I can use (and if yes, how) MPI_Gather combined with MPI_Send for a situation like this.
MPI_Gather() is a collective operation and must hence be called by all the ranks of the communicator. They also must provide matching signatures (datatype and count) and all use the same root value.
Note the send buffer of the root rank is also gathered into the receive buffers, so if the send count is 1, you really should allocate your receive buffer with
int* receiving_buffer = (int*)malloc(sizeof(int) * process_count)
and since all ranks send 1 * MPI_INT, a correct receive signature is also 1 * MPI_INT.
Also note that "threads" is improper in this context. MPI tasks or MPI processes are the right terminology.
Keep in mind that the standard does not specify how a collective operation should be implemented. In the case of MPI_Gather(), a naive implementation would have all MPI tasks send their buffer to the root rank. But some more sophisticated algorithm can be used such as a tree-based gather, and in that case, not all tasks would send their buffer to the root rank.
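A minimal sketch of what the corrected fragment could look like, keeping the names from the question: every rank, including the root, calls MPI_Gather, and no MPI_Send is needed at all.
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &process_id);
MPI_Comm_size(MPI_COMM_WORLD, &process_count);

int full_word_count = 0;
int* receiving_buffer = NULL;
if (process_id == 0)
{
    // only the root needs the receive buffer, one slot per rank
    receiving_buffer = (int*)malloc(sizeof(int) * process_count);
}

// ... each rank computes its own full_word_count here ...

// called by every rank; the root's own value lands in receiving_buffer[0]
MPI_Gather(&full_word_count, 1, MPI_INT, receiving_buffer, 1, MPI_INT, 0, MPI_COMM_WORLD);

free(receiving_buffer);
MPI_Finalize();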
I am making an MPI password cracker that uses a brute-force approach to crack a SHA512 hash. I have code that works fine with one password and multiple processes, or multiple passwords and one process, but when I use multiple passwords and multiple processes I get the following error:
[ubuntu:2341] *** An error occurred in MPI_Recv
[ubuntu:2341] *** reported by process [191954945,1]
[ubuntu:2341] *** on communicator MPI_COMM_WORLD
[ubuntu:2341] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:2341] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[ubuntu:2341] *** and potentially your MPI job)
I believe this is caused by process rank #1 receiving a string "/" instead of the password hash.
The issue is I am not sure why.
I have also noticed something strange with my code; I have the following loop in process rank 0:
while(!done){
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
    if(done==1) {
        for(i=1;i<size;i++){
            if(i!=status.MPI_SOURCE){
                printf("sending done to process %d\n", i);
                MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
            }
        }
    }
}
Which keeps looping, waiting for one of the child processes to alert it that it has found the password. Let's say I am running 2 processes (excluding the base process) and process 2 finds the password; the output will then be:
sending done to process 1
sending done to process 1
When it should only be sending that once, or at the very least, if it is sending it twice, surely one of those values should be 2, not both of them 1?
The main bulk of my code is as follows:
Process 0 :
while(!feof(f)) {
    fscanf(f, "%s\n", buffer);
    int done = 0;
    int i, sm;
    // lengths of the word (we know it should be 92 though)
    length = strlen(buffer);
    // Send the password to every process except process 0
    for (sm=1;sm<size;sm++) {
        MPI_Send(&length, 1, MPI_INT, sm, 0, MPI_COMM_WORLD);
        MPI_Send(buffer, length+1, MPI_CHAR, sm, 0, MPI_COMM_WORLD);
    }
    // While the passwords are busy cracking - Keep probing.
    while(!done){
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
        if(done==1) {
            for(i=1;i<size;i++){
                if(i!=status.MPI_SOURCE){
                    printf("sending done to process %d\n", i);
                    MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
                }
            }
        }
    }
}
Which loops through the file, grabs a new password, sends the string to the child processes, at which point they receive it:
MPI_Recv(&length, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("string to be recieived has %d characters\n", length);
MPI_Recv(buffer, length+1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("Process %d received %s\n", rank, buffer);
The child processes crack the password, then repeat with the next one (assuming there is one; currently it's an infinite loop, but I want to sort it out with 2 passwords before fixing that).
All processes receive the correct password the first time; it's when they grab the second password that only one process has the correct one and the rest receive a "/" character.
Alright, typical case of me getting worked up and overlooking something simple.
I'll leave this question just in case anyone else happens to have the same issue though.
I was forgetting to also receive the solution after probing it.
It was never fully clear to me how probe differed from receive, but I guess probe just flags that something has arrived; to actually take it out of the "queue" you then need to collect it with a receive.
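A sketch of that pattern applied to the rank-0 loop above, reusing the question's variables; solution and SOLUTION_LEN are hypothetical placeholders for wherever the cracked password is stored and its maximum length:
while(!done){
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
    if(done==1) {
        // the probe only peeks at the message; receive it so it cannot be
        // matched later by the length/password receives of the next round
        MPI_Recv(solution, SOLUTION_LEN, MPI_CHAR, status.MPI_SOURCE,
                 status.MPI_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for(i=1;i<size;i++){
            if(i!=status.MPI_SOURCE){
                printf("sending done to process %d\n", i);
                MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
            }
        }
    }
}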
What I have
I have a C program using MPI, and it uses 4 processes: 1 vehicle (taskid = 0) and 3 passengers.
Vehicle can accommodate 2 passengers at a time.
3 customers keep coming back to get a ride.
For Vehicle, I have:
int passengers[C] = {0};
while(1)
    MPI_Recv(&pid, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &status);
    //put pid in passengers[totalNumberArrived]
    if(totalNumberArrived == 2)
        printf("vehicle left...");
        sleep(5);
        printf("vehicle came back...");
        for (i=0; i<2; i++)
            MPI_Send(&passengers[i], 1, MPI_INT, passengers[i], 1, MPI_COMM_WORLD);
        totalNumberArrived = 0;
    if(done) //omitting the details here
        break;
And, for each passenger, I have:
for (i to NumOfRound)
    sleep(X);
    printf("%d is sending a msg", taskid);
    MPI_Send(&taskid, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    MPI_Recv(&pid, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &pstatus);
    printf("%d received %d from %d\n", taskid, pid, pstatus.MPI_SOURCE);
Issue
If the vehicle left for a ride with taskid 1 and 3, I expect to see this kind of output:
vehicle left...
vehicle came back...
1 is sending a msg
3 is sending a msg (this could be before 1's msg though)
but I sometimes get
vehicle left...
1 is sending a msg
3 is sending a msg
vehicle came back...
which looks like the passenger is not blocked until the vehicle comes back.
I thought that MPI_Recv blocks the task until it gets a msg from the vehicle, so I researched and read that MPI_Recv does block, but that this kind of issue occurs because printf is not necessarily printing in order. I also read that some recommend using flush, but in some cases flush doesn't work.
I'm not sure what I should do in my case. Is it really just the matter of printf order?
I've also read this: Ordering Output in MPI
and wonder if I should add a master thread and let it be the central controller for the vehicle and passengers?
You can't rely on printed output being ordered between processes. The only thing you can count on is that output will be ordered per process. Therefore, if for some reason it's critical that you can print things to STDOUT/STDERR in order, you need to aggregate it to one process first.
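A minimal, self-contained sketch of that aggregation (the message text and tag are made up for illustration): every rank builds its line, and rank 0 prints its own line and then the others in rank order.
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    int rank, size;
    char line[128];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // every rank builds its output line instead of printing it directly
    snprintf(line, sizeof(line), "message from rank %d", rank);

    if (rank == 0) {
        printf("%s\n", line);                      // rank 0's own line first
        for (int src = 1; src < size; src++) {     // then the others, in rank order
            MPI_Recv(line, sizeof(line), MPI_CHAR, src, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("%s\n", line);
        }
    } else {
        MPI_Send(line, strlen(line) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}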