I am trying to probe for messages in order to communicate between MPI processes (8 processes). The first process to reach a certain part of the code will signal all the other processes, and the others will terminate upon reaching there.
Here is what I've implemented: (any simpler solutions are welcome)
if (depth == size) {
    endTime = MPI_Wtime();
    MPI_Iprobe(MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &stat, MPI_STATUS_IGNORE);
    if (!stat) {
        printf("Execution completed in %.5f seconds.\n", endTime - beginTime);
        for (ctr = 0; ctr < mpi_processors; ctr++) {
            if (ctr == mpi_my_pid) continue;
            MPI_Isend(&stat, 1, MPI_INT, ctr, 0, MPI_COMM_WORLD, &req);
            printf("sent to %d from %d\n", ctr, mpi_my_pid);
        }
    }
    return 1;
}
The code is self-explanatory. stat is a dummy variable used just for "send"ing a message, and also used as the flag for the Iprobe. The problem is that stat is always zero in all processes, meaning that probing doesn't return any waiting messages to be received. But I can confirm that MPI_Isend runs correctly and sends the message.
Am I doing something fundamentally wrong, or is there a simple bug somewhere that I can't see?
Thanks,
Can.
Suppose that I have a code block that attempts to calculate the first 15 numbers in the Fibonacci sequence and distribute each unique number among 3 processes (MPI_Send) using a for loop, as shown in the code block below.
int main(int argc, char* argv[]) {
    int rank, size, recieve_data1, recieve_data2;
    MPI_Init(NULL, NULL);
    MPI_Status status;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Available ranks are: %d \n \n", rank); // first rank roll call
    fflush(stdout);
    int num1 = 1; int num2 = 1;
    int RecieveNum; int SumNum;
    for (int n = 0; n < 16; n++) {
        if (rank == 0) {
            // perform the Fibonacci sequence algorithm
            SumNum = num1 + num2;
            num1 = num2;
            num2 = SumNum;
            // define the sorting algorithm
            int DeliverTo = (n % 3) + 1;
            // send calculated result
            MPI_Send(&SumNum, 1, MPI_INT, DeliverTo, 1, MPI_COMM_WORLD);
        }
        else {
            // receive the element integer
            MPI_Recv(&RecieveNum, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
            // print and flush the buffer
            printf("I am process rank %d, and I recieved the number %d. \n", rank, RecieveNum);
            fflush(stdout);
        }
    }
    printf("Available ranks are: %d \n \n", rank); // second rank roll call
    fflush(stdout);
    /*
    more code that I run...
    */
    MPI_Finalize();
    return 0;
}
Before the first for loop is called, processes 0, 1, 2, and 3 all respond to the first printf("Available ranks are: %d \n \n", rank); roll call. However, after executing the first for loop, only process 0 responds to the second printf. I was expecting all 4 processes (0-3) to respond again after the for loop. To solve this problem, I isolated this section of code and attempted to debug it for several hours with no success. This particular issue is problematic, as I have additional code (not shown here for the sake of brevity) that will access the numbers generated by this sequence.
Finally, I am running the code by building the solution, running the VS terminal as an administrator, and typing mpiexec -n 4 my_file_name.exe. No build or compilation errors occurred. From what I can see (correct me if I'm wrong), all processes hang after completing the for loop, but I am unsure why or how to fix it.
After searching the website, I did not see anything that answered this question (from my point of view). I am a bit of an MPI (and Stack Overflow) newbie, so any code pointers are also welcome. Thanks.
You have process zero compute whom to send to, and then every other process does a receive. That means that all processes that are not the computed receiver will hang.
This scenario, where you send to a dynamically computed receiver, is not easy to do in MPI. You either need to:
- send a message "no, I have nothing for you" to all other processes, or
- send the message to all processes, and have all but one ignore the data, or
- use one-sided operations, where you MPI_Put the data on the computed receiver.
A sketch of the first option is shown below.
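For illustration, here is a minimal sketch of the first option applied to the loop from the question, reusing the question's variables; the sentinel value -1 is an arbitrary choice meaning "nothing for you":
// rank 0 sends the real value to the chosen rank and a sentinel to everyone
// else, so each worker can post exactly one MPI_Recv per iteration
for (int n = 0; n < 16; n++) {
    if (rank == 0) {
        SumNum = num1 + num2;
        num1 = num2;
        num2 = SumNum;
        int DeliverTo = (n % 3) + 1;
        for (int dest = 1; dest < size; dest++) {
            int payload = (dest == DeliverTo) ? SumNum : -1;
            MPI_Send(&payload, 1, MPI_INT, dest, 1, MPI_COMM_WORLD);
        }
    } else {
        MPI_Recv(&RecieveNum, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        if (RecieveNum != -1) {
            printf("I am process rank %d, and I received the number %d.\n", rank, RecieveNum);
            fflush(stdout);
        }
    }
}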
I'm implementing the Chan and Dehne sorting algorithm using MPI and the CGM realistic parallel model. So far, each process receives N/p numbers from the original vector, sorts them sequentially using quicksort, creates a sample of size p from its local vector, and sends that sample to P0. P0 should receive all samples into a bigger vector of size p*p so it can accommodate the data from all processors. This is where I'm stuck: it seems to be working, but for some reason after P0 receives all the data it exits with Signal: Segmentation fault (11). Thank you.
Here is the relevant part of the code:
// Step 2. Each process calculates its local sample with size comm_sz
local_sample = create_local_sample(sub_vec, n_over_p, comm_sz);

// Step 3. Each process sends its local sample to P0
if (my_rank == 0) {
    global_sample_receiver = (int*) malloc(pow(comm_sz, 2) * sizeof(int));
    global_sample_receiver = local_sample;
    for (i = 1; i < comm_sz; i++) {
        MPI_Recv(global_sample_receiver + (i * comm_sz), comm_sz, MPI_INT,
                 i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
} else {
    MPI_Send(local_sample, comm_sz, MPI_INT, 0, 0, MPI_COMM_WORLD);
}

printf("P%d got here\n", my_rank);
MPI_Finalize();
What is funny is that every process reaches the command printf("P%d got here\n", my_rank); and therefore prints to the terminal. Also, global_sample_receiver does contain the data it is supposed to contain at the end, but the program still finishes with a segmentation fault.
Here is the output:
P2 got here
P0 got here
P3 got here
P1 got here
[Krabbe-Ubuntu:05969] *** Process received signal ***
[Krabbe-Ubuntu:05969] Signal: Segmentation fault (11)
[Krabbe-Ubuntu:05969] Signal code: Address not mapped (1)
[Krabbe-Ubuntu:05969] Failing at address: 0x18000003e7
--------------------------------------------------------------------------
mpiexec noticed that process rank 0 with PID 5969 on node Krabbe-Ubuntu
exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
Edit: I found the problem; it turns out local_sample also needed a malloc.
The issue is that you overwrite global_sample_receiver (which is a pointer) with local_sample (which is another pointer) on rank zero.
If you want to set the first comm_sz elements of global_sample_receiver to the first comm_sz elements of local_sample, then you have to copy the data (not the pointer) manually:
memcpy(global_sample_receiver, local_sample, comm_sz * sizeof(int));
That being said, the natural MPI way of doing this is via MPI_Gather().
Here is what step 3 would look like:
// Step 3. Each process sends its local sample to P0
if (my_rank == 0) {
    global_sample_receiver = (int*) malloc(pow(comm_sz, 2) * sizeof(int));
}
MPI_Gather(local_sample, comm_sz, MPI_INT, global_sample_receiver, comm_sz, MPI_INT, 0, MPI_COMM_WORLD);
Note that the send buffer (local_sample) comes first and the receive buffer (global_sample_receiver) second, and that rank 0's own sample is placed into the first comm_sz slots of global_sample_receiver automatically, so the pointer assignment from the original code is no longer needed.
This is what I am trying to achieve.
Blue is the message.
Yellow is when a node changes the leader it currently knows.
Green is the final election of each node.
The code seems correct to me, but it always gets stuck inside the while loop no matter what I try. For a small number of nodes it returns a segmentation fault after a while.
election_status = 0;
firstmsg[0] = world_rank; // self rank
firstmsg[1] = 0;          // counter for hops
chief = world_rank;       // each node declares himself as leader
counter = 0;              // message counter for each node

// each node sends the first message to the next one
MPI_Send(&firstmsg, 2, MPI_INT, (world_rank+1)%world_size, 1, MPI_COMM_WORLD);
printf("Sent ID with counter to the right node [%d -> %d]\n", world_rank, (world_rank+1)%world_size);

while (election_status == 0) {
    // EDIT: Split MPI_Recv for rank 0 and rest
    if (world_rank == 0) {
        MPI_Recv(&incoming, 2, MPI_INT, world_size-1, 1, MPI_COMM_WORLD, &status);
    }
    else {
        MPI_Recv(&incoming, 2, MPI_INT, (world_rank-1)%world_size, 1, MPI_COMM_WORLD, &status);
    }
    counter = counter + 1;
    if (incoming[0] < chief) {
        chief = incoming[0];
    }
    incoming[1] = incoming[1] + 1;
    // if message is my own and hopped same times as counter
    if (incoming[0] == world_rank && incoming[1] == counter) {
        printf("Node %d declares node %d a leader.\n", world_rank, chief);
        election_status = 1;
    }
    // sends the incremented message to the next node
    MPI_Send(&incoming, 2, MPI_INT, (world_rank+1)%world_size, 1, MPI_COMM_WORLD);
}
MPI_Finalize();
To determine the minimum of a value across all ranks, with the result known to all ranks, use MPI_Allreduce!
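For example, a minimal sketch of that approach, assuming the leader should simply be the smallest rank as in your code:
// every rank contributes its own rank; all ranks receive the global minimum,
// which becomes the elected leader
int chief;
MPI_Allreduce(&world_rank, &chief, 1, MPI_INT, MPI_MIN, MPI_COMM_WORLD);
printf("Node %d declares node %d a leader.\n", world_rank, chief);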
MPI_Send is blocking: it can block forever until a matching receive is posted. Your program deadlocks on the first call to MPI_Send, or on any successive one should the first happen to complete by coincidence (e.g. due to internal buffering). To avoid that specifically, use MPI_Sendrecv.
(world_rank-1)%world_size will produce -1 for world_rank == 0. Using -1 as a rank number is not valid; it might coincidentally equal MPI_ANY_SOURCE.
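As a rough sketch (not your exact protocol), a single ring hop done with MPI_Sendrecv, which also fixes the wrap-around for rank 0, could look like this:
// send to the right neighbour and receive from the left neighbour in one
// call, so there is no send/receive ordering deadlock; the left neighbour
// is computed without ever producing -1
int right = (world_rank + 1) % world_size;
int left  = (world_rank - 1 + world_size) % world_size;
MPI_Sendrecv(firstmsg, 2, MPI_INT, right, 1,
             incoming, 2, MPI_INT, left,  1,
             MPI_COMM_WORLD, &status);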
I am making an MPI password cracker that uses a brute-force approach to crack a SHA512 hash. I have code that works fine with 1 password and multiple processes, or multiple passwords and 1 process, but when I use multiple passwords and multiple processes I get the following error:
[ubuntu:2341] *** An error occurred in MPI_Recv
[ubuntu:2341] *** reported by process [191954945,1]
[ubuntu:2341] *** on communicator MPI_COMM_WORLD
[ubuntu:2341] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:2341] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[ubuntu:2341] *** and potentially your MPI job)
I believe this is caused by process rank #1 receiving a string "/" instead of the password hash.
The issue is that I am not sure why.
I have also noticed something strange with my code. I have the following loop in process rank 0:
while (!done) {
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
    if (done == 1) {
        for (i = 1; i < size; i++) {
            if (i != status.MPI_SOURCE) {
                printf("sending done to process %d\n", i);
                MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
            }
        }
    }
}
This keeps looping, waiting for one of the child processes to alert it that it has found the password. Let's say I am running 2 processes (excluding the base process) and process 2 finds the password; the output will then be:
sending done to process 1
sending done to process 1
It should only send that once, or at the very least, if it is sending it twice, surely one of those values should be 2 instead of both being 1?
The main bulk of my code is as follows:
Process 0 :
while (!feof(f)) {
    fscanf(f, "%s\n", buffer);
    int done = 0;
    int i, sm;
    // lengths of the word (we know it should be 92 though)
    length = strlen(buffer);
    // Send the password to every process except process 0
    for (sm = 1; sm < size; sm++) {
        MPI_Send(&length, 1, MPI_INT, sm, 0, MPI_COMM_WORLD);
        MPI_Send(buffer, length+1, MPI_CHAR, sm, 0, MPI_COMM_WORLD);
    }
    // While the passwords are busy cracking - Keep probing.
    while (!done) {
        MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &done, &status);
        if (done == 1) {
            for (i = 1; i < size; i++) {
                if (i != status.MPI_SOURCE) {
                    printf("sending done to process %d\n", i);
                    MPI_Isend(&done, 1, MPI_INT, i, 0, MPI_COMM_WORLD, &request[i]);
                }
            }
        }
    }
}
This loops through the file, grabs a new password, and sends the string to the child processes, at which point they receive it:
MPI_Recv(&length, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("string to be recieived has %d characters\n", length);
MPI_Recv(buffer, length+1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
printf("Process %d received %s\n", rank, buffer);
The child processes crack the password, then repeat with the next one (assuming there is one; currently it's an infinite loop, but I want to sort that out with 2 passwords before I fix it).
All processes receive the correct password the first time; it's when they grab the second password that only 1 process has the correct one and the rest receive a "/" character.
Alright, typical case of me getting worked up and overlooking something simple.
I'll leave this question up just in case anyone else happens to have the same issue.
I was forgetting to actually receive the solution after probing for it.
I was never fully clear on how probe differs from receive, but it seems probe just flags that something has arrived; to actually take it out of the "queue" you then need to collect it with a receive.
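In other words, in the rank 0 loop above, something along these lines (a rough sketch, using hypothetical flag/found variables) drains the notification before moving on:
// once MPI_Iprobe reports a pending message, post a matching MPI_Recv so the
// message is actually removed from the queue
int flag = 0, found = 0;
MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &flag, &status);
if (flag) {
    MPI_Recv(&found, 1, MPI_INT, status.MPI_SOURCE, status.MPI_TAG,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // ... now notify the other workers and move on to the next password ...
}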
So just today I started messing around with the MPI library in C. I've tried it out a bit and have now found myself in a situation where I need the following:
A routine that will send a message to a single random process that is waiting in a blocking receive, while leaving the others blocked.
Does such a routine exist? If not, how can something like this be accomplished?
No, such a routine does not exist. However, you can easily build one using the routines available in the MPI standard. For example, if you want a routine that sends to a random process other than the current one, you could write something like the following:
int MPI_SendRand(void *data, unsigned size, int tag, MPI_Comm comm) {
    // one process sends to a randomly chosen peer
    int comm_size, my_rank, dest;
    MPI_Comm_rank(comm, &my_rank);
    MPI_Comm_size(comm, &comm_size);
    // pick a random destination in [0, comm_size), excluding my_rank
    do {
        dest = rand() % comm_size;
    } while (dest == my_rank);
    // send the payload as raw bytes to the chosen destination
    return MPI_Send(data, (int) size, MPI_BYTE, dest, tag, comm);
}
It can be used as follows:
if (rank == master) {
    MPI_SendRand(some_data, some_size, 0, MPI_COMM_WORLD);
} else {
    // the rest wait in a blocking receive; only the randomly
    // chosen process will actually get a message
    MPI_Recv(some_buff, some_size, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // do work...
}