#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[]){
    int numtasks, rank, rc, count, tag = 1, i = 0;
    char inmsg, outmsg = 'x'; //message buffers; the message is the character 'x'
    MPI_Status Stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) //for process 0 we print received messages
    {
        for (i = 0; i < 9; i++){
            printf("value of i is: %d\n", i);
            rc = MPI_Recv(&inmsg, 1, MPI_CHAR, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &Stat);
            //note: count is never assigned, so the value printed below is indeterminate
            printf("Task %d: Received %d char(s) from task %d with tag %d \n", rank, count, Stat.MPI_SOURCE, Stat.MPI_TAG);
        }
    }
    else //for the other 9 processes
    {
        if (rank % 2 == 0){ //if rank is an even number
            rc = MPI_Send(&outmsg, 1, MPI_CHAR, 0, tag, MPI_COMM_WORLD); //send message to process with rank 0
        }
    }
    MPI_Finalize();
}
This program is run with 10 processes. The process with rank 0 receives messages and prints them out when the source process has an even-numbered rank. Processes with rank other than 0 send the process with rank 0 a message containing the character 'x'.
As for rank 0, it has a for loop that iterates 9 times; in each iteration it prints the value of the loop variable i, and then the received character and its source process.
However, when I run my program, it does not terminate.
The output looks like this:
Task 0: Received 0 char(s) from task 2 with tag 1
value of i is: 1
Task 0: Received 0 char(s) from task 6 with tag 1
value of i is: 2
Task 0: Received 0 char(s) from task 4 with tag 1
value of i is: 3
Task 0: Received 0 char(s) from task 8 with tag 1
value of i is: 4
How do I get it to print the other values of i, such as 5, 6, 7 and 8?
You're using a master/worker architecture for parallel processing: your process 0 is the master and waits for input from the 9 other processes, but in your code only the processes with an even rank actually send anything, namely processes 2, 4, 6 and 8.
You didn't define any send for processes 1, 3, 5, 7 and 9, so the master is still waiting for them; hence the program hangs, waiting for the parallel processes to finish.
You need to complete your source code here:
if (rank % 2 == 0){ //if rank is an even number
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, 0, tag, MPI_COMM_WORLD); //send message to process with rank 0
} else {
    //logic for processes 1, 3, 5, 7, 9
}
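For example, a minimal completion (just a sketch, reusing the outmsg and tag already in your code): have the odd ranks send a message as well, so that all nine of rank 0's receives are matched. Alternatively, keep only the even senders and shrink rank 0's loop to four iterations.

if (rank % 2 == 0){ //even ranks send 'x', as before
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, 0, tag, MPI_COMM_WORLD);
} else {
    //odd ranks must also send something, otherwise rank 0 blocks forever
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, 0, tag, MPI_COMM_WORLD);
}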
Just a general question:
I wanted to ask if there is any way to broadcast elements to only certain ranks in MPI without using the MPI_Send and MPI_Recv routines.
Let us start by looking at the description of the MPI_Bcast routine.
Broadcasts a message from the process with rank "root" to all other
processes of the communicator
The MPI_Bcast routine is a collective communication. Recall that:
Collective communication is a method of communication which involves
participation of all processes in a communicator.
Notice the phrase "all processes in a communicator". Therefore, one approach (to achieve what you want) is to create a subset composed of the processes that will participate in the broadcast routine. This subset can be materialized through the creation of a new MPI communicator. To create that communicator one can use the MPI function MPI_Comm_split. About that routine, one can read:
As the name implies, MPI_Comm_split creates new communicators by
"splitting" a communicator into a group of sub-communicators based on
the input values color and key. It's important to note here that the
original communicator doesn't go away, but a new communicator is
created on each process.

The first argument, comm, is the communicator that will be used as the
basis for the new communicators. This could be MPI_COMM_WORLD, but it
could be any other communicator as well.

The second argument, color, determines to which new communicator each
process will belong. All processes which pass in the same value for
color are assigned to the same communicator. If the color is
MPI_UNDEFINED, that process won't be included in any of the new
communicators.

The third argument, key, determines the ordering (rank) within each new
communicator. The process which passes in the smallest value for key
will be rank 0, the next smallest will be rank 1, and so on. If there
is a tie, the process that had the lower rank in the original
communicator will be first.

The final argument, newcomm, is how MPI returns the new communicator
back to the user.
Let us say we wanted only the processes with an even rank to participate in the MPI_Bcast. We would first create the communicator:
MPI_Comm new_comm;
int color = (world_rank % 2 == 0) ? 1 : MPI_UNDEFINED;
MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &new_comm);
and then call MPI_Bcast on the new communicator:
if (world_rank % 2 == 0){
    ....
    MPI_Bcast(&bcast_value, 1, MPI_INT, 0, new_comm);
    ...
}
At the end we would free the memory used by the communicator:
MPI_Comm_free(&new_comm);
A running code example:
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char *argv[]){
    MPI_Init(NULL, NULL); // Initialize the MPI environment
    int world_rank;
    int world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int bcast_value = world_rank;
    MPI_Bcast(&bcast_value, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = %d, bcast_value = %d \n", world_rank, bcast_value);

    MPI_Comm new_comm;
    int color = (world_rank % 2 == 0) ? 1 : MPI_UNDEFINED;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &new_comm);
    if (world_rank % 2 == 0){
        int new_comm_rank, new_comm_size;
        MPI_Comm_rank(new_comm, &new_comm_rank);
        MPI_Comm_size(new_comm, &new_comm_size);
        bcast_value = 1000;
        MPI_Bcast(&bcast_value, 1, MPI_INT, 0, new_comm);
        printf("MPI_Bcast 2 : MPI_COMM_WORLD ProcessID = %d, new_comm = %d, bcast_value = %d \n", world_rank, new_comm_rank, bcast_value);
        MPI_Comm_free(&new_comm);
    }
    MPI_Finalize();
    return 0;
}
This code example, showcases two MPI_Bcast calls, one with all the processes of the MPI_COMM_WORLD (i.e., MPI_Bcast 1) and another with only a subset of those processes (i.e., MPI_Bcast 2).
The output (for 8 processes):
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 0, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 4, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 5, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 6, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 7, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 1, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 2, bcast_value = 0
MPI_Bcast 1 : MPI_COMM_WORLD ProcessID = 3, bcast_value = 0
MPI_Bcast 2 : MPI_COMM_WORLD ProcessID = 0, new_comm = 0, bcast_value = 1000
MPI_Bcast 2 : MPI_COMM_WORLD ProcessID = 4, new_comm = 2, bcast_value = 1000
MPI_Bcast 2 : MPI_COMM_WORLD ProcessID = 2, new_comm = 1, bcast_value = 1000
MPI_Bcast 2 : MPI_COMM_WORLD ProcessID = 6, new_comm = 3, bcast_value = 1000
This program is written in the C language with MPI. I am new to MPI and want to use all processors, including process 0, to do some calculations. To learn this concept, I have written the following simple program. But this program hangs at the bottom after receiving input from process 0, and the results are never sent back to process 0.
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int number;
    int result;
    if (world_rank == 0)
    {
        number = -2;
        int i;
        for(i = 0; i < 4; i++)
        {
            MPI_Send(&number, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
        }
        for(i = 0; i < 4; i++)
        { /* Error: can't get the results sent by the other processes below */
            MPI_Recv(&number, 1, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Process 0 received number %d from i:%d\n", number, i);
        }
    }
    /* I want to do this without an else statement here, so that process 0 does some calculations as well */
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("*Process %d received number %d from process 0\n", world_rank, number);
    result = world_rank + 1;
    MPI_Send(&result, 1, MPI_INT, 0, 99, MPI_COMM_WORLD); /* the problem happens here, when trying to send the result back to process 0 */
    MPI_Finalize();
}
Running it and getting the results:
$ mpicc test.c -o test
$ mpirun -np 4 test
*Process 1 received number -2 from process 0
*Process 2 received number -2 from process 0
*Process 3 received number -2 from process 0
/* hangs here and will not continue */
If you can, please show me with an example, or edit the above code.
I don't really see what would be wrong with using two if statements surrounding the working domain. But anyway, here is an example of what could be done.
I modified your code to use collective communications, as they make much more sense than the series of sends/receives you used. Since the initial communications all carry a uniform value, I use an MPI_Bcast(), which does the same thing in one single call.
Conversely, since the result values are all different, a call to MPI_Gather() is perfectly appropriate.
I also introduce a call to sleep() just to simulate that the processes are working for a while, prior to sending back their results.
The code now looks like this:
#include <mpi.h>
#include <stdlib.h> // for malloc and free
#include <stdio.h> // for printf
#include <unistd.h> // for sleep
int main( int argc, char *argv[] ) {
    MPI_Init( &argc, &argv );
    int world_rank;
    MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );
    int world_size;
    MPI_Comm_size( MPI_COMM_WORLD, &world_size );

    // sending the same number to all processes via broadcast from process 0
    int number = world_rank == 0 ? -2 : 0;
    MPI_Bcast( &number, 1, MPI_INT, 0, MPI_COMM_WORLD );
    printf( "Process %d received %d from process 0\n", world_rank, number );

    // Do something useful here
    sleep( 1 );
    int my_result = world_rank + 1;

    // Now collecting individual results on process 0
    int *results = world_rank == 0 ? malloc( world_size * sizeof( int ) ) : NULL;
    MPI_Gather( &my_result, 1, MPI_INT, results, 1, MPI_INT, 0, MPI_COMM_WORLD );

    // Process 0 prints what it collected
    if ( world_rank == 0 ) {
        for ( int i = 0; i < world_size; i++ ) {
            printf( "Process 0 received result %d from process %d\n", results[i], i );
        }
        free( results );
    }
    MPI_Finalize();
    return 0;
}
After compiling it as follows:
$ mpicc -std=c99 simple_mpi.c -o simple_mpi
It runs and gives this:
$ mpiexec -n 4 ./simple_mpi
Process 0 received -2 from process 0
Process 1 received -2 from process 0
Process 3 received -2 from process 0
Process 2 received -2 from process 0
Process 0 received result 1 from process 0
Process 0 received result 2 from process 1
Process 0 received result 3 from process 2
Process 0 received result 4 from process 3
Actually, processes 1-3 are indeed sending the result back to processor 0. However, processor 0 is stuck in the first iteration of this loop:
for(i = 0; i < 4; i++)
{
    MPI_Recv(&number, 1, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process 0 received number %d from i:%d\n", number, i);
}
In the first MPI_Recv call, processor 0 blocks waiting to receive a message from itself with tag 99, a message that 0 has not sent yet, and never will, since its own tag-99 send only comes after this loop. This is a deadlock.
Generally, it is a bad idea for a processor to send/receive messages to itself, especially using blocking calls. Rank 0 already has the value in memory; it does not need to send it to itself.
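As an aside, if a process really must exchange a message with itself through blocking semantics, MPI_Sendrecv posts the send and the receive in one combined call, so it cannot deadlock against itself. A minimal sketch (the variable names are illustrative):

int outval = 42, inval; //illustrative send/receive values
//the send and the receive are posted together, so a rank can safely target itself
MPI_Sendrecv(&outval, 1, MPI_INT, world_rank, 0,
             &inval, 1, MPI_INT, world_rank, 0,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);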
However, a simpler workaround here is to start the receive loop from i = 1:
for(i = 1; i < 4; i++)
{
    MPI_Recv(&number, 1, MPI_INT, i, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process 0 received number %d from i:%d\n", number, i);
}
Running the code now will give you:
Process 1 received number -2 from process 0
Process 2 received number -2 from process 0
Process 3 received number -2 from process 0
Process 0 received number 2 from i:1
Process 0 received number 3 from i:2
Process 0 received number 4 from i:3
Process 0 received number -2 from process 0
Note that using MPI_Bcast and MPI_Gather, as mentioned by Gilles, is a much more efficient and standard way of distributing and collecting data.
How do I pair processes using MPI in C? It's a tree-structured approach: process 0 should be adding the values from all of the other even processes, which they are paired with. I only need it to work for powers of 2.
Should I be using MPI_Reduce instead of MPI_Send/MPI_Recv? If so, why?
My program doesn't seem to get past the for loop inside the first if statement. Why?
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
#include <mpi.h>
int main(void){
    int sum, comm_sz, my_rank, i, next, value;
    int divisor = 2;
    int core_difference = 1;

    MPI_Init(NULL, NULL);
    MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    srandom((unsigned)time(NULL) + my_rank);
    value = random() % 10;

    //process should receive and add
    if (my_rank % divisor == 0){
        printf("IF----");
        printf("Process %d generates: %d\n", my_rank, value);
        for (i = 0; i < comm_sz; i++)
        {
            MPI_Recv(&value, 1, MPI_INT, i, my_rank, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += value; //note: sum is never initialized before this point
            printf("Current Sum=: %d\n", sum);
        }
        printf("The NEW divisor is:%d\n", divisor);
        divisor *= 2;
        core_difference *= 2;
    }
    //sending the random value - no calculation
    else if (my_rank % divisor == core_difference){
        printf("ELSE----");
        printf("Process %d generates: %d\n", my_rank, value);
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    else
        if (my_rank == 0)
            printf("Sum=: %d\n", sum);

    MPI_Finalize();
    return 0;
}
The problem is that every even-ranked process begins its receive loop at i = 0, i.e. by receiving from rank 0, and rank 0 itself begins by trying to receive from itself, a message that is never sent. If I add a print statement before each send and receive showing the processes involved in the operation, here's the output:
$ mpiexec -n 8 ./a.out
IF----Process 0 generates: 5
ELSE----Process 1 generates: 1
ELSE----Process 3 generates: 1
IF----Process 4 generates: 9
ELSE----Process 5 generates: 7
IF----Process 6 generates: 2
ELSE----Process 7 generates: 0
0 RECV FROM 0
1 SEND TO 0
3 SEND TO 0
4 RECV FROM 0
5 SEND TO 0
6 RECV FROM 0
7 SEND TO 0
IF----Process 2 generates: 7
2 RECV FROM 0
1 SEND TO 0 DONE
3 SEND TO 0 DONE
5 SEND TO 0 DONE
7 SEND TO 0 DONE
Obviously, everyone is hanging while waiting for rank 0, including rank 0 itself. If you want to send to yourself, you'll need either MPI_Sendrecv, to do both the send and the receive at the same time, or nonblocking sends and receives (MPI_Isend/MPI_Irecv).
As you said, another option would be to use collectives, but if you do that, you'll need to create new subcommunicators. Collectives require all processes in the communicator to participate. You can't pick just a subset.
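That said, if the end goal is simply the sum of every rank's value on rank 0, a single MPI_Reduce over MPI_COMM_WORLD performs the whole tree-structured combining internally, with no pairing logic at all. A minimal sketch, reusing the value and my_rank variables from the question:

int sum = 0; //accumulator (note: the original code never initializes sum)
//every rank contributes its value; MPI does the tree-structured reduction internally
MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
if (my_rank == 0)
    printf("Sum=: %d\n", sum);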
I wrote an MPI program where a message goes around the processors in a ring x number of times (for example, if I wanted it to go twice around a "ring" of four processors, it would visit 0, 1, 2, 3, 0, 1, ..., 3).
Everything compiled fine, but when I ran the program on my Ubuntu VM it would never output anything; it wouldn't even print the first output. Can anyone explain what's going on?
This is my code:
#include <stdio.h>
#include <mpi.h>
int main(int argc, char **argv){
    int rank, size, tag, next, from, num;
    tag = 201;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) / size;
    from = (rank - 1) / size;

    if (rank == 0){
        printf("How many times around the ring? :: ");
        scanf("%d", &num);
        MPI_Send(&num, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    }
    do{
        MPI_Recv(&num, 1, MPI_INT, from, tag, MPI_COMM_WORLD, &status);
        printf("Process %d received %d from process %d\n", rank, num, status.MPI_SOURCE);
        if (rank == 0){
            num--;
            printf("Process 0 has decremented the number\n");
        }
        printf("Process %d sending %d to process %d\n", rank, num, next);
        MPI_Send(&num, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
    } while (num > 0);
    printf("Process %d has exited", rank);
    if (rank == 0){
        MPI_Recv(&num, 1, MPI_INT, size - 1, tag, MPI_COMM_WORLD, &status);
        printf("Process 0 has received the last round, exiting");
    }
    MPI_Finalize();
    return 0;
}
There's a problem with your neighbour assignment. If we insert the following line after the next/from calculation
printf("Rank %d: from = %d, next = %d\n", rank, from, next);
we get:
$ mpirun -np 4 ./ring
Rank 0: from = 0, next = 0
Rank 1: from = 0, next = 0
Rank 2: from = 0, next = 0
Rank 3: from = 0, next = 1
You want something more like
next = (rank + 1) % size;
from = (rank - 1 + size) % size;
which gives
$ mpirun -np 4 ./ring
Rank 0: from = 3, next = 1
Rank 1: from = 0, next = 2
Rank 2: from = 1, next = 3
Rank 3: from = 2, next = 0
and after that your code seems to work.
Regardless of whether your code is correct, your first printf should produce output.
If no messages are printed at all, not even the printf in the if (rank == 0) block, then it could be a problem with your VM. Are you sure you have a network interface activated on that VM?
If the answer is yes, it might be useful to check its compatibility with MPI via the Open MPI FAQ on TCP questions. Section 7 ("How do I tell Open MPI which TCP networks to use?") and section 13 ("Does Open MPI support virtual IP interfaces?") both seem relevant to possible problems with running MPI in a virtual machine.
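For example (assuming Open MPI, and an interface name of eth0, which you would substitute with whatever your VM actually has), the FAQ's suggestion boils down to pinning the TCP transport to a known-good interface:

$ mpirun --mca btl_tcp_if_include eth0 -np 4 ./ring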
I've just been experimenting with MPI, and copied and ran this code, taken from the second code example in the LLNL MPI tutorial.
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>
int main(int argc, char ** argv) {
    int num_tasks, rank, next, prev, buf[2], tag1 = 1, tag2 = 2;
    MPI_Request reqs[4];
    MPI_Status status[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_tasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    prev = rank - 1;
    next = rank + 1;
    if (rank == 0) prev = num_tasks - 1;
    if (rank == (num_tasks - 1)) next = 0;

    MPI_Irecv(&buf[0], 1, MPI_INT, prev, tag1, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf[1], 1, MPI_INT, next, tag2, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&rank, 1, MPI_INT, prev, tag2, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&rank, 1, MPI_INT, next, tag1, MPI_COMM_WORLD, &reqs[3]);
    MPI_Waitall(4, reqs, status);

    printf("Task %d received %d from %d and %d from %d\n",
           rank, buf[0], prev, buf[1], next);
    MPI_Finalize();
    return EXIT_SUCCESS;
}
I would have expected an output like this (for, say, 4 tasks):
$ mpiexec -n 4 ./m3
Task 0 received 3 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 2 received 1 from 1 and 3 from 3
Task 3 received 2 from 2 and 0 from 0
However, instead, I get this:
$ mpiexec -n 4 ./m3
Task 0 received 0 from 3 and 1 from 1
Task 1 received 0 from 0 and 2 from 2
Task 3 received 0 from 2 and 0 from 0
Task 2 received 0 from 1 and 3 from 3
That is, the message (with tag == 1) going into buffer buf[0] always gets the value 0. Moreover, if I alter the code so that I declare the buffer as buf[3] rather than buf[2], and replace each instance of buf[0] with buf[2], then I get precisely the output I would have expected (i.e., the first output set given above). It looks as if, for some reason, something is overwriting the value in buf[0] with 0, but I can't see what that might be. BTW, as far as I can tell, my code (without the modification) exactly matches the code in the tutorial, except for my printf.
Thanks!
The array of statuses must be of size 4, not 2: MPI_Waitall(4, reqs, status) writes one MPI_Status per request. In your case, MPI_Waitall corrupts memory when writing the statuses; the two extra entries land in whatever happens to sit next to status on the stack, which is consistent with buf[0] being clobbered with 0, and with the problem disappearing once you enlarge the buffer.
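A minimal fix, in either of two forms: size the status array to match the number of requests, or tell MPI_Waitall that you don't need the statuses at all:

MPI_Status status[4]; //one MPI_Status per request handed to MPI_Waitall

or

MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE); //when the statuses are never inspected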