I am trying to learn to use MPI. Below is my simple program to test MPI scatter and gather. I don't understand how it works and why it produces the result
1 2 3 4 4 5 6 7 8 9 10 11
instead of expected
1 2 3 4 5 6 7 8 9 10 11 12
The documentation and all the examples I can find are too complicated or too poorly worded for me to understand. I just want to scatter an array across 3 processes and add one to each value in each process. Alternatively, I would be happy to see a 2D array sent row by row to each process, with each row processed simply.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;                       // my process ID
    int size = 3;                   // number of processes/nodes
    MPI_Status status;
    MPI_Init(&argc, &argv);         // start MPI
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // initialize MPI
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    unsigned char inData[12];       // data returned after being "processed"
    unsigned char outData[12];      // buffer for receiving data
    unsigned long datasize = 12;    // size of data to process
    unsigned char testData[12];     // data to be processed
    if (rank == 0) {
        // initialize data
        for (int i = 0; i < datasize; i++) {
            testData[i] = i;
            outData[i] = 0;
            inData[i] = 0;
        }
    }
    // scatter the data to the processes
    // I am not clear about the numbers sent in and out
    MPI_Scatter(&testData, 12, MPI_UNSIGNED_CHAR, &outData,
                12, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    // process data
    for (int i = 0; i < 4; i++) { outData[i] = outData[i] + 1; }
    MPI_Barrier(MPI_COMM_WORLD);
    // gather processed data
    MPI_Gather(&outData, 12, MPI_UNSIGNED_CHAR, &inData,
               12, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
    // print processed data from root
    if (rank == 0) {
        for (int i = 0; i < 12; i++) {
            printf("\n%d", inData[i]);
        }
        MPI_Finalize();
    }
    return 0;
}
Though your main error is using 12 instead of 4, let's do it step-by-step.
// int size = 3; // number of processes/nodes
int size;
...
MPI_Comm_size(MPI_COMM_WORLD, &size); // initialize MPI
assert(size == 3);
There is no point in setting size to 3. This value will be overwritten by MPI_Comm_size with the actual number of processes. This number is determined by how you run your MPI application (e.g. mpirun -np 3).
//unsigned char outData[12]; // buffer for receiving data
unsigned char outData[4];
We have 12 elements and 3 processes, i.e. 4 elements per process. So 4 elements are enough for outData.
outData[i] = 0;
inData[i] = 0;
There is no point in zeroing these buffers, they will be overwritten.
// scatter the data to the processes
// I am not clear about the numbers sent in and out
MPI_Scatter(&testData, 4 /*12*/, MPI_UNSIGNED_CHAR, &outData,
4 /*12*/, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
We have 4 elements per process, so the count should be 4, not 12.
MPI_Barrier(MPI_COMM_WORLD);
You don't need the barriers here: MPI_Scatter and MPI_Gather are collective operations, and the data dependencies between them already provide all the ordering you need.
MPI_Gather(&outData, 4 /*12*/, MPI_UNSIGNED_CHAR, &inData,
4 /*12*/, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);
Same story, 4 instead of 12.
MPI_Finalize();
This should be called by all processes, not only by rank 0, so move it out of the if (rank == 0) block.
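Putting these fixes together, a corrected sketch of your program (assuming it is launched with exactly 3 processes, e.g. mpirun -np 3, so the 12 elements split into chunks of 4):

#include <stdio.h>
#include <assert.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    assert(size == 3);                  /* 12 elements / 3 processes = 4 per process */

    unsigned char testData[12];         /* full input, significant only on the root */
    unsigned char inData[12];           /* gathered result, significant only on the root */
    unsigned char outData[4];           /* local chunk on every process */

    if (rank == 0)
        for (int i = 0; i < 12; i++)
            testData[i] = i;

    /* each process receives 4 of the root's 12 elements */
    MPI_Scatter(testData, 4, MPI_UNSIGNED_CHAR,
                outData, 4, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    /* "process" the local chunk */
    for (int i = 0; i < 4; i++)
        outData[i] = outData[i] + 1;

    /* collect the 4-element chunks back into the root's 12-element buffer */
    MPI_Gather(outData, 4, MPI_UNSIGNED_CHAR,
               inData, 4, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < 12; i++)
            printf("%d\n", inData[i]);

    MPI_Finalize();                     /* called by every process */
    return 0;
}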
I am just trying to get my head around MPI and can't seem to understand why the following program's output is different from what I expect.
int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
int *sendbuf, *recvbuf;
sendbuf = (int *) malloc(sizeof(int) * size);
recvbuf = (int *) malloc(sizeof(int) * size);
for(int i = 0; i < size; i++) {
sendbuf[i] = rank;
}
for(int i = 0; i < size; i++) {
printf("sendbuf[%d] = %d, rank: %d\n", i, sendbuf[i], rank);
}
MPI_Scatter(sendbuf, 1, MPI_INT,
recvbuf, 1, MPI_INT, rank, MPI_COMM_WORLD);
for(int i = 0; i < size; i++) {
printf("recvbuf[%d] = %d, rank: %d\n", i, recvbuf[i], rank);
}
As far as I understood, MPI_Scatter sends sendcount values from an array to all processes.
In my example I gave each process an array filled with the own rank number.
Then each process sends one of the indexes in its array to all other processes. With two processes the first process (rank 0) has a sendbuf array of:
sendbuf[0] = 0
sendbuf[1] = 0
And the second process (rank 1) has an array of size MPI_Comm_size filled with 1.
The expected output should be:
recvbuf[0] = 0, rank: 0
recvbuf[1] = 1, rank: 0
recvbuf[0] = 0, rank: 1
recvbuf[1] = 1, rank: 1
But instead I get the following output (for two processes):
sendbuf[0] = 0, rank: 0
sendbuf[1] = 0, rank: 0
sendbuf[0] = 1, rank: 1
sendbuf[1] = 1, rank: 1
recvbuf[0] = 0, rank: 0
recvbuf[1] = 32690, rank: 0
recvbuf[0] = 1, rank: 1
recvbuf[1] = 32530, rank: 1
Any help pointing out my mistake is well appreciated.
I am just trying to get my head around MPI and can't seem to understand why the following program's output is different from what I expect.
The problem lies in the use of MPI_Scatter to accomplish your goal:
Sends data from one process to all other processes in a communicator
Synopsis
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
                void *recvbuf, int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm)
Input Parameters
sendbuf address of send buffer (choice, significant only at root)
sendcount number of elements sent to each process (integer, significant only at root)
sendtype data type of send buffer elements (significant only at root) (handle)
recvcount number of elements in receive buffer (integer)
recvtype data type of receive buffer elements (handle)
root rank of sending process (integer)
comm communicator (handle)
Every process should call MPI_Scatter with the same root, not with a different root (i.e., the process's own rank) as you have done:
MPI_Scatter(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, rank /* <-- should be the same fixed root on every process, e.g. 0 */, MPI_COMM_WORLD);
Therefore, you are misusing MPI_Scatter; the purpose of that routine is to send data from one process to all other processes in a communicator. The following image (taken from source) illustrates it best:
Only one root process, which scatters its data across the different processes. This routine is used, for instance, when one process has a chunk of data (e.g., an array) and the code performs some operation over that data. You can parallelize the code by splitting the data among the processes, where each process performs the aforementioned operation in parallel on its assigned data chunk. Afterward, you might call MPI_Gather to gather the data from all the processes back to the original process where that data came from.
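To make that pattern concrete, here is a small self-contained sketch of scatter, local work, then gather (the chunk size of 4 and the doubling step are placeholders chosen only for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_per_proc = 4;              /* assumed chunk size per process */
    int *data = NULL;
    if (rank == 0) {                       /* only the root owns the full array */
        data = malloc(sizeof(int) * n_per_proc * size);
        for (int i = 0; i < n_per_proc * size; i++)
            data[i] = i;
    }

    int local[4];                          /* = n_per_proc; every rank, including the root, gets a chunk */
    MPI_Scatter(data, n_per_proc, MPI_INT, local, n_per_proc, MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < n_per_proc; i++)   /* each rank works only on its own chunk */
        local[i] *= 2;

    MPI_Gather(local, n_per_proc, MPI_INT, data, n_per_proc, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < n_per_proc * size; i++)
            printf("%d ", data[i]);
        printf("\n");
        free(data);
    }
    MPI_Finalize();
    return 0;
}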
Then each process sends one of the indexes in its array to all other processes.
For that you can use MPI_Allgather instead, which "Gathers data from all tasks and distribute the combined data to all tasks". The following image (taken from source) illustrates it best:
As you can see, each process will gather the data sent by all processes (including itself).
A running example:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char **argv){
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int *sendbuf = malloc(sizeof(int) * size);
    int *recvbuf = malloc(sizeof(int) * size);
    for(int i = 0; i < size; i++)
        sendbuf[i] = rank;
    MPI_Allgather(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);
    for(int i = 0; i < size; i++)
        printf("recvbuf[%d] = %d, rank: %d\n", i, recvbuf[i], rank);
    MPI_Finalize();
    return 0;
}
OUTPUT for two processes:
recvbuf[0] = 0, rank: 0
recvbuf[1] = 1, rank: 0
recvbuf[0] = 0, rank: 1
recvbuf[1] = 1, rank: 1
For your particular case (with the same input size), MPI_Alltoall would also work; to understand the differences between MPI_Allgather and MPI_Alltoall, I recommend checking this SO thread. A short sketch of the MPI_Alltoall variant follows.
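This is only a drop-in replacement for the MPI_Allgather line in the example above, and it happens to give the same result here solely because every slot of sendbuf holds the same value (the rank):

/* rank i sends sendbuf[j] to rank j and receives rank j's sendbuf[i] into recvbuf[j] */
MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);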
I'm trying to write a MPI program that calculates the sum of an array of integers.
For this purpose I used MPI_Scatter to send chunks of the array to the other processes, and then MPI_Gather so the root process (process 0) can collect the sum of each chunk.
The problem is one of the processes receives two elements but the other one receives random numbers. I'm running my code with 3 processes.
Here is what I have:
#include <stdio.h>
#include <mpi.h>
int main(int argc,char *argv[]){
    MPI_Init(NULL,NULL); // Initialize the MPI environment
    int world_rank;
    int world_size;
    MPI_Comm_rank(MPI_COMM_WORLD,&world_rank);
    MPI_Comm_size(MPI_COMM_WORLD,&world_size);
    int number1[2]; //buffer for processes
    int sub_sum = 0;
    int sub_sums[2];
    int sum;
    int number[4];
    if(world_rank == 0){
        number[0]=1;
        number[1]=3;
        number[2]=5;
        number[3]=9;
    }
    //All processes
    MPI_Scatter(number, 2, MPI_INT, &number1, 2, MPI_INT, 0, MPI_COMM_WORLD);
    if(world_rank!=0){
        printf("I'm process %d , I received the array : ",world_rank);
        for(int i=0 ; i<2 ; i++){
            printf("%d ",number1[i]);
            sub_sum = sub_sum + number1[i];
        }
        printf("\n");
    }
    MPI_Gather(&sub_sum, 1, MPI_INT, &sub_sums, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if(world_rank == 0){
        sum=0;
        for(int i=0; i<2; i++){
            sum += sub_sums[i];
        }
        printf("\nthe sum of array is: %d\n",sum);
    }
    MPI_Finalize();
    return 0;
}
The result:
I'm process 1 , I received the array : 5 9
I'm process 2 , I received the array : 1494772352 32767
the sum of array is: 14
It seems that you misunderstood how MPI works; your code is hardcoded to work (correctly) with only two processes. However, you are trying to run the code with 3 processes, under the wrong assumption that during the MPI_Scatter call the root rank only sends the data to the other processes. If you look at the following image (taken from source):
you notice that the root rank (i.e., rank = 0) also receives part of the data.
The problem is one of the processes receives two elements but the
other one receives random numbers.
MPI_Scatter(number, 2, MPI_INT, &number1, 2, MPI_INT, 0, MPI_COMM_WORLD);
So you have hardcoded the input number = {1, 3, 5, 9} (only 4 elements); what happens during the MPI_Scatter call is that process 0 gets the first two elements of number (i.e., {1, 3}), process 1 gets the other two elements (i.e., {5, 9}), and process 2 gets whatever random values happen to be in its receive buffer, consequently:
I'm process 2 , I received the array : 1494772352 32767
You get
the sum of array is: 14
because the array sub_sums holds the sums computed by process 0, which is zero since you excluded it with if(world_rank != 0), and by process 1, which is 5 + 9 = 14. Hence, 0 + 14 = 14.
To fix this you need to remove if(world_rank!=0) from:
if(world_rank!=0){
    printf("I'm process %d , I received the array : ",world_rank);
    for(int i=0 ; i<2 ; i++){
        printf("%d ",number1[i]);
        sub_sum = sub_sum + number1[i];
    }
    printf("\n");
}
and run your code with only 2 processes.
For the last step, instead of MPI_Gather you can use MPI_Reduce to perform the sum in parallel and collect the value directly on the root rank. Consequently, you would not need to perform the sum manually on the root rank.
A running example:
#include <stdio.h>
#include <mpi.h>

int main(int argc,char *argv[]){
    MPI_Init(NULL,NULL); // Initialize the MPI environment
    int world_rank;
    int world_size;
    MPI_Comm_rank(MPI_COMM_WORLD,&world_rank);
    MPI_Comm_size(MPI_COMM_WORLD,&world_size);
    int number1[2];
    int number[4];
    if(world_rank == 0){
        number[0]=1;
        number[1]=3;
        number[2]=5;
        number[3]=9;
    }
    //All processes
    MPI_Scatter(number, 2, MPI_INT, &number1, 2, MPI_INT, 0, MPI_COMM_WORLD);
    printf("I'm process %d , I received the array : ",world_rank);
    int sub_sum = 0;
    for(int i=0 ; i<2 ; i++){
        printf("%d ",number1[i]);
        sub_sum = sub_sum + number1[i];
    }
    printf("\n");
    int sum = 0;
    MPI_Reduce(&sub_sum, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if(world_rank == 0)
        printf("\nthe sum of array is: %d\n",sum);
    MPI_Finalize();
    return 0;
}
Input : {1,3,5,9} running with 2 processes
Output
I'm process 0 , I received the array : 1 3
I'm process 1 , I received the array : 5 9
the sum of array is: 18
If you really want only processes 1 and 2 to receive the data and perform the sum, I would suggest looking into the routines MPI_Send and MPI_Recv; a rough sketch follows.
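A minimal point-to-point sketch of that idea, assuming exactly 3 processes and the question's hardcoded 4-element array (rank 0 only distributes the chunks and collects the partial sums; ranks 1 and 2 do the adding):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* assumed: launched with -np 3 */

    if (rank == 0) {
        int number[4] = {1, 3, 5, 9};
        /* hand two elements to rank 1 and two to rank 2 */
        MPI_Send(&number[0], 2, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&number[2], 2, MPI_INT, 2, 0, MPI_COMM_WORLD);

        int sum = 0, partial;
        for (int src = 1; src <= 2; src++) {
            MPI_Recv(&partial, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            sum += partial;
        }
        printf("the sum of array is: %d\n", sum);
    } else if (rank == 1 || rank == 2) {
        int chunk[2];
        MPI_Recv(chunk, 2, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        int partial = chunk[0] + chunk[1];
        MPI_Send(&partial, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}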
I am new to MPI and have written the following program in C. Instead of using pointers, I would like to declare my arrays as shown below. My first array element reads correctly; after that, the array elements are not read correctly. Can you please tell me if this is not the correct way of using scatter and gather?
Following is the result I get:
$ mpicc test.c -o test
$ mpirun -np 4 test
1. Processor 0 has data 0 1 2 3
2. Processor 0 has data 0
3. Processor 0 doubling the data, now has 5
2. Processor 1 has data 32767
3. Processor 1 doubling the data, now has 5
2. Processor 2 has data -437713961
3. Processor 2 doubling the data, now has 5
2. Processor 3 has data 60
3. Processor 3 doubling the data, now has 5
4. Processor 0 has data: 5 1 2 3
Correct Result should be:
$ mpicc test.c -o test
$ mpirun -np 4 test
1. Processor 0 has data 0 1 2 3
2. Processor 0 has data 0
3. Processor 0 doubling the data, now has 5
2. Processor 1 has data 1
3. Processor 1 doubling the data, now has 5
2. Processor 2 has data 2
3. Processor 2 doubling the data, now has 5
2. Processor 3 has data 3
3. Processor 3 doubling the data, now has 5
4. Processor 0 has data: 5 5 5 5
Any help would be greatly appreciated. The following code is run using 4 processors:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int globaldata[4]; /* wants to declare array this way */
    int localdata[4];  /* without using pointers */
    int i;
    if (rank == 0) {
        for (i = 0; i < size; i++)
            globaldata[i] = i;
        printf("1. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }
    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("2. Processor %d has data %d\n", rank, localdata[rank]);
    localdata[rank] = 5;
    printf("3. Processor %d now has %d\n", rank, localdata[rank]);
    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("4. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
Your setup and your scatter are in principle OK. Your problem is in the printing, as you have misunderstood a detail of scatter/gather here.
When scattering the 4-element array, each process gets only one element (as you specify with the 2nd and 5th arguments of the MPI_Scatter() call). This element is stored at index 0 of the local array; it is effectively a scalar.
In general, you may scatter very big arrays and each process may still have to process a big local array. In these cases it is essential to correctly calculate the global indices and the local indices.
Assume the following toy problem: you want to scatter the array [1 2 3 4 5 6] to two processes. Proc0 should have the [1 2 3] part and Proc1 should have the [4 5 6] part. In this case, the global array has size 6 and the local arrays have size 3. Proc0 gets the global elements 0, 1, 2 and assigns them to its local 0, 1, 2; Proc1 gets the global elements 3, 4, 5 and assigns them to its local 0, 1, 2.
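A minimal sketch of that toy problem, assuming exactly 2 processes, with the global/local index mapping written out in a comment:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* assumed: launched with -np 2 */

    int global[6] = {1, 2, 3, 4, 5, 6};     /* significant only on the root */
    int local[3];                           /* local_n = 6 / 2 = 3 */

    MPI_Scatter(global, 3, MPI_INT, local, 3, MPI_INT, 0, MPI_COMM_WORLD);

    /* local index i on this rank corresponds to global index rank * 3 + i */
    for (int i = 0; i < 3; i++)
        printf("rank %d: local[%d] = %d (global index %d)\n",
               rank, i, local[i], rank * 3 + i);

    MPI_Finalize();
    return 0;
}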
You will probably understand this concept better when you learn about MPI_Scatterv, which doesn't assume the same number of local elements for every process; a small sketch of it follows.
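A minimal MPI_Scatterv sketch with uneven chunks, assuming 7 elements split as 3/2/2 across exactly 3 processes (the counts, displacements and values are made up for illustration):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);         /* assumed: launched with -np 3 */

    int global[7] = {10, 11, 12, 13, 14, 15, 16}; /* significant only on the root */
    int counts[3] = {3, 2, 2};                    /* how many elements each rank receives */
    int displs[3] = {0, 3, 5};                    /* where each rank's chunk starts in global[] */

    int local[3];                                 /* big enough for the largest chunk */
    MPI_Scatterv(global, counts, displs, MPI_INT,
                 local, counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < counts[rank]; i++)
        printf("rank %d got %d\n", rank, local[i]);

    MPI_Finalize();
    return 0;
}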
This version of your code seems to work:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    int size, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int globaldata[4]; /* wants to declare array this way */
    int localdata;     /* without using pointers */
    int i;
    if (rank == 0) {
        for (i = 0; i < size; i++)
            globaldata[i] = i;
        printf("1. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }
    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("2. Processor %d has data %d\n", rank, localdata);
    localdata = 5;
    printf("3. Processor %d now has %d\n", rank, localdata);
    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        printf("4. Processor %d has data: ", rank);
        for (i = 0; i < size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
Enjoy learning MPI! :-)
If I have this code:
int main(void) {
    int result = 0;
    int num[6] = {1, 2, 4, 3, 7, 1};
    if (my_rank != 0) {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
    } else {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
        printf("result = %d\n", result);
    }
}
the printed result is 1;
But if I set num[0] = 9, then the result is 9.
I read that to solve this problem I must define the variable result as an array.
I can't understand how the function MPI_Reduce works with MPI_MIN. Why, if num[0] is not the smallest number, must I define result as an array?
MPI_Reduce performs a reduction over the members of the communicator - not the members of the local array. sendbuf and recvbuf must both be of the same size.
I think the standard says it best:
Thus, all processes provide input buffers and output buffers of the same length, with elements of the same type. Each process can provide one element, or a sequence of elements, in which case the combine operation is executed element-wise on each entry of the sequence.
MPI does not take the minimum over all elements in the array; you have to do that part manually, as in the sketch below.
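A minimal sketch of that, assuming every rank holds its own 6-element num array: the receive buffer is also 6 elements, MPI_Reduce computes the element-wise minimum across the ranks, and a plain local loop then finds the minimum of that array:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int num[6] = {1, 2, 4, 3, 7, 1};
    int result[6];                       /* recvbuf must match sendbuf: 6 elements */

    /* result[i] = minimum over all ranks of their num[i] */
    MPI_Reduce(num, result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        int min = result[0];
        for (int i = 1; i < 6; i++)      /* the minimum over the array is a plain local loop */
            if (result[i] < min)
                min = result[i];
        printf("overall min = %d\n", min);
    }
    MPI_Finalize();
    return 0;
}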
You can use MPI_MIN to obtain the min value among those passed via reduction.
Let's examine the function declaration:
int MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype,
               MPI_Op op, int root, MPI_Comm comm)
Each process sends its value (or array of values) using the buffer sendbuf.
The process identified by the root id receives the buffers and combines them, element by element according to op, into the buffer recvbuf. The number of elements to receive from each of the other processes is specified in count, so recvbuf must be allocated with dimension sizeof(datatype)*count.
If each process has only one integer to send (count = 1), then recvbuf is also a single integer; if each process has two integers, then recvbuf is an array of two integers. See this nice post for further explanations and nice pictures.
Now it should be clear that your code is wrong: sendbuf and recvbuf must be of the same size, and there is no need for the if/else on the rank. Simply put, recvbuf is significant only on the root process, while every process (the root included) provides sendbuf.
In your example, you can assign one or more elements of the array to each process and then compute the minimum value (if there are as many processes as values in the array) or the array of minimum values (if there are more values than processes).
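As a small hedged sketch of the scalar variant, reusing num and my_rank from the question's code: each rank first reduces its own six values to a local minimum, and a single-element MPI_MIN reduction then combines the per-rank minima:

/* sketch, reusing num[6] and my_rank from the question's code */
int local_min = num[0];
for (int i = 1; i < 6; i++)          /* the per-array minimum is still a plain local loop */
    if (num[i] < local_min)
        local_min = num[i];

int global_min;
MPI_Reduce(&local_min, &global_min, 1, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
if (my_rank == 0)
    printf("global min = %d\n", global_min);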
Here is a working example that illustrates the usage of MPI_MIN, MPI_MAX and MPI_SUM (slightly modified from this), in the case of simple values (not arrays).
Each process does some work, depending on its rank, and sends the root process the time spent doing that work. The root process collects the times and outputs the min, max and average of the times.
#include <stdio.h>
#include <mpi.h>
int myrank, numprocs;

/* just a function to waste some time */
float work()
{
    float x, y = 0;
    if (myrank % 2) {
        for (int i = 0; i < 100000000; ++i) {
            x = i / 0.001;
            y += x;
        }
    } else {
        for (int i = 0; i < 100000; ++i) {
            x = i / 0.001;
            y += x;
        }
    }
    return y;
}

int main(int argc, char **argv)
{
    int node;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    printf("Hello World from Node %d\n", node);

    /* variables used for gathering timing statistics */
    double mytime, maxtime, mintime, avgtime;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize all processes */
    mytime = MPI_Wtime();          /* get time just before work section */
    work();
    mytime = MPI_Wtime() - mytime; /* get time just after work section */

    /* compute max, min, and average timing statistics */
    MPI_Reduce(&mytime, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &mintime, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &avgtime, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    /* print the output */
    if (myrank == 0) {
        avgtime /= numprocs;
        printf("Min: %lf Max: %lf Avg: %lf\n", mintime, maxtime, avgtime);
    }
    MPI_Finalize();
    return 0;
}
If I run this on my OSX laptop, this is what I get:
urcaurca$ mpirun -n 4 ./a.out
Hello World from Node 3
Hello World from Node 0
Hello World from Node 2
Hello World from Node 1
Min: 0.000974 Max: 0.985291 Avg: 0.493081
I am new to MPI and I am trying to write a program that uses MPI_Scatter. I have 4 nodes (0, 1, 2, 3). Node 0 is the master, the others are slaves. The master asks the user for the number of elements to send to each slave, then creates an array of size (number of elements * 4). Then every node prints its results.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#define MASTER 0
int main(int argc, char **argv) {
    int id, nproc, len, numberE, i, sizeArray;
    int *arrayN = NULL;
    int arrayNlocal[sizeArray];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (id == MASTER){
        printf("Enter number of elements: ");
        scanf("%d", &numberE);
        sizeArray = numberE * 4;
        arrayN = malloc(numberE * sizeof(int));
        for (i = 0; i < sizeArray; i++){
            arrayN[i] = i + 1;
        }
    }
    MPI_Scatter(arrayN, numberE, MPI_INT, &arrayNlocal, numberE, MPI_INT, MPI_COMM_WORLD);
    printf("Node %d has: ", id);
    for (i = 0; i < numberE; i++){
        printf("%d ", arrayNlocal[i]);
    }
    MPI_Finalize();
    return 0;
}
And as an error I get:
BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
PID 9278 RUNNING AT 192.168.100.100
EXIT CODE: 139
CLEANING UP REMAINING PROCESSES
YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
In int arrayNlocal[sizeArray];, sizeArray is not initialized. The best way to go is to broadcast numberE to every process and then allocate memory for arrayNlocal. Something like:
MPI_Bcast(&numberE, 1, MPI_INT, 0, MPI_COMM_WORLD);
arrayN is an array of size sizeArray = numberE * 4, so:
arrayN = malloc(sizeArray * sizeof(int));
MPI_Scatter() needs a pointer to the data to be sent on the root node and a pointer to the receive buffer on each process of the communicator. Since arrayNlocal is an array:
MPI_Scatter(arrayN, numberE, MPI_INT, arrayNlocal, numberE,MPI_INT,MASTER, MPI_COMM_WORLD);
or alternatively:
MPI_Scatter(arrayN, numberE, MPI_INT, &arrayNlocal[0], numberE,MPI_INT,MASTER, MPI_COMM_WORLD);
id is not initialized when id == MASTER is evaluated: MPI_Comm_rank stores the rank into rank, not id, so either pass &id to MPI_Comm_rank or use rank == MASTER consistently.
As is, the output printed at the end by the different processes may be interleaved.
Try to compile your code using mpicc main.c -o main -Wall to enable all warnings: it can save you a few hours in the near future!
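Putting these fixes together, one possible corrected sketch (keeping the question's variable names, broadcasting numberE, and using nproc instead of the hard-coded 4):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#define MASTER 0

int main(int argc, char **argv) {
    int id, nproc, numberE, i, sizeArray;
    int *arrayN = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    if (id == MASTER) {
        printf("Enter number of elements: ");
        fflush(stdout);
        scanf("%d", &numberE);
        sizeArray = numberE * nproc;           /* numberE elements for each of the nproc ranks */
        arrayN = malloc(sizeArray * sizeof(int));
        for (i = 0; i < sizeArray; i++)
            arrayN[i] = i + 1;
    }

    /* every rank needs numberE before it can size its receive buffer */
    MPI_Bcast(&numberE, 1, MPI_INT, MASTER, MPI_COMM_WORLD);
    int *arrayNlocal = malloc(numberE * sizeof(int));

    MPI_Scatter(arrayN, numberE, MPI_INT, arrayNlocal, numberE, MPI_INT,
                MASTER, MPI_COMM_WORLD);

    printf("Node %d has: ", id);
    for (i = 0; i < numberE; i++)
        printf("%d ", arrayNlocal[i]);
    printf("\n");

    free(arrayNlocal);
    free(arrayN);                              /* freeing NULL on non-root ranks is fine */
    MPI_Finalize();
    return 0;
}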