I am new to MPI and I am trying to write a program that uses MPI_Scatter. I have 4 nodes (0, 1, 2, 3). Node 0 is the master, the others are slaves. The master asks the user for the number of elements of the array to send to the slaves, then creates an array of size (number of elements * 4). Then every node prints its results.
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#define MASTER 0
int main(int argc, char **argv) {
    int id, nproc, len, numberE, i, sizeArray;
    int *arrayN = NULL;
    int arrayNlocal[sizeArray];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (id == MASTER){
        printf("Enter number of elements: ");
        scanf("%d", &numberE);
        sizeArray = numberE * 4;
        arrayN = malloc(numberE * sizeof(int));
        for (i = 0; i < sizeArray; i++){
            arrayN[i] = i + 1;
        }
    }

    MPI_Scatter(arrayN, numberE, MPI_INT, &arrayNlocal, numberE, MPI_INT, MPI_COMM_WORLD);

    printf("Node %d has: ", id);
    for (i = 0; i < numberE; i++){
        printf("%d ", arrayNlocal[i]);
    }

    MPI_Finalize();
    return 0;
}
And this is the error I get:
BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
PID 9278 RUNNING AT 192.168.100.100
EXIT CODE: 139
CLEANING UP REMAINING PROCESSES
YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
In arrayNlocal[sizeArray];, sizeArray is not initialized. The best way to go is to broadcast numberE to every process and then allocate memory for arrayNlocal. Something like:
MPI_Bcast(&numberE, 1, MPI_INT, 0, MPI_COMM_WORLD);
arrayN is an array of size sizeArray = numberE * 4, so:
arrayN = malloc(sizeArray * sizeof(int));
MPI_Scatter() needs a pointer to the data to be sent on the root node, and a pointer to the receive buffer on each process of the communicator. Since arrayNlocal is an array:
MPI_Scatter(arrayN, numberE, MPI_INT, arrayNlocal, numberE, MPI_INT, MASTER, MPI_COMM_WORLD);
or alternatively:
MPI_Scatter(arrayN, numberE, MPI_INT, &arrayNlocal[0], numberE, MPI_INT, MASTER, MPI_COMM_WORLD);
id is not initialized in id == MASTER: it must be rank==MASTER.
As is, the prints at the end may come out interleaved between processes.
Try to compile your code using mpicc main.c -o main -Wall to enable all warnings: it can save you a few hours in the near future!
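Putting those fixes together, a minimal corrected version could look something like the sketch below (it allocates numberE * nproc elements on the master instead of hard-coding 4 nodes, and frees the buffers at the end):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MASTER 0

int main(int argc, char **argv) {
    int id, nproc, numberE, i;
    int *arrayN = NULL;      /* full array, allocated on the master only */
    int *arrayNlocal = NULL; /* each process's chunk */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);

    if (id == MASTER) {
        printf("Enter number of elements: ");
        scanf("%d", &numberE);
        arrayN = malloc(numberE * nproc * sizeof(int));
        for (i = 0; i < numberE * nproc; i++)
            arrayN[i] = i + 1;
    }

    /* every process needs numberE before it can allocate its receive buffer */
    MPI_Bcast(&numberE, 1, MPI_INT, MASTER, MPI_COMM_WORLD);
    arrayNlocal = malloc(numberE * sizeof(int));

    MPI_Scatter(arrayN, numberE, MPI_INT, arrayNlocal, numberE, MPI_INT,
                MASTER, MPI_COMM_WORLD);

    printf("Node %d has: ", id);
    for (i = 0; i < numberE; i++)
        printf("%d ", arrayNlocal[i]);
    printf("\n");

    free(arrayNlocal);
    if (id == MASTER)
        free(arrayN);
    MPI_Finalize();
    return 0;
}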
I am just trying to get my head around MPI and can't seem to understand why the following program's output is different from what I expect.
int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

int *sendbuf, *recvbuf;
sendbuf = (int *) malloc(sizeof(int) * size);
recvbuf = (int *) malloc(sizeof(int) * size);

for(int i = 0; i < size; i++) {
    sendbuf[i] = rank;
}

for(int i = 0; i < size; i++) {
    printf("sendbuf[%d] = %d, rank: %d\n", i, sendbuf[i], rank);
}

MPI_Scatter(sendbuf, 1, MPI_INT,
            recvbuf, 1, MPI_INT, rank, MPI_COMM_WORLD);

for(int i = 0; i < size; i++) {
    printf("recvbuf[%d] = %d, rank: %d\n", i, recvbuf[i], rank);
}
As far as I understood, MPI_Scatter sends sendcount values from an array to all processes.
In my example I gave each process an array filled with its own rank number.
Then each process sends one of the indexes in its array to all other processes. With two processes, the first process has a sendbuf array of:
sendbuf[0] = 0
sendbuf[1] = 0
And the second process (rank 1) has an array of size MPI_Comm_size filled with 1.
The expected output should be:
recvbuf[0] = 0, rank: 0
recvbuf[1] = 1, rank: 0
recvbuf[0] = 0, rank: 1
recvbuf[1] = 1, rank: 1
But instead I get the following output (for two processes):
sendbuf[0] = 0, rank: 0
sendbuf[1] = 0, rank: 0
sendbuf[0] = 1, rank: 1
sendbuf[1] = 1, rank: 1
recvbuf[0] = 0, rank: 0
recvbuf[1] = 32690, rank: 0
recvbuf[0] = 1, rank: 1
recvbuf[1] = 32530, rank: 1
Any help pointing out my mistake is well appreciated.
I am just trying to get my head around MPI and can't seem to understand why the following program's output is different from what I expect.
The problem lies in the use of MPI_Scatter to accomplish your goal:
Sends data from one process to all other processes in a communicator
Synopsis
int MPI_Scatter(const void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Input Parameters
sendbuf address of send buffer (choice, significant only at root)
sendcount number of elements sent to each process (integer, significant only at root)
sendtype data type of send buffer elements (significant only at root) (handle)
recvcount number of elements in receive buffer (integer)
recvtype data type of receive buffer elements (handle)
root rank of sending process (integer)
comm communicator (handle)
Every process should call MPI_Scatter with the same root, not with a different root (i.e., its own rank) as you have done:
MPI_Scatter(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, rank, MPI_COMM_WORLD);
^^^^
Therefore, you are misusing MPI_Scatter; the purpose of that routine is to "send data from one process to all other processes in a communicator". The following image (taken from source) illustrates it best:
Only one root process, which scatters its data across the different processes. This routine is used, for instance, when a process has a chunk of data (e.g., an array) and the code performs some operation over that data. You can parallelize the code by splitting the data among the processes, where each process performs the aforementioned operation in parallel on its assigned data chunk. Afterward, you might call MPI_Gather to gather the data from all the processes back to the original process where that data came from.
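As a rough illustration of that scatter/compute/gather pattern (a minimal sketch, not the poster's code; it assumes one element per process and uses squaring as the stand-in operation):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *data = NULL;
    if (rank == 0) {                 /* only the root owns the full array */
        data = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            data[i] = i;
    }

    int chunk;
    /* root 0 splits the array: one element per process here */
    MPI_Scatter(data, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);

    chunk = chunk * chunk;           /* each process works on its own chunk */

    /* root 0 collects the processed chunks back into the array */
    MPI_Gather(&chunk, 1, MPI_INT, data, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("%d ", data[i]);
        printf("\n");
        free(data);
    }
    MPI_Finalize();
    return 0;
}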
Then each process sends one of the indexes in its array to all other processes.
For that you can use MPI_Allgather instead, which "Gathers data from all tasks and distribute the combined data to all tasks". The following image (taken from source) illustrates it best:
As you can see, each process gathers the data sent by all processes (including itself).
A running example:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char **argv){
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = malloc(sizeof(int) * size);
    int *recvbuf = malloc(sizeof(int) * size);

    for(int i = 0; i < size; i++)
        sendbuf[i] = rank;

    MPI_Allgather(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    for(int i = 0; i < size; i++)
        printf("recvbuf[%d] = %d, rank: %d\n", i, recvbuf[i], rank);

    MPI_Finalize();
    return 0;
}
OUTPUT for two processes:
recvbuf[0] = 0, rank: 0
recvbuf[1] = 1, rank: 0
recvbuf[0] = 0, rank: 1
recvbuf[1] = 1, rank: 1
For your particular case (with the same input size), MPI_Alltoall would also work. To understand the differences between MPI_Allgather and MPI_Alltoall, I recommend checking this SO thread.
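For illustration, this is roughly what the MPI_Alltoall variant of the example above could look like; only the collective call changes, and since every sendbuf entry holds the sender's rank, the output is the same in this particular case:

/* drop-in replacement for the MPI_Allgather call above:
 * sendbuf[i] goes to rank i, recvbuf[i] comes from rank i */
MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);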
If I have this code:
int main(void) {
    int result = 0;
    int num[6] = {1, 2, 4, 3, 7, 1};

    if (my_rank != 0) {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
    } else {
        MPI_Reduce(num, &result, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);
        printf("result = %d\n", result);
    }
}
The printed result is 1.
But if num[0] = 9, then the result is 9.
I read that to solve this problem I must define the variable num as an array.
I can't understand how the function MPI_Reduce works with MPI_MIN. Why, if num[0] is not the smallest number, must I define the variable num as an array?
MPI_Reduce performs a reduction over the members of the communicator - not the members of the local array. sendbuf and recvbuf must both be of the same size.
I think the standard says it best:
Thus, all processes provide input buffers and output buffers of the same length, with elements of the same type. Each process can provide one element, or a sequence of elements, in which case the combine operation is executed element-wise on each entry of the sequence.
MPI does not take the minimum of all elements in the array; you have to do that manually.
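For instance, a minimal sketch of doing it manually: each rank first computes its local minimum, and then a single-element reduction with MPI_MIN combines the per-rank minima (the data here is made up):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    int num[6] = {1, 2, 4, 3, 7, 1};  /* imagine each rank holding different values */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* step 1: each process finds the minimum of its own array manually */
    int local_min = num[0];
    for (int i = 1; i < 6; i++)
        if (num[i] < local_min)
            local_min = num[i];

    /* step 2: MPI_MIN combines the per-process minima (count = 1) */
    int global_min;
    MPI_Reduce(&local_min, &global_min, 1, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global minimum = %d\n", global_min);

    MPI_Finalize();
    return 0;
}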
You can use MPI_MIN to obtain the min value among those passed via reduction.
Let's examine the function declaration:
int MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
Each process sends its value (or array of values) using the buffer sendbuf.
The process identified by the root id receives the buffers and combines them element-wise into the buffer recvbuf. The number of elements sent by each process is specified in count, so recvbuf must be allocated with dimension sizeof(datatype)*count.
If each process has only one integer to send (count = 1), then recvbuf is also a single integer; if each process has two integers, then recvbuf is an array of two integers. See this nice post for further explanations and nice pictures.
Now it should be clear that your code is wrong: sendbuf and recvbuf must be of the same size, and there is no need for the condition if(myrank==0). Simply, recvbuf has meaning only for the root process, while sendbuf is needed by all processes.
In your example you can assign one or more elements of the array to each process and then compute either the minimum value (if there are as many processes as values in the array) or the array of minimum values (if there are more values than processes).
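A minimal sketch of the element-wise case (each rank contributes 6 values, MPI_MIN is applied entry by entry, and the values are made up for illustration):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* every rank contributes 6 values; here they just depend on the rank */
    int num[6], mins[6];
    for (int i = 0; i < 6; i++)
        num[i] = (i + 1) * (rank + 1);

    /* recvbuf (mins) has the same length as sendbuf (num);
       entry i of mins is the minimum of entry i across all ranks */
    MPI_Reduce(num, mins, 6, MPI_INT, MPI_MIN, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < 6; i++)
            printf("mins[%d] = %d\n", i, mins[i]);
    }

    MPI_Finalize();
    return 0;
}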
Here is a working example that illustrates the usage of MPI_MIN, MPI_MAX and MPI_SUM (slightly modified from this), in the case of simple values (not arrays).
Each process does some work, depending on its rank, and sends the time spent doing the work to the root process. The root process collects the times and outputs the min, max and average values of the times.
#include <stdio.h>
#include <mpi.h>

int myrank, numprocs;

/* just a function to waste some time */
float work()
{
    float x, y = 0.0f;  /* y must be initialized before being accumulated */
    if (myrank % 2) {
        for (int i = 0; i < 100000000; ++i) {
            x = i / 0.001;
            y += x;
        }
    } else {
        for (int i = 0; i < 100000; ++i) {
            x = i / 0.001;
            y += x;
        }
    }
    return y;
}
int main(int argc, char **argv)
{
    int node;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &node);
    printf("Hello World from Node %d\n", node);

    /* variables used for gathering timing statistics */
    double mytime, maxtime, mintime, avgtime;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Barrier(MPI_COMM_WORLD);   /* synchronize all processes */

    mytime = MPI_Wtime();          /* get time just before work section */
    work();
    mytime = MPI_Wtime() - mytime; /* get time just after work section */

    /* compute max, min, and average timing statistics */
    MPI_Reduce(&mytime, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &mintime, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);
    MPI_Reduce(&mytime, &avgtime, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    /* plot the output */
    if (myrank == 0) {
        avgtime /= numprocs;
        printf("Min: %lf Max: %lf Avg: %lf\n", mintime, maxtime, avgtime);
    }

    MPI_Finalize();
    return 0;
}
If I run this on my OSX laptop, this is what I get:
urcaurca$ mpirun -n 4 ./a.out
Hello World from Node 3
Hello World from Node 0
Hello World from Node 2
Hello World from Node 1
Min: 0.000974 Max: 0.985291 Avg: 0.493081
So far, my application is reading in a txt file with a list of integers. These integers need to be stored in an array by the master process, i.e. the processor with rank 0. This is working fine.
Now, when I run the program I have an if statement checking whether it's the master process and if it is, I'm executing the MPI_Scatter command.
From what I understand, this will subdivide the array with the numbers and pass it out to the slave processes, i.e. all with rank > 0. However, I'm not sure how to handle the MPI_Scatter. How does the slave process "subscribe" to get the sub-array? How can I tell the non-master processes to do something with the sub-array?
Can someone please provide a simple example to show me how the master process sends out elements from the array and then have the slaves add the sum and return this to the master, which adds all the sums together and prints it out?
My code so far:
#include <stdio.h>
#include <mpi.h>
//A pointer to the file to read in.
FILE *fr;
int main(int argc, char *argv[]) {
    int rank, size, n, number_read;
    char line[80];
    int numbers[30];
    int buffer[30];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    fr = fopen("int_data.txt", "rt"); //We open the file to be read.

    if(rank == 0){
        printf("my rank = %d\n", rank);

        //Reads in the flat file of integers and stores it in the array 'numbers' of type int.
        n = 0;
        while(fgets(line, 80, fr) != NULL) {
            sscanf(line, "%d", &number_read);
            numbers[n] = number_read;
            printf("I am processor no. %d --> At element %d we have number: %d\n", rank, n, numbers[n]);
            n++;
        }
        fclose(fr);

        MPI_Scatter(&numbers, 2, MPI_INT, &buffer, 2, MPI_INT, rank, MPI_COMM_WORLD);
    }
    else {
        MPI_Gather(&buffer, 2, MPI_INT, &numbers, 2, MPI_INT, 0, MPI_COMM_WORLD);
        printf("%d", buffer[0]);
    }

    MPI_Finalize();
    return 0;
}
This is a common misunderstanding of how operations work in MPI with people new to it; particularly with collective operations, where people try to start using broadcast (MPI_Bcast) just from rank 0, expecting the call to somehow "push" the data to the other processors. But that's not really how MPI routines work; most MPI communication requires both the sender and the receiver to make MPI calls.
In particular, MPI_Scatter() and MPI_Gather() (and MPI_Bcast, and many others) are collective operations; they have to be called by all of the tasks in the communicator. All processors in the communicator make the same call, and the operation is performed. (That's why scatter and gather both require as one of the parameters the "root" process, where all the data goes to / comes from). By doing it this way, the MPI implementation has a lot of scope to optimize the communication patterns.
So here's a simple example (Updated to include gather):
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    int size, rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *globaldata = NULL;
    int localdata;

    if (rank == 0) {
        globaldata = malloc(size * sizeof(int));
        for (int i=0; i<size; i++)
            globaldata[i] = 2*i+1;

        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    MPI_Scatter(globaldata, 1, MPI_INT, &localdata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Processor %d has data %d\n", rank, localdata);
    localdata *= 2;
    printf("Processor %d doubling the data, now has %d\n", rank, localdata);

    MPI_Gather(&localdata, 1, MPI_INT, globaldata, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("Processor %d has data: ", rank);
        for (int i=0; i<size; i++)
            printf("%d ", globaldata[i]);
        printf("\n");
    }

    if (rank == 0)
        free(globaldata);

    MPI_Finalize();
    return 0;
}
Running it gives:
gpc-f103n084-$ mpicc -o scatter-gather scatter-gather.c -std=c99
gpc-f103n084-$ mpirun -np 4 ./scatter-gather
Processor 0 has data: 1 3 5 7
Processor 0 has data 1
Processor 0 doubling the data, now has 2
Processor 3 has data 7
Processor 3 doubling the data, now has 14
Processor 2 has data 5
Processor 2 doubling the data, now has 10
Processor 1 has data 3
Processor 1 doubling the data, now has 6
Processor 0 has data: 2 6 10 14
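Mapping that back onto the original question (scatter chunks of the integers, let every process compute a partial sum, and combine the sums on the master), a minimal sketch could look like the following; it skips the file reading and assumes the number of integers divides evenly by the number of processes:

#include <mpi.h>
#include <stdio.h>

#define N 8   /* total number of integers; assumed divisible by the process count */

int main(int argc, char *argv[]) {
    int rank, size;
    int numbers[N], chunk[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* stand-in for the integers read from int_data.txt */
        for (int i = 0; i < N; i++)
            numbers[i] = i + 1;
    }

    int per_proc = N / size;

    /* every rank (master included) calls the collective and receives per_proc values */
    MPI_Scatter(numbers, per_proc, MPI_INT, chunk, per_proc, MPI_INT, 0, MPI_COMM_WORLD);

    int partial = 0;
    for (int i = 0; i < per_proc; i++)
        partial += chunk[i];

    /* add the partial sums together on rank 0 */
    int total = 0;
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total sum: %d\n", total);

    MPI_Finalize();
    return 0;
}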
I have a basic question about MPI programming in C. Essentially what I want is a master process that spawns a specific number of child processes, collects some information from all of them (waits until all of the children finish), calculates some metric, and based on this metric decides whether it has to spawn more threads... it keeps doing this until the metric meets some specific condition. I have searched through the literature, to no avail. How can this be done? Any pointers?
Thanks for the help.
Courtesy: An introduction to the Message Passing Interface (MPI) using C. In the "complete parallel program to sum an array", let's say, "for some lame reason", I want the master process to sum the contents of the array twice. I.e. in the first iteration, the master process starts the slave processes which compute the sum of the arrays; once they are done and the master process has the value, I would like the master process to have another set of slaves do the computation again. Why would the code below not work? I added a while loop around the master process section which drives the slave processes.
#include <stdio.h>
#include <mpi.h>
#define max_rows 100000
#define send_data_tag 2001
#define return_data_tag 2002
int array[max_rows];
int array2[max_rows];
main(int argc, char **argv)
{
long int sum, partial_sum,number_of_times;
number_of_times=0;
MPI_Status status;
int my_id, root_process, ierr, i, num_rows, num_procs,
an_id, num_rows_to_receive, avg_rows_per_process,
sender, num_rows_received, start_row, end_row, num_rows_to_send;
/* Now replicate this process to create parallel processes.
* From this point on, every process executes a separate copy
* of this program */
ierr = MPI_Init(&argc, &argv);
root_process = 0;
/* find out MY process ID, and how many processes were started. */
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
if(my_id == root_process) {
/* I must be the root process, so I will query the user
* to determine how many numbers to sum. */
//printf("please enter the number of numbers to sum: ");
//scanf("%i", &num_rows);
num_rows=10;
while (number_of_times<2)
{
number_of_times++;
start_row=0;
end_row=0;
if(num_rows > max_rows) {
printf("Too many numbers.\n");
exit(1);
}
avg_rows_per_process = num_rows / num_procs;
/* initialize an array */
for(i = 0; i < num_rows; i++) {
array[i] = i + 1;
}
/* distribute a portion of the vector to each child process */
for(an_id = 1; an_id < num_procs; an_id++) {
start_row = an_id*avg_rows_per_process + 1;
end_row = (an_id + 1)*avg_rows_per_process;
if((num_rows - end_row) < avg_rows_per_process)
end_row = num_rows - 1;
num_rows_to_send = end_row - start_row + 1;
ierr = MPI_Send( &num_rows_to_send, 1 , MPI_INT,
an_id, send_data_tag, MPI_COMM_WORLD);
ierr = MPI_Send( &array[start_row], num_rows_to_send, MPI_INT,
an_id, send_data_tag, MPI_COMM_WORLD);
}
/* and calculate the sum of the values in the segment assigned
* to the root process */
sum = 0;
for(i = 0; i < avg_rows_per_process + 1; i++) {
sum += array[i];
}
printf("sum %i calculated by root process\n", sum);
/* and, finally, I collect the partial sums from the slave processes,
* print them, and add them to the grand sum, and print it */
for(an_id = 1; an_id < num_procs; an_id++) {
ierr = MPI_Recv( &partial_sum, 1, MPI_LONG, MPI_ANY_SOURCE,
return_data_tag, MPI_COMM_WORLD, &status);
sender = status.MPI_SOURCE;
printf("Partial sum %i returned from process %i\n", partial_sum, sender);
sum += partial_sum;
}
printf("The grand total is: %i\n", sum);
}
}
else {
/* I must be a slave process, so I must receive my array segment,
* storing it in a "local" array, array2. */
ierr = MPI_Recv( &num_rows_to_receive, 1, MPI_INT,
root_process, send_data_tag, MPI_COMM_WORLD, &status);
ierr = MPI_Recv( &array2, num_rows_to_receive, MPI_INT,
root_process, send_data_tag, MPI_COMM_WORLD, &status);
num_rows_received = num_rows_to_receive;
/* Calculate the sum of my portion of the array */
partial_sum = 0;
for(i = 0; i < num_rows_received; i++) {
partial_sum += array2[i];
}
/* and finally, send my partial sum to the root process */
ierr = MPI_Send( &partial_sum, 1, MPI_LONG, root_process,
return_data_tag, MPI_COMM_WORLD);
}
ierr = MPI_Finalize();
}
You should start by looking at MPI_Comm_spawn and collective operations. To collect information from old child processes, one would typically use MPI_Reduce.
This stackoverflow question might also be helpful.
...to spawn more threads...
I guess you meant the right thing since you used "process" instead of "thread" mostly, but just to clarify: MPI only deals with processes and not with threads.
I'm not sure how well you know MPI already - let me know if my answer was any help or if you need more hints.
The MPI-2 standard includes process management functionality. It's described in detail in Chapter 5. I have not used it myself though, so perhaps someone else may weigh in with more practical hints.
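Since neither answer shows it in code, here is a minimal sketch of the spawn-then-collect idea; the worker executable name (./worker), the per-child metric, and the stopping test are made up for illustration, and the parent is assumed to be launched as a single process:

/* parent.c - repeatedly spawns workers and collects a metric from them */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    double metric = 0.0;
    int round = 0;
    do {
        MPI_Comm children;
        /* launch 4 copies of a separate worker executable */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);

        /* inter-communicator reduce: the parent root passes MPI_ROOT
           and receives the combined metric of all the children */
        MPI_Reduce(NULL, &metric, 1, MPI_DOUBLE, MPI_SUM, MPI_ROOT, children);
        MPI_Comm_disconnect(&children);

        printf("round %d: metric = %f\n", round, metric);
        round++;
    } while (round < 2);  /* stand-in for "until the metric meets the condition" */

    MPI_Finalize();
    return 0;
}

and the matching worker, where each spawned child does its piece of work and reduces its value to rank 0 of the parent group over the inter-communicator:

/* worker.c - spawned by the parent above */
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    double my_metric = 1.0;  /* placeholder for the real computation */
    MPI_Reduce(&my_metric, NULL, 1, MPI_DOUBLE, MPI_SUM, 0, parent);

    MPI_Comm_disconnect(&parent);
    MPI_Finalize();
    return 0;
}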
I have a 2D double precision array that is being manipulated in parallel by several processes. Each process manipulates a part of the array, and at the end of every iteration, I need to ensure that all the processes have the SAME copy of the 2D array.
Assuming an array of size 10*10 and 2 processes (or processors): process 1 (P1) manipulates the first 5 rows of the 2D array (5*10 = 50 elements in total) and P2 manipulates the last 5 rows (50 elements in total). At the end of each iteration, I need P1 to have (ITS OWN first 5 rows + P2's last 5 rows), and P2 should have (P1's first 5 rows + its OWN last 5 rows). I hope the scenario is clear.
I am trying to broadcast using the code given below. But my program keeps exiting with this error: "APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)".
I am already using a contiguous 2D memory allocator as pointed out here: MPI_Bcast a dynamic 2d array by Jonathan. But I am still getting the same error.
Can someone help me out?
My code:
double **grid, **oldgrid;
int gridsize; // size of grid
int rank, size; // rank of current process and no. of processes
int rowsforeachprocess, offset; // to keep track of rows that need to be handled by each process
/* allocation, MPI_Init, and lots of other stuff */
rowsforeachprocess = ceil((float)gridsize/size);
offset = rank*rowsforeachprocess;
/* Each process is handling "rowsforeachprocess" #rows.
* Lots of work done here
* Now I need to broadcast these rows to all other processes.
*/
for(i=0; i<gridsize; i++){
MPI_Bcast(&(oldgrid[i]), gridsize-2, MPI_DOUBLE, (i/rowsforeachprocess), MPI_COMM_WORLD);
}
Part 2: The code above is part of a parallel solver for the laplace equation using 1D decomposition and I did not want to use a Master-worker model. Will my code be easier if I use a Master-worker model?
The crash-causing problem here is a 2d-array pointer issue -- &(oldgrid[i]) is a pointer-to-a-pointer to doubles, not a pointer to doubles, and it points to the pointer to row i of your array, not to row i of your array. You want MPI_Bcast(&(oldgrid[i][0]),.. or MPI_Bcast(oldgrid[i],....
There's another way to do this, too, which only uses one expensive collective communication instead of one per row; if you need everyone to have a copy of the whole array, you can use MPI_Allgather to gather the data together and distribute it to everyone; or, in the general case where the processes don't have the same number of rows, MPI_Allgatherv. Instead of the loop over broadcasts, this would look a little like:
{
    int *counts = malloc(size*sizeof(int));
    int *displs = malloc(size*sizeof(int));

    for (int i=0; i<size; i++) {
        counts[i] = rowsforeachprocess*gridsize;
        displs[i] = i*rowsforeachprocess*gridsize;
    }
    counts[size-1] = (gridsize-(size-1)*rowsforeachprocess)*gridsize;

    MPI_Allgatherv(oldgrid[offset], mynumrows*gridsize, MPI_DOUBLE,
                   oldgrid[0], counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    free(counts);
    free(displs);
}
where counts are the number of items sent by each task, and displs are the displacements.
But finally, are you sure that every process has to have a copy of the entire array? If you're just computing a laplacian, you probably just need neighboring rows, not the whole array.
This would look like:
int main(int argc, char**argv) {
    double **oldgrid;
    const int gridsize=10;  // size of grid
    int rank, size;         // rank of current process and no. of processes
    int rowsforeachprocess; // to keep track of rows that need to be handled by each process
    int offset, mynumrows;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    rowsforeachprocess = (int)ceil((float)gridsize/size);
    offset = rank*rowsforeachprocess;
    mynumrows = rowsforeachprocess;
    if (rank == size-1)
        mynumrows = gridsize-offset;
    malloc2ddouble(&oldgrid, mynumrows+2, gridsize);

    for (int i=0; i<mynumrows+2; i++)
        for (int j=0; j<gridsize; j++)
            oldgrid[i][j] = rank;

    /* exchange row data with neighbours */
    int highneigh = rank+1;
    if (rank == size-1) highneigh = 0;

    int lowneigh = rank-1;
    if (rank == 0) lowneigh = size-1;

    /* send data to high neighbour and receive from low */
    MPI_Sendrecv(oldgrid[mynumrows], gridsize, MPI_DOUBLE, highneigh, 1,
                 oldgrid[0], gridsize, MPI_DOUBLE, lowneigh, 1,
                 MPI_COMM_WORLD, &status);

    /* send data to low neighbour and receive from high */
    MPI_Sendrecv(oldgrid[1], gridsize, MPI_DOUBLE, lowneigh, 1,
                 oldgrid[mynumrows+1], gridsize, MPI_DOUBLE, highneigh, 1,
                 MPI_COMM_WORLD, &status);

    for (int proc=0; proc<size; proc++) {
        if (rank == proc) {
            printf("Rank %d:\n", proc);
            for (int i=0; i<mynumrows+2; i++) {
                for (int j=0; j<gridsize; j++) {
                    printf("%f ", oldgrid[i][j]);
                }
                printf("\n");
            }
            printf("\n");
        }
        MPI_Barrier(MPI_COMM_WORLD);
    }