I'm attempting to combine the row/column 2-D arrays output by each process into a single complete 2-D array on all processes. Essentially, I have a large NxN 2-D array (4000x4000+) that requires the same operation to be carried out on every element. My intention is to break this down into either sections of rows or columns that each process will complete. I need each process to have the entirety of the array once all sections have been completed.
I have looked at multiple examples of using these MPI calls but could not find one that combined rows/columns from N processes. Could someone please show me how I can implement this?
Below is a boilerplate example of what I'm trying to achieve. Each process creates a master matrix and then calculates the set of rows it's responsible for. It then creates a 2-D array of that size and copies the data from the master. It then carries out its operation on each element of the 2-D array. I then need to use Allgatherv to collect each process's 2-D array and combine them to overwrite the master matrix. Please note that from my point of view it does not matter whether I use rows or columns; I believe sticking to one, rather than creating multiple submatrices, lets me increase the number of processes without added complexity.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

double **alloc_2d_array(int m, int n) {
    double **x;
    int i;
    x = (double **)malloc(m*sizeof(double *));
    x[0] = (double *)calloc(m*n, sizeof(double));
    for (i = 1; i < m; i++)
        x[i] = &x[0][i*n];
    return x;
}

int main(int argc, char *argv[]) {
    int n = 8;
    int rank, size;
    int root_rank = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Report active to console
    printf("Rank: %d, reporting!\n", rank);

    // Make master matrix
    double **master_matrix = alloc_2d_array(n, n);

    // Set starting values in master matrix
    for (int i=0; i<n; i++) {
        for (int j=0; j<n; j++) {
            master_matrix[i][j] = i*n+j;
        }
    }

    // Calculate each rank's section of the matrix
    int interval = n/size;
    int section_end = interval*(rank+1);
    int section_start = section_end - interval;
    if (rank == size-1) {
        section_end += n % size;
    }
    int section_length = section_end-section_start;
    printf("Start: %d, End: %d\n", section_start, section_end);

    // Make local rows
    double **local_sect = alloc_2d_array(section_length, n);

    // Copy this rank's rows of master_matrix into the local section
    for (int i=0; i<section_length; i++) {
        for (int j=0; j<n; j++) {
            local_sect[i][j] = master_matrix[section_start+i][j];
        }
    }

    // Carry out operation (in this example, adding 7)
    for (int i=0; i<section_length; i++) {
        for (int j=0; j<n; j++) {
            local_sect[i][j] = local_sect[i][j]+7;
        }
    }

    // Use Allgatherv to overwrite master matrix to new complete matrix
    // MPI_Allgatherv(my_values, my_values_count, MPI_INT, buffer, counts, displacements, MPI_INT, MPI_COMM_WORLD);

    // Print new master matrix out on all processes
    printf("NEW MASTER MATRIX\n");
    for (int i=0; i<n; i++) {
        for (int j=0; j<n; j++) {
            printf("%f ", master_matrix[i][j]);
        }
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}
I'd suggest reading in more depth about MPI_Type_create_subarray, which lets you create subarray types from multidimensional arrays; with those you can gather the pieces with MPI_Gatherv and have them fit together the way you want.
You might want to check this example that I've encountered in the past:
How to combine subarrays of different widths using only one array for send and receive in MPI
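To connect that suggestion to the question's code, here is a minimal sketch of the row-decomposed case. It assumes n divides evenly by the number of ranks, that local_sect and master_matrix are stored contiguously (as the question's alloc_2d_array guarantees), and it uses MPI_Allgather since the question wants the result on every rank; MPI_Gatherv at a root works the same way with explicit counts and displacements. The name rows_per_rank is mine.
// Sketch only: equal-sized, contiguous row blocks, names from the question.
int rows_per_rank = n / size;

int full_size[2] = { n, n };
int sub_size[2]  = { rows_per_rank, n };
int starts[2]    = { 0, 0 };

MPI_Datatype block, block_resized;
MPI_Type_create_subarray(2, full_size, sub_size, starts,
                         MPI_ORDER_C, MPI_DOUBLE, &block);
// Resize so consecutive blocks in the receive buffer sit one block apart.
MPI_Type_create_resized(block, 0, rows_per_rank * n * sizeof(double),
                        &block_resized);
MPI_Type_commit(&block_resized);

// Every rank contributes its rows and ends up with the full matrix.
MPI_Allgather(&local_sect[0][0], rows_per_rank * n, MPI_DOUBLE,
              &master_matrix[0][0], 1, block_resized, MPI_COMM_WORLD);

MPI_Type_free(&block_resized);
MPI_Type_free(&block);
For a pure row decomposition the blocks are contiguous anyway, so a plain MPI_Allgather of doubles would also work; the subarray-plus-resize pattern is what generalizes to column or 2-D block decompositions and to MPI_Gatherv with unequal pieces.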
I am having a problem. I have defined a struct, and I want an array of that structure to hold this information (the processor name and the computation time for that processor). This is part of my code:
struct stru
{
    double arr_time[50];
    char pname[50];
};
int main (int argc, char *argv[])
{
struct stru all_info[50];
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&process_id);
MPI_Comm_size(MPI_COMM_WORLD,&num_of_processes);
char processor_name[MPI_MAX_PROCESSOR_NAME];
int name_len;
MPI_Get_processor_name(processor_name, &name_len);
if (process_id == 0)
{ //do somthing
}
if (process_id > 0)
{
double start = MPI_Wtime();
for (k=0; k<array_size; k++)
for (i=0; i<rows; i++)
{
c[i][k]=0.0;
for (j=0; j<array_size; j++)
c[i][k] = c[i][k] + a[i][j] * b[j][k];
}
end_time = MPI_Wtime() - start;
all_info[i].arr_time[i] = end_time;
for (int i=1 ;i <= numworkers ;i++)
strcpy( all_info[i].pname, processor_name);
printf(" time = %f for processor %s
\n",all_info[i].arr_time, all_info[i].pname);
}
MPI_Gather( &end_time, 1, MPI_DOUBLE, &all_info[i].arr_time, 1,
MPI_DOUBLE, 0, MPI_COMM_WORLD);
if (process_id == 0){
for(i = 1; i <= numworkers; i++ )
{
printf(" time %f for processor %s
\n",all_info[i].arr_time , all_info[i].pname);
} }
I get no result if I print it inside if (process_id == 0)!
The output is:
time 0.000000 for processor
time 0.000000 for processor
time 0.000000 for processor
and the time is only printed if I print it inside if (process_id > 0).
In fact, I don't know how to use a structure with MPI. Can anyone advise me on how to build an array of structures holding each processor's name and its time?
Thank you in advance for your time.
At this line:
processor_name[MPI_MAX_PROCESSOR_NAME];
you start using the array variable processor_name without defining it anywhere.
You're missing something like all_info[i]. in front of it, like you have a bit lower down:
all_info[i].processor_name;
Then, for storing a string, your processor_name needs memory. A single char is just one byte (i.e. one letter). So, assuming these names are never longer than 255 characters, you'd get:
struct stru
{
    double end_time;
    char processor_name[256];
};
There are so many basic things wrong in your code and your questions seem to indicate that you lack basic understanding of C programming. Therefore my advice would be to take more time studying this language.
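For completeness, here is a minimal, self-contained sketch of how such a struct can be described to MPI and gathered to rank 0. It uses the simplified struct suggested above; the program structure and variable names are mine, not the poster's code.
/* Sketch: gather one record per rank into an array of structs on rank 0. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>   /* offsetof */

struct stru {
    double end_time;
    char   processor_name[256];
};

int main(int argc, char *argv[]) {
    int rank, size, name_len;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    struct stru mine;
    MPI_Get_processor_name(mine.processor_name, &name_len);
    double start = MPI_Wtime();
    /* ... the real computation would go here ... */
    mine.end_time = MPI_Wtime() - start;

    /* Describe the struct layout to MPI.  If the struct had trailing
     * padding, you would also MPI_Type_create_resized it to sizeof(struct stru). */
    int          blocklens[2] = { 1, 256 };
    MPI_Aint     displs[2]    = { offsetof(struct stru, end_time),
                                  offsetof(struct stru, processor_name) };
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_CHAR };
    MPI_Datatype mpi_stru;
    MPI_Type_create_struct(2, blocklens, displs, types, &mpi_stru);
    MPI_Type_commit(&mpi_stru);

    struct stru *all_info = NULL;
    if (rank == 0)
        all_info = malloc(size * sizeof(struct stru));

    MPI_Gather(&mine, 1, mpi_stru, all_info, 1, mpi_stru, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("time %f for processor %s\n",
                   all_info[i].end_time, all_info[i].processor_name);
        free(all_info);
    }
    MPI_Type_free(&mpi_stru);
    MPI_Finalize();
    return 0;
}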
The error occurs here because you have not defined any variable named processor_name at that point.
If I understand what you're trying to do correctly, it seems like you were trying to access a member of the structures. To do that, you need to use the . operator, and you might need to define the array as
struct stru all_info[MPI_MAX_PROCESSOR_NAME];
instead of
struct stru all_info[50];
I am currently working on a project where I need to implement a parallel FFT algorithm using Open MPI. I have a compiling piece of code, but when I run it over the cluster I get segmentation faults.
I have my hunches about where things are going wrong, but I don't think I have enough of an understanding of pointers and references to be able to make an efficient fix.
The first chunk that could be going wrong is the passing of the arrays to the helper functions. I believe that either my looping is inconsistent, or I am not understanding how to pass these pointers and get back the things I need.
The second possible spot would be within the actual MPI_Send/MPI_Recv calls. I am sending a type that is not supported by the built-in MPI C datatypes, so I am using MPI_BYTE to send the raw data instead. Is this a viable option? Or should I be looking into an alternative to this method?
#include <stdio.h>
#include <math.h>
#include <complex.h>
#include <mpi.h>
/* N, p (the number of processes) and PI are defined elsewhere in my code */
/* function declarations */
double complex get_block(double complex c[], int start, int stop);
double complex put_block(double complex from[], double complex to[],
int start, int stop);
void main(int argc, char **argv)
{
/* Initialize MPI */
MPI_Init(&argc, &argv);
double complex c[N/p];
int myid;
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
//printf("My id is %d\n",myid);
MPI_Status status;
int i;
for(i=0;i<N/p;i++){
c[i] = 1.0 + 1.0*I;
}
int j = log(p)/log(2) + 1;
double q;
double complex z;
double complex w = exp(-2*PI*I/N);
double complex block[N/(2*p)]; // half the size of chunk c
int e,l,t,k,m,rank,plus,minus;
int temp = (log(N)-log(p))/log(2);
//printf("temp = %d", temp);
for(e = 0; e < (log(p)/log(2)); e++){
/* loop constants */
t = pow(2,e); l = pow(2,e+temp);
q = n/2*l; z = cpow(w,(complex)q);
j = j-1; int v = pow(2,j);
if(e != 0){
plus = (myid + p/v)%p;
minus = (myid - p/v)%p;
} else {
plus = myid + p/v;
minus = myid - p/v;
}
if(myid%t == myid%(2*t)){
MPI_Recv((char*)&c,
sizeof(c),
MPI_BYTE,
plus,
MPI_ANY_TAG,
MPI_COMM_WORLD,
&status);
/* transform */
for(k = 0; k < N/p; k++){
m = (myid * N/p + k)%l;
c[k] = c[k] + c[k+N/v] * cpow(z,m);
c[k+N/v] = c[k] - c[k + N/v] * cpow(z,m);
printf("(k,k+N/v) = (%d,%d)\n",k,k+N/v);
}
printf("\n\n");
/* end transform */
*block = get_block(c, N/v, N/v + N/p + 1);
MPI_Send((char*)&block,
sizeof(block),
MPI_BYTE,
plus,
1,
MPI_COMM_WORLD);
} else {
// send data of this PE to the (i- p/v)th PE
MPI_Send((char*)&c,
sizeof(c),
MPI_BYTE,
minus,
1,
MPI_COMM_WORLD);
// after the transformation, receive data from (i-p/v)th PE
// and store them in c:
MPI_Recv((char*)&block,
sizeof(block),
MPI_BYTE,
minus,
MPI_ANY_TAG,
MPI_COMM_WORLD,
&status);
*c = put_block(block, c, N/v, N/v + N/p - 1);
//printf("Process %d send/receive %d\n",myid, plus);
}
}
/* shut down MPI */
MPI_Finalize();
}
/* helper functions */
double complex get_block(double complex *c, int start, int stop)
{
double complex block[stop - start + 1];
//printf("%d = %d\n",sizeof(block)/sizeof(double complex), sizeof(&c)/sizeof(double complex));
int j = 0;
int i;
for(i = start; i < stop+1; i++){
block[j] = c[i];
j = j+1;
}
return *block;
}
double complex put_block(double complex from[], double complex to[], int start, int stop)
{
int j = 0;
int i;
for(i = start; i<stop+1; i++){
to[i] = from[j];
j = j+1;
}
return *to;
}
I really appreciate the feedback!
You are using arrays / pointers to arrays in the wrong way. For example you declare an array as double complex block[N], which is fine (although uncommon, in most cases it is better to use malloc) and then you receive into it via MPI_Recv(&block). However "block" is already a pointer to that array, so by writing "&block" you are passing the pointer of the pointer to MPI_Recv. That's not what it expects. If you want to use the "&" notation you have to write &block[0], which would give you the pointer to the first element of the block-array.
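As a tiny, self-contained illustration of that buffer-argument point (hypothetical buffer size and ranks, not the poster's code; run with at least two ranks):
#include <complex.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double complex block[8];
    MPI_Status status;

    if (rank == 0) {
        for (int i = 0; i < 8; i++) block[i] = i + 0.0*I;
        /* block decays to a pointer to its first element ... */
        MPI_Send(block, sizeof(block), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ... and &block[0] is the same thing, written explicitly. */
        MPI_Recv(&block[0], sizeof(block), MPI_BYTE, 0, 0,
                 MPI_COMM_WORLD, &status);
    }
    MPI_Finalize();
    return 0;
}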
Have you tried debugging your code? This can be a pain in a parallel setting, but it can tell you exactly where it is failing and usually also why.
If you're using Linux or OS X, you could run your code as follows on the command line:
mpirun -np 4 xterm -e gdb -ex run --args ./yourprog yourargs
where I'm assuming yourprog is the name of your program and yourargs are any command-line arguments you want to pass.
What this command will do is launch four xterm windows. Each xterm will in turn launch gdb as specified by the option -e. gdb will then execute the command run as specified by the option -ex and launch your executable with the given options, as specified by --args.
What you get are four xterm windows running four instances of your program in parallel with MPI. If any of the instances crashes, gdb will tell you where and why.
I've literally copied and pasted the supplied source code from Numerical Recipes in C for in-place LU matrix decomposition; the problem is it's not working.
I'm sure I'm doing something stupid, but I would appreciate anyone being able to point me in the right direction on this; I've been working on it all day and can't see what I'm doing wrong.
POST-ANSWER UPDATE: The project is finished and working. Thanks to everyone for their guidance.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#define MAT1 3
#define TINY 1e-20
int h_NR_LU_decomp(float *a, int *indx){
//Taken from Numerical Recipies for C
int i,imax,j,k;
float big,dum,sum,temp;
int n=MAT1;
float vv[MAT1];
int d=1.0;
//Loop over rows to get implicit scaling info
for (i=0;i<n;i++) {
big=0.0;
for (j=0;j<n;j++)
if ((temp=fabs(a[i*MAT1+j])) > big)
big=temp;
if (big == 0.0) return -1; //Singular Matrix
vv[i]=1.0/big;
}
//Outer kij loop
for (j=0;j<n;j++) {
for (i=0;i<j;i++) {
sum=a[i*MAT1+j];
for (k=0;k<i;k++)
sum -= a[i*MAT1+k]*a[k*MAT1+j];
a[i*MAT1+j]=sum;
}
big=0.0;
//search for largest pivot
for (i=j;i<n;i++) {
sum=a[i*MAT1+j];
for (k=0;k<j;k++) sum -= a[i*MAT1+k]*a[k*MAT1+j];
a[i*MAT1+j]=sum;
if ((dum=vv[i]*fabs(sum)) >= big) {
big=dum;
imax=i;
}
}
//Do we need to swap any rows?
if (j != imax) {
for (k=0;k<n;k++) {
dum=a[imax*MAT1+k];
a[imax*MAT1+k]=a[j*MAT1+k];
a[j*MAT1+k]=dum;
}
d = -d;
vv[imax]=vv[j];
}
indx[j]=imax;
if (a[j*MAT1+j] == 0.0) a[j*MAT1+j]=TINY;
for (k=j+1;k<n;k++) {
dum=1.0/(a[j*MAT1+j]);
for (i=j+1;i<n;i++) a[i*MAT1+j] *= dum;
}
}
return 0;
}
void main(){
//3x3 Matrix
float exampleA[]={1,3,-2,3,5,6,2,4,3};
//pivot array (not used currently)
int* h_pivot = (int *)malloc(sizeof(int)*MAT1);
int retval = h_NR_LU_decomp(&exampleA[0],h_pivot);
for (unsigned int i=0; i<3; i++){
printf("\n%d:",h_pivot[i]);
for (unsigned int j=0;j<3; j++){
printf("%.1lf,",exampleA[i*3+j]);
}
}
}
WolframAlpha says the answer should be
1,3,-2
2,-2,7
3,2,-2
I'm getting:
2,4,3
0.2,2,-2.8
0.8,1,6.5
And so far I have found at least 3 different versions of the 'same' algorithm, so I'm completely confused.
PS yes I know there are at least a dozen different libraries to do this, but I'm more interested in understanding what I'm doing wrong than the right answer.
PPS: since in LU decomposition the lower resultant matrix has a unit diagonal, and using Crout's algorithm as (I think) implemented here array index access is still safe, both L and U can be superimposed on each other in place; hence the single resultant matrix.
I think there's something inherently wrong with your indices. They sometimes have unusual start and end values, and the outer loop over j instead of i makes me suspicious.
Before you ask anyone to examine your code, here are a few suggestions:
double-check your indices
get rid of those obfuscation attempts using sum
use a macro a(i,j) instead of a[i*MAT1+j]
write sub-functions instead of comments
remove unnecessary parts, isolating the erroneous code
Here's a version that follows these suggestions:
#define MAT1 3
#define a(i,j) a[(i)*MAT1+(j)]
int h_NR_LU_decomp(float *a, int *indx)
{
    int i, j, k;
    int n = MAT1;

    for (i = 0; i < n; i++) {
        // compute U (the upper triangle, stored in place)
        for (j = i; j < n; j++)
            for (k = 0; k < i; k++)
                a(i,j) -= a(i,k) * a(k,j);
        // compute L (unit lower triangle, stored below the diagonal)
        for (j = i+1; j < n; j++) {
            for (k = 0; k < i; k++)
                a(j,i) -= a(j,k) * a(k,i);
            a(j,i) /= a(i,i);
        }
    }
    return 0;
}
Its main advantages are:
it's readable
it works
It lacks pivoting, though. Add sub-functions as needed.
My advice: don't copy someone else's code without understanding it.
Most programmers are bad programmers.
For the love of all that is holy, don't use Numerical Recipies code for anything except as a toy implementation for teaching purposes of the algorithms described in the text -- and, really, the text isn't that great. And, as you're learning, neither is the code.
Certainly don't put any Numerical Recipies routine in your own code -- the license is insanely restrictive, particularly given the code quality. You won't be able to distribute your own code if you have NR stuff in there.
See if your system already has a LAPACK library installed. It's the standard interface to linear algebra routines in computational science and engineering, and while it's not perfect, you'll be able to find lapack libraries for any machine you ever move your code to, and you can just compile, link, and run. If it's not already installed on your system, your package manager (rpm, apt-get, fink, port, whatever) probably knows about lapack and can install it for you. If not, as long as you have a Fortran compiler on your system, you can download and compile it from here, and the standard C bindings can be found just below on the same page.
The reason it's so handy to have a standard API to linear algebra routines is that they are so common, but their performance is so system-dependant. So for instance, Goto BLAS
is an insanely fast implementation for x86 systems of the low-level operations which are needed for linear algebra; once you have LAPACK working, you can install that library to make everything as fast as possible.
Once you have any sort of LAPACK installed, the routine for doing an LU factorization of a general matrix is SGETRF for floats, or DGETRF for doubles. There are other, faster routines if you know something about the structure of the matrix - that it's symmetric positive definite, say (SPOTRF), or that it's tridiagonal (SGTTRF). It's a big library, but once you learn your way around it you'll have a very powerful piece of gear in your numerical toolbox.
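As a concrete example of what that looks like in practice, here is the question's 3x3 matrix factored with dgetrf, assuming the LAPACKE C interface is installed (link with something like -llapacke -llapack); the older Fortran-style dgetrf_ binding works too but expects column-major storage.
#include <stdio.h>
#include <lapacke.h>

int main(void) {
    /* Row-major 3x3 matrix from the question. */
    double a[9] = { 1, 3, -2,
                    3, 5,  6,
                    2, 4,  3 };
    lapack_int ipiv[3];

    /* In-place LU factorization with partial pivoting: A = P*L*U. */
    lapack_int info = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, 3, 3, a, 3, ipiv);
    if (info != 0) {
        fprintf(stderr, "dgetrf failed, info = %d\n", (int)info);
        return 1;
    }

    for (int i = 0; i < 3; i++)
        printf("%8.4f %8.4f %8.4f   (pivot row %d)\n",
               a[3*i], a[3*i+1], a[3*i+2], (int)ipiv[i]);
    return 0;
}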
The thing that looks most suspicious to me is the part marked "search for largest pivot". This does not only search but it also changes the matrix A. I find it hard to believe that is correct.
The different version of the LU algorithm differ in pivoting, so make sure you understand that. You cannot compare the results of different algorithms. A better check is to see whether L times U equals your original matrix, or a permutation thereof if your algorithm does pivoting. That being said, your result is wrong because the determinant is wrong (pivoting does not change the determinant, except for the sign).
Apart from that, @Philip has good advice. If you want to understand the code, start by understanding LU decomposition without pivoting.
To badly paraphrase Albert Einstein:
... a man with a watch always knows the exact time, but a man with two is never sure ...
Your code is definitely not producing the correct result, but even if it were, the result with pivoting will not directly correspond to the result without pivoting. In the context of a pivoting solution, what Alpha has really given you is probably the equivalent of this:
1 0 0 1 0 0 1 3 -2
P= 0 1 0 L= 2 1 0 U = 0 -2 7
0 0 1 3 2 1 0 0 -2
which will then satisfy the condition A = P.L.U (where . denotes the matrix product). If I compute the (notionally) same decomposition operation another way (using the LAPACK routine dgetrf via numpy in this case):
In [27]: A
Out[27]:
array([[ 1, 3, -2],
[ 3, 5, 6],
[ 2, 4, 3]])
In [28]: import scipy.linalg as la
In [29]: LU,ipivot = la.lu_factor(A)
In [30]: print LU
[[ 3. 5. 6. ]
[ 0.33333333 1.33333333 -4. ]
[ 0.66666667 0.5 1. ]]
In [31]: print ipivot
[1 1 2]
After a little bit of black magic with ipivot we get
0 1 0 1 0 0 3 5 6
P = 0 0 1 L = 0.33333 1 0 U = 0 1.3333 -4
1 0 0 0.66667 0.5 1 0 0 1
which also satisfies A = P.L.U . Both of these factorizations are correct, but they are different and they won't correspond to a correctly functioning version of the NR code.
So before you can go deciding whether you have the "right" answer, you really should spend a bit of time understanding the actual algorithm that the code you copied implements.
This thread has been viewed 6k times in the past 10 years. I had used NR Fortran and C for many years, and do not share the low opinions expressed here.
I explored the issue you encountered, and I believe the problem in your code is here:
for (k=j+1;k<n;k++) {
dum=1.0/(a[j*MAT1+j]);
for (i=j+1;i<n;i++) a[i*MAT1+j] *= dum;
}
while in the original if (j != n-1) { ... } is used. I think the two are not equivalent.
NR's lubksb() does have a small issue in the way they set up finding the first non-zero element, but this can be skipped at very low cost, even for a large matrix. With that, both ludcmp() and lubksb(), entered as published, work just fine, and as far as I can tell perform well.
Here's a complete test code, mostly preserving the notation of NR, with minor simplifications (tested under Ubuntu Linux/gcc):
/* A sample program to demonstrate matrix inversion using the
* Crout's algorithm from Teukolsky and Press (Numerical Recipes):
* LU decomposition + back-substitution, with partial pivoting
* 2022.06 edward.sternin at brocku.ca
*/
#define N 7
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define a(i,j) a[(i)*n+(j)]
/* implied 1D layout is a(0,0), a(0,1), ... a(0,n-1), a(1,0), a(1,1), ... */
void matrixPrint (double *M, int nrow, int ncol) {
int i,j;
for (i=0;i<nrow;i++) {
for (j=0;j<ncol;j++) { fprintf(stderr," %+.3f\t",M[i*ncol+j]); }
fprintf(stderr,"\n");
}
}
void die(char msg[]) {
fprintf(stderr,"ERROR in %s, aborting\n",msg);
exit(1);
}
void ludcmp(double *a, int n, int *indx) {
int i, imax, j, k;
double big, dum, sum, temp;
double *vv;
/* i=row index, i=0..(n-1); j=col index, j=0..(n-1) */
vv=(double *)malloc((size_t)(n * sizeof(double)));
if (!vv) die("ludcmp: allocation failure");
for (i = 0; i < n; i++) { /* loop over rows */
big = 0.0;
for (j = 0; j < n; j++) {
if ((temp=fabs(a(i,j))) > big) big=temp;
}
if (big == 0.0) die("ludcmp: a singular matrix provided");
vv[i] = 1.0 / big; /* vv stores the scaling factor for each row */
}
for (j = 0; j < n; j++) { /* Crout's method: loop over columns */
for (i = 0; i < j; i++) { /* except for i=j */
sum = a(i,j);
for (k = 0; k < i; k++) { sum -= a(i,k) * a(k,j); }
a(i,j) = sum; /* Eq. 2.3.12, in situ */
}
big = 0.0; /* searching for the largest pivot element */
for (i = j; i < n; i++) {
sum = a(i,j);
for (k = 0; k < j; k++) { sum -= a(i,k) * a(k,j); }
a(i,j) = sum;
if ((dum = vv[i] * fabs(sum)) >= big) {
big = dum;
imax = i;
}
}
if (j != imax) { /* if needed, interchange rows */
for (k = 0; k < n; k++){
dum = a(imax,k);
a(imax,k) = a(j,k);
a(j,k) = dum;
}
vv[imax] = vv[j]; /* keep the scale factor with the new row location */
}
indx[j] = imax;
if (j != n-1) { /* divide by the pivot element */
dum = 1.0 / a(j,j);
for (i = j + 1; i < n; i++) a(i,j) *= dum;
}
}
free(vv);
}
void lubksb(double *a, int n, int *indx, double *b) {
int i, ip, j;
double sum;
for (i = 0; i < n; i++) {
/* Forward substitution, Eq.2.3.6, unscrambling permutations from indx[] */
ip = indx[i];
sum = b[ip];
b[ip] = b[i];
for (j = 0; j < i; j++) sum -= a(i,j) * b[j];
b[i] = sum;
}
for (i = n-1; i >= 0; i--) { /* backsubstitution, Eq. 2.3.7 */
sum = b[i];
for (j = i + 1; j < n; j++) sum -= a(i,j) * b[j];
b[i] = sum / a(i,i);
}
}
int main() {
double *a,*y,*col,*aa,*res,sum;
int i,j,k,*indx;
a=(double *)malloc((size_t)(N*N * sizeof(double)));
y=(double *)malloc((size_t)(N*N * sizeof(double)));
col=(double *)malloc((size_t)(N * sizeof(double)));
indx=(int *)malloc((size_t)(N * sizeof(int)));
aa=(double *)malloc((size_t)(N*N * sizeof(double)));
res=(double *)malloc((size_t)(N*N * sizeof(double)));
if (!a || !y || !col || !indx || !aa || !res) die("main: memory allocation failure");
srand48((long int) N);
for (i=0;i<N;i++) {
for (j=0;j<N;j++) { aa[i*N+j] = a[i*N+j] = drand48(); }
}
fprintf(stderr,"\nRandomly generated matrix A = \n");
matrixPrint(a,N,N);
ludcmp(a,N,indx);
for(j=0;j<N;j++) {
for(i=0;i<N;i++) { col[i]=0.0; }
col[j]=1.0;
lubksb(a,N,indx,col);
for(i=0;i<N;i++) { y[i*N+j]=col[i]; }
}
fprintf(stderr,"\nResult of LU/BackSub is inv(A) :\n");
matrixPrint(y,N,N);
for (i=0; i<N; i++) {
for (j=0;j<N;j++) {
sum = 0;
for (k=0; k<N; k++) { sum += y[i*N+k] * aa[k*N+j]; }
res[i*N+j] = sum;
}
}
fprintf(stderr,"\nResult of inv(A).A = (should be 1):\n");
matrixPrint(res,N,N);
return(0);
}
I have a 2D array which is distributed across an MPI process grid (3 x 2 processes in this example). The values of the array are generated within the process to which that chunk of the array is distributed, and I want to gather all of those chunks together at the root process to display them.
So far, I have the code below. This generates a cartesian communicator, finds out the co-ordinates of the MPI process and works out how much of the array it should get based on that (as the array need not be a multiple of the cartesian grid size). I then create a new MPI derived datatype which will send the whole of that process's subarray as one item (that is, the stride, blocklength and count are different for each process, as each process has different sized arrays). However, when I come to gather the data together with MPI_Gather, I get a segmentation fault.
I think this is because I shouldn't be using the same datatype for sending and receiving in the MPI_Gather call. The data type is fine for sending the data, as it has the right count, stride and blocklength, but when it gets to the other end it'll need a very different derived datatype. I'm not sure how to calculate the parameters for this datatype - does anyone have any ideas?
Also, if I'm approaching this from completely the wrong angle then please let me know!
#include<stdio.h>
#include<array_alloc.h>
#include<math.h>
#include<mpi.h>
int main(int argc, char ** argv)
{
int size, rank;
int dim_size[2];
int periods[2];
int A = 2;
int B = 3;
MPI_Comm cart_comm;
MPI_Datatype block_type;
int coords[2];
float **array;
float **whole_array;
int n = 10;
int rows_per_core;
int cols_per_core;
int i, j;
int x_start, x_finish;
int y_start, y_finish;
/* Initialise MPI */
MPI_Init(&argc, &argv);
/* Get the rank for this process, and the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
{
/* If we're the master process */
whole_array = alloc_2d_float(n, n);
/* Initialise whole array to silly values */
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
whole_array[i][j] = 9999.99;
}
}
for (j = 0; j < n; j ++)
{
for (i = 0; i < n; i++)
{
printf("%f ", whole_array[j][i]);
}
printf("\n");
}
}
/* Create the cartesian communicator */
dim_size[0] = B;
dim_size[1] = A;
periods[0] = 1;
periods[1] = 1;
MPI_Cart_create(MPI_COMM_WORLD, 2, dim_size, periods, 1, &cart_comm);
/* Get our co-ordinates within that communicator */
MPI_Cart_coords(cart_comm, rank, 2, coords);
rows_per_core = ceil(n / (float) A);
cols_per_core = ceil(n / (float) B);
if (coords[0] == (B - 1))
{
/* We're at the far end of a row */
cols_per_core = n - (cols_per_core * (B - 1));
}
if (coords[1] == (A - 1))
{
/* We're at the bottom of a col */
rows_per_core = n - (rows_per_core * (A - 1));
}
printf("X: %d, Y: %d, RpC: %d, CpC: %d\n", coords[0], coords[1], rows_per_core, cols_per_core);
MPI_Type_vector(rows_per_core, cols_per_core, cols_per_core + 1, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
array = alloc_2d_float(rows_per_core, cols_per_core);
if (array == NULL)
{
printf("Problem with array allocation.\nExiting\n");
return 1;
}
for (j = 0; j < rows_per_core; j++)
{
for (i = 0; i < cols_per_core; i++)
{
array[j][i] = (float) (i + 1);
}
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Gather(array, 1, block_type, whole_array, 1, block_type, 0, MPI_COMM_WORLD);
/*
if (rank == 0)
{
for (j = 0; j < n; j ++)
{
for (i = 0; i < n; i++)
{
printf("%f ", whole_array[j][i]);
}
printf("\n");
}
}
*/
/* Close down the MPI environment */
MPI_Finalize();
}
The 2D array allocation routine I have used above is implemented as:
float **alloc_2d_float( int ndim1, int ndim2 ) {
float **array2 = malloc( ndim1 * sizeof( float * ) );
int i;
if( array2 != NULL ){
array2[0] = malloc( ndim1 * ndim2 * sizeof( float ) );
if( array2[ 0 ] != NULL ) {
for( i = 1; i < ndim1; i++ )
array2[i] = array2[0] + i * ndim2;
}
else {
free( array2 );
array2 = NULL;
}
}
return array2;
}
This is a tricky one. You're on the right track, and yes, you will need different types for sending and receiving.
The sending part is easy -- if you're sending the whole subarray array, then you don't even need the vector type; you can send the entire (rows_per_core)*(cols_per_core) contiguous floats starting at &(array[0][0]) (or array[0], if you prefer).
It's the receiving that's the tricky part, as you've gathered. Let's start with the simplest case -- assuming that everything divides evenly so all the blocks have the same size. Then you can use the very helpful MPI_Type_create_subarray (you could always cobble this together with vector types, but for higher-dimensional arrays this becomes tedious, as you need to create one intermediate type for each dimension of the array except the last).
Also, rather than hardcoding the decomposition, you can use the also-helpful MPI_Dims_create to create an as-square-as-possible decomposition of your ranks. Note
that this doesn't necessarily have anything to do with MPI_Cart_create, although you can use it for the requested dimensions. I'm going to skip the cart_create stuff here, not because it's not useful, but because I want to focus on the gather stuff.
So if everyone has the same size of array, then root is receiving the same data type from everyone, and one can use a very simple subarray type to get their data:
MPI_Type_create_subarray(2, whole_array_size, sub_array_size, starts,
MPI_ORDER_C, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
where sub_array_size[] = {rows_per_core, cols_per_core}, whole_array_size[] = {n,n}, and for here, starts[] = {0,0} - e.g., we'll just assume that everything starts at the start.
The reason for this is that we can then use Gatherv to explicitly set the displacements into the array:
for (int i=0; i<size; i++) {
counts[i] = 1; /* one block_type per rank */
int row = (i % A);
int col = (i / A);
/* displacement into the whole_array */
disps[i] = (col*cols_per_core + row*(rows_per_core)*n);
}
MPI_Gatherv(array[0], rows_per_core*cols_per_core, MPI_FLOAT,
recvptr, counts, disps, resized_type, 0, MPI_COMM_WORLD);
So now everyone sends their data in one chunk, and it's received into the right part of the array. For this to work, I've resized the type so that its extent is just one float, so the displacements can be calculated in that unit:
MPI_Type_create_resized(block_type, 0, 1*sizeof(float), &resized_type);
MPI_Type_commit(&resized_type);
The whole code is below:
#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<mpi.h>
float **alloc_2d_float( int ndim1, int ndim2 ) {
float **array2 = malloc( ndim1 * sizeof( float * ) );
int i;
if( array2 != NULL ){
array2[0] = malloc( ndim1 * ndim2 * sizeof( float ) );
if( array2[ 0 ] != NULL ) {
for( i = 1; i < ndim1; i++ )
array2[i] = array2[0] + i * ndim2;
}
else {
free( array2 );
array2 = NULL;
}
}
return array2;
}
void free_2d_float( float **array ) {
if (array != NULL) {
free(array[0]);
free(array);
}
return;
}
void init_array2d(float **array, int ndim1, int ndim2, float data) {
for (int i=0; i<ndim1; i++)
for (int j=0; j<ndim2; j++)
array[i][j] = data;
return;
}
void print_array2d(float **array, int ndim1, int ndim2) {
for (int i=0; i<ndim1; i++) {
for (int j=0; j<ndim2; j++) {
printf("%6.2f ", array[i][j]);
}
printf("\n");
}
return;
}
int main(int argc, char ** argv)
{
int size, rank;
int dim_size[2];
int periods[2];
MPI_Datatype block_type, resized_type;
float **array;
float **whole_array;
float *recvptr;
int *counts, *disps;
int n = 10;
int rows_per_core;
int cols_per_core;
int i, j;
int whole_array_size[2];
int sub_array_size[2];
int starts[2];
int A, B;
/* Initialise MPI */
MPI_Init(&argc, &argv);
/* Get the rank for this process, and the number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
{
/* If we're the master process */
whole_array = alloc_2d_float(n, n);
recvptr = &(whole_array[0][0]);
/* Initialise whole array to silly values */
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
whole_array[i][j] = 9999.99;
}
}
print_array2d(whole_array, n, n);
puts("\n\n");
}
/* Create the cartesian communicator */
MPI_Dims_create(size, 2, dim_size);
A = dim_size[1];
B = dim_size[0];
periods[0] = 1;
periods[1] = 1;
rows_per_core = ceil(n / (float) A);
cols_per_core = ceil(n / (float) B);
if (rows_per_core*A != n) {
if (rank == 0) fprintf(stderr,"Aborting: rows %d don't divide by %d evenly\n", n, A);
MPI_Abort(MPI_COMM_WORLD,1);
}
if (cols_per_core*B != n) {
if (rank == 0) fprintf(stderr,"Aborting: cols %d don't divide by %d evenly\n", n, B);
MPI_Abort(MPI_COMM_WORLD,2);
}
array = alloc_2d_float(rows_per_core, cols_per_core);
printf("%d, RpC: %d, CpC: %d\n", rank, rows_per_core, cols_per_core);
whole_array_size[0] = n;
sub_array_size [0] = rows_per_core;
whole_array_size[1] = n;
sub_array_size [1] = cols_per_core;
starts[0] = 0; starts[1] = 0;
MPI_Type_create_subarray(2, whole_array_size, sub_array_size, starts,
MPI_ORDER_C, MPI_FLOAT, &block_type);
MPI_Type_commit(&block_type);
MPI_Type_create_resized(block_type, 0, 1*sizeof(float), &resized_type);
MPI_Type_commit(&resized_type);
if (array == NULL)
{
printf("Problem with array allocation.\nExiting\n");
MPI_Abort(MPI_COMM_WORLD,3);
}
init_array2d(array,rows_per_core,cols_per_core,(float)rank);
counts = (int *)malloc(size * sizeof(int));
disps = (int *)malloc(size * sizeof(int));
/* note -- we're just using MPI_COMM_WORLD rank here to
* determine location, not the cart_comm for now... */
for (int i=0; i<size; i++) {
counts[i] = 1; /* one block_type per rank */
int row = (i % A);
int col = (i / A);
/* displacement into the whole_array */
disps[i] = (col*cols_per_core + row*(rows_per_core)*n);
}
MPI_Gatherv(array[0], rows_per_core*cols_per_core, MPI_FLOAT,
recvptr, counts, disps, resized_type, 0, MPI_COMM_WORLD);
free_2d_float(array);
if (rank == 0) print_array2d(whole_array, n, n);
if (rank == 0) free_2d_float(whole_array);
MPI_Finalize();
}
Minor thing -- you don't need the barrier before the gather. In fact, you hardly ever really need a barrier; they're expensive operations for a few reasons, and can hide problems -- my rule of thumb is to never, ever, use barriers unless you know exactly why the rule needs to be broken in that particular case. Here, the collective gather routine does exactly the same synchronization as the barrier, so just use that.
Now, moving onto the harder stuff. If things don't divide evenly, you have a few options. The simplest, though not necessarily the best, is just to pad the array so that it does divide evenly, even if just for this operation.
If you can arrange it so that the number of columns does divide evenly, even if the number of rows doesn't, then you can still use the gatherv and create a vector type for each part of the row, and gatherv that the appropriate number of rows from each processor. That would work fine.
If you definitely have the case where neither dimension can be counted on to divide evenly, and you can't pad the data for sending, then there are three sub-options I can see:
As susterpatt suggests, do point-to-point. For small numbers of tasks this is fine, but as the count grows it will be significantly less efficient than the collective operations.
Create a communicator consisting of all the processors not on the outer edges, and use exactly the code above to gather their data; then point-to-point the edge tasks' data.
Don't gather to process 0 at all; use the distributed array type to describe the layout of the array, and use MPI-IO to write all the data to a file; once that's done, you can have process zero display the data in some way if you like (a rough sketch of this follows below).
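Here is a rough fragment of that last option, reusing names from the code above (n, A, B, rows_per_core, cols_per_core, array, size, rank). It assumes the blocks your ranks own line up with the default block distribution MPI_Type_create_darray produces for a row-major A x B process grid, so you may need to adjust psizes or the rank-to-coordinate mapping to match your cartesian communicator; the file name is arbitrary.
/* Write the distributed array straight to a file with MPI-IO. */
int gsizes[2]   = { n, n };                  /* global array size */
int distribs[2] = { MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK };
int dargs[2]    = { MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG };
int psizes[2]   = { A, B };                  /* process grid */

MPI_Datatype filetype;
MPI_Type_create_darray(size, rank, 2, gsizes, distribs, dargs, psizes,
                       MPI_ORDER_C, MPI_FLOAT, &filetype);
MPI_Type_commit(&filetype);

MPI_File fh;
MPI_File_open(MPI_COMM_WORLD, "whole_array.dat",
              MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
MPI_File_set_view(fh, 0, MPI_FLOAT, filetype, "native", MPI_INFO_NULL);
MPI_File_write_all(fh, &(array[0][0]), rows_per_core * cols_per_core,
                   MPI_FLOAT, MPI_STATUS_IGNORE);
MPI_File_close(&fh);
MPI_Type_free(&filetype);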
It looks like the first argument to your MPI_Gather call should probably be array[0], and not array.
Also, if you need to get different amounts of data from each rank, you might be better off using MPI_Gatherv.
Finally, note that gathering all your data in one place to do output is not scalable in many circumstances. As the amount of data grows, eventually it will exceed the memory available to rank 0. You might be much better off distributing the output work (if you are writing to a file, using MPI-IO or other library calls) or doing point-to-point sends to rank 0 one at a time, to limit the total memory consumption.
On the other hand, I would not recommend coordinating each of your ranks printing to standard output, one after another, because some major MPI implementations don't guarantee that standard output will be produced in order. Cray's MPI, in particular, jumbles up standard output pretty thoroughly if multiple ranks print.
According to this (emphasis by me):
The type-matching conditions for the collective operations are more strict than the corresponding conditions between sender and receiver in point-to-point. Namely, for collective operations, the amount of data sent must exactly match the amount of data specified by the receiver. Distinct type maps between sender and receiver are still allowed.
Sounds to me like you have two options:
Pad smaller submatrices so that all processes send the same amount of data, then crop the matrix back to its original size after the Gather. If you're feeling adventurous, you might try defining the receiving typemap so that paddings are automatically overwritten during the Gather operation, thus eliminating the need for the crop afterwards. This could get a bit complicated though.
Fall back to point-to-point communication. Much more straightforward, but possibly higher communication costs.
Personally, I'd go with option 2.
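For what that might look like with the question's variables (whole_array, array, coords, rows_per_core, cols_per_core, n, A, B, size, rank), here is a rough point-to-point fragment; the small header message carrying each sender's grid coordinates and block size is my own addition, and rank 0 is assumed to copy its own block separately. It needs <stdlib.h> for malloc/free.
if (rank != 0) {
    /* Tell rank 0 where this block goes and how big it is, then send it. */
    int header[4] = { coords[0], coords[1], rows_per_core, cols_per_core };
    MPI_Send(header, 4, MPI_INT, 0, 0, MPI_COMM_WORLD);
    MPI_Send(&(array[0][0]), rows_per_core * cols_per_core, MPI_FLOAT,
             0, 1, MPI_COMM_WORLD);
} else {
    /* ... copy rank 0's own block into whole_array first (omitted) ... */
    for (int src = 1; src < size; src++) {
        int header[4];
        MPI_Recv(header, 4, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        int x = header[0], y = header[1], rows = header[2], cols = header[3];
        float *tmp = malloc(rows * cols * sizeof(float));
        MPI_Recv(tmp, rows * cols, MPI_FLOAT, src, 1, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        /* place the received rows at this block's offset in the n x n array */
        int row0 = y * (int)ceil(n / (float)A);
        int col0 = x * (int)ceil(n / (float)B);
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                whole_array[row0 + r][col0 + c] = tmp[r * cols + c];
        free(tmp);
    }
}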