I tried to write a function called matrixMultiply that takes two 4 x 4 matrices a and b, multiplies them, and stores the result in the 4 x 4 matrix c. After that I wanted to generalize the program to n x n matrices. Unfortunately, the program compiles but hangs during execution. I would be very grateful if someone could tell me where my error is.
#include <stdio.h>
#include "mpi.h"

void matrixMultiply(int argc, char* argv[], int a[][4], int b[][4], int c[][4])
{
    int n = 4;
    int procs;
    int rank;
    int rootRank = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if(rank == rootRank) {
        int current_row[4];
        for(int i = 0; i < n; i++) {
            int current_column[4];
            for(int j = 0; j < n; j++) {
                //getting the i-th row
                for(int k = 0; k < n; k++) {
                    current_row[k] = a[i][k];
                }
                //getting the j-th column
                for (int k = 0; k < n; k++) {
                    current_column[k] = b[k][j];
                }

                //MPI_Send(void* data, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm communicator)
                MPI_Bsend(current_row, 4, MPI_INT, i, 0, MPI_COMM_WORLD);
                MPI_Bsend(current_column, 4, MPI_INT, i, 1, MPI_COMM_WORLD);

                int result;
                //MPI_Recv(void* data, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm communicator, MPI_Status* status)
                MPI_Recv(&result, 1, MPI_INT, i, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                c[i][j] = result;
            }
        }

        /* this code is only used to check the resulting matrix c */
        printf("c:\n");
        for(int i = 0; i < 4; i++) {
            for(int j = 0; j < 4; j++) {
                printf("%d ", c[i][j]);
            }
            printf("\n");
        }
        printf("\n");
    }
    else {
        int result = 0;
        int local_row[4] = {0,0,0,0};
        int local_column[4] = {2,2,2,2};

        //MPI_Recv(void* data, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm communicator, MPI_Status* status)
        MPI_Recv(local_row, 4, MPI_INT, rootRank, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(local_column, 4, MPI_INT, rootRank, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        for(int i = 0; i < 4; i++) {
            result += local_row[i] * local_column[i];
        }

        //MPI_Send(void* data, int count, MPI_Datatype datatype, int destination, int tag, MPI_Comm communicator)
        MPI_Bsend(&result, 1, MPI_INT, rootRank, 2, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return;
}

int main(int argc, char* argv[]) {
    int d[][4] = {{1,2,3,4}, {5,6,7,8}, {9,10,11,12}, {13,14,15,16}};
    int e[][4] = {{16,15,14,13}, {12,11,10,9}, {8,7,6,5}, {4,3,2,1}};
    int f[][4] = {{0,0,0,0}, {0,0,0,0}, {0,0,0,0}, {0,0,0,0}};
    matrixMultiply(argc, argv, d, e, f);
}
A couple of issues to address:
MPI_Bsend (as opposed to MPI_Send) requires a preceding call to MPI_Buffer_attach. See the notes here: https://www.mpich.org/static/docs/latest/www3/MPI_Bsend.html or any recent version of the MPI spec.
Try with that change first. It's possible that's enough to get it going. If not, double check any ordering of Sends and Receives, considered globally, to see whether there's a deadlock anywhere in the system.
As an aside, you may be able to improve performance, as well as avoid some deadlock scenarios, by switching to nonblocking sends and receives (MPI_Isend, MPI_Irecv and related) and completing them via MPI_Waitall, MPI_Testall or similar once they have all been initiated. This is closer to how MPI implementations typically execute collective operations under the hood, and it can outperform individual send and receive calls because the implementation knows more about the hardware and can optimize the ordering accordingly. What you're doing looks something like an MPI_Bcast followed by an MPI_Gather, one possible alternative pattern that would lean more heavily on the implementation to handle those optimizations for you.
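To illustrate the first point, here is a minimal sketch of what attaching a buffer before the MPI_Bsend calls could look like. The buffer size below is an arbitrary assumption for this example; in practice it should be derived from your actual outstanding message sizes plus MPI_BSEND_OVERHEAD per message.

// Sketch only: attach a buffer once after MPI_Init and before any MPI_Bsend.
// (Needs <stdlib.h> for malloc/free.)
int bufsize = 2 * (4 * (int)sizeof(int) + MPI_BSEND_OVERHEAD); // one row plus one column
char *bsend_buf = malloc(bufsize);
MPI_Buffer_attach(bsend_buf, bufsize);

/* ... the MPI_Bsend / MPI_Recv logic from the question ... */

// Detaching blocks until all buffered messages have been delivered.
MPI_Buffer_detach(&bsend_buf, &bufsize);
free(bsend_buf);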
I have an array of Matrix structs which the program got from the user's input. I need to distribute the matrices to the processes with OpenMPI. I tried using Scatter, but I am quite confused about the arguments needed for it to work (and also about how to receive the data into each local array). Here is my current code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>

#define nil NULL
#define NMAX 100
#define DATAMAX 1000
#define DATAMIN -1000

typedef struct Matrix
{
    int mat[NMAX][NMAX]; // Matrix cells
    int row_eff;         // Matrix effective row
    int col_eff;         // Matrix effective column
} Matrix;

void init_matrix(Matrix *m, int nrow, int ncol)
{
    m->row_eff = nrow;
    m->col_eff = ncol;

    for (int i = 0; i < m->row_eff; i++)
    {
        for (int j = 0; j < m->col_eff; j++)
        {
            m->mat[i][j] = 0;
        }
    }
}

Matrix input_matrix(int nrow, int ncol)
{
    Matrix input;
    init_matrix(&input, nrow, ncol);

    for (int i = 0; i < nrow; i++)
    {
        for (int j = 0; j < ncol; j++)
        {
            scanf("%d", &input.mat[i][j]);
        }
    }

    return input;
}

void print_matrix(Matrix *m)
{
    for (int i = 0; i < m->row_eff; i++)
    {
        for (int j = 0; j < m->col_eff; j++)
        {
            printf("%d ", m->mat[i][j]);
        }
        printf("\n");
    }
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    // Get number of processes
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Get process rank
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Get matrices from user inputs
    int kernel_row, kernel_col, num_targets, target_row, target_col;

    // reads the kernel's row and column counts and initializes the kernel matrix from input
    scanf("%d %d", &kernel_row, &kernel_col);
    Matrix kernel = input_matrix(kernel_row, kernel_col);

    // reads number of target matrices and their dimensions.
    // initialize array of matrices and array of data ranges (int)
    scanf("%d %d %d", &num_targets, &target_row, &target_col);
    Matrix *arr_mat = (Matrix *)malloc(num_targets * sizeof(Matrix));
    for (int i = 0; i < num_targets; i++)
    {
        arr_mat[i] = input_matrix(target_row, target_col);
    }

    // Get number of matrices per process
    int num_mat_per_proc = ceil(num_targets / size);

    // Init local matrices and scatter the global matrices
    Matrix *local_mats = (Matrix *)malloc(num_mat_per_proc * sizeof(Matrix));
    MPI_Scatter(arr_mat, sizeof(local_mats), MPI_BYTE, &local_mats, sizeof(local_mats), MPI_BYTE, 0, MPI_COMM_WORLD);

    if (rank == 0)
    {
        // Range arrays -> array of convolution results
        int arr_range[num_targets];
        printf("From master \n");
        for (int i = 0; i < 3; i++)
        {
            print_matrix(&arr_mat[i]);
        }
    }
    else
    {
        printf("From slave %d = \n", rank);
        print_matrix(&local_mats[0]);
    }

    MPI_Finalize();
}
So here are a few doubts I have about the current implementation:
Can I accept the input just like that, or should I make it so that it only happens in rank 0?
How do I implement the scatter part, possibly using Scatterv, since the number of matrices might not be divisible by the number of processes?
Can I accept the input just like that or should I make it so that it
only happens in rank 0?
No. As best practice, you should use command-line arguments or read from a file.
If you want to use scanf, then use it inside rank 0. STDIN is forwarded to rank 0 (as far as I know this is not required by the standard, but I guess it should work; it is implementation dependent).
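For example (a sketch only, reusing the names from the question rather than a drop-in replacement), the reads could be confined to rank 0 and the dimensions broadcast so that every rank can size its buffers:

// Only rank 0 reads; every rank learns the dimensions via MPI_Bcast.
int dims[3]; // num_targets, target_row, target_col
if (rank == 0) {
    scanf("%d %d %d", &dims[0], &dims[1], &dims[2]);
}
MPI_Bcast(dims, 3, MPI_INT, 0, MPI_COMM_WORLD);
num_targets = dims[0]; target_row = dims[1]; target_col = dims[2];

Matrix *arr_mat = NULL;
if (rank == 0) {
    // Only the root holds the full array of input matrices.
    arr_mat = malloc(num_targets * sizeof(Matrix));
    for (int i = 0; i < num_targets; i++)
        arr_mat[i] = input_matrix(target_row, target_col);
}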
How do I implement the scatter part and possibly using Scatterv
because the amount of arrays might not be divisible to the number of
processes?
If you have different sizes to send to different processes, then you should use MPI_Scatterv.
Scatter Syntax:
MPI_Scatter(
void* send_data,
int send_count,
MPI_Datatype send_datatype,
void* recv_data,
int recv_count,
MPI_Datatype recv_datatype,
int root,
MPI_Comm communicator)
Your usage:
MPI_Scatter(arr_mat, sizeof(local_mats), MPI_BYTE, &local_mats, sizeof(local_mats), MPI_BYTE, 0, MPI_COMM_WORLD);
Potential error points:
In send_count: the size to send (as Gilles Gouaillardet pointed out in the comments) should not be sizeof(local_mats); it should be num_mat_per_proc * sizeof(Matrix).
In recv_count: likewise, the size to receive should not be sizeof(local_mats), which is just the size of a pointer.
Since you use the same type (MPI_BYTE) for the send and the receive, your send_count == recv_count.
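Addressing the Scatterv part of the question: below is a hedged sketch of how the byte counts and displacements could be computed when num_targets is not divisible by size. The sendcounts/displs arrays and the remainder-spreading scheme are my own additions, not part of the original code.

int *sendcounts = malloc(size * sizeof(int));
int *displs = malloc(size * sizeof(int));
int offset = 0;
for (int p = 0; p < size; p++) {
    // Spread the remainder over the first (num_targets % size) ranks.
    int nmat = num_targets / size + (p < num_targets % size ? 1 : 0);
    sendcounts[p] = nmat * (int)sizeof(Matrix); // counts are in bytes (MPI_BYTE)
    displs[p] = offset;
    offset += sendcounts[p];
}

int my_count = sendcounts[rank] / (int)sizeof(Matrix);
Matrix *local_mats = malloc((my_count > 0 ? my_count : 1) * sizeof(Matrix));
MPI_Scatterv(arr_mat, sendcounts, displs, MPI_BYTE,
             local_mats, sendcounts[rank], MPI_BYTE,
             0, MPI_COMM_WORLD);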
This is a very stripped-down piece of my program, and as such it is not necessarily reproducible on its own. However, I was wondering whether there is a way to send an array of arrays using MPI, or whether this is not possible and I should flatten my array instead. Any help would be greatly appreciated, as I've been struggling to figure this out.
int *individual_topIds;
int **cell_topIds;
cell_topIds = (int**) malloc(sizeof(int*)*25*boxes);

if(rank == 0) {
    for (int i = 0; i < boxes; i++) {
        individual_topIds = (int*) malloc(sizeof(int)*25);
        for(int j = 0; j < cellMatrix[i].numTop; j++){
            individual_topIds[j] = cellMatrix[i].aTopIds[j];
        }
        cell_topIds[i] = individual_topIds;
    }
    MPI_Send(cell_topIds, boxes*25, MPI_INT, 1, 10, MPI_COMM_WORLD);
}
Then in my rank == 1 section I have tried sending and receiving with just boxes as well as with boxes*25.
for 1 -> boxes
    MPI_Recv(cell_topIds, boxes*25, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
    int *ptop;
    ptop = (int*) malloc(sizeof(int)*25);
    ptop = cell_topIds[i];
    printf("1\n");
    for(int j = 0; j < sizeof(&ptop)/sizeof(int); j++){
        printf("%d, ", ptop[j]);
    }
    printf("2\n");
end for i -> boxes
free(ptop);
Edit: I forgot to mention that the output of the print is a segfault:
Caught error: Segmentation fault (signal 11)
This is not a particularly well-worded question.
However, MPI will let you send arrays of arrays if you use a custom type, as below:
#include "mpi.h"
#include <stdio.h>
struct Partstruct
{
char c;
double d[6];
char b[7];
};
int main(int argc, char *argv[])
{
struct Partstruct particle[1000];
int i, j, myrank;
MPI_Status status;
MPI_Datatype Particletype;
MPI_Datatype type[3] = { MPI_CHAR, MPI_DOUBLE, MPI_CHAR };
int blocklen[3] = { 1, 6, 7 };
MPI_Aint disp[3];
MPI_Init(&argc, &argv);
disp[0] = &particle[0].c - &particle[0];
disp[1] = &particle[0].d - &particle[0];
disp[2] = &particle[0].b - &particle[0];
MPI_Type_create_struct(3, blocklen, disp, type, &Particletype);
MPI_Type_commit(&Particletype);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0)
{
MPI_Send(particle, 1000, Particletype, 1, 123, MPI_COMM_WORLD);
}
else if (myrank == 1)
{
MPI_Recv(particle, 1000, Particletype, 0, 123, MPI_COMM_WORLD, &status);
}
MPI_Finalize();
return 0;
}
Alternatively, use a flat array design (this is a good idea for performance reasons as well as easy compatibility with MPI).
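For the flat-array route, here is a rough sketch applied to the snippet from the question (boxes, cellMatrix, numTop and aTopIds are taken from the question; the -1 padding for unused slots is my own assumption):

// Pack the per-box id lists into one contiguous boxes*25 buffer and send it in one message.
int *flat = malloc(boxes * 25 * sizeof(int));
if (rank == 0) {
    for (int i = 0; i < boxes; i++)
        for (int j = 0; j < 25; j++)
            flat[i * 25 + j] = (j < cellMatrix[i].numTop) ? cellMatrix[i].aTopIds[j] : -1;
    MPI_Send(flat, boxes * 25, MPI_INT, 1, 10, MPI_COMM_WORLD);
} else if (rank == 1) {
    MPI_Recv(flat, boxes * 25, MPI_INT, 0, 10, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // The ids for box i now start at flat + i * 25 on the receiving side.
}
free(flat);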
I have already looked at answers about MPI and dynamic allocation, but there is still an error in my code.
I think the send/receive pairs work well. The problem probably comes from the part where I want to do some basic operations afterwards. I can't access the array by its indices without getting this error:
[lyomatnuc09:07574] *** Process received signal ***
[lyomatnuc09:07575] *** Process received signal ***
[lyomatnuc09:07575] Signal: Segmentation fault (11)
[lyomatnuc09:07575] Signal code: Address not mapped (1)
[lyomatnuc09:07575] Failing at address: 0x60
The basic code that reproduces the error is below:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int **alloc_array(int rows, int cols) {
    int *data = (int *)malloc(rows*cols*sizeof(int));
    int **array = (int **)malloc(rows*sizeof(int*));
    for (int i = 0; i < rows; i++)
        array[i] = &(data[cols*i]);

    return array;
}

int main(int argc, char *argv[])
{
    int rank, size;
    double start_time;
    MPI_Status status;

    MPI_Init(&argc, &argv);               //initialize MPI operations
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); //get the rank
    MPI_Comm_size(MPI_COMM_WORLD, &size); //get number of processes

    MPI_Datatype columntype;
    MPI_Type_vector(10, 1, 10, MPI_INT, &columntype);
    MPI_Type_commit(&columntype);

    start_time = MPI_Wtime();

    if (rank == 0)
    {
        int **A;
        A = alloc_array(10,10);
        for (int i = 1; i < size; i++)
        {
            MPI_Send(&(A[0][0]), 10*10, MPI_INT, i, 1, MPI_COMM_WORLD);
        }
    } else if (rank >= 1) {
        int **A2;
        A2 = alloc_array(10,10);
        MPI_Recv(&(A2[0][0]), 10*10, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);

        for (int i = 0; i < 10; i++)
        {
            for (int j = 0; j < 10; i++)
            {
                A2[i][j] = i*j; //bug here
            }
        }
    } //end slaves task

    MPI_Finalize();
    return 0;
}
I was wondering why this program actually works with MPI (Open MPI 1.5/1.6):
#include <stdio.h>
#include <mpi.h>

#define VECTOR_SIZE 100

int main(int argc, char **argv) {
    int A[VECTOR_SIZE];
    int sub_size = 2;
    int count = 10;
    MPI_Datatype partial_array;
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Type_vector(count, sub_size,
                    2*sub_size, MPI_INT, &partial_array);
    MPI_Type_commit(&partial_array);

    if (rank == 0) {
        int i;
        // server - initialize data and send
        for (i = 0; i < VECTOR_SIZE; i++) {
            A[i] = i;
        }
        MPI_Send(&(A[0]), 1, partial_array, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int i;
        for (i = 0; i < VECTOR_SIZE; i++) {
            A[i] = 0;
        }
        // vector is composed by 20 MPI_INT elements
        MPI_Recv(&(A[0]), 20, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("\n");
        for (i = 0; i < VECTOR_SIZE; i++) {
            printf("%d ", A[i]);
        }
        printf("\n");
    }
    MPI_Finalize();
}
while this other program where Send and Receive primitives are exchanged does not terminate (the receive never completes):
#include <stdio.h>
#include <mpi.h>

#define VECTOR_SIZE 100

int main(int argc, char **argv) {
    int A[VECTOR_SIZE];
    int sub_size = 2;
    int count = 10;
    MPI_Datatype partial_array;
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Type_vector(count, sub_size,
                    2*sub_size, MPI_INT, &partial_array);
    MPI_Type_commit(&partial_array);

    if (rank == 0) {
        int i;
        // server - initialize data and send
        for (i = 0; i < VECTOR_SIZE; i++) {
            A[i] = i;
        }
        MPI_Send(&(A[0]), 20, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int i;
        // client - receive data and print
        for (i = 0; i < VECTOR_SIZE; i++) {
            A[i] = 0;
        }
        MPI_Recv(&(A[0]), 1, partial_array, 1, 0, MPI_COMM_WORLD, &status);
        printf("\n");
        for (i = 0; i < VECTOR_SIZE; i++) {
            printf("%d ", A[i]);
        }
        printf("\n");
    }
    MPI_Finalize();
}
If I understand the MPI type matching rules correctly, neither of the two should complete.
Obviously, in the second program rank 0 is sending to itself and rank 1 is also expecting a message from itself:
MPI_Send(&(A[0]),20, MPI_INT, 0, 0, MPI_COMM_WORLD);
destination rank should be 1, not 0
MPI_Recv(&(A[0]), 1, partial_array, 1, 0, MPI_COMM_WORLD, &status);
source rank should be 0, not 1.
Apart from that, you do not understand MPI type matching correctly. It only requires that the underlying primitive types in the type maps on both ends match. You are creating a vector whose type map has 20 primitive integers. If you send one element of this type, your message will actually contain 20 integers. On the receiver side you provide space for at least 20 integers, so this is correct. The opposite is also correct.
It is not correct if you send only 10 or 18 integers in the second program, since they will not make a complete element of the vector type. Nevertheless, the receive operation will complete, but if you call MPI_Get_count() on the status, it will return MPI_UNDEFINED because from the number of received primitive integer elements one cannot construct an integer number of vector elements. It is also not correct to mix primitive types, e.g. to send MPI_DOUBLE (or a vector, or a structure, or whatever other type that contains doubles) and receive it as MPI_INT.
Please also note that MPI messages do not carry their type map or type ID with them so most MPI implementations do not check if types match. It is possible to send MPI_FLOAT and receive it as MPI_INT (because both are 4 bytes on most systems) but it is not correct to do so.
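One way to see this on the receiving side is to query the status after the receive, as in this small sketch (it assumes the partial_array type and the buffer A from the first program):

MPI_Status status;
MPI_Recv(A, 20, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

int n_ints, n_vectors;
MPI_Get_count(&status, MPI_INT, &n_ints);          // 20: twenty primitive integers arrived
MPI_Get_count(&status, partial_array, &n_vectors); // 1 if they form a whole number of vector
                                                   // elements, MPI_UNDEFINED otherwise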
I wrote a small program using MPI to parallelize matrix-matrix multiplication. The problem is: when running the program on my computer, it takes about 10 seconds to complete, but about 75 seconds on a cluster. I think I have some synchronization problem, but I cannot figure it out (yet).
Here's my source code:
/* matrix.c
   mpicc -o out matrix.c
   mpirun -np 11 out
*/
#include <mpi.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define N 1000
#define DATA_TAG 10
#define B_SENT_TAG 20
#define FINISH_TAG 30

int master(int);
int worker(int, int);

int main(int argc, char **argv) {
    int myrank, p;
    double s_time, f_time;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (myrank == 0) {
        s_time = MPI_Wtime();
        master(p);
        f_time = MPI_Wtime();
        printf("Complete in %1.2f seconds\n", f_time - s_time);
        fflush(stdout);
    }
    else {
        worker(myrank, p);
    }
    MPI_Finalize();
    return 0;
}
int *read_matrix_row();
int *read_matrix_col();
int send_row(int *, int);
int recv_row(int *, int, MPI_Status *);
int send_tag(int, int);
int write_matrix(int *);

int master(int p) {
    MPI_Status status;
    int *a; int *b;
    int *c = (int *)malloc(N * sizeof(int));
    int i, j; int num_of_finish_row = 0;

    while (1) {
        for (i = 1; i < p; i++) {
            a = read_matrix_row();
            b = read_matrix_col();
            send_row(a, i);
            send_row(b, i);
            //printf("Master - Send data to worker %d\n", i);fflush(stdout);
        }
        wait();

        for (i = 1; i < N / (p - 1); i++) {
            for (j = 1; j < p; j++) {
                //printf("Master - Send next row to worker[%d]\n", j);fflush(stdout);
                b = read_matrix_col();
                send_row(b, j);
            }
        }

        for (i = 1; i < p; i++) {
            //printf("Master - Announce all row of B sent to worker[%d]\n", i);fflush(stdout);
            send_tag(i, B_SENT_TAG);
        }

        //MPI_Barrier(MPI_COMM_WORLD);

        for (i = 1; i < p; i++) {
            recv_row(c, MPI_ANY_SOURCE, &status);
            //printf("Master - Receive result\n");fflush(stdout);
            num_of_finish_row++;
        }
        //printf("Master - Finish %d rows\n", num_of_finish_row);fflush(stdout);
        if (num_of_finish_row >= N)
            break;
    }
    //printf("Master - Finish multiply two matrix\n");fflush(stdout);

    for (i = 1; i < p; i++) {
        send_tag(i, FINISH_TAG);
    }
    //write_matrix(c);
    return 0;
}
int worker(int myrank, int p) {
    int *a = (int *)malloc(N * sizeof(int));
    int *b = (int *)malloc(N * sizeof(int));
    int *c = (int *)malloc(N * sizeof(int));
    int i;

    for (i = 0; i < N; i++) {
        c[i] = 0;
    }

    MPI_Status status;
    int next = (myrank == (p - 1)) ? 1 : myrank + 1;
    int prev = (myrank == 1) ? p - 1 : myrank - 1;

    while (1) {
        recv_row(a, 0, &status);
        if (status.MPI_TAG == FINISH_TAG)
            break;
        recv_row(b, 0, &status);
        wait();
        //printf("Worker[%d] - Receive data from master\n", myrank);fflush(stdout);

        while (1) {
            for (i = 1; i < p; i++) {
                //printf("Worker[%d] - Start calculation\n", myrank);fflush(stdout);
                calc(c, a, b);
                //printf("Worker[%d] - Exchange data with %d, %d\n", myrank, next, prev);fflush(stdout);
                exchange(b, next, prev);
            }
            //printf("Worker %d- Request for more B's row\n", myrank);fflush(stdout);
            recv_row(b, 0, &status);
            //printf("Worker %d - Receive tag %d\n", myrank, status.MPI_TAG);fflush(stdout);
            if (status.MPI_TAG == B_SENT_TAG) {
                break;
                //printf("Worker[%d] - Finish calc one row\n", myrank);fflush(stdout);
            }
        }
        //wait();
        //printf("Worker %d - Send result\n", myrank);fflush(stdout);
        send_row(c, 0);
        for (i = 0; i < N; i++) {
            c[i] = 0;
        }
    }
    return 0;
}
int *read_matrix_row() {
    int *row = (int *)malloc(N * sizeof(int));
    int i;
    for (i = 0; i < N; i++) {
        row[i] = 1;
    }
    return row;
}

int *read_matrix_col() {
    int *col = (int *)malloc(N * sizeof(int));
    int i;
    for (i = 0; i < N; i++) {
        col[i] = 1;
    }
    return col;
}

int send_row(int *row, int dest) {
    MPI_Send(row, N, MPI_INT, dest, DATA_TAG, MPI_COMM_WORLD);
    return 0;
}

int recv_row(int *row, int src, MPI_Status *status) {
    MPI_Recv(row, N, MPI_INT, src, MPI_ANY_TAG, MPI_COMM_WORLD, status);
    return 0;
}

int wait() {
    MPI_Barrier(MPI_COMM_WORLD);
    return 0;
}

int calc(int *c_row, int *a_row, int *b_row) {
    int i;
    for (i = 0; i < N; i++) {
        c_row[i] = c_row[i] + a_row[i] * b_row[i];
        //printf("%d ", c_row[i]);
    }
    //printf("\n");fflush(stdout);
    return 0;
}

int exchange(int *row, int next, int prev) {
    MPI_Request request; MPI_Status status;
    MPI_Isend(row, N, MPI_INT, next, DATA_TAG, MPI_COMM_WORLD, &request);
    MPI_Irecv(row, N, MPI_INT, prev, MPI_ANY_TAG, MPI_COMM_WORLD, &request);
    MPI_Wait(&request, &status);
    return 0;
}

int send_tag(int worker, int tag) {
    MPI_Send(0, 0, MPI_INT, worker, tag, MPI_COMM_WORLD);
    return 0;
}

int write_matrix(int *matrix) {
    int i;
    for (i = 0; i < N; i++) {
        printf("%d ", matrix[i]);
    }
    printf("\n");
    fflush(stdout);
    return 0;
}
Well, first you have a fairly small matrix (N=1000), and second you distribute your algorithm on a row/column basis rather than in blocks.
For a more realistic version using better algorithms, you might want to acquire an optimized BLAS library (e.g. GOTO is free), test single-thread performance with that one, then get PBLAS and link it against your optimized BLAS, and compare MPI parallel performance using the PBLAS version.
I see some errors in your program:
First, why are you calling the wait function, when its implementation is simply a call to MPI_Barrier? MPI_Barrier is a synchronization primitive that blocks all processes until they have all reached the barrier. My question is: do you really want the master to be synchronized with the workers? Here that is not optimal, because a worker doesn't need to wait for the master in order to begin its calculation.
Second, there are 2 unnecessary for loops.
for (i = 1; i < N / (p - 1); i++) {
    for (j = 1; j < p; j++) {
        b = read_matrix_col();
        send_row(b, j);
    }
}
for (i = 1; i < p; i++) {
    send_tag(i, B_SENT_TAG);
}
In the first i-loop, you never use the loop variable i in the body. Since the j-loop and the second i-loop run over the same range, you could combine them:
for (i = 1; i < p; i++) {
    b = read_matrix_col();
    send_row(b, i);
    send_tag(i, B_SENT_TAG);
}
In terms of data transfer, your program is not optimized, because you send an array of 1000 integers on every transfer. There should be a better way to optimize the data transfer, but I will let you look into it. So make the corrections I told you about and tell us what your new performance is.
And as #janneb said, you can use the BLAS library for better performance for matrix multiplication. Good luck!
I did not look over your code, but I can provide some hints about why your result may not be unexpected:
As already mentioned, N=1000 may be too small. You should make more tests to see the scalability of your program (try setting N=100, 500, 1000, 5000, 10000, etc.) and compare results on both your system and the cluster.
Compare results between your system (one processor I presume) and a single processor on the cluster. Usually in production environments like servers or clusters a single processor is less powerful than the best processors designed for desktop use, but they provide stability, reliability and other features useful for environments which run 24h/day at full capacity.
If your processor has multiple cores, more than one MPI process may run at the same time, and synchronization between them is negligible compared to the synchronization between nodes in a cluster.
Are the nodes of the cluster statically assigned to you? Other users' programs may be scheduled on the nodes you are using at the same time as yours.
Read documentation about the cluster's architecture. Some architectures may be more suitable for particular classes of problems.
Assess the latency of the cluster's network. Pinging from each node to the others many times and computing the mean value may give a rough estimate; a minimal ping-pong sketch follows at the end of this list.
Last but perhaps most important: your algorithm may not be optimal. Read a book or two on matrix multiplication (I can recommend "Matrix Computations" by Golub and Van Loan).
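For the latency estimate mentioned above, here is a minimal ping-pong sketch between ranks 0 and 1 (the rank pair and the repetition count are arbitrary choices for illustration; to map the network you would run it across different node pairs):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const int reps = 1000;
    char byte = 0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            // send a single byte and wait for the echo
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            // echo the byte back
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("mean round-trip time: %g us\n", (t1 - t0) / reps * 1e6);

    MPI_Finalize();
    return 0;
}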