How to debug with LD_PRELOAD - C

I use LD_PRELOAD to override MPI_Send with my own function in order to do some debugging of the MPI_Send calls.
Here is my myMPI_Send.c code:
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
#include <mpi.h>
#include <stdlib.h>
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
{
    /* Look up the next definition of MPI_Send in the library search order,
       i.e. the real one from the MPI library. */
    int (*original_MPI_Send)(const void *, int, MPI_Datatype, int, int, MPI_Comm);
    original_MPI_Send = (int (*)(const void *, int, MPI_Datatype, int, int, MPI_Comm)) dlsym(RTLD_NEXT, "MPI_Send");
    printf(" Calling MPI_Send ************** \n");
    return (*original_MPI_Send)(buf, count, datatype, dest, tag, comm);
}
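For reference, I build the shared object and preload it roughly like this (the application name is just an example from my setup; with Open MPI the variable may need to be exported to the ranks, e.g. with mpirun -x LD_PRELOAD=...):
mpicc -shared -fPIC -o myMPI_Send.so myMPI_Send.c -ldl
LD_PRELOAD=./myMPI_Send.so mpirun -np 2 ./my_app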
In my project I use an external library that also calls MPI_Send. I need to debug that library to know the line of each MPI_Send call and the number of calls.
I tried this, using macros:
fprintf (stderr,"MPI_Send, func <%s>, file %s, line %d, count %d\n",__func__, __FILE__, __LINE__, __COUNTER__);
But it doesn't work: it always prints the line of MPI_Send inside myMPI_Send.so.
Could you help me please? Thank you in advance.

MPI covers most of your needs via the MPI Profiling Interface (aka PMPI).
Simply redefine the MPI_* subroutines you need and have them call the corresponding original PMPI_* subroutine.
In your case:
int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
{
    printf(" Calling MPI_Send ************** \n");
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
Since you want to print the line and file of the caller, you might have to use macros and rebuild your app:
#define MPI_Send(buf, count, datatype, dest, tag, comm) \
    myMPI_Send(buf, count, datatype, dest, tag, comm, __func__, __FILE__, __LINE__)
int myMPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm, const char *func, const char *file, int line)
{
    printf(" Calling MPI_Send ************** from %s at %s:%d\n", func, file, line);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}
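One way to wire the macro up (a sketch, not part of the original answer; the header name and the -include trick are just one option) is to put it in a small header and force-include that header when rebuilding the application, e.g. with mpicc -include mpi_send_trace.h:
/* mpi_send_trace.h -- hypothetical header name */
#ifndef MPI_SEND_TRACE_H
#define MPI_SEND_TRACE_H

/* Include <mpi.h> here, before the macro is defined, so the macro cannot
   mangle the MPI_Send prototype in mpi.h. */
#include <mpi.h>
#include <stdio.h>

static inline int myMPI_Send(const void *buf, int count, MPI_Datatype datatype,
                             int dest, int tag, MPI_Comm comm,
                             const char *func, const char *file, int line)
{
    printf(" Calling MPI_Send from %s at %s:%d\n", func, file, line);
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

/* Every later use of MPI_Send(...) in recompiled sources goes through the
   wrapper above and records the caller's function, file and line. */
#define MPI_Send(buf, count, datatype, dest, tag, comm) \
    myMPI_Send(buf, count, datatype, dest, tag, comm, __func__, __FILE__, __LINE__)

#endif
Note that this only affects sources you recompile; calls inside prebuilt binaries are not rewritten, since the macro works at compile time.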

Related

MPI_Scatter produces write error, bad address (3)

I am getting a write error when trying to scatter a dynamically allocated matrix (it is contiguous); it happens when 5 or more cores are involved in the computation. I have placed printfs and it occurs in the scatter. The code is as follows:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cblas.h>
#include <sys/time.h>
/* DIE is not shown in the question; a minimal definition is assumed here so
   the snippet compiles. */
#define DIE(msg) do { fprintf(stderr, "%s\n", msg); exit(EXIT_FAILURE); } while (0)
int main(int argc, char* argv[])
{
int err = MPI_Init(&argc, &argv);
MPI_Comm world;
world=MPI_COMM_WORLD;
int size = 0;
err = MPI_Comm_size(world, &size);
int rank = 0;
err = MPI_Comm_rank(world, &rank);
int n_rows=2400, n_cols=2400, n_rpc=n_rows/size;
float *A, *Asc, *B, *C; //Dyn alloc A B and C
Asc=malloc(n_rpc*n_cols*sizeof(float));
B=malloc(n_rows*n_cols*sizeof(float));
C=malloc(n_rows*n_cols*sizeof(float));
A=malloc(n_rows*n_cols*sizeof(float));
if(rank==0)
{
for (int i=0; i<n_rows; i++)
{
for (int j=0; j<n_cols; j++)
{
A[i*n_cols+j]= i+1.0;
B[i*n_cols+j]=A[i*n_cols+j];
}
}
}
struct timeval start, end;
if(rank==0) gettimeofday(&start,NULL);
MPI_Bcast(B, n_rows*n_cols, MPI_FLOAT, 0, MPI_COMM_WORLD);
if(rank==0) printf("Before Scatter\n"); //It is breaking here
MPI_Scatter(A, n_rpc*n_cols, MPI_FLOAT, Asc, n_rpc*n_cols, MPI_FLOAT, 0, MPI_COMM_WORLD);
if(rank==0) printf("After Scatter\n");
/* Some computation */
err = MPI_Finalize();
if (err) DIE("MPI_Finalize");
return err;
}
Up to 4 cores it works correctly and performs the scatter, but with 5 or more it does not, and I cannot find a clear reason.
The error message is as follows:
[raspberrypi][[26238,1],0][btl_tcp_frag.c:130:mca_btl_tcp_frag_send] mca_btl_tcp_frag_send: writev error (0xac51e0, 8)
Bad address(3)
[raspberrypi][[26238,1],0][btl_tcp_frag.c:130:mca_btl_tcp_frag_send] mca_btl_tcp_frag_send: writev error (0xaf197048, 29053982)
Bad address(1)
[raspberrypi:05345] pml_ob1_sendreq.c:308 FATAL
Thanks in advance!
There are multiple errors. First of all, take care to always use the same type when defining variables. Then, when you use scatter, the send count and the receive count are the same: each process sends/receives elements/cores. Likewise, when collecting the results with gather you have to receive the same per-process amount you sent, so again elements/cores.
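As a sketch of that bookkeeping, reusing the question's variable names (Csc is a hypothetical per-rank result buffer, and this assumes n_rows is divisible by size):
int n_rpc = n_rows / size;  /* rows per process; assumes n_rows % size == 0 */
/* Every rank passes the same per-rank count to MPI_Scatter... */
MPI_Scatter(A,   n_rpc * n_cols, MPI_FLOAT,
            Asc, n_rpc * n_cols, MPI_FLOAT, 0, MPI_COMM_WORLD);
/* ...and the gather of the partial results must use that same count. */
MPI_Gather(Csc, n_rpc * n_cols, MPI_FLOAT,
           C,   n_rpc * n_cols, MPI_FLOAT, 0, MPI_COMM_WORLD);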

MPI_Gatherv: create and collect arrays of variable size (MPI+C)

I am new to MPI and I am trying to build arrays of different sizes on the different processes in parallel and then pass them to the root process, unsuccessfully so far.
I have learned that
MPI_Gatherv(const void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, const int *recvcounts, const int *displs,
MPI_Datatype recvtype, int root, MPI_Comm comm)
is the way to go in this case.
Here is my sample code, which doesn't work because of memory issues (I think).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
int main (int argc, char *argv[]) {
MPI_Init(&argc, &argv);
int world_size,*sendarray;
int rank, *rbuf=NULL, count;
int *displs=NULL,i,*rcounts=NULL;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
if(rank==0){
rbuf = malloc(10*sizeof(int));
displs = malloc(world_size*sizeof(int));
rcounts=malloc(world_size*sizeof(int));
rcounts[0]=1;
rcounts[1]=3;
rcounts[2]=6;
displs[0]=1;
displs[1]=3;
displs[2]=6;
sendarray=malloc(1*sizeof(int));
for(int i=0;i<1;i++)sendarray[i]=1;
count=1;
}
if(rank==1){
sendarray=malloc(3*sizeof(int));
for(int i=0;i<3;i++)sendarray[i]=2;
count=3;
}
if(rank==2){
sendarray=malloc(6*sizeof(int));
for(int i=0;i<6;i++)sendarray[i]=3;
count=6;
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Gatherv(sendarray, count, MPI_INT, rbuf, rcounts,
displs, MPI_INT, 0, MPI_COMM_WORLD);
if(rank==0){
int SIZE=10;
for(int i=0;i<SIZE;i++)printf("(%d) %d ",i, rbuf[i]);
free(rbuf);
free(displs);
free(rcounts);
}
if(rank!=0)free(sendarray);
MPI_Finalize();
}
Specifically, when I run it, I get
(0) 0 (1) 1 (2) 0 (3) 2 (4) 2 (5) 2 (6) 3 (7) 3 (8) 3 (9) 3
Instead of something like this
(0) 1 (1) 2 (2) 2 (3) 2 (4) 3 (5) 3 (6) 3 (7) 3 (8) 3 (9) 3
Why is that?
What is even more interesting is that the missing elements seem to be stored in the 11th and 12th elements of rbuf, even though those are not even supposed to exist in the first place.
Your program is very close to working. If you change these lines:
displs[0]=1;
displs[1]=3;
displs[2]=6;
to this:
displs[0]=0;
displs[1]=displs[0]+rcounts[0];
displs[2]=displs[1]+rcounts[1];
you will get the expected output. Entry i of displs is the offset into the receive buffer at which the data from process i is placed.
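More generally, the displacements can be computed from the counts so they stay consistent for any number of ranks (a sketch assuming rcounts has already been filled in on the root):
/* Each rank's block starts right after the previous rank's block. */
displs[0] = 0;
for (int i = 1; i < world_size; i++)
    displs[i] = displs[i - 1] + rcounts[i - 1];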

How to print to console (Linux) without the standard library (libc)

I'm not using the standard library, since my target x86 Linux distro is very limited.
#include <unistd.h>
void _start () {
const char msg[] = "Hello world";
write( STDOUT_FILENO, msg, sizeof( msg ) - 1 );
}
I want to print text to the console, but I can't. Is there any other way to do this?
The code above won't work because it depends on the standard library; I build it with:
gcc Test.cpp -o Test -nostdlib
If you don't have libc, then you need to craft a write() system call from scratch to write to the standard output.
See this resource for the details: http://weeb.ddns.net/0/programming/c_without_standard_library_linux.txt
Code example from the above link:
/* syscall5 is implemented in assembly in the linked resource: it places the
   syscall number and arguments in the registers the kernel expects and then
   executes the system call. */
void* syscall5(
void* number,
void* arg1,
void* arg2,
void* arg3,
void* arg4,
void* arg5
);
typedef unsigned long int uintptr; /* size_t */
typedef long int intptr; /* ssize_t */
static
intptr write(int fd, void const* data, uintptr nbytes)
{
return (intptr)
syscall5(
(void*)1, /* SYS_write */
(void*)(intptr)fd,
(void*)data,
(void*)nbytes,
0, /* ignored */
0 /* ignored */
);
}
int main(int argc, char* argv[])
{
write(1, "hello\n", 6);
return 0;
}
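If you would rather keep everything in a single C file, here is a minimal sketch using GCC inline assembly. It assumes an x86-64 target and that ABI's syscall numbers (SYS_write = 1, SYS_exit = 60); 32-bit x86 uses int $0x80 and different numbers, and you may need extra flags such as -fno-stack-protector depending on the toolchain:
/* hello_nolibc.c -- sketch, x86-64 Linux only.
   Build with something like: gcc -nostdlib -static hello_nolibc.c -o hello */
typedef unsigned long uintptr;

static long sys_write(int fd, const void *buf, uintptr count)
{
    long ret;
    /* rax = syscall number, rdi/rsi/rdx = arguments; syscall clobbers rcx/r11. */
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "0"(1L), "D"((long)fd), "S"(buf), "d"(count)
                      : "rcx", "r11", "memory");
    return ret;
}

static void sys_exit(int code)
{
    __asm__ volatile ("syscall"
                      :
                      : "a"(60L), "D"((long)code)
                      : "rcx", "r11", "memory");
    __builtin_unreachable();
}

void _start(void)
{
    static const char msg[] = "Hello world\n";
    sys_write(1, msg, sizeof(msg) - 1);
    sys_exit(0);   /* _start must not return: there is no caller to return to */
}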

MPI_Send/MPI_Recv blocked using MPI_Comm_connect / MPI_Comm_accept

I'm developing a client/server application using MPI. I'm using MPI_Open_port(), MPI_Comm_accept() and MPI_Comm_connect() for that. Inside the application I also use MPI_Send() and MPI_Recv() for the client/server communication.
All is fine when I transfer small arrays with MPI_Send()/MPI_Recv(). The problem appears when the arrays grow in size: for large arrays MPI_Send/MPI_Recv block the execution. It is as if I were sending and receiving the data on the same rank, which is true, but I'm using different communicators. I really don't understand why it doesn't work.
For a size of 16*16*16 everything works fine but, for instance, for a size of 32*32*32 it doesn't. I really want to know what is happening and I would appreciate any advice on solving this issue.
Thanks.
The server is:
#include<mpi.h>
#include<stdio.h>
#define SIZE 32*32*32
//#define SIZE 16*16*16
int main(int argc, char *argv[])
{
MPI_Comm client;
char port_name[MPI_MAX_PORT_NAME];
FILE* port_file;
int buf[SIZE];
MPI_Init(&argc,&argv);
port_file = fopen("ports.txt","w");
MPI_Open_port(MPI_INFO_NULL, port_name);
fprintf(port_file,"%s\n",port_name);
fclose(port_file);
MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &client);
printf("Client connected...\n");
printf("Sending data...\n");
MPI_Send((void*)&buf, SIZE, MPI_INT, 0, 0, client);
printf("Data sent...\n");
MPI_Comm_free(&client);
MPI_Close_port(port_name);
MPI_Finalize();
return 0;
}
The client is:
#include<mpi.h>
#include<stdio.h>
#define SIZE 32*32*32
//#define SIZE 16*16*16
int main(int argc, char *argv[])
{
MPI_Comm server;
char port_name[MPI_MAX_PORT_NAME];
FILE* port_file;
int buf[SIZE];
MPI_Init(&argc,&argv);
port_file = fopen("ports.txt","r");
fscanf(port_file,"%s\n",port_name);
fclose(port_file);
MPI_Comm_connect(port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &server);
printf("Client connected...\n");
printf("Receiving data...\n");
MPI_Recv((void*)&buf, SIZE, MPI_INT, MPI_ANY_SOURCE,
MPI_ANY_TAG, server, MPI_STATUS_IGNORE);
printf("Data received...\n");
//MPI_Comm_disconnect(&server);
MPI_Finalize();
return 0;
}

Callbacks provided in MPI_Comm_create_keyval are not called

I am reading "Using MPI" and try to execute the code myself. There is a nonblocking broadcast code in Chapter 6.2. I tried to run with my own callbacks instead of MPI_NULL_COPY_FN or MPI_NULL_DELETE_FN. Here is my code, it is very similar to the code in book, but the callbacks will not be called. I am not sure why. There are no warnings or errors when I compile with -Wall. Could you help me please? Thanks a lot.
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
static int ibcast_keyval=MPI_KEYVAL_INVALID;
typedef struct
{
MPI_Comm comm;
int ordering_tag;
} Ibcast_syshandle;
typedef struct
{
MPI_Request *req_array;
MPI_Status *stat_array;
int num_sends;
int num_recvs;
} Ibcast_handle;
int Ibcast_work(Ibcast_handle *handle)
{
if(handle->num_recvs==0)
MPI_Startall(handle->num_sends, handle->req_array);
else
MPI_Startall(handle->num_recvs, &(handle->req_array[handle->num_sends]));
return MPI_SUCCESS;
}
int Ibcast_copy(MPI_Comm oldcomm, int keyval, void *extra, void *attr_in, void *attr_out, int *flag)
{
Ibcast_syshandle *syshandle=(Ibcast_syshandle *)attr_in;
Ibcast_syshandle *new_syshandle;
printf("keyval=%d\n", keyval);
fflush(stdout);
if((keyval==MPI_KEYVAL_INVALID)||(keyval!=ibcast_keyval)||(syshandle==NULL))
return 1;
new_syshandle=(Ibcast_syshandle *)malloc(sizeof(Ibcast_syshandle));
new_syshandle->ordering_tag=0;
MPI_Comm_dup(syshandle->comm, &(new_syshandle->comm));
{
int rank;
MPI_Comm_rank(new_syshandle->comm, &rank);
printf("Ibcast_copy called from %d\n", rank);
fflush(stdout);
}
*(void **)attr_out=(void *)new_syshandle;
*flag=1;
return MPI_SUCCESS;
}
int Ibcast_delete(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
Ibcast_syshandle *syshandle=(Ibcast_syshandle *)attr_val;
{
int rank;
MPI_Comm_rank(syshandle->comm, &rank);
printf("Ibcast_delete called from %d\n", rank);
fflush(stdout);
}
if((keyval==MPI_KEYVAL_INVALID)||(keyval!=ibcast_keyval)||(syshandle==NULL))
return 1;
MPI_Comm_free(&(syshandle->comm));
free(syshandle);
return MPI_SUCCESS;
}
int Ibcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm, Ibcast_handle **handle_out)
{
Ibcast_syshandle *syshandle;
Ibcast_handle *handle;
int flag, mask, relrank;
int retn, size, rank;
int req_no=0;
MPI_Comm_size(comm, &size);
MPI_Comm_rank(comm, &rank);
if(size==1)
{
(*handle_out)=NULL;
return MPI_SUCCESS;
}
if(ibcast_keyval==MPI_KEYVAL_INVALID)
// MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &ibcast_keyval, NULL);
MPI_Comm_create_keyval(Ibcast_copy, Ibcast_delete, &ibcast_keyval, NULL);
MPI_Comm_get_attr(comm, ibcast_keyval, (void **)&syshandle, &flag);
if(flag==0)
{
syshandle=(Ibcast_syshandle *)malloc(sizeof(Ibcast_syshandle));
syshandle->ordering_tag=0;
MPI_Comm_dup(comm, &(syshandle->comm));
MPI_Comm_set_attr(comm, ibcast_keyval, (void *)syshandle);
}
handle=(Ibcast_handle *)malloc(sizeof(Ibcast_handle));
handle->num_sends=handle->num_recvs=0;
mask=0x1;
relrank=(rank-root+size)%size;
while((mask&relrank)==0 && mask<size)
{
if((relrank|mask)<size)
++handle->num_sends;
mask<<=1;
}
if(mask<size)
++handle->num_recvs;
handle->req_array=(MPI_Request *)malloc(sizeof(MPI_Request)*(handle->num_sends+handle->num_recvs));
handle->stat_array=(MPI_Status *)malloc(sizeof(MPI_Status)*(handle->num_sends+handle->num_recvs));
mask=0x1;
relrank=(rank-root+size)%size;
while((mask&relrank)==0 && mask<size)
{
if((relrank|mask)<size)
MPI_Send_init(buf, count, datatype, ((relrank|mask)+root)%size, syshandle->ordering_tag, syshandle->comm, &(handle->req_array[req_no++]));
mask<<=1;
}
if(mask<size)
MPI_Recv_init(buf, count, datatype, ((relrank & (~mask))+root)%size, syshandle->ordering_tag, syshandle->comm, &(handle->req_array[req_no++]));
retn=Ibcast_work(handle);
++(syshandle->ordering_tag);
(*handle_out)=handle;
return retn;
}
int Ibcast_wait(Ibcast_handle **handle_out)
{
Ibcast_handle *handle=(*handle_out);
int retn, i;
if(handle==NULL)
return MPI_SUCCESS;
if(handle->num_recvs!=0)
{
MPI_Waitall(handle->num_recvs, &handle->req_array[handle->num_sends], &handle->stat_array[handle->num_sends]);
MPI_Startall(handle->num_sends, handle->req_array);
}
retn=MPI_Waitall(handle->num_sends, handle->req_array, handle->stat_array);
for(i=0; i<(handle->num_sends+handle->num_recvs);i++)
MPI_Request_free(&(handle->req_array[i]));
free(handle->req_array);
free(handle->stat_array);
free(handle);
*handle_out=NULL;
return retn;
}
int main( int argc, char *argv[] )
{
int buf1[10], buf2[20];
int rank, i;
Ibcast_handle *ibcast_handle_1, *ibcast_handle_2;
MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
if (rank == 0) {
for (i=0; i<10; i++) buf1[i] = i;
for (i=0; i<20; i++) buf2[i] = -i;
}
Ibcast( buf1, 10, MPI_INT, 0, MPI_COMM_WORLD, &ibcast_handle_1 );
Ibcast( buf2, 20, MPI_INT, 0, MPI_COMM_WORLD, &ibcast_handle_2 );
Ibcast_wait( &ibcast_handle_1 );
Ibcast_wait( &ibcast_handle_2 );
for (i=0; i<10; i++) {
if (buf1[i] != i) printf( "buf1[%d] = %d on %d\n", i, buf1[i], rank );
}
for (i=0; i<20; i++) {
if (buf2[i] != -i) printf( "buf2[%d] = %d on %d\n", i, buf2[i], rank );
}
MPI_Finalize();
return 0;
}
The callback functions are there to copy and delete the created attributes when a communicator is duplicated or deleted, or just when the attribute is deleted. The callback functions are necessary because the attributes can be completely arbitrary.
So here's a stripped-down version of your code that does work (creating such a minimal example is a useful way both to track down problems and to get help on sites like SO):
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
static int ibcast_keyval;
int Ibcast_copy(MPI_Comm oldcomm, int keyval, void *extra, void *attr_in, void *attr_out, int *flag)
{
printf("In ibcast_copy: keyval = %d\n", keyval);
*flag = 1;
return MPI_SUCCESS;
}
int Ibcast_delete(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
printf("In ibcast_delete: keyval = %d\n", keyval);
return MPI_SUCCESS;
}
int main( int argc, char *argv[] )
{
int rank, i;
int attr=2;
MPI_Init( &argc, &argv );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );
MPI_Comm duped_comm;
MPI_Comm_create_keyval(Ibcast_copy, Ibcast_delete, &ibcast_keyval, NULL);
MPI_Comm_set_attr( MPI_COMM_WORLD, ibcast_keyval, &attr);
MPI_Comm_dup( MPI_COMM_WORLD, &duped_comm);
MPI_Comm_free( &duped_comm );
MPI_Comm_delete_attr( MPI_COMM_WORLD, ibcast_keyval );
MPI_Finalize();
return 0;
}
Here we create the keyval with the callbacks, set the attribute corresponding to the key, then dup MPI_COMM_WORLD (invoking the copy callback), and finally free the dup'ed communicator and delete the attribute from MPI_COMM_WORLD (invoking the delete callback both times):
$ mpirun -np 1 ./comm-attr
In ibcast_copy: keyval = 10
In ibcast_delete: keyval = 10
In ibcast_delete: keyval = 10
In your code, you dup the communicator in Ibcast before setting the attribute, so the copy callback is never invoked (there is nothing to copy yet). You can fix that part by setting the attribute before the dup, but then there is another problem: you call dup and free within the callbacks, which is the wrong way around; those functions are supposed to (indirectly) invoke the callbacks, not vice versa.
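As a sketch of the shape the callbacks could take instead (one possibility, not the book's code; it reuses the Ibcast_syshandle struct and headers from the question, and since the new communicator's handle is not available inside the copy callback, the code that owns the communicator has to fill in that field after the dup returns):
/* Copy callback: only duplicate the attribute value itself; no MPI calls here. */
int Ibcast_copy(MPI_Comm oldcomm, int keyval, void *extra,
                void *attr_in, void *attr_out, int *flag)
{
    Ibcast_syshandle *old_handle = (Ibcast_syshandle *)attr_in;
    Ibcast_syshandle *new_handle = malloc(sizeof(Ibcast_syshandle));
    new_handle->ordering_tag = old_handle->ordering_tag;
    new_handle->comm = MPI_COMM_NULL;   /* filled in by the caller after the dup */
    *(void **)attr_out = new_handle;
    *flag = 1;                          /* keep the attribute on the new communicator */
    return MPI_SUCCESS;
}

/* Delete callback: only release the attribute value; no MPI calls here. */
int Ibcast_delete(MPI_Comm comm, int keyval, void *attr_val, void *extra)
{
    free(attr_val);
    return MPI_SUCCESS;
}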

Resources