Open MPI 1.10 not seeing processes on Linux Mint 17

I recently changed from Ubuntu 14 to Linux Mint 17. After re-installing Open MPI 1.10 (which went fine) I ran into a very strange problem:
I compiled my standard mpi_hello_world.c with mpicc and tried to run
it (mpiexec -n 4 mpi_hello), but got the following output:
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
So mpiexec is launching four separate copies of the program, each of which sees itself as rank 0 out of 1!
I also tried to run a more complicated program, but it resulted in the following error:
Quitting. Number of MPI tasks (1) must be divisible by 4.
Earlier, on Ubuntu, everything ran smoothly with the same parameters.
What could be wrong, and how can I solve it?
The hello world program was a standard one taken from www.mpitutorial.com:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}
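The usual cause of this exact symptom (an assumption here, since the question does not show the installation layout) is a mismatch between the MPI implementation that compiled the binary and the mpiexec that launches it: the launcher then starts four unrelated singleton processes, each of which initializes as rank 0 out of 1. A quick sanity check on the two tools:

```shell
# Show where the compiler wrapper and the launcher actually live. If they
# resolve into different installation prefixes (e.g. /usr/bin vs
# /usr/local/bin), the program may be compiled against one MPI library
# and launched by a different implementation's mpiexec.
MPICC_PATH=$(command -v mpicc || echo "mpicc not found")
MPIEXEC_PATH=$(command -v mpiexec || echo "mpiexec not found")
echo "mpicc:   $MPICC_PATH"
echo "mpiexec: $MPIEXEC_PATH"
```

If they disagree, recompiling with the wrapper that matches the launcher (or fixing the PATH so both come from the freshly installed Open MPI 1.10) should restore the expected "rank N out of 4" output.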

Related

How to make mpiexec.hydra use all cores

I have a test code like this:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}
and a makefile like this:
EXECS=mpi_hello_world
MPICC?=mpicc

all: ${EXECS}

mpi_hello_world: mpi_hello_world.c
	${MPICC} -o mpi_hello_world mpi_hello_world.c

clean:
	rm -f ${EXECS}
When running:
mpirun ./mpi_hello_world
I got this output:
Hello world from processor x-space, rank 2 out of 6 processors
Hello world from processor x-space, rank 3 out of 6 processors
Hello world from processor x-space, rank 4 out of 6 processors
Hello world from processor x-space, rank 1 out of 6 processors
Hello world from processor x-space, rank 5 out of 6 processors
Hello world from processor x-space, rank 0 out of 6 processors
All cores got used
And when running:
mpiexec.hydra ./mpi_hello_world
I got this output:
Hello world from processor x-space, rank 0 out of 1 processors
Only one core got used
When running:
mpiexec.hydra -n 6 ./mpi_hello_world
I got this output:
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Still, only one core got used.
My question is: when running with mpiexec.hydra, how do I make it use all the cores?
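A plausible explanation (an assumption, not stated in the question): mpiexec.hydra is MPICH's process manager, while the mpirun that worked is probably Open MPI's. A binary compiled with one implementation's wrapper generally cannot be launched by the other implementation's launcher; every process then starts as an independent rank 0 of 1. The two launchers identify themselves in their version output:

```shell
# Open MPI's launchers typically report "Open MPI"/"OpenRTE" in their
# version string, while MPICH's hydra launcher reports "HYDRA".
# Comparing the two shows whether they come from different MPI stacks.
MPIRUN_INFO=$(mpirun --version 2>/dev/null | head -n 1)
HYDRA_INFO=$(mpiexec.hydra --version 2>/dev/null | head -n 1)
echo "mpirun:        ${MPIRUN_INFO:-not found on this machine}"
echo "mpiexec.hydra: ${HYDRA_INFO:-not found on this machine}"
```

If they belong to different implementations, either recompile mpi_hello_world with MPICH's mpicc before launching with mpiexec.hydra, or keep launching with the mpirun that matches the compiler wrapper.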

mpirun was unable to launch the specified application as it could not access or execute an executable

I created a clustering program with MPI across 4 nodes, but when I run it, I can't execute the program even though I have been given full access to the folder I created.
This is the program I wrote to count the processes running on each node/client:
#include <mpi.h>
#include <stdio.h>
int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);

    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);

    printf("hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);

    MPI_Finalize();
    return 0;
}
But when I run the program with the command mpirun -np 8 --hostfile /mnt/sharedfolder/hostfile /mnt/sharedfolder/hello_world.out,
the following message appears:
mpirun was unable to launch the specified application as it could not access
or execute an executable:
Executable: /mnt/sharedfolder/hello_world.out
Node: node-02
while attempting to start process rank 0.
--------------------------------------------------------------------------
8 total processes failed to start
I have previously allowed full access through the firewall on the Google Cloud Platform for each node. Please help me, I'm still learning.
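The error message points at the filesystem, not the network: node-02 cannot access or execute the file. A first check (a sketch; the path and node name are taken from the question) is whether the binary exists with the execute bit on every node that mounts the shared folder:

```shell
# Verify the execute bit on the binary. The same test must pass on every
# node in the hostfile (e.g. via ssh node-02), not just on the head node.
BIN=/mnt/sharedfolder/hello_world.out
if [ -x "$BIN" ]; then
  echo "OK: $BIN is executable on this node"
else
  echo "problem: $BIN is missing or lacks the execute bit on this node"
  echo "possible fix: chmod +x $BIN  (then re-check the mount on node-02)"
fi
```

Google Cloud firewall rules control connectivity between the nodes; they do not grant file permissions, so a permission or mount problem on node-02 is the more likely culprit here.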

How to make/configure an MPI project to run on multiple processes?

I have a small piece of code that should run on multiple processes, which is:
#include <stdio.h>
#include "mpi.h"
int main(int argc, char **argv)
{
    int ierr, num_procs, my_id;

    ierr = MPI_Init(&argc, &argv);

    /* find out MY process ID, and how many processes were started. */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    printf("Hello world! I'm process %i out of %i processes\n",
           my_id, num_procs);

    ierr = MPI_Finalize();
    return 0;
}
The output is:
Hello world! I'm process 0 out of 1 processes
although it should run on more than one process.
We edited the run configuration's arguments to "-np 2" so it would run on 2 processes, but it always gives us 1 process no matter what the value is.
The environment used is:
Eclipse Juno on Ubuntu 12.04
Source of the code: http://condor.cc.ku.edu/~grobe/docs/intro-MPI-C.shtml
It seems that you are trying to launch your MPI application directly, i.e. starting the compiled executable with -np 2, like this:
$ ./my_app -np 2
That's not the right way of launching MPI programs. Instead, you should call your MPI implementation's launcher (usually named mpirun or mpiexec), and pass the name of your executable and -np 2 to that. So if your executable is called my_app, then instead of the above command, you should run:
$ mpirun -np 2 ./my_app
Consult your MPI implementation's documentation for the specifics.
Some small points about the mpirun and mpiexec commands:
If you are trying to run your app on multiple nodes, make sure it is copied to all of them. In addition, make sure that all needed binary files (executables & libraries) can be found directly from the command line. Personally, I prefer to run my MPI programs from a shell script like this:
#!/bin/sh
PATH=/path/to/all/executable/files \
LD_LIBRARY_PATH=/path/to/all/libraries \
mpirun -np 4 ./my_app arg1 arg2 arg3

Error while compiling hello world program in mpi in C on openSUSE

PROGRAM :
#include <stdio.h>
#include <mpi.h>
int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* get number of processes */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
ERROR :
/usr/lib/gcc/i586-suse-linux/4.4/../../../../i586-suse-linux/bin/ld: cannot find -lopen-rte
collect2: ld returned 1 exit status
Command used for compilation: mpicc hello.c -o ./hello
I am trying to build a cluster of openSUSE nodes.
So I am testing if mpich2 programs run on every node.
libopen-rte.so belongs to Open MPI, not MPICH2. Check the default MPI implementation using the mpi-selector tool. I personally prefer Open MPI.
It looks like you have two MPI libraries installed at the same time. While this is possible, it's usually a pain to configure and use if you're not very careful. I'd suggest uninstalling either Open MPI or MPICH. That should take care of your problem.
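Following the two answers above, a hedged sketch of how to inspect which MPI stack the wrapper actually uses (mpi-selector is the tool mentioned in the first answer; the -show/-showme spellings differ between the MPICH and Open MPI wrappers):

```shell
# List the MPI implementations registered with mpi-selector, if present.
mpi-selector --list 2>/dev/null || echo "mpi-selector not installed here"
# Ask the wrapper what it links against: MPICH's mpicc answers -show,
# Open MPI's answers -showme. The output reveals which libraries (such
# as -lopen-rte) the link line is trying to pull in.
WRAPPER_CMD=$(mpicc -show 2>/dev/null || mpicc -showme 2>/dev/null || echo "no mpicc wrapper found")
echo "$WRAPPER_CMD"
```

If the wrapper's link line names Open MPI libraries while MPICH2 is the intended implementation, switching the default (or uninstalling one of the two stacks, as suggested above) makes the -lopen-rte error disappear.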

Compiling an MPI project with mingw64

I'm trying to get the following code running with the "mpiexec -n 4 myprogram" command.
#include <stdio.h>
#include "mpi.h"
#include <omp.h>
int main(int argc, char *argv[]) {
    int numprocs, rank, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int iam = 0, np = 1;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(processor_name, &namelen);

    #pragma omp parallel default(shared) private(iam, np)
    {
        np = omp_get_num_threads();
        iam = omp_get_thread_num();
        printf("Hello from thread %d out of %d from process %d out of %d on %s\n",
               iam, np, rank, numprocs, processor_name);
    }

    MPI_Finalize();
    return 0;
}
I'm using Win7 x64, MPICH2 x64, Eclipse x64, and mingw64 (rubenvb build). It compiles fine and also runs in the Eclipse environment (though only with one process there), but on the command line it immediately closes without a result or an error. If I compile it as an x86 exe, it runs as intended. So what's going wrong? Is MPI incompatible with programs compiled by mingw64?
If you build it as a console program, it will run, finish, and then immediately close, as there is likely no statement in the program that holds the console open.
If you run it again, this time by opening the console first and launching it from the command line, the console will stay open, since it is running as a separate process rather than being tied to your program (as is the case when you double-click the program to run it).
As for not running in parallel, make sure you have the flag -fopenmp in both the compilation and linking stages.
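To make that concrete (a sketch; "myprogram" is the name used in the question, and the exact wrapper depends on the MPICH2/MinGW-w64 setup): -fopenmp must reach both the compile and the link step, so in a split build it belongs in both flag sets.

```shell
# In a one-shot build a single command covers both stages; in a split
# build the flag has to appear in CFLAGS (compile) and LDFLAGS (link).
CFLAGS="-fopenmp"
LDFLAGS="-fopenmp"
echo "compile: mpicc $CFLAGS -c myprogram.c -o myprogram.o"
echo "link:    mpicc myprogram.o -o myprogram.exe $LDFLAGS"
```

Without the flag at link time the OpenMP runtime (libgomp for GCC-family compilers) is not linked in, and without it at compile time the pragma is simply ignored, so the parallel region runs on a single thread, matching the single-process behavior described.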
