How to make mpiexec.hydra use all cores

I have a test program like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
    MPI_Finalize();
}
and a Makefile like this:

EXECS=mpi_hello_world
MPICC?=mpicc

all: ${EXECS}

mpi_hello_world: mpi_hello_world.c
	${MPICC} -o mpi_hello_world mpi_hello_world.c

clean:
	rm -f ${EXECS}
When running:
mpirun ./mpi_hello_world
I got this:
Hello world from processor x-space, rank 2 out of 6 processors
Hello world from processor x-space, rank 3 out of 6 processors
Hello world from processor x-space, rank 4 out of 6 processors
Hello world from processor x-space, rank 1 out of 6 processors
Hello world from processor x-space, rank 5 out of 6 processors
Hello world from processor x-space, rank 0 out of 6 processors
All cores were used.
And when running:
mpiexec.hydra ./mpi_hello_world
I got this:
Hello world from processor x-space, rank 0 out of 1 processors
Only one core was used.
When running:
mpiexec.hydra -n 6 ./mpi_hello_world
I got this:
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Hello world from processor x-space, rank 0 out of 1 processors
Still, only one core was used.
The question is: when running with mpiexec.hydra, how do I use all cores?
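A common cause of every process reporting "rank 0 out of 1" is a launcher/library mismatch: mpiexec.hydra is MPICH's launcher, and a binary built against a different implementation (for example Open MPI) will come up as an independent singleton in each launched process. As a diagnostic sketch (flags differ between implementations, so treat these as assumptions to verify against your installation), check which implementation each tool belongs to:

$ mpicc -show                              # MPICH wrapper: prints the underlying compile line
$ mpicc --showme                           # Open MPI wrapper: same idea, different flag
$ mpiexec.hydra --version                  # prints HYDRA build details (MPICH)
$ mpirun --version                         # which implementation does mpirun come from?
$ ldd ./mpi_hello_world | grep -i libmpi   # which libmpi the binary links against

If the binary links against Open MPI, launch it with Open MPI's mpirun; if it links against MPICH, mpiexec.hydra -n 6 should then report 6 ranks.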

Related

mpirun was unable to launch the specified application as it could not access or execute an executable with C

I created a clustering program using MPI with 4 nodes, but when I run it the program fails to launch, even though I have full access to the folder I created.
The program counts the number of processors running on each node/client:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);
    printf("hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
    MPI_Finalize();
    return 0;
}
but when I run the program with the command:

mpirun -np 8 --hostfile /mnt/sharedfolder/hostfile /mnt/sharedfolder/hello_world.out

a message like this appears:
mpirun was unable to launch the specified application as it could not access
or execute an executable:
Executable: /mnt/sharedfolder/hello_world.out
Node: node-02
while attempting to start process rank 0.
--------------------------------------------------------------------------
8 total processes failed to start
I had previously opened full access in the Google Cloud Platform firewall for each node. Please help me, I'm still learning.
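Since the error says node-02 could not access or execute the binary, it is worth checking that the shared folder is actually mounted and the file is executable on every node, not just the head node. A quick sketch (assuming passwordless SSH to the node names in your hostfile):

$ ssh node-02 ls -l /mnt/sharedfolder/hello_world.out   # is the file visible on the remote node?
$ ssh node-02 mount | grep sharedfolder                 # is the share mounted there at all?
$ chmod +x /mnt/sharedfolder/hello_world.out            # make sure the execute bit is set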

Open MPI 1.10 not seeing processes on Linux Mint 17

I recently changed from Ubuntu 14 to Linux Mint 17. After re-installing Open MPI 1.10 (which went fine) I ran into a very strange problem:
I compiled my standard mpi_hello_world.c with mpicc and tried to run it (mpiexec -n 4 mpi_hello), but got the following output:
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
Hello world from processor DarkHeresy, rank 0 out of 1 processors
So mpiexec is running every process as rank 0!
I also tried to run a more complicated program, but it resulted in the following error:
Quitting. Number of MPI tasks (1) must be divisible by 4.
Earlier on Ubuntu, everything was running smoothly using the same parameters.
What could be wrong / how to solve this?
The hello world program was a standard one taken from www.mpitutorial.com:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char** argv) {
    MPI_Init(NULL, NULL);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    int name_len;
    MPI_Get_processor_name(processor_name, &name_len);
    printf("Hello world from processor %s, rank %d out of %d processors\n",
           processor_name, world_rank, world_size);
    MPI_Finalize();
}
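When this symptom appears right after a reinstall, the mpiexec on the PATH often belongs to a different MPI installation than the mpicc that built the program (for example, a distribution MPICH picked up alongside the hand-built Open MPI 1.10). A sketch for verifying that both tools resolve to the same installation (paths are illustrative):

$ which mpicc mpiexec              # both should live under the same prefix
$ mpiexec --version                # which implementation is doing the launching?
$ ldd ./mpi_hello | grep libmpi    # which libmpi the binary actually links

Recompiling with the matching mpicc and launching with the mpiexec from the same prefix should restore the expected 4 ranks.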

How to make/configure an MPI project to run on multiple processes?

I have a small piece of code that should run on multiple processes:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int ierr, num_procs, my_id;
    ierr = MPI_Init(&argc, &argv);

    /* find out MY process ID, and how many processes were started */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    printf("Hello world! I'm process %i out of %i processes\n",
           my_id, num_procs);

    ierr = MPI_Finalize();
    return 0;
}
The output is:
Hello world! I'm process 0 out of 1 processes
although it should run on more than one process. We edited the run configuration's arguments to "-np 2" so it would run on 2 processes, but it always reports 1 process no matter what the value is.
The environment used is:
Eclipse Juno on Ubuntu 12.04
Source of code: http://condor.cc.ku.edu/~grobe/docs/intro-MPI-C.shtml
It seems that you are trying to launch your MPI application directly, i.e. starting the compiled executable with -np 2, like this:
$ ./my_app -np 2
That's not the right way of launching MPI programs. Instead, you should call your MPI implementation's launcher (usually named mpirun or mpiexec), and pass the name of your executable and -np 2 to that. So if your executable is called my_app, then instead of the above command, you should run:
$ mpirun -np 2 ./my_app
Consult your MPI implementation's documentation for the specifics.
Some small points about the mpirun and mpiexec commands:
If you are trying to run your app on multiple nodes, make sure the executable is copied to all the nodes. In addition, make sure that all needed binary files (the executable and its libraries) can be found from the command line. I personally prefer to run my MPI programs from a shell script like this:
#!/bin/sh
PATH=/path/to/all/executable/files \
LD_LIBRARY_PATH=/path/to/all/libraries \
mpirun -np 4 ./my_app arg1 arg2 arg3
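For multi-node runs you will usually also pass a hostfile listing the machines; a minimal sketch (the hostnames and slot counts are placeholders, and the slots= syntax shown is Open MPI's; MPICH's Hydra uses host:count instead):

# hostfile
node01 slots=4
node02 slots=4

$ mpirun -np 8 --hostfile hostfile ./my_app arg1 arg2 arg3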

MPI Unexpected output

I was reading and practicing MPI programs from a tutorial. There I saw an example of finding the rank of a process, but the same example gives different output on my machine (Ubuntu 10.04).
Here is the program
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int ierr, num_procs, my_id;
    ierr = MPI_Init(&argc, &argv);

    /* find out MY process ID, and how many processes were started */
    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
    ierr = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);

    printf("Hello world! I'm process %i out of %i processes\n",
           my_id, num_procs);

    ierr = MPI_Finalize();
    return 0;
}
The expected output according to the tutorial is:
Hello world! I'm process 0 out of 4 processes.
Hello world! I'm process 2 out of 4 processes.
Hello world! I'm process 1 out of 4 processes.
Hello world! I'm process 3 out of 4 processes.
The output I am getting:
Hello world! I'm process 0 out of 1 processes
Hello world! I'm process 0 out of 1 processes
Hello world! I'm process 0 out of 1 processes
Hello world! I'm process 0 out of 1 processes
My machine is a Dell Inspiron with an Intel i3, running Ubuntu 10.04. Help me resolve the problem.
I have just compiled and run your program on my Ubuntu:
tom@tom-ThinkPad-T500:~/MPI_projects/Start/net2/net2/bin/Debug$ mpirun -n 6 ./output
Hello world! I'm process 3 out of 6 processes
Hello world! I'm process 4 out of 6 processes
Hello world! I'm process 0 out of 6 processes
Hello world! I'm process 2 out of 6 processes
Hello world! I'm process 1 out of 6 processes
Hello world! I'm process 5 out of 6 processes
Enter the folder with your executable file and run:
mpirun -np 2 ./output
or
mpirun -np 6 ./output
the -np flag sets the number of processes to launch (http://linux.die.net/man/1/mpirun).
You can also run mpirun without any flags to display lots of useful info.
Another interesting command is mpirun -info, which prints MPI build information.
This is the first part of my output:
tom@tom-ThinkPad-T500:~/MPI_projects/Start/net2/net2/bin/Debug$ mpirun -info
HYDRA build details:
Version: 1.4.1
Release Date: Wed Aug 24 14:40:04 CDT 2011
The last resort is to re-install or update your MPI installation using, for example, the following command:

sudo apt-get install libcr-dev mpich2 mpich2-doc
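If you do reinstall, remember to recompile the program with the wrapper from the new installation before launching it again, since a binary built against the old library can keep misbehaving. A sketch (the source file name is not shown in the question, so treat it as hypothetical):

$ mpicc hello.c -o output     # rebuild with the freshly installed mpicc
$ mpirun -np 4 ./output       # should now report 4 distinct ranks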

MPI on PBS cluster Hello World

I am using mpiexec to run a couple of hello world executables. They each run, but the reported number of processes is always 1 when it looks like there should be 4. Does someone understand why? Also, I'm not sure why stty is giving me an invalid argument. Thanks!
Here is the output:
/bin/stty: standard input: invalid argument
Hello world from process 0 of 1
Hello world from process 0 of 1
Hello world from process 0 of 1
Hello world from process 0 of 1
Here is the C file:
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello world from process %d of %d\n", rank, size);
    fflush(stdout);
    MPI_Finalize();
    return 0;
}
Here is the submission script:
#!/bin/bash
#PBS -N helloWorld
#PBS -l select=4:ncpus=2
#PBS -j oe
#PBS -o output
#PBS -l walltime=3:00
cd $PBS_O_WORKDIR
mpiexec ./helloWorld
Steven:
The above should work; it looks like something along the chain (PBS <-> MPI library <-> mpiexec) is misconfigured.
The first, most obvious guess: is the mpiexec the same MPI launcher that corresponds to the library you compiled with? If you do a which mpiexec in your script, do you get something that corresponds to the which mpicc when you compile the program? Do you have to do anything like a "module load [mpi package]" before you compile?
Similarly, is your mpiexec PBS-aware? If not, you might have to specify a hostfile (${PBS_NODEFILE}) somehow, and the number of processors.
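For a launcher that is not PBS-aware, that usually means passing the node list and process count explicitly on the mpiexec line; a sketch of the submission script's last line (flag names vary: Open MPI accepts --hostfile, Hydra accepts -f or -machinefile, so check your implementation's man page):

mpiexec -machinefile ${PBS_NODEFILE} -n 8 ./helloWorld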
What MPI are you using, and what system are you running on? Is it a publicly available system with documentation we can look at?