I'm trying to write an elaborate hello world program using MPI. The idea is that the root process sends a message to the other processes, which in turn each send a message back to the root containing their process id. So I want to send several character arrays to the root process and eventually print them.
The problem is that I can't manage to collect all the messages into a variable in the root process. I only get compile errors or unwanted output. I feel like I'm making some basic error in manipulating the char arrays... I'd be glad if you could help me out!
The code so far:
// Hello world using MPI
#include <stdio.h>
#include <math.h>
#include <mpi.h>
#include <string.h>
int main(int argc, char **argv){
// Define useful global constants
int my_id, root_process, num_procs, rc;
MPI_Status status;
char[] final_message; // <---- How should I define final message? Can I do this in another way?
// Let master process be process 0
root_process = 0;
// Now, replicate this process to create parallel processes:
rc = MPI_Init(&argc, &argv);
// Find out MY process ID and how many processes were started:
rc = MPI_Comm_rank(MPI_COMM_WORLD, &my_id);
rc = MPI_Comm_size(MPI_COMM_WORLD, &num_procs);
// Broadcast master's message to slaves
char message1[20];
strcpy(message1, "Hello, World!");
rc = MPI_Bcast(&message1, 14, MPI_CHAR, root_process, MPI_COMM_WORLD);
// TEST: Each slave now displays message1 along with which process it is.
//printf("I'm process %i, the message is: %.13s \n",my_id,message1);
// Now the master collects a return message from each slave, including the process ID.
char message2[40];
char message22[8]; // room for the rank digits plus '\n' and '\0'
strcpy(message2, "Master, I'm process number ");
sprintf(message22,"%d\n",my_id);
strcat(message2,message22);
rc = MPI_Gather(message2, 1, MPI_CHAR, final_message, 1, MPI_CHAR, root_process, MPI_COMM_WORLD);
// Display the message collected by the master.
printf(final_message);
// Close down this process
rc = MPI_Finalize();
}
Below is a very basic MPI program
#include <mpi.h>
#include <stdio.h>
int main(int argc, char * argv[]) {
int rank;
int size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Barrier(MPI_COMM_WORLD);
printf("Hello from %d\n", rank);
MPI_Barrier(MPI_COMM_WORLD);
printf("Goodbye from %d\n", rank);
MPI_Barrier(MPI_COMM_WORLD);
printf("Hello 2 from %d\n", rank);
MPI_Barrier(MPI_COMM_WORLD);
printf("Goodbye 2 from %d\n", rank);
MPI_Finalize();
return 0;
}
You would expect the output to be (for 2 processes)
Hello from 0
Hello from 1
Goodbye from 1
Goodbye from 0
Hello 2 from 1
Hello 2 from 0
Goodbye 2 from 0
Goodbye 2 from 1
Or something similar (the hellos and goodbyes should be grouped, but process order is not guaranteed).
Here is my actual output:
Hello from 0
Goodbye from 0
Hello 2 from 0
Goodbye 2 from 0
Hello from 1
Goodbye from 1
Hello 2 from 1
Goodbye 2 from 1
Am I fundamentally misunderstanding what MPI_Barrier is supposed to do? From what I can tell, if I use it only once it gives me the expected results, but any more than that and it seems to do absolutely nothing.
I realize many similar questions have been asked before, but the askers in the questions I viewed misunderstood the function of MPI_Barrier.
the hellos and goodbyes should be grouped
They need not be: there is additional (MPI-asynchronous) buffering inside the printf function, and another layer of buffering in the gathering of stdout from multiple MPI processes to a single user terminal.
printf just writes into an in-memory buffer of libc (glibc), which is only sometimes flushed to the real file descriptor (stdout; use fflush to flush the buffer explicitly), and fprintf(stderr, ...) usually has less buffering than stdout.
Remote tasks are started by mpirun/mpiexec, usually over an ssh remote shell, which forwards stdout/stderr. ssh (and TCP too) will buffer data, and reordering can occur when the data from ssh is displayed at your terminal by mpirun/mpiexec or some other entity (several data streams are multiplexed into one).
What you likely get is this: the 4 strings from the first process are buffered and only flushed at its exit (everything was printed to stdout, which usually has a buffer of several kilobytes), and the 4 strings from the second process are likewise buffered until exit. Each group of 4 strings is sent by ssh (or whatever launch method) to your console as a single "packet", and your console just shows the two 4-line packets in some order, either as "4_lines_packet_from_id_0; 4_lines_packet_from_id_1" or as "4_lines_packet_from_id_1; 4_lines_packet_from_id_0".
MPI_Barrier is supposed to do?
MPI_Barrier synchronizes parts of the code, but it cannot disable any buffering in the libc/glibc printing and file I/O functions, nor in ssh or another remote shell.
If all of your processes run on machines with synchronized system clocks (they will be when you have a single machine, and they should be when ntpd runs on your cluster), you can add a timestamp field to every printf to check that the real order respects your barriers (gettimeofday reads the current time and adds no extra buffering). You can then sort the timestamped output even if printf and ssh reorder the messages.
#include <mpi.h>
#include <sys/time.h>
#include <stdio.h>
void print_timestamped_message(int mpi_rank, char *s)
{
struct timeval now;
gettimeofday(&now, NULL);
printf("[%ld.%06ld](%d): %s\n", (long)now.tv_sec, (long)now.tv_usec, mpi_rank, s);
}
int main(int argc, char * argv[]) {
int rank;
int size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Barrier(MPI_COMM_WORLD);
print_timestamped_message(rank, "First Hello");
MPI_Barrier(MPI_COMM_WORLD);
print_timestamped_message(rank, "First Goodbye");
MPI_Barrier(MPI_COMM_WORLD);
print_timestamped_message(rank, "Second Hello");
MPI_Barrier(MPI_COMM_WORLD);
print_timestamped_message(rank, "Second Goodbye");
MPI_Finalize();
return 0;
}
I am trying to use GDB to see what's going on in low-level Open MPI calls. However, I couldn't get GDB to step into the low-level library functions in Open MPI.
I first configured Open MPI with the flags "--enable-debug --enable-debug-symbols", so the library should be compiled with "-g". Then I compile my test code using "mpicc /path/to/my/code -g -o /dest/code". The code is shown below:
#include "mpi.h"
#include <stdio.h>
#include <unistd.h> // for gethostname, getpid, sleep
int main(int argc, char *argv[]) {
int numtasks, rank, dest, source, rc, count, tag=1;
int buf;
const int root=0;
MPI_Status Stat;
// sleep until GDB attaches
int i = 0;
char hostname[256];
gethostname(hostname, sizeof(hostname));
printf("PID %d on %s ready for attach\n", getpid(), hostname);
fflush(stdout);
while (0 == i)
sleep(5);
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if(rank == root) {
buf = 777;
}
printf("[%d]: Before Bcast, buf is %d\n", rank, buf);
MPI_Bcast(&buf, 1, MPI_INT, root, MPI_COMM_WORLD);
printf("[%d]: After Bcast, buf is %d\n", rank, buf);
MPI_Finalize();
}
This code first prints the PID of every process, then sleeps, waiting for attach. I then attach GDB to one of the processes. Setting a breakpoint and resetting the value of i gets me out of the waiting loop.
Then I use "s" to step through my code. It is supposed to step into low-level library functions like MPI_Init() and MPI_Bcast(). However, GDB just goes to the next line and never steps into any function. "bt" cannot print backtrace information either. Still, the whole code runs without error.
Can anyone spot a mistake I made somewhere in this process? Thanks in advance.
I have a problem with the following code:
Master:
#include <iostream>
using namespace std;
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define PB1 1
#define PB2 1
int main (int argc, char *argv[])
{
int np[2] = { 2, 1 }, errcodes[2];
MPI_Comm parentcomm, intercomm;
char *cmds[2] = { "./slave", "./slave" };
MPI_Info infos[2] = { MPI_INFO_NULL, MPI_INFO_NULL };
MPI_Init(NULL, NULL);
#if PB1
for(int i = 0 ; i<2 ; i++)
{
MPI_Info_create(&infos[i]);
char hostname[] = "localhost";
MPI_Info_set(infos[i], "host", hostname);
}
#endif
MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, np, infos, 0, MPI_COMM_WORLD, &intercomm, errcodes);
printf("c Creation of the workers finished\n");
#if PB2
sleep(1);
#endif
MPI_Comm_spawn_multiple(2, cmds, MPI_ARGVS_NULL, np, infos, 0, MPI_COMM_WORLD, &intercomm, errcodes);
printf("c Creation of the workers finished\n");
MPI_Finalize();
return 0;
}
Slave:
#include "mpi.h"
#include <stdio.h>
using namespace std;
int main( int argc, char *argv[])
{
int rank;
MPI_Init(0, NULL);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("rank = %d\n", rank);
MPI_Finalize();
return 0;
}
I do not know why, when I run mpirun -np 1 ./master, my program stops with the following message when I set PB1 and PB2 to 1 (it works well when I set one of them to 0):
There are not enough slots available in the system to satisfy the 2
slots that were requested by the application: ./slave Either
request fewer slots for your application, or make more slots available
for use.
For instance, when I set PB2 to 0, the program works well. Thus, I suppose it is because MPI_Finalize does not finish its job...
I googled, but did not find any answer to my problem. I tried various things, such as calling MPI_Comm_disconnect and adding a barrier, but nothing worked.
I work on Ubuntu (15.10) and use the OpenMPI version 1.10.2.
The MPI_Finalize on the first set of slaves will not finish until MPI_Finalize is called on the master: MPI_Finalize is collective over all connected processes. You can work around that by manually disconnecting the first batch of slaves from the intercommunicator before calling MPI_Finalize. This way, the slaves will actually finish completely and exit, freeing the "slots" for the new batch of slaves. Unfortunately, I don't see a standardized way to really ensure the slaves are finished in the sense that their slots are freed, because that would be implementation defined. The fact that Open MPI freezes in MPI_Comm_spawn_multiple instead of returning an error is unfortunate, and one might consider it a bug. Anyway, here is a draft of what you could do:
In the master, each time it is done with a batch of slaves:
MPI_Barrier(intercomm); // Make sure master and slaves are somewhat synchronized
MPI_Comm_disconnect(&intercomm);
sleep(1); // This is the ugly unreliable way to give the slaves some time to shut down
The slave:
MPI_Comm parent;
MPI_Comm_get_parent(&parent); // you should have that already
MPI_Comm_disconnect(&parent);
MPI_Finalize();
However, you still need to make sure OpenMPI knows how many slots should be reserved for the whole application (universe_size). You can do that for example with a hostfile:
localhost slots=4
And then mpirun -np 1 ./master.
Now, this is not pretty, and I would argue that your approach of dynamically spawning MPI workers isn't really what MPI is meant for. It may be supported by the standard, but that doesn't help you if implementations struggle with it. However, there is not enough information on how you intend to communicate with the external processes to suggest a cleaner, more idiomatic solution.
One last remark: Do check the return codes of MPI functions. Especially MPI_Comm_spawn_multiple.
I'm writing a simple program in C with the MPI library.
The intent of this program is the following:
I have a group of processes that perform an iterative loop; at the end of this loop, all processes in the communicator must call two collective functions (MPI_Allreduce and MPI_Bcast). The first one determines the id of the process that generated the minimum value of the num.val variable, and the second one broadcasts from the source num_min.idx_v to all processes in the communicator MPI_COMM_WORLD.
The problem is that I don't know whether the i-th process will have been finalized before calling the collective functions. Each process has a probability of 1/10 of terminating at each iteration. This simulates the behaviour of the real program that I'm implementing. And when the first process terminates, the others deadlock.
This is the code:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
typedef struct double_int{
double val;
int idx_v;
}double_int;
int main(int argc, char **argv)
{
int n = 10;
int max_it = 4000;
int proc_id, n_proc;
double *x = (double *)malloc(n*sizeof(double));
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &n_proc);
MPI_Comm_rank(MPI_COMM_WORLD, &proc_id);
srand(proc_id);
double_int num_min;
double_int num;
int k;
for(k = 0; k < max_it; k++){
num.idx_v = proc_id;
num.val = rand()/(double)RAND_MAX;
if((rand() % 10) == 0){
printf("iter %d: proc %d terminated\n", k, proc_id);
MPI_Finalize();
exit(EXIT_SUCCESS);
}
MPI_Allreduce(&num, &num_min, 1, MPI_DOUBLE_INT, MPI_MINLOC, MPI_COMM_WORLD);
MPI_Bcast(x, n, MPI_DOUBLE, num_min.idx_v, MPI_COMM_WORLD);
}
MPI_Finalize();
exit(EXIT_SUCCESS);
}
Perhaps I should create a new group and a new communicator before calling MPI_Finalize in the if statement? How should I solve this?
If you have control over a process before it terminates, you should send a non-blocking flag to a rank that cannot terminate early (let's call it the root rank). Then, instead of a blocking all-reduce, you could have sends from all ranks to the root rank with their value.
The root rank could post non-blocking receives for both the possible flag and the value; every rank will have sent one or the other. Once all ranks are accounted for, you can do the reduction on the root rank, remove the exited ranks from communication, and broadcast the result.
If your ranks exit without notice, I am not sure what options you have.
I am trying to run the example code at the following URL. I compiled the program with "mpicc twoGroups.c" and tried to run it as "./a.out", but got the following message: "Must specify MP_PROCS= 8. Terminating."
My question is: how do you set MP_PROCS=8?
The Group and Communication Routine examples are here: https://computing.llnl.gov/tutorials/mpi/
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h> // for exit
#define NPROCS 8
int main(int argc, char *argv[]) {
int rank, new_rank, sendbuf, recvbuf, numtasks,
ranks1[4]={0,1,2,3}, ranks2[4]={4,5,6,7};
MPI_Group orig_group, new_group;
MPI_Comm new_comm;
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
if (numtasks != NPROCS) {
printf("Must specify MP_PROCS= %d. Terminating.\n",NPROCS);
MPI_Finalize();
exit(0);
}
sendbuf = rank;
/* Extract the original group handle */
MPI_Comm_group(MPI_COMM_WORLD, &orig_group);
/* Divide tasks into two distinct groups based upon rank */
if (rank < NPROCS/2) {
MPI_Group_incl(orig_group, NPROCS/2, ranks1, &new_group);
}
else {
MPI_Group_incl(orig_group, NPROCS/2, ranks2, &new_group);
}
/* Create the new communicator and then perform collective communications */
MPI_Comm_create(MPI_COMM_WORLD, new_group, &new_comm);
MPI_Allreduce(&sendbuf, &recvbuf, 1, MPI_INT, MPI_SUM, new_comm);
MPI_Group_rank (new_group, &new_rank);
printf("rank= %d newrank= %d recvbuf= %d\n",rank,new_rank,recvbuf);
MPI_Finalize();
}
When you execute an MPI program, you need to use the appropriate wrappers. Most of the time it looks like this:
mpiexec -n <number_of_processes> <executable_name> <executable_args>
So for your simple example:
mpiexec -n 8 ./a.out
You will also see mpirun used instead of mpiexec or -np instead of -n. Both are fine most of the time.
If you're just starting out, it would also be a good idea to make sure you're using a recent version of MPI so you don't get old bugs or weird execution environments. MPICH and Open MPI are the two most popular implementations. MPICH just released version 3.1 available here while Open MPI has version 1.7.4 available here. You can also usually get either of them via your friendly neighborhood package manager.