Here's my problem:
Given an initial interval [a,b] to parallelize, and supposing that some processes are faster than others, I would like a process i that finishes its chunk of work to go "help" another process j that is still working. Helping here means splitting the remaining chunk of process j equally between j and process i. I have the idea of the algorithm, but I don't know which MPI communication functions (such as Broadcast, Allgather, Send, Recv) to use to implement it. I would like to use an array "arr", shared by all processes, whose size equals the number of processes. arr[i] = 1 means that process i has finished working; otherwise it is -1. I initialize all its elements to -1; when a process of rank "rank" finishes its work, it sets arr[rank] = 1 and waits for a working process to notice it and send it a new chunk. Here's pseudo-code of what I would like to achieve:
MPI_Init ( &argc, &argv );
MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
MPI_Comm_size ( MPI_COMM_WORLD, &nb_proc );
int i;
int a = 0, b = max; //initial interval [a,b] to parallelize
int j, arr[nb_proc];
for(j = 0; j < nb_proc; j++)
{
arr[j] = -1; //initially, all processes are working
}
do
{
i = a + rank;
do
{
if(there's a free process) // checking the array "arr" & finding at least one element that equals 1
{
//Let that process be process of rank "r":
arr[r] = -1;
int mid = (b+i)/2; //dividing the rest of the work
a = mid + rank - r;
MPI_Send(a to process r);
MPI_Send(b to process r);
b = a-1;
}
/*does i-th iteration*/
i = i + nb_proc; //jump by the number of processes (cyclic distribution)
}
while(i <= b);
arr[rank] = 1; //finished working and about to start waiting for new work
}
while(there's at least one process still working; if so, receive the new work bounds a and b from it);
MPI_Finalize ( );
return 0;
My main problem concerns the way to access the array: how to be sure it is updated for all processes and that all processes see the same array at any instant t. I would appreciate your help a lot. Thanks in advance.
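A note on the "shared array": MPI processes do not share memory, but one-sided communication (RMA) can approximate it by having every rank expose a one-element status flag in an MPI window. Below is a minimal sketch of just that part; the chunk-splitting logic and the Send/Recv of the new bounds are left out, and the polled rank r is only an example, not part of the original algorithm.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nb_proc;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nb_proc);

    /* each rank exposes one int: -1 = still working, 1 = finished */
    int *my_flag;
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &my_flag, &win);

    int val = -1;
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
    MPI_Put(&val, 1, MPI_INT, rank, 0, 1, MPI_INT, win);
    MPI_Win_unlock(rank, win);
    MPI_Barrier(MPI_COMM_WORLD);   /* everyone is initialized before anyone polls */

    /* ... work on the local chunk; when it is exhausted, announce it: */
    val = 1;
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
    MPI_Put(&val, 1, MPI_INT, rank, 0, 1, MPI_INT, win);
    MPI_Win_unlock(rank, win);

    /* a still-working rank can poll any rank r without r's participation */
    int r = (rank + 1) % nb_proc;  /* arbitrary example target */
    int status_r;
    MPI_Win_lock(MPI_LOCK_SHARED, r, 0, win);
    MPI_Get(&status_r, 1, MPI_INT, r, 0, 1, MPI_INT, win);
    MPI_Win_unlock(r, win);
    if (status_r == 1) {
        /* split the remaining [i,b] in two and MPI_Send the new bounds to rank r */
    }
    printf("rank %d sees rank %d with status %d\n", rank, r, status_r);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
Note that actually "claiming" an idle rank so that two workers do not grab the same helper would additionally need an atomic operation such as MPI_Compare_and_swap rather than a plain MPI_Get.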
I wrote a simple C program to make every thread multiply its index by 1000000 and add it to sum. I created 5 threads, so the expected answer would be (0+1+2+3+4)*1000000, which is 10000000, but it prints 14000000 instead. Could anyone help me understand this?
#include<pthread.h>
#include<stdio.h>
typedef struct argument {
int index;
int sum;
} arg;
void *fonction(void *arg0) {
((arg *) arg0) -> sum += ((arg *) arg0) -> index * 1000000;
return NULL;
}
int main() {
pthread_t thread[5];
int order[5];
arg a;
for (int i = 0; i < 5; i++)
order[i] = i;
a.sum = 0;
for (int i = 0; i < 5; i++) {
a.index = order[i];
pthread_create(&thread[i], NULL, fonction, &a);
}
for (int i = 0; i < 5; i++)
pthread_join(thread[i], NULL);
printf("%d\n", a.sum);
return 0;
}
It is 14000000 (or some other value) because the behavior is undefined. The results will differ on different machines and under other environmental factors. The undefined behavior is caused by all threads accessing the same object (see the &a given to each thread), which is modified after the first thread is created.
When each thread runs, it accesses the same index (as part of accessing a member of the same object &a). Thus the assumption that the threads will see [0,1,2,3,4] is incorrect: multiple threads likely see the same value of index (e.g. [0,2,4,4,4]¹) when they run. This depends on how the threads are scheduled relative to the loop creating them, since that loop also modifies the shared object.
When each thread updates sum, it has to read and write the same shared memory. This is inherently prone to race conditions and unreliable results. For example, it could be a lack of memory visibility (thread X doesn't see the value updated by thread Y), or it could be a conflicting thread schedule between the read and the write (thread X reads, thread Y reads, thread X writes, thread Y writes), etc.
If a new arg object is created for each thread, both of these problems are avoided. While the sum issue could also be addressed with appropriate locking, the index issue can only be fixed by not sharing the object given as the thread input.
// create 5 arg objects, one for each thread
arg a[5];
for (..) {
a[i].index = i;
// give DIFFERENT object to each thread
pthread_create(.., &a[i]);
}
// after all threads complete
int sum = 0;
for (..) {
sum += a[i].sum;
}
¹ Even assuming that there is no race condition in this particular execution w.r.t. the usage of sum, a sequence in which the different threads see the index values [0,2,4,4,4] (which sum to 14, hence 14000000) might look as follows:
a.index <- 0 ; create thread A
thread A reads a.index (0)
a.index <- 1 ; create thread B
a.index <- 2 ; create thread C
thread B reads a.index (2)
a.index <- 3 ; create thread D
a.index <- 4 ; create thread E
thread D reads a.index (4)
thread C reads a.index (4)
thread E reads a.index (4)
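Putting both points together, a complete corrected version could look like the following sketch. It keeps the original struct and function names; each thread writes only to its own arg object, and main combines the per-thread sums only after joining.
#include <pthread.h>
#include <stdio.h>

typedef struct argument {
    int index;
    int sum;
} arg;

void *fonction(void *arg0) {
    arg *a = arg0;
    a->sum = a->index * 1000000;   /* each thread touches only its own object */
    return NULL;
}

int main() {
    pthread_t thread[5];
    arg a[5];
    for (int i = 0; i < 5; i++) {
        a[i].index = i;
        a[i].sum = 0;
        pthread_create(&thread[i], NULL, fonction, &a[i]); /* distinct object per thread */
    }
    int sum = 0;
    for (int i = 0; i < 5; i++) {
        pthread_join(thread[i], NULL);
        sum += a[i].sum;           /* combine only after the thread has finished */
    }
    printf("%d\n", sum);           /* prints 10000000 */
    return 0;
}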
I am new to MPI programming. I have an 8 by 10 array that I need to use to find the sum of each row in parallel. In rank 0 (process 0), it will generate the 8 by 10 matrix using a 2-dimensional array. Then I would use the tag number as the first index value (row number) of the array. This way, I can use a unique buffer to send through Isend. However, it looks like my method of tag number generation for Isend is not working. Can you please look into the following code and tell me if I am passing the 2D array and the tag number correctly? When I run this code, it stops just after executing rank 1 and waits. I use 3 processes for this example with the command mpirun -np 3 test. Please let me know how to tackle this problem, with an example if possible.
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
MPI_Init(&argc, &argv);
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int tag = 1;
int arr[8][10];
MPI_Request request;
MPI_Status status;
int source = 0;
int dest;
printf ("\n--Current Rank: %d\n", world_rank);
if (world_rank == 0)
{
int i = 0;
int a, b, x, y;
printf("* Rank 0 excecuting\n");
for(a=0; a<8/(world_size-1); a++)//if -np is 3, this will loop 4 times
{
for(b=0; b<(world_size-1); b++)//if -np is 3, this loops will loop 2 times
{//So, if -np is 3, due to both of these loops, Isend will be called 8 times
dest = b+1;
tag = a+b;//create a unique tag value each time, which can be used as the first index value of the array
//Error: This tag value passed to Isend doesn't seem to be working
MPI_Isend(&arr[tag][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &request);
}
}
for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
{
i++;
for ( y = 0; y < 10; y++ )
{
arr[x][y] = i;
}
}
}
else
{
int a, b;
for(b=1; b<=8/(world_size-1); b++)
{
int sum = 0;
int i;
MPI_Irecv(&arr[tag][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait (&request, &status);
//Error: not getting the correct tag value
for(i = 0; i<10; i++)
{
sum = arr[tag][i]+sum;
}
printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
}
}
MPI_Finalize();
}
The tag issue is because of how the tag is computed (or not) on different processes. You're initializing the tag values for all processes as
int tag = 1;
and later, for process rank 0 you set the tag to
tag = a+b;
which, for the first time this is set, will set tag to 0 because both a and b start out as zero. However, for processes with rank above 0, the tag is never changed. They will continue to have the tag set to 1.
The tag uniquely identifies the message being sent by MPI_Isend and MPI_Irecv, which means that a send and its corresponding receive must have the same tag for the data transfer to succeed. Because the tags are mismatched between processes for most of the receives, the transfers are mostly unsuccessful. This causes processes with rank higher than 0 to eventually block (wait) forever on the call to MPI_Wait.
In order to fix this, you have to make sure to change the tags for the processes with rank above zero. However, before we can do that, there are a few other issues worth touching on.
With the way you've set your tag for the rank 0 process right now, tag can only ever have values 0 to 4 (assuming 3 processes). This is because a is limited to the range 0 to 3, and b can only have values 0 or 1. The maximum possible sum of these values is 4. This means that when you access your array using arr[tag][0], you will miss out on a lot of the data, and you'll re-send the same rows several times. I recommend changing the way you approach sending each subarray (which you're currently accessing with tag) so that you have only one for loop to determine which subarray to send, rather than two nested loops. Then, you can calculate the process to send the array to as
dest = subarray_index%(world_size - 1) + 1;
This will alternate the destinations between the processes with rank greater than zero. You can keep the tag as just subarray_index. On the receiving side, you'll need to calculate the tag per process, per receive.
Finally, I saw that you were initializing your array after you sent the data. You want to do that beforehand.
Combining all these aspects, we get
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
MPI_Init(&argc, &argv);
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int tag = 1;
int arr[8][10];
MPI_Request request;
MPI_Status status;
int source = 0;
int dest;
printf ("\n--Current Rank: %d\n", world_rank);
if (world_rank == 0)
{
int i = 0;
int a, b, x, y;
printf("* Rank 0 excecuting\n");
//I've moved the array generation to before the sends.
for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
{
i++;
for ( y = 0; y < 10; y++ )
{
arr[x][y] = i;
}
}
//I added a subarray_index as mentioned above.
int subarray_index;
MPI_Request send_requests[8];
for(subarray_index=0; subarray_index < 8; subarray_index++)
{
dest = subarray_index%(world_size - 1) + 1;
tag = subarray_index;
MPI_Isend(&arr[subarray_index][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &send_requests[subarray_index]);
}
//Each non-blocking send gets its own request; complete them all before moving on.
MPI_Waitall(8, send_requests, MPI_STATUSES_IGNORE);
}
else
{
int a, b;
for(b=0; b<8/(world_size-1); b++)
{
int sum = 0;
int i;
//We have to do extra calculations here. These match tag, dest, and subarray.
int my_offset = world_rank-1;
tag = b*(world_size-1) + my_offset;
int subarray = b;
MPI_Irecv(&arr[subarray][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait (&request, &status);
for(i = 0; i<10; i++)
{
sum = arr[subarray][i]+sum;
}
printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
}
}
MPI_Finalize();
}
There's one thing that still seems a bit unfinished in this version for you to consider: what will happen if your number of processes changes? For example, if you have 4 processes instead of 3, you may run into trouble with the loop
for(b=0; b<8/(world_size-1); b++)
because each process will execute it the same number of times, but the amount of data sent doesn't split cleanly across the 3 workers (the non-rank-zero processes).
However, if that is not a concern to you, then you do not need to handle such cases.
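If it ever does become a concern, one way to handle it while keeping the same row-to-worker mapping as above is to let each worker compute how many rows it will actually receive. A sketch, reusing the variables from the code above:
/* derive the per-worker row count so 8 no longer has to divide evenly */
int workers   = world_size - 1;
int my_offset = world_rank - 1;                 /* 0-based worker index */
int my_rows   = 8 / workers + (my_offset < 8 % workers ? 1 : 0);
for (b = 0; b < my_rows; b++)
{
    tag = b * workers + my_offset;              /* same mapping the sender uses */
    /* ... MPI_Irecv / MPI_Wait / summation exactly as before ... */
}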
Aside from the obvious question: "why on earth would you want to do that?", there are so many problems here that I'm not sure I'll be able to list them all. I'll give it a try though:
Tag: it seems that the bulk of your method is to use the tag as an indicator of where the receiver should look for the data. But there are (at least) two major flaws here:
Since tag isn't known before reception, what is &arr[tag][0] supposed to be?
Tags in MPI are message "identifiers"... Under normal circumstances a given communication (send and matching receive) should have a matching tag. This can be alleviated by using the MPI_ANY_TAG special tag on the receiving side, and retrieving its actual value from the MPI_TAG field of the reception's status (see the short sketch after this list). But that's another story.
Bottom line here is that the method isn't such a good one.
Data initialisation: one of the major principles of non-blocking MPI communications is that you should never modify a buffer you used for a communication between the post of the communication (the MPI_Isend() here) and its finalisation (which is missing here). Therefore, your data generation must happen before the attempts to communicate the data.
Speaking of which, communication finalisation: you have to finalise your sending communications. This can be done using either a wait-type call (MPI_Wait() or MPI_Waitall()), or an "infinite" loop of test-type calls (MPI_Test() and such)...
The MPI_Irecv(): why are you using a non-blocking receive when the very next call is MPI_Wait()? If you want a blocking receive, just call MPI_Recv() directly.
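To make the MPI_ANY_TAG remark and the blocking-receive point concrete, a small illustrative helper (not part of the original code) could look like this; the sender side would correspondingly complete its MPI_Isend() calls with MPI_Waitall().
#include <mpi.h>

/* illustrative helper: receive one 10-int row from rank 0, accepting any tag,
   and report which row actually arrived */
static int recv_one_row(int row_buf[10])
{
    MPI_Status status;
    MPI_Recv(row_buf, 10, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
    return status.MPI_TAG;   /* the real tag is only known after reception */
}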
So fundamentally, what you try to do here doesn't look right. Therefore, I'm very reluctant to propose a corrected version, since I don't understand the actual problem you are trying to solve. Is this code a reduced version of a bigger real one (or an initial version of something supposed to grow), or just a toy example meant for you to learn how MPI send/receive works? Is there any fundamental reason why you're not using a collective communication such as MPI_Scatter()?
Depending on your answer on these questions, I can try to produce a valid version.
I am performing code parallelization using MPI to evaluate the cost function.
I am dividing a population of 50,000 points among 8 processors.
I am trying to parallelize the following code but struggling with it:
//mpiWorldSize is number of processors
//=====================================
for (int k=1; k< mpiWorldSize; k++)
{
MPI_Send(params[1][mpiWorldRank*over+k],7,MPI_INT, 0,0,MPI_COMM_WORLD);
}
// evaluate all the new costs
//=========================
for (int j=1; j<mpiWorldSize;j++)
{
MPI_Recv( params[1][mpiWorldRank*over+k],7,MPI_INT,j,0,MPI_COMM_WORLD,MPI_STATUS_IGNORE);
}
// memory allocation
//=========================
SecNewCostValues = (float*) malloc(noOfDataPerProcessor/bufferLength);
//loop through the number of data points per processor
for ( i = 0; i < over; i++ )
{
if(mpiWorldRank != 0)
{
SecNewCostValues[i] = cost( params[1][mpiWorldRank*noOfDataPerPreocessor+i] );
newCostValues[over] = cost( params[1][i] ); //change the i part to rank*nodpp+i
printf("hello from rank %d: %s\n", mpiWorldRank ,procName );
}
}
I can't send and receive the data from any processor except 0.
I will appreciate any help.
thanks
MPI uses the Single Program Multiple Data (SPMD) message-passing programming model: all MPI processes execute the same program, and you need to use conditionals to decide which process will execute which part of the code. The overall structure of your code could be as follows (assuming the master with rank 0 distributes work and the workers receive it).
if (myrank == 0) { // master
for (int k = 1; k < mpiWorldSize; k++) { // send a chunk to each worker
MPI_Send(...);
}
}
else { // worker
MPI_Recv(...); // receive work
}
Analogously, the master would collect the results. Check out the documentation on the MPI_Scatter() and MPI_Gather() collective communication functions, which seem relevant here.
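A minimal sketch of that collective approach, assuming 50,000 points of 7 ints each, a stand-in cost() function, and a point count that divides evenly among the ranks (all of these are illustrative assumptions, not taken from the original code):
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* stand-in for the real cost function: takes one point (7 ints) */
static float cost(const int *p) {
    float c = 0.0f;
    for (int k = 0; k < 7; k++) c += (float)p[k];
    return c;
}

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int npoints = 50000;              /* assumed divisible by size */
    int per_rank = npoints / size;

    int *all_points = NULL;
    float *all_costs = NULL;
    if (rank == 0) {
        all_points = malloc((size_t)npoints * 7 * sizeof(int));
        all_costs  = malloc((size_t)npoints * sizeof(float));
        for (int i = 0; i < npoints * 7; i++) all_points[i] = i % 100;
    }

    int *my_points = malloc((size_t)per_rank * 7 * sizeof(int));
    float *my_costs = malloc((size_t)per_rank * sizeof(float));

    /* every rank (including 0) gets an equal chunk of the flattened points */
    MPI_Scatter(all_points, per_rank * 7, MPI_INT,
                my_points, per_rank * 7, MPI_INT, 0, MPI_COMM_WORLD);

    for (int i = 0; i < per_rank; i++)
        my_costs[i] = cost(&my_points[i * 7]);

    /* rank 0 collects all evaluated costs back in point order */
    MPI_Gather(my_costs, per_rank, MPI_FLOAT,
               all_costs, per_rank, MPI_FLOAT, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("first cost: %f\n", all_costs[0]);

    free(my_points); free(my_costs);
    if (rank == 0) { free(all_points); free(all_costs); }
    MPI_Finalize();
    return 0;
}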
How do I access a global vector from an individual thread in MPI?
I'm using a library - specifically, an ODE solver library - called CVODE (part of SUNDIALS). The library works with MPI, so that multiple threads are running in parallel. They are all running the same code. Each thread sends the thread "next to" it a piece of data. But I want one of the threads (rank=0) to print out the state of the data at some points.
The library includes functions so that each thread can access their own data (the local vector). But there is no method to access the global vector.
I need to output the values of all of the equations at specific times. To do so, I would need access to the global vector. Does anyone know how to get at all of the data in an MPI vector (using CVODE, if possible)?
For example, here is my code that each thread runs
for (iout=1, tout=T1; iout <= NOUT; iout++, tout += DTOUT) {
flag = CVode(cvode_mem, tout, u, &t, CV_NORMAL);
if(check_flag(&flag, "CVode", 1, my_pe)) break;
if (my_pe == 0) PrintData(t, u);
}
...
static void PrintData(realtype t, N_Vector u) {
/* I want to print data from all threads in here */
}
In function f (the function I'm solving), I pass data back and forth using MPI_Send and MPI_Recv. But I can't really do that in PrintData because the other processes have run ahead. Also, I don't want to add messaging overhead. I want to access the global vector in PrintData, and then just print out what's needed. Is it possible?
Edit: While waiting for a better answer, I programmed each thread to pass its data back to the 0th thread. I don't think that adds too much messaging overhead, but I'd still like to hear from you experts whether there's a better method (I'm sure there aren't any worse ones! :D ).
Edit 2: Although angainor's solution is surely superior, I stuck with the one I had created. For future reference of anyone who has the same question, here are the basics of how I did it:
/* Is called by all threads */
static void PrintData(realtype t, N_Vector u, UserData data) {
... declarations and such ...
for (n=1; n<=my_length; n++) {
mass_num = my_base + n;
z[mass_num - 1] = udata[n-1];
z[mass_num - 1 + N] = udata[n - 1 + my_length];
}
if (my_pe != 0) {
MPI_Send(&z, 2*N, PVEC_REAL_MPI_TYPE, 0, my_pe, comm);
} else {
for (i=1; i<npes; i++) {
MPI_Recv(&z1, 2*N, PVEC_REAL_MPI_TYPE, i, i, comm, &status);
for (n=0; n<2*N; n++)
z[n] = z[n] + z1[n];
}
... now I can print it out however I like...
return;
}
}
When using MPI, the individual threads do not have access to a 'global' vector. They are not threads; they are processes that can run on different physical computers and therefore cannot have direct access to global data.
To do what you want, you can either send the vector to one of the MPI processes (you did that) and print it there, or print the local worker parts in sequence. Use a function like this:
void MPI_write_ivector(int thrid, int nthr, int vec_dim, int *v)
{
int i, j;
int curthr = 0;
MPI_Barrier(MPI_COMM_WORLD);
while(curthr!=nthr){
if(curthr==thrid){
printf("thread %i writing\n", thrid);
for(i=0; i<vec_dim; i++) printf("%d\n", v[i]);
fflush(stdout);
curthr++;
MPI_Bcast(&curthr, 1, MPI_INT, thrid, MPI_COMM_WORLD);
} else {
MPI_Bcast(&curthr, 1, MPI_INT, curthr, MPI_COMM_WORLD);
}
}
}
All MPI processes should call it at the same time since there is a barrier and a broadcast inside. Essentially, the procedure makes sure that all the MPI processes print their vector part in order, starting from rank 0. The data is not messed up since only one process writes at any given time.
In the example above, the broadcast is used since it gives more flexibility in the order in which the threads print their results - the thread that is currently writing can decide who comes next. You could also skip the broadcast and use only a barrier:
void MPI_write_ivector(int thrid, int nthr, int vec_dim, int *v)
{
int i, j;
int curthr = 0;
while(curthr!=nthr){
if(curthr==thrid){
printf("thread %i writing\n", thrid);
for(i=0; i<vec_dim; i++) printf("%d\n", v[i]);
fflush(stdout);
}
MPI_Barrier(MPI_COMM_WORLD);
curthr++;
}
}
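For the "collect everything on rank 0" variant from the question's edit, a single collective could also do the job. A sketch, where local_n and local_data are illustrative names for each rank's contiguous local piece (the other names come from the question's code):
int *counts = NULL, *displs = NULL;
realtype *global = NULL;
if (my_pe == 0) {
    counts = malloc(npes * sizeof(int));
    displs = malloc(npes * sizeof(int));
}
/* rank 0 first learns how much each rank will contribute */
MPI_Gather(&local_n, 1, MPI_INT, counts, 1, MPI_INT, 0, comm);
if (my_pe == 0) {
    int total = 0;
    for (int p = 0; p < npes; p++) { displs[p] = total; total += counts[p]; }
    global = malloc(total * sizeof(realtype));
}
/* the pieces land in rank order; rank 0 can then print 'global' in PrintData */
MPI_Gatherv(local_data, local_n, PVEC_REAL_MPI_TYPE,
            global, counts, displs, PVEC_REAL_MPI_TYPE, 0, comm);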
I want to distribute a vector with overlapping elements. For example, if I had [1,2,3], I'd want [1,2] to be sent to one node and [2,3] to be sent to another. I want to do this with Open MPI. Please help.
It doesn't matter if it's for OpenMPI or not; OpenMPI is just one implementation of the standard, as is MPICH2. MPI, luckily, is MPI.
So distributing a vector of data is done with the MPI_Scatter call, which sends equal-sized chunks of the vector of data to each process in the communicator. If each task may need different numbers of elements, one uses MPI_Scatterv, where you explicitly set how many elements each process gets, and where it starts in the array.
But once you're using MPI_Scatterv and specifying counts and displacements, you can use those counts and displacements to specify overlapping pieces of data. The counts would sum up to the number of elements in the array plus the overlapping bits; the displacements would point to the first, overlapping, part of the array the process sees. So for instance this distributes overlapping segments of an integer array:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char **argv) {
const int NELEM=15;
int globvec[NELEM];
int *locvec;
int *counts, *disps;
int size, rank, ierr;
int start, end;
ierr = MPI_Init(&argc, &argv);
ierr |= MPI_Comm_size(MPI_COMM_WORLD, &size);
ierr |= MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank==0)
for (int i=0;i<NELEM;i++) globvec[i] = i;
/* figure out the counts and displacements into the array.
* All the tasks from 1..size-1 get one extra element
* at the end overlapping with their neighbour; the tasks
* size-1 gets all remaining data.
*/
counts = (int *)malloc(size*sizeof(int));
disps = (int *)malloc(size*sizeof(int));
for (int i=0; i<size; i++) {
start = (NELEM/size)*i;
end = (start + (NELEM/size)-1)+1;
if (i == size-1) end = NELEM-1;
counts[i] = (end-start+1);
disps[i] = start;
}
locvec = (int *)malloc(counts[rank]*sizeof(int));
MPI_Scatterv (globvec, counts, disps, MPI_INT,
locvec, counts[rank], MPI_INT, 0, MPI_COMM_WORLD);
for (int i=0; i<counts[rank]; i++)
printf("%d: %d\n", rank, locvec[i]);
free(locvec);
free(counts);
free(disps);
MPI_Finalize();
return 0;
}
There are 15 elements, 0..14. So if you run it with three tasks and an overlap of 1, you'd expect the array to be broken up into [0,1,2,3,4,5], [5,6,7,8,9,10], [10,11,12,13,14], and that's what you get:
$ mpirun -np 3 ./vector1
0: 0
0: 1
0: 2
0: 3
0: 4
0: 5
1: 5
1: 6
1: 7
1: 8
1: 9
1: 10
2: 10
2: 11
2: 12
2: 13
2: 14
A good place to start is the MPI wiki page.
You should be able to modify the hello world example to do just what you want.
I am not really sure what your specific problem is. It would really help if you stated how much you have already done and what does not work for you.