I'm looking at someone else's MPI code, and a number of times variables are declared in main() and used in other functions (some MPI-specific). I am new to MPI, but in my programming experience that is normally discouraged. Basically, it is difficult for me to determine whether it is safe to do this (no errors are thrown).
The entire code is quite long so I will just give a simplified version below:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

void create_ring_topology(MPI_Comm* comm_new, int* local_rank, int* num_procs);

/* Declared here so that this simplified version compiles; in the real code
   they live elsewhere. */
char proc_name[MPI_MAX_PROCESSOR_NAME];
char s_local_coords[16];

int main(int argc, char** argv) {
    // ...unrelated code
    int num_procs, local_rank, name_len;
    MPI_Comm comm_new;

    MPI_Init(&argc, &argv);
    MPI_Get_processor_name(proc_name, &name_len);
    create_ring_topology(&comm_new, &local_rank, &num_procs);
    // ...unrelated code
    MPI_Comm_free(&comm_new);
    MPI_Finalize();
}

void create_ring_topology(MPI_Comm* comm_new, int* local_rank, int* num_procs) {
    MPI_Comm_size(MPI_COMM_WORLD, num_procs);

    int dims[1], periods[1];
    int dimension = 1;
    dims[0] = *num_procs;
    periods[0] = 1;

    int* local_coords = malloc(sizeof(int)*dimension);

    MPI_Cart_create(MPI_COMM_WORLD, dimension, dims, periods, 0, comm_new);
    MPI_Comm_rank(*comm_new, local_rank);
    MPI_Comm_size(*comm_new, num_procs);
    MPI_Cart_coords(*comm_new, *local_rank, dimension, local_coords);

    sprintf(s_local_coords, "[%d]", local_coords[0]);
    free(local_coords);
}
That's just regular pointer usage. Nothing wrong with that.
The variables are declared in main and remain in-scope until main returns, i.e. almost for the duration of the program.
Note that MPI does not actually add anything to C. It is just an extra library; it does not extend the language, so the usual C rules for scope and pointers apply.
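To illustrate with a minimal, hypothetical sketch (plain C, nothing MPI-specific): a function can write its results into variables that live in main() through pointers, which is exactly what create_ring_topology() does with comm_new, local_rank and num_procs above.

#include <stdio.h>

/* Fills variables owned by the caller through pointers. */
void fill_values(int *count, double *average) {
    *count = 3;
    *average = 2.5;
}

int main(void) {
    int count;        /* lives in main()'s stack frame       */
    double average;   /* stays in scope until main() returns */
    fill_values(&count, &average);
    printf("count=%d average=%f\n", count, average);
    return 0;
}

As long as the pointed-to variables are still in scope (here, for essentially the whole run of the program), passing their addresses to other functions is perfectly safe.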
I would like to have an example showing how to use MPI_Type_create_subarray to build 2D cyclic distribution for large matrix.
I know that MPI_Type_create_darray will give me 2D cyclic distribution, but it is not compatible with SCALAPACK process grid.
I would like to do a 2D block-cyclic distribution using MPI_Type_create_subarray and pass the matrices to SCALAPACK routines.
Could I have an example showing this?
There are at least two parts to your question. The following sections address each piece, but leave the integration of the two to you. The example code contained below in both sections, along with the explanations provided in the ScaLapack link below, should provide some guidance...
From DeinoMPI:
The following sample code illustrates MPI_Type_create_subarray.
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
int myrank;
MPI_Status status;
MPI_Datatype subarray;
int array[9] = { -1, 1, 2, 3, -2, -3, -4, -5, -6 };
int array_size[] = {9};
int array_subsize[] = {3};
int array_start[] = {1};
int i;
MPI_Init(&argc, &argv);
/* Create a subarray datatype */
MPI_Type_create_subarray(1, array_size, array_subsize, array_start, MPI_ORDER_C, MPI_INT, &subarray);
MPI_Type_commit(&subarray);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0)
{
MPI_Send(array, 1, subarray, 1, 123, MPI_COMM_WORLD);
}
else if (myrank == 1)
{
for (i=0; i<9; i++)
array[i] = 0;
MPI_Recv(array, 1, subarray, 0, 123, MPI_COMM_WORLD, &status);
for (i=0; i<9; i++)
printf("array[%d] = %d\n", i, array[i]);
fflush(stdout);
}
MPI_Finalize();
return 0;
}
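The DeinoMPI sample above is one-dimensional; for the 2D case, the size, subsize and start arguments simply become two-element arrays. A minimal sketch (the matrix and block dimensions here are illustrative assumptions, not part of the original sample):

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    /* Describe one 2x3 block of an 8x10 row-major matrix,
       starting at row 2, column 3. */
    int sizes[2]    = { 8, 10 };   /* full array extent      */
    int subsizes[2] = { 2, 3 };    /* extent of one block    */
    int starts[2]   = { 2, 3 };    /* block origin (0-based) */

    MPI_Datatype block2d;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, &block2d);
    MPI_Type_commit(&block2d);

    /* ...use block2d in sends/receives or MPI_Gatherv... */

    MPI_Type_free(&block2d);
    MPI_Finalize();
    return 0;
}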
And from ScaLapack in C essentials:
Unfortunately, there is no C interface for ScaLAPACK or PBLAS. All parameters should be passed into routines and functions by reference; you can also define constants (i_one for 1, i_negone for -1, d_two for 2.0E+0, etc.) to pass into routines. Matrices should be stored as a 1D array (A[i + lda*j], not A[i][j]).

To invoke ScaLAPACK routines in your program, you should first initialize the grid via BLACS routines (BLACS is enough). Second, you should distribute your matrix over the process grid (block-cyclic 2D distribution). You can do this by means of the pdgeadd_ PBLAS routine. This routine computes the sum of two matrices A, B: B := alpha*A + beta*B. The matrices can have different distributions; in particular, matrix A can be owned by only one process, so by setting alpha=1, beta=0 you can simply copy your non-distributed matrix A into the distributed matrix B.

Third, call pdgeqrf_ for matrix B. At the end of the ScaLAPACK part of the code, you can collect the results on one process (just copy the distributed matrix into a local one via pdgeadd_). Finally, close the grid via blacs_gridexit_ and blacs_exit_.

After all, a ScaLAPACK-using program should contain the following:
int main(void) {
    // Useful constants
    const int i_one = 1, i_negone = -1, i_zero = 0;
    const double zero = 0.0E+0, one = 1.0E+0;
    ... (See the rest of the code at the location linked above...)
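As a rough, hedged outline of the call sequence described above (this is my own sketch, not the code from the linked page; the underscored Fortran-style prototypes are declared by hand here, and the exact names and calling conventions depend on your BLACS/ScaLAPACK installation):

extern void blacs_pinfo_(int *mypnum, int *nprocs);
extern void blacs_get_(int *icontxt, int *what, int *val);
extern void blacs_gridinit_(int *icontxt, char *order, int *nprow, int *npcol);
extern void blacs_gridinfo_(int *icontxt, int *nprow, int *npcol, int *myrow, int *mycol);
extern void blacs_gridexit_(int *icontxt);
extern void blacs_exit_(int *cont);
extern int  numroc_(int *n, int *nb, int *iproc, int *isrcproc, int *nprocs);
extern void descinit_(int *desc, int *m, int *n, int *mb, int *nb,
                      int *irsrc, int *icsrc, int *ictxt, int *lld, int *info);

void scalapack_outline(void)
{
    int i_zero = 0, i_negone = -1, info;
    int nprow = 2, npcol = 2;              /* assumed 2x2 process grid */
    int myrank, nprocs, ictxt, myrow, mycol;

    /* 1. Initialise the BLACS process grid. */
    blacs_pinfo_(&myrank, &nprocs);
    blacs_get_(&i_negone, &i_zero, &ictxt);
    blacs_gridinit_(&ictxt, "Row", &nprow, &npcol);
    blacs_gridinfo_(&ictxt, &nprow, &npcol, &myrow, &mycol);

    /* 2. Build an array descriptor for a block-cyclically distributed
          m x n matrix in mb x nb blocks, then spread the data over the
          grid (e.g. with pdgeadd_ as described above). */
    int m = 8, n = 8, mb = 2, nb = 2, desc[9];
    int locrows = numroc_(&m, &mb, &myrow, &i_zero, &nprow);
    int lld = (locrows > 1) ? locrows : 1;
    descinit_(desc, &m, &n, &mb, &nb, &i_zero, &i_zero, &ictxt, &lld, &info);

    /* 3. Call the ScaLAPACK routine (pdgeqrf_ etc.), copy the result back
          to one process with pdgeadd_, then shut the grid down. */
    blacs_gridexit_(&ictxt);
    blacs_exit_(&i_zero);
}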
I am new to MPI programming. I have an 8 by 10 array that I need to use to find the summation of each row in parallel. In rank 0 (process 0), it will generate the 8 by 10 matrix using a 2-dimensional array. Then I would use the tag number as the first index value (row number) of the array. This way, I can use a unique buffer to send through Isend. However, it looks like my method of tag number generation for Isend is not working. Can you please look into the following code and tell me if I am passing the 2D array and the tag number correctly? When I run this code, it stops just after executing rank 1 and waits. I use 3 processes for this example and use the command mpirun -np 3 test. Please let me know how to tackle this problem, with an example if possible.
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
MPI_Init(&argc, &argv);
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int tag = 1;
int arr[8][10];
MPI_Request request;
MPI_Status status;
int source = 0;
int dest;
printf ("\n--Current Rank: %d\n", world_rank);
if (world_rank == 0)
{
int i = 0;
int a, b, x, y;
printf("* Rank 0 excecuting\n");
for(a=0; a<8/(world_size-1); a++)//if -np is 3, this will loop 4 times
{
for(b=0; b<(world_size-1); b++)//if -np is 3, this loops will loop 2 times
{//So, if -np is 3, due to both of these loops, Isend will be called 8 times
dest = b+1;
tag = a+b;//create a uniqe tag value each time, which can be use as first index value of array
//Error: This tag value passing to Isend doesn't seems to be workiing
MPI_Isend(&arr[tag][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &request);
}
}
for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
{
i++;
for ( y = 0; y < 10; y++ )
{
arr[x][y] = i;
}
}
}
else
{
int a, b;
for(b=1; b<=8/(world_size-1); b++)
{
int sum = 0;
int i;
MPI_Irecv(&arr[tag][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait (&request, &status);
//Error: not getting the correct tag value
for(i = 0; i<10; i++)
{
sum = arr[tag][i]+sum;
}
printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
}
}
MPI_Finalize();
}
The tag issue is because of how the tag is computed (or not) on different processes. You're initializing the tag values for all processes as
int tag = 1;
and later, for process rank 0 you set the tag to
tag = a+b;
which, for the first time this is set, will set tag to 0 because both a and b start out as zero. However, for processes with rank above 0, the tag is never changed. They will continue to have the tag set to 1.
The tag uniquely identifies the message being sent by MPI_Isend and MPI_Irecv, which means that a send and its corresponding receive must have the same tag for the data transfer to succeed. Because the tags are mismatched between processes for most of the receives, the transfers are mostly unsuccessful. This causes processes with rank higher than 0 to eventually block (wait) forever on the call to MPI_Wait.
In order to fix this, you have to make sure to change the tags for the processes with rank above zero. However, before we can do that, there are a few other issues worth touching up on.
With the way you've set your tag for the rank 0 process right now, tag can only ever have values 0 to 4 (assuming 3 processes). This is because a is limited to the range 0 to 3, and b can only have values 0 or 1. The maximum possible sum of these values is 4. This means that when you access your array using arr[tag][0], you will miss out on a lot of the data, and you'll re-send the same rows several times. I recommend changing the way you approach sending each subarray (which you're currently accessing with tag) so that you have only one for loop to determine which subarray to send, rather than two nested loops. Then, you can calculate the process to send the array to as
dest = subarray_index%(world_size - 1) + 1;
This will alternate the destinations between the processes with rank greater than zero. You can keep the tag as just subarray_index. On the receiving side you'll need to calculate the tag per process, per receive.
Finally, I saw that you were initializing your array after you sent the data. You want to do that beforehand.
Combining all these aspects, we get
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
int main (int argc, char *argv[])
{
MPI_Init(&argc, &argv);
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);
int tag = 1;
int arr[8][10];
MPI_Request request;
MPI_Status status;
int source = 0;
int dest;
printf ("\n--Current Rank: %d\n", world_rank);
if (world_rank == 0)
{
int i = 0;
int a, b, x, y;
printf("* Rank 0 excecuting\n");
//I've moved the array generation to before the sends.
for(x=0; x<8; x++)//Generating the whole 8 by 10 2D array
{
i++;
for ( y = 0; y < 10; y++ )
{
arr[x][y] = i;
}
}
//I added a subarray_index as mentioned above.
int subarray_index;
for(subarray_index=0; subarray_index < 8; subarray_index++)
{
dest = subarray_index%(world_size - 1) + 1;
tag = subarray_index;
MPI_Isend(&arr[subarray_index][0], 10, MPI_INT, dest, tag, MPI_COMM_WORLD, &request);
}
}
else
{
int a, b;
for(b=0; b<8/(world_size-1); b++)
{
int sum = 0;
int i;
//We have to do extra calculations here. These match tag, dest, and subarray.
int my_offset = world_rank-1;
tag = b*(world_size-1) + my_offset;
int subarray = b;
MPI_Irecv(&arr[subarray][0], 10, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait (&request, &status);
for(i = 0; i<10; i++)
{
sum = arr[subarray][i]+sum;
}
printf("\nSum is: %d at rank: %d and tag is:%d\n", sum, world_rank, tag);
}
}
MPI_Finalize();
}
There's one thing that still seems a bit unfinished in this version for you to consider: what will happen if your number of processes changes? For example, if you have 4 processes instead of 3, it looks like you may run into some trouble with the loop
for(b=0; b<8/(world_size-1); b++)
because each process will execute it the same number of times, but 8 rows don't split cleanly over 3 workers (non-rank-zero processes): integer division gives 8/3 = 2 receives per worker, while two of the workers are actually sent 3 rows each, so some messages would never be received.
However, if that is not a concern to you, then you do not need to handle such cases.
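One way to handle that case, should you want to (a hedged sketch, not part of the code above), is to give the first few workers one extra row each, so that every row is sent and received exactly once even when the division isn't even:

/* Hypothetical helper: how many of total_rows rows worker w
   (w = 1 .. nworkers, i.e. the non-zero ranks) should handle. */
int rows_for_worker(int total_rows, int nworkers, int w)
{
    int base  = total_rows / nworkers;          /* everyone gets at least this many   */
    int extra = total_rows % nworkers;          /* leftover rows                      */
    return base + ((w - 1) < extra ? 1 : 0);    /* first 'extra' workers get one more */
}

The sender would still loop over all 8 rows, while each worker would loop rows_for_worker(8, world_size-1, world_rank) times instead of 8/(world_size-1).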
Aside from the obvious question: "why on earth would you want to do that?", there are so many problems here that I'm not sure I'll be able to list them all. I'll give it a try though:
Tag: it seems that the bulk of your method is to use the tag as an indicator of where the receiver should look for the data. But there are (at least) two major flaws here:
Since tag isn't known before reception, what is &arr[tag][0] supposed to be?
Tags in MPI are message "identifiers"... Under normal circumstances a given communication (a send and its matching receive) should have a matching tag. This can be alleviated by using the special MPI_ANY_TAG value on the receiving side, and retrieving the actual value from the MPI_TAG field of the receive's status (see the sketch just after this list). But that's another story.
Bottom line here is that the method isn't such a good one.
Data initialisation: one of the major principles of non-blocking MPI communications is that you should never modify a buffer you used for a communication between the post of the communication (the MPI_Isend() here) and its finalisation (which is missing here). Therefore, your data generation must happen before the attempts to communicate the data.
Speaking of which, communication finalisation: you have to finalise your sending communications. This can be done using either a wait-type call (MPI_Wait() or MPI_Waitall()), or a loop of test-type calls (MPI_Test() and such) until they report completion...
The MPI_Irecv(): why are you using a non-blocking receive when the very next call is MPI_Wait()? If you want a blocking receive, just call MPI_Recv() directly.
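As an aside, here is a minimal sketch of the MPI_ANY_TAG idea mentioned above (the buffer and count are illustrative):

MPI_Status status;
int row[10];
/* Accept a message with any tag, then recover the tag the sender actually used. */
MPI_Recv(row, 10, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
int actual_tag = status.MPI_TAG;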
So fundamentally, what you try to do here doesn't look right. Therefore, I'm very reluctant to propose a corrected version, since I don't understand the actual problem you are trying to solve. Is this code a reduced version of a bigger real one (or an initial version of something that is supposed to grow), or just a toy example meant for you to learn how MPI send/receive works? Is there any fundamental reason why you're not using a collective communication such as MPI_Scatter()?
Depending on your answers to these questions, I can try to produce a valid version.
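For what it's worth, here is a hedged sketch of the MPI_Scatter() alternative mentioned above. It reorganises the computation so that every rank, including rank 0, works on a share of the rows, and it assumes the number of ranks divides the 8 rows evenly:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    int arr[8][10];
    if (rank == 0)                           /* fill the matrix on the root */
        for (int x = 0; x < 8; x++)
            for (int y = 0; y < 10; y++)
                arr[x][y] = x + 1;

    int rows_per_rank = 8 / nprocs;          /* assumes an even split    */
    int local[8][10];                        /* big enough for any split */
    MPI_Scatter(arr,   rows_per_rank * 10, MPI_INT,
                local, rows_per_rank * 10, MPI_INT, 0, MPI_COMM_WORLD);

    for (int r = 0; r < rows_per_rank; r++) {
        int sum = 0;
        for (int y = 0; y < 10; y++)
            sum += local[r][y];
        printf("rank %d: row sum %d\n", rank, sum);
    }

    MPI_Finalize();
    return 0;
}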
I would like to gather data from arrays of double and organize them at the same time. Say we have 2 MPI ranks:
if(rank == 0)
P = {0,1,4,5,8,9};
else
P = {2,3,6,7,10,11};
How could I gather the information located in P and place it in order, i.e. P in the master should contain P = [0 1 2 ... 9 10 11]?
I could gather P as it is and then reorganize it at the root, but this approach would not be very efficient as P grows. I have tried creating an MPI_Type_vector, but I have not managed to get it right yet. Any ideas?
It depends a little bit on what you mean by "in order". If you mean that, as in the above example, each vector is made up of blocks of data and you want those blocks interleaved in a fixed known order, yes, you can certainly do this. (The question could also be read to be asking if you can do a sort as part of the gather; that's rather harder.)
You have the right approach; you want to send the data as is, but receive the data into specified chunks broken up by processor. Here, the data type you want to receive into looks like this:
MPI_Datatype vectype;
MPI_Type_vector(NBLOCKS, BLOCKSIZE, size*BLOCKSIZE, MPI_CHAR, &vectype);
That is, for a given processor's input, you're going to receive it into NBLOCKS blocks of size BLOCKSIZE, each separated by however many processors there are times the blocksize. As it is, you could receive into that type; to gather into that type, however, you need to set the extents so that the data from each processor is gathered into the right place:
MPI_Datatype gathertype;
MPI_Type_create_resized(vectype, 0, BLOCKSIZE*sizeof(char), &gathertype);
MPI_Type_commit(&gathertype);
The reason for that resizing is given in, for instance, this answer, and likely elsewhere on this site as well. Briefly: the resized extent (one block) determines where each successive rank's contribution starts in the receive buffer, so rank i's data begins at an offset of i*BLOCKSIZE rather than overlapping the previous contributions.
Putting this together into sample code gives us the following:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
int main(int argc, char **argv) {
int rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
const int BLOCKSIZE=2; /* each block of data is 2 items */
const int NBLOCKS =3; /* each task has 3 such blocks */
char locdata[NBLOCKS*BLOCKSIZE];
for (int i=0; i<NBLOCKS*BLOCKSIZE; i++)
locdata[i] = 'A' + (char)rank; /* rank 0 = 'AAA..A'; rank 1 = 'BBB..B', etc */
MPI_Datatype vectype, gathertype;
MPI_Type_vector(NBLOCKS, BLOCKSIZE, size*BLOCKSIZE, MPI_CHAR, &vectype);
MPI_Type_create_resized(vectype, 0, BLOCKSIZE*sizeof(char), &gathertype);
MPI_Type_commit(&gathertype);
char *globaldata = NULL;
if (rank == 0) globaldata = malloc((NBLOCKS*BLOCKSIZE*size+1)*sizeof(char));
MPI_Gather(locdata, BLOCKSIZE*NBLOCKS, MPI_CHAR,
globaldata, 1, gathertype,
0, MPI_COMM_WORLD);
if (rank == 0) {
globaldata[NBLOCKS*BLOCKSIZE*size] = '\0';
printf("Assembled data:\n");
printf("<%s>\n", globaldata);
free(globaldata);
}
MPI_Type_free(&gathertype);
MPI_Finalize();
return 0;
}
Running gives:
$ mpirun -np 3 ./vector
Assembled data:
<AABBCCAABBCCAABBCC>
$ mpirun -np 7 ./vector
Assembled data:
<AABBCCDDEEFFGGAABBCCDDEEFFGGAABBCCDDEEFFGG>
I am a beginner MPI user and I may have made some mistakes in my parallel code for my calculation.
I need to compute an iterative estimation on a large data set and I want to calculate it in parallel using MPI in C.
I made a standard (ANSI) C function ('myFunc') to estimate an element in the output dataset ('param_2') based on the input parameters ('param_1', 'param_3', 'table_1', 'table_2', 'table_3') and the estimation of the previous iteration ('param_2'). The calculation could be done in parallel if we partition the new estimation ('param_2') into chunks.
When I did some profiling on the code, I realised that the calculation started at almost the same time on each node (process), but finished in a serial fashion, one after another (with a fixed time interval between them).
It looks like they are using some shared resource or something like that... I tried to eliminate all contention between the processes, but I am afraid I do not have enough experience in MPI to solve the problem.
I thought every MPI process has its own 'copy' of the declared variables and uses them independently of the others, so I do not understand why the processes wait for each other to finish the calculation when each has its own copy of the parameters...
Here is the simplified version of the code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
#define X 131
#define Y 131
#define Z 150
#define MASTER 0
float table_1[31][8];
float table_2[31][4];
float table_3[31][2];
int main(int argc, char* argv[]) {
float *param_1;
float *param_2;
float param_3;
float *chunk;
int taskid, numtasks, chunk_size, it, i; /* declarations omitted from the original simplification */
int file_length = X*Y*Z;
float myFunc(int i, float *param_1, float *param_2, float param_3);
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
chunk_size = (file_length + numtasks - 1) / numtasks; /* integer ceiling; ceil() on an int/int division has no effect */
/* Allocate memory for the input parameters and the local chunk */
param_1 = malloc(file_length*sizeof(float));
param_2 = malloc(file_length*sizeof(float));
chunk = malloc(chunk_size*sizeof(float));
if( taskid == MASTER) {
/* Read parameters from file (table_1, table_2, table_3, param_1) */
}
MPI_Bcast(table_1,31*8,MPI_FLOAT,MASTER,MPI_COMM_WORLD);
MPI_Bcast(table_2,31*4,MPI_FLOAT,MASTER,MPI_COMM_WORLD);
MPI_Bcast(table_3,31*2,MPI_FLOAT,MASTER,MPI_COMM_WORLD);
MPI_Bcast(param_1,file_length,MPI_FLOAT,MASTER,MPI_COMM_WORLD);
for(it = 0; it < 10; it++) {
for(i = 0; i < chunk_size; i++) {
chunk[i] = myFunc((taskid*chunk_size)+i, param_1, param_2, param_3);
}
MPI_Gather(chunk, chunk_size, MPI_FLOAT, param_2, chunk_size, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
MPI_Bcast(param_2, file_length, MPI_FLOAT, MASTER, MPI_COMM_WORLD);
}
MPI_Finalize();
free(...);
return 0;
}
float myFunc(int i, float *param_1, float *param_2, float param_3) {
/* Uses the global tables (table_1, table_2, table_3) and some locally declared variables */
/* No MPI function here, only Math functions */
}
If you have a solution, advice, or a comment, please be kind and share it with me; I would be grateful, thank you!
I am thinking about implementing a wrapper for MPI that imitates OpenMP's way
of parallelizing for loops.
begin_parallel_region( chunk_size=100 , num_proc=10 );
for( int i=0 ; i<1000 ; i++ )
{
//some computation
}
end_parallel_region();
The code above distributes computation inside the for loop to 10 slave MPI processors.
Upon entering the parallel region, the chunk size and number of slave processors are provided.
Upon leaving the parallel region, the MPI processors are synched and are put idle.
EDITED in response to High Performance Mark.
I have no intention to simulate the OpenMP's shared memory model.
I propose this because I need it.
I am developing a library that is required to build graphs from mathematical functions.
In these mathematical functions, there often exist for loops like the one below.
for( int i=0 ; i<n ; i++ )
{
s = s + sin(x[i]);
}
So I want to first be able to distribute sin(x[i]) to slave processors and at the end reduce to a single variable, just like in OpenMP.
I was wondering if there is such a wrapper out there so that I don't have to reinvent the wheel.
Thanks.
There is no such wrapper out there which has escaped from the research labs into widespread use. What you propose is not so much re-inventing the wheel as inventing the flying car.
I can see how you propose to write MPI code which simulates OpenMP's approach to sharing the burden of loops; what is much less clear is how you propose to have MPI simulate OpenMP's shared-memory model.
In a simple OpenMP program one might have, as you suggest, 10 threads each perform 10% of the iterations of a large loop, perhaps updating the values of a large (shared) data structure. To simulate that inside your cunning wrapper in MPI you'll either have to (i) persuade single-sided communications to behave like shared memory (this might be doable and will certainly be difficult) or (ii) distribute the data to all processes, have each process independently compute 10% of the results, then broadcast the results all-to-all so that at the end of execution each process has all the data that the others have.
Simulating shared memory computing on distributed memory hardware is a hot topic in parallel computing, always has been, always will be. Google for distributed shared memory computing and join the fun.
EDIT
Well, if you've distributed x across processes then individual processes can compute sin(x[i]) and you can reduce the sum on to one process using MPI_Reduce.
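For concreteness, a hedged sketch of that reduction, assuming x has already been distributed so that each rank holds localn elements in a local array x:

/* Each rank sums its own slice; MPI_Reduce combines the partial sums on rank 0. */
double local_sum = 0.0, total = 0.0;
for (int i = 0; i < localn; i++)
    local_sum += sin(x[i]);
MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
/* total is meaningful only on rank 0 */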
I must be missing something about your requirements because I just can't see why you want to build any superstructure on top of what MPI already provides. Nevertheless, my answer to your original question remains No, there is no such wrapper as you seek and all the rest of my answer is mere commentary.
Yes, you could do this, for specific tasks. But you shouldn't.
Consider how you might implement this; the begin part would distribute the data, and the end part would bring the answer back:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
typedef struct state_t {
int globaln;
int localn;
int *locals;
int *offsets;
double *localin;
double *localout;
double (*map)(double);
} state;
state *begin_parallel_mapandsum(double *in, int n, double (*map)(double)) {
state *s = malloc(sizeof(state));
s->globaln = n;
s->map = map;
/* figure out decomposition */
int size, rank;
MPI_Comm_size(MPI_COMM_WORLD, &size);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
s->locals = malloc(size * sizeof(int));
s->offsets = malloc(size * sizeof(int));
s->offsets[0] = 0;
for (int i=0; i<size; i++) {
s->locals[i] = (n+i)/size;
if (i < size-1) s->offsets[i+1] = s->offsets[i] + s->locals[i];
}
/* allocate local arrays */
s->localn = s->locals[rank];
s->localin = malloc(s->localn*sizeof(double));
s->localout = malloc(s->localn*sizeof(double));
/* distribute */
MPI_Scatterv( in, s->locals, s->offsets, MPI_DOUBLE,
s->localin, s->locals[rank], MPI_DOUBLE,
0, MPI_COMM_WORLD);
return s;
}
double end_parallel_mapandsum(state **s) {
double localanswer=0., answer;
/* sum up local answers */
for (int i=0; i<((*s)->localn); i++) {
localanswer += ((*s)->localout)[i];
}
/* and get global result. Everyone gets answer */
MPI_Allreduce(&localanswer, &answer, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
free( (*s)->localin );
free( (*s)->localout );
free( (*s)->locals );
free( (*s)->offsets );
free( (*s) );
return answer;
}
int main(int argc, char **argv) {
int rank;
double *inputs = NULL; /* only allocated on rank 0 */
double result;
int n=100;
const double pi=4.*atan(1.);
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
inputs = malloc(n * sizeof(double));
for (int i=0; i<n; i++) {
inputs[i] = 2.*pi/n*i;
}
}
state *s=begin_parallel_mapandsum(inputs, n, sin);
for (int i=0; i<s->localn; i++) {
s->localout[i] = (s->map)(s->localin[i]);
}
result = end_parallel_mapandsum(&s);
if (rank == 0) {
printf("Calculated result: %lf\n", result);
double trueresult = 0.;
for (int i=0; i<n; i++) trueresult += sin(inputs[i]);
printf("True result: %lf\n", trueresult);
}
MPI_Finalize();
}
That constant distribute/gather is a terrible communications burden to sum up a few numbers, and is antithetical to the entire distributed-memory computing model.
To a first approximation, shared memory approaches - OpenMP, pthreads, IPP, what have you - are about scaling computations faster; about throwing more processors at the same chunk of memory. On the other hand, distributed-memory computing is about scaling a computation bigger; about using more resources, particularly memory, than can be found on a single computer. The big win of using MPI is when you're dealing with problem sets which can't fit in any one node's memory, ever. So when doing distributed-memory computing, you avoid having all the data in any one place.
It's important to keep that basic approach in mind even when you are just using MPI on-node to use all the processors. The above scatter/gather approach will just kill performance. The more idiomatic distributed-memory computing approach is for the logic of the program to already have distributed the data - that is, your begin_parallel_region and end_parallel_region above would have already been built into the code above your loop at the very beginning. Then, every loop is just
for( int i=0 ; i<localn ; i++ )
{
s = s + sin(x[i]);
}
and when you need to exchange data between tasks (or reduce a result, or what have you) then you call the MPI functions to do those specific tasks.
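For example, the reduction step after such a loop is a single call (a minimal sketch; s here is assumed to be the local partial sum of type double from the loop above):

/* Combine every rank's partial sum into a global total. */
double global_s = 0.0;
MPI_Allreduce(&s, &global_s, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
/* every rank now holds global_s */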
Is MPI a must, or are you just trying to run your OpenMP-like code on a cluster? In the latter case, I suggest you take a look at Intel's Cluster OpenMP:
http://www.hpcwire.com/hpcwire/2006-05-19/openmp_on_clusters-1.html