C MPI array passing

Why can't I access muhray in the last printf (eight lines from the bottom)? The print lines that start with "!!" work correctly, but I can't seem to get the right values at the very end.
Here is my output:
[computer#node01 ~]$ mpiexec -n 8 ./presum 1000
!! proc0's array is size 125 and goes from 1 to 1
proc0's array is size 125 and goes from 4693173 to 1819307369
!! proc2's array is size 125 and goes from 1 to 1
proc2's array is size 125 and goes from 4693173 to 1819307369
!! proc3's array is size 125 and goes from 1 to 1
proc3's array is size 125 and goes from 4693173 to 1819307369
!! proc1's array is size 125 and goes from 1 to 1
proc1's array is size 125 and goes from 4693173 to 1819307369
!! proc4's array is size 125 and goes from 1 to 1
proc4's array is size 125 and goes from 4693173 to 1819307369
!! proc5's array is size 125 and goes from 1 to 1
proc5's array is size 125 and goes from 4693173 to 1819307369
!! proc6's array is size 125 and goes from 1 to 1
proc6's array is size 125 and goes from 4693173 to 1819307369
!! proc7's array is size 125 and goes from 1 to 1
proc7's array is size 125 and goes from 4693173 to 1819307369
Here is the code in question:
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
//max size of the data array to split up
#define MAXSIZE 1000000
//methods
int checkInput(int nprocs, int argc, char *argv[], int id);
//mpi send & rec tags
int ARSIZE = 0; //array size
int ARR = 1; //array
int MSM = 2; //slave sum
int main(int argc, char *argv[]) {
int ARsize; /*size of the array to pre-sum*/
int id; /*process id number*/
int nprocs; /*number of processors*/
int i, j, k; /*counters*/
int muhsize; /*size of personal array to calculate*/
int * muhray; /**/
//MPI framework
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &id);
MPI_Barrier(MPI_COMM_WORLD);
//pull input, check values, return ARsize
ARsize = checkInput(nprocs, argc, argv, id);
//set up array, serial run, send out chunks
if (!id) {
//variables only the zero node needs
int data[ARsize]; /*full original array of numbers*/
int chunkSize, upper, lower; /*vars to determine cunksize to send out*/
int smoothCount = 0; /*BOOL for uneven division chunksize*/
//fill array with numbers
for (i = 0; i < ARsize; i++) {
data[i] = 1;
}
//sequential solution here
//determine chunkSize
chunkSize = (int) (ARsize/nprocs);
if (ARsize % nprocs != 0) {
chunkSize = chunkSize + 1;
smoothCount = 1;
}
//send chunks of data to procs
for (i = 0; i < nprocs; i++) {
lower = i * chunkSize;
upper = ((i+1) * chunkSize) - 1;
if (i == nprocs-1 && smoothCount == 1) {
upper = ARsize-1;
}
int intarray[(upper-lower)];
for (k = lower, j = 0; k <= upper; k++, j++) {
intarray[j] = data[k];
}
if(i > 0) {
//send array size
MPI_Send(&j, 1, MPI_INT, i, ARSIZE, MPI_COMM_WORLD);
//send actual array
MPI_Send(intarray, j, MPI_INT, i, ARR, MPI_COMM_WORLD);
}
//zero no send to self, this data used later for all nodes calc
else {
muhsize = j;
int muhray[muhsize];
for (j = 0; j <= chunkSize; j++) {
muhray[j] = intarray[j];
}
printf("!! proc%d's array is size %d and goes from %d to %d\n", id, muhsize, muhray[0], muhray[(muhsize-1)]);
}
}
}
else {
MPI_Recv(&muhsize, 1, MPI_INT, 0, ARSIZE, MPI_COMM_WORLD, &status);
int muhray[muhsize];
MPI_Recv(muhray, muhsize, MPI_INT, 0, ARR, MPI_COMM_WORLD, &status);
printf("!! proc%d's array is size %d and goes from %d to %d\n", id, muhsize, muhray[0], muhray[(muhsize-1)]);
fflush(stdout);
}
printf("proc%d's array is size %d and goes from %d to %d\n", id, muhsize, muhray[0], muhray[muhsize]);
fflush(stdout);
//MPI_Send(&muhsize, 1, MPI_INT, 0, MSM, MPI_COMM_WORLD);
MPI_Finalize();
}
//pull input, check values, return ARsize
int checkInput(int nprocs, int argc, char *argv[], int id) {
int size;
if (nprocs % 2 != 0 || nprocs == 6 || nprocs > 8) {
if (!id) printf("run with 2^k procs, (1 >= k <= 3)\n");
fflush(stdout);
MPI_Finalize();
exit(1);
}
if (argc != 2) {
if (!id) printf("Usage: presum [array size (max: %d)]\n", MAXSIZE);
fflush(stdout);
MPI_Finalize();
exit(1);
}
size = atoi(argv[1]);
if (size <= nprocs) {
if (!id) printf("search range must be greater than processor count\n");
fflush(stdout);
MPI_Finalize();
exit(1);
}
if (size > MAXSIZE) {
if (!id) printf("array size must be less than or equal to %d\n", MAXSIZE);
fflush(stdout);
MPI_Finalize();
exit(1);
}
return size;
}

The problem is very likely one of variable scope. For instance, here:
...
else {
MPI_Recv(&muhsize, 1, MPI_INT, 0, ARSIZE, MPI_COMM_WORLD, &status);
int muhray[muhsize];
MPI_Recv(muhray, muhsize, MPI_INT, 0, ARR, MPI_COMM_WORLD, &status);
printf("!! proc%d's array is size %d and goes from %d to %d\n", id, muhsize, muhray[0], muhray[(muhsize-1)]);
fflush(stdout);
}
printf("proc%d's array is size %d and goes from %d to %d\n", id, muhsize, muhray[0], muhray[muhsize]);
you declare int muhray[muhsize]; inside the scope of the else block. When you exit this scope, muhray is destroyed, since it is a local variable. What you are using in the last printf is the uninitialized int * muhray; declared at the top of main.
Note that, although they have the same name, these are two different variables.

It seems that you print muhray[muhsize-1] in the two "!!" lines but muhray[muhsize] at the end. You should always print muhray[muhsize-1]; muhray[muhsize] is one element past the end of the array.
The else branch with the two MPI_Recv calls uses a locally defined variable called muhray, which is different from the one declared at the start of main (it shadows it). Thus the "!!" printf uses a completely different variable than the final "proc..." printf.
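A minimal sketch of one fix, assuming you want every rank to reach the final printf with its own data: fill the single int * muhray declared in main() by allocating it with malloc() instead of shadowing it with new local arrays, and index the last element as muhray[muhsize - 1].

/* In the rank-0 branch (the i == 0 case), use the outer muhray: */
muhsize = j;
muhray = malloc(muhsize * sizeof(int));          /* uses the int *muhray from the top of main */
for (j = 0; j < muhsize; j++)                    /* note: < muhsize, not <= chunkSize */
    muhray[j] = intarray[j];

/* In the else branch, likewise drop the local declaration: */
MPI_Recv(&muhsize, 1, MPI_INT, 0, ARSIZE, MPI_COMM_WORLD, &status);
muhray = malloc(muhsize * sizeof(int));          /* no new declaration, no shadowing */
MPI_Recv(muhray, muhsize, MPI_INT, 0, ARR, MPI_COMM_WORLD, &status);

/* The final print then sees initialized memory; note muhsize - 1, not muhsize: */
printf("proc%d's array is size %d and goes from %d to %d\n",
       id, muhsize, muhray[0], muhray[muhsize - 1]);
free(muhray);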

Related

MPI_Get doesn't send the correct elements between the buffers of two processes

I am trying to create a program that will ultimately be transposing a matrix in MPI so that it can be used in further computations. But right now I am trying to do a simple thing: Root process has a 4x4 matrix "A" which contains elements 0..15 in row-major order. This data is scattered to 2 processes so that each receives one half of the matrix. Process 0 has a 2x4 sub_matrix "a" and receives elements 0..7 and Process 1 gets elements 8..15 in its sub_matrix "a".
My goal is for these processes to swap their a matrices with each other using MPI_Get. Since I was encountering problems, I decided to test a simpler version and simply make process 0 get process 1's "a" matrix, that way, both processes will have the same elements in their respective sub_matrices once I print after the MPI_Get-call and the MPI_fence are called.
Yet the output is erratic; I have tried to troubleshoot for several hours but haven't been able to crack it. I would appreciate your help with this.
This is the code below, and the run-command: mpirun -n 2 ./get
Compile: mpicc -std=c99 -g -O3 -o get get.c -lm
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#define NROWS 4
#define NCOLS 4
int allocate_matrix(int ***M, int ROWS, int COLS) {
int *p;
if (NULL == (p = malloc(ROWS * COLS * sizeof(int)))) {
perror("Couldn't allocate memory for input (p in allocate_matrix)");
return -1;
}
if (NULL == (*M = malloc(ROWS * sizeof(int*)))) {
perror("Couldn't allocate memory for input (M in allocate_matrix)");
return -1;
}
for(int i = 0; i < ROWS; i++) {
(*M)[i] = &(p[i * COLS]);
}
return 0;
}
int main(int argc, char *argv[])
{
int rank, nprocs, **A, **a, n_cols, n_rows, block_len;
MPI_Win win;
int errs = 0;
if(rank==0)
{
allocate_matrix(&A, NROWS, NCOLS);
for (int i=0; i<NROWS; i++)
for (int j=0; j<NCOLS; j++)
A[i][j] = i*NCOLS + j;
}
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&nprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
n_cols=NCOLS; //cols in a sub_matrix
n_rows=NROWS/nprocs; //rows in a sub_matrix
block_len = n_cols*n_rows;
allocate_matrix(&a, n_rows, n_cols);
for (int i = 0; i <n_rows; i++)
for (int j = 0; j < n_cols; j++)
a[i][j] = 0;
MPI_Datatype block_type;
MPI_Type_vector(n_rows, n_cols, n_cols, MPI_INTEGER, &block_type);
MPI_Type_commit(&block_type);
MPI_Scatter(*A, 1, block_type, &(a[0][0]), block_len, MPI_INTEGER, 0, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
printf("process %d: \n", rank);
for (int j=0; j<n_rows; j++){
for (int i=0; i<n_cols; i++){
printf("%d ",a[j][i]);
}
printf("\n");
}
if (rank == 0)
{
printf("TESTING, before Get a[0][0] %d\n", a[0][0]);
MPI_Win_create(NULL, 0, 1, MPI_INFO_NULL, MPI_COMM_WORLD, &win);
MPI_Win_fence((MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE), win);
MPI_Get(*a, 8, MPI_INTEGER, 1, 0, 8, MPI_INTEGER, win);
MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
printf("TESTING, after Get a[0][0] %d\n", a[0][0]);
printf("process %d:\n", rank);
for (int j=0; j<n_rows; j++){
for (int i=0; i<n_cols; i++){
printf("%d ", a[j][i]);
}
printf("\n");
}
}
else
{ /* rank = 1 */
MPI_Win_create(a, n_rows*n_cols*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
MPI_Win_fence((MPI_MODE_NOPUT | MPI_MODE_NOPRECEDE), win);
MPI_Win_fence(MPI_MODE_NOSUCCEED, win);
}
MPI_Type_free(&block_type);
MPI_Win_free(&win);
MPI_Finalize();
return errs;
}
This is the output that I get:
process 0:
0 1 2 3
4 5 6 7
process 1:
8 9 10 11
12 13 14 15
process 0:
1552976336 22007 1552976352 22007
1552800144 22007 117 0
But what I want is that, the second time I print the matrix from process 0, it has the same elements as the matrix in process 1.
First, I doubt this is really the code you are testing. You are freeing MPI type variables that are not defined, and rank is uninitialised in
if(rank==0)
{
allocate_matrix(&A, NROWS, NCOLS);
for (int i=0; i<NROWS; i++)
for (int j=0; j<NCOLS; j++)
A[i][j] = i*NCOLS + j;
}
and the code segfaults because A won't get allocated on the root.
Moving this block after MPI_Comm_rank(), freeing the correct MPI type variable, and fixing the call to MPI_Win_create on rank 1:
MPI_Win_create(&a[0][0], n_rows*n_cols*sizeof(int), sizeof(int), MPI_INFO_NULL, MPI_COMM_WORLD, &win);
// This -------^^^^^^^^
produces the result you are seeking.
I'd recommend sticking to a single notation for the beginning of the array, such as &a[0][0], instead of a mixture of *a and &a[0][0]. This will prevent (or at least reduce the occurrence of) similar errors in the future.
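Putting those pieces together, a sketch of the corrected setup (using the same variable names as in the question):

MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);            /* rank is valid only after this call */

if (rank == 0) {                                 /* root-only allocation moved after MPI_Comm_rank */
    allocate_matrix(&A, NROWS, NCOLS);
    for (int i = 0; i < NROWS; i++)
        for (int j = 0; j < NCOLS; j++)
            A[i][j] = i * NCOLS + j;
}
/* ... */
/* On rank 1, expose the contiguous data block, not the array of row pointers: */
MPI_Win_create(&a[0][0], n_rows * n_cols * sizeof(int), sizeof(int),
               MPI_INFO_NULL, MPI_COMM_WORLD, &win);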

MPI Search In Array

I'm trying to find a specific value inside an array, searching in parallel with MPI. When my code finds the value, it shows an error.
ERROR
Assertion failed in file src/mpid/ch3/src/ch3u_buffer.c at line 77: FALSE
memcpy argument memory ranges overlap, dst_=0x7ffece7eb590 src_=0x7ffece7eb590 len_=4
PROGRAM
const char *FILENAME = "input.txt";
const size_t ARRAY_SIZE = 640;
int main(int argc, char **argv)
{
int *array = malloc(sizeof(int) * ARRAY_SIZE);
int rank,size;
MPI_Status status;
MPI_Request request;
int done,myfound,inrange,nvalues;
int i,j,dummy;
/* Let the system do what it needs to start up MPI */
MPI_Init(&argc,&argv);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
MPI_Comm_size(MPI_COMM_WORLD,&size);
myfound=0;
if (rank == 0)
{
createFile();
array = readFile(FILENAME);
}
MPI_Bcast(array, ARRAY_SIZE, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Irecv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &request);
MPI_Test(&request, &done, &status);
nvalues = ARRAY_SIZE / size; //EACH PROCESS RUNS THAT MUCH NUMBER IN ARRAY
i = rank * nvalues; //OFFSET FOR EACH PROCESS INSIDE THE ARRAY
inrange = (i <= ((rank + 1) * nvalues - 1) && i >= rank * nvalues); //LIMIT OF THE OFFSET
while (!done && inrange)
{
if (array[i] == 17)
{
dummy = 1;
for (j = 0; j < size; j++)
{
MPI_Send(&dummy, 1, MPI_INT, j, 1, MPI_COMM_WORLD);
}
printf("P:%d found it at global index %d\n", rank, i);
myfound = 1;
}
printf("P:%d - %d - %d\n", rank, i, array[i]);
MPI_Test(&request, &done, &status);
++i;
inrange = (i <= ((rank + 1) * nvalues - 1) && i >= rank * nvalues);
}
if (!myfound)
{
printf("P:%d stopped at global index %d\n", rank, i - 1);
}
MPI_Finalize();
}
The error is somewhere in here, because when I put a value that is not in the array (for example -5) into the if condition, the program runs smoothly.
dummy = 1;
for (j = 0; j < size; j++)
{
MPI_Send(&dummy, 1, MPI_INT, j, 1, MPI_COMM_WORLD);
}
printf("P:%d found it at global index %d\n", rank, i);
myfound = 1;
Thanks
Your program is invalid with respect to the MPI standard because you use the same buffer (&dummy) for both MPI_Irecv() and MPI_Send().
You can either use two distinct buffers (e.g. dummy_send and dummy_recv), or, since you do not seem to care about the value of dummy, use NULL as the buffer and send/receive zero-size messages.
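A sketch of the first option, with two hypothetical buffers dummy_send and dummy_recv replacing the single dummy:

int dummy_send = 1, dummy_recv = 0;

MPI_Irecv(&dummy_recv, 1, MPI_INT, MPI_ANY_SOURCE, 1, MPI_COMM_WORLD, &request);
/* ... inside the search loop ... */
if (array[i] == 17)
{
    for (j = 0; j < size; j++)
    {
        /* send from a buffer distinct from the one MPI_Irecv is still using */
        MPI_Send(&dummy_send, 1, MPI_INT, j, 1, MPI_COMM_WORLD);
    }
    printf("P:%d found it at global index %d\n", rank, i);
    myfound = 1;
}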

MPI Reduce and Broadcast work, but cause a future return value to fail

I am using MPI to implement Dijkstra's algorithm for a class. My teacher has no idea why this is broken either and has given me permission to post here.
My problem is happening in the chooseVertex function. The program works fine with 1 processor, but when I run it with 2 processors, processor 0 fails to return leastPosition, even though I am able to print the contents of leastPosition on the line before the return.
My code:
#include "mpi.h"
#include <stdlib.h>
#include <stdio.h>
#define min(x,y) ((x) > (y) ? (y) : (x))
#define MASTER 0
#define INFINTY 100000
void dijkstra(int, int, int **, int *, int, int);
int chooseVertex(int *, int, int *, int, int);
int main(int argc, char* argv[])
{
int rank, size, i, j;
//Initialize MPI
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
//Initialize graph
int src = 0;
int n = 12;
int **edge = (int**) malloc(n * sizeof(int *));
for (i = 0; i < n; i++)
edge[i] = (int *)malloc(n * sizeof(int));
int dist[12];
//Set all graph lengths to infinity
for (i = 0; i < n; i++)
{
for (j = 0; j < n; j++)
{
if (i == j) { edge[i][j] = 0; }
else { edge[i][j] = INFINTY; }
}
}
//set graph edge lengths
edge[0][3] = 5;
edge[0][6] = 13;
edge[1][5] = 12;
edge[2][1] = 7;
edge[3][2] = 9;
edge[3][4] = 2;
edge[4][7] = 3;
edge[5][10] = 1;
edge[5][11] = 4;
edge[6][9] = 9;
edge[7][8] = 4;
edge[8][9] = 10;
edge[8][10] = 7;
edge[9][10] = 6;
edge[10][11] = 1;
dijkstra(src, n, edge, dist, rank, size);
if(rank == MASTER){ printf("The distance is %d", dist[n - 1]); }
MPI_Finalize();
return 0;
}
//called by dijkstras function below
int chooseVertex(int *dist, int n, int *found, int rank, int size) {
int i, tmp, partition, lower, upper, leastPosition;
int least = INFINTY;
//set the number of nodes wach processor will work with
partition = n / size;
lower = rank * partition;
upper = lower + partition;
//used for MPI_Reduce
struct {
int pos;
int val;
} sendBuffr, recvBuffr;
//calculate least position
for (i = lower; i < upper; i++) {
tmp = dist[i];
if ((!found[i]) && (tmp < least)) {
least = tmp;
leastPosition = i;
}
}
//if all nodes checked are INFINITY, go with last node checked
if (least == INFINTY) leastPosition = i;
//set the send buffer for MPI_Reduce
sendBuffr.val = least;
sendBuffr.pos = leastPosition;
//Rank 0 processor has correct least position and value
MPI_Reduce(&sendBuffr, &recvBuffr, 1, MPI_DOUBLE_INT, MPI_MINLOC, MASTER, MPI_COMM_WORLD);
if (rank == MASTER) leastPosition = recvBuffr.pos;
//Update all processors to have correct position
MPI_Bcast(&leastPosition, 1, MPI_INT, MASTER, MPI_COMM_WORLD);
//Print the contents of leastPosition on rank 0 for debugging
if(rank == MASTER) printf("LeastPosition for rank %d is: %d\n", rank, leastPosition);
fflush(stdout);
return leastPosition;
}
void dijkstra(int SOURCE, int n, int **edge, int *dist, int rank, int size)
{
int i, j, count, partition, lower, upper, *found, *sendBuffer;
j = INFINTY;
sendBuffer = (int *)malloc(n * sizeof(int));
found = (int *)calloc(n, sizeof(int));
partition = n / size;
lower = rank * partition;
upper = lower + partition;
//set the distance array
for (i = 0; i < n; i++) {
found[i] = 0;
dist[i] = edge[SOURCE][i];
sendBuffer[i] = dist[i];
}
found[SOURCE] = 1;
count = 1;
//Dijkstra loop
while (count < n) {
printf("before ChooseVertex: rank %d reporting\n", rank);
fflush(stdout);
j = chooseVertex(dist, n, found, rank, size);
printf("after ChooseVertex: rank %d reporting\n", rank);
fflush(stdout);
count++;
found[j] = 1;
for (i = lower; i < upper; i++) {
if (!found[i])
{
dist[i] = min(dist[i], dist[j] + edge[j][i]);
sendBuffer[i] = dist[i];
}
}
MPI_Reduce(sendBuffer, dist, n, MPI_INT, MPI_MIN, MASTER, MPI_COMM_WORLD);
MPI_Bcast(dist, n, MPI_INT, MASTER, MPI_COMM_WORLD);
}
}
Sample error messages:
before ChooseVertex: rank 1 reporting
before ChooseVertex: rank 0 reporting
LeastPosition for rank 0 is: 3
after ChooseVertex: rank 1 reporting
after ChooseVertex: rank 0 reporting
before ChooseVertex: rank 1 reporting
before ChooseVertex: rank 0 reporting
after ChooseVertex: rank 1 reporting
LeastPosition for rank 0 is: 4
after ChooseVertex: rank 0 reporting
before ChooseVertex: rank 0 reporting
before ChooseVertex: rank 1 reporting
LeastPosition for rank 0 is: 7
after ChooseVertex: rank 1 reporting
job aborted:
[ranks] message
[0] process exited without calling finalize
[1] terminated
---- error analysis -----
[0] on My-ComputerName
Assignmet3PP ended prematurely and may have crashed. exit code 3
---- error analysis -----
Your reduce command is:
MPI_Reduce(&sendBuffr, &recvBuffr, 1, MPI_DOUBLE_INT, MPI_MINLOC, MASTER, MPI_COMM_WORLD);
By using MPI_DOUBLE_INT, you are saying that you are sending a struct with two fields: a double followed by an int. That is not your struct, however: you have two ints, so you should use MPI_2INT. Alternatively, you could create your own type using vectors.
An example fix is:
MPI_Reduce(&sendBuffr, &recvBuffr, 1, MPI_2INT, MPI_MINLOC, MASTER, MPI_COMM_WORLD);
Also, a reduction followed by a broadcast can easily be combined into one step with MPI_Allreduce().
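A sketch of both fixes together. Note that MPI_MINLOC expects the value first and the index second, so the struct fields are ordered val, then pos here (the question's struct has them the other way around):

/* The {value, index} layout is what MPI_2INT + MPI_MINLOC expect. */
struct {
    int val;   /* the value being minimized */
    int pos;   /* its index */
} sendBuffr, recvBuffr;

sendBuffr.val = least;
sendBuffr.pos = leastPosition;

/* One MPI_Allreduce replaces the MPI_Reduce + MPI_Bcast pair:
 * every rank ends up with the global minimum and its position. */
MPI_Allreduce(&sendBuffr, &recvBuffr, 1, MPI_2INT, MPI_MINLOC, MPI_COMM_WORLD);
leastPosition = recvBuffr.pos;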

How do I use MPI to scatter a 2d array?

I am trying to use MPI to distribute the work for bucket sort. When I scatter the array, I want each process to receive a single bucket (an int array) and be able to print its contents. However, my current program prints out incorrect values, which makes me think I am not indexing into the memory I want. Can someone explain how to properly index into the array I am passing to each process, or what I am doing incorrectly?
#define MAX_VALUE 64
#define N 32
main(int argc, char *argv[]){
MPI_Init(&argc, &argv); //initialize MPI environment
int** sendArray = malloc(16*sizeof(int *));
int *arrayIndex = (int *) malloc(16*sizeof(int));
int *receiveArray = (int *) malloc(N*sizeof(int));
int nps, myrank;
MPI_Comm_size(MPI_COMM_WORLD, &nps);
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
int i;
if(myrank == 0)
{
//create an array that stores the number of values in each bucket
for( i = 0; i < 16; i++){
arrayIndex[i] = 0;
}
int bucket =0;
int temp = 0;
//creates an int array within each array index of sendArray
for( i = 0; i < 16; i++){
sendArray[i] = (int *)malloc(N * sizeof(int));
}
//Create a random int array with values ranging from 0 to MAX_VALUE
for(i = 0; i < N; i++){
temp= rand() % MAX_VALUE;
bucket = temp/4;
printf("assigning %d to index [%d][%d]\n", temp, bucket, arrayIndex[bucket]);
sendArray[bucket][arrayIndex[bucket]]= temp;
arrayIndex[bucket] = arrayIndex[bucket] + 1;
}
MPI_Scatter(sendArray, 16, MPI_INT, receiveArray, N, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Bcast(arrayIndex, 16, MPI_INT, 0, MPI_COMM_WORLD);
printf("bucket %d has %d values\n", myrank, arrayIndex[myrank]);
for( i = 0; i < arrayIndex[myrank]; i++){
printf("bucket %d index %d has value %d\n", myrank, i, receiveArray[i]);
}
}
What you are trying to do doesn't work because MPI always sends only the data you point to. It does not follow the pointers in your sendArray.
In your example you could just make your sendArray bigger, namely 16 * N, and put all your data into that contiguous array. That way you have a one-dimensional array, but that should not be a problem in the code you gave us, because all buckets have the same length, so you can access element j of bucket i with sendArray[i * N + j].
Also, in most cases send_count should be equal to recv_count. In your case that would be N. The correct MPI call would be
MPI_Scatter(sendArray, N, MPI_INT, receiveArray, N, MPI_INT, 0, MPI_COMM_WORLD);
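A minimal sketch of that layout, keeping the question's 16 buckets with fixed capacity N (so it assumes one bucket per rank, i.e. 16 ranks):

#define MAX_VALUE 64
#define N 32
#define NBUCKETS 16

/* One contiguous block: bucket i occupies sendArray[i*N .. i*N + N-1]. */
int *sendArray    = malloc(NBUCKETS * N * sizeof(int));
int *arrayIndex   = malloc(NBUCKETS * sizeof(int));
int *receiveArray = malloc(N * sizeof(int));

if (myrank == 0) {
    for (i = 0; i < NBUCKETS; i++)
        arrayIndex[i] = 0;
    for (i = 0; i < N; i++) {
        int temp   = rand() % MAX_VALUE;
        int bucket = temp / 4;
        /* element j of bucket i lives at sendArray[i * N + j] */
        sendArray[bucket * N + arrayIndex[bucket]] = temp;
        arrayIndex[bucket]++;
    }
}

/* Every rank receives exactly one bucket of N ints. */
MPI_Scatter(sendArray, N, MPI_INT, receiveArray, N, MPI_INT, 0, MPI_COMM_WORLD);
MPI_Bcast(arrayIndex, NBUCKETS, MPI_INT, 0, MPI_COMM_WORLD);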

fread() in MPI is giving Signal 7 Bus Error

I am a newbie to C and MPI.
I have the following code which I am using with MPI.
#include "RabinKarp.c"
#include <stdio.h>
#include <stdlib.h>
#include<string.h>
#include<math.h>
#include </usr/include/mpi/mpi.h>
typedef struct {
int lowerOffset;
int upperOffset;
int processorNumber;
} patternPartitioning;
int rank;
FILE *fp;
char* filename = "/home/rohit/Downloads/10_seqs_2000_3000_bp.fasta";
int n = 0;
int d = 0;
//number of processors
int k, i = 0, lower_limit, upper_limit;
int main(int argc, char** argv) {
char* pattern= "taaat";
patternPartitioning partition[k];
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &k);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
fp = fopen(filename, "rb");
if (fp != '\0') {
fseek(fp, 0L, SEEK_END);
n = ftell(fp);
fseek(fp, 0L, SEEK_SET);
}
//Do for Master Processor
if(rank ==0){
int m = strlen(pattern);
printf("pattern length is %d \n", m);
d = (int)(n - m + 1) / k;
for (i = 0; i <= k - 2; i++) {
lower_limit = round(i * d);
upper_limit = round((i + 1) * d) + m - 1;
partition->lowerOffset = lower_limit;
partition->upperOffset = upper_limit;
partition->processorNumber = i+1;
// k-2 times calculate the limits like this
printf(" the lower limit is %d and upper limit is%d\n",
partition->lowerOffset, partition->upperOffset);
int mpi_send_block[2];
mpi_send_block[0]= lower_limit;
mpi_send_block[1] = upper_limit;
MPI_Send(mpi_send_block, 2, MPI_INT, i+1, i+1, MPI_COMM_WORLD);
//int MPI_Send(void *buf, int count, MPI_Datatype dtype, int dest, int tag, MPI_Comm comm);
}
// for the last processor calculate the index here
lower_limit = round((k - 1) * d);
upper_limit = n;
partition->lowerOffset = lower_limit;
partition->upperOffset = n;
partition->processorNumber = k;
printf("Processor : %d : has start : %d : and end : %d :\n",rank,partition->lowerOffset,partition->upperOffset);
//perform the search here
int size = partition->upperOffset-partition->lowerOffset;
char *text = (char*) malloc (size);
fseek(fp,partition->lowerOffset , SEEK_SET);
fread(&text, sizeof(char), size, fp);
printf("read in rank0");
fputs(text,stdout);
int number =0;
fputs(text,stdout);
fputs(pattern,stdout);
number = rabincarp(text,pattern);
for (i = 0; i <= k - 2; i++) {
int res[1];
res[0]=0;
MPI_Status status;
// MPI_Recv(res, 1, MPI_INT, i+1, i+1, MPI_COMM_WORLD, &status);
// number = number + res[0];
}
printf("\n\ntotal number of result found:%d\n", number);
} else {
patternPartitioning mypartition;
MPI_Status status;
int number[1];
int mpi_recv_block[2];
MPI_Recv(mpi_recv_block, 2, MPI_INT, 0, rank, MPI_COMM_WORLD,
&status);
printf("Processor : %d : has start : %d : and end : %d :\n",rank,mpi_recv_block[0],mpi_recv_block[1]);
//perform the search here
int size = mpi_recv_block[1]-mpi_recv_block[0];
char *text = (char*) malloc (size);
fseek(fp,mpi_recv_block[0] , SEEK_SET);
fread(&text, sizeof(char), size, fp);
printf("read in rank1");
// fread(text,size,size,fp);
printf("length of text segment by proc: %d is %d",rank,(int)strlen(text));
number[0] = rabincarp(text,pattern);
//MPI_Send(number, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
}
MPI_Barrier(MPI_COMM_WORLD);
MPI_Finalize();
fclose(fp);
return (EXIT_SUCCESS);
}
If I run this with mpirun -np 2 pnew, I get the following error:
[localhost:03265] *** Process received signal ***
[localhost:03265] *** Process received signal ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 3265 on node localhost exited on signal 7 (Bus error).
If I remove the fread() statements I don't get the error. Can anyone tell me what I am missing?
char *text = (char*) malloc (size);
fseek(fp,partition->lowerOffset , SEEK_SET);
fread(&text, sizeof(char), size, fp);
The documentation for fread says "The function fread() reads nmemb elements of data, each size bytes long, from the stream pointed to by stream, storing them at the location given by ptr."
Since text is a char *, &text is the address of a char *. That won't have enough space to hold the data you're reading. You want to pass fread the address of the memory you allocated, not the address of the variable holding that address! (So remove the &.)
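A sketch of the corrected read; the + 1 and the terminating '\0' are additions I'm making so that the later strlen()/fputs() calls see a proper string:

char *text = malloc(size + 1);                        /* +1 for a terminating '\0' */
fseek(fp, partition->lowerOffset, SEEK_SET);
size_t nread = fread(text, sizeof(char), size, fp);   /* pass text itself, not &text */
text[nread] = '\0';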
if (fp != '\0') {
fp is a FILE *, while '\0' is an int constant.
This is not the error, but I suggest compiling with a higher warning level to catch this kind of error.
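For example, the check could read:

fp = fopen(filename, "rb");
if (fp != NULL) {                /* compare the FILE * against NULL, not '\0' */
    fseek(fp, 0L, SEEK_END);
    n = ftell(fp);
    fseek(fp, 0L, SEEK_SET);
}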
