Storing values starting from a particular location in MPI_Recv - c

I am testing an example where I try to send an array of 4 elements from process 0 to process 1, using MPI_Type_contiguous. This is the code:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int myrank, size; // size will take care of the number of processes
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // declaring the matrix
    double mat[4] = {1, 2, 3, 4};
    int r = 4;
    double snd_buf[r];
    double recv_buf[r];
    double buf[r];
    int position = 0;
    MPI_Status status[r];

    MPI_Datatype type;
    MPI_Type_contiguous(r, MPI_DOUBLE, &type);
    MPI_Type_commit(&type);

    // sending the data
    if (myrank == 0)
    {
        MPI_Send(&mat[0], r, type, 1 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);
    }
    // receiving the data
    if (myrank == 1)
    {
        MPI_Recv(recv_buf, r, type, 0 /*src*/, 100 /*tag*/, MPI_COMM_WORLD, &status[0]);
    }
    // printing
    if (myrank == 1)
    {
        for (int i = 0; i < r; i++)
        {
            printf("%lf ", recv_buf[i]);
        }
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
As one can see, recv_buf is the same size as the array, and the output printed is 1 2 3 4.
Now what I am trying to do is this: say recv_buf has size 10 and I want to store the received elements at locations 6 to 9. I have written the code below for that, but to my surprise it gives no output.
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int myrank, size; // size will take care of the number of processes
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // declaring the matrix
    double mat[4] = {1, 2, 3, 4};
    int r = 4;
    double snd_buf[r];
    double recv_buf[10]; // declared it of size 10
    double buf[r];
    int position = 0;
    MPI_Status status[r];

    MPI_Datatype type;
    MPI_Type_contiguous(r, MPI_DOUBLE, &type);
    MPI_Type_commit(&type);

    // packing and sending the data
    if (myrank == 0)
    {
        MPI_Send(&mat[0], r, type, 1 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);
    }
    // receiving the data
    if (myrank == 1)
    {
        MPI_Recv(&recv_buf[6], r, type, 0 /*src*/, 100 /*tag*/, MPI_COMM_WORLD, &status[0]);
    }
    // printing
    if (myrank == 1)
    {
        for (int i = 6; i < 10; i++)
        {
            printf("%lf ", recv_buf[i]);
        }
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
Where am I going wrong?

From this SO thread one can read:
MPI_Type_contiguous is for making a new datatype which is count copies
of the existing one. This is useful to simplify the process of
sending a number of datatypes together, as you don't need to keep track
of their combined size (count in MPI_Send can be replaced by 1).
That being said, in your MPI_Send call:
MPI_Send (&mat[0], r , type, 1 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);
you should not send an array with 'r' elements of type 'type', but rather send 1 element of type 'type' (which already equals 4 doubles). One of the goals of MPI_Type_contiguous is to abstract the count away to 1 instead of keeping track of the number of elements. With a count of r, each side describes r × r = 16 doubles, which overruns both mat and recv_buf; that is most likely why your second program produces no output.
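So the send becomes (exactly as in the complete code below):
MPI_Send(&mat[0], 1, type, 1 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);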
The same applies to your recv call:
MPI_Recv(&recv_buf[6], 1, type, 0 /*src*/ , 100 /*tag*/, MPI_COMM_WORLD,&status[0]);
Finally, you should also free the custom type accordingly:
MPI_Type_free(&type);
The entire code:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int myrank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double mat[4] = {1, 2, 3, 4};
    int r = 4;
    double snd_buf[r];
    double recv_buf[10];
    MPI_Status status;

    MPI_Datatype type;
    MPI_Type_contiguous(r, MPI_DOUBLE, &type);
    MPI_Type_commit(&type);

    if (myrank == 0)
        MPI_Send(&mat[0], 1, type, 1, 100, MPI_COMM_WORLD);
    else if (myrank == 1)
    {
        MPI_Recv(&recv_buf[6], 1, type, 0, 100, MPI_COMM_WORLD, &status);
        for (int i = 6; i < 10; i++)
            printf("%lf ", recv_buf[i]);
        printf("\n");
    }
    MPI_Type_free(&type);
    MPI_Finalize();
    return 0;
}
The Output:
1.000000 2.000000 3.000000 4.000000
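As a side note (not part of the quoted answer): MPI matches messages by type signature, so one element of the contiguous type is equivalent to r elements of MPI_DOUBLE. The receive could therefore also be written without the custom type as MPI_Recv(&recv_buf[6], r, MPI_DOUBLE, ...); the contiguous type simply lets you keep the count at 1.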

MPI Array undeclared

I am attaching a minimal reproducible example of an error I am facing in a larger code. Say I have 2 processes, P0 and P1. I declare an array int arr[2] inside P0 and store a value in arr[0]. Then I redeclare int arr[2] for P1 only, but whenever I try to access it inside P1, I get the error ‘arr’ undeclared (first use in this function) 37 | arr[1]=1;. I am attaching the code here.
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int myrank, size; // size will take care of the number of processes
    double sTime, eTime, time, max_time;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int r = 0, c;
    if (myrank == 0)
    {
        int arr[2];
        arr[0] = 1;
    }
    if (myrank != 0)
    {
        int arr[2];
    }
    if (myrank == 1)
    {
        arr[1] = 2;
        MPI_Send(&arr[1], 1, MPI_INT, 0 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);
    }
    MPI_Status status[2];
    if (myrank == 0)
    {
        MPI_Recv(&arr[1], 1, MPI_INT, 1 /*src*/, 100 /*tag*/, MPI_COMM_WORLD, &status[1]);
        for (int i = 0; i < 2; i++)
        {
            printf(" %d", arr[i]);
        }
    }
    MPI_Finalize();
    return 0;
}
To prevent this error I have tried declaring int arr[2] inside the branch:
if(myrank==1){
    int arr[2];
    arr[1]=2;
    MPI_Send(arr[1], 1 , MPI_INT, 0 /*dest*/ , 100 /*tag*/ , MPI_COMM_WORLD);
}
But this time I get this error:
‘arr’ undeclared (first use in this function)
47 | MPI_Recv(arr[0],1 , MPI_INT, 1 /*src*/ , 100 /*tag*/, MPI_COMM_WORLD,&status[1]);
I am not able to figure out what is wrong here. Any suggestions would be really helpful.
That error occurs because you declared a variable inside an if statement and then want to use it outside that if. In C that is not possible: when you do
if(myrank==0)
{
    int arr[2];
    arr[0]=1;
}
the array int arr[2]; only exists in the scope of that if statement. Therefore, trying to access it outside leads to a compilation error.
To get around this you can either declare int arr[2] outside the if statement (but then every MPI process would allocate it), or declare a pointer to int outside and only allocate memory for it in the process that you want, namely:
int *arr = NULL;
if(myrank==0)
{
    arr = malloc(sizeof(int) * 2);
    arr[0]=1;
}
When the array is no longer needed, do not forget to free it:
free(arr);
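A minimal sketch (my addition, not part of the original answer) of how the pointer variant fits into your program, assuming only ranks 0 and 1 take part in the exchange:
int *arr = NULL;
MPI_Status status;
if (myrank == 0 || myrank == 1)
    arr = malloc(sizeof(int) * 2); // only the ranks that use it allocate
if (myrank == 1)
{
    arr[1] = 2;
    MPI_Send(&arr[1], 1, MPI_INT, 0 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);
}
if (myrank == 0)
{
    arr[0] = 1;
    MPI_Recv(&arr[1], 1, MPI_INT, 1 /*src*/, 100 /*tag*/, MPI_COMM_WORLD, &status);
    for (int i = 0; i < 2; i++)
        printf(" %d", arr[i]);
}
free(arr); // free(NULL) is a no-op, so this is safe on every rank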
Another option (which works in your example) is to rearrange the code so that all the code related to one process is in one if branch and the code for the other process is in another branch, namely:
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    int myrank, size; // size will take care of the number of processes
    double sTime, eTime, time, max_time;
    MPI_Status status[2];
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    int r = 0, c;
    if (myrank == 0)
    {
        int arr[2];
        arr[0] = 1;
        MPI_Recv(&arr[1], 1, MPI_INT, 1 /*src*/, 100 /*tag*/, MPI_COMM_WORLD, &status[1]);
        for (int i = 0; i < 2; i++)
            printf(" %d", arr[i]);
    }
    if (myrank == 1)
    {
        int arr[2];
        arr[1] = 2;
        MPI_Send(&arr[1], 1, MPI_INT, 0 /*dest*/, 100 /*tag*/, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}

An efficient way to perform an all reduction in MPI of a value based on another variable?

As an example, let's say I have
int a = ...;
int b = ...;
int c;
where a is the result of some complex local calculation and b is some metric for the quality of a.
I'd like to send the best value of a to every process and store it in c where best is defined by having the largest value of b.
I guess I'm just wondering if there is a more efficient way of doing this than doing an allgather on a and b and then searching through the resulting arrays.
The actual code involves sending and comparing several hundred values on up to several hundred or thousand processes, so any efficiency gains would be welcome.
I guess I'm just wondering if there is a more efficient way of doing
this than doing an allgather on a and b and then searching through the
resulting arrays.
This can be achieved with a single MPI_Allreduce.
I will present two approaches: a simpler one, suitable for your use case, and a more generic one for more complex use cases. The latter will also showcase MPI functionality such as custom MPI datatypes and custom MPI reduction operators.
Approach 1
To represent
int a = ...;
int b = ...;
you could use the following struct:
typedef struct MyStruct {
    int b;
    int a;
} S;
then you can use the MPI datatype MPI_2INT and the MPI operator MPI_MAXLOC:
The operator MPI_MINLOC is used to compute a global minimum and also
an index attached to the minimum value. MPI_MAXLOC similarly computes
a global maximum and index. One application of these is to compute a
global minimum (maximum) and the rank of the process containing this
value.
In your case, instead of the rank we will be using the value of 'a'. Hence, the MPI_Allreduce call:
S local, global;
...
MPI_Allreduce(&local, &global, 1, MPI_2INT, MPI_MAXLOC, MPI_COMM_WORLD);
The complete code would look like the following:
#include <stdio.h>
#include <mpi.h>

typedef struct MyStruct {
    int b;
    int a;
} S;

int main(int argc, char *argv[]){
    MPI_Init(NULL, NULL); // Initialize the MPI environment
    int world_rank;
    int world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    // Some fake data
    S local, global;
    local.a = world_rank;
    local.b = world_size - world_rank;

    MPI_Allreduce(&local, &global, 1, MPI_2INT, MPI_MAXLOC, MPI_COMM_WORLD);

    if(world_rank == 0){
        printf("%d %d\n", global.b, global.a);
    }
    MPI_Finalize();
    return 0;
}
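A note on the struct layout (MPI semantics, not stated explicitly in the answer): with MPI_2INT and MPI_MAXLOC, the first int of each pair is the value being maximized and the second is the payload (normally an index) carried along with it. That is why b is declared before a in the struct.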
Approach 2
MPI_MAXLOC only works with a limited set of predefined pair datatypes. Nonetheless, for the remaining cases you can use the following approach (based on this SO thread):
1. create a struct that will contain the values a and b;
2. create a custom MPI_Datatype representing the struct from step 1, to be sent across processes;
3. use MPI_Allreduce:
int MPI_Allreduce(const void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
Combines values from all processes and distributes the result back to
all processes
4. use a max operation.
I'd like to send the best value of 'a' to every process and store it in
'c' where best is defined by having the largest value of 'b'.
Then you have to tell MPI to only consider the element b of the struct. Hence, you need to create a custom MPI_Op max operation.
Coding the approach
Let us break down the aforementioned implementation step by step:
First, define the struct:
typedef struct MyStruct {
    double a, b;
} S;
Second, create the custom MPI_Datatype:
void defineStruct(MPI_Datatype *tstype) {
    const int count = 2;
    int blocklens[count];
    MPI_Datatype types[count];
    MPI_Aint disps[count];
    for (int i = 0; i < count; i++){
        types[i] = MPI_DOUBLE;
        blocklens[i] = 1;
    }
    disps[0] = offsetof(S, a); // offsetof needs <stddef.h>
    disps[1] = offsetof(S, b);
    MPI_Type_create_struct(count, blocklens, disps, types, tstype);
    MPI_Type_commit(tstype);
}
Very Important
note that since we are using a struct, you have to be careful with the fact that (source)
the C standard allows arbitrary padding between the fields.
So reducing a struct with two doubles is NOT the same as reducing an array of two doubles.
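If you want to guard against such padding explicitly, one option (my addition, not in the original answer) is to set the committed type's extent to sizeof(S) inside defineStruct, e.g.:
// Sketch: make the type's extent equal sizeof(S), so that arrays of S
// (len > 1 in the reduction) stay aligned even if the struct is padded.
MPI_Datatype tmp;
MPI_Type_create_struct(count, blocklens, disps, types, &tmp);
MPI_Type_create_resized(tmp, 0, sizeof(S), tstype);
MPI_Type_commit(tstype);
MPI_Type_free(&tmp);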
In the main you have to do:
MPI_Datatype structtype;
defineStruct(&structtype);
Third, create the custom max operation. Note that it must copy the whole struct, not just b; otherwise a would be left unpaired from the winning b:
void max_struct(void *in, void *inout, int *len, MPI_Datatype *type){
    S *invals = in;
    S *inoutvals = inout;
    for (int i = 0; i < *len; i++)
        if (invals[i].b > inoutvals[i].b)
            inoutvals[i] = invals[i]; // keep a and b together
}
In the main, do:
MPI_Op maxstruct;
MPI_Op_create(max_struct, 1, &maxstruct);
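The second argument of MPI_Op_create (1 here) declares the operation commutative, which this max selection is; that allows MPI to apply the reduction in any order.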
Finally, call the MPI_AllReduce:
S local, global;
...
MPI_Allreduce(&local, &global, 1, structtype, maxstruct, MPI_COMM_WORLD);
The entire code put together:
#include <assert.h>
#include <stddef.h> // for offsetof
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

typedef struct MyStruct {
    double a, b;
} S;

// Custom reduction: keep the struct whose b is larger, so that a stays
// paired with the winning b.
void max_struct(void *in, void *inout, int *len, MPI_Datatype *type){
    S *invals = in;
    S *inoutvals = inout;
    for (int i = 0; i < *len; i++)
        if (invals[i].b > inoutvals[i].b)
            inoutvals[i] = invals[i];
}

void defineStruct(MPI_Datatype *tstype) {
    const int count = 2;
    int blocklens[count];
    MPI_Datatype types[count];
    MPI_Aint disps[count];
    for (int i = 0; i < count; i++) {
        types[i] = MPI_DOUBLE;
        blocklens[i] = 1;
    }
    disps[0] = offsetof(S, a);
    disps[1] = offsetof(S, b);
    MPI_Type_create_struct(count, blocklens, disps, types, tstype);
    MPI_Type_commit(tstype);
}

int main(int argc, char *argv[]){
    MPI_Init(NULL, NULL); // Initialize the MPI environment
    int world_rank;
    int world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    MPI_Datatype structtype;
    MPI_Op maxstruct;
    S local, global;
    defineStruct(&structtype);
    MPI_Op_create(max_struct, 1, &maxstruct);

    // Just some random values
    local.a = world_rank;
    local.b = world_size - world_rank;

    MPI_Allreduce(&local, &global, 1, structtype, maxstruct, MPI_COMM_WORLD);

    if(world_rank == 0){
        double c = global.a;
        printf("%f %f\n", global.b, c);
    }
    MPI_Op_free(&maxstruct); // release the custom op and type
    MPI_Type_free(&structtype);
    MPI_Finalize();
    return 0;
}
You can pair the value of b with the rank of the process to find the rank that contains the maximum value of b. The MPI_DOUBLE_INT type is very useful for this purpose. You can then broadcast a from this rank in order to have the value at each process.
#include <mpi.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int my_rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    // Create random a and b on each rank.
    srand(123 + my_rank);
    double a = rand() / (double)RAND_MAX;
    double b = rand() / (double)RAND_MAX;

    struct
    {
        double value;
        int rank;
    } s_in, s_out;
    s_in.value = b;
    s_in.rank = my_rank;

    printf("before: %d, %f, %f\n", my_rank, a, b);

    // Find the maximum value of b and the corresponding rank.
    MPI_Allreduce(&s_in, &s_out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, MPI_COMM_WORLD);
    b = s_out.value;

    // Broadcast a from the rank with the maximum value.
    MPI_Bcast(&a, 1, MPI_DOUBLE, s_out.rank, MPI_COMM_WORLD);

    printf("after: %d, %f, %f\n", my_rank, a, b);
    MPI_Finalize();
    return 0;
}
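One detail worth knowing (from the MPI standard, not mentioned above): if several ranks hold the same maximum b, MPI_MAXLOC returns the smallest such rank, so the broadcast root is well defined and identical on every process.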

MPI_Scatter is receiving wrong values

My goal is to take an array of 6 integers and split them among 3 processes. However, the numbers in receiveBuffer are not correct. I don't know why the three processes don't contain integers from the original array.
#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>
#include <assert.h>
#include "mpi.h"

#define ARRAY_SIZE 6

// simple print array method
void printArray(int arr[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        printf("%d ", arr[i]);
    printf("\n");
}

int main(int argc, char *argv[])
{
    srand(time(NULL));
    int array[ARRAY_SIZE];
    int rank, numNodes;

    // fill array with random numbers and print
    for (int i = 0; i < ARRAY_SIZE; i++)
        array[i] = rand();
    printArray(array, ARRAY_SIZE);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &numNodes);

    int receiveBuffer[ARRAY_SIZE/numNodes];

    if (rank == 0)
    {
        MPI_Scatter(array, ARRAY_SIZE/numNodes, MPI_INT, &receiveBuffer, 0, MPI_INT, rank, MPI_COMM_WORLD);
    }

    printf("ID: %d with %d items.\n", rank, ARRAY_SIZE/numNodes);
    printArray(receiveBuffer, ARRAY_SIZE/numNodes);

    MPI_Finalize();
    return 0;
}
Additionally, why is the original array printing for each process? Doesn't parallelization begin after INIT?
There are two issues with your usage of MPI_Scatter():
MPI_Scatter() is a collective operation, and hence has to be invoked by all the ranks of the communicator (not only rank zero);
since you use MPI_INT for both the send and the receive datatype, the send and receive counts should be equal (i.e. use ARRAY_SIZE/numNodes instead of 0).
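Putting the two fixes together, the call (now executed by every rank, with the root fixed at 0 since the root argument must be the same on all ranks) would look like this:
MPI_Scatter(array, ARRAY_SIZE/numNodes, MPI_INT,
            receiveBuffer, ARRAY_SIZE/numNodes, MPI_INT,
            0 /*root*/, MPI_COMM_WORLD);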
The MPI standard does not specify what happens before MPI_Init(), and it is very common that mpirun spawns all the tasks so that they all execute the code before MPI_Init(). That is why MPI_Init() is generally invoked at the very beginning of an MPI program.

Using MPI_Reduce and MPI_Scatter in C with mpi

I'm new to MPI. With this code I want to distribute one row of a 2D array to each processor, for example:
p1 = 1,2,3,4
p2 = 5,6,7,8
p3 = 9,10,11,12
p4 = 13,14,15,16
When I run the program with mpirun -np 4 ./a, MPI_Scatter works fine, but MPI_Reduce makes the program hang in the terminal. I do not know how to deal with MPI_Reduce to find the local max (mymax). Can anyone help?
#include "mpi.h"
#include <stdio.h>
#define size 4
int main ()
{
int np, rank, sendcount, recvcount, source,i;
int recvbuf[size];
int mymax;
int max=0;
MPI_Init(NULL,NULL);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &np);
int sendbuf[size][size] ={
{1, 2, 3, 4},
{5, 6, 7, 8},
{9,10,11, 12},
{13, 14, 15, 16}};
source = 1;
sendcount = size;
recvcount = size;
MPI_Scatter(sendbuf,sendcount,MPI_INT,recvbuf,recvcount,MPI_INT,source,MPI_COMM_WORLD);
printf("rank= %d Results: %d %d %d %d \n",rank,recvbuf[0],recvbuf[1],recvbuf[2],recvbuf[3]);
//Each processor has a row, now find local max
mymax = recvbuf[0];
for(i=0;i<recvcount;i++) {
if(mymax < recvbuf[i]) {
mymax = recvbuf[i];
}
}
MPI_Reduce(&mymax,&max ,1,MPI_INT, MPI_MAX, 0, MPI_COMM_WORLD);
if (rank==0) {
printf(" Processor %d has max data after reduce : max= %d ", rank,max);
}
else
printf("----.\n");
MPI_Finalize();
}

Communication between two processors (Parallel Programming)

I want to write a code in which:
Processor P0 reads an array from the keyboard and sends it to processor P1.
Processor P1 prints all of the values to the screen. For example:
[P0]: Enter the size of array: 1
[P0]: Enter the elements of array: 3
[P1]: The array is: 3
[P0]: Enter the size of array: 3
[P0]: Enter the elements of array: 5 7 5
[P1]: The array is: 5 7 5
.
.
.
Here is my first attempt. It has too many faults, I think, but I'm new and want to learn how to code this.
#include <stdio.h>
#include <mpi.h>

#define n 100

int main(int argc, char *argv[]){
    int my_rank, size_cm;
    int value, i;
    int dizi[n];
    double senddata[n];
    double recvdata[n];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size_cm);
    value = 0;
    if(my_rank == 0){
        printf("[%d]: Enter the size of array: ", &my_rank);
        scanf("%d", value);
        printf("[%d]: Enter the elements of array", &my_rank);
        for(i=0; i<n; i++){
            scanf("%d", &dizi[n]);
            senddata[0] = dizi[];
            MPI_Send(senddata, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        }
    }
    if(my_rank == 1){
        MPI_Recv(recvdata, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
        printf("[%d]: The array is: %d ", &my_rank, dizi[n]);
    }
    MPI_Finalize();
    return 0;
}
To get a minimal example that compiles, I added the missing argument to MPI_Recv():
MPI_Status status;
MPI_Recv(recvdata, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
I also modified senddata[0] = dizi[]; to senddata[0] = dizi[i];
As I tried to compile the code you provided, I got a warning:
format ‘%d’ expects argument of type ‘int’, but argument 2 has type ‘int *’
The function scanf() needs a pointer to the data in order to modify it, so int a; scanf("%d",&a); is correct. But printf() just needs the data, since it will not modify it: int a; printf("%d",a); is the right way to go.
If you want the array to be populated, use scanf("%d", &dizi[i]);, not scanf("%d", &dizi[n]);. n is the length of the array dizi, so the index n is out of the array, since array indices start at 0. This can trigger undefined behavior (strange values, a segmentation fault, or even a correct result!).
Since MPI_Send() is called in the loop for(i=0; i<n; i++), process 0 tries to send n messages to process 1, but process 1 only receives one. Hence, process 0 may block at i=1, waiting for process 1 to receive the second message: a deadlock. (Whether MPI_Send() blocks for small messages depends on internal buffering, but you must not rely on that.)
I assume you are trying to send an array from process 0 to process 1. The following code, based on yours, should do the trick. The actual length of the array is n_arr:
#include <stdio.h>
#include <mpi.h>
#include <stdlib.h>

#define n 100

int main(int argc, char *argv[]){
    int my_rank, size_cm;
    int n_arr;
    int i;
    int dizi[n];
    // double senddata[n];
    // double recvdata[n];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size_cm);

    if(my_rank == 0){
        // fflush(stdout); because the standard output is buffered...
        printf("[%d]: Enter the size of array: ", my_rank); fflush(stdout);
        if(scanf("%d", &n_arr) != 1){ fprintf(stderr, "input error\n"); exit(1); }
        if(n_arr > 100){
            fprintf(stderr, "The size of the array is too large\n"); exit(1);
        }
        printf("[%d]: Enter the elements of array", my_rank); fflush(stdout);
        for(i=0; i<n_arr; i++){
            if(scanf("%d", &dizi[i]) != 1){ fprintf(stderr, "input error\n"); exit(1); }
        }
        // sending the length of the array
        MPI_Send(&n_arr, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        // sending the array
        MPI_Send(dizi, n_arr, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }
    if(my_rank == 1){
        // receiving the length of the array
        MPI_Recv(&n_arr, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        // receiving the array
        MPI_Recv(dizi, n_arr, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("[%d]: The array of size %d is: ", my_rank, n_arr);
        for(i=0; i<n_arr; i++){
            printf("%d ", dizi[i]);
        }
        printf("\n");
    }
    MPI_Finalize();
    return 0;
}
Compile it with mpicc main.c -o main and run it with mpirun -np 2 ./main.
I added some checks that the input is correct (always a good thing) and handled the case of n_arr being larger than n=100. The latter could be avoided by using malloc() to allocate memory for the array: that part is left to you!
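For completeness, here is a rough sketch of that malloc() variant (my addition, using the same ranks and tags as above); the array is allocated once its size is known, so no fixed upper bound is needed:
int *dizi = NULL;
if(my_rank == 0){
    /* ...read n_arr as above... */
    dizi = malloc(n_arr * sizeof(int));
    if(dizi == NULL){ fprintf(stderr, "allocation failed\n"); exit(1); }
    /* ...read the elements, then the two MPI_Send calls as above... */
}
if(my_rank == 1){
    MPI_Recv(&n_arr, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    dizi = malloc(n_arr * sizeof(int)); // the size is known only now
    if(dizi == NULL){ fprintf(stderr, "allocation failed\n"); exit(1); }
    MPI_Recv(dizi, n_arr, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    /* ...print as above... */
}
free(dizi); // free(NULL) is harmless on the other ranks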
