I am quite new to meson and C, so please forgive me if the answer to this question is trivial ...
I want to use OpenMP in a C project, and I am using meson as a build tool.
I want to compile the parallel for example from this tutorial.
My main.c looks very similar:
#include <omp.h>

#define N 1000
#define CHUNKSIZE 100

int main(int argc, char *argv[]) {
    int i, chunk;
    float a[N], b[N], c[N];

    /* Some initializations */
    for (i = 0; i < N; i++)
        a[i] = b[i] = i * 1.0;

    chunk = CHUNKSIZE;

    #pragma omp parallel for \
        shared(a, b, c, chunk) private(i) \
        schedule(static, chunk)
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    return 0;
}
My short meson.build file contains this:
project('openmp_with_meson', 'c')
# add_project_arguments('-fopenmp', language: 'c')
exe = executable('some_exe', 'src/main.c') #, c_args: '-fopenmp')
I commented out the c_args keyword in the call to executable here.
Now I end up with the following scenarios:
Without the '-fopenmp' option, I get the warning that the pragma is unknown and will be ignored (as I would expect): ../src/main.c:15:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas] #pragma omp parallel for
With c_args: '-fopenmp' inserted, I no longer get the above warning; instead I get errors about undefined references to GOMP_parallel, omp_get_num_threads, and omp_get_thread_num, and nothing gets built.
When I run gcc manually with gcc -Wall -o manually_with_gcc ../src/main.c -fopenmp, the program compiles and executes without any errors.
Can anyone tell me how to get the executable to compile with meson?
Meson 0.46 or later
Meson 0.46 (released Apr 23, 2018) added built-in OpenMP support. So, if you have Meson 0.46 or later:
project('openmp_with_meson', 'c')
omp = dependency('openmp')
exe = executable('some_exe', 'src/main.c',
                 dependencies : omp)
This should work with both GCC and Clang.
Meson 0.45 or earlier
If you happen to have an older version, e.g. on Debian Stretch, Ubuntu Bionic (18.04 LTS), or Fedora 27, you can do the following:
You need another keyword argument, link_args : '-fopenmp', for executable().
exe = executable('some_exe', 'src/main.c',
                 c_args : '-fopenmp',
                 link_args : '-fopenmp')
Meson builds a C program in two phases: compiling and linking. You can pass extra arguments with c_args for compiling and with link_args for linking.
The option -fopenmp enables OpenMP directives while compiling, and
the flag also arranges for automatic linking of the OpenMP runtime
library.
That is, -fopenmp is a dual-purpose option.
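To see what the dual use means concretely, here is a sketch of the two phases done by hand (paths are illustrative; with GCC the OpenMP runtime is libgomp):
gcc -fopenmp -c src/main.c -o main.o    # compile phase: activates the OpenMP pragmas
gcc -fopenmp main.o -o some_exe         # link phase: pulls in the OpenMP runtime library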
Now, the above is simple and good. Once you understand it, however, you can also compile your program with -fopenmp to activate the OpenMP directives and link the OpenMP runtime library yourself, without passing -fopenmp to link_args.
Here is a complete meson.build:
project('openmp_with_meson', 'c')
cc = meson.get_compiler('c')
libgomp = cc.find_library('gomp')
exe = executable('some_exe', 'src/main.c',
                 c_args : '-fopenmp',
                 dependencies : libgomp)
Meson >= 0.46 now has a builtin for this (docs):
openmp = dependency('openmp') # meson builtin
Related
I have a few questions about parallelisation using OpenMP.
Say I have a program within which there is a nested for loop. From my understanding of the directive #pragma omp parallel for, the outer iteration counter is automatically privatised. Is the same true for the inner iteration counter? This appears to be the case, as the outputs are identical whether I state it explicitly or not.
Is it necessary (or safer) to explicitly privatise iteration counters for for loops within a parallel for block?
I am compiling with GCC. I found that I had some unhelpful crosstalk between threads when using GCC 5.4.0, but not when using GCC 7.5.0. To resolve this, I added private(foo, bar) to the directive, but I am curious as to why it works without this statement for GCC 7.5.0. Does GCC 7.5.0 automatically identify race conditions and privatise things it thinks should be private?
Other than allocating a few additional memory addresses, is there any significant overhead cost in privatising variables? I suspect the answer is 'yes, but (in my case) negligible'. The target audience for this code will be using systems with tens to hundreds of cores.
Toy example, which finds maximum values in chunks along an array:
void find_max(double *inArr, double *outArr, int64_t nSamps, int64_t nCells, int64_t threads) {
    double maxVal, curVal;
    int64_t t, cell;

    #pragma omp parallel for private(maxVal, curVal) num_threads(threads)
    for (t = 0; t < nSamps; t++) {
        maxVal = inArr[t];
        for (cell = 1; cell < nCells; cell++) {
            curVal = inArr[cell * nSamps + t];
            if (curVal > maxVal) {
                maxVal = curVal;
            }
        }
        outArr[t] = maxVal;
    }
}
I am building this as an extension module for a Python library - the call to gcc is:
gcc -pthread -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -Wall -Wstrict-prototypes -c src.c -o src.o -fopenmp -fPIC -Ofast
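For reference, a minimal sketch of the same function with the counters and temporaries declared at their point of use (C99 style), which makes them private by construction and sidesteps the privatisation question entirely:

#include <stdint.h>

void find_max(double *inArr, double *outArr, int64_t nSamps, int64_t nCells, int64_t threads) {
    #pragma omp parallel for num_threads(threads)
    for (int64_t t = 0; t < nSamps; t++) {            /* associated loop variable: private */
        double maxVal = inArr[t];                     /* declared inside the loop: private */
        for (int64_t cell = 1; cell < nCells; cell++) {
            double curVal = inArr[cell * nSamps + t];
            if (curVal > maxVal) {
                maxVal = curVal;
            }
        }
        outArr[t] = maxVal;
    }
}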
Is there a good way to use OpenMP to parallelize a for-loop, only if an -omp argument is passed to the program?
This seems impossible, since #pragma omp parallel for is a compile-time directive and is thus evaluated before the program ever runs, whereas whether the argument is passed to the program is only known at run time.
At the moment I am using a very ugly solution to achieve this, which leads to an enormous duplication of code.
if (ompDefined) {
    #pragma omp parallel for
    for(...)
        ...
}
else {
    for(...)
        ...
}
I think what you are looking for can be solved using a CPU dispatcher technique.
For benchmarking OpenMP code vs. non-OpenMP code, you can create different object files from the same source code like this:
//foo.c
#ifdef _OPENMP
double foo_omp() {
#else
double foo() {
#endif
    double sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for(int i = 0; i < 1000000000; i++) sum += i % 10;
    return sum;
}
Compile like this:
gcc -O3 -c foo.c
gcc -O3 -fopenmp -c foo.c -o foo_omp.o
This creates two object files, foo.o and foo_omp.o. Then you can call one of these functions through a function pointer, like this:
//bar.c
#include <stdio.h>

double foo();
double foo_omp();
double (*fp)();

int main(int argc, char *argv[]) {
    if(argc > 1) {
        fp = foo_omp;
    }
    else {
        fp = foo;
    }
    double sum = fp();
    printf("sum %e\n", sum);
}
Compile and link like this:
gcc -O3 -fopenmp bar.c foo.o foo_omp.o
Then I time the code like this:
time ./a.out -omp
time ./a.out
and the first case takes about 0.4 s and the second case about 1.2 s on my system with 4 cores/8 hardware threads.
Here is a solution which needs only a single source file:
#include <stdio.h>

typedef double foo_type();
foo_type foo, foo_omp, *fp;

#ifdef _OPENMP
#define FUNCNAME foo_omp
#else
#define FUNCNAME foo
#endif

double FUNCNAME () {
    double sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for(int i = 0; i < 1000000000; i++) sum += i % 10;
    return sum;
}

#ifdef _OPENMP
int main(int argc, char *argv[]) {
    if(argc > 1) {
        fp = foo_omp;
    }
    else {
        fp = foo;
    }
    double sum = fp();
    printf("sum %e\n", sum);
}
#endif
Compile like this:
gcc -O3 -c foo.c
gcc -O3 -fopenmp foo.c foo.o
You can set the number of threads at run-time by calling omp_set_num_threads:
#include <omp.h>

int main()
{
    int threads = 1;
#ifdef _OPENMP
    omp_set_num_threads(threads);
#endif
    #pragma omp parallel for
    for(...)
    {
        ...
    }
}
This isn't quite the same as disabling OpenMP, but it will stop it from running calculations in parallel. I've found it's always a good idea to make this settable with a command-line switch (you can implement this using GNU getopt or Boost.ProgramOptions); a sketch follows below. This allows you to easily run single-threaded and multi-threaded tests on the same code.
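For instance, a minimal sketch of such a switch using GNU getopt (the option name -t is my own choice, not from the question):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* getopt() */
#ifdef _OPENMP
#include <omp.h>
#endif

int main(int argc, char *argv[])
{
    int threads = 1;   /* default to single-threaded */
    int opt;
    while ((opt = getopt(argc, argv, "t:")) != -1) {
        if (opt == 't')
            threads = atoi(optarg);
    }
#ifdef _OPENMP
    omp_set_num_threads(threads);
#endif
    /* each thread announces itself; with -t 1 the block runs once */
    #pragma omp parallel
    printf("hello from a thread\n");
    return 0;
}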
As Vladimir F pointed out in the comments, you can also set the number of threads by setting the environment variable OMP_NUM_THREADS before executing your program:
gcc -Wall -Werror -pedantic -O3 -fopenmp -o test test.c
OMP_NUM_THREADS=1 ./test
(Prefixing the variable to the command exports it to that process only, so there is nothing to unset afterwards.)
Finally, you can disable OpenMP at compile-time by not providing GCC with the -fopenmp option. However, you will need to put preprocessor guards around any lines in your code that require OpenMP to be enabled (see above). If you want to use some functions included in the OpenMP library without actually enabling the OpenMP pragmas you can simply link against the OpenMP library by replacing the -fopenmp option with -lgomp.
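Concretely, the two variants look like this (test.c as above; libgomp is GCC's OpenMP runtime):
gcc -O3 -fopenmp test.c -o test    # pragmas active, runtime linked automatically
gcc -O3 test.c -lgomp -o test      # pragmas ignored, omp_* library calls still resolve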
One solution would be to use the preprocessor to ignore the pragma statement if you do not pass an additional flag to the compiler.
For example in your code you might have:
#ifdef MP_ENABLED
#pragma omp parallel for
#endif
for(...)
    ...
and then when you compile you can pass a flag to the compiler to define the MP_ENABLED macro. In the case of GCC (and Clang) you would pass -DMP_ENABLED.
You might then compile with gcc as
gcc SOME_SOURCE.c -I SOME_INCLUDE.h -fopenmp -DMP_ENABLED -o SOME_OUTPUT
(with GCC, -fopenmp both enables the pragmas and links the runtime). Then, when you want to disable the parallelism, you can make a minor tweak to the compile command by dropping -DMP_ENABLED:
gcc SOME_SOURCE.c -I SOME_INCLUDE.h -fopenmp -o SOME_OUTPUT
This leaves the macro undefined, which leads to the preprocessor ignoring the pragma.
You could also use a similar solution with #ifndef instead, depending on whether you consider the parallel behaviour to be the default or not.
Edit: As noted in the comments, compiling with OpenMP enabled defines macros such as _OPENMP, which you could use in place of your own user-defined macro. That looks to be the superior solution, but the difference in effort is reasonably small.
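A minimal sketch of that variant, relying only on the compiler-provided _OPENMP macro (defined automatically under -fopenmp):

#include <stdio.h>

int main(void)
{
    long sum = 0;
#ifdef _OPENMP   /* defined by the compiler when OpenMP is enabled */
    #pragma omp parallel for reduction(+:sum)
#endif
    for (long i = 0; i < 1000000; i++)
        sum += i % 10;
    printf("sum = %ld\n", sum);
    return 0;
}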
The following code is not pipelining when compiled on the C64x+:
void main()
{
    int a, b, ar[100] = {0};

    for (a = 0; a < 1000; a++)
        for (b = 0; b < 100; b++)
            ar[b]++;

    while(1);
}
My IDE (Code Composer v6) gives the following message for the inner loop: "Loop cannot be scheduled efficiently, as it contains complex conditional expression. Try to simplify condition."
The problem seems to be with the nested loop, but I can't find any more information about optimizing one as simple as this.
Has anyone solved a similar issue before?
-- Additional information --
Processor: TMS320C64x+
Compiler: TI v8.0.3
Compiler flags: -mv6400+ --abi=eabi -O3 --opt_for_speed=4 --include_path="D:/TI/ccsv6/tools/compiler/ti-cgt-c6000_8.0.3/include" --advice:performance -g --issue_remarks --verbose_diagnostics --diag_warning=225 --gen_func_subsections=on --debug_software_pipeline --gen_opt_info=2 --gen_profile_info -k --c_src_interlist --asm_listing --output_all_syms
Linker flags: -mv6400+ --abi=eabi -O3 --opt_for_speed=4 --advice:performance -g --issue_remarks --verbose_diagnostics --diag_warning=225 --gen_func_subsections=on --debug_software_pipeline --gen_opt_info=2 --gen_profile_info -k --c_src_interlist --asm_listing --output_all_syms -z -m"dsp.map" -i"D:/TI/ccsv6/tools/compiler/ti-cgt-c6000_8.0.3/lib" -i"D:/TI/ccsv6/tools/compiler/ti-cgt-c6000_8.0.3/include" --reread_libs --warn_sections --xml_link_info="dsp_linkInfo.xml" --rom_model
Removing --gen_profile_info from the compiler flags solved the issue. My loops have been splooped (i.e. software-pipelined through the C64x+ SPLOOP hardware loop buffer).
I've recently started to play around with OpenMP and like it very much.
I am a just-for-fun Classic-VB programmer and like coding functions for my VB programs in C. As such, I use Windows 7 x64 and GCC 4.7.2.
I usually set up all my C functions in one large C file and then compile a DLL out of it. Now I would like to use OpenMP in my DLL.
First of all, I set up a simple example and compiled an exe file from it:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 520000;
    int i;
    int a[n];
    int NumThreads;

    omp_set_num_threads(4);

    #pragma omp parallel for
    for (i = 0; i < n; i++)
    {
        a[i] = 2 * i;
        NumThreads = omp_get_num_threads();
    }

    printf("Value = %d.\n", a[77]);
    printf("Number of threads = %d.", NumThreads);
    return(0);
}
I compile that using gcc -fopenmp !MyC.c -o !MyC.exe and it works like a charm.
However, when I try to use OpenMP in my DLL, it fails. For example, I set up this function:
__declspec(dllexport) int __stdcall TestAdd3i(struct SAFEARRAY **InArr1, struct SAFEARRAY **InArr2, struct SAFEARRAY **OutArr) //OpenMP Test
{
    int LengthArr;
    int i;
    int *InArrElements1;
    int *InArrElements2;
    int *OutArrElements;

    LengthArr = (*InArr1)->rgsabound[0].cElements;

    InArrElements1 = (int*) (**InArr1).pvData;
    InArrElements2 = (int*) (**InArr2).pvData;
    OutArrElements = (int*) (**OutArr).pvData;

    omp_set_num_threads(4);

    #pragma omp parallel for private(i)
    for (i = 0; i < LengthArr; i++)
    {
        OutArrElements[i] = InArrElements1[i] + InArrElements2[i];
    }

    return(omp_get_num_threads());
}
The structs are defined, of course. I compile that using:
gcc -fopenmp -c -DBUILD_DLL dll.c -o dll.o
gcc -fopenmp -shared -o mydll.dll dll.o -lgomp -Wl,--add-stdcall-alias
The compiler and linker do not complain (not even warnings come up) and the DLL file is actually built. But when I try to call the function from within VB, VB claims that the DLL file could not be found (run-time error 53). The strange thing is that as soon as a single OpenMP construct is present inside the .c file, VB claims a missing DLL even if I call a function that does not contain a single line of OpenMP code. When I comment out all the OpenMP stuff, the function works as expected, but of course doesn't use OpenMP for parallelisation.
What is wrong here? Any help appreciated, thanks in advance! :-)
The problem in this case is most probably that LD_LIBRARY_PATH is not set. You must set LD_LIBRARY_PATH to the directory that contains the DLL, or the system will not be able to find it and hence complains about exactly this.
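Note that on Windows the loader searches the directories listed in PATH rather than LD_LIBRARY_PATH, so assuming a default MinGW install (where the runtime DLLs such as libgomp-1.dll live in C:\MinGW\bin; adjust the path for your setup) the equivalent would be:
set PATH=%PATH%;C:\MinGW\bin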
I have a program in C that uses both MPI and OpenMP. In order to compile it on Windows I downloaded and installed the gcc compiler provided by MinGW. With this compiler I can compile and execute C programs that use OpenMP by passing -fopenmp to gcc; such programs run without problems.
In order to compile and execute C programs with MPI I downloaded and installed MPICH2. With the additional gcc parameters provided by MPICH2 I can compile and run those programs without problems, too.
But when I want to compile and run a program that uses both OpenMP and MPI, I have a problem. I specified both -fopenmp and the MPI parameters for gcc, and the compiler didn't give me any errors. I then tried to launch my program with mpiexec, provided by MPICH2, but the program didn't work (it was a HelloWorld program and it didn't print anything to the output). Please help me to compile and launch such programs correctly.
Here is my HelloWorld program, that doesn't produce any output.
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int thnum, thtotal;
    int pid, np;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    printf("Sequental %d out of %d!\n", pid, np);
    MPI_Barrier(MPI_COMM_WORLD);

    #pragma omp parallel private(thnum, thtotal)
    {
        thnum = omp_get_thread_num();
        thtotal = omp_get_num_threads();
        printf("parallel: %d out of %d from proc %d out of %d\n", thnum, thtotal, pid, np);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}
You can use the mpicc compiler wrapper with the -fopenmp option (the MPICH mpicc wraps gcc, so gcc flags pass straight through). For example,
mpicc -fopenmp hello.c -o hello
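and then launch the result under the MPICH process manager, for example with two processes:
mpiexec -n 2 ./hello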
This might not be the root cause of your problem, but the MPI standard mandates that threaded programs use MPI_Init_thread() instead of MPI_Init(). In your case there are no MPI calls from within the parallel region so threading level of MPI_THREAD_FUNNELED should suffice. You should replace the call to MPI_Init() with:
int provided;

MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
if (provided < MPI_THREAD_FUNNELED)
{
    MPI_Abort(MPI_COMM_WORLD, 1);
    return 1;   // usually not reached
}
Although some MPI libraries might not advertise threading support (provided as returned is MPI_THREAD_SINGLE) they still work fine with hybrid OpenMP/MPI codes if one does not make MPI calls from within parallel regions.
The OpenMP portion of your program also requires #include <omp.h>, since omp_get_thread_num() and omp_get_num_threads() are otherwise undeclared. With the header included, the parallel region prints lines such as:
parallel: 0 out of 2 from proc 0 out of 0
parallel: 1 out of 2 from proc 0 out of 0
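Putting the two suggestions together (MPI_Init_thread() and the omp.h include), a minimal sketch of the corrected hybrid hello-world, not the asker's exact code:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>   /* declares omp_get_thread_num() and omp_get_num_threads() */

int main(int argc, char **argv)
{
    int provided, pid, np;

    /* request support for OpenMP threads alongside MPI */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &pid);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* each OpenMP thread of each MPI rank announces itself */
    #pragma omp parallel
    printf("parallel: %d out of %d from proc %d out of %d\n",
           omp_get_thread_num(), omp_get_num_threads(), pid, np);

    MPI_Finalize();
    return 0;
}

Compile it with mpicc -fopenmp hello.c -o hello and launch it with mpiexec as above.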