I am currently porting an autotools project to CMake. The project uses the AC_HEADER_TIME macro to check whether it can include both time.h and sys/time.h.
How can this be done with CMake?
It isn't the time test specifically, but for an example of how to do that sort of thing you might check out this example from BRL-CAD:
http://sourceforge.net/p/brlcad/code/HEAD/tree/brlcad/trunk/misc/CMake/test_srcs/sys_wait_test.c
The code is then used to do a test in CMake with:
CHECK_C_SOURCE_RUNS(${CMAKE_SOURCE_DIR}/misc/CMake/test_srcs/sys_wait_test.c SYS_WAIT_TEST_RUNS)
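Note that the call above appears to rely on BRL-CAD's project-local wrapper, which accepts a file path; the stock CheckCSourceRuns module that ships with CMake takes the source text itself as the first argument. With the stock module you would read the file into a variable first, roughly like this (same paths as above, for illustration):
include(CheckCSourceRuns)
file(READ "${CMAKE_SOURCE_DIR}/misc/CMake/test_srcs/sys_wait_test.c" SYS_WAIT_TEST_SRC)
CHECK_C_SOURCE_RUNS("${SYS_WAIT_TEST_SRC}" SYS_WAIT_TEST_RUNS)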
You can also use CHECK_C_SOURCE_COMPILES if you don't want to try to run the code (we usually do, but that sort of thing is situation dependent). See http://www.cmake.org/cmake/help/v3.0/module/CheckCSourceRuns.html and http://www.cmake.org/cmake/help/v3.0/module/CheckCSourceCompiles.html for more information about the variables that control how the compilation is done. Generally speaking, if you need to set those variables, you'll want to cache their current values before specifying the test values and then restore the originals after the test, i.e.
set(CMAKE_REQUIRED_FLAGS_CACHE ${CMAKE_REQUIRED_FLAGS})  # save the current value
set(CMAKE_REQUIRED_FLAGS "-foo_flag")                    # flags needed just for this test
CHECK_C_SOURCE_RUNS(${CMAKE_SOURCE_DIR}/CMake/time_test.c TIME_TEST_RUNS)
set(CMAKE_REQUIRED_FLAGS ${CMAKE_REQUIRED_FLAGS_CACHE})  # restore the original value
I've had a few puzzling behaviors in tests over the years that resulted from leftovers from one test interfering with another.
You can use try_compile() with a sample source file that #includes both headers, and check the result.
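For the specific AC_HEADER_TIME-style check, a minimal sketch might look like the following (the test-file path and the TIME_WITH_SYS_TIME variable name are illustrative, echoing the traditional autoconf macro):
# time_test.c would contain something like:
#   #include <sys/time.h>
#   #include <time.h>
#   int main(void) { return 0; }
try_compile(TIME_WITH_SYS_TIME
    ${CMAKE_BINARY_DIR}
    ${CMAKE_SOURCE_DIR}/CMake/time_test.c)
if(TIME_WITH_SYS_TIME)
    add_definitions(-DTIME_WITH_SYS_TIME=1)
endif()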
I have two boards, each with the same MCU as the target. The difference is that the peripherals are not 100% the same (let's say they overlap by maybe 90%). So far my colleague has two macros, and he comments them in or out so that #ifdef/#endif can be used to tell the preprocessor which includes to use and which to ignore.
I'm thinking of better ways to do this. I don't like the idea of people having to search for the correct line to comment out each time they want the right build for their hardware; this should be automated and/or better documented, IMHO.
The best I came up with is multiple "build sets" that would then be called "hardware-1" and "hardware-2" or something (more descriptive, of course). Each build set would then use different "-I" options to define the two macros my colleague already used.
For cmake I found this thread:
Define preprocessor macro through CMake?
Is this the way to go, or are there more elegant ways? How would you solve this situation? The question also touches on "What are the best practices to tackle this?"
Thanks for your input
J
The best I came up with is multiple "build sets" that would then be called "hardware-1" and "hardware-2" or something (more descriptive, of course). Each build set would then use different "-I" options to define the two macros my colleague already used.
You mean -D, not -I, but yes, defining the macros via the compiler command line is one of the traditional approaches to this. How you might achieve that depends somewhat on your build system, but with a hand-rolled makefile, it is common to define make variables for target-specific flags, and to put those, appropriately commented, at the top of the top-level makefile. Sometimes these are intended to be modified at build time, but sometimes there are just different makefiles, or else the set of flags to use is controlled by the target requested on the make command line.
For cmake I found [...]. Is this the way to go or are there better ways that are more elegant?
If you are using cmake already then yes, cmake's facilities for adding macro definitions to the compiler command line would be a great approach. If you are not using cmake then no, switching to a cmake-based build system would be way overkill for just solving the problem described. For systems where CMake will generate makefiles, it is basically a wrapper for what I already described.
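As a rough sketch of what that could look like in CMake (the HARDWARE_1/HARDWARE_2 macro names and the variant names are placeholders, not your colleague's actual macros):
set(HARDWARE_VARIANT "hardware-1" CACHE STRING "Which board to build for: hardware-1 or hardware-2")
if(HARDWARE_VARIANT STREQUAL "hardware-1")
    add_definitions(-DHARDWARE_1)
elseif(HARDWARE_VARIANT STREQUAL "hardware-2")
    add_definitions(-DHARDWARE_2)
else()
    message(FATAL_ERROR "Unknown HARDWARE_VARIANT: ${HARDWARE_VARIANT}")
endif()
Each developer then selects the board on the configure line, e.g. cmake -DHARDWARE_VARIANT=hardware-2 .., instead of editing source files.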
I happen to be a fan of the Autotools. If you have an Autotools-based build system then there are different ways to set up this sort of thing, but if you don't, then setting up autotooling for just this purpose would be overkill. It is perhaps worth mentioning, however, that a standard Autotools approach would work by putting the definitions of the adjustable control macros in a header file, and having all the source files include that header. The Autotools would generate that header programmatically, but that's not essential -- you could set up such a header manually and update it as needed, and that would still solve the problem of knowing where to look for the macro definitions.
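For illustration only, such a hand-maintained header might be as small as this (all names hypothetical):
/* hardware_config.h -- hand-maintained board selection (hypothetical names) */
#ifndef HARDWARE_CONFIG_H
#define HARDWARE_CONFIG_H

/* Enable exactly one of these for the board being built. */
#define HARDWARE_1 1
/* #define HARDWARE_2 1 */

#endif /* HARDWARE_CONFIG_H */
Every source file then includes hardware_config.h, so there is one documented place where the selection lives.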
Normally one can specify preprocessor defines as part of the compilation command.
gcc -Wall -Darduino embedded.c
So assuming Linux/Make you could use
make clean arduino
or
make clean atmega2560
and simply have two targets with those names in the makefile.
Each one would have -Darduino or -Datmega2560 as part of its compile command.
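A minimal makefile along those lines might look like this (file and output names are just examples; remember that recipe lines must be indented with a tab):
CC = gcc
CFLAGS = -Wall

arduino:
	$(CC) $(CFLAGS) -Darduino embedded.c -o embedded_arduino

atmega2560:
	$(CC) $(CFLAGS) -Datmega2560 embedded.c -o embedded_atmega2560

clean:
	rm -f embedded_arduino embedded_atmega2560
So make clean arduino rebuilds the arduino variant from scratch.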
If you are using some sort of IDE like MSVC, on the project properties page, under C/C++ you would find a Preprocessor area, and you can add one or the other as part of the preprocessor defines.
Preprocessor Definitions arduino;_DEBUG;_CONSOLE;%(PreprocessorDefinitions)
Specifically, my issue is that I have CUDA code that needs <curand_kernel.h> to run. This isn't included by default in NVRTC. Presumably then when creating the program context (i.e. the call to nvrtcCreateProgram), I have to send in the name of the file (curand_kernel.h) and also the source code of curand_kernel.h? I feel like I shouldn't have to do that.
It's hard to tell; I haven't managed to find an example from NVIDIA of someone needing standard CUDA files like this as a source, so I really don't understand what the syntax is. Some issues: curand_kernel.h also has includes... Do I have to do the same for each of these? I am not even sure the NVRTC compiler will run correctly on curand_kernel.h, because there are some language features it doesn't support, aren't there?
Next: if I've sent in the source code of a header file to nvrtcCreateProgram, do I still have to #include it in the code to be executed, and will it cause an error if I do so?
A link to example code that does this or something like it would be appreciated much more than a straightforward answer; I really haven't managed to find any.
You have to send the "filename" and the source of each header separately.
When the preprocessor does its thing, it'll use any #include filenames as a key to find the source for the header, based on the collection that you provide.
I suspect that, in this case, the compiler (driver) doesn't have file system access, so you have to give it the source in much the same way that you would for shader includes in OpenGL.
So:
Pass your header's name (and its source) when calling nvrtcCreateProgram. The compiler will, internally, build the equivalent of a std::map<string,string> containing the source of each header, indexed by the given name.
In your kernel source, use #include "foo.cuh" as usual.
The compiler will use foo.cuh as a key into its internal map (created when you called nvrtcCreateProgram) and retrieve the header source from that collection.
Compilation proceeds as normal.
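A rough sketch of what that looks like in host code, using a hypothetical foo.cuh header (not the real curand_kernel.h), based on the documented nvrtcCreateProgram signature:
#include <nvrtc.h>
#include <stdio.h>

int main(void)
{
    /* Kernel source that includes a header by name, exactly as usual. */
    const char *kernel_src =
        "#include \"foo.cuh\"\n"
        "__global__ void kern(float *out) { out[threadIdx.x] = foo(threadIdx.x); }\n";

    /* Source text of the header, supplied by us instead of the file system. */
    const char *foo_cuh_src =
        "__device__ float foo(int i) { return i * 2.0f; }\n";

    const char *header_sources[] = { foo_cuh_src };  /* one entry per header */
    const char *header_names[]   = { "foo.cuh" };    /* names as they appear in #include */

    nvrtcProgram prog;
    nvrtcResult res = nvrtcCreateProgram(&prog, kernel_src, "example.cu",
                                         1, header_sources, header_names);
    if (res != NVRTC_SUCCESS) {
        fprintf(stderr, "nvrtcCreateProgram failed: %s\n", nvrtcGetErrorString(res));
        return 1;
    }
    res = nvrtcCompileProgram(prog, 0, NULL);   /* no extra options in this sketch */
    nvrtcDestroyProgram(&prog);
    return res == NVRTC_SUCCESS ? 0 : 1;
}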
One of the reasons that nvrtc provides only a "subset" of features is that the compiler plays in a somewhat sandboxed environment, without necessarily having all of the supporting tools and utilities lying around that you have with offline compilation. So, you have to manually handle a lot of the stuff that the normal nvcc + (gcc | MSVC | clang) combination provides.
A possible, but non-ideal, solution would be to preprocess the file that you need in your IDE, save the result, and then #include that. However, I bet there is a better way to do it. If you just want curand, consider diving into the library and extracting the part you need (blech) or using another GPU-friendly rand implementation. On older CUDA versions, I just generated a big array of random floats on the host, uploaded it to the GPU, and sampled it in the kernels.
This related link may be helpful.
You do not need to load curand_kernel.h yourself and add it to the include "aliases" mechanism.
Instead, you can simply add the CUDA include directory to your (set of) include paths, e.g. by adding --include-path=/usr/local/cuda/include to your NVRTC compiler options.
(I do this in my GPU-kernel-runner test harness, by default, to be on the safe side.)
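For example (a sketch; the path assumes a default Linux CUDA install, and prog is a program handle created earlier with nvrtcCreateProgram):
/* Only the option string matters here; adjust the path for your installation. */
const char *opts[] = { "--include-path=/usr/local/cuda/include" };
nvrtcResult res = nvrtcCompileProgram(prog, 1, opts);
if (res != NVRTC_SUCCESS)
    fprintf(stderr, "compile failed: %s\n", nvrtcGetErrorString(res));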
Have you ever heard of automatic C code generators?
I have to do a kind of strange API functionality survey that includes at least one attempted execution of every function. It may lead to crashes or segmentation faults; no matter. I just need to register every function call.
So I got a long list (several hundred) of functions from the sources using
ctags -x --c-kinds=f *.c
Can I use any tool to generate code that calls every one of them? Thanks a lot.
UPD: thanks for all your answers.
You could also consider customizing the GCC compiler, e.g. with a MELT extension (which would, for example, generate the testing code during a customized compilation). Then you might even define your own #pragma or __attribute__ to parameterize these functions (enabling their auto-testing, giving default arguments for testing, etc.).
However, I'm not sure it is the right approach for unit testing. There are many unit testing frameworks (but I am not very familiar with them).
Maybe something like autoconf could help you with that, as described here. In particular, check out AC_CHECK_FUNCS. Autoconf creates small programs to test the existence of the registered functions.
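From memory, the probe autoconf generates for each listed function is roughly this shape (shown here for a hypothetical function foo); it deliberately uses a dummy prototype, and the only thing that matters is whether the program links:
/* Rough shape of autoconf's AC_CHECK_FUNCS probe for a function "foo". */
char foo ();

int
main (void)
{
    return foo ();
}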
Recently, I needed to add unit tests to a legacy program.
But it contains lots of macros, like
#ifdef CONFIG_XXX
do xxx
#endif
#ifdef CONFIG_YYY
do yyy
#endif
Currently, the generic program paths are covered by unit tests. So I want to add tests to cover the parts inside the macros (the different program paths).
It seems that I need to compile and run my program with a certain set of macros each time, and designing the combinations of macros so that they cover the program paths while keeping the number of compilations down is really not easy.
So I plan to move all the hardware-related code to an arch folder. The macros have now moved from the C files into the makefile, but I still need to compile with a certain set of macros each time to get the unit tests to work.
Does anyone have experience with this problem?
Thanks for your comments.
I think you can just use gcc -D to generate the different versions of the binary program.
Compile and run them with a script. :)
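A minimal sketch of such a script, using the CONFIG_XXX/CONFIG_YYY macros from the question (the source and test file names are made up):
#!/bin/sh
# Build and run one unit-test binary per macro combination.
for defs in "" "-DCONFIG_XXX" "-DCONFIG_YYY" "-DCONFIG_XXX -DCONFIG_YYY"; do
    gcc -Wall $defs legacy.c tests.c -o ut_bin && ./ut_bin || exit 1
done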
A previous programmer preferred to generate large lookup tables (arrays of constants) to save runtime CPU cycles rather than calculating values on the fly. He did this by creating custom Visual C++ projects, one for each individual lookup table... which generate array files that are then #included into a completely separate ANSI-C microcontroller (Renesas) project.
This approach is fine for his original calculation assumptions, but has become tedious when the input parameters need to be modified, requiring me to recompile all of the Visual C++ projects and re-import those files into the ANSI-C project. What I would like to do is port the Visual C++ source directly into the ANSI-C microcontroller project and let the compiler create the array tables.
So, my question is: Can ANSI-C compilers compute and generate lookup arrays during compile time? And if so, how should I go about it?
Thanks in advance for your help!
Is there some reason you can't import his code generation architecture to your build system?
I mean, in make I might consider something like:
TABLES := $(wildcard table_*)
TABLE_INCS := $(foreach dir,$(TABLES),$(dir)/$(dir).h)
include $(foreach dir,$(TABLES),$(dir)/makefile.inc)
$(MAIN): $(SRS) $(TABLE_INCS)
where each table_* directory contains a complete code-generation project whose sole purpose is to build table_n/table_n.h. Also in each table directory is a makefile fragment named makefile.inc, which provides the dependency lines for the generated include files, and now I've removed the recursion.
Done right (and this implementation isn't finished, in part because the point is clearer this way but mostly because I am lazy), you could edit table_3/table_3.input, type make in the main directory and get table_3/table_3.h rebuilt and the program incrementally recompiled.
I guess that depends on the types of values you need to look up. If computing each value demands more than, e.g., constant-expression evaluation can deliver, you're going to have problems.
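To make the limitation concrete, here is the kind of table an ANSI-C compiler can fill in at compile time, because every initializer is a constant expression (names are illustrative):
/* Each entry is a constant expression, so the compiler evaluates it itself. */
#define ENTRY(i) ((i) * (i) + 1)

static const int table[8] = {
    ENTRY(0), ENTRY(1), ENTRY(2), ENTRY(3),
    ENTRY(4), ENTRY(5), ENTRY(6), ENTRY(7)
};
Anything that needs loops, library calls such as sin(), or thousands of entries cannot be written this way in ANSI C, which is why a separate generator program (or heavy preprocessor metaprogramming) is the usual answer.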
Check out the Boost preprocessor library. It's written for C++ but as far as I'm aware, the two preprocessors are pretty much identical, and it can do this sort of thing.