check if netcdf library supports netcdf4 - c

Does anyone know if there is a way to check at runtime or user compile time if a netcdf library build includes support for netcdf4?
There is a function nc_inq_libvers() but the version number itself is not useful because modern library versions can still be built with netcdf4 support turned off.
I simply want to be able to write code that checks whether netcdf4 is supported: if it is, create a netcdf4 file; if not, create a netcdf3 file.

Ideal approach:
Investigate the library's source code for #defines that you can check either at compile time or at run time (you might also look at the library's build process to spot whether such defines are made).
Then it's trivial to check for support; for instance, if the library defines NETCDF4_SUPPORT_ENABLED:
// compile-time check
#ifdef NETCDF4_SUPPORT_ENABLED
int main(void) {
// use netcdf4 format...
}
#else
int main(void) {
// use netcdf3 format...
}
#endif
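If the library turns out not to expose such a define, a run-time fallback is also possible: just try to create a netCDF-4 file and drop back to the classic format if that fails. A minimal sketch, assuming the standard netCDF C API (I'd expect a library built without netCDF-4 support to reject the NC_NETCDF4 flag, reportedly with NC_ENOTBUILT, but treating any failure as "no netCDF-4" is the safer check):
// run-time check: try netcdf4 first, fall back to netcdf3
#include <netcdf.h>
int create_file(const char *path)
{
    int ncid;
    int status = nc_create(path, NC_CLOBBER | NC_NETCDF4, &ncid);
    if (status != NC_NOERR) {
        // netCDF-4 unavailable (or creation failed for another reason):
        // fall back to a classic netCDF-3 file
        status = nc_create(path, NC_CLOBBER, &ncid);
    }
    return status == NC_NOERR ? ncid : -1;
}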
More complex approach: (in case the library doesn't make such #defines)
You need to change your program's configuration/build procedure. When you build your program, a script of your own can check whether the NETCDF library has been compiled with NETCDF4 support; if support is enabled, your configuration script can add, for instance, the parameter -DNETCDF4_SUPPORT_ENABLED to the command used to compile your program, so the situation can be handled as in the previous solution.
Writing a script that detects if the NETCDF library has NETCDF4 support enabled is up to you, and different solutions work for different libraries.
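For the netCDF library specifically, two hedged pointers your script might use: recent netCDF-C releases ship an nc-config helper whose --has-nc4 flag reports whether netCDF-4 support was built in, and they install a netcdf_meta.h header that defines NC_HAS_NC4 (I believe from version 4.4.0 onwards, but verify against your installations). If you can rely on either, the check becomes a one-liner, e.g.:
// compile-time check via netcdf_meta.h, if the installed netCDF provides it
#include <netcdf_meta.h>
#if NC_HAS_NC4
// create netcdf4 files
#else
// create netcdf3 files
#endif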
A really (really) dirty hack, to only use in case of nuclear attack, is to write a small C program like this (that your configuration script will try to compile in order to detect NETCDF4 support):
#include <assert.h>
// ...plus the library header that declares `foo` when NETCDF4 support is enabled
static_assert(sizeof( foo() ) != sizeof(int), "NETCDF4 support is not available");
// NOTE: replace `foo` with a library function that doesn't return `int`
// (nor any other type with the same size as `int`)
// this fails to build if `foo` is not declared anywhere: the implicit
// declaration gives it return type `int`, so the sizes match and the assertion fires
Replace foo with the name of a library function that won't be declared (and included in the library) if NETCDF4 support is disabled (also, the function's return type must have a size different from int's, e.g. it may return long long, char, void, etc.).
But I think a better solution may be found for the specific case of the NETCDF library (the previous hack is library-agnostic, but really dirty).

Related

Gathering test symbols into an array statically in C/C++

Short version of question
Is it possible to gather specific symbols in C into a single list/array in the executable, statically at compile time, without relying on CRT initialization? (I frequently target embedded platforms and have limited support for dynamic memory.)
EDIT: I'm 100% ok with this happening at link time and also ok with not having symbols cross library boundaries.
EDIT 2: I'm also OK with compiler specific answers if it's gcc or clang but would prefer cross platform if possible.
Longer version with more background
This has been a pain in my side for a while.
Right now I have a number of built-in self tests that I like to run in order.
I enforce the same calling convention on all functions and am manually gathering all the tests into an array statically.
// ThisLibrary_testlist.h
#define DECLARE_TEST(TESTNAME) void TESTNAME##_test(void * test_args)
DECLARE_TEST(test1);
DECLARE_TEST(test2);
DECLARE_TEST(test3);
// ThisLibrary_some_module.c
#include "ThisLibrary_testlist.h"
DECLARE_TEST(test1)
{
// ... do good stuff here
}
// ThisLibrary_testarray.c
#include "ThisLibrary_testlist.h"
typedef void (*testfunc_t) (void*);
#define LIST_TEST(TESTNAME) TESTNAME##_test
testfunc_t tests[] =
{
&LIST_TEST(test1),
&LIST_TEST(test2)
};
// now it's an array... you know what to do.
So far this has kept me alive but it's getting kind of ridiculous that I have to basically modify the code in 3 separate locations if I want to update a test.
Not to mention the absolute #ifdef nightmare that comes with conditionally compiled tests.
Is there a better way?
With a bit of scripting magic you could do the following: after compiling your source files (but before linking), search the object files for symbols that match your test-name pattern. See man nm for how to obtain symbol names from object files (on Unix, that is - no idea about Windows, sorry). Based on the list of symbols found, auto-generate the file ThisLibrary_testarray.c, putting in all the extern declarations and then the function pointer table. After generating this file, compile it and finally link everything.
This way you only have to add new test functions to the source files. No need to maintain the header file ThisLibrary_testlist.h, but you have to make sure the test functions have external linkage, follow the naming pattern - and be sure no other symbol uses the naming pattern :-)
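For illustration, the auto-generated ThisLibrary_testarray.c might come out looking like this (a sketch; it assumes the script found test1_test and test2_test in the object files):
// ThisLibrary_testarray.c -- auto-generated, do not edit
typedef void (*testfunc_t)(void *);
// one extern declaration per *_test symbol found by the script
extern void test1_test(void *test_args);
extern void test2_test(void *test_args);
testfunc_t tests[] =
{
    &test1_test,
    &test2_test,
};
const unsigned tests_count = sizeof(tests) / sizeof(tests[0]);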

Conditional compilation based on functionality in Linux kernel headers

Consider the case where I'm using some functionality from the Linux headers exported to user space, such as perf_event_open from <linux/perf_event.h>.
The functionality offered by this API has changed over time, as members have been added to the perf_event_attr, such as perf_event_attr.cap_user_time.
How can I write source that compiles and uses these new functionalities if they are available locally, but falls back gracefully if they aren't and doesn't use them?
In particular, how can I detect in the pre-processor whether this stuff is available?
I've used this perf_event_attr as an example, but my question is a general one because structure members, new structures, definitions and functions are added all the time.
Note that here I'm only considering the case where a process is compiled on the same system that it will run on: if you want to compile on one host and run on another you need a different set of tricks.
Use the macros from /usr/include/linux/version.h:
#include <linux/version.h>
int main() {
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,16)
// ^^^^^^ change for the proper version when `perf_event_attr.cap_user_time` was introduced
// use old interface
#else
// use new interface
// use perf_event_attr.cap_user_time
#endif
}
You might go into this with the following assumptions:
The features available in the header files correspond to those documented for that specific Linux version.
The kernel running during execution corresponds to the <linux/version.h> seen during compilation.
I suggest not relying on either of these assumptions.
The first assumption fails primarily due to backports, e.g. in enterprise Linux versions based on ancient kernels. If you care about supporting different kernel versions, you probably also care about those distributions.
Instead, I recommend using your build system's facilities for checking for struct members and include files, e.g. for CMake:
CHECK_STRUCT_HAS_MEMBER("struct perf_event_attr" cap_user_time linux/perf_event.h HAVE_PERF_CAP_USER_TIME)
CHECK_INCLUDE_FILES can also be useful.
The second assumption can fail for many reasons, even if the binary is not moved between systems, e.g. updating the kernel but not recompiling the binary, or simply booting another kernel. Specifically, perf_event_open fails with EINVAL if a reserved bit is set. This allows you to retry with an alternative implementation that does not use the requested feature.
In short: statically, check for the feature instead of the version; dynamically, try the new interface and retry with the legacy implementation if it fails.
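A rough sketch of that try-then-retry pattern (HAVE_NEW_FEATURE stands in for whatever macro your build-system check produces, like the CMake check above; the rest is the standard perf_event_open syscall interface):
#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long open_cycles_counter(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
#ifdef HAVE_NEW_FEATURE
    // set the newer perf_event_attr field(s) here
#endif
    long fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0 && errno == EINVAL) {
        // the running kernel rejected the configuration:
        // retry the legacy setup without the new feature
        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }
    return fd;
}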
Just in addition to other answers.
If you're aiming to support both cross-version and cross-distro code, you should also keep in mind that there are distros (CentOS/RHEL) which backport some changes from newer kernels into old ones. So you may encounter a situation in which LINUX_VERSION_CODE corresponds to some old kernel version, but there are still changes (new fields in data structures, new functions, etc.) from a recent kernel. In that case this macro is insufficient.
You can add something like (to avoid preprocessor errors in case it is not a Centos distro):
#ifndef RHEL_RELEASE_CODE
#define RHEL_RELEASE_CODE 0
#endif
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(x,y) 1
#endif
And use it with > or >= where you need:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,3,0) || RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)
...
for Centos/RHEL custom kernels support.
P.S. Of course it's necessary to examine the appropriate versions of CentOS/RHEL and understand when and what exactly changed in the code sections that affect you.

Compile-time test if function is optimized out

I'm writing a small operating system for microcontrollers in C (not C++, so I can't use templates). It makes heavy use of some gcc features, one of the most important being the removal of unused code. The OS doesn't load anything at runtime; the user's program and the OS source are compiled together to form a single binary.
This design allows gcc to include only the OS functions that the program actually uses. So if the program never uses i2c or USB, support for those won't be included in the binary.
The problem is when I want to include optional support for those features without introducing a dependency. For example, a debug console should provide functions to debug i2c if it's being used, but including the debug console shouldn't also pull in i2c if the program isn't using it.
The methods that come to mind to achieve this aren't ideal:
Have the user explicitly enable the modules they need (using #define), and use #if to only include support for them in the debug console if enabled. I don't like this method, because currently the user doesn't have to do this, and I'd prefer to keep it that way.
Have the modules register function pointers with the debug module at startup. This isn't ideal, because it adds some runtime overhead and means the debug code is split up over several files.
Do the same as above, but using weak symbols instead of pointers. But I'm still not sure how to actually accomplish this.
Do a compile-time test in the debug code, like:
if(i2cInit is used) {
debugShowi2cStatus();
}
The last method seems ideal, but is it possible?
This seems like an interesting problem. Here's an idea, although it's not perfect:
Two-pass compile.
What you can do is first, compile the program with a flag like FINDING_DEPENDENCIES=1. Surround all the dependency checks with #ifs for this (I'm assuming you're not as concerned about adding extra ifs there.)
Then, when the compile is done (without any optional features), use nm or similar to detect the usage of functions/features in the program (such as i2cInit), and format this information into a .h file.
#ifndef FINDING_DEPENDENCIES
#include "dependency_info.h"
#endif
Now the optional dependencies are known.
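For instance, the generated header and its use in the debug console might look roughly like this (dependency_info.h and USES_I2C are hypothetical names your script would choose):
// dependency_info.h -- auto-generated from the first-pass nm output
#define USES_I2C 1
// ...one line per optional feature whose symbols were found

// in the debug console source:
#ifndef FINDING_DEPENDENCIES
#include "dependency_info.h"
#endif

void debugShowStatus(void)
{
#if defined(USES_I2C) && !defined(FINDING_DEPENDENCIES)
    debugShowi2cStatus();
#endif
}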
This still doesn't seem like a perfect solution, but ultimately it's mostly a chicken-and-egg problem. When compiling, the compiler doesn't know which symbols are going to be gc'd out. You basically need to get this information from the link stage and feed it back to the compilation stage.
Theoretically, this might not increase build times much, especially if you used a temp file for the generated h, and then only replaced it if it was different. You'd need to use different object dirs, though.
Also this might help (pre-strip, of course):
How can I view function names and parameters contained in an ELF file?

Preferred method to use two names to call the same function in C

I know there are at least three popular methods to call the same function with multiple names. I haven't actually heard of someone using the fourth method for this purpose.
1). Could use #defines:
int my_function (int);
#define my_func my_function
OR
#define my_func(a) my_function(a)
2). Embedded function calls are another possibility:
int my_func(int a) {
return my_function(a);
}
3). Use a weak alias in the linker:
int my_func(int a) __attribute__((weak, alias("my_function")));
4). Function pointers:
int (* const my_func)(int) = my_function;
The reason I need multiple names is for a mathematical library that has multiple implementations of the same method.
For example, I need an efficient method to calculate the square root of a scalar floating point number. So I could just use math.h's sqrt(). This is not very efficient. So I write one or two other methods, such as one using Newton's Method. The problem is each technique is better on certain processors (in my case microcontrollers). So I want the compilation process to choose the best method.
I think this means it would be best to use either the macros or the weak alias since those techniques could easily be grouped in a few #ifdef statements in the header files. This simplifies maintenance (relatively). It is also possible to do using the function pointers, but it would have to be in the source file with extern declarations of the general functions in the header file.
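For instance, the macro route could be grouped per target roughly like this (the target macros and implementation names here are just placeholders):
// fast_sqrt.h -- pick an implementation per target with #ifdefs
float newton_sqrt(float x);   // Newton's method implementation
float table_sqrt(float x);    // lookup-table implementation
#if defined(TARGET_HAS_FPU)
#define fast_sqrt(x) newton_sqrt(x)
#elif defined(TARGET_TINY_FLASH)
#define fast_sqrt(x) table_sqrt(x)
#else
#include <math.h>
#define fast_sqrt(x) sqrtf(x)
#endif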
Which do you think is the better method?
Edit:
From the proposed solutions, there appears to be two important questions that I did not address.
Q. Are the users working primarily in C/C++?
A. All known development will be in C/C++ or assembly. I am designing this library for my own personal use, mostly for work on bare metal projects. There will be either no or minimal operating system features. There is a remote possibility of using this in full blown operating systems, which would require consideration of language bindings. Since this is for personal growth, it would be advantageous to learn library development on popular embedded operating systems.
Q. Are the users going to need/want an exposed library?
A. So far, yes. Since it is just me, I want to make direct modifications for each processor I use after testing. This is where the test suite would be useful. So an exposed library would help somewhat. Additionally, each "optimal implementation" for a particular function may have failure conditions. At this point, it has to be decided who fixes the problem: the user or the library designer. A user would need an exposed library to work around failure conditions. I am both the "user" and "library designer". It would almost be better to allow for both. Then non-realtime applications could let the library solve all of the stability problems as they come up, but real-time applications would be empowered to weigh algorithm speed/space against algorithm stability.
Another alternative would be to move the functionality into a separately compiled library optimised for each different architecture and then just link to this library during compilation. This would allow the project code to remain unchanged.
Depending on the intended audience for your library, I suggest you chose between 2 alternatives:
If the consumer of your library is guaranteed to be Cish, use #define sqrt newton_sqrt for optimal readability
If some consumers of your library are not of the C variety (think bindings to Delphi, .NET, whatever), try to avoid consumer-visible #defines. Macros are a major PITA for bindings, as they are not visible in the binary - embedded function calls are the most binding-friendly.
What you can do is this. In header file (.h):
int function(void);
In the source file (.c):
static int function_implementation_a(void);
static int function_implementation_b(void);
static int function_implementation_c(void);
#if ARCH == ARCH_A
int function(void)
{
return function_implementation_a();
}
#elif ARCH == ARCH_B
int function(void)
{
return function_implementation_b();
}
#else
int function(void)
{
return function_implementation_c();
}
#endif // ARCH
Static functions called only once are often inlined by the compiler. This is the case for example with gcc: -finline-functions-called-once is enabled by default at -O1 and above. Static functions that are never called are also usually not included in the final binary.
Note that I don't put the #if and #else in a single function body because I find the code more readable when #if directives are outside the functions body.
Note this way works better with embedded code where libraries are usually distributed in their source form.
I usually like to solve this with a single declaration in a header file with a different source file for each architecture/processor-type. Then I just have the build system (usually GNU make) choose the right source file.
I usually split the source tree into separate directories for common code and for target-specific code. For instance, my current project has a toplevel directory Project1 and underneath it are include, common, arm, and host directories. For arm and host, the Makefile looks for source in the proper directory based on the target.
I think this makes it easier to navigate the code since I don't have to look up weak symbols or preprocessor definitions to see what functions are actually getting called. It also avoids the ugliness of function wrappers and the potential performance hit of function pointers.
You might create a test suite for all algorithms and run it on the target to determine which perform best, then have the test suite automatically generate the necessary linker aliases (method 3).
Beyond that, a simple #define (method 1) is probably the simplest, and will not add any overhead. It does however expose to the library user that there might be multiple implementations, which may be undesirable.
Personally, since only one implementation of each function is likely to be optimal on any specific target, I'd use the test suite to determine the required version for each target and build a separate library per target containing only the selected implementation of each function, under the correct function name directly.

Macro definitions for headers, where to put them?

When defining macros that headers rely on, such as _FILE_OFFSET_BITS, FUSE_USE_VERSION, _GNU_SOURCE among others, where is the best place to put them?
Some possibilities I've considered include
At the top of any source files that rely on definitions exposed by headers included in that file
Immediately before the include for the relevant header(s)
Define at the CPPFLAGS level via the compiler (such as -D_FILE_OFFSET_BITS=64), for the:
Entire source repo
The whole project
Just the sources that require it
In project headers, which should also include those relevant headers to which the macros apply
Some other place I haven't thought of, but is infinitely superior
A note: how well an approach fits with make, autotools, and other build systems is a factor in my decision.
If the macros affect system headers, they probably ought to go somewhere where they affect every source file that includes those system headers (which includes those that include them indirectly). The most logical place would therefore be on the command line, assuming your build system allows you to set e.g. CPPFLAGS to affect the compilation of every file.
If you use precompiled headers, and have a precompiled header that must therefore be included first in every source file (e.g. stdafx.h for MSVC projects) then you could put them in there too.
For macros that affect self-contained libraries (whether third-party or written by you), I would create a wrapper header that defines the macros and then includes the library header. All uses of the library from your project should then include your wrapper header rather than including the library header directly. This avoids defining macros unnecessarily, and makes it clear that they relate to that library. If there are dependencies between libraries then you might want to make the macros global (in the build system or precompiled header) just to be on the safe side.
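For example, a wrapper for a FUSE-based project might look something like this (the API version number is only illustrative; use whatever level your code targets):
// myproject_fuse.h -- wrapper header: define the macros the library relies on,
// then include the library header itself
#ifndef MYPROJECT_FUSE_H
#define MYPROJECT_FUSE_H
#define FUSE_USE_VERSION 26   // must be defined before <fuse.h>
#include <fuse.h>
#endif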
Well, it depends.
Most, I'd define via the command line - in a Makefile or whatever build system you use.
As for _FILE_OFFSET_BITS I really wouldn't define it explicitly, but rather use getconf LFS_CFLAGS and getconf LFS_LDFLAGS.
I would always put them on the command line via CPPFLAGS for the whole project. If you put them any other place, there's a danger that you might forget to copy them into a new source file or include a system header before including the project header that defines them, and this could lead to extremely nasty bugs (like one file declaring a legacy 32-bit struct stat and passing its address to a function in another file which expects a 64-bit struct stat).
BTW, it's really ridiculous that _FILE_OFFSET_BITS=64 still isn't the default on glibc.
Most projects that I've seen use them did it via -D command line options. They are there because that eases building the source with different compilers and system headers. If you were to build with a system compiler for another system that didn't need them or needed a different set of them then a configure script can easily change the command line arguments that a make file passes to the compiler.
It's probably best to do it for the entire program because some of the flags affect which version of a function gets brought in or the size/layout of a struct, and mixing those up could cause crazy things if you aren't careful.
They certainly are annoying to keep up with.
For _GNU_SOURCE and the autotools in particular, you could use AC_USE_SYSTEM_EXTENSIONS (citing liberally from the autoconf manual here):
-- Macro: AC_USE_SYSTEM_EXTENSIONS
This macro was introduced in Autoconf 2.60. If possible, enable
extensions to C or Posix on hosts that normally disable the
extensions, typically due to standards-conformance namespace
issues. This should be called before any macros that run the C
compiler. The following preprocessor macros are defined where
appropriate:
_GNU_SOURCE
Enable extensions on GNU/Linux.
__EXTENSIONS__
Enable general extensions on Solaris.
_POSIX_PTHREAD_SEMANTICS
Enable threading extensions on Solaris.
_TANDEM_SOURCE
Enable extensions for the HP NonStop platform.
_ALL_SOURCE
Enable extensions for AIX 3, and for Interix.
_POSIX_SOURCE
Enable Posix functions for Minix.
_POSIX_1_SOURCE
Enable additional Posix functions for Minix.
_MINIX
Identify Minix platform. This particular preprocessor macro
is obsolescent, and may be removed in a future release of
Autoconf.
For _FILE_OFFSET_BITS, you need to call AC_SYS_LARGEFILE and AC_FUNC_FSEEKO:
-- Macro: AC_SYS_LARGEFILE
Arrange for 64-bit file offsets, known as large-file support. On some hosts, one must use special compiler options to build programs that can access large files. Append any such options to the output variable CC. Define _FILE_OFFSET_BITS and _LARGE_FILES if necessary.
Large-file support can be disabled by configuring with the --disable-largefile option.
If you use this macro, check that your program works even when off_t is wider than long int, since this is common when large-file support is enabled. For example, it is not correct to print an arbitrary off_t value X with printf("%ld", (long int) X).
The LFS introduced the fseeko and ftello functions to replace their C counterparts fseek and ftell that do not use off_t. Take care to use AC_FUNC_FSEEKO to make their prototypes available when using them and large-file support is enabled.
If you are using autoheader to generate a config.h, you could define the other macros you care about using AC_DEFINE or AC_DEFINE_UNQUOTED:
AC_DEFINE([FUSE_VERSION], [28], [FUSE Version.])
The definition will then get passed to the command line or placed in config.h, if you're using autoheader. The real benefit of AC_DEFINE is that it easily allows preprocessor definitions as a result of configure checks and separates system-specific cruft from the important details.
When writing the .c file, #include "config.h" first, then the interface header (e.g., foo.h for foo.c - this ensures that the header has no missing dependencies), then all other headers.
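In other words, roughly (reusing the foo.c/foo.h naming from that sentence):
// foo.c
#include "config.h"   // results of the configure checks come first
#include "foo.h"      // this file's own interface next, so missing dependencies show up
#include <stdio.h>    // then all other headers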
I usually put them as close as practicable to the things that need them, whilst ensuring you don't set them incorrectly.
Related pieces of information should be kept close to make it easier to identify. A classic example is the ability for C to now allow variable definitions anywhere in the code rather than just at the top of a function:
void something (void) {
// 600 lines of code here
int x = fn(y);
// more code here
}
is a lot better than:
void something (void) {
int x;
// 600 lines of code here
x = fn(y);
// more code here
}
since you don't have to go searching for the type of x in the latter case.
By way of example, if you need to compile a single source file multiple times with different values, you have to do it with the compiler:
gcc -Dmydefine=7 -o binary7 source.c
gcc -Dmydefine=9 -o binary9 source.c
However, if every compilation of that file will use 7, it can be moved closer to the place where it's used:
source.c:
#include <stdio.h>
#define mydefine 7
#include "header_that_uses_mydefine.h"
#define mydefine 7
#include "another_header_that_uses_mydefine.h"
Note that I've done it twice so that it's more localised. This isn't a problem since, if you change only one, the compiler will tell you about it, but it ensures that you know those defines are set for the specific headers.
And, if you're certain that you will never include (for example) bitio.h without first setting BITCOUNT to 8, you can even go so far as to create a bitio8.h file containing nothing but:
#define BITCOUNT 8
#include "bitio.h"
and then just include bitio8.h in your source files.
Global, project-wide constants that are target specific are best put in CCFLAGS in your makefile. Constants you use all over the place can go in appropriate header files which are included by any file that uses them.
For example,
// bool.h - a boolean type for C
#ifndef BOOL_H
#define BOOL_H
typedef int bool_t;
#define TRUE 1
#define FALSE 0
#endif
Then, in some other header,
#include "bool.h"
// blah
Using header files is what I recommend because it allows you to have a code base built by make files and other build systems as well as IDE projects such as Visual Studio. This gives you a single point of definition that can be accompanied by comments (I'm a fan of doxygen which allows you to generate macro documentation).
The other benefit with header files is that you can easily write unit tests to verify that only valid combinations of macros are defined.
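One simple way to enforce valid combinations is a compile-time check in the header itself (a sketch; FEATURE_A/FEATURE_B/FEATURE_C are placeholder names):
// in the header that owns these macros
#if defined(FEATURE_A) && defined(FEATURE_B)
#error "FEATURE_A and FEATURE_B are mutually exclusive"
#endif
#if defined(FEATURE_C) && !defined(FEATURE_A)
#error "FEATURE_C requires FEATURE_A"
#endif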
