Consider the case where I'm using some functionality from the Linux headers exported to user space, such as perf_event_open from <linux/perf_event.h>.
The functionality offered by this API has changed over time, as members have been added to the perf_event_attr, such as perf_event_attr.cap_user_time.
How can I write source that compiles against and uses these new functionalities when they are available locally, but falls back gracefully and doesn't use them when they aren't?
In particular, how can I detect in the pre-processor whether this stuff is available?
I've used this perf_event_attr as an example, but my question is a general one because structure members, new structures, definitions and functions are added all the time.
Note that here I'm only considering the case where a process is compiled on the same system that it will run on: if you want to compile on one host and run on another you need a different set of tricks.
Use the macros from /usr/include/linux/version.h:
#include <linux/version.h>
int main() {
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,16)
// ^^^^^^ change for the proper version when `perf_event_attr.cap_user_time` was introduced
// use old interface
#else
// use new interface
// use perf_event_attr.cap_user_time
#endif
}
You might go into this with the following assumptions:
The features available in the header files correspond to those documented for the specific Linux version.
The kernel running during execution corresponds to <linux/version.h> during compilation
Ideally, I suggest not relying on either of these assumptions.
The first assumption fails primarily because of backports, e.g. in enterprise Linux distributions based on ancient kernels. If you care about supporting different kernel versions, you probably also care about those distributions.
Instead, I recommend using your build system's checks for struct members and include files, e.g. in CMake:
CHECK_STRUCT_HAS_MEMBER("struct perf_event_attr" cap_user_time linux/perf_event.h HAVE_PERF_CAP_USER_TIME)
CHECK_INCLUDE_FILES can also be useful.
The second assumption can fail for many reasons even if the binary is never moved between systems, e.g. updating the kernel without recompiling the binary, or simply booting another kernel. So check at runtime as well: specifically, perf_event_open fails with EINVAL if a reserved bit is set, which lets you retry with an alternative implementation that does not use the requested feature.
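Putting the two checks together, here is a minimal sketch (assuming the build system defines a hypothetical HAVE_PERF_CAP_USER_TIME macro from the check above; the member name simply follows the question's example):
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

/* HAVE_PERF_CAP_USER_TIME is a hypothetical macro produced by the
   build-system check above; the member name follows the question's example. */
static int open_cycles_counter(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;
#ifdef HAVE_PERF_CAP_USER_TIME
    attr.cap_user_time = 1;                 /* only compiled in if the header has it */
#endif

    int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
#ifdef HAVE_PERF_CAP_USER_TIME
    if (fd < 0 && errno == EINVAL) {
        /* Running kernel is older than the headers: clear the new bit and retry. */
        attr.cap_user_time = 0;
        fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }
#endif
    return fd;
}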
In short, statically check for the feature instead of the version. Dynamically, try and retry the legacy implementation if it failed.
Just in addition to other answers.
If you're aiming to support both cross-version and cross-distro code, also keep in mind that there are distros (CentOS/RHEL) which backport recent changes from new kernels to old ones. So you may encounter a situation where LINUX_VERSION_CODE equals some old kernel version, yet the headers contain changes (new struct fields, new functions, etc.) from a recent kernel. In that case this macro alone is insufficient.
You can add something like this (to avoid preprocessor errors in case it is not a CentOS distro):
#ifndef RHEL_RELEASE_CODE
#define RHEL_RELEASE_CODE 0
#endif
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(x,y) 1
#endif
And use it with > or >= where you need:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,3,0) || RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)
...
for CentOS/RHEL custom kernel support.
P.S. Of course, you still need to examine the appropriate CentOS/RHEL versions and understand when and what exactly changed in the code sections that affect you.
The C language has a set of outright reserved keywords. However, there is a much larger set of identifiers that are reserved or semi-reserved, whose use is strongly discouraged because they are used by the standard library or various system headers, or may be so used in the future; a comprehensive though not exhaustive list is here: https://www.gnu.org/software/libc/manual/html_node/Reserved-Names.html
The set of such names is much too large to be feasible to enumerate.
Looking at it from the perspective of using C as a compilation target, I'm looking for the reverse: a set of names I can generate, that are guaranteed to be not reserved, to be free for application use.
Clearly this requirement could be met, as far as it goes, by prepending a UUID to every name. But there is an additional requirement that the generated code be as amenable as possible to eyeball debugging, so the naming scheme should be as simple as possible; e.g. if all names are to share a common prefix, that prefix should be as short as possible.
What's the simplest way to characterize a set of names that are guaranteed, or failing that highly likely, to be free for application use? For example, would it be safe to use arbitrary names prefixed with x_ or suchlike?
Most C libraries provide feature-selection macros, which let you specify which version of the interface you are using. If you define _POSIX_C_SOURCE and _XOPEN_SOURCE before including any system headers on Linux or UNIX, your system libraries will not declare any identifiers that future versions of UNIX might define. (In theory, setting either one by itself should suffice, but it's good defensive coding to set both, as this prevents one or the other from being set inconsistently by someone else.) On Windows, you would define NTDDI_VERSION and _WIN32_WINNT.
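For example, a minimal sketch of the POSIX/X/Open side (the exact version values are whatever you target):
/* Pin the interface version before the first system header is included. */
#define _POSIX_C_SOURCE 200809L   /* POSIX.1-2008 */
#define _XOPEN_SOURCE 700         /* matching XSI extensions */

#include <unistd.h>
#include <stdio.h>

int main(void)
{
    printf("_POSIX_VERSION = %ld\n", (long)_POSIX_VERSION);
    return 0;
}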
The C Standard Library only provides feature-test macros, not macros that let you choose an interface, but compilers support flags such as -std=c17, and you should set this in your build scripts. This should keep out any new keywords or identifiers that get added to the language in the future.
If you depend on a specific version of a library, and are worried that changes to its header files could break your code, you can put a copy of the headers (and to be absolutely certain, the library itself) in your project tree. If the library is open-source, making a note of which version you used should let anyone else download the right version. Otherwise, you’re at the mercy of its maintainers.
Do not define _BSD_SOURCE or _GNU_SOURCE if this is a concern for you! Linux headers without glibc bindings, such as <linux/module.h>, generally don't have this kind of versioning.
Some languages have much more robust solutions for this, such as cabal and stack for Haskell or cargo for Rust.
Scenario 1: I'm trying to install the IBM GPFS driver onto RHEL6 with a vanilla 3.10 kernel (actually, kernel-lt from ELRepo). The GPL part won't compile due to:
Too many/too few arguments passed to function
struct x has no such member
type mismatch
Their code compiles fine on stock RHEL/SUSE kernels older or newer than mine, but fails here.
Scenario 2:
I'm trying to compile the open-source softiwarp driver on RHEL6 with the stock kernel, but it fails with the same errors as in Scenario 1. However, it compiles fine on a vanilla kernel.
This all is because their feature-check headers look like this:
#if LINUX_KERNEL_VERSION >= 2061300
#define FOO <newer variant>
#else
#define FOO <older variant>
#endif
But RHEL and SUSE have many backports and bugfixes, so their 3.10.101 is not the same as vanilla 3.10.101.
How do I write code that checks features, not version numbers? In a user-space program I would use the autoconf macros AC_CHECK_MEMBER/AC_CHECK_FUNC.
The standard preprocessor's capabilities are much less than some people seem to think; it has no ability to do what you want directly. Autoconf provides no magic in this regard, either: it performs tests at configuration time, often simply by checking whether the compiler accepts a given piece of code, and it communicates the results to the compiler largely by causing preprocessor macros to be defined. (You are then responsible for using those macros in conditional tests much like the one in your example.)
Since we're talking about Autoconf, however, as long as it runs against the kernel headers that correspond to the kernel you're building for, at least some Autoconf macros should work for you, and you should be able to write custom Autoconf tests for others. Indeed, any issue that the compiler can detect at build time, Autoconf should also be able to test for.
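On the C side, consuming the result looks the same however it was produced; a sketch, assuming a hypothetical HAVE_STRUCT_FOO_BAR macro that a custom test writes into config.h:
#include "config.h"               /* generated by configure/autoheader */

int use_new_interface(void)
{
#ifdef HAVE_STRUCT_FOO_BAR        /* hypothetical result of a custom test */
    return 1;                     /* build against the newer kernel interface */
#else
    return 0;                     /* fall back to the legacy code path */
#endif
}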
Of course, there is also the option of letting the module builder indicate needed configuration details explicitly when a thorny issue such as this arises. For example, make the feature-selection macros also honor a symbol reserved for the builder, which can be defined to override the detected results.
I know there are at least three popular methods to call the same function with multiple names. I haven't actually heard of someone using the fourth method for this purpose.
1). Could use #defines:
int my_function (int);
#define my_func my_function
OR
#define my_func(a) my_function(a)
2). Embedded function calls are another possibility:
int my_func(int a) {
return my_function(a);
}
3). Use a weak alias in the linker:
int my_func(int a) __attribute__((weak, alias("my_function")));
4). Function pointers:
int (* const my_func)(int) = my_function;
The reason I need multiple names is for a mathematical library that has multiple implementations of the same method.
For example, I need an efficient method to calculate the square root of a scalar floating point number. So I could just use math.h's sqrt(). This is not very efficient. So I write one or two other methods, such as one using Newton's Method. The problem is each technique is better on certain processors (in my case microcontrollers). So I want the compilation process to choose the best method.
I think this means it would be best to use either the macros or the weak alias since those techniques could easily be grouped in a few #ifdef statements in the header files. This simplifies maintenance (relatively). It is also possible to do using the function pointers, but it would have to be in the source file with extern declarations of the general functions in the header file.
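As a sketch of that header-level grouping (the target macros and implementation names here are hypothetical):
/* fast_sqrt.h - pick an implementation per target at compile time */
float fast_sqrt_newton(float x);   /* hypothetical Newton's-method version */
float fast_sqrt_table(float x);    /* hypothetical table-based version */

#if defined(TARGET_CORTEX_M4)
#define fast_sqrt fast_sqrt_newton
#elif defined(TARGET_AVR)
#define fast_sqrt fast_sqrt_table
#else
#include <math.h>
#define fast_sqrt sqrtf            /* portable default */
#endif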
Which do you think is the better method?
Edit:
From the proposed solutions, there appears to be two important questions that I did not address.
Q. Are the users working primarily in C/C++?
A. All known development will be in C/C++ or assembly. I am designing this library for my own personal use, mostly for work on bare metal projects. There will be either no or minimal operating system features. There is a remote possibility of using this in full blown operating systems, which would require consideration of language bindings. Since this is for personal growth, it would be advantageous to learn library development on popular embedded operating systems.
Q. Are the users going to need/want an exposed library?
A. So far, yes. Since it is just me, I want to make direct modifications for each processor I use after testing. This is where the test suite would be useful. So an exposed library would help somewhat. Additionally, each "optimal implementation" for a particular function may have failure conditions. At that point, it has to be decided who fixes the problem: the user or the library designer. A user would need an exposed library to work around failure conditions. I am both the "user" and the "library designer", so it would almost be better to allow for both. Then non-realtime applications could let the library solve stability problems as they come up, while real-time applications would be empowered to weigh algorithm speed/space against algorithm stability.
Another alternative would be to move the functionality into a separately compiled library optimised for each different architecture and then just link to this library during compilation. This would allow the project code to remain unchanged.
Depending on the intended audience for your library, I suggest you choose between two alternatives:
If the consumer of your library is guaranteed to be Cish, use #define sqrt newton_sqrt for optimal readability
If some consumers of your library are not of the C variety (think bindings to Delphi, .NET, whatever), try to avoid consumer-visible #defines. Macros are a major PITA for bindings, as they are not visible in the binary - embedded function calls are the most binding-friendly.
What you can do is this. In header file (.h):
int function(void);
In the source file (.c):
static int function_implementation_a(void);
static int function_implementation_b(void);
static int function_implementation_c(void);
#if ARCH == ARCH_A
int function(void)
{
return function_implementation_a();
}
#elif ARCH == ARCH_B
int function(void)
{
return function_implementation_b();
}
#else
int function(void)
{
return function_implementation_c();
}
#endif // ARCH
Static functions that are called only once are often inlined by the implementation. This is the case, for example, with gcc: -finline-functions-called-once is enabled at -O1 and above. Static functions that are never called are also usually omitted from the final binary.
Note that I don't put the #if and #else inside a single function body because I find the code more readable when the #if directives are outside the function bodies.
Note that this approach works best with embedded code, where libraries are usually distributed in source form.
I usually like to solve this with a single declaration in a header file with a different source file for each architecture/processor-type. Then I just have the build system (usually GNU make) choose the right source file.
I usually split the source tree into separate directories for common code and for target-specific code. For instance, my current project has a toplevel directory Project1 and underneath it are include, common, arm, and host directories. For arm and host, the Makefile looks for source in the proper directory based on the target.
I think this makes it easier to navigate the code since I don't have to look up weak symbols or preprocessor definitions to see what functions are actually getting called. It also avoids the ugliness of function wrappers and the potential performance hit of function pointers.
You might create a test suite for all the algorithms and run it on the target to determine which perform best, then have the test suite automatically generate the necessary linker aliases (method 3).
Beyond that, a simple #define (method 1) is probably the simplest and will not add any overhead. It does, however, expose to the library user that there might be multiple implementations, which may be undesirable.
Personally, since only one implementation of each function is likely to be optimal on any specific target, I'd use the test suite to determine the required versions for each target and build a separate library per target containing only that one version of each function, under the correct function name directly.
When defining macros that headers rely on, such as _FILE_OFFSET_BITS, FUSE_USE_VERSION, _GNU_SOURCE among others, where is the best place to put them?
Some possibilities I've considered include
At the top of the any source files that rely on definitions exposed by headers included in that file
Immediately before the include for the relevant header(s)
Define at the CPPFLAGS level via the compiler (such as -D_FILE_OFFSET_BITS=64), for the:
Entire source repo
The whole project
Just the sources that require it
In project headers, which should also include those relevant headers to which the macros apply
Some other place I haven't thought of, but is infinitely superior
A note: Justification by applicability to make, autotools, and other build systems is a factor in my decision.
If the macros affect system headers, they probably ought to go somewhere where they affect every source file that includes those system headers (which includes those that include them indirectly). The most logical place would therefore be on the command line, assuming your build system allows you to set e.g. CPPFLAGS to affect the compilation of every file.
If you use precompiled headers, and have a precompiled header that must therefore be included first in every source file (e.g. stdafx.h for MSVC projects) then you could put them in there too.
For macros that affect self-contained libraries (whether third-party or written by you), I would create a wrapper header that defines the macros and then includes the library header. All uses of the library from your project should then include your wrapper header rather than including the library header directly. This avoids defining macros unnecessarily, and makes it clear that they relate to that library. If there are dependencies between libraries then you might want to make the macros global (in the build system or precompiled header) just to be on the safe side.
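A sketch of such a wrapper header, with a hypothetical library name and macros:
/* somelib_wrap.h - the only header the rest of the project includes */
#ifndef SOMELIB_WRAP_H
#define SOMELIB_WRAP_H

/* Macros the library's headers expect, kept next to the include they affect. */
#define SOMELIB_API_VERSION 28
#define SOMELIB_USE_64BIT_OFFSETS 1

#include <somelib.h>

#endif /* SOMELIB_WRAP_H */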
Well, it depends.
Most of them I'd define via the command line - in a Makefile or whatever build system you use.
As for _FILE_OFFSET_BITS I really wouldn't define it explicitly, but rather use getconf LFS_CFLAGS and getconf LFS_LDFLAGS.
I would always put them on the command line via CPPFLAGS for the whole project. If you put them any other place, there's a danger that you might forget to copy them into a new source file or include a system header before including the project header that defines them, and this could lead to extremely nasty bugs (like one file declaring a legacy 32-bit struct stat and passing its address to a function in another file which expects a 64-bit struct stat).
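To make the hazard concrete, a sketch (file names hypothetical; a.c is compiled without -D_FILE_OFFSET_BITS=64, b.c with it, and b.c simply calls stat() into the struct it is handed):
/* a.c - compiled WITHOUT -D_FILE_OFFSET_BITS=64 */
#include <sys/stat.h>

void fill_stat(const char *path, struct stat *st);   /* defined in b.c */

void caller(void)
{
    struct stat st;                    /* 32-bit off_t layout in this file */
    fill_stat("/etc/passwd", &st);     /* b.c writes a 64-bit layout: corruption */
}
The two files compile and link cleanly; the breakage only shows up at run time.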
BTW, it's really ridiculous that _FILE_OFFSET_BITS=64 still isn't the default on glibc.
Most projects I've seen that use them did so via -D command-line options. They are there because that eases building the source with different compilers and system headers. If you were to build with a system compiler for another system that didn't need them, or needed a different set of them, then a configure script can easily change the command-line arguments that a makefile passes to the compiler.
It's probably best to do it for the entire program, because some of the flags affect which version of a function gets brought in or the size/layout of a struct, and mixing those up can cause crazy things if you aren't careful.
They certainly are annoying to keep up with.
For _GNU_SOURCE and the autotools in particular, you could use AC_USE_SYSTEM_EXTENSIONS (citing liberally from the autoconf manual here):
-- Macro: AC_USE_SYSTEM_EXTENSIONS
This macro was introduced in Autoconf 2.60. If possible, enable
extensions to C or Posix on hosts that normally disable the
extensions, typically due to standards-conformance namespace
issues. This should be called before any macros that run the C
compiler. The following preprocessor macros are defined where
appropriate:
_GNU_SOURCE
Enable extensions on GNU/Linux.
__EXTENSIONS__
Enable general extensions on Solaris.
_POSIX_PTHREAD_SEMANTICS
Enable threading extensions on Solaris.
_TANDEM_SOURCE
Enable extensions for the HP NonStop platform.
_ALL_SOURCE
Enable extensions for AIX 3, and for Interix.
_POSIX_SOURCE
Enable Posix functions for Minix.
_POSIX_1_SOURCE
Enable additional Posix functions for Minix.
_MINIX
Identify Minix platform. This particular preprocessor macro
is obsolescent, and may be removed in a future release of
Autoconf.
For _FILE_OFFSET_BITS, you need to call AC_SYS_LARGEFILE and AC_FUNC_FSEEKO:
— Macro: AC_SYS_LARGEFILE
Arrange for 64-bit file offsets, known as large-file support. On some hosts, one must use special compiler options to build programs that can access large files. Append any such options to the output variable CC. Define _FILE_OFFSET_BITS and _LARGE_FILES if necessary.
Large-file support can be disabled by configuring with the --disable-largefile option.
If you use this macro, check that your program works even when off_t is wider than long int, since this is common when large-file support is enabled. For example, it is not correct to print an arbitrary off_t value X with printf("%ld", (long int) X).
The LFS introduced the fseeko and ftello functions to replace their C counterparts fseek and ftell that do not use off_t. Take care to use AC_FUNC_FSEEKO to make their prototypes available when using them and large-file support is enabled.
If you are using autoheader to generate a config.h, you could define the other macros you care about using AC_DEFINE or AC_DEFINE_UNQUOTED:
AC_DEFINE([FUSE_VERSION], [28], [FUSE Version.])
The definition will then get passed to the command line or placed in config.h, if you're using autoheader. The real benefit of AC_DEFINE is that it easily allows preprocessor definitions as a result of configure checks and separates system-specific cruft from the important details.
When writing the .c file, #include "config.h" first, then the interface header (e.g., foo.h for foo.c - this ensures that the header has no missing dependencies), then all other headers.
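For instance, the ordering in foo.c would look like:
/* foo.c */
#include "config.h"   /* configure results first, so feature macros are set */
#include "foo.h"      /* own interface next; proves foo.h has no hidden dependencies */

#include <stdio.h>    /* all other headers afterwards */
#include <stdlib.h>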
I usually put them as close as practicable to the things that need them, whilst ensuring you don't set them incorrectly.
Related pieces of information should be kept close together to make them easier to identify. A classic example is the ability of C to now allow variable definitions anywhere in the code rather than just at the top of a function:
void something (void) {
// 600 lines of code here
int x = fn(y);
// more code here
}
is a lot better than:
void something (void) {
int x;
// 600 lines of code here
x = fn(y);
// more code here
}
since you don't have to go searching for the type of x in the latter case.
By way of example, if you need to compile a single source file multiple times with different values, you have to do it with the compiler:
gcc -Dmydefine=7 -o binary7 source.c
gcc -Dmydefine=9 -o binary9 source.c
However, if every compilation of that file will use 7, it can be moved closer to the place where it's used:
source.c:
#include <stdio.h>
#define mydefine 7
#include "header_that_uses_mydefine.h"
#define mydefine 7
#include "another_header_that_uses_mydefine.h"
Note that I've done it twice so that it's more localised. This isn't a problem since, if you change only one, the compiler will tell you about it, but it ensures that you know those defines are set for the specific headers.
And, if you're certain that you will never include (for example) bitio.h without first setting BITCOUNT to 8, you can even go so far as to create a bitio8.h file containing nothing but:
#define BITCOUNT 8
#include "bitio.h"
and then just include bitio8.h in your source files.
Global, project-wide constants that are target specific are best put in CCFLAGS in your makefile. Constants you use all over the place can go in appropriate header files which are included by any file that uses them.
For example,
// bool.h - a boolean type for C
#ifndef BOOL_H
#define BOOL_H
typedef int bool_t;
#define TRUE 1
#define FALSE 0
#endif
Then, in some other header,
#include "bool.h"
// blah
Using header files is what I recommend because it allows you to have a code base built by make files and other build systems as well as IDE projects such as Visual Studio. This gives you a single point of definition that can be accompanied by comments (I'm a fan of doxygen which allows you to generate macro documentation).
The other benefit with header files is that you can easily write unit tests to verify that only valid combinations of macros are defined.
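One cheap complement to such tests is a compile-time check in the header itself; a sketch with hypothetical feature macros:
/* Reject invalid macro combinations as early as possible. */
#if defined(USE_FIXED_POINT) && defined(USE_FLOAT_POINT)
#error "USE_FIXED_POINT and USE_FLOAT_POINT are mutually exclusive"
#endif

#if defined(USE_FIXED_POINT) && !defined(FIXED_POINT_FRACTION_BITS)
#error "USE_FIXED_POINT requires FIXED_POINT_FRACTION_BITS"
#endif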
I'm trying to streamline a large chunk of legacy C code in which, even today, the guy who maintains it takes the source file(s) before each build and manually modifies the following section based on the target environment.
The example follows, but here's the question. I'm rusty on my C, but I do recall that using #ifdef is discouraged. Can you offer a better alternative? Also - I think some of it (if not all of it) could be set as environment variables or passed in as parameters, and if so, what would be a good way of defining these and then accessing them from the source code?
Here's a snippet of the code I'm dealing with:
#define DAN NO
#define UNIX NO
#define LINUX YES
#define WINDOWS_ES NO
#define WINDOWS_RB NO
/* Later in the code */
#if ((DAN==1) || (UNIX==YES))
#include <sys/param.h>
#endif
#if ((WINDOWS_ES==YES) || (WINDOWS_RB==YES) || (WINDOWS_TIES==YES))
#include <param.h>
#include <io.h>
#include <ctype.h>
#endif
/* And totally insane hardcoded paths */
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/test/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
/* So on for every platform and combination */
Sure, you can pass -DWHATEVER on the command line. Or -DWHATEVER_ELSE=NO, etc. Maybe for the paths you could do something like
char MasterSkipFile[MAXSTR] = SOME_COMMAND_LINE_DEFINITION;
and then pass
-DSOME_COMMAND_LINE_DEFINITION='"/home/whatever/directory/filename"'
on the command line.
One thing we used to do is have a generated .h file with these definitions, and generate it with a script. That helped us get rid of a lot of brittle #ifs and #ifdefs
You need to be careful about what you put there, but machine-specific parameters are good candidates - this is how autoconf/automake work.
EDIT: in your case, an example would be to use the generated .h file to define INCLUDE_SYS_PARAM and INCLUDE_PARAM, and in the code itself use:
#ifdef INCLUDE_SYS_PARAM
#include <sys/param.h>
#endif
#ifdef INCLUDE_PARAM
#include <param.h>
#endif
Makes it much easier to port to new platforms - the existence of a new platform doesn't trickle into the code, only to the generated .h file.
Platform specific configuration headers
I'd have a system to generate the platform-specific configuration into a header that is used in all builds. The AutoConf name is 'config.h'; you can see 'platform.h' or 'porting.h' or 'port.h' or other variations on the theme. This file contains the information needed for the platform being built. You can generate the file by copying a version-controlled platform-specific variant to the standard name. You can use a link instead of copying. Or you can run configuration scripts to determine its contents based on what the script finds on the machine.
Default values for configuration parameters
The code:
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
Would be better replaced by:
#ifndef MASTER_SKIP_FILE_PATH
#define MASTER_SKIP_FILE_PATH "/opt/tretools/MasterSkipFile"
#endif
const char MasterSkipFile[] = MASTER_SKIP_FILE_PATH;
Those who want the build in a different location can set the location via:
-DMASTER_SKIP_FILE_PATH='"/ptehome/tregtp/tre1/tretools/PinkElephant"'
Note the use of single and double quotes; the single quotes protect the double quotes from the shell, so you don't have to mess with backslashes in the path. You can use a similar default mechanism for all sorts of things:
#ifndef DEFAULTABLE_PARAMETER
#define DEFAULTABLE_PARAMETER default_value
#endif
If you choose your defaults well, this can save a lot of energy.
Relocatable software
I'm not sure about the design of the software that can only be installed in one location. In my book, you need to be able to have the old version 1.12 of the product installed on the machine at the same time as the new 2.1 version, and they should be able to operate independently. A hard-coded path name defeats that.
Parameterize by feature not platform
The key difference between the AutoConf tools and the average alternative system is that the configuration is done based on features, not on platforms. You parameterize your code to identify a feature that you want to use. This is crucial because features tend to appear on platforms other than the original. I look after code where there are lines like:
#if defined(SUN4) || defined(SOLARIS_2) || defined(HP_UX) || \
defined(LINUX) || defined(PYRAMID) || defined(SEQUENT) || \
defined(SEQUENT40) || defined(NCR) ...
#include <sys/types.h>
#endif
It would be much, much better to have:
#ifdef INCLUDE_SYS_TYPES_H
#include <sys/types.h>
#endif
And then on the platforms where it is needed, generate:
#define INCLUDE_SYS_TYPES_H
(Don't take this example header too literally; it is the concept I am trying to get over.)
Treat platform as a bundle of features
As a corollary to the previous point, you do need to detect platform and define the features that are applicable to that platform. This is where you have the platform-specific configuration header which defines the configuration features.
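A sketch of what such a platform-specific header might contain (macro names beyond INCLUDE_SYS_TYPES_H are hypothetical):
/* platform-linux.h - copied or linked to platform.h when building on Linux */
#ifndef PLATFORM_H
#define PLATFORM_H

#define INCLUDE_SYS_TYPES_H 1        /* this platform wants <sys/types.h> */
#define HAVE_STRUCT_STAT_ST_MTIM 1   /* nanosecond timestamps available */
/* #define NEED_STRLCPY_REPLACEMENT 1   -- not needed on this platform */

#endif /* PLATFORM_H */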
Product features should be enabled in a header
(Elaborating on a comment I made to another answer.)
Suppose you have a bunch of features in the product that need to be included or excluded conditionally. For example:
KVLOCKING
B1SECURITY
C2SECURITY
DYNAMICLOCKS
The relevant code is included when the appropriate define is set:
#ifdef KVLOCKING
...KVLOCKING stuff...
#else
...non-KVLOCKING stuff...
#endif
If you use a source code analysis tool like cscope, then it is helpful if it can show you when KVLOCKING is defined. If the only place where it is defined is in some random Makefiles scattered around the build system (let's assume there are a hundred sub-directories that are used in this), it is hard to tell whether the code is still in use on any of your platforms. If the defines are in a header somewhere - the platform specific header, or maybe a product release header (so version 1.x can have KVLOCKING and version 2.x can include C2SECURITY but 2.5 includes B1SECURITY, etc), then you can see that KVLOCKING code is still in use.
Believe me, after twenty years of development and staff turnover, people don't know whether features are still in use or not (because it is stable and never causes problems - possibly because it is never used). And if the only place to find whether KVLOCKING is still defined is in the Makefiles, then tools like cscope are less helpful - which makes modifying the code more error prone when trying to clean up later.
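A sketch of such a product release header, using the feature names from the list above:
/* release_features.h - one greppable place for this release's feature switches */
#ifndef RELEASE_FEATURES_H
#define RELEASE_FEATURES_H

#define KVLOCKING 1                 /* still shipped in this release stream */
#define B1SECURITY 1
/* #define C2SECURITY 1 */          /* enabled only in the streams that need it */
/* #define DYNAMICLOCKS 1 */

#endif /* RELEASE_FEATURES_H */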
It's much saner to use:
#if SOMETHING
consistently from platform to platform, to avoid confusing broken preprocessors. However, any modern compiler should effectively argue your case in the end. If you give more details on your platform, compiler and preprocessor, you might receive a more concise answer.
Conditional compilation, given the plethora of operating systems and variants thereof, is a necessary evil. #if, #ifdef, etc. are most decidedly not an abuse of the preprocessor, just exercising it as intended.
My preferred way would be to have the build system do the OS detection. In complex cases, you'd want to isolate the machine-specific stuff into a single source file and have completely different source files for the different OSes.
So in this case, you'd have an #include "OS_Specific.h" in that file. In that header you put the platform's includes and the definition of MasterSkipFile. You can select between the per-platform headers by specifying different -I (include path) directories on your compiler command line.
The nice thing about doing it this way is that somebody trying to figure out the code (perhaps debugging) doesn't have to wade through (and possibly be misled by) phantom code for a platform they aren't even running on.
I've seen build systems in which most of the source files started off with something like this:
#include PLATFORM_CONFIG
#include BUILD_CONFIG
and the compiler was kicked off with:
cc -DPLATFORM_CONFIG='"linuxconfig.h"' -DBUILD_CONFIG='"importonlyconfig.h"'
(the single quotes keep the shell from stripping the double quotes that the #include needs)
This had the effect of letting you separate the platform settings into one set of files and the configuration settings into another. Platform settings handle library calls that may not exist on one platform, or that exist in a different form, as well as defining important size-dependent types: things that are platform-specific. Build settings handle which features are enabled in the output.
Generalities
I'm a heretic who has been cast out from the Church of the GNU Autotools. Why? Because I like to understand what the hell my tools are doing. And because I've had the experience of trying to combine two components, each of which insisted on a different, incompatible version of autotools being the default version installed on my computer.
I work by creating one .h file or .c file for every combination of platform and significant abstraction. I work hard to define a central .h file that says what the interface is. Often this means I wind up creating a "compatibility layer" that insulates me from differences between platforms. Often I wind up using ANSI Standard C whenever possible, instead of platform-specific functionality.
I sometimes write scripts to generate platform-dependent files. But the scripts are always written by hand and documented, so I know what they do.
I admire Glenn Fowler's nmake and Phong Vo's iffe (if feature exists), which I think are better engineered than the GNU tools. But these tools are part of the AT&T Software Technology suite, and I haven't been able to figure out how to use them without buying into the whole AST way of doing things, which I don't always understand.
Your example
There clearly needs to be
extern char MasterSkipFile[];
in a .h file somewhere, and you can then link against a suitable .o.
The conditional inclusion of the "right set of .h files for the platform" is something I would handle by trying to stick to ANSI C when possible, and when not possible, defining a compatibility layer in a platform-specific .h file. As it is, I can't tell what names the #includes are trying to import, so I can't give more specific advice.