Get rid of hardware macros in embedded software - C

I was working on an embedded program using C.
There are tons of hardware macros like
#ifdef HardwareA
do A
#endif
It's not readable, and it's hard to cover all the different paths with unit tests.
So I decided to move the hardware-related code into arch folders, and use variables in the makefile to decide which arch folder gets linked, like in the Linux kernel code.
But when I looked at the Linux kernel, I noticed there are so many duplicates in the arch folders.
When a bug is found in one piece of hardware but might affect all the others, how do they make the change across every related arch?
I think doing it this way will inevitably bring duplication into the code base.
Does anyone have experience with this type of problem?
How do you unit test code that has lots of hardware macros?
How do you refactor the code to move hardware macros out of the source files?

It sounds like you are replacing a function like this:
somefunc()
{
/* generic code ... */
#ifdef HardwareA
do A
#endif
/* more generic code ... */
}
with multiple implementations, one in each arch folder, like this:
somefunc()
{
/* generic code ... */
/* more generic code ... */
}
somefunc()
{
/* generic code ... */
do A
/* more generic code ... */
}
The duplication of the generic code is what you're worried about. Don't do that: instead, have one implementation of the function like this:
somefunc()
{
/* generic code ... */
do_A();
/* more generic code ... */
}
...and then implement do_A() in the arch folders: on Hardware A it contains the code for that hardware, and on the other hardware it is an empty function.
Don't be afraid of empty functions - if you make them inline functions defined in the arch header file, they'll be completely optimised out.
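For example, the arch headers might look like this (a sketch; the file names and hook name are made up for illustration):
/* arch/hardware_a/arch_hooks.h */
static inline void do_A(void) { /* Hardware A specific work goes here */ }
/* arch/hardware_b/arch_hooks.h */
static inline void do_A(void) { /* nothing to do on this hardware */ }
The generic somefunc() stays identical everywhere; only the header it pulls in changes per arch.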

Linux tries to avoid duplicating code between multiple arch directories. You'll see the same functions in each, but implemented differently. After all, every architecture needs code for managing the page tables, but the details differ. So they all have the same functions, with different definitions.
For some functions, there are CONFIG_GENERIC_* options defined by the build system that replace unnecessary architecture hooks with generic versions (often no-ops). For example, an arch without an FPU doesn't need hooks to save/restore FPU state on a context switch.
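The pattern, as a simplified sketch (illustrative names only, not actual kernel symbols):
/* sketch: an arch without an FPU selects the generic no-op hook */
#ifdef CONFIG_GENERIC_FPU_SWITCH
static inline void fpu_save_state(void) { /* no FPU: nothing to save */ }
#else
void fpu_save_state(void); /* arch-specific implementation elsewhere */
#endif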

This kind of #ifdef hell is definitely to be avoided, but naturally you also want to avoid code duplication. I don't claim this will solve all your problems, but I think the single biggest step you can take is changing your #ifdefs from #ifdef HardwareX to #ifdef HAVE_FeatureY or #ifdef USE_FeatureZ. What this allows you to do is factor the knowledge of which hardware/OS/etc. targets have which features/interfaces out of all your source files and into a single header, which avoids things like:
#if defined(HardwareA) || (defined(HardwareB) && HardwareB_VersionMajor > 4) || ...
rendering your sources unreadable.
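Concretely, all of that target knowledge can live in one header, something like this (a sketch with hypothetical names):
/* features.h -- the only file that knows which hardware has what */
#if defined(HardwareA)
#define HAVE_FeatureY 1
#elif defined(HardwareB) && HardwareB_VersionMajor > 4
#define HAVE_FeatureY 1
#define USE_FeatureZ 1
#endif
Every other source file then tests only #ifdef HAVE_FeatureY.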

I tend to move the hardware specific #defines into one header per platform, then select it in a "platform.h" file, which all source files include.
platform.h:
#if defined PLATFORM_X86_32BIT
#include "Platform_X86_32Bit.h"
#elif defined PLATFORM_TI_2812
#include "Platform_TI_2812.h"
#else
#error "Project File must define a platform"
#endif
The architecture-specific headers will contain two things:
1) Typedefs for all the common integer sizes, like typedef short int16_t; Note that C99 specifies a <stdint.h> which has these predefined. (Never use a raw int in portable code.)
2) Function prototypes or macros for all the hardware-specific behavior. By extracting all the dependencies into functions, the main body of code remains clean:
//example data receive function
HW_ReceiverPrepare();
HW_ReceiveBytes(buffer, bytesToFetch);
isGood = (Checksum(buffer+1, bytesToFetch-1) == buffer[0]);
HW_ReceiverReset();
Then one platform-specific header may provide the prototype for a complex HW_ReceiverPrepare() function, while another simply defines it away with #define HW_ReceiverPrepare()
This works very well in situations like the one described in your comment where the differences between platforms are usually one or two lines. Just encapsulate those lines as function/macro calls, and you can keep the code readable while minimizing duplication.
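For instance, the two platform headers might diverge like this (a sketch; assuming the TI part needs real preparation and the x86 build does not):
/* Platform_TI_2812.h */
void HW_ReceiverPrepare(void); /* real implementation lives in the TI source */
/* Platform_X86_32Bit.h */
#define HW_ReceiverPrepare() /* nothing to prepare; the call compiles away */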

Related

Multiplatform support, preprocessor or linking with individual libraries

I'm working on a homebrew game for the GBA, and was thinking about porting it to the PC (likely using SDL) as well.
I haven't dealt with the problem of multiplatform support before, so I don't really have any experience.
I came up with two possible ways of going about it, but both have drawbacks, and I don't know if there is a way better solution I'm missing out on.
The first would use the preprocessor. A header file included in all files would #define GBA, and based on whether it is defined, the appropriate headers would be included and the appropriate platform-specific code compiled.
I would implement it something like this:
/* GBA definition is in platform.h */
/* Example.c */
void example()
#ifdef GBA
{
/* GBA specific implementation goes here */
}
#else
{
/* PC specific implementation goes here */
}
#endif
The drawback I see here is that for a large project this can get very messy, and it's frankly kind of ugly and difficult to read.
The other option I can think of is creating static libraries for each platform. Therefore the main source code for both platforms will be the same, increasing ease of simultaneous development, and when building for GBA or PC, the appropriate libraries and settings will be specified and that's it.
The obvious drawback here is that any change to the library (a modified implementation, a new addition, anything really) means the library has to be maintained and rebuilt constantly, alongside the actual main program.
If there is a better way to approach this, what would it be?
If the ways I mentioned are the standard way of doing it, which is more common / better for long term development?
Here's what I would do [and have done]. Maintaining [a lot of] #ifdef/#else/#endif sequences is hard. Trust me, I've done it, until I found better ways. Below is a way I've used in the past. There are other similar approaches.
Here is the generic code:
// example.c -- generic code
#ifdef _USE_GBA_
#include "example_gba.c"
#endif
#ifdef _USE_SDL_
#include "example_sdl.c"
#endif
void
example(void)
{
// NOTE: this will get optimized into a direct jump (tail call), or
// example_dep will get inlined here
example_dep();
}
Here is the GBA specific code:
// example_gba.c -- GBA specific code
static void
example_dep(void)
{
// ...
}
Here is the SDL code:
// example_sdl.c -- SDL specific code
static void
example_dep(void)
{
// ...
}
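The platform is then chosen on the compiler command line, along the lines of (hypothetical build lines; your toolchain names will differ):
gcc -D_USE_SDL_ -c example.c
arm-none-eabi-gcc -D_USE_GBA_ -c example.c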

Preferred method to use two names to call the same function in C

I know there are at least three popular methods to call the same function with multiple names. I haven't actually heard of someone using the fourth method for this purpose.
1). Could use #defines:
int my_function (int);
#define my_func my_function
OR
#define my_func(a) my_function(a)
2). Embedded function calls are another possibility:
int my_func(int a) {
return my_function(a);
}
3). Use a weak alias in the linker:
int my_func(int a) __attribute__((weak, alias("my_function")));
4). Function pointers:
int (* const my_func)(int) = my_function;
The reason I need multiple names is for a mathematical library that has multiple implementations of the same method.
For example, I need an efficient method to calculate the square root of a scalar floating point number. So I could just use math.h's sqrt(). This is not very efficient. So I write one or two other methods, such as one using Newton's Method. The problem is each technique is better on certain processors (in my case microcontrollers). So I want the compilation process to choose the best method.
I think this means it would be best to use either the macros or the weak alias, since those techniques could easily be grouped into a few #ifdef statements in the header files (a sketch follows). This simplifies maintenance (relatively). It is also possible to do this with function pointers, but they would have to live in the source file, with extern declarations of the general functions in the header file.
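As a sketch of how the macro route could be grouped in a header (the target macros and function names here are hypothetical, purely for illustration):
/* sqrt_select.h -- the one place that knows which method each target gets */
#if defined(TARGET_HAS_HW_SQRT)
#define my_sqrt hw_sqrt /* wraps the hardware/FPU instruction */
#elif defined(TARGET_SMALL_MCU)
#define my_sqrt newton_sqrt /* software Newton's method */
#else
#include <math.h>
#define my_sqrt sqrt /* portable fallback */
#endif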
Which do you think is the better method?
Edit:
From the proposed solutions, there appear to be two important questions that I did not address.
Q. Are the users working primarily in C/C++?
A. All known development will be in C/C++ or assembly. I am designing this library for my own personal use, mostly for work on bare metal projects. There will be either no or minimal operating system features. There is a remote possibility of using this in full blown operating systems, which would require consideration of language bindings. Since this is for personal growth, it would be advantageous to learn library development on popular embedded operating systems.
Q. Are the users going to need/want an exposed library?
A. So far, yes. Since it is just me, I want to make direct modifications for each processor I use after testing. This is where the test suite would be useful. So an exposed library would help somewhat. Additionally, each "optimal implementation" of a particular function may have failing conditions. At this point, it has to be decided who fixes the problem: the user or the library designer. A user would need an exposed library to work around failing conditions. I am both the "user" and "library designer". It would almost be better to allow for both. Then non-realtime applications could let the library solve all of the stability problems as they come up, while real-time applications would be empowered to weigh algorithm speed/space against algorithm stability.
Another alternative would be to move the functionality into a separately compiled library optimised for each different architecture and then just link to this library during compilation. This would allow the project code to remain unchanged.
Depending on the intended audience for your library, I suggest you choose between two alternatives:
If the consumer of your library is guaranteed to be Cish, use #define sqrt newton_sqrt for optimal readability
If some consumers of your library are not of the C variety (think bindings to Delphi, .NET, whatever), try to avoid consumer-visible #defines. Macros are a major PITA for bindings, as they are not visible in the binary; embedded function calls are the most binding-friendly.
What you can do is this. In the header file (.h):
int function(void);
In the source file (.c):
static int function_implementation_a(void);
static int function_implementation_b(void);
static int function_implementation_c(void);
#if ARCH == ARCH_A
int function(void)
{
return function_implementation_a();
}
#elif ARCH == ARCH_B
int function(void)
{
return function_implementation_b();
}
#else
int function(void)
{
return function_implementation_c();
}
#endif // ARCH
Static functions called once are often inlined by the implementation. This is the case with gcc, for example: -finline-functions-called-once is enabled at -O1 and above. Static functions that are never called are also usually omitted from the final binary.
Note that I don't put the #if and #else inside a single function body because I find the code more readable when the #if directives are outside the function bodies.
Note that this approach works best with embedded code, where libraries are usually distributed in source form.
I usually like to solve this with a single declaration in a header file with a different source file for each architecture/processor-type. Then I just have the build system (usually GNU make) choose the right source file.
I usually split the source tree into separate directories for common code and for target-specific code. For instance, my current project has a toplevel directory Project1 and underneath it are include, common, arm, and host directories. For arm and host, the Makefile looks for source in the proper directory based on the target.
I think this makes it easier to navigate the code since I don't have to look up weak symbols or preprocessor definitions to see what functions are actually getting called. It also avoids the ugliness of function wrappers and the potential performance hit of function pointers.
You might create a test suite for all the algorithms and run it on the target to determine which perform best, then have the test suite automatically generate the necessary linker aliases (method 3, sketched below).
Beyond that, a simple #define (method 1) is probably the simplest, and will not add any overhead. It does however expose to the library user that there might be multiple implementations, which may be undesirable.
Personally, since only one implementation of each function is likely to be optimal on any specific target, I'd use the test suite to determine the required version for each target, and build a separate library per target containing only that version of each function, under the correct function name directly.
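To make method 3 concrete, the generated per-target snippet could look like this (a sketch; the names are hypothetical, the attribute is GCC-specific, and GCC requires the alias target to be defined in the same translation unit, so this line would be #included at the bottom of the file that defines newton_sqrt):
float my_sqrt(float) __attribute__((weak, alias("newton_sqrt")));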

#includes in C files for processor specific implementations

I'm working on a 'C' code base that was written specifically for one type of embedded processor. I've written generic 'pseudo object-oriented' code for things like LEDs, GPIO lines and ADCs (using structs, etc). I have also written a large amount of code that utilizes these 'objects' in a hardware/target agnostic manner.
We are now tossing another processor type into the mix, and I'd like to keep the current code structure so I can still make use of the higher level libraries. I do, however, need to provide different implementations for the lower level code (LEDs, GPIO, ADCs).
I know #includes in .C files are generally looked down upon, but in this case, is it appropriate? For example:
// led.c
#ifdef TARGET_AVR
#include "led_avr.c"
#elif defined(TARGET_PIC)
#include "led_pic.c"
#else
#error "Unspecified Target"
#endif
If this is inappropriate, what is a better implementation?
Thanks!
Since the linker doesn't care what the name of a source file actually is (it only cares about exported symbols), you can change your linker command line for each target to name the appropriate implementation module (led_avr.c or led_pic.c).
A common way to manage multiple platform source files is to put each set of platform implementation files in their own directory, so you might have avr/led.c and pic/led.c (and avr/gpio.c and pic/gpio.c, etc).
It is good. You may use other tricks, like:
#ifdef PROC1
#define MULTI_CPU(a,b) (a)
#else
#define MULTI_CPU(a,b) (b)
#endif
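A call site then reads inline, for example (a made-up usage; uart_set_divisor and the constants are hypothetical):
/* picks the per-CPU divisor constant at compile time */
uart_set_divisor(MULTI_CPU(DIVISOR_PROC1, DIVISOR_PROC2));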
The more common way to do this, instead of including a C file, is to have the build system (whatever it is) decide which C files to compile.

Why should #ifdef be avoided in .c files?

A programmer I respect said that in C code, #if and #ifdef should be avoided at all costs, except possibly in header files. Why would it be considered bad programming practice to use #ifdef in a .c file?
Hard to maintain. It's better to use interfaces to abstract platform-specific code than to abuse conditional compilation by scattering #ifdefs all over your implementation.
E.g.
void foo() {
#ifdef WIN32
// do Windows stuff
#else
// do Posix stuff
#endif
// do general stuff
}
This is not nice. Instead, have files foo_w32.c and foo_psx.c with:
foo_w32.c:
void foo() {
// windows implementation
}
foo_psx.c:
void foo() {
// posix implementation
}
foo.h:
void foo(); // common interface
Then have two makefiles [1]: Makefile.win and Makefile.psx, each compiling the appropriate .c file and linking against the right object.
Minor amendment:
If foo()'s implementation depends on some code that appears on all platforms, e.g. common_stuff() [2], simply call that in your foo() implementations.
E.g.
common.h:
void common_stuff(); // May be implemented in common.c, or maybe has multiple
// implementations in common_{A, B, ...} for platforms
// { A, B, ... }. Irrelevant.
foo_{w32, psx}.c:
void foo() { // Win32/Posix implementation
// Stuff
...
if (bar) {
common_stuff();
}
}
While you may be repeating a function call to common_stuff(), you can't parameterize your definition of foo() per platform unless it follows a very specific pattern. Generally, platform differences require completely different implementations and don't follow such patterns.
[1] Makefiles are used here illustratively. Your build system may not use make at all, e.g. if you use Visual Studio, CMake, SCons, etc.
[2] Even if common_stuff() actually has multiple implementations, varying per platform.
(Somewhat off the asked question)
I once saw a tip suggesting the use of #if(n)def/#endif blocks for debugging/isolating code instead of commenting it out.
It was suggested to help avoid situations in which the section to be commented already had documentation comments and a solution like the following would have to be implemented:
/* <-- begin debug cmnt
if (condition) /* comment */
/* <-- restart debug cmnt
{
....
}
*/ <-- end debug cmnt
Instead, this would be:
#ifdef IS_DEBUGGED_SECTION_X
if (condition) /* comment */
{
....
}
#endif
Seemed like a neat idea to me. Wish I could remember the source so I could link it :(
Because then, when you search the code, you can't tell whether a hit is compiled in or out without reading around it.
Because they should be used for OS/Platform dependencies, and therefore that kind of code should be in files like io_win.c or io_macos.c
My interpretation of this rule:
Your (algorithmic) program logic should not be influenced by preprocessor defines. The functioning of your code should always be concise. Any other form of logic (platform, debug) should be abstractable in header files.
This is more a guideline than a strict rule, IMHO.
But I agree that C-syntax-based solutions are preferable to preprocessor magic.
Conditional compilation is hard to debug. One has to know all the settings in order to figure out which block of code the program will execute.
I once spent a week debugging a multi-threaded application that used conditional compilation. The problem was that the identifier was not spelled the same. One module used #if FEATURE_1 while the problem area used #if FEATURE1 (Notice the underscore).
I am a big proponent of letting the makefile handle the configuration by including the correct libraries or objects. It makes the code more readable. Also, the majority of the code becomes configuration independent and only a few files are configuration dependent.
A reasonable goal but not so great as a strict rule
The advice to try and keep preprocessor conditionals in header files is good, as it allows you to select interfaces conditionally but not litter the code with confusing and ugly preprocessor logic.
However, there is lots and lots and lots of code that looks like the made-up example below, and I don't think there is a clearly better alternative. I think you have cited a reasonable guideline but not a great gold-tablet-commandment.
#if defined(SOME_IOCTL)
case SOME_IOCTL:
...
#endif
#if defined(SOME_OTHER_IOCTL)
case SOME_OTHER_IOCTL:
...
#endif
#if defined(YET_ANOTHER_IOCTL)
case YET_ANOTHER_IOCTL:
...
#endif
CPP is a separate (non-Turing-complete) macro language on top of (usually) C or C++. As such, it's easy to get mixed up between it and the base language, if you're not careful. That's the usual argument against macros instead of e.g. C++ templates, anyway. But #ifdef? Just go try to read someone else's code you've never seen before that has a bunch of ifdefs.
e.g. try reading these Reed-Solomon multiply-a-block-by-a-constant-Galois-value functions:
http://parchive.cvs.sourceforge.net/viewvc/parchive/par2-cmdline/reedsolomon.cpp?revision=1.3&view=markup
If you didn't have the following hint, it would take you a minute to figure out what's going on: there are two versions, one simple, and one with a pre-computed lookup table (LONGMULTIPLY). Even so, have fun tracing the #if BYTE_ORDER == __LITTLE_ENDIAN. I found it a lot easier to read when I rewrote that bit to use a le16_to_cpu function (whose definition was inside #if clauses), inspired by Linux's byteorder.h stuff.
If you need different low-level behaviour depending on the build, try to encapsulate that in low-level functions that provide consistent behaviour everywhere, instead of putting #if stuff right inside your larger functions.
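A sketch of such a low-level helper, assuming the platform headers define BYTE_ORDER and LITTLE_ENDIAN (as <endian.h> does on glibc):
#include <stdint.h>
static inline uint16_t le16_to_cpu(uint16_t v)
{
#if BYTE_ORDER == LITTLE_ENDIAN
return v; /* already in CPU byte order */
#else
return (uint16_t)((v >> 8) | (v << 8)); /* byte-swap on big-endian */
#endif
}
Callers never see an #if; they just call le16_to_cpu() everywhere.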
By all means, favor abstraction over conditional compilation. As anyone who has written portable software can tell you, however, the number of environmental permutations is staggering. Some design discipline can help, but sometimes the choice is between elegance and meeting a schedule. In such cases, a compromise might be necessary.
Consider the situation where you are required to provide fully tested code, with 100% branch coverage etc. Now add in conditional compilation.
Each unique symbol used to control conditional compilation doubles the number of code variants you need to test. So, one symbol - you have two variants. Two symbols, you now have four different ways to compile your code. And so on.
And this only applies for boolean tests such as #ifdef. You can easily imagine the problem if a test is of the form #if VARIABLE == SCALAR_VALUE_FROM_A_RANGE.
If your code will be compiled with different C compilers, and you use compiler-specific features, then you may need to determine which predefined macros are available.
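For example, a small portability header can branch on well-known predefined macros (a sketch covering only __GNUC__ and _MSC_VER):
#if defined(__GNUC__)
#define INLINE_HINT __attribute__((always_inline)) inline
#elif defined(_MSC_VER)
#define INLINE_HINT __forceinline
#else
#define INLINE_HINT inline
#endif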
It's true that #if/#endif does complicate reading the code. However, I have seen a lot of real-world code that uses it without issues and is still going strong. So there may be better ways to avoid #if/#endif, but using it is not that bad if proper care is taken.

C - alternative to #ifdef

I'm trying to streamline a large chunk of legacy C code in which, even today, the guy who maintains it takes the source file(s) and manually modifies the following section before compilation, based on the various types of environment.
The example follows, but here's the question. I'm rusty on my C, but I do recall that using #ifdef is discouraged. Can you offer a better alternative? Also, I think some of it (if not all of it) can be set as environment variables or passed in as parameters, and if so, what would be a good way of defining these and then accessing them from the source code?
Here's a snippet of the code I'm dealing with:
#define DAN NO
#define UNIX NO
#define LINUX YES
#define WINDOWS_ES NO
#define WINDOWS_RB NO
/* Later in the code */
#if ((DAN==1) || (UNIX==YES))
#include <sys/param.h>
#endif
#if ((WINDOWS_ES==YES) || (WINDOWS_RB==YES) || (WINDOWS_TIES==YES))
#include <param.h>
#include <io.h>
#include <ctype.h>
#endif
/* And totally insane hardcoded paths */
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/test/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
/* So on for every platform and combination */
Sure, you can pass -DWHATEVER on the command line. Or -DWHATEVER_ELSE=NO, etc. Maybe for the paths you could do something like
char MasterSkipFile[MAXSTR] = SOME_COMMAND_LINE_DEFINITION;
and then pass
-DSOME_COMMAND_LINE_DEFINITION="/home/whatever/directory/filename"
on the command line.
One thing we used to do is have a generated .h file with these definitions, and generate it with a script. That helped us get rid of a lot of brittle #ifs and #ifdefs
You need to be careful about what you put there, but machine-specific parameters are good candidates - this is how autoconf/automake work.
EDIT: in your case, an example would be to use the generated .h file to define INCLUDE_SYS_PARAM and INCLUDE_PARAM, and in the code itself use:
#ifdef INCLUDE_SYS_PARAM
#include <sys/param.h>
#endif
#ifdef INCLUDE_PARAM
#include <param.h>
#endif
Makes it much easier to port to new platforms - the existence of a new platform doesn't trickle into the code, only to the generated .h file.
Platform specific configuration headers
I'd have a system to generate the platform-specific configuration into a header that is used in all builds. The AutoConf name is 'config.h'; you can see 'platform.h' or 'porting.h' or 'port.h' or other variations on the theme. This file contains the information needed for the platform being built. You can generate the file by copying a version-controlled platform-specific variant to the standard name. You can use a link instead of copying. Or you can run configuration scripts to determine its contents based on what the script finds on the machine.
Default values for configuration parameters
The code:
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
Would be better replaced by:
#ifndef MASTER_SKIP_FILE_PATH
#define MASTER_SKIP_FILE_PATH "/opt/tretools/MasterSkipFile"
#endif
const char MasterSkipFile[] = MASTER_SKIP_FILE_PATH;
Those who want the build in a different location can set the location via:
-DMASTER_SKIP_FILE_PATH='"/ptehome/tregtp/tre1/tretools/PinkElephant"'
Note the use of single and double quotes; try to avoid doing this on the command line with backslashes in the path. You can use a similar default mechanism for all sorts of things:
#ifndef DEFAULTABLE_PARAMETER
#define DEFAULTABLE_PARAMETER default_value
#endif
If you choose your defaults well, this can save a lot of energy.
Relocatable software
I'm not sure about the design of the software that can only be installed in one location. In my book, you need to be able to have the old version 1.12 of the product installed on the machine at the same time as the new 2.1 version, and they should be able to operate independently. A hard-coded path name defeats that.
Parameterize by feature not platform
The key difference between the AutoConf tools and the average alternative system is that the configuration is done based on features, not on platforms. You parameterize your code to identify a feature that you want to use. This is crucial because features tend to appear on platforms other than the original. I look after code where there are lines like:
#if defined(SUN4) || defined(SOLARIS_2) || defined(HP_UX) || \
defined(LINUX) || defined(PYRAMID) || defined(SEQUENT) || \
defined(SEQUENT40) || defined(NCR) ...
#include <sys/types.h>
#endif
It would be much, much better to have:
#ifdef INCLUDE_SYS_TYPES_H
#include <sys/types.h>
#endif
And then on the platforms where it is needed, generate:
#define INCLUDE_SYS_TYPES_H
(Don't take this example header too literally; it is the concept I am trying to get over.)
Treat platform as a bundle of features
As a corollary to the previous point, you do need to detect platform and define the features that are applicable to that platform. This is where you have the platform-specific configuration header which defines the configuration features.
Product features should be enabled in a header
(Elaborating on a comment I made to another answer.)
Suppose you have a bunch of features in the product that need to be included or excluded conditionally. For example:
KVLOCKING
B1SECURITY
C2SECURITY
DYNAMICLOCKS
The relevant code is included when the appropriate define is set:
#ifdef KVLOCKING
...KVLOCKING stuff...
#else
...non-KVLOCKING stuff...
#endif
If you use a source code analysis tool like cscope, then it is helpful if it can show you when KVLOCKING is defined. If the only place where it is defined is in some random Makefiles scattered around the build system (let's assume there are a hundred sub-directories that are used in this), it is hard to tell whether the code is still in use on any of your platforms. If the defines are in a header somewhere - the platform specific header, or maybe a product release header (so version 1.x can have KVLOCKING and version 2.x can include C2SECURITY but 2.5 includes B1SECURITY, etc), then you can see that KVLOCKING code is still in use.
Believe me, after twenty years of development and staff turnover, people don't know whether features are still in use or not (because it is stable and never causes problems - possibly because it is never used). And if the only place to find whether KVLOCKING is still defined is in the Makefiles, then tools like cscope are less helpful - which makes modifying the code more error prone when trying to clean up later.
It's much saner to use:
#if SOMETHING
... from platform to platform, to avoid confusing broken preprocessors. However, any modern compiler should effectively argue your case in the end. If you give more details on your platform, compiler, and preprocessor, you might receive a more concise answer.
Conditional compilation, given the plethora of operating systems and variants thereof, is a necessary evil. #if, #ifdef, etc. are most decidedly not an abuse of the preprocessor, just exercising it as intended.
My preferred way would be to have the build system do the OS detection. In complex cases you'd want to isolate the machine-specific stuff into a single source file, and have completely different source files for the different OSes.
So in this case, you'd have a #include "OS_Specific.h" in that file. There you put the different includes and the definition of MasterSkipFile for this platform. You can select between them by specifying different -I (include path) directories on your compiler command line.
The nice thing about doing it this way is that somebody trying to figure out the code (perhaps debugging) doesn't have to wade through (and possibly be misled by) phantom code for a platform they aren't even running on.
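A sketch of that layout (the directory names are hypothetical): each platform directory holds its own copy of the header, and the -I flag picks which one is found:
/* linux/OS_Specific.h */
#include <sys/param.h>
#define MASTER_SKIP_FILE_PATH "/ptehome/tregrp/tre1/tretools/MasterSkipFile"
/* windows/OS_Specific.h defines the same names differently; */
/* build with: cc -Ilinux ... or cc -Iwindows ... */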
I've seen build systems in which most of the source files started off with something like this:
#include PLATFORM_CONFIG
#include BUILD_CONFIG
and the compiler was kicked off with:
cc -DPLATFORM_CONFIG="linuxconfig.h" -DBUILD_CONFIG="importonlyconfig.h"
(this may need backslash escapes)
This had the effect of letting you separate out the platform settings in one set of files and the configuration settings in another. The platform settings handle library calls that may not exist on one platform, or do not exist in the right form, as well as defining important size-dependent types: things that are platform specific. The build settings handle which features are enabled in the output.
Generalities
I'm a heretic who has been cast out from the Church of the GNU Autotools. Why? Because I like to understand what the hell my tools are doing. And because I've had the experience of trying to combine two components, each of which insisted on a different, incompatible version of autotools being the default version installed on my computer.
I work by creating one .h file or .c file for every combination of platform and significant abstraction. I work hard to define a central .h file that says what the interface is. Often this means I wind up creating a "compatibility layer" that insulates me from differences between platforms. Often I wind up using ANSI Standard C whenever possible, instead of platform-specific functionality.
I sometimes write scripts to generate platform-dependent files. But the scripts are always written by hand and documented, so I know what they do.
I admire Glenn Fowler's nmake and Phong Vo's iffe (if feature exists), which I think are better engineered than the GNU tools. But these tools are part of the AT&T Software Technology suite, and I haven't been able to figure out how to use them without buying into the whole AST way of doing things, which I don't always understand.
Your example
There clearly needs to be
extern char MasterSkipFile[];
in a .h file somewhere, and you can then link against a suitable .o.
The conditional inclusion of the "right set of .h files for the platform" is something I would handle by trying to stick to ANSI C when possible, and when not possible, defining a compatibility layer in a platform-specific .h file. As it is, I can't tell what names the #includes are trying to import, so I can't give more specific advice.
