I'm trying to streamline a large chunk of legacy C code in which, even today, the person who maintains it takes the source files and manually modifies the following section before each build, depending on the target environment.
The example follows, but here's the question first. I'm rusty on my C, but I do recall that using #ifdef is discouraged. Can you offer a better alternative? Also, I think some of this (if not all of it) could be set as environment variables or passed in as parameters; if so, what would be a good way of defining these and then accessing them from the source code?
Here's a snippet of the code I'm dealing with:
#define DAN NO
#define UNIX NO
#define LINUX YES
#define WINDOWS_ES NO
#define WINDOWS_RB NO
/* Later in the code */
#if ((DAN==1) || (UNIX==YES))
#include <sys/param.h>
#endif
#if ((WINDOWS_ES==YES) || (WINDOWS_RB==YES) || (WINDOWS_TIES==YES))
#include <param.h>
#include <io.h>
#include <ctype.h>
#endif
/* And totally insane hardcoded paths */
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/test/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
/* So on for every platform and combination */
Sure, you can pass -DWHATEVER on the command line. Or -DWHATEVER_ELSE=NO, etc. Maybe for the paths you could do something like
char MasterSkipFile[MAXSTR] = SOME_COMMAND_LINE_DEFINITION;
and then pass
-DSOME_COMMAND_LINE_DEFINITION='"/home/whatever/directory/filename"'
on the command line (the inner double quotes are needed so the macro expands to a string literal).
One thing we used to do is have a generated .h file with these definitions, and generate it with a script. That helped us get rid of a lot of brittle #ifs and #ifdefs.
You need to be careful about what you put there, but machine-specific parameters are good candidates - this is how autoconf/automake work.
EDIT: in your case, an example would be to use the generated .h file to define INCLUDE_SYS_PARAM and INCLUDE_PARAM, and in the code itself use:
#ifdef INCLUDE_SYS_PARAM
#include <sys/param.h>
#endif
#ifdef INCLUDE_PARAM
#include <param.h>
#endif
Makes it much easier to port to new platforms - the existence of a new platform doesn't trickle into the code, only to the generated .h file.
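For illustration, the script's output for a Linux build might then be nothing more than this (contents hypothetical):
/* config.h - generated by the configuration script; do not edit by hand */
#define INCLUDE_SYS_PARAM
/* INCLUDE_PARAM deliberately left undefined on this platform */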
Platform specific configuration headers
I'd have a system to generate the platform-specific configuration into a header that is used in all builds. The AutoConf name is 'config.h'; you can see 'platform.h' or 'porting.h' or 'port.h' or other variations on the theme. This file contains the information needed for the platform being built. You can generate the file by copying a version-controlled platform-specific variant to the standard name. You can use a link instead of copying. Or you can run configuration scripts to determine its contents based on what the script finds on the machine.
Default values for configuration parameters
The code:
#if (DAN==YES)
char MasterSkipFile[MAXSTR] = "/home/dp120728/tools/testarea/MasterSkipFile";
#endif
#if (UNIX==YES)
char MasterSkipFile[MAXSTR] = "/home/tregrp/tre1/tretools/MasterSkipFile";
#endif
#if (LINUX==YES)
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
#endif
Would be better replaced by:
#ifndef MASTER_SKIP_FILE_PATH
#define MASTER_SKIP_FILE_PATH "/opt/tretools/MasterSkipFile"
#endif
const char MasterSkipFile[] = MASTER_SKIP_FILE_PATH;
Those who want the build in a different location can set the location via:
-DMASTER_SKIP_FILE_PATH='"/ptehome/tregrp/tre1/tretools/PinkElephant"'
Note the use of single quotes around the double quotes; trying to get the double quotes through the shell with backslashes instead tends to end in a mess. You can use a similar default mechanism for all sorts of things:
#ifndef DEFAULTABLE_PARAMETER
#define DEFAULTABLE_PARAMETER default_value
#endif
If you choose your defaults well, this can save a lot of energy.
Relocatable software
I'm wary of software designed so that it can only be installed in one location. In my book, you need to be able to have the old version 1.12 of the product installed on a machine at the same time as the new 2.1 version, and they should be able to operate independently. A hard-coded path name defeats that.
Parameterize by feature not platform
The key difference between the AutoConf tools and the average alternative system is that the configuration is done based on features, not on platforms. You parameterize your code to identify a feature that you want to use. This is crucial because features tend to appear on platforms other than the original. I look after code where there are lines like:
#if defined(SUN4) || defined(SOLARIS_2) || defined(HP_UX) || \
defined(LINUX) || defined(PYRAMID) || defined(SEQUENT) || \
defined(SEQUENT40) || defined(NCR) ...
#include <sys/types.h>
#endif
It would be much, much better to have:
#ifdef INCLUDE_SYS_TYPES_H
#include <sys/types.h>
#endif
And then on the platforms where it is needed, generate:
#define INCLUDE_SYS_TYPES_H
(Don't take this example header too literally; it is the concept I am trying to get over.)
Treat platform as a bundle of features
As a corollary to the previous point, you do need to detect platform and define the features that are applicable to that platform. This is where you have the platform-specific configuration header which defines the configuration features.
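A hedged sketch of such a header, reusing macro names from the examples in this answer (the contents are illustrative, per the caveat above):
/* port-linux.h - hypothetical platform header, installed as the standard name */
#ifndef PORT_H
#define PORT_H

#define INCLUDE_SYS_TYPES_H   /* <sys/types.h> is available here */
#define INCLUDE_SYS_PARAM_H   /* so is <sys/param.h> */
#define MASTER_SKIP_FILE_PATH "/opt/tretools/MasterSkipFile"

#endif /* PORT_H */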
Product features should be enabled in a header
(Elaborating on a comment I made to another answer.)
Suppose you have a bunch of features in the product that need to be included or excluded conditionally. For example:
KVLOCKING
B1SECURITY
C2SECURITY
DYNAMICLOCKS
The relevant code is included when the appropriate define is set:
#ifdef KVLOCKING
...KVLOCKING stuff...
#else
...non-KVLOCKING stuff...
#endif
If you use a source code analysis tool like cscope, then it is helpful if it can show you when KVLOCKING is defined. If the only place where it is defined is in some random Makefiles scattered around the build system (let's assume there are a hundred sub-directories that are used in this), it is hard to tell whether the code is still in use on any of your platforms. If the defines are in a header somewhere - the platform specific header, or maybe a product release header (so version 1.x can have KVLOCKING and version 2.x can include C2SECURITY but 2.5 includes B1SECURITY, etc), then you can see that KVLOCKING code is still in use.
Believe me, after twenty years of development and staff turnover, people don't know whether features are still in use or not (because it is stable and never causes problems - possibly because it is never used). And if the only place to find whether KVLOCKING is still defined is in the Makefiles, then tools like cscope are less helpful - which makes modifying the code more error prone when trying to clean up later.
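As a concrete (hypothetical) illustration, a product release header for version 2.5 might read:
/* release-2.5.h - feature set for this release; names from the list above */
#define KVLOCKING
#define B1SECURITY      /* 2.5 ships B1 security instead of C2 */
#define DYNAMICLOCKS
Now cscope, grep and human readers all have one authoritative place to look.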
It's much saner to use:
#if SOMETHING
consistently from platform to platform, to avoid confusing broken preprocessors. Any modern compiler should handle either form correctly, though. If you give more details on your platform, compiler and preprocessor, you might receive a more concise answer.
Conditional compilation, given the plethora of operating systems and variants thereof, is a necessary evil. #if, #ifdef, etc. are most decidedly not an abuse of the preprocessor; they exercise it exactly as intended.
My preferred way would be to have the build system do the OS detection. In complex cases you'd want to isolate the machine-specific stuff into a single source file, and have completely different source files for the different OSes.
So in this case, you'd have a #include "OS_Specific.h" in that file. There you put the different includes and the definition of MasterSkipFile for each platform. You select between the variants by specifying different -I (include path) directories on your compiler command line.
The nice thing about doing it this way is that somebody trying to figure out the code (perhaps debugging) doesn't have to wade through (and possibly be misled by) phantom code for a platform they aren't even running on.
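A minimal sketch of one such header, assuming a per-platform directory selected with -I (contents illustrative):
/* linux/OS_Specific.h - chosen by compiling with -Ilinux */
#ifndef OS_SPECIFIC_H
#define OS_SPECIFIC_H

#include <sys/param.h>

#define MAXSTR 256   /* assumed from the question's usage */
char MasterSkipFile[MAXSTR] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";

#endif /* OS_SPECIFIC_H */
Because the header defines (not just declares) MasterSkipFile, it should be included by exactly one source file.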
I've seen build systems in which most of the source files started something off like this:
#include PLATFORM_CONFIG
#include BUILD_CONFIG
and the compiler was kicked off with:
cc -DPLATFORM_CONFIG='"linuxconfig.h"' -DBUILD_CONFIG='"importonlyconfig.h"'
(the exact quoting or backslash escaping depends on your shell)
This had the effect of letting you separate the platform settings into one set of files and the configuration settings into another. The platform settings handle library calls that may not exist on a given platform, or may exist in the wrong form, and define important size-dependent types; in short, things that are platform specific. The build settings control which features are enabled in the output.
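A hypothetical linuxconfig.h under this scheme might hold the platform bits:
/* linuxconfig.h - platform settings (contents illustrative) */
#ifndef LINUXCONFIG_H
#define LINUXCONFIG_H

#define HAVE_STRLCPY 0          /* whether strlcpy() exists on this platform */
typedef unsigned int u32;       /* size-dependent type for this platform */

#endif /* LINUXCONFIG_H */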
Generalities
I'm a heretic who has been cast out from the Church of the GNU Autotools. Why? Because I like to understand what the hell my tools are doing. And because I've had the experience of trying to combine two components, each of which insisted on a different, incompatible version of autotools being the default version installed on my computer.
I work by creating one .h file or .c file for every combination of platform and significant abstraction. I work hard to define a central .h file that says what the interface is. Often this means I wind up creating a "compatibility layer" that insulates me from differences between platforms. Often I wind up using ANSI Standard C whenever possible, instead of platform-specific functionality.
I sometimes write scripts to generate platform-dependent files. But the scripts are always written by hand and documented, so I know what they do.
I admire Glenn Fowler's nmake and Phong Vo's iffe (if feature exists), which I think are better engineered than the GNU tools. But these tools are part of the AT&T Software Technology suite, and I haven't been able to figure out how to use them without buying into the whole AST way of doing things, which I don't always understand.
Your example
There clearly needs to be
extern char MasterSkipFile[];
in a .h file somewhere, and you can then link against a suitable .o.
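A minimal sketch of that arrangement (file and header names hypothetical):
/* paths.h - shared declaration */
extern char MasterSkipFile[];

/* master_skip_linux.c - compiled and linked only into Linux builds */
char MasterSkipFile[] = "/ptehome/tregrp/tre1/tretools/MasterSkipFile";
The build system then picks the right .c (or .o) per platform, and no preprocessor conditionals are needed at all.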
The conditional inclusion of the "right set of .h files for the platform" is something I would handle by trying to stick to ANSI C when possible, and when not possible, defining a compatibility layer in a platform-specific .h file. As it is, I can't tell what names the #includes are trying to import, so I can't give more specific advice.
Related
I am working on an open source C driver for a cheap sensor that is used mostly for Arduino projects. The project is set up in such a way that it is possible to support multiple platforms outside the Arduino ecosystem, like the Raspberry Pi.
The project is set up with a platform.h file, with the intention of having different implementations of this header file, like the example below:
platform.h
platform_arduino.c
platform_rpi.c
platform_windows.c
There is this Stack Overflow post (Cross-Platform C++ code and single header - multiple implementations) that goes into fair depth on how to handle this for C++, but I feel like none of those examples really apply to this C implementation.
I have come up with some solutions like just adding the requirements for each platform at the top of the file.
#if SOME_REQUIREMENT
#include "platform.h"
int8_t t_open(void)
{
// Implementation here
}
#endif //SOME_REQUIREMENT
But this seems like a clunky solution.
It impacts readability of the code.¹
It will probably make debugging conflicting requirements a nightmare.
¹ Many editors (like VS Code) try to gray out code which does not match the active requirements. While I want this most of the time, it is really annoying when working on cross-platform drivers. I could just disable it for the entire project, but in other parts of the project it is useful. I understand that it could probably be solved with a VS Code setting; however, I am asking for alternative methods of selecting the right file/code for the platform because I am interested in seeing what other strategies there are.
Part of the "problem" is that support for Arduino is the primary focus, which means it can't easily be solved with makefile magic. My question is, what are alternative ways of implementing a solution to this problem, that are still readable?
If it cannot be done without makefile magic, then that is an answer too.
For reference, here is a simplified example of the header file and implementation
platform.h
#ifndef PLATFORM_H
#define PLATFORM_H
#include <stdint.h> /* for int8_t */
int8_t t_open(void);
#endif //PLATFORM_H
platform_arduino.c
#include "platform.h"
int8_t t_open(void)
{
// Implementation here
}
this Stack Overflow post (Cross-Platform C++ code and single header - multiple implementations) that goes into fair depth on how to handle this for C++, but I feel like none of those examples really apply to this C implementation.
I don't see why you say that. The first suggestions in the two highest-scoring answers are variations on the idea of using conditional macros, which not only is valid in C, but is a traditional approach. You yourself present an alternative along these lines.
Part of the "problem" is that support for Arduino is the primary focus, which means it can't easily be solved with makefile magic.
I take you to mean that the approach to platform adaptation has to be encoded somehow into the C source, as opposed to being handled via the build system. Frankly, this is an unusual constraint, except inasmuch as it can be addressed by use of the various system-identification macros provided by C compilers of interest.
Even if you don't want to rely specifically on makefiles, you should consider attributing some responsibility to the build system, which you can do even without knowing specifically what build system that is. For example, you can designate macro names, such as for_windows, etc that request builds for non-default platforms. You then leave it to the person building an instance of the driver to figure out how to configure their tools to provide the appropriate macro definition for their needs (which generally is not hard), based on your build documentation.
My question is, what are alternative ways of implementing a solution to this problem, that are still readable?
If the solution needs to be embodied entirely in the C source, then you have three main alternatives:
write code that just works correctly on all platforms, or
perform runtime detection and adaptation, or
use conditional compilation based on macros automatically defined by supported compilers.
If you're prepared to rely on macro definitions supplied by the user at build time, then the last becomes simply
use conditional compilation
Do not dismiss the first out of hand, but it can be a difficult path, and it might not be fully possible for your particular problem (and probably isn't if you're writing a driver or other code for a freestanding implementation).
Runtime adaptation could be viewed as a specific case of code that just works, but what I have in mind for this is a higher level of organization that performs runtime analysis of the host environment and chooses function variants and internal parameters suited to that, as opposed to those choices being made at compile time. This is a real thing that is occasionally done, but it may or may not be viable for your particular case.
On the other hand, conditional compilation is the traditional basis for platform adaptation in C, and the general form does not have the caveat of the other two that it might or might not work in your particular situation. The level of readability and maintainability you achieve this way is a function of the details of how you implement it.
I have come up with some solutions like just adding the requirements for each platform at the top of the file. [...] But this seems like a clunky solution.
If you must include a source file in your build but you don't want anything in it to actually contribute to the target then that's exactly what you must do. You complain that "It will probably make debugging conflicting requirements a nightmare", but to the extent that that's a genuine issue, I think it's not so much a question of syntax as of the whole different code for different platforms plan.
You also complain that the conditional compilation option might be a practical difficulty for you with your choice of development tools. It certainly seems to me that there ought to be good workarounds for that available from your tools and development workflow. But if you must have a workaround grounded only in the C language, then there is one (albeit a bad one): introduce a level of preprocessing indirection. That is, put the conditional compilation directives in a different source file, like so:
platform.c
#if defined(for_windows)
#include "platform_windows.c"
#elif defined(for_rpi)
#include "platform_rpi.c"
#else
#include "platform_arduino.c"
#endif
You then designate platform.c as a file to be built, but not (directly) any of the specific-platform files.
This solves your tool-presentation issue because when you are working on one of the platform-specific .c files, the editor is unlikely to be able to tell whether it would actually be included in a build or not.
Do note well that it is widely considered bad practice to #include files containing function implementations, or those not ending with an extension conventionally designating a header. I don't say otherwise about the above, but I would say that if the whole platform.c contains nothing else, then that's about the least bad variation that I can think of within the category.
Consider the case where I'm using some functionality from the Linux headers exported to user space, such as perf_event_open from <linux/perf_event.h>.
The functionality offered by this API has changed over time, as members have been added to the perf_event_attr, such as perf_event_attr.cap_user_time.
How can I write source that compiles and uses these new functionalities if they are available locally, but falls back gracefully if they aren't and doesn't use them?
In particular, how can I detect in the pre-processor whether this stuff is available?
I've used this perf_event_attr as an example, but my question is a general one because structure members, new structures, definitions and functions are added all the time.
Note that here I'm only considering the case where a process is compiled on the same system that it will run on: if you want to compile on one host and run on another you need a different set of tricks.
Use the macros from /usr/include/linux/version.h:
#include <linux/version.h>
int main() {
#if LINUX_VERSION_CODE <= KERNEL_VERSION(2,6,16)
// ^^^^^^ change for the proper version when `perf_event_attr.cap_user_time` was introduced
// use old interface
#else
// use new interface
// use perf_event_attr.cap_user_time
#endif
}
You might go into this with the following assumptions:
The features available in the header files correspond to those documented for the specific Linux version.
The kernel running during execution corresponds to <linux/version.h> during compilation
Ideally, I suggest not to rely on these two assumptions at all.
The first assumption fails primarily due to backports, e.g. in enterprise Linux versions based on ancient kernels. If you care about supporting different kernel versions, you probably care about those distributions too.
Instead, I recommend utilizing your build system's methods for checking for struct members and include files, e.g. for CMake:
CHECK_STRUCT_HAS_MEMBER("struct perf_event_attr" cap_user_time linux/perf_event.h HAVE_PERF_CAP_USER_TIME)
CHECK_INCLUDE_FILES can also be useful.
The second assumption can fail for many reasons, even if the binary is not moved between systems, e.g. updating the kernel but not recompiling the binary, or simply booting another kernel. Specifically, perf_event_open fails with EINVAL if a reserved bit is set. This allows you to retry with an alternative implementation that does not use the requested feature.
In short, statically check for the feature instead of the version. Dynamically, try and retry the legacy implementation if it failed.
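A hedged sketch of that combination, assuming HAVE_PERF_CAP_USER_TIME comes from the build-system check above, and eliding the feature-specific field assignment:
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

static int open_event(int use_new_feature)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof attr);
    attr.size = sizeof attr;
    attr.type = PERF_TYPE_HARDWARE;
    attr.config = PERF_COUNT_HW_CPU_CYCLES;

#ifdef HAVE_PERF_CAP_USER_TIME
    if (use_new_feature) {
        /* set the newer attribute field(s) here */
    }
#else
    (void) use_new_feature;   /* feature absent from these headers */
#endif
    return (int) syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
}

int open_event_with_fallback(void)
{
    int fd = open_event(1);
    if (fd == -1 && errno == EINVAL)
        fd = open_event(0);   /* running kernel rejected it: retry without */
    return fd;
}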
Just in addition to other answers.
If you're aiming to support both cross-version and cross-distro code, you should also keep in mind that there are distros (CentOS/RHEL) which backport some changes from new kernels to old ones. So you may encounter a situation in which LINUX_VERSION_CODE equals some old kernel version, but there are some changes (new fields in data structures, new functions, etc.) from a recent kernel. In such cases this macro is insufficient.
You can add something like (to avoid preprocessor errors in case it is not a Centos distro):
#ifndef RHEL_RELEASE_CODE
#define RHEL_RELEASE_CODE 0
#endif
#ifndef RHEL_RELEASE_VERSION
#define RHEL_RELEASE_VERSION(x,y) 1
#endif
And use it with > or >= where you need:
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,3,0) || RHEL_RELEASE_CODE > RHEL_RELEASE_VERSION(7,2)
...
for Centos/RHEL custom kernels support.
P.S. Of course, it's necessary to examine the appropriate versions of CentOS/RHEL, and understand when and what exactly has changed in the code sections that affect you.
I am developing embedded software that is meant to run on two or three different families of microcontrollers. For now we have makefiles that read the configuration switches and do the compilation.
The process is getting more and more tedious for both developers and non-developers to stay updated with compile switches and build configurations. I know the Linux kernel uses an ncurses-based tool for generating compile configurations. I am looking for a similar tool, but cross-platform; it should run on Windows and Linux. I know this will still not solve the problem, but it is more appealing to non-developers, and I can quickly share my .config file or compare it with an existing one. The configuration options will be in a specific order, so a diff tool will help here.
Can anyone share their experience with similar project maintenance, or a reference project (embedded, with a common code base for multiple micros)? I just want to know best practices.
PS : Language used C, 8/16 bit micros, no OS just timer based batch scheduler (baremetal)
I have one microcontroller but several projects which get compiled from the same source code. I think my scenario is similar to yours, at least to some extent. My solution was inspired by Linux kernel, as well.
config.h
All source code which needs access to some configuration parameter simply includes a header file called config.h.
config.h consists of just one line:
#include <config/project.h>
project.h
I have several configuration header files, one per project. A project.h consists of macro definitions with values such as true, false, or constants:
#define CONFIG_FOO true
#define CONFIG_BAR false
#define CONFIG_TIME 100
check.c
This file checks configuration parameters for correctness:
- all parameters must be defined, even if not used or meaningful for that project
- unwanted parameter combinations are signalled
- parameter values are constrained.
#if !defined(CONFIG_FOO)
#error CONFIG_FOO not defined
#endif
#if !defined(CONFIG_BAR)
#error CONFIG_BAR not defined
#endif
#if !defined(CONFIG_TIME)
#error CONFIG_TIME not defined
#endif
#if !(CONFIG_FOO ^ CONFIG_BAR)
#error either CONFIG_FOO or CONFIG_BAR should be set
#endif
#if CONFIG_TIME > 250
#error CONFIG_TIME too big
#endif
Makefile
By instructing the compiler to output its preprocessor macros (e.g. gcc -dM -E), it is possible, with a bit of sed, to feed the Makefile the same parameter values provided for a given project.
If you don't find anything else, GNU autotools could make things a bit easier.
When I was doing multi-platform development, I used a solution like the one in my answer here. Have a specific "platform_XXX.h" for each platform, and restrict the conditional compilation to a single master "platform.h" file which selects the right subfile.
I'm working on a C code base that was written specifically for one type of embedded processor. I've written generic 'pseudo object-oriented' code for things like LEDs, GPIO lines and ADCs (using structs, etc.). I have also written a large amount of code that utilizes these 'objects' in a hardware/target-agnostic manner.
We are now tossing another processor type into the mix, and I'd like to keep the current code structure so I can still make use of the higher level libraries. I do, however, need to provide different implementations for the lower level code (LEDs, GPIO, ADCs).
I know #includes in .C files are generally looked down upon, but in this case, is it appropriate? For example:
// led.c
#ifdef TARGET_AVR
#include "led_avr.c"
#elif defined(TARGET_PIC)
#include "led_pic.c"
#else
#error "Unspecified Target"
#endif
If this is inappropriate, what is a better implementation?
Thanks!
Since the linker doesn't care what the name of a source file actually is (it only cares about exported symbols), you can change your linker command line for each target to name the appropriate implementation module (led_avr.c or led_pic.c).
A common way to manage multiple platform source files is to put each set of platform implementation files in their own directory, so you might have avr/led.c and pic/led.c (and avr/gpio.c and pic/gpio.c, etc).
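The shared interface then lives in a single header; a minimal sketch (function names assumed, not from the question):
/* led.h - common interface; avr/led.c and pic/led.c each implement it */
#ifndef LED_H
#define LED_H

void led_init(void);
void led_set(int on);   /* nonzero = on */

#endif /* LED_H */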
It is good. You may use other tricks, like:
#ifdef PROC1
#define MULTI_CPU(a,b) (a)
#else
#define MULTI_CPU(a,b) (b)
#endif
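Hypothetical usage of that trick: a value that differs per CPU can then be written once at the point of use:
#define RX_BUF_SIZE MULTI_CPU(128, 512)   /* small on PROC1, larger otherwise */
static unsigned char rx_buf[RX_BUF_SIZE];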
The more common way to do that, instead of including a C file, is to change the build system (whatever it is) to compile or not compile those certain C files.
When defining macros that headers rely on, such as _FILE_OFFSET_BITS, FUSE_USE_VERSION, _GNU_SOURCE among others, where is the best place to put them?
Some possibilities I've considered include
At the top of any source files that rely on definitions exposed by headers included in that file
Immediately before the include for the relevant header(s)
At the CPPFLAGS level via the compiler (such as -D_FILE_OFFSET_BITS=64), for:
Entire source repo
The whole project
Just the sources that require it
In project headers, which should also include those relevant headers to which the macros apply
Some other place I haven't thought of, but is infinitely superior
A note: Justification by applicability to make, autotools, and other build systems is a factor in my decision.
If the macros affect system headers, they probably ought to go somewhere where they affect every source file that includes those system headers (which includes those that include them indirectly). The most logical place would therefore be on the command line, assuming your build system allows you to set e.g. CPPFLAGS to affect the compilation of every file.
If you use precompiled headers, and have a precompiled header that must therefore be included first in every source file (e.g. stdafx.h for MSVC projects) then you could put them in there too.
For macros that affect self-contained libraries (whether third-party or written by you), I would create a wrapper header that defines the macros and then includes the library header. All uses of the library from your project should then include your wrapper header rather than including the library header directly. This avoids defining macros unnecessarily, and makes it clear that they relate to that library. If there are dependencies between libraries then you might want to make the macros global (in the build system or precompiled header) just to be on the safe side.
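For example, a wrapper for the FUSE header mentioned in the question might look like this (a sketch; use whatever API version your project actually codes against):
/* my_fuse.h - hypothetical wrapper; include this, never <fuse.h> directly */
#ifndef MY_FUSE_H
#define MY_FUSE_H

#define FUSE_USE_VERSION 26   /* the FUSE API version this project targets */
#include <fuse.h>

#endif /* MY_FUSE_H */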
Well, it depends.
Most, I'd define via the command line - in a Makefile or whatever build system you use.
As for _FILE_OFFSET_BITS I really wouldn't define it explicitly, but rather use getconf LFS_CFLAGS and getconf LFS_LDFLAGS.
I would always put them on the command line via CPPFLAGS for the whole project. If you put them any other place, there's a danger that you might forget to copy them into a new source file or include a system header before including the project header that defines them, and this could lead to extremely nasty bugs (like one file declaring a legacy 32-bit struct stat and passing its address to a function in another file which expects a 64-bit struct stat).
BTW, it's really ridiculous that _FILE_OFFSET_BITS=64 still isn't the default on glibc.
Most projects that I've seen used -D command-line options for them. They are there because that eases building the source with different compilers and system headers. If you were to build with a system compiler for another system that didn't need them, or needed a different set of them, then a configure script can easily change the command-line arguments that a makefile passes to the compiler.
It's probably best to do it for the entire program, because some of the flags affect which version of a function gets brought in, or the size/layout of a struct, and mixing those up could cause crazy things if you aren't careful.
They certainly are annoying to keep up with.
For _GNU_SOURCE and the autotools in particular, you could use AC_USE_SYSTEM_EXTENSIONS (citing liberally from the autoconf manual here):
-- Macro: AC_USE_SYSTEM_EXTENSIONS
This macro was introduced in Autoconf 2.60. If possible, enable
extensions to C or Posix on hosts that normally disable the
extensions, typically due to standards-conformance namespace
issues. This should be called before any macros that run the C
compiler. The following preprocessor macros are defined where
appropriate:
_GNU_SOURCE
Enable extensions on GNU/Linux.
__EXTENSIONS__
Enable general extensions on Solaris.
_POSIX_PTHREAD_SEMANTICS
Enable threading extensions on Solaris.
_TANDEM_SOURCE
Enable extensions for the HP NonStop platform.
_ALL_SOURCE
Enable extensions for AIX 3, and for Interix.
_POSIX_SOURCE
Enable Posix functions for Minix.
_POSIX_1_SOURCE
Enable additional Posix functions for Minix.
_MINIX
Identify Minix platform. This particular preprocessor macro
is obsolescent, and may be removed in a future release of
Autoconf.
For _FILE_OFFSET_BITS, you need to call AC_SYS_LARGEFILE and AC_FUNC_FSEEKO:
— Macro: AC_SYS_LARGEFILE
Arrange for 64-bit file offsets, known as large-file support. On some hosts, one must use special compiler options to build programs that can access large files. Append any such options to the output variable CC. Define _FILE_OFFSET_BITS and _LARGE_FILES if necessary.
Large-file support can be disabled by configuring with the --disable-largefile option.
If you use this macro, check that your program works even when off_t is wider than long int, since this is common when large-file support is enabled. For example, it is not correct to print an arbitrary off_t value X with printf("%ld", (long int) X).
The LFS introduced the fseeko and ftello functions to replace their C counterparts fseek and ftell that do not use off_t. Take care to use AC_FUNC_FSEEKO to make their prototypes available when using them and large-file support is enabled.
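As a concrete illustration of that printf caveat, a portable way to print an off_t is to cast it to intmax_t (a C99 sketch):
#include <stdio.h>
#include <stdint.h>     /* intmax_t */
#include <sys/types.h>  /* off_t */

void print_offset(off_t x)
{
    /* %jd takes an intmax_t, which is at least as wide as any off_t */
    printf("%jd\n", (intmax_t) x);
}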
If you are using autoheader to generate a config.h, you could define the other macros you care about using AC_DEFINE or AC_DEFINE_UNQUOTED:
AC_DEFINE([FUSE_VERSION], [28], [FUSE Version.])
The definition will then get passed to the command line or placed in config.h, if you're using autoheader. The real benefit of AC_DEFINE is that it easily allows preprocessor definitions as a result of configure checks and separates system-specific cruft from the important details.
When writing the .c file, #include "config.h" first, then the interface header (e.g., foo.h for foo.c - this ensures that the header has no missing dependencies), then all other headers.
I usually put them as close as practicable to the things that need them, whilst ensuring you don't set them incorrectly.
Related pieces of information should be kept close together to make them easier to identify. A classic example is C's ability (since C99) to allow variable definitions anywhere in the code rather than just at the top of a function:
void something (void) {
// 600 lines of code here
int x = fn(y);
// more code here
}
is a lot better than:
void something (void) {
int x;
// 600 lines of code here
x = fn(y);
// more code here
}
since you don't have to go searching for the type of x in the latter case.
By way of example, if you need to compile a single source file multiple times with different values, you have to do it with the compiler:
gcc -Dmydefine=7 -o binary7 source.c
gcc -Dmydefine=9 -o binary9 source.c
However, if every compilation of that file will use 7, it can be moved closer to the place where it's used:
source.c:
#include <stdio.h>
#define mydefine 7
#include "header_that_uses_mydefine.h"
#define mydefine 7
#include "another_header_that_uses_mydefine.h"
Note that I've done it twice so that it's more localised. This isn't a problem since, if you change only one, the compiler will tell you about it, but it ensures that you know those defines are set for the specific headers.
And, if you're certain that you will never include (for example) bitio.h without first setting BITCOUNT to 8, you can even go so far as to create a bitio8.h file containing nothing but:
#define BITCOUNT 8
#include "bitio.h"
and then just include bitio8.h in your source files.
Global, project-wide constants that are target specific are best put in CCFLAGS in your makefile. Constants you use all over the place can go in appropriate header files which are included by any file that uses them.
For example,
// bool.h - a boolean type for C
#ifndef BOOL_H
#define BOOL_H
typedef int bool_t;
#define TRUE 1
#define FALSE 0
#endif
Then, in some other header,
#include "bool.h"
// blah
Using header files is what I recommend because it allows you to have a code base built by makefiles and other build systems as well as IDE projects such as Visual Studio. This gives you a single point of definition that can be accompanied by comments (I'm a fan of doxygen, which allows you to generate macro documentation).
The other benefit with header files is that you can easily write unit tests to verify that only valid combinations of macros are defined.
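For instance, a compile-time check along these lines (macro names borrowed from the earlier target example, so purely illustrative) can sit next to the header:
/* config_check.h - compile-time sanity checks for the macro set */
#if defined(TARGET_AVR) && defined(TARGET_PIC)
#error "TARGET_AVR and TARGET_PIC are mutually exclusive"
#endif
#if !defined(TARGET_AVR) && !defined(TARGET_PIC)
#error "no target selected"
#endif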