Compute minimal list of macro combinations from source - c-preprocessor

How would one efficiently compute the minimal list of macro-definition combinations that yield the same code? The list of macros is known beforehand.
The objective would be to compile fewer versions of the program (possibly a shader) and thus save some compilation time by knowing which macro combinations are equivalent.
E.g. a program built with MACRO_A and MACRO_B may be the same as a program built with only MACRO_A.
For the sake of simplicity, those macros are either defined or not, and their values don't matter (meaning there is no #if SOMEMACRO).
For example with:
#ifdef A
#ifdef B
// some code
#endif
// some code
#elif defined(C)
// some code
#else
// some code
#endif
A trivial way to generate all the programs would be to compile with every combination of A, B and C, i.e. 2^3 = 8 combinations. However, only the following combinations are really useful (! means the macro is not defined, ~ means the macro doesn't matter and can be either defined or undefined):
( A, B, ~C) (same as A, B, C and A, B, !C)
( A, !B, ~C)
(!A, ~B, C)
(!A, ~B, !C)
Which means compiling only programs with the following definitions is enough:
A B
A
C
No defines
With this list, I know that when asked for a program with the (A, !B, C) combination I can simply use the one built with only A defined.
What tools could be used?
Notes:
Most preprocessors will only give the path for a given set of defines
Perhaps building the control flow graph of the preprocessor would help?
Some work has been done with clang here by J. Trull to add conditional nodes to the AST, but it seems to be aimed at refactoring; I'm not sure it is the best way to do it
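One brute-force way to at least avoid redundant compilations: run only the preprocessor for each combination and group together the combinations whose preprocessed output is identical. Below is a minimal sketch, assuming a POSIX system where cc -E is available; the file name source.c and the djb2 hash are illustrative choices, and hash collisions are theoretically possible:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NMACROS 3
static const char *macros[NMACROS] = { "A", "B", "C" };

/* Preprocess source.c with the macros selected by 'mask' defined, and
   return a cheap fingerprint of the output. -P drops line markers so
   equivalent outputs compare equal even if line numbers differ. */
static unsigned long preprocess_hash(unsigned mask)
{
    char cmd[256] = "cc -E -P source.c";
    for (int i = 0; i < NMACROS; i++)
        if (mask & (1u << i)) {
            strcat(cmd, " -D");
            strcat(cmd, macros[i]);
        }
    FILE *p = popen(cmd, "r");
    if (!p) { perror("popen"); exit(EXIT_FAILURE); }
    unsigned long h = 5381; /* djb2 */
    for (int c; (c = fgetc(p)) != EOF; )
        h = h * 33 + (unsigned char)c;
    pclose(p);
    return h;
}

int main(void)
{
    unsigned long hash[1u << NMACROS];
    for (unsigned m = 0; m < (1u << NMACROS); m++) {
        hash[m] = preprocess_hash(m);
        unsigned first = m;
        for (unsigned k = 0; k < m; k++)
            if (hash[k] == hash[m]) { first = k; break; }
        if (first == m)
            printf("combination %u: compile this one\n", m);
        else
            printf("combination %u: reuse program %u\n", m, first);
    }
    return 0;
}

This still invokes the preprocessor 2^N times, but preprocessing is far cheaper than fully compiling a shader; pruning the enumeration itself would need something like the control-flow-graph idea in the notes above.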

Related

'Reverse' a collection of C preprocessor macros easily

I have a lot of preprocessor macro definitions, like this:
#define FOO 1
#define BAR 2
#define BAZ 3
In the real application, each definition corresponds to an instruction in an interpreter virtual machine. The macros are also not sequential in numbering to leave space for future instructions; there may be a #define FOO 41, then the next one is #define BAR 64.
I'm now working on a debugger for this virtual machine, and need to effectively 'reverse' these preprocessor macros. In other words, I need a function which takes the number and returns the macro name, e.g. an input of 2 returns "BAR".
Of course, I could create a function using a switch myself:
const char* instruction_by_id(int id) {
    switch (id) {
    case FOO:
        return "FOO";
    case BAR:
        return "BAR";
    case BAZ:
        return "BAZ";
    default:
        return "???";
    }
}
However, this would be a nightmare to maintain, since renaming, removing or adding instructions would require this function to be modified too.
Is there another macro which I can use to create a function like this for me, or is there some other approach? If not, is it possible to create a macro to perform this task?
I'm using gcc 6.3 on Windows 10.
You have the wrong approach. Read SICP if you have not read it.
I have a lot of preprocessor macro definitions, like this:
#define FOO 1
#define BAR 2
#define BAZ 3
Remember that C or C++ code can be generated, and it is quite easy to instruct your build automation tool to generate some particular C file (with GNU make or ninja you just add some rule or recipe).
For example, you could use some different preprocessor (like GPP or m4), or some script (e.g. in awk, Python, Guile, etc.), or write your own program (in C, C++, OCaml, etc.) to generate the header file containing these #define-s. And another script or program (or the same one, invoked differently) could generate the C code of instruction_by_id.
Such basic metaprogramming techniques (of generating some or several C files from something higher level but specific) have been used since at least the 1980s (e.g. with yacc or RPCGEN). The C preprocessor facilitates that with its #include directive (since you can even include lines inside some function body, etc...). Actually, the idea that code is data (and proof) and data is code is even older (Church-Turing thesis, Curry-Howard correspondence, Halting problem). The Gödel, Escher, Bach book is very entertaining....
For example, you could decide to have a textual file opcodes.txt (or even some sqlite database containing stuff....) like
# ignore lines starting with a hash sign
FOO 1
BAR 2
and have two small awk or Python scripts (or two tiny C specialized programs), one generating the #define-s (into opcode-defines.h) and another generating the body of instruction_by_id (into opcode-instr.inc). Then you need to adapt your Makefile to generate these, and put #include "opcode-defines.h" inside some global header, and have
const char* instruction_by_id(int id) {
    switch (id) {
#include "opcode-instr.inc"
    default: return "???";
    }
}
this would be a nightmare to maintain,
Not so with such a metaprogramming approach. You'll just maintain opcodes.txt and the scripts using it, and you express a given "knowledge element" (the relation of FOO to 1) only once (in a single line of opcodes.txt). Of course you need to document that (at the very least, with comments in your Makefile).
Metaprogramming from some higher-level, declarative formalization is a very powerful paradigm. In France, J. Pitrat pioneered it since the 1960s (and, now retired, he writes an interesting blog today). In the US, J. McCarthy and the Lisp community did likewise.
For an entertaining talk, see Liam Proven's FOSDEM 2018 talk, The circuit less traveled.
Large software systems use that metaprogramming approach quite often. For example, the GCC compiler has about a dozen C++ code generators (in total, they emit more than a million lines of C++).
Another way of looking at such an approach is the idea of domain-specific languages that could be compiled to C. If you use an operating system providing dynamic loading, you can even write a program emitting C code, forking a process to compile it into some plugin, then loading that plugin (on POSIX or Linux, with dlopen). Interestingly, computers are now fast enough to enable such an approach in an interactive application (in some sort of REPL): you can emit a C file of a few thousand lines, compile it into some .so shared object file, and dlopen that, in a fraction of second. You could also use JIT-compiling libraries like GCCJIT or LLVM to generate code at runtime. You could embed an interpreter (like Lua or Guile) into your program.
BTW, metaprogramming is one of the reasons why basic compilation techniques should be known by most developers (and not only people in the compiler business); another reason is that parsing problems are very common. So read the Dragon Book.
Be aware of Greenspun's tenth rule. It is much more than a joke, actually a profound truth about large software.
In a similar case I've resorted to defining a text file format that defines the instructions, and writing a program to read this file and write out the C source of the actual instruction definitions and the C source of functions like your instruction_by_id(). This way you only need to maintain the text file.
As awesome as general code generation is, I’m surprised that nobody mentioned that (if you relax your problem definition just a bit) the C preprocessor is perfectly capable of generating the necessary code, using a technique called X macros. In fact every simple bytecode VM in C that I’ve seen uses this approach.
The technique works as follows. First, there is a file (call it insns.h) containing the authoritative list of instructions,
INSN(FOO, 1)
INSN(BAR, 2)
INSN(BAZ, 3)
or alternatively a macro in some other header containing the same,
#define INSNS \
INSN(FOO, 1) \
INSN(BAR, 2) \
INSN(BAZ, 3)
whichever is more convenient for you. (I’ll use the first option in the following.) Note that INSN is not defined anywhere. (Traditionally it would be called X, thus the name of the technique.) Wherever you want to loop over your instructions, define INSN to generate the code you want, include insns.h, then undefine INSN again.
In your disassembler, write
const char *instruction_by_id(int id) {
    switch (id) {
#define INSN(NAME, VALUE) \
    case NAME: return #NAME;
#include "insns.h" /* or just INSNS if you use a macro */
#undef INSN
    default: return "???";
    }
}
using the prefix stringification operator # to turn names-as-identifiers into names-as-string-literals.
You obviously can’t define the constants this way, because macros cannot define other macros in the C preprocessor. However, if you don’t insist that the instruction constants be preprocessor constants, there’s a different perfectly serviceable constant facility in the C language: enumerations. Whether or not you use an enumerated type, the enumerators defined inside it are regular integer constants from the point of view of the compiler (though not the preprocessor—you cannot use #ifdef with them, for example). So, using an anonymous enumeration type, define your constants like this:
enum {
#define INSN(NAME, VALUE) \
    NAME = VALUE,
#include "insns.h" /* or just INSNS if you use a macro */
#undef INSN
    NINSNS /* C89 doesn’t allow trailing commas in enumerations (but C99+ does), and you may find this constant useful in any case */
};
If you want to statically initialize an array indexed by your bytecodes, you’ll have to use C99 designated initializers {[FOO] = foovalue, [BAR] = barvalue, /* ... */} whether or not you use X macros. However, if you don’t insist on assigning custom codes to your instructions, you can eliminate VALUE from the above and have the enumeration assign consecutive codes automatically, and then the array can be simply initialized in order, {foovalue, barvalue, /* ... */}. As a bonus, NINSNS above then becomes equal to the number of the instructions and the size of any such array, which is why I called it that.
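For instance, combining the enumeration above with C99 designated initializers yields a name table indexed by opcode, still driven by the same insns.h (a sketch; slots with no instruction are zero-initialized, i.e. NULL):

static const char *const insn_names[] = {
#define INSN(NAME, VALUE) [NAME] = #NAME,
#include "insns.h"
#undef INSN
};

Lookup then becomes just insn_names[id], with a NULL check for unassigned codes.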
There are more tricks you can use here. For example, if some instructions have variants for several data types, the instruction list X macro can call the type list X macro to generate the variants automatically. (The somewhat ugly second option of storing the X macro list in a large macro and not an include file may be more handy here.) The INSN macro may take additional arguments such as the mode name, which would be ignored in the code list but used to call the appropriate decoding routine in the disassembler. You can use the token-pasting operator ## to add prefixes to the names of the constants, as in INSN_ ## NAME, to generate INSN_FOO, INSN_BAR, etc. And so on.
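As a concrete sketch of that last trick, prefixed constants fall out of the same list:

enum {
#define INSN(NAME, VALUE) INSN_ ## NAME = VALUE,
#include "insns.h"
#undef INSN
};
/* expands to INSN_FOO = 1, INSN_BAR = 2, INSN_BAZ = 3, */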

Can C macros change macro keyword

Macros make it easy to alias keywords in C, but can they be used to change the preprocessing directives themselves, so that instead of:
#include <stdlib.h>
#define se if
one may write
#inkludu <stdlib.h>
#difinu se if
In other words, can preprocessing directives be aliased, preferably outside of the code itself, for example with a compiler argument such as gcc's -D?
A simple test as the following will fail:
#define difinu define
#difinu valoro 2
int main() {
int aĵo = valoro;
return 0;
}
with the following error:
% clang makro.c -o makro
makro.c:2:2: error: invalid preprocessing directive
#difinu valoro 2
^
makro.c:5:16: error: use of undeclared identifier 'valoro'
int aĵo = valoro;
^
2 errors generated.
No. Macros do not change the way preprocessor directives are handled (though which directives are processed can of course depend on conditional directives like #if). In particular, the following is wrong:
///WRONG CODE
#define inc include
/// the following directive is not recognized as an include
#inc <stdio.h>
BTW, having
#define se if
is really a bad habit, even if it is possible. It makes your code se (x<0) {printf("negative x=%d\n", x);} much more difficult to read.
Consider perhaps preprocessing your source with some other preprocessor like m4 or GPP. This amounts to generating your C (or C++) code from something else.
My feeling is that metaprogramming is often a good idea: you write some specialized program which would e.g. emit C or C++ code (and you improve your build procedure, e.g. your Makefile, accordingly). But you might design a real C or C++ code generator (which would work on and process some kind of AST). Parser generators (incorrectly known as compiler-compilers) like ANTLR or bison are a good example of this. And Qt has moc.
See also this & that answers to related questions.
Don't forget to read several textbooks (notably related to compilers) before attempting your own code generator (or domain specific language implementations).
On a historical note, however, please note that yes, some time ago there were compilers that made it possible to alias the directives themselves; proof is the following entry from IOCCC 1985, which evidently compiled happily on a VAX 780 / 4.2BSD in those days:
http://ioccc.org/1985/lycklama/lycklama.c
which starts with:
#define o define
#o ___o write
#o ooo (unsigned)
Localized languages do exist. Algol 68 (available under Linux) is such a language, a very old one (1968) that is, by the way, one of the superior languages in existence:
IF x < y THEN x ELSE y FI := 3;
( x < y | x | y ) := 3;
IF x < y THEN sin ELSE cos FI(3.14)
In your case I would write an additional preprocessor, say from .eocpp and .eohpp to .cpp and .hpp. It could transliterate the special characters, or even apply a dictionary translation.
"#define" is the keyword to pre processor for "MACRO substitution ( text replacement) for compilation/compiler". consider code compilation stages in c, 'pre processor->compiler->assembler->linker->loader'..
so, when you compile this code, pre processor trying to search the keyword "#difinu" which is not present.so you are getting the error from preprocessor stage itself.
moreover "#define" is single keyword, how can you expect pre processor to treat this as "#"+"define" . For example
#define game olympic
main() {
    int abcgame = 10; // will it become "abcolympic" ??
    return;
}

C Preprocessor: Own implementation for __COUNTER__

I'm currently using the __COUNTER__ macro in my C library code to generate unique integer identifiers. It works nicely, but I see two issues:
It's not part of any C or C++ standard.
Independent code that also uses __COUNTER__ might get confused.
I thus wish to implement an equivalent to __COUNTER__ myself.
Alternatives that I'm aware of, but do not want to use:
__LINE__ (because multiple macros per line wouldn't get unique ids)
BOOST_PP_COUNTER (because I don't want a boost dependency)
BOOST_PP_COUNTER proves that this can be done, even though other answers claim it is impossible.
In essence, I'm looking for a header file "mycounter.h", such that
#include "mycounter.h"
__MYCOUNTER__
__MYCOUNTER__ __MYCOUNTER__
__MYCOUNTER__
will be preprocessed by gcc -E to
(...)
0
1 2
3
without using the built-in __COUNTER__.
Note: Earlier, this question was marked as a duplicate of this, which deals with using __COUNTER__ rather than avoiding it.
You can't implement __COUNTER__ directly. The preprocessor is purely functional - no state changes. A hidden counter is inherently impossible in such a system. (BOOST_PP_COUNTER does not prove what you want can be done - it relies on #include and is therefore one-per-line only - may as well use __LINE__. That said, the implementation is brilliant, you should read it anyway.)
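To make that include-based restriction concrete, here is a minimal sketch of such a counter (hypothetical file names, only four values; BOOST_PP_COUNTER chains many more slots and stores the value in digit macros). The include file must not have an include guard:

/* mycounter.h - defines the starting value. */
#define MY_COUNTER 0

/* mycounter_inc.h - each #include of this file bumps MY_COUNTER by one,
   burning one "slot" macro per increment. */
#if !defined(MY_COUNTER_SLOT_0)
# define MY_COUNTER_SLOT_0
# undef  MY_COUNTER
# define MY_COUNTER 1
#elif !defined(MY_COUNTER_SLOT_1)
# define MY_COUNTER_SLOT_1
# undef  MY_COUNTER
# define MY_COUNTER 2
#elif !defined(MY_COUNTER_SLOT_2)
# define MY_COUNTER_SLOT_2
# undef  MY_COUNTER
# define MY_COUNTER 3
#endif

Since each increment requires its own #include line, this gives at most one new value per line, which is exactly the limitation described above.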
What you can do is refactor your metaprogram so that the counter could be applied to the input data by a pure function. e.g. using good ol' Order:
#include <order/interpreter.h>
#define ORDER_PP_DEF_8map_count \
ORDER_PP_FN(8fn(8L, 8rec_mc(8L, 8nil, 0)))
#define ORDER_PP_DEF_8rec_mc \
ORDER_PP_FN(8fn(8L, 8R, 8C, \
8if(8is_nil(8L), \
8R, \
8let((8H, 8seq_head(8L)) \
(8T, 8seq_tail(8L)) \
(8D, 8plus(8C, 1)), \
8if(8is_seq(8H), \
8rec_mc(8T, 8seq_append(8R, 8seq_take(1, 8L)), 8C), \
8rec_mc(8T, 8seq_append(8R, 8seq(8C)), 8D) )))))
ORDER_PP (
8map_count(8seq( 8seq(8(A)), 8true, 8seq(8(C)), 8true, 8true )) //((A))(0)((C))(1)(2)
)
(recurses down the list, leaving sublist elements where they are and replacing non-list elements - represented by 8false - with an incrementing counter variable)
I assume you don't actually want to simply drop __COUNTER__ values at the program toplevel, so if you can place the code into which you need to weave __COUNTER__ values inside a wrapper macro that splits it into some kind of sequence or list, you can then feed the list to a pure function similar to the example.
Of course a metaprogramming library capable of expressing such code is going to be significantly less portable and maintainable than __COUNTER__ anyway. __COUNTER__ is supported by Intel, GCC, Clang and MSVC. (not everyone, e.g. pcc doesn't have it, but does anyone even use that?) Arguably if you demonstrate the feature in use in real code, it makes a stronger case to the standardisation committee that __COUNTER__ should become part of the next C standard.
You are confusing two different things:
1 - The preprocessor, which handles #define, #include and the like. It works only at the text level (i.e. on character sequences) and has very limited computing capabilities; so limited that it cannot implement __COUNTER__. The preprocessor's work consists only of macro expansion and file inclusion. The crucial point is that it occurs before compilation even starts.
2 - The C++ language, and in particular its template (meta)programming facility, which can be used to compute things during the compilation phase. It is indeed Turing complete, but as I already said, compilation starts after preprocessing.
So what you are asking is not doable in standard C or C++. To solve this problem, Boost implements its own preprocessing machinery, going beyond the standard preprocessor, with much more computing capability. In particular, it is possible to build an analogue of __COUNTER__ with it.
This small header of mine contains my own implementation of a C preprocessor counter (it uses a slightly different syntax).

C macro processing

I'm thinking about the best way to write a C define processor that would be able to handle macros. Unfortunately nothing intelligent comes to my mind.
It should behave exactly like the one in C, so it handles expressions like this:
#define max(a, b) (a > b ? a : b)
printf("%d\n", max(a, b));
Or this:
#define F 10
#define max(a, b) (a > b ? a : b)
printf("%d\n", max(a, F));
I know about the install and lookup functions from K&R2; what else do I need for replacing text inside parentheses?
Does anyone have any advice or some pseudo-code maybe?
I know it's a complex task, but still, what would be the best possible way to do it?
Macro processors are very interesting but can become a difficult beast to tame (think about recursive expansions, for example).
You can look at the implementation of already existing macro processors like M4 (http://www.scs.stanford.edu/~reddy/links/gnu/m4.pdf).
In very general terms you will need:
a parser that first extracts the macro definitions from your files (deleting them from the file, of course)
another parser that identifies where macros need to be expanded and performs the expansion (e.g. you will want to skip strings and comments!)
I think it's a very interesting exercise. The proper data structure to handle all this is not trivial.
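As a starting point for that second parser, here is a minimal sketch of the argument-substitution core: it replaces parameter names in one macro body with the corresponding argument text. A real preprocessor additionally needs tokenization (skipping strings and comments), rescanning of the result, recursion control, and the # and ## operators:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Write 'body' to stdout, substituting args[i] for each whole-identifier
   occurrence of params[i]. */
static void expand(const char *body,
                   const char *params[], const char *args[], int nparams)
{
    const char *p = body;
    while (*p) {
        if (isalpha((unsigned char)*p) || *p == '_') {
            const char *start = p;
            while (isalnum((unsigned char)*p) || *p == '_')
                p++;
            int len = (int)(p - start);
            int matched = 0;
            for (int i = 0; i < nparams; i++)
                if ((int)strlen(params[i]) == len &&
                    strncmp(start, params[i], (size_t)len) == 0) {
                    fputs(args[i], stdout); /* parameter: substitute */
                    matched = 1;
                    break;
                }
            if (!matched)
                printf("%.*s", len, start); /* ordinary identifier: copy */
        } else {
            putchar(*p++); /* punctuation, spaces, digits: copy */
        }
    }
    putchar('\n');
}

int main(void)
{
    /* #define max(a, b) (a > b ? a : b), invoked as max(x, F) */
    const char *params[] = { "a", "b" };
    const char *args[]   = { "x", "F" };
    expand("(a > b ? a : b)", params, args, 2); /* prints (x > F ? x : F) */
    return 0;
}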
This is a pattern-matching problem: take a look at regular expressions to start with, then, once you've grasped the theory, move on to reading about lexers.
A regular expression basically matches a string against a predefined pattern.
Some regexp (short for regular expression) software/libraries:
- Boost.Regex
- GNU C library regex
- PCRE
And a lexer is a piece of software that does something with the matched text, for example, replacing that piece of text with some other piece of text, basically what you seem to need.
Some known lexers:
- flex
- Boost.Wave
Two suggestions:
use Boost Wave (http://www.boost.org/doc/libs/1_40_0/libs/wave/index.html)
use the preprocessor that comes with your compiler
i.e. "don't try this at home".

What are C macros useful for?

I have written a little bit of C, and I can read it well enough to get a general idea of what it is doing, but every time I have encountered a macro it has thrown me completely. I end up having to remember what the macro is and substitute it in my head as I read. The ones that I have encountered that were intuitive and easy to understand were always like little mini functions, so I always wondered why they weren't just functions.
I can understand the need to define different build types for debug or cross platform builds in the preprocessor but the ability to define arbitrary substitutions seems to be useful only to make an already difficult language even more difficult to understand.
Why was such a complex preprocessor introduced for C? And does anyone have an example of using it that will make me understand why it still seems to be used for purposes other than simple if #debug style conditional compilations?
Edit:
Having read a number of answers I still just don't get it. The most common answer is to inline code. If the inline keyword doesn't do it then either it has a good reason not to, or the implementation needs fixing. I don't understand why a whole different mechanism is needed that means "really inline this code" (aside from the code being written before inline was around). I also don't understand the idea mentioned that "if it's too silly to be put in a function". Surely any piece of code that takes an input and produces an output is best put in a function. I think I may not be getting it because I am not used to the micro-optimisations of writing C, but the preprocessor just feels like a complex solution to a few simple problems.
I end up having to remember what the macro is and substitute it in my head as I read.
That seems to reflect poorly on the naming of the macros. I would assume you wouldn't have to emulate the preprocessor if it were a log_function_entry() macro.
The ones that I have encountered that were intuitive and easy to understand were always like little mini functions, so I always wondered why they weren't just functions.
Usually they should be, unless they need to operate on generic parameters.
#define max(a,b) ((a)<(b)?(b):(a))
will work on any type with an < operator.
More than just functions, macros let you perform operations using the symbols in the source file. That means you can create a new variable name, or reference the source file and line number the macro is on.
In C99, macros also allow you to call variadic functions such as printf
#define log_message(guard, format, ...) \
    if (guard) printf("%s:%d: " format "\n", __FILE__, __LINE__, __VA_ARGS__);
log_message(foo == 7, "x %d", x)
Here the format works like printf's. If the guard is true, it outputs the message along with the file and line number that printed it. If it were a function call, it would not know the file and line you called it from, and using vprintf would be a bit more work.
This excerpt pretty much sums up my view on the matter, by comparing several ways that C macros are used, and how to implement them in D.
copied from DigitalMars.com
Back when C was invented, compiler technology was primitive. Installing a text macro preprocessor onto the front end was a straightforward and easy way to add many powerful features. The increasing size & complexity of programs have illustrated that these features come with many inherent problems. D doesn't have a preprocessor; but D provides a more scalable means to solve the same problems.
Macros
Preprocessor macros add powerful features and flexibility to C. But they have a downside:
Macros have no concept of scope; they are valid from the point of definition to the end of the source. They cut a swath across .h files, nested code, etc. When #include'ing tens of thousands of lines of macro definitions, it becomes problematical to avoid inadvertent macro expansions.
Macros are unknown to the debugger. Trying to debug a program with symbolic data is undermined by the debugger only knowing about macro expansions, not the macros themselves.
Macros make it impossible to tokenize source code, as an earlier macro change can arbitrarily redo tokens.
The purely textual basis of macros leads to arbitrary and inconsistent usage, making code using macros error prone. (Some attempt to resolve this was introduced with templates in C++.)
Macros are still used to make up for deficits in the language's expressive capability, such as for "wrappers" around header files.
Here's an enumeration of the common uses for macros, and the corresponding feature in D:
Defining literal constants:
The C Preprocessor Way
#define VALUE 5
The D Way
const int VALUE = 5;
Creating a list of values or flags:
The C Preprocessor Way
int flags;
#define FLAG_X 0x1
#define FLAG_Y 0x2
#define FLAG_Z 0x4
...
flags |= FLAG_X;
The D Way
enum FLAGS { X = 0x1, Y = 0x2, Z = 0x4 };
FLAGS flags;
...
flags |= FLAGS.X;
Setting function calling conventions:
The C Preprocessor Way
#ifndef _CRTAPI1
#define _CRTAPI1 __cdecl
#endif
#ifndef _CRTAPI2
#define _CRTAPI2 __cdecl
#endif
int _CRTAPI2 func();
The D Way
Calling conventions can be specified in blocks, so there's no need to change it for every function:
extern (Windows)
{
int onefunc();
int anotherfunc();
}
Simple generic programming:
The C Preprocessor Way
Selecting which function to use based on text substitution:
#ifdef UNICODE
int getValueW(wchar_t *p);
#define getValue getValueW
#else
int getValueA(char *p);
#define getValue getValueA
#endif
The D Way
D enables declarations of symbols that are aliases of other symbols:
version (UNICODE)
{
int getValueW(wchar[] p);
alias getValueW getValue;
}
else
{
int getValueA(char[] p);
alias getValueA getValue;
}
There are more examples on the DigitalMars website.
They are a (simpler) programming language on top of C, so they are useful for doing metaprogramming at compile time... in other words, you can write macro code that generates C code in fewer lines, and in less time, than it would take to write it directly in C.
They are also very useful for writing "function-like" expressions that are "polymorphic" or "overloaded"; e.g. a max macro defined as:
#define max(a,b) ((a)>(b)?(a):(b))
is useful for any numeric type; and in C you could not write:
int max(int a, int b) {return a>b?a:b;}
float max(float a, float b) {return a>b?a:b;}
double max(double a, double b) {return a>b?a:b;}
...
even if you wanted, because you cannot overload functions.
And that's not to mention conditional compilation and file inclusion (which are also part of the macro language)...
Macros allow someone to modify the program behavior during compilation time. Consider this:
C constants allow fixing program behavior at development time
C variables allow modifying program behavior at execution time
C macros allow modifying program behavior at compilation time
"At compilation time" means that unused code won't even go into the binary, and that the build process can modify the values, as long as it's integrated with the macro preprocessor. Example: make ARCH=arm (assuming the build forwards the macro definition as cc -DARCH=arm)
Simple examples:
(from glibc limits.h, defining the largest value of long)
#if __WORDSIZE == 64
#define LONG_MAX 9223372036854775807L
#else
#define LONG_MAX 2147483647L
#endif
This checks at compile time (via the __WORDSIZE define) whether we're compiling for 32 or 64 bits. With a multilib toolchain, the -m32 and -m64 parameters automatically change the word size.
(POSIX version request)
#define _POSIX_C_SOURCE 200809L
This requests POSIX 2008 support at compilation time. The standard library may support many (incompatible) standards, but with this definition it will provide the correct function prototypes (for example: getline(), no gets(), etc.). If the library doesn't support the standard, it can emit an #error at compile time instead of crashing during execution.
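For example, a program can check for the requested support and fail fast at compile time (a sketch; _POSIX_VERSION is provided by <unistd.h> on POSIX systems):

/* Must come before any standard header is included. */
#define _POSIX_C_SOURCE 200809L
#include <unistd.h>
#if !defined(_POSIX_VERSION) || _POSIX_VERSION < 200809L
# error "this program requires POSIX.1-2008"
#endif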
(hardcoded path)
#ifndef LIBRARY_PATH
#define LIBRARY_PATH "/usr/lib"
#endif
This defines a hardcoded directory at compilation time. It could be changed with -DLIBRARY_PATH=/home/user/lib, for example. If that were a const char *, how would you configure it during compilation?
(pthread.h, complex definitions at compile time)
# define PTHREAD_MUTEX_INITIALIZER \
{ { 0, 0, 0, 0, 0, 0, { 0, 0 } } }
Large pieces of text that otherwise couldn't be abbreviated can be declared this way (always at compile time). It's not possible to do this with functions or constants (at compile time).
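For instance, that pthread initializer enables fully static setup, with no run-time initialization call at all:

#include <pthread.h>

/* The mutex is usable from program start; no pthread_mutex_init() needed. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;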
To avoid really complicating things, and to avoid suggesting poor coding styles, I won't give an example of code that compiles on different, incompatible operating systems. Use your cross-build system for that; still, it should be clear that the preprocessor allows this without help from the build system, and without breaking compilation because of absent interfaces.
Finally, think about the importance of conditional compilation on embedded systems, where processor speed and memory are limited and systems are very heterogeneous.
Now, if you ask, is it possible to replace all macro constant definitions and function calls with proper definitions? The answer is yes, but it won't simply make the need for changing program behavior during compilation go away. The preprocessor would still be required.
Remember that macros (and the pre-processor) come from the earliest days of C. They used to be the ONLY way to do inline 'functions' (because, of course, inline is a very recent keyword), and they are still the only way to FORCE something to be inlined.
Also, macros are the only way you can do such tricks as inserting the file and line into string constants at compile time.
These days, many of the things that macros used to be the only way to do are better handled through newer mechanisms. But they still have their place, from time to time.
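As a sketch of the file-and-line trick mentioned above: two levels of macros are needed so that __LINE__ is expanded before being stringified, after which adjacent string literals concatenate at compile time:

#include <stdio.h>

#define STR2(x) #x
#define STR(x)  STR2(x)
#define WHERE   __FILE__ ":" STR(__LINE__)

int main(void)
{
    fputs("reached " WHERE "\n", stdout); /* e.g. "reached demo.c:9" */
    return 0;
}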
Apart from inlining for efficiency and conditional compilation, macros can be used to raise the abstraction level of low-level C code. C doesn't really insulate you from the nitty-gritty details of memory and resource management and exact layout of data, and supports very limited forms of information hiding and other mechanisms for managing large systems. With macros, you are no longer limited to using only the base constructs in the C language: you can define your own data structures and coding constructs (including classes and templates!) while still nominally writing C!
Preprocessor macros actually offer a Turing-complete language executed at compile time. One of the impressive (and slightly scary) examples of this is over on the C++ side: the Boost Preprocessor library uses the C99/C++98 preprocessor to build (relatively) safe programming constructs which are then expanded to whatever underlying declarations and code you input, whether C or C++.
In practice, I'd recommend regarding preprocessor programming as a last resort, when you don't have the latitude to use high level constructs in safer languages. But sometimes it's good to know what you can do if your back is against the wall and the weasels are closing in...!
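As a small illustration of raising the abstraction level (a hypothetical sketch, not from any particular library): a macro can stamp out type-specialized declarations, template-style, while still being plain C:

/* Declare a pair type for any element type T. */
#define DEFINE_PAIR(T) typedef struct { T first, second; } pair_##T

DEFINE_PAIR(int);    /* defines type pair_int */
DEFINE_PAIR(double); /* defines type pair_double */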
From Computer Stupidities:
I've seen this code excerpt in a lot of freeware gaming programs for UNIX:
/*
* Bit values.
*/
#define BIT_0 1
#define BIT_1 2
#define BIT_2 4
#define BIT_3 8
#define BIT_4 16
#define BIT_5 32
#define BIT_6 64
#define BIT_7 128
#define BIT_8 256
#define BIT_9 512
#define BIT_10 1024
#define BIT_11 2048
#define BIT_12 4096
#define BIT_13 8192
#define BIT_14 16384
#define BIT_15 32768
#define BIT_16 65536
#define BIT_17 131072
#define BIT_18 262144
#define BIT_19 524288
#define BIT_20 1048576
#define BIT_21 2097152
#define BIT_22 4194304
#define BIT_23 8388608
#define BIT_24 16777216
#define BIT_25 33554432
#define BIT_26 67108864
#define BIT_27 134217728
#define BIT_28 268435456
#define BIT_29 536870912
#define BIT_30 1073741824
#define BIT_31 2147483648
A much easier way of achieving this is:
#define BIT_0 0x00000001
#define BIT_1 0x00000002
#define BIT_2 0x00000004
#define BIT_3 0x00000008
#define BIT_4 0x00000010
...
#define BIT_28 0x10000000
#define BIT_29 0x20000000
#define BIT_30 0x40000000
#define BIT_31 0x80000000
An easier way still is to let the compiler do the calculations:
#define BIT_0 (1)
#define BIT_1 (1 << 1)
#define BIT_2 (1 << 2)
#define BIT_3 (1 << 3)
#define BIT_4 (1 << 4)
...
#define BIT_28 (1 << 28)
#define BIT_29 (1 << 29)
#define BIT_30 (1 << 30)
#define BIT_31 (1 << 31)
But why go to all the trouble of defining 32 constants? The C language also has parameterized macros. All you really need is:
#define BIT(x) (1 << (x))
Anyway, I wonder if the guy who wrote the original code used a calculator or just computed it all out on paper.
That's just one possible use of Macros.
I will add to what's already been said.
Because macros work by text substitution, they allow you to do very useful things which wouldn't be possible using functions.
Here a few cases where macros can be really useful:
/* Get the number of elements in array 'A'. */
#define ARRAY_LENGTH(A) (sizeof(A) / sizeof(A[0]))
This is a very popular and frequently used macro. It is very handy when, for example, you need to iterate through an array.
#include <stdio.h>

int main(void)
{
    int a[] = {1, 2, 3, 4, 5};
    int i;

    for (i = 0; i < ARRAY_LENGTH(a); ++i) {
        printf("a[%d] = %d\n", i, a[i]);
    }
    return 0;
}
Here it doesn't matter if another programmer adds five more elements to a in the declaration. The for-loop will always iterate through all elements.
The C library's functions to compare memory and strings are quite ugly to use.
You write:
char *str = "Hello, world!";
if (strcmp(str, "Hello, world!") == 0) {
    /* ... */
}
or
char *str = "Hello, world!";
if (!strcmp(str, "Hello, world!")) {
    /* ... */
}
To check if str points to "Hello, world". I personally think that both these solutions look quite ugly and confusing (especially !strcmp(...)).
Here are two neat macros some people (including I) use when they need to compare strings or memory using strcmp/memcmp:
/* Compare strings */
#define STRCMP(A, o, B) (strcmp((A), (B)) o 0)
/* Compare memory */
#define MEMCMP(A, o, B) (memcmp((A), (B)) o 0)
Now you can write the code like this:
char *str = "Hello, world!";
if (STRCMP(str, ==, "Hello, world!")) {
    /* ... */
}
Here the intention is a lot clearer!
These are cases where macros are used for things functions cannot accomplish. Macros should not be used to replace functions, but they have other good uses.
One of the case where macros really shine is when doing code-generation with them.
I used to work on an old C++ system that used a plugin system with its own way to pass parameters to the plugins (using a custom map-like structure). Some simple macros let us deal with this quirk and allowed us to use real C++ classes and functions with normal parameters in the plugins without too many problems, all the glue code being generated by macros.
Given the comments in your question, you may not fully appreciate that calling a function can entail a fair amount of overhead. The parameters and key registers may have to be copied to the stack on the way in, and the stack unwound on the way out. This was particularly true of the older Intel chips. Macros let the programmer keep the abstraction of a function (almost), but avoid the costly overhead of a function call. The inline keyword is advisory, but the compiler may not always get it right. The glory and peril of 'C' is that you can usually bend the compiler to your will.
In your bread-and-butter, day-to-day application programming this kind of micro-optimization (avoiding function calls) is generally worse than useless, but if you are writing a time-critical function called by the kernel of an operating system, then it can make a huge difference.
Unlike regular functions, you can do control flow (if, while, for,...) in macros. Here's an example:
#include <stdio.h>

#define Loop(i, x) for (i = 0; i < x; i++)

int main(int argc, char *argv[])
{
    int i;
    int x = 5;

    Loop(i, x)
    {
        printf("%d", i); // Output: 01234
    }
    return 0;
}
It's good for inlining code and avoiding function-call overhead, as well as for changing behaviour later without editing lots of places. It's not useful for complex things, but for simple lines of code that you want to inline, it's not bad.
By leveraging C preprocessor's text manipulation one can construct the C equivalent of a polymorphic data structure. Using this technique we can construct a reliable toolbox of primitive data structures that can be used in any C program, since they take advantage of C syntax and not the specifics of any particular implementation.
Detailed explanation on how to use macros for managing data structure is given here - http://multi-core-dump.blogspot.com/2010/11/interesting-use-of-c-macros-polymorphic.html
Macros let you get rid of copy-pasted fragments, which you can't eliminate in any other way.
For instance (the real code, syntax of VS 2010 compiler):
for each (auto entry in entries)
{
    sciter::value item;
    item.set_item("DisplayName", entry.DisplayName);
    item.set_item("IsFolder", entry.IsFolder);
    item.set_item("IconPath", entry.IconPath);
    item.set_item("FilePath", entry.FilePath);
    item.set_item("LocalName", entry.LocalName);
    items.append(item);
}
This is the place where you pass a field value under the same name into a script engine. Is this copy-pasted? Yes. DisplayName is used as a string for a script and as a field name for the compiler. Is that bad? Yes. If you refactor your code and rename LocalName to RelativeFolderName (as I did) and forget to do the same with the string (as I did), the script will work in a way you don't expect (in fact, in my example it depends on whether you forgot to rename the field in a separate script file, but if the script is used for serialization, it would be a 100% bug).
If you use a macro for this, there will be no room for the bug:
for each (auto entry in entries)
{
#define STR_VALUE(arg) #arg
#define SET_ITEM(field) item.set_item(STR_VALUE(field), entry.field)
    sciter::value item;
    SET_ITEM(DisplayName);
    SET_ITEM(IsFolder);
    SET_ITEM(IconPath);
    SET_ITEM(FilePath);
    SET_ITEM(LocalName);
#undef SET_ITEM
#undef STR_VALUE
    items.append(item);
}
Unfortunately, this opens a door for other types of bugs. You can make a typo when writing the macro and never see the spoiled code, because the compiler doesn't show how it looks after all preprocessing. Someone else could use the same name (that's why I "release" macros ASAP with #undef). So, use it wisely. If you see another way of getting rid of copy-pasted code (such as functions), use that way. If you see that getting rid of copy-pasted code with macros isn't worth the result, keep the copy-pasted code.
One of the obvious reasons is that by using a macro, the code will be expanded at compile time, and you get a pseudo function-call without the call overhead.
Otherwise, you can also use it for symbolic constants, so that you don't have to edit the same value in several places to change one small thing.
Macros .. for when your &#(*$& compiler just refuses to inline something.
That should be a motivational poster, no?
In all seriousness, google preprocessor abuse (you may see a similar SO question as the #1 result). If I'm writing a macro that goes beyond the functionality of assert(), I usually try to see if my compiler would actually inline a similar function.
Others will argue against using #if for conditional compilation .. they would rather you:
if (RUNNING_ON_VALGRIND)
rather than
#if RUNNING_ON_VALGRIND
.. for debugging purposes, since you can see the if() but not #if in a debugger. Then we dive into #ifdef vs #if.
If it's under 10 lines of code, try to inline it. If it can't be inlined, try to optimize it. If it's too silly to be a function, make a macro.
While I'm not a big fan of macros and don't tend to write much C anymore, based on my current tasking, something like this (which could obviously have some side-effects) is convenient:
#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
Now I haven't written anything like that in years, but 'functions' like that were all over code that I maintained earlier in my career. I guess the expansion could be considered convenient.
I didn't see anyone mentioning this, so regarding function-like macros, e.g.:
#define MIN(X, Y) ((X) < (Y) ? (X) : (Y))
Generally it's recommended to avoid using macros when not necessary, for many reasons, readability being the main concern. So:
When should you use these over a function?
Almost never, since there's a more readable alternative, which is inline; see https://www.greenend.org.uk/rjk/tech/inline.html
or http://www.cplusplus.com/articles/2LywvCM9/ (the second link is a C++ page, but the point is applicable to C compilers as far as I know).
Now, the slight difference is that macros are handled by the pre-processor and inline is handled by the compiler, but there's no practical difference nowadays.
when is it appropriate to use these?
For small functions (two or three lines max). The goal is to gain some advantage during the run time of a program, as function-like macros (and inline functions) are code replacements done during pre-processing (or compilation, in the case of inline) and are not real functions living in memory, so there's no function-call overhead (more details in the linked pages).
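For comparison, an inline version of the MIN example above (C99; int-only, since plain C functions are not type-generic, but with arguments evaluated exactly once, unlike the macro):

static inline int min_int(int x, int y)
{
    return x < y ? x : y; /* min_int(x++, y) evaluates x++ only once */
}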
