What are legitimate uses for function-like macros? [closed] - c

There is no shortage of examples of bad or dangerous function-like macros in C/C++.
#define SQUARE(x) x * x
printf("%d", SQUARE(4)); //16
printf("%d", SQUARE(3+1)); //7
#undef SQUARE
#define SQUARE(x) (x) * (x)
printf("%d", 36/4); //9
printf("%d", SQUARE(6)/SQUARE(2)); //36
#undef SQUARE
#define SQUARE(x) ((x) * (x))
int x = 3;
++x;
printf("%d", SQUARE(x)); //16
int y = 3;
printf("%d", SQUARE(++y)); //?
#undef SQUARE
Given the problems with function-like macros, what examples are there of good/prudent/recommended uses for them?
Are there any times when a function-like macro would be preferable to a function?
I imagine there must be some really good cases for this, or else the designers of the preprocessor would have made their own job easier and left them out.

The comments have captured most of it.
Some types of debugging and instrumentation can be accomplished only by use of a function-style macro. The best example is assert(), but function-style macros can also be used to instrument code for profiling. Magic macros like __FILE__ and __LINE__, as well as features like # for stringizing and ## for token-pasting, make function-like macros valuable for debugging and profiling.
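For instance, here is a minimal sketch of a home-grown assertion macro (CHECK is a made-up name; the real assert() is more elaborate) that combines all three features:
#include <stdio.h>

#define CHECK(cond)                                          \
    do {                                                     \
        if (!(cond))                                         \
            fprintf(stderr, "%s:%d: check failed: %s\n",     \
                    __FILE__, __LINE__, #cond);              \
    } while (0)
A call like CHECK(p != NULL); then reports the file, the line, and the stringized expression on failure, which no ordinary function could do.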
In C++, with templates and typically more aggressive inlining, there are few, if any, other reasons to use a function-style macro. For example, the template function std::max is a much better solution than a MAX macro for the reasons illustrated in the question.
In C, it's sometimes necessary in optimization to ensure that a small piece of code is inlined. Macros, with all their caveats, are still occasionally useful in this context. The ALLCAPS naming convention is there to warn programmers that this is actually a macro and not a function, because of all the problems with simple text substitution. For example, if you had several places where you needed the equivalent of std::max in a performance-critical piece of code, a macro--with all its dangers--can be a useful solution.
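For illustration, the usual C idiom looks like this (a sketch, not a recommendation); the double evaluation of a and b is exactly the danger the ALLCAPS convention warns about:
/* evaluates each argument twice: MAX(i++, j) is broken */
#define MAX(a, b) (((a) > (b)) ? (a) : (b))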

Most of the time you don't need to use macros. However, there are some cases where they are legitimate (although there is room for discussion).
You should not use macros when you can use enums. You should not have macros that depend on having a local variable with a magic name. You should not have macros that can be used as l-values. You should not have macros that have side effects on the code around the expanded macro. Using macros instead of inline functions is most of the time a bad idea; in any case, the list is endless.
You could use a macro to fake an iterator though, in C and in particular the Linux Kernel you will see the following:
#define list_for_each_safe(pos, n, head) \
    for (pos = (head)->next, n = pos->next; pos != (head); \
         pos = n, n = pos->next)
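A simplified usage sketch (the real kernel struct list_head also has a prev pointer, and drain is a hypothetical function): the extra cursor n is what makes it safe to unlink or free pos inside the loop body.
struct list_head { struct list_head *next; };

void drain(struct list_head *head)
{
    struct list_head *pos, *n;
    list_for_each_safe(pos, n, head) {
        /* pos may be unlinked or freed here; n already points past it */
    }
}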
There are numerous other similar macros used throughout the Linux kernel source.
Also, offsetof(3) is typically implemented as a macro, as is assert(3), etc.
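For reference, the traditional definition looks roughly like this, shown under a hypothetical name to avoid clashing with <stddef.h> (modern headers usually delegate to a compiler builtin such as __builtin_offsetof):
#include <stddef.h>

/* classic trick: pretend a struct lives at address 0 and measure how
   far into it the member sits */
#define MY_OFFSETOF(type, member) ((size_t)&(((type *)0)->member))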

I generally agree with dmp here. If you can use something else you use it and forget about macros.
That said I will mostly repeat what's in comments:
You usually need a function-like macro when it needs to use other preprocessor symbols (__LINE__, __FILE__, __func__, and so on) inside it. This usually means assert/debugging macros.
Stringizing / pasting tokens (# and ## operators) in a macro.
You CAN (if you feel adventurous enough) use function-like macros to create enum->string mappings using some clever tricks like defining/undefining and redefining macros (see the sketch after this list).
Something like va_list/va_arg, where you need access through a pointer while simultaneously changing the place it points to.
And I repeat myself: if you can avoid the preprocessor, just do that.
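A minimal sketch of the enum->string trick mentioned in the list above (the color list and every name in it are made up): the list is defined once, then expanded with different helper macros.
/* the single source of truth */
#define COLOR_LIST(X) X(RED) X(GREEN) X(BLUE)

#define AS_ENUM(name)   name,
#define AS_STRING(name) #name,

enum color { COLOR_LIST(AS_ENUM) COLOR_COUNT };
static const char *color_names[] = { COLOR_LIST(AS_STRING) };
/* color_names[GREEN] == "GREEN" */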

The C standard has (or allows) a whole bunch of them, and I think anything that does macros in the same spirit should be legitimate:
character handling functions isspace and similar are often implemented as macros
CMPLX is declared to be a macro
UINT64_C and similar are macros
if you include tgmath.h, sin, cos and a lot of other functions become macros
the whole point of the new _Generic keyword is to write function-like macros
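As an example of the last point, a tgmath-style dispatch can be sketched with _Generic (my_sqrt is a made-up name):
#include <math.h>

#define my_sqrt(x) _Generic((x), \
    float:       sqrtf,          \
    long double: sqrtl,          \
    default:     sqrt)(x)
Here my_sqrt(2.0f) calls sqrtf while my_sqrt(2.0) calls sqrt, picked at compile time.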

In some cases, there will be compiler-specific instructions which vary and are noisy to rewrite for each compiler. One macro that I use often which falls into this category is used to inform the compiler that a parameter is unused:
virtual size_t heightForItemAtIndex(const size_t& idx) const {
    MONUnusedParameter(idx); // << MONUnusedParameter() is the macro
    return 12; // << this type's items are all the same height
}
The problem is introduced better here: cross platform macro for silencing unused variables warning
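I don't know MONUnusedParameter's actual definition, but such macros are commonly just a cast to void, which silences the unused-parameter warning on most compilers; a plausible sketch:
#define MONUnusedParameter(p) ((void)(p))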

The Google C++ Style Guide has one example of a pretty useful macro (I'm not sure if this is what you mean by 'function-like macro': it takes an argument, but it cannot be replaced by a function call):
DISALLOW_COPY_AND_ASSIGN(ClassName);
Such a macro can be put in the private section of a class to disable the copy constructor and assignment operator that are automatically generated by the compiler. It is usually a good idea to disallow these two automatically generated methods unless a class really needs them. Without a macro, disabling them requires quite a lot of typing in every class:
ClassName(const ClassName&);
void operator=(const ClassName&);
The macro just generates this code automatically. This is pretty safe; I'm not aware of any cases in which such a macro causes problems.
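For reference, the macro's body is essentially just those two declarations wrapped up with the class name as a parameter (this matches the style guide's version from that era):
#define DISALLOW_COPY_AND_ASSIGN(TypeName) \
    TypeName(const TypeName&);             \
    void operator=(const TypeName&)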

Related

How do function renaming macros work, and should one use them?

Everyone knows about the classic #define DEFAULT_VALUE 100 macro, where the preprocessor will just find the "token" and replace it with whatever the value is.
The problem I am having is understanding the function version of this #define my_puts(x) puts(x). I have K&R in front of me but I simply cannot find a suitable explanation. For instance:
why do I need to supply the number of arguments?
why can their name be whatever?
why don't I have to supply the type?
But mainly I would like to know how this replacement functions under the hood.
In the back of my mind I think I have a memory of someone saying somewhere that this is bad because there are no types.
In short, I would like to know if it is safe and secure to use macros to rename functions (as opposed to the alternative of manually wrapping the function in another function).
Thank you!
The problem I am having is understanding the function version of this #define my_puts(x) puts(x).
Part of your confusion might arise from thinking of this variety as a "function renaming" macro. A more conventional term is "function-like", referring to the form of the macro definition and usage. Providing aliases for function names or converting from one function name to another is a relatively minor use for this kind of macro.
Such macros are better regarded more generally, simply as macros that accept parameters. From that standpoint, your specific questions have relatively clear answers:
why do I need to supply the number of arguments?
You are primarily associating parameter names with the various positions in the macro's parameter list. This is necessary so that the preprocessor can properly expand the macro. That the number of parameters is thereby conveyed (except for variadic macros) is of secondary importance.
why can their name be whatever?
"Whatever" is a little too strong, but the answer is that the names of macro parameters are significant only within the scope of the macro definition. The preprocessor substitutes the actual arguments into each expansion in place of the parameter names whenever it expands the macro. This is analogous to bona fide functions, actually, so I'm not really sure why this particular uncertainty arises for you.
why don't I have to supply the type?
Of the macro? Because to the extent that macros have a type, they all have the same one. They all expand to sequences of zero or more tokens. You can view this as a source-to-source translation. The resulting token sequence will be interpreted by the compiler at a subsequent stage in the process.
But mainly I would like to know how this replacement functions under the hood.
Roughly speaking, wherever the name of an in-scope function-like macro appears in the source code followed by a parenthesized list of arguments, the macro name and argument list are replaced by the expansion of the macro, with the macro arguments substituted appropriately.
For example, consider this function-like macro, which you might see in real source code:
#define MIN(x, y) (((x) <= (y)) ? (x) : (y))
Within the scope of that definition, this code ...
n = MIN(10, z);
... expands to
n = (((10) <= (z)) ? (10) : (z));
Note well that
the function-like macro is not providing a function alias in this case.
the macro arguments are substituted into the macro expansion wherever they appear as complete tokens in the macro's defined replacement text.
In the back of my mind I think I have a memory of someone saying somewhere that this is bad because there are no types.
Well, there are no types declared in the macro definition. That doesn't prevent all the normal rules around data type from applying to the source code resulting from the preprocessing stage. Both of these factors need to be taken into account. In some ways, the MIN() macro in the above example is more flexible than any one function can be. Is that bad? I don't mean to deny that there are arguments against, but it's a multifaceted question that is not well captured by a single consideration or a plain "good" vs. "bad" evaluation.
In short, I would like to know if it is safe and secure to use macros to rename functions (as opposed to the alternative of manually wrapping the function in another function).
That's largely a different question from any of the above. The semantics of function-like macros are well-defined. There is no inherent safety or security issue. But function-like macros do obscure what is going on, and thereby make it more difficult to analyze code. This is therefore mostly a stylistic issue.
Function-like macros do have detractors these days, especially in the C++ community. In most cases, they have little to offer to distinguish themselves as superior to functions.

Can I shorten a function name I use repeatedly?

I have a long formula, like the following:
float a = sin(b)*cos(c)+sin(c+d)*sin(d)....
Is there a way to use s instead of sin in C, to shorten the formula, without affecting the running time?
There are at least three options for using s for sin:
Use a preprocessor macro:
#define s(x) (sin(x))
#define c(x) (cos(x))
float a = s(b)*c(c)+s(c+d)*c(d)....
#undef c
#undef s
Note that the macro definitions are immediately removed with #undef to prevent them from affecting subsequent code. Also, you should be aware of the basics of preprocessor macro substitution, noting that the first c in c(c) will be expanded but the second c will not, since the function-like macro c(x) is expanded only where c is followed by (.
This solution will have no effect on run time.
Use an inline function:
static inline double s(double x) { return sin(x); }
static inline double c(double x) { return cos(x); }
With a good compiler, this will have no effect on run time, since the compiler should replace a call to s or c with a direct call to sin or cos, having the same result as the original code. Unfortunately, in this case, the c function will conflict with the c object you show in your sample code. You will need to change one of the names.
Use function pointers:
static double (* const s)(double) = sin;
static double (* const c)(double) = cos;
With a good compiler, this also will have no effect on run time, although I suspect a few more compilers might fail to optimize code using this solution than the previous solution. Again, you will have the name conflict with c. Note that using function pointers creates a direct call to the sin and cos functions, bypassing any macros that the C implementation might have defined for them. (C implementations are allowed to implement library functions using macros as well as functions, and they might do so to support optimizations or certain features. With a good quality compiler, this is usually a minor concern; optimization of a direct call should still be good.)
if I use define, does it affect runtime?
define works by doing text-based substitution at compile time. If you #define s(x) sin(x) then the C pre-processor will rewrite all the s(x) into sin(x) before the compiler gets a chance to look at it.
BTW, this kind of low-level text-munging is exactly why define can be dangerous to use for more complex expressions. For example, one classic pitfall is that if you do something like #define times(x, y) x*y then times(1+1,2) rewrites to 1+1*2, which evaluates to 3 instead of the expected 4. For more complex expressions like this, it is often a good idea to use inlineable functions instead.
Don't do this.
Mathematicians have been abbreviating the trigonometric functions to sin, cos, tan, sinh, cosh, and tanh for many, many years now. Even though mathematicians (like me) like to use their favourite and often idiosyncratic notation, puffing up any paper by a number of pages, these abbreviations have emerged as pretty standard. Even LaTeX has commands like \sin, \cos, and \tan.
The Japanese immortalised the abbreviations when releasing scientific calculators in the 1970s (the shorthand can fit easily on a button), and the C standard library adopted them.
If you deviate from this then your code immediately becomes difficult to read. This can be particularly pernicious with mathematical code where you can't immediately see the effects of a bad implementation.
But if you must, then a simple
static double (*const s)(double) = sin;
will suffice.

Why would you recursively define macros in terms of other macros in C?

I wanted to see how the arduino function digitalWrite actually worked. But when I looked for the source for the function it was full of macros that were themselves defined in terms of other macros. Why would it be built this way instead of just using a function? Is this just poor coding style or the proper way of doing things in C?
For example, digitalWrite contains the macro digitalPinToPort.
#define digitalPinToPort(P) ( pgm_read_byte( digital_pin_to_port_PGM + (P) ) )
And pgm_read_byte is a macro:
#define pgm_read_byte(address_short) pgm_read_byte_near(address_short)
And pgm_read_byte_near is a macro:
#define pgm_read_byte_near(address_short) __LPM((uint16_t)(address_short))
And __LPM is a macro:
#define __LPM(addr) __LPM_classic__(addr)
And __LPM_classic__ is a macro:
#define __LPM_classic__(addr)             \
(__extension__({                          \
    uint16_t __addr16 = (uint16_t)(addr); \
    uint8_t __result;                     \
    __asm__ __volatile__                  \
    (                                     \
        "lpm" "\n\t"                      \
        "mov %0, r0" "\n\t"               \
        : "=r" (__result)                 \
        : "z" (__addr16)                  \
        : "r0"                            \
    );                                    \
    __result;                             \
}))
Not directly related to this, but I was also under the impression that double underscore should only be used by the compiler. Is having LPM prefixed with __ proper?
If your question is "why one would use multiple layers of macros?", then:
why not? Notably in the pre-C99 era, without inline. A canonical example is 1980s-era getc, which was (IIRC) at that time (SunOS 3.2, 1987) documented as being a macro, with a NOTE in the man page warning (I forget the details) that with some FILE *filearr[]; a getc(filearr[i++]) was wrong (IIRC, the undefined behavior terminology did not exist at that time). When you looked into some system headers (e.g. <stdio.h> or some header included by it) you could find the definition of such a macro. And at that time (with computers running at only a few MHz, so a thousand times slower than today) getc had to be a macro for efficiency reasons (since inline did not exist, and compilers did not do interprocedural optimizations like they are able to do now). Of course, you could use getc in your own macros.
even today, some standards define macros. In particular, today's waitpid(2) system call documents WIFEXITED & WEXITSTATUS as macros, and it is sensible to #define some of your own macros mixing both of them (see the sketch after this list).
the main point is to understand the workings of the C preprocessor and its profoundly textual (so very brittle) nature. This is explained in all textbooks about C. Hence you need to understand what is happening under the hood.
the rule of thumb in the modern era of C (that is, C99 & C11) is to systematically prefer some static inline function (defined in some header file) to an equivalent macro. In other words, #define a macro only when you cannot avoid it, and explicitly document that fact.
several layers of macros might (sometimes) increase code readability.
macros can be tested with #ifdef macroname which is sometimes useful.
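A sketch of the waitpid point from the list above (EXITCODE_OR_MINUS1 is a made-up name): a macro of your own that mixes the two standard macros.
#include <sys/wait.h>

/* exit code if the child exited normally, otherwise -1 */
#define EXITCODE_OR_MINUS1(status) \
    (WIFEXITED(status) ? WEXITSTATUS(status) : -1)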
Of course, when you dare to define several layers of macros (I won't call them "recursive"; read about self-referential macros) you need to be very careful and understand what is happening with all of them (combined & separately). Looking at the preprocessed form is helpful.
BTW, to debug complex macros, I sometimes do
gcc -C -E -I/some/directory mysource.c | sed 's:^#://#:' > mysource.i
then I look into mysource.i and sometimes even have to compile it, perhaps as gcc -c -Wall mysource.i, to get warnings located in the preprocessed form mysource.i (which I can examine in my emacs editor). The sed command is "commenting out" the lines starting with # which set the source location (à la #line). Sometimes I even run indent mysource.i.
(actually, I have a special rule for that in my Makefile)
Is having LPM prefixed with __ proper?
BTW, names starting with _ are (by the standard, and conventionally) reserved for the implementation, and names starting with __ are reserved everywhere. In principle you are not allowed to have your own names starting with _, so there cannot be any collision.
Notice that the __LPM_classic__ macro uses the statement-expr extension (of GCC & Clang).
Look also at other programming languages. Common Lisp has a very different model of macros (and a much more interesting one). Read about hygienic macros. My personal regret is that the C preprocessor has never evolved to be much more powerful (and Scheme-like). Why that did not happen (imagine a C preprocessor able to invoke Guile code to do the macro expansion!) is probably for sociological & economical reasons.
You still should sometimes consider using some other preprocessor (like m4 or GPP) to generate C code. See autoconf.
Why would it be built this way instead of just using a function?
The purpose is likely to optimize function calls with pre-C99 compilers, which don't support inline functions (via the inline keyword). This way, the whole stack of function-like macros is essentially merged by the preprocessor into a single block of code.
Every time you call a function in C, there is a tiny overhead because of jumping around the program's code, managing the stack frame, and passing arguments. This cost is negligible in most applications, but if the function is called very often, it may become a performance bottleneck.
Is this just poor coding style or the proper way of doing things in C?
It's hard to give definitive answer, because coding style is subjective topic. I would say, consider using inline functions or even (better) let the compiler inline them by itself. They are type-safe, more readable, more predictable, and with the proper assistance from compiler, the net result is essentially the same.
Related reference (it's for C++, but the idea is generally the same for C):
Why should I use inline functions instead of plain old #define macros?

Is #define banned in industry standards?

I am a first year computer science student and my professor said #define is banned in the industry standards along with #if, #ifdef, #else, and a few other preprocessor directives. He used the word "banned" because of unexpected behaviour.
Is this accurate? If so why?
Are there, in fact, any standards which prohibit the use of these directives?
First I've heard of it.
No; #define and so on are widely used. Sometimes too widely used, but definitely used. There are places where the C standard mandates the use of macros — you can't avoid those easily. For example, §7.5 Errors <errno.h> says:
The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct positive values, and which are suitable for use in #if preprocessing directives; …
Given this, it is clear that not all industry standards prohibit the use of the C preprocessor macro directives. However, there are 'best practices' or 'coding guidelines' standards from various organizations that prescribe limits on the use of the C preprocessor, though none ban its use completely — it is an innate part of C and cannot be wholly avoided. Often, these standards are for people working in safety-critical areas.
One standard you could check is the MISRA C (2012) standard; that tends to proscribe things, but even it recognizes that #define et al are sometimes needed (section 8.20, rules 20.1 through 20.14, covers the C preprocessor).
The NASA GSFC (Goddard Space Flight Center) C Coding Standards simply say:
Macros should be used only when necessary. Overuse of macros can make code harder to read and maintain because the code no longer reads or behaves like standard C.
The discussion after that introductory statement illustrates the acceptable use of function macros.
The CERT C Coding Standard has a number of guidelines about the use of the preprocessor, and implies that you should minimize the use of the preprocessor, but does not ban its use.
Stroustrup would like to make the preprocessor irrelevant in C++, but that hasn't happened yet. As Peter notes, some C++ standards, such as the JSF AV C++ Coding Standards (Joint Strike Fighter, Air Vehicle) from circa 2005, dictate minimal use of the C preprocessor. Essentially, the JSF AV C++ rules restrict it to #include and the #ifndef XYZ_H / #define XYZ_H / … / #endif dance that prevents multiple inclusions of a single header. C++ has some options that are not available in C — notably, better support for typed constants that can then be used in places where C does not allow them to be used. See also static const vs #define vs enum for a discussion of the issues there.
It is a good idea to minimize the use of the preprocessor — it is often abused at least as much as it is used (see the Boost preprocessor 'library' for illustrations of how far you can go with the C preprocessor).
Summary
The preprocessor is an integral part of C, and #define, #if, etc. cannot be wholly avoided. The statement by the professor in the question is not generally valid: "#define is banned in the industry standards along with #if, #ifdef, #else, and a few other macros" is an over-statement at best, but might be supportable with explicit reference to specific industry standards (though the standards in question do not include ISO/IEC 9899:2011, the C standard).
Note that David Hammen has provided information about one specific C coding standard — the JPL C Coding Standard — that prohibits a lot of things that many people use in C, including limiting the use of the C preprocessor (and limiting the use of dynamic memory allocation, and prohibiting recursion — read it to see why, and decide whether those reasons are relevant to you).
No, use of macros is not banned.
In fact, use of #include guards in header files is one common technique that is often mandatory and encouraged by accepted coding guidelines. Some folks claim that #pragma once is an alternative to that, but the problem is that #pragma once - by definition, since pragmas are a hook provided by the standard for compiler-specific extensions - is non-standard, even if it is supported by a number of compilers.
That said, there are a number of industry guidelines and encouraged practices that actively discourage all usage of macros other than #include guards because of the problems macros introduce (not respecting scope, etc). In C++ development, use of macros is frowned upon even more strongly than in C development.
Discouraging use of something is not the same as banning it, since it is still possible to legitimately use it - for example, by documenting a justification.
Some coding standards may discourage or even forbid the use of #define to create function-like macros that take arguments, like
#define SQR(x) ((x)*(x))
because a) such macros are not type-safe, and b) somebody will inevitably write SQR(x++), which is bad juju.
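For instance, the expansion makes problem b) visible:
int x = 2;
int y = SQR(x++);  /* expands to ((x++)*(x++)): x is modified twice with
                      no intervening sequence point -- undefined behavior */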
Some standards may discourage or ban the use of #ifdefs for conditional compilation. For example, the following code uses conditional compilation to properly print out a size_t value. For C99 and later, you use the %zu conversion specifier; for C89 and earlier, you use %lu and cast the value to unsigned long:
#if __STDC_VERSION__ >= 199901L
# define SIZE_T_CAST
# define SIZE_T_FMT "%zu"
#else
# define SIZE_T_CAST (unsigned long)
# define SIZE_T_FMT "%lu"
#endif
...
printf( "sizeof foo = " SIZE_T_FMT "\n", SIZE_T_CAST sizeof foo );
Some standards may mandate that instead of doing this, you implement the module twice, once for C89 and earlier, once for C99 and later:
/* C89 version */
printf( "sizeof foo = %lu\n", (unsigned long) sizeof foo );
/* C99 version */
printf( "sizeof foo = %zu\n", sizeof foo );
and then let Make (or Ant, or whatever build tool you're using) deal with compiling and linking the correct version. For this example that would be ridiculous overkill, but I've seen code that was an untraceable rat's nest of #ifdefs that should have had that conditional code factored out into separate files.
However, I am not aware of any company or industry group that has banned the use of preprocessor statements outright.
Macros can not be "banned". The statement is nonsense. Literally.
For example, section 7.5 Errors <errno.h> of the C Standard requires the use of macros:
1 The header <errno.h> defines several macros, all relating to the reporting of error conditions.
2 The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct positive values, and which are suitable for use in #if preprocessing directives; and
errno
which expands to a modifiable lvalue that has type int and thread local storage duration, the value of which is set to a positive error number by several library functions. If a macro definition is suppressed in order to access an actual object, or a program defines an identifier with the name errno, the behavior is undefined.
So, not only are macros a required part of C, in some cases not using them results in undefined behavior.
No, #define is not banned. Misuse of #define, however, may be frowned upon.
For instance, you may use
#define DEBUG
in your code so that later on, you can designate parts of your code for conditional compilation using #ifdef DEBUG, for debug purposes only. I don't think anyone in his right mind would want to ban something like this. Macros defined using #define are also used extensively in portable programs, to enable/disable compilation of platform-specific code.
However, if you are using something like
#define PI 3.141592653589793
your teacher may rightfully point out that it is much better to declare PI as a constant with the appropriate type, e.g.,
const double PI = 3.141592653589793;
as it allows the compiler to do type checking when PI is used.
Similarly (as mentioned by John Bode above), the use of function-like macros may be disapproved of, especially in C++ where templates can be used. So instead of
#define SQ(X) ((X)*(X))
consider using
double SQ(double X) { return X * X; }
or, in C++, better yet,
template <typename T> T SQ(T X) { return X * X; }
Once again, the idea is that by using the facilities of the language instead of the preprocessor, you allow the compiler to type check and also (possibly) generate better code.
Once you have enough coding experience, you'll know exactly when it is appropriate to use #define. Until then, I think it is a good idea for your teacher to impose certain rules and coding standards, but preferably they themselves should know, and be able to explain, the reasons. A blanket ban on #define is nonsensical.
That's completely false; macros are heavily used in C. Beginners often use them badly, but that's not a reason to ban them from industry. A classic bad usage is #define successor(n) n + 1. If you expect 2 * successor(9) to give 20, then you're wrong, because that expression will be translated as 2 * 9 + 1, i.e. 19, not 20. Use parentheses to get the expected result.
No. It is not banned. And truth be told, it is impossible to do non-trivial multi-platform code without it.
No, your professor is wrong, or you misheard something.
#define is a preprocessor macro, and preprocessor macros are needed for conditional compilation and some conventions which aren't simply built into the C language. For example, a recent C standard, namely C99, added support for booleans. But the familiar bool, true, and false are not supported "natively" by the language; they are provided by preprocessor #defines in stdbool.h. See this reference to stdbool.h
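A minimal illustration of that:
#include <stdbool.h>  /* bool, true and false are macros here */

bool done = false;    /* bool expands to the native _Bool type */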
Macros are used pretty heavily in GNU-land C, and without conditional preprocessor commands there'd be no way to properly handle multiple inclusions of the same source files, so that makes them seem like essential language features to me.
Maybe your class is actually on C++, which, despite many people's failure to do so, should be distinguished from C, as it is a different language; I can't speak for macros there. Or maybe the professor meant he's banning them in his class. Anyhow, I'm sure the SO community would be interested in hearing which standard he's talking about, since I'm pretty sure all C standards support the use of macros.
Contrary to all of the answers to date, the use of preprocessor directives is oftentimes banned in high-reliability computing. There are two exceptions to this, the use of which are mandated in such organizations. These are the #include directive, and the use of an include guard in a header file. These kinds of bans are more likely in C++ rather than in C.
Here's but one example: 16.1.1 Use the preprocessor only for implementing include guards, and including header files with include guards.
Another example, this time for C rather than C++: the JPL Institutional Coding Standard for the C Programming Language. This C coding standard doesn't go quite so far as banning the use of the preprocessor completely, but it comes close. Specifically, it says
Rule 20 (preprocessor use)
Use of the C preprocessor shall be limited to file inclusion and simple macros. [Power of Ten Rule 8].
I'm neither condoning nor decrying those standards. But to say they don't exist is ludicrous.
If you want your C code to interoperate with C++ code, you will want to declare your externally visible symbols, such as function declarations, in the extern "C" namespace. This is often done using conditional compilation:
#ifdef __cplusplus
extern "C" {
#endif
/* C header file body */
#ifdef __cplusplus
}
#endif
Look at any header file and you will see something like this:
#ifndef _FILE_NAME_H
#define _FILE_NAME_H
//Exported functions, structs, defines, etc. go here
#endif /*_FILE_NAME_H */
These defines are not only allowed but critical, because each time the header file is referenced it will be included separately. Without the guard, you would redefine everything between the #ifndef and #endif multiple times, which in the best case fails to compile and in the worst case leaves you scratching your head later, wondering why your code doesn't work the way you want it to.
The compiler will also predefine macros, as seen here with gcc, that let you test for things like the version of the compiler, which is very useful. I'm currently working on a project that needs to compile with avr-gcc, but we have a testing environment that we also run our code through. To prevent the avr-specific files and registers from keeping our test code from running, we do something like this:
#ifdef __AVR__
//avr specific code here
#endif
Using this in the production code, the complementary test code can compile without using avr-gcc, and the code above is compiled only by avr-gcc.
If you had just mentioned #define, I would have thought maybe he was alluding to its use for enumerations, which are better off using enum to avoid stupid errors such as assigning the same numerical value twice.
Note that even for this situation, it is sometimes better to use #defines than enums, for instance if you rely on numerical values exchanged with other systems and the actual values must stay the same even if you add/delete constants (for compatibility).
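A hypothetical illustration of that point: when the values are dictated by an external protocol, spelling them out keeps them stable even as constants are added or removed.
/* wire-protocol message codes; values fixed by the (made-up) protocol */
#define MSG_HELLO 0x01
#define MSG_DATA  0x02
#define MSG_BYE   0x7F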
However, adding that #if, #ifdef, etc. should not be used either is just weird. Of course, they should probably not be abused, but in real life there are dozens of reasons to use them.
What he may have meant could be that (where appropriate), you should not hardcode behaviour in the source (which would require re-compilation to get a different behaviour), but rather use some form of run-time configuration instead.
That's the only interpretation I could think of that would make sense.

When should you use macros instead of inline functions?

In a previous question, what I thought was a good answer was voted down for its suggested use of macros
#define radian2degree(a) (a * 57.295779513082)
#define degree2radian(a) (a * 0.017453292519)
instead of inline functions. Please excuse the newbie question, but what is so evil about macros in this case?
Most of the other answers discuss why macros are evil including how your example has a common macro use flaw. Here's Stroustrup's take: http://www.research.att.com/~bs/bs_faq2.html#macro
But your question was asking what macros are still good for. There are some things where macros are better than inline functions, and that's where you're doing things that simply can't be done with inline functions, such as:
token pasting
dealing with line numbers or such (as for creating error messages in assert())
dealing with things that aren't expressions (for example, how many implementations of offsetof() use a type name to create a cast operation)
the macro to get a count of array elements (you can't do it with a function, as the array name decays to a pointer too easily); see the sketch after this list
creating 'type polymorphic' function-like things in C where templates aren't available
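For reference, the array-count macro from that list; it only works on true arrays and silently gives a wrong answer if handed a pointer, which is exactly why it can't be a function:
/* int buf[16]; ARRAY_SIZE(buf) == 16, computed at compile time */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))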
But with a language that has inline functions, the more common uses of macros shouldn't be necessary. I'm even reluctant to use macros when I'm dealing with a C compiler that doesn't support inline functions. And I try not to use them to create type-agnostic functions if at all possible (creating several functions with a type indicator as a part of the name instead).
I've also moved to using enums for named numeric constants instead of #define.
There are a couple of strictly evil things about macros.
They're text processing, and aren't scoped. If you #define foo 1, then any subsequent use of foo as an identifier will fail. This can lead to odd compilation errors and hard-to-find runtime bugs.
They don't take arguments in the normal sense. You can write a function that will take two int values and return the maximum, because the arguments will be evaluated once and the values used thereafter. You can't write a macro to do that, because it will evaluate at least one argument twice, and fail with something like max(x++, --y).
There's also common pitfalls. It's hard to get multiple statements right in them, and they require a lot of possibly superfluous parentheses.
In your case, you need parentheses:
#define radian2degree(a) (a * 57.295779513082)
needs to be
#define radian2degree(a) ((a) * 57.295779513082)
and you're still stepping on anybody who writes a function radian2degree in some inner scope, confident that that definition will work in its own scope.
For this specific macro, if I use it as follows:
int x=1;
x = radian2degree(x);
float y=1;
y = radian2degree(y);
there would be no type checking, and x and y will contain different values.
Furthermore, the following code
float x=1, y=2;
float z = radian2degree(x+y);
will not do what you think, since it will translate to
float z = x+y*57.295779513082;
instead of
float z = (x+y)*57.295779513082;
which is the expected result.
These are just a few examples of the misbehavior and misuse macros might have.
Edit
you can see additional discussions about this here
If possible, always use inline functions. These are type-safe and cannot be easily redefined.
Defines can be redefined and undefined, and there is no type checking.
Macros are relatively often abused, and one can easily make mistakes using them, as shown by your example. Take the expression radian2degree(1 + 1):
with the macro it will expand to 1 + 1 * 57.29... = 58.29...
with a function it will be what you want it to be, namely (1 + 1) * 57.29... = ...
More generally, macros are evil because they look like functions, so they trick you into using them just like functions, but they have subtle rules of their own. In this case, the correct way to write it would be (notice the parentheses around a):
#define radian2degree(a) ((a) * 57.295779513082)
But you should stick to inline functions. See these links from the C++ FAQ Lite for more examples of evil macros and their subtleties:
inline vs. macros
macros containing if
macros with multiple lines
macros used to paste two tokens together
The compiler's preprocessor is a finicky thing, and therefore a terrible candidate for clever tricks. As others have pointed out, it's easy for the compiler to misunderstand your intention with the macro, and it's easy for you to misunderstand what the macro will actually do, but most importantly, you can't step into macros in the debugger!
Macros are evil because you may end up passing more than a variable or a scalar to them, and this could result in unwanted behavior (define a max macro to determine the max of a and b, but pass a++ and b++ to the macro and see what happens).
If your function is going to be inlined anyway, there is no performance difference between a function and a macro. However, there are several usability differences between a function and a macro, all of which favor using a function.
If you build the macro correctly, there is no problem. But if you use a function, the compiler will do it correctly for you every time. So using a function makes it harder to write bad code.
