"with" macro in C - c

I was looking for a macro that will resemble the with-construct.
The usage should be something like:
with (lock(&x), unlock(&x)) {
...
}
It might be useful for some other purposes.
I came up with this macro:
#define __with(_onenter, _onexit, v) \
    for (int __with_uniq##v = 1; __with_uniq##v > 0; ) \
        for (_onenter; __with_uniq##v > 0; _onexit) \
            while (__with_uniq##v-- > 0)
#define _with(x, y, z) __with(x, y, z)
#define with(_onenter, _onexit) _with(_onenter, _onexit, __COUNTER__)
It has 3 nested loops because it should:
Initialize loop counter (C99 only, of course)
Possibly initialize variable _onenter (such as with (int fd=open(..), close(fd)))
Allow break inside the code block. (continue is allowed too. And the macro could be adjusted to assert() it out)
I used it on the code for the XV6 OS and it seems quite useful.
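For reference, here is a minimal compilable demo of the intended behavior, assuming the macro definitions above are in scope (the lock/unlock stubs are illustrative, not XV6 code):
#include <stdio.h>

static void lock(int *x)   { (void)x; printf("lock\n"); }
static void unlock(int *x) { (void)x; printf("unlock\n"); }

int main(void)
{
    int x = 0;
    with (lock(&x), unlock(&x)) {
        printf("inside\n");
    }
    /* prints: lock, inside, unlock */
    return 0;
}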
My question is - what are the worst problems with such a macro? I mean, besides the mere usage of a C macro (especially one that implements new control-flow construct).
So far have found these drawbacks / problems:
No support for return or goto (but it can save some gotos in kernel code)
No support for errors (such as fd < 0). I think this one is fixable.
gnu89 / C99 and above only (because of the loop-counter declaration; the unique-variable trick itself is not necessary)
Somewhat less efficient than simple lock-unlock. I believe it to be insignificant.
Are there any other problems? Is there a better way to implement similar construct in C?

That macro scares me. I'd prefer the traditional approach using gotos.
That approach is primitive, but most C programmers are familiar with the pattern and if they're not, they can understand it by reading the local code. There is no hidden behavior. As a consequence, it's pretty reliable.
Your macro is clever, but it would be new to almost everybody and it comes with hidden gotchas. New contributors would have to be taught rules such as "don't return or goto out of a with block" and "break will break out of the with block, not out of the surrounding loop". I fear mistakes would be common.
The balance would shift if you could add warnings for misuses of this construct to the compiler. With clang, that seems to be an option. In this case, misuses would be detected and your code would remain portable to other compilers.
If you're willing to restrict yourself to GCC and Clang, you can use the cleanup attribute. That would make your example look like this:
lock_t x __attribute__((cleanup(unlock))) = NULL;
lock(&x);
And unlock will be called with a pointer to the variable when it goes out of scope. This integrates with other language features like return and goto, and even with exceptions in mixed C/C++ projects.
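A compilable sketch of the idea, assuming lock_t is a plain int and lock/unlock are ordinary functions (all names here are stand-ins):
#include <stdio.h>

typedef int lock_t;   /* stand-in lock type for illustration */

static void lock(lock_t *l)   { *l = 1; printf("locked\n"); }
static void unlock(lock_t *l) { *l = 0; printf("unlocked\n"); }

static void worker(void)
{
    /* unlock(&x) runs automatically when x leaves scope,
       even on an early return */
    lock_t x __attribute__((cleanup(unlock))) = 0;
    lock(&x);
    /* ... critical section ... */
}

int main(void)
{
    worker();   /* prints "locked" then "unlocked" */
    return 0;
}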

Use of #define to alias structure members

This is a subjective question, so I will accept 'there is no answer', but read fully, as this is specifically about a system where the code is safety critical.
I've adopted some embedded C code for a safety critical system, where the original author has (in random places) used syntax like this:
#include <stdio.h>
typedef struct tag_s2 {
    int a;
} s2;

typedef struct tag_s1 {
    s2 a;
} s1;

s1 inst;

#define X_myvar inst.a.a

int main(int argc, char **argv)
{
    X_myvar = 10;
    printf("myvar = %d\n", X_myvar + 1);
    return 0;
}
Effectively using a #define to alias and obscure a deep structure member. Mostly two or three, but occasionally four deep.
BTW: This is a simple example, the real code is far more complicated but I can't publish any part of that here.
The use of this is not consistent: in some places the variable is accessed directly, in others through its alias, and some parts of the code are not aliased at all.
IMO this is bad practice, as it obscures the code for no gain, reducing maintainability and readability and inviting future errors and misunderstanding.
If the style was 100% consistent then perhaps I would be more happy with it.
However, being safety critical a change is costly. So not wanting to fix 'wot aint broke' I am open to other arguments.
Should I fix it or leave well alone?
Is there any guidance (e.g. Generic C, MISRA or DO178B style guides) that would have an opinion on this?
However, being safety critical a change is costly. So not wanting to fix 'wot aint broke' I am open to other arguments.
It's a paradoxical death spiral that the most critical code gets the least attention because people are afraid to change it.
That you are hesitant to make a simple, rote refactoring to this code tells me the code either has no tests or you don't trust the tests. When you're afraid to improve code because you might break it, that delays improvements to the code. You're likely to do the smallest possible thing which will make the code even more brittle and unsafe.
I'd advise the first thing is to get some tests in place along with a staging environment for trials. Then all changes become safer. There might be some gaffes initially while you find all the weird and dangerous things this code is doing, but that's what the staging area is for. In the medium and long term everyone will improve this code faster and with more confidence. Making code easier and safer to change allows it to be made easier and safer to change; the spiral then goes up, not down.
Making a macro look like a single variable is a technique I've seen before, in the Perl 5 code base, which is written more in C macros than in C. For example, here's a bit of manipulation of the Perl call stack.
#define SP sp
#define MARK mark
#define TARG targ
#define PUSHMARK(p) \
    STMT_START { \
        I32 * mark_stack_entry; \
        if (UNLIKELY((mark_stack_entry = ++PL_markstack_ptr) \
                     == PL_markstack_max)) \
            mark_stack_entry = markstack_grow(); \
        *mark_stack_entry = (I32)((p) - PL_stack_base); \
        DEBUG_s(DEBUG_v(PerlIO_printf(Perl_debug_log, \
            "MARK push %p %" IVdf "\n", \
            PL_markstack_ptr, (IV)*mark_stack_entry))); \
    } STMT_END
#define TOPMARK S_TOPMARK(aTHX)
#define POPMARK S_POPMARK(aTHX)
#define INCMARK \
    STMT_START { \
        DEBUG_s(DEBUG_v(PerlIO_printf(Perl_debug_log, \
            "MARK inc %p %" IVdf "\n", \
            (PL_markstack_ptr+1), (IV)*(PL_markstack_ptr+1)))); \
        PL_markstack_ptr++; \
    } STMT_END
#define dSP SV **sp = PL_stack_sp
#define djSP dSP
#define dMARK SV **mark = PL_stack_base + POPMARK
#define dORIGMARK const I32 origmark = (I32)(mark - PL_stack_base)
#define ORIGMARK (PL_stack_base + origmark)
#define SPAGAIN sp = PL_stack_sp
#define MSPAGAIN STMT_START { sp = PL_stack_sp; mark = ORIGMARK; } STMT_END
#define GETTARGETSTACKED targ = (PL_op->op_flags & OPf_STACKED ? POPs : PAD_SV(PL_op->op_targ))
#define dTARGETSTACKED SV * GETTARGETSTACKED
These are macros upon macros upon macros. The Perl 5 source is riddled with them. There is a lot of opaque magic happening there. Some of them need to be macros to allow assignment, but many could be inline functions. Despite being part of a public API they are indifferently documented in part because they are macros and not functions.
This style is very clever and useful if you're already very familiar with the Perl 5 source code. For everyone else it has made the Perl 5 internals extremely difficult to work with. While some compilers will provide traces of macro expansions, others will only report the expanded macro, leaving one scratching their head over what the hell const I32 origmark = (I32)(mark - PL_stack_base) is, because it never appears in your source.
Like many macro hacks, while the technique is very clever it is also mind-bending and unfamiliar to many programmers. Mind-bending is not what you want in safety critical code. You want simple, boring code. That alone is the simplest argument to replace it with well named getter and setter functions. Trust the compiler to optimize them.
A good example of this is GLib which carefully uses well-documented function-like macros to make generic data structures. For example, adding a value to an array.
#define g_array_append_val(a,v)
While this is a macro, it acts and is documented like a function. It's a macro solely as a mechanism to create a safe, type-generic array. It hides no variables. You can safely use it without ever being aware it's a macro.
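Typical usage looks like this (a sketch against the documented GLib API; the value must be a variable rather than a literal, because the macro takes its address):
#include <glib.h>

int main(void)
{
    GArray *arr = g_array_new(FALSE, FALSE, sizeof(int));
    int v = 42;
    g_array_append_val(arr, v);   /* v is a variable: the macro needs &v */
    g_array_free(arr, TRUE);
    return 0;
}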
In conclusion, yes, change it. But instead of simply replacing X_myvar with inst.a.a consider creating functions that continue to provide encapsulation.
void s1_set_a( s1 *s, int val ) {
    s->a.a = val;
}

int s1_get_a( s1 *s ) {
    return s->a.a;
}
s1_set_a(&inst, 10);
printf("myvar = %d\n", s1_get_a(&inst) + 1);
The internals of s1 are hidden making it easier to change the internals later (for example, changing s1.a to a pointer to save memory). What variable you're working with is clear making the overall code easier to understand. The function names provide a clear explanation of what's happening. Because they're functions they have an obvious place for documentation. Trust the compiler to know how best to optimize it.
Yeah you should get rid of it. Obscure macros are dangerous.
It was common in older C to avoid spelling out deep nesting by doing things like
#define a inst.a
in which case you only had to type a instead of inst.a. Although this is questionable practice, macros like these were used to repair a shortcoming in the language, namely the lack of anonymous structs. Modern C supports those from C11 onward, so we can use anonymous structures to get rid of unnecessarily nested structs:
typedef struct {
    struct {
        int a;
    };
} s1;
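With that, the member is reachable with one less level of nesting (a small sketch reusing the inst instance from the question):
s1 inst;
inst.a = 10;   /* members of the anonymous struct are accessed
                  as if they belonged to s1 itself */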
But MISRA-C:2012 doesn't support C11 so that might not be an option.
Another trick you can use to get rid of long names is something like this:
int* x = &longstructname.somemember.anotherstruct.x;
// use *x from here on
x is a local variable with limited scope. That's much more readable than the obscure macros and it gets optimized away in the machine code.
From a maintenance perspective, yes, this is definitely code that needs to be fixed.
However, it is only from that perspective that the code needs to be fixed. It does not harm program correctness, and if the code is correct as-is, that is the paramount consideration.
That's why code like this should never be fixed unless a thorough unit test and regression test regimen is already in place. You should only fix code like this if you can be certain that you don't break correctly-functioning code in the process.

Is #define banned in industry standards?

I am a first year computer science student and my professor said #define is banned in the industry standards along with #if, #ifdef, #else, and a few other preprocessor directives. He used the word "banned" because of unexpected behaviour.
Is this accurate? If so why?
Are there, in fact, any standards which prohibit the use of these directives?
First I've heard of it.
No; #define and so on are widely used. Sometimes too widely used, but definitely used. There are places where the C standard mandates the use of macros — you can't avoid those easily. For example, §7.5 Errors <errno.h> says:
The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct positive values, and which are suitable for use in #if preprocessing directives; …
Given this, it is clear that not all industry standards prohibit the use of the C preprocessor macro directives. However, there are 'best practices' or 'coding guidelines' standards from various organizations that prescribe limits on the use of the C preprocessor, though none ban its use completely — it is an innate part of C and cannot be wholly avoided. Often, these standards are for people working in safety-critical areas.
One standard you could check is the MISRA C (2012) standard; that tends to proscribe things, but even it recognizes that #define et al are sometimes needed (section 8.20, rules 20.1 through 20.14, covers the C preprocessor).
The NASA GSFC (Goddard Space Flight Center) C Coding Standards simply say:
Macros should be used only when necessary. Overuse of macros can make code harder to read and maintain because the code no longer reads or behaves like standard C.
The discussion after that introductory statement illustrates the acceptable use of function macros.
The CERT C Coding Standard has a number of guidelines about the use of the preprocessor, and implies that you should minimize the use of the preprocessor, but does not ban its use.
Stroustrup would like to make the preprocessor irrelevant in C++, but that hasn't happened yet. As Peter notes, some C++ standards, such as the JSF AV C++ Coding Standards (Joint Strike Fighter, Air Vehicle) from circa 2005, dictate minimal use of the C preprocessor. Essentially, the JSF AV C++ rules restrict it to #include and the #ifndef XYZ_H / #define XYZ_H / … / #endif dance that prevents multiple inclusions of a single header. C++ has some options that are not available in C — notably, better support for typed constants that can then be used in places where C does not allow them to be used. See also static const vs #define vs enum for a discussion of the issues there.
It is a good idea to minimize the use of the preprocessor — it is often abused at least as much as it is used (see the Boost preprocessor 'library' for illustrations of how far you can go with the C preprocessor).
Summary
The preprocessor is an integral part of C and #define and #if etc cannot be wholly avoided. The statement by the professor in the question is not generally valid: "#define is banned in the industry standards along with #if, #ifdef, #else, and a few other preprocessor directives" is an over-statement at best, but might be supportable with explicit reference to specific industry standards (but the standards in question do not include ISO/IEC 9899:2011 — the C standard).
Note that David Hammen has provided information about one specific C coding standard — the JPL C Coding Standard — that prohibits a lot of things that many people use in C, including limiting the use of the C preprocessor (and limiting the use of dynamic memory allocation, and prohibiting recursion — read it to see why, and decide whether those reasons are relevant to you).
No, use of macros is not banned.
In fact, use of #include guards in header files is one common technique that is often mandatory and encouraged by accepted coding guidelines. Some folks claim that #pragma once is an alternative to that, but the problem is that #pragma once - by definition, since pragmas are a hook provided by the standard for compiler-specific extensions - is non-standard, even if it is supported by a number of compilers.
That said, there are a number of industry guidelines and encouraged practices that actively discourage all usage of macros other than #include guards because of the problems macros introduce (not respecting scope, etc). In C++ development, use of macros is frowned upon even more strongly than in C development.
Discouraging use of something is not the same as banning it, since it is still possible to legitimately use it - for example, by documenting a justification.
Some coding standards may discourage or even forbid the use of #define to create function-like macros that take arguments, like
#define SQR(x) ((x)*(x))
because a) such macros are not type-safe, and b) somebody will inevitably write SQR(x++), which is bad juju.
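To see the problem, consider what the preprocessor produces (a sketch; the expansion is shown in the comment):
int x = 5;
int y = SQR(x++);   /* expands to ((x++)*(x++)): x is modified twice
                       without a sequence point -- undefined behavior */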
Some standards may discourage or ban the use of #ifdefs for conditional compilation. For example, the following code uses conditional compilation to properly print out a size_t value. For C99 and later, you use the %zu conversion specifier; for C89 and earlier, you use %lu and cast the value to unsigned long:
#if __STDC_VERSION__ >= 199901L
# define SIZE_T_CAST
# define SIZE_T_FMT "%zu"
#else
# define SIZE_T_CAST (unsigned long)
# define SIZE_T_FMT "%lu"
#endif
...
printf( "sizeof foo = " SIZE_T_FMT "\n", SIZE_T_CAST sizeof foo );
Some standards may mandate that instead of doing this, you implement the module twice, once for C89 and earlier, once for C99 and later:
/* C89 version */
printf( "sizeof foo = %lu\n", (unsigned long) sizeof foo );
/* C99 version */
printf( "sizeof foo = %zu\n", sizeof foo );
and then let Make (or Ant, or whatever build tool you're using) deal with compiling and linking the correct version. For this example that would be ridiculous overkill, but I've seen code that was an untraceable rat's nest of #ifdefs that should have had that conditional code factored out into separate files.
However, I am not aware of any company or industry group that has banned the use of preprocessor statements outright.
Macros cannot be "banned". The statement is nonsense. Literally.
For example, section 7.5 Errors <errno.h> of the C Standard requires the use of macros:
1 The header <errno.h> defines several macros, all relating to the reporting of error conditions.
2 The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct positive values, and which are suitable for use in #if preprocessing directives; and
errno
which expands to a modifiable lvalue that has type int and thread local storage duration, the value of which is set to a positive error number by several library functions. If a macro definition is suppressed in order to access an actual object, or a program defines an identifier with the name errno, the behavior is undefined.
So, not only are macros a required part of C, in some cases not using them results in undefined behavior.
No, #define is not banned. Misuse of #define, however, may be frowned upon.
For instance, you may use
#define DEBUG
in your code so that later on, you can designate parts of your code for conditional compilation using #ifdef DEBUG, for debug purposes only. I don't think anyone in his right mind would want to ban something like this. Macros defined using #define are also used extensively in portable programs, to enable/disable compilation of platform-specific code.
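As a sketch of that DEBUG pattern (x here is just an illustrative variable):
#ifdef DEBUG
    fprintf(stderr, "x = %d at %s:%d\n", x, __FILE__, __LINE__);   /* compiled only when DEBUG is defined */
#endif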
However, if you are using something like
#define PI 3.141592653589793
your teacher may rightfully point out that it is much better to declare PI as a constant with the appropriate type, e.g.,
const double PI = 3.141592653589793;
as it allows the compiler to do type checking when PI is used.
Similarly (as mentioned by John Bode above), the use of function-like macros may be disapproved of, especially in C++ where templates can be used. So instead of
#define SQ(X) ((X)*(X))
consider using
double SQ(double X) { return X * X; }
or, in C++, better yet,
template <typename T>T SQ(T X) { return X * X; }
Once again, the idea is that by using the facilities of the language instead of the preprocessor, you allow the compiler to type check and also (possibly) generate better code.
Once you have enough coding experience, you'll know exactly when it is appropriate to use #define. Until then, I think it is a good idea for your teacher to impose certain rules and coding standards, but preferably they themselves should know, and be able to explain, the reasons. A blanket ban on #define is nonsensical.
That's completely false; macros are heavily used in C. Beginners often use them badly, but that's not a reason to ban them from industry. A classic bad usage is #define successor(n) n + 1. If you expect 2 * successor(9) to give 20, then you're wrong, because that expression will be translated as 2 * 9 + 1, i.e. 19, not 20. Use parentheses to get the expected result.
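The conventional fix is to parenthesize both the parameter and the whole replacement:
#define successor(n) ((n) + 1)
/* 2 * successor(9) now expands to 2 * ((9) + 1), i.e. 20 */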
No. It is not banned. And truth be told, it is impossible to write non-trivial multi-platform code without it.
No, your professor is wrong, or you misheard something.
#define is a preprocessor directive, and preprocessor macros are needed for conditional compilation and for some conventions that aren't simply built into the C language. For example, a recent C standard, C99, added support for booleans, but the bool, true, and false spellings are provided as macros rather than as keywords. See this reference to stdbool.h
Macros are used pretty heavily in GNU land C, and without conditional preprocessor commands there'd be no way to properly handle multiple inclusions of the same source files, so that makes them seem like essential language features to me.
Maybe your class is actually on C++, which despite many people's failure to do so, should be distinguished from C as it is a different language, and I can't speak for macros there. Or maybe the professor meant he's banning them in his class. Anyhow I'm sure the SO community would be interested in hearing which standard he's talking about, since I'm pretty sure all C standards support the use of macros.
Contrary to all of the answers to date, the use of preprocessor directives is oftentimes banned in high-reliability computing. There are two exceptions to this, the use of which are mandated in such organizations. These are the #include directive, and the use of an include guard in a header file. These kinds of bans are more likely in C++ rather than in C.
Here's but one example: 16.1.1 Use the preprocessor only for implementing include guards, and including header files with include guards.
Another example, this time for C rather than C++: the JPL Institutional Coding Standard for the C Programming Language. This C coding standard doesn't go quite so far as banning the use of the preprocessor completely, but it comes close. Specifically, it says:
Rule 20 (preprocessor use)
Use of the C preprocessor shall be limited to file inclusion and simple macros. [Power of Ten Rule 8].
I'm neither condoning nor decrying those standards. But to say they don't exist is ludicrous.
If you want your C code to interoperate with C++ code, you will want to declare your externally visible symbols, such as function declarations, in the extern "C" namespace. This is often done using conditional compilation:
#ifdef __cplusplus
extern "C" {
#endif
/* C header file body */
#ifdef __cplusplus
}
#endif
Look at any header file and you will see something like this:
#ifndef _FILE_NAME_H
#define _FILE_NAME_H
// Exported functions, structs, defines, etc. go here
#endif /*_FILE_NAME_H */
These defines are not only allowed but critical in nature, because each time the header file is referenced it is included again. Without the guard you would be redefining everything in between multiple times, which at best fails to compile and at worst leaves you scratching your head later about why your code doesn't work the way you want it to.
The compiler also provides defines, as seen here with gcc, that let you test for things like the version of the compiler, which is very useful. I'm currently working on a project that needs to compile with avr-gcc, but we also have a testing environment that we run our code through. To keep the AVR-specific files and registers from breaking our test builds, we do something like this:
#ifdef __AVR__
//avr specific code here
#endif
With this in the production code, the complementary test code can be compiled without avr-gcc, and the code above is compiled only under avr-gcc.
If you had just mentioned #define, I would have thought maybe he was alluding to its use for enumerations, which are better off using enum to avoid stupid errors such as assigning the same numerical value twice.
Note that even for this situation, it is sometimes better to use #defines than enums, for instance if you rely on numerical values exchanged with other systems and the actual values must stay the same even if you add/delete constants (for compatibility).
However, adding that #if, #ifdef, etc. should not be used either is just weird. Of course, they should probably not be abused, but in real life there are dozens of reasons to use them.
What he may have meant could be that (where appropriate), you should not hardcode behaviour in the source (which would require re-compilation to get a different behaviour), but rather use some form of run-time configuration instead.
That's the only interpretation I could think of that would make sense.

Using a macro for a small operation, is this good practice?

I have a small piece of code that requires to read 4-bit values bitpacked in a 32-bit integer. Since I need to call this operation several times, even if it's simple, I require max speed on it.
I was pondering about macros and inline functions, thus I made this macro:
#define UI32TO4(x, p) (x >> ((p - 1) *4) & 15)
And I have an inline function that does the same thing.
static inline Uint8 foo_getval(Uint32 bits, int pos){
    return (bits >> ((pos-1)*4)) & 15;
}
Considering the simplicity of the operation, and that the values are already prepared for this call (so no possibility of calling on the wrong types, or pass values that are too big or that stuff), what would be the best one to use? Or, at least, the most comprehensible for someone else potentially reading/modifying the code later on?
EDIT! Forgot to mention, I am using C99.
The function is safer. Your assumption that the values are always "right" holds only while you're developing that code. You can't tell whether someone down the line (or you yourself, when tired) will pass unexpected values.
The compiler will do the inlining when it sees it as effective. Use type-safe functions whenever you can, use macros only when you have no other practical choice.
I would use the inline function because macros can cause unwanted side effects. Use macros only to save typing if necessary.
If a macro has the same name as a function in another compilation unit, you would get strange compilation errors. These problems can be hard to find, especially if the macro is expanded elsewhere and no error occurs.
Additionally a function warns you about parameter types and would not let you give a double for pos. The macro could allow this.
It's late, and I'm grumpy (and I'll probably delete this post later) but I get tired of hearing the same arguments against macros parroted over and over again (a double redundancy):
Joachim Pileborg (above) states "using a function allows the compiler to do better typechecking". This is often stated, but I don't believe it. With macros, the compiler already has all the available type information at its fingertips. Functions simply destroy this. (And possibly destroy optimization, by pushing registers out to the stack, but that's a side issue.)
And frast (above) states "macros can cause unwanted side effects". True--but so can functions. I think the rule is to always use UPPER_CASE for macros which don't have function semantics. This rule has often been broken. But it doesn't apply here: the OP has redundantly used both uppercase and function semantics.
But I would suggest a tiny improvement. The OP has quite correctly placed parentheses around the whole macro, but there should also be parentheses around each argument:
#define UI32TO4(x, p) ((x) >> (((p) - 1) * 4) & 15)
Always enclose your macro args in parentheses, unless you are doing string or token concatenation, etc.
Macros are, of course, dangerous, but so are functions. (And the less said of STL, the better).

When should you use macros instead of inline functions?

In a previous question what I thought was a good answer was voted down for the suggested use of macros
#define radian2degree(a) (a * 57.295779513082)
#define degree2radian(a) (a * 0.017453292519)
instead of inline functions. Please excuse the newbie question, but what is so evil about macros in this case?
Most of the other answers discuss why macros are evil including how your example has a common macro use flaw. Here's Stroustrup's take: http://www.research.att.com/~bs/bs_faq2.html#macro
But your question was asking what macros are still good for. There are some things where macros are better than inline functions, and that's where you're doing things that simply can't be done with inline functions, such as:
token pasting
dealing with line numbers or such (as for creating error messages in assert())
dealing with things that aren't expressions (for example, consider how many implementations of offsetof() use a type name to create a cast operation)
the macro to get a count of array elements (can't do it with a function, as the array name decays to a pointer too easily) — see the sketch after this list
creating 'type polymorphic' function-like things in C where templates aren't available
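For instance, the array-element-count macro mentioned in the list is conventionally written like this (a standard idiom, not code from the original answer):
#include <stdio.h>

/* works only on true arrays; if the array has decayed to a pointer,
   the result is silently wrong */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

int main(void)
{
    int nums[] = { 3, 1, 4, 1, 5, 9 };
    printf("%zu elements\n", ARRAY_SIZE(nums));   /* prints "6 elements" */
    return 0;
}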
But with a language that has inline functions, the more common uses of macros shouldn't be necessary. I'm even reluctant to use macros when I'm dealing with a C compiler that doesn't support inline functions. And I try not to use them to create type-agnostic functions if at all possible (creating several functions with a type indicator as a part of the name instead).
I've also moved to using enums for named numeric constants instead of #define.
There's a couple of strictly evil things about macros.
They're text processing, and aren't scoped. If you #define foo 1, then any subsequent use of foo as an identifier will fail. This can lead to odd compilation errors and hard-to-find runtime bugs.
They don't take arguments in the normal sense. You can write a function that will take two int values and return the maximum, because the arguments will be evaluated once and the values used thereafter. You can't write a macro to do that, because it will evaluate at least one argument twice, and fail with something like max(x++, --y).
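A sketch of that failure, using the classic textbook max macro (not code from this answer):
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int x = 3, y = 5;
int m = MAX(x++, --y);   /* expands to ((x++) > (--y) ? (x++) : (--y));
                            one argument is evaluated twice */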
There are also common pitfalls: it's hard to get multiple statements right in them, and they require a lot of possibly superfluous parentheses.
In your case, you need parentheses:
#define radian2degree(a) (a * 57.295779513082)
needs to be
#define radian2degree(a) ((a) * 57.295779513082)
and you're still stepping on anybody who writes a function radian2degree in some inner scope, confident that that definition will work in its own scope.
For this specific macro, if I use it as follows:
int x=1;
x = radian2degree(x);
float y=1;
y = radian2degree(y);
there would be no type checking, and x,y will contain different values.
Furthermore, the following code
float x=1, y=2;
float z = radian2degree(x+y);
will not do what you think, since it will translate to
float z = (x + y * 57.295779513082);
instead of
float z = ((x + y) * 57.295779513082);
which is the expected result.
These are just a few examples of the misbehavior and misuse macros might invite.
Edit
You can see additional discussions about this here.
If possible, always use inline functions. They are type-safe and cannot easily be redefined.
#defines can be redefined or undefined, and there is no type checking.
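For example (illustrative):
#define SCALE 10
#undef SCALE       /* legal: silently removed... */
#define SCALE 20   /* ...and quietly redefined; a const or an inline
                      function would make this a compile-time error */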
Macros are relatively often abused and one can easily make mistakes using them as shown by your example. Take the expression radian2degree(1 + 1):
with the macro it will expand to 1 + 1 * 57.29... = 58.29...
with a function it will be what you want it to be, namely (1 + 1) * 57.29... = ...
More generally, macros are evil because they look like functions, so they trick you into using them just like functions, but they have subtle rules of their own. In this case, the correct way to write it would be (notice the parentheses around a):
#define radian2degree(a) ((a) * 57.295779513082)
But you should stick to inline functions. See these links from the C++ FAQ Lite for more examples of evil macros and their subtleties:
inline vs. macros
macros containing if
macros with multiple lines
macros used to paste two tokens together
The compiler's preprocessor is a finicky thing, and therefore a terrible candidate for clever tricks. As others have pointed out, it's easy for the compiler to misunderstand your intention with the macro, and it's easy for you to misunderstand what the macro will actually do; but most importantly, you can't step into macros in the debugger!
Macros are evil because you may end up passing more than a variable or a scalar to them, and this could result in unwanted behavior (define a max macro to determine the max of a and b, but pass a++ and b++ to the macro and see what happens).
If your function is going to be inlined anyway, there is no performance difference between a function and a macro. However, there are several usability differences between a function and a macro, all of which favor using a function.
If you build the macro correctly, there is no problem. But if you use a function, the compiler will do it correctly for you every time. So using a function makes it harder to write bad code.

When to use function-like macros in C

I was reading some code written in C this evening, and at the top of
the file was the function-like macro HASH:
#define HASH(fp) (((unsigned long)fp)%NHASH)
This left me wondering, why would somebody choose to implement a
function this way using a function-like macro instead of implementing
it as a regular vanilla C function? What are the advantages and
disadvantages of each implementation?
Thanks a bunch!
Macros like that avoid the overhead of a function call.
It might not seem like much. But in your example, the macro turns into 1-2 machine language instructions, depending on your CPU:
Get the value of fp out of memory and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
whereas the function equivalent would be a lot more machine language instructions, generally something like
Stick the value of fp on the stack
Call the function, which also puts the next (return) address on the stack
Maybe build a stack frame inside the function, depending on the CPU architecture and ABI convention
Get the value of fp off the stack and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
Maybe take the value from the register and put it back on the stack, depending on CPU and ABI
If a stack frame was built, unwind it
Pop the return address off the stack and resume executing instructions there
A lot more code, eh? If you're doing something like rendering every one of the tens of thousands of pixels in a window in a GUI, things run an awful lot faster if you use the macro.
Personally, I prefer using C++ inline as being more readable and less error-prone, but inlines are really more of a hint to the compiler which it doesn't have to take. Preprocessor macros are a sledgehammer the compiler can't argue with.
One important advantage of macro-based implementation is that it is not tied to any concrete parameter type. A function-like macro in C acts, in many respects, as a template function in C++ (templates in C++ were born as "more civilized" macros, BTW). In this particular case the argument of the macro has no concrete type. It might be absolutely anything that is convertible to type unsigned long. For example, if the user so pleases (and if they are willing to accept the implementation-defined consequences), they can pass pointer types to this macro.
Anyway, I have to admit that this macro is not the best example of type-independent flexibility of macros, but in general that flexibility comes handy quite often. Again, when certain functionality is implemented by a function, it is restricted to specific parameter types. In many cases in order to apply similar operation to different types it is necessary to provide several functions with different types of parameters (and different names, since this is C), while the same can be done by just one function-like macro. For example, macro
#define ABS(x) ((x) >= 0 ? (x) : -(x))
works with all arithmetic types, while function-based implementation has to provide quite a few of them (I'm implying the standard abs, labs, llabs and fabs). (And yes, I'm aware of the traditionally mentioned dangers of such macro.)
Macros are not perfect, but the popular maxim about "function-like macros being no longer necessary because of inline functions" is just plain nonsense. In order to fully replace function-like macros, C would need function templates (as in C++) or at least function overloading (as in C++ again). Without those, function-like macros are, and will remain, an extremely useful mainstream tool in C.
On one hand, macros are bad because they're handled by the preprocessor, which doesn't understand anything about the language and just does text replacement. They usually have plenty of limitations. I can't see one above, but usually macros are ugly solutions.
On the other hand, they are at times even faster than a static inline method. I was heavily optimizing a short program and found that calling a static inline method takes about twice as much time (just overhead, not actual function body) as compared with a macro.
The most common (and most often wrong) reason people give for using macros (in "plain old C") is the efficiency argument. Using them for efficiency is fine if you have actually profiled your code and are optimizing a true bottleneck (or are writing a library function that might be a bottleneck for somebody someday). But most people who insist on using them have not actually analyzed anything and are just creating confusion where it adds no benefit.
Macros can also be used for some handy search-and-replace type substitutions which the regular C language is not capable of.
Some problems I have had in maintaining code written by macro abusers is that the macros can look quite like functions but do not show up in the symbol table, so it can be very annoying trying to trace them back to their origins in sprawling codesets (where is this thing defined?!). Writing macros in ALL CAPS is obviously helpful to future readers.
If they are more than fairly simple substitutions, they can also create some confusion if you have to step-trace through them with a debugger.
Your example is not really a function at all:
#define HASH(fp) (((unsigned long)fp)%NHASH)
// (unsigned long)fp -- this is a cast
// fp                -- this is your value
// %NHASH            -- this is a MOD operation
I'd think this was just a way of writing more readable code, with the casting and mod operation wrapped into a single macro 'HASH(fp)'.
Now, if you decide to write a function for this, it would probably look like,
int hashThis(int fp)
{
    return ((fp) % NHASH);
}
Quite overkill for a function, as it:
introduces a call point
introduces call-stack setup and restore
The C Preprocessor can be used to create inline functions. In your example, the code will appear to call the function HASH, but instead is just inline code.
The benefits of macro functions were eliminated when C++ introduced inline functions. Many older APIs like MFC and ATL still use macro functions to do preprocessor tricks, but it just leaves the code convoluted and harder to read.
