I'm writing unit tests for some function macros, which are just wrappers around some function calls, with a little housekeeping thrown in.
I've been writing tests all morning and I'm starting to get tedium of the brainpan, so this might just be a case of tunnel vision, but:
Is there a valid case to be made for unit testing macro expansion? By that I mean checking that the correct function behavior is produced for the various source-code forms of the function macro's arguments. For example, in source code, an argument can be a:
literal
variable
operator expression
struct member access
pointer-to-struct member access
pointer dereference
array index
function call
macro expansion
(feel free to point out any that I've missed)
If the macro doesn't expand properly, then the code usually won't even compile. So then, is there even any sensible point in a different unit test if the argument was a float literal or a float variable, or the result of a function call?
Should the expansion be part of the unit test?
As I noted in a comment:
Using expressions such as value & 1 could reveal that the macros are careless, but code inspections can do that too.
I think going through the full panoply of tests is overkill; the tedium is giving you a relevant warning.
There is an additional mode of checking that might be relevant, namely side-effects such as: x++ + ++y as an argument. If the argument to the macro is evaluated more than once, the side-effects will probably be scrambled, or at least repeated. An I/O function (getchar(), or printf("Hello World\n")) as the argument might also reveal mismanagement of arguments.
It also depends in part on the rules you want to apply to the macros. However, if they're supposed to look and behave like function calls, they should evaluate each argument exactly once: if the macro doesn't evaluate an argument at all, then side-effects that would occur if the macro were really a function won't occur.
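A minimal sketch of such a side-effect check (SQUARE, calls and next_value are hypothetical names used only for illustration; the idea is that the test fails if the macro evaluates its argument more or fewer times than a real function would):

#include <assert.h>

/* Hypothetical macro that carelessly evaluates its argument twice. */
#define SQUARE(x) ((x) * (x))

static int calls;                          /* counts argument evaluations */
static int next_value(void) { return ++calls; }

void test_square_evaluates_argument_once(void)
{
    calls = 0;
    (void)SQUARE(next_value());            /* expands to next_value() * next_value() */
    assert(calls == 1);                    /* fails: the macro evaluated it twice */
}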
Also, don't underestimate the value of inline functions.
Based on the comments and some of the points made in Jonathan Leffler's answer, I've come to the conclusion that this is something that is better tested in functional testing, preferably with a fuzz tester.
That way, using a couple of automation scripts, the fuzzer can throw a jillion arguments at the function macro and log those that either don't compile, produce compiler warnings, or compile and run, but produce the incorrect result.
Since fuzz tests, unlike unit tests, aren't expected to run quickly, there's no problem just adding this to the fuzz suite and letting it run over the weekend.
The goal of testing is to find errors. And, your macro definitions can contain errors. Therefore, there is a case for testing macros in general, and unit-testing in particular can find many specific errors, as will be explained below.
Code inspection can obviously also be used to find errors; however, there are good points in favor of doing both: unit tests can be repeated cheaply whenever the respective code is modified, say, for refactoring.
Code inspections cannot be repeated cheaply (at least, they take more effort than re-running unit tests), but they can also find issues that tests can never detect, such as wrong or poor documentation, and design problems like code duplication.
That said, there are a number of issues you can find when unit-testing macros, some of which have already been mentioned. It may in principle be possible that some fuzz testers also check for such problems, but I doubt that problems with macro definitions are yet a focus of fuzz testers:
wrong algorithm: Expressions and statements in macro definitions can be just as wrong as they can be in normal non-macro code.
insufficient parenthesization (as mentioned in the comments): This is a potential problem with macros, and it can be detected, possibly even at compile time, by passing expressions with low-precedence operators as macro arguments. For example, calling FOO(x = 2) in test code will lead to a compile error if FOO(a) is defined as (2 * a) instead of (2 * (a)) (see the sketch after this list).
unintended multiple use of arguments in the expansion (as mentioned by Jonathan): This also is a potential problem specific to macros. It should be part of a macro's specification how often its arguments will be evaluated in the expanded code (and sometimes no fixed number can be given; see assert). Such statements about how often an argument will be evaluated can be tested by passing macro arguments with side effects that the test code can check afterwards. For example, if FOO(a) is defined to be ((a) * (a)), then the call FOO(++x) will result in x being incremented twice rather than once.
unintended expansion: Sometimes a macro shall expand in a way that causes no code to be produced. assert with NDEBUG is an example here: it shall expand such that the expanded code is optimized away completely. Whether a macro shall expand in such a way typically depends on configuration macros. To check that a macro actually 'disappears' for the respective configuration, syntactically wrong macro arguments can be used: FOO(++ ++), for example, can be a compile-time test that the empty expansion, rather than one of the non-empty expansions, was used (whether this works, however, depends on whether the non-empty expansions use the argument).
bad semicolon: To ensure that a function-like macro expands cleanly into a compound statement (with a proper do-while(0) wrapper but without a trailing semicolon), a compile-time check like if (0) FOO(42); else ... can be used.
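To make the compile-time checks above concrete, here is a rough sketch (FOO and SWAP are hypothetical macros standing in for the ones under test; only the shape of the checks matters):

/* Hypothetical macros under test. */
#define FOO(a)     (2 * (a))                                            /* expression-like */
#define SWAP(a, b) do { int t_ = (a); (a) = (b); (b) = t_; } while (0)  /* statement-like */

/* Parenthesization check: this compiles only if FOO parenthesizes its
 * argument.  A careless definition such as (2 * a) would expand
 * FOO(x = 2) to (2 * x = 2), which does not compile. */
static void check_parenthesization(void)
{
    int x = 0;
    (void)FOO(x = 2);
}

/* 'Bad semicolon' check: this compiles only if SWAP expands to something
 * that behaves as a single statement without a trailing semicolon. */
static void check_single_statement(int a, int b)
{
    if (0)
        SWAP(a, b);
    else
        SWAP(b, a);
}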
Note: Those tests I mentioned to be compile-time tests are, strictly speaking, just some form of static analysis. In contrast to using a static analysis tool, such tests have the benefit to specifically test those properties that the macros are expected to have according to their design. Like, static analysis tools typically issue warnings when macro arguments are used without parentheses in the expansion - however, in many expansions parentheses are intentionally not used.
My lecturer asked me this in class, and I was wondering: why is it a macro instead of a function?
The simple explanation would be that the standard requires assert to be a macro. If we look at the draft C99 standard (as far as I can tell the sections are the same in the draft C11 standard as well), section 7.2 Diagnostics paragraph 2 says:
The assert macro shall be implemented as a macro, not as an actual
function. If the macro definition is suppressed in order to access an
actual function, the behavior is undefined.
Why does it require this? The rationale given in the Rationale for International Standard—Programming Languages—C is:
It can be difficult or impossible to make assert a true function, so it is restricted to macro
form.
which is not very informative, but we can see why from the other requirements. Going back to section 7.2, paragraph 1 says:
[...] If NDEBUG is defined as a macro name at the point in the source file
where <assert.h> is included, the assert macro is defined simply as
#define assert(ignore) ((void)0)
The assert macro is redefined according to the current state of NDEBUG
each time that <assert.h> is included.
This is important since it allows an easy way to turn off assertions in release mode, where you may not want to take the cost of potentially expensive checks.
The second important requirement is that it must use the macros __FILE__ and __LINE__ and the identifier __func__, which is covered in section 7.2.1.1 The assert macro, which says:
[...] the assert macro writes information about the particular call
that failed [...] the latter are respectively the values of the
preprocessing macros __FILE__ and __LINE__ and of the identifier
__func__) on the standard error stream in an implementation-defined format.165) It then calls the abort function.
where footnote 165 says:
The message written might be of the form:
Assertion failed: expression, function abc, file xyz, line nnn.
Having it as a macro allows __FILE__ etc. to be evaluated at the proper location and, as Joachim points out, being a macro allows assert to insert the original expression into the message it generates.
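To illustrate (this is only a simplified sketch of the general shape, not the definition used by any particular implementation; MY_ASSERT is a made-up name):

#include <stdio.h>
#include <stdlib.h>

#ifdef NDEBUG
#define MY_ASSERT(expr) ((void)0)
#else
#define MY_ASSERT(expr) \
    ((expr) ? (void)0 \
            : (fprintf(stderr, "Assertion failed: %s, function %s, file %s, line %d.\n", \
                       #expr, __func__, __FILE__, __LINE__), \
               abort()))
#endif

Because MY_ASSERT is a macro, #expr stringizes the original expression, and __FILE__, __LINE__ and __func__ are expanded at the call site; a real function could only report its own location.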
The draft C++ standard requires that the contents of the cassert header be the same as the assert.h header from the Standard C library:
The contents are the same as the Standard C library header <assert.h>.
See also: ISO C 7.2.
Why (void)0?
Why use (void)0 as opposed to some other expression that does nothing? We can come up with a few reasons. First, this is how the assert synopsis looks in section 7.2.1.1:
void assert(scalar expression);
and it says (emphasis mine):
The assert macro puts diagnostic tests into programs; it expands to a void expression.
the expression (void)0 is consistent with the need to end up with a void expression.
Assuming we did not have that requirement, other possible expressions could have undesirable effects, such as allowing uses of assert in release mode that would not be allowed in debug mode. For example, using plain 0 would allow us to use assert in an assignment, and when used correctly it would likely generate an 'expression result unused' warning. As for using a compound statement, as a comment suggests, we can see from C multi-line macro: do/while(0) vs scope block that they can have undesirable effects in some cases.
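As a rough illustration of the 'plain 0' point (RELEASE_ASSERT is a hypothetical macro used only to show the effect):

#define RELEASE_ASSERT(e) 0     /* hypothetical 'plain 0' release expansion */

int misuse(int x)
{
    int ok = RELEASE_ASSERT(x > 0);  /* compiles against this definition, but would
                                        not compile against a debug definition that
                                        expands to a void expression */
    return ok;
}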
It allows capturing the file (through __FILE__) and line number (through __LINE__)
It allows assert to be replaced with a valid expression which does nothing (i.e. ((void)0)) when building in release mode
This macro is disabled if, at the moment of including <assert.h>, a macro with the name NDEBUG has already been defined. This allows a coder to include as many assert calls as needed in the source code while debugging the program, and then disable all of them for the production version by simply including a line like:
#define NDEBUG
at the beginning of their code, before the inclusion of <assert.h>.
Therefore, this macro is designed to capture programming errors, not user or run-time errors, since it is generally disabled after a program exits its debugging phase.
Making it a function would add function-call overhead, and you could not control all such asserts in release mode.
If you used a function, then __FILE__, __LINE__ and __func__ would give values from the assert function's own code, not from the calling line or the calling function.
Some assertions can be expensive to call. You've just written a high performance matrix inversion routine, and you add a sanity check
assert(is_identity(matrix * inverse))
to the end. Well, your matrices are pretty big, and if assert is a function, it would take a lot of time to do the computation before passing it into assert. Time you really don't want to spend if you're not doing debugging.
Or maybe the assertion is relatively cheap, but it's contained in a very short function that will get called in an inner loop. Or other similar circumstances.
By making assert a macro instead, you can eliminate the calculation entirely when assertions are turned off.
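A sketch of that situation (is_identity_product and check_inverse are hypothetical names used only for illustration):

#include <assert.h>

/* Hypothetically expensive sanity check: multiplies matrix by inverse and
 * compares the result against the identity. */
int is_identity_product(const double *matrix, const double *inverse, int n);

void check_inverse(const double *matrix, const double *inverse, int n)
{
    /* Because assert is a macro, building with NDEBUG defined makes this
     * whole line expand to ((void)0): the expensive check is never even
     * evaluated.  If assert were a function, the argument would still have
     * to be computed before the call. */
    assert(is_identity_product(matrix, inverse, n));
}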
Why is assert a macro and not a function?
Because it should be compiled in DEBUG mode and should not be compiled in RELEASE mode.
I have a couple of simple functions like
#define JacobiLog(x1,x2) ((x1>x2)?x1:x2)+log(1+exp(-fabs(x1-x2)))
Which is better to implement (in terms of code, compilation, memory, ...): the define as above, or a simple function like this?
double JacobiLog(double x1,double x2)
{
return ((x1>x2) ? x1 : x2) + log(1+exp(-fabs(x1-x2)));
}
The compiler will probably inline your function automatically. You should use the function, not a define.
It will also avoid unexpected behavior in cases where you use your define like this:
double num = JacobiLog(x++, y++);
I'll let you imagine the problem caused by the textual replacement...
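For instance (this is just the textual expansion of the macro from the question):

double num = JacobiLog(x++, y++);
/* is replaced textually by (roughly): */
double num = ((x++ > y++) ? x++ : y++) + log(1 + exp(-fabs(x++ - y++)));
/* so each argument is evaluated several times, the unsequenced increments are
   in fact undefined behavior, and the result is not what the caller expects;
   the inline function has no such problem */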
A define could possibly be a little faster, but most probably the compiler will inline the function anyway (or you can mark it inline) and they will be the same. But the function is better, because it is more readable and easier to debug.
The function is better, assuming a good compiler.
With the function, it is left to the compiler whether the code is inlined, or not (assuming the definition of the function is accessible to everyone who uses it, for example if it is an inline function declared in a header for C++, or just a plain function with all of its users in the same translation unit). With the macro, it is always inlined, which is not necessarily faster, as it may lead to code bloat and therefore more cache misses and page faults.
Not to mention macros are difficult to read and, even worse, to debug.
Even though the 'define' is faster (since it avoids a function call), the compiler can optimize and inline your function and make it just as fast.
If you are in a C++ environment, you should always use templates and functions. They will make your program more readable and prevent type errors.
In C, a macro can be useful since the type is not specified (see the example below):
/* Will work with int, long, double, short, etc. */
#define HIGHER(VAL1, VAL2) ((VAL1) > (VAL2) ? (VAL1) : (VAL2))
It's a micro-optimization. Unless you're doing embedded programming and every instruction counts, go with the function. Not to mention that the log is likely about 100x slower than the overhead to call a function. So you can only get about a 1% saving if your program consists mainly of calling this function. [1] Once your program starts doing significant other things, this saving will be reduced to basically unnoticeable.
The compiler is free to inline the function wherever possible, which would make the two identical. However, you can't force the compiler to do so. There is an inline keyword in C++, but this is just a hint; the compiler is free to ignore it.
See this for some differences between the two (this covers inline versus non-inline functions, but, as stated above, inline functions are essentially the same as #define's). The basic conclusion to the link is "it depends".
Also note that, behaviourally, a #define and a function are not 100% equivalent.
[1]: Figures largely made up. Benchmark if you want accurate results.
First (for a complete answer) we have to acknowledge that using a macro can have surprise side-effects which you might not intend, and that a function ensures that you know the incoming types and you know that each parameter is evaluated exactly once.
More often than not, these effects of using a macro are a source of problems.
Generally a compiler will inline the function as appropriate, and if it does its job right then it should have nearly all the advantages of a macro but without the rarely-intended side-effects.
Occasionally, though, you can actually get some benefits that an inlining compiler mightn't recognise. For example, your macro will defer converting the arguments to double if they were int or long, and perform more of the operations in integer arithmetic (which might have a performance or precision advantage). You might also get integer overflow and incorrect results.
Since you included 'memory' in your list of "better" factors, it's tempting to say that the function is smaller (assuming you configure your compiler to optimise for size), but this isn't necessarily true.
Obviously, as a function you need only one copy of it in memory and all callers can use that same code, whereas a macro inlined or expanded at every use duplicates the code. Your compiler is very unlikely to isolate a macro and convert it into a function called from many different places in the code.
Where a never-inlined function can fail to be smaller is where it stands in the way of simplifications. There are three common cases I can think of:
If all of the uses of the function involve constant parameters, the inlined simplifications might come out smaller than the whole original function.
The register marshalling code required to execute a function call with the parameters in the correct registers can be longer than the function itself.
Adding a function call can add to the register pressure in the caller, forcing it to generate more complicated code, possibly forcing it to create a stack frame and save more registers on entry and exit.
I want to run a simple analysis on C files (such as: if you call the foo macro with INT_TYPE as an argument, then cast the result to int*). I do not want to preprocess the file, I just want to parse it (so that, for instance, I'll have correct line numbers).
I.e., I want to get from
#include <a.h>
#define FOO(f)
int f() {FOO(1);}
a list of tokens like
<include_directive value="a.h"/>
<macro name="FOO"><param name="f"/><result/></macro>
<function name="f">
<return>int</return>
<body>
<macro_call name="FOO"><param>1</param></macro_call>
</body>
</function>
with no need to set include path, etc.
Is there any preexisting parser that does it? All parsers I know assume C is preprocessed. I want to have access to the macros and actual include instructions.
Our C Front End can parse code containing preprocessor elements and can do this to a fair extent, still building a usable AST. (Yes, the parse tree has precise file/line/column number information.)
There are a number of restrictions, within which it handles most code. In the few cases it cannot handle, a small, easy change to the source file that gives equivalent code often solves the problem.
Here's a rough set of rules and restrictions:
#includes and #defines can occur wherever a declaration or statement can occur, but not in the middle of a statement. These rarely cause a problem.
macro calls can occur where function calls occur in expressions, or can appear without a semicolon in place of statements. Macro calls that span non-well-formed chunks are not handled well (anybody surprised?). The latter occur occasionally but not rarely and need manual revision. OP's example of "j(v,oid)*" is problematic, but this is really rare in code.
#if ... #endif must be wrapped around major language concepts (nonterminals) (constant, expression, statement, declaration, function) or sequences of such entities, or around certain non-well-formed but commonly occurring idioms, such as if (exp) {. Each arm of the conditional must contain the same kind of syntactic construct as the other arms. #if wrapped around random text used as a bad kind of comment is problematic, but easily fixed in the source by making it a real comment. Where these conditions are not met, you need to modify the original source code, often by moving the #if/#elif/#else/#endif a few tokens.
In our experience, one can revise a code base of 50,000 lines in a few hours to get around these issues. While that seems annoying (and it is), the alternative is to not be able to parse the source code at all, which is far worse than annoying.
You also want more than just a parser. See Life After Parsing, to know what happens after you succeed in getting a parse tree. We've done some additional work in building symbol tables in which the declarations are recorded with the preprocessor context in which they are embedded, enabling type checking to include the preprocessor conditions.
You can have a look at this ANTLR grammar. You will have to add rules for preprocessor tokens, though.
Your specific example can be handled by writing your own parser and ignoring macro expansion.
Because FOO(1) itself can be interpreted as a function call.
When more cases are considered, however, the parser becomes much more difficult. You can refer to the PDF link to find more information.
I was reading some code written in C this evening, and at the top of
the file was the function-like macro HASH:
#define HASH(fp) (((unsigned long)fp)%NHASH)
This left me wondering, why would somebody choose to implement a
function this way using a function-like macro instead of implementing
it as a regular vanilla C function? What are the advantages and
disadvantages of each implementation?
Thanks a bunch!
Macros like that avoid the overhead of a function call.
It might not seem like much. But in your example, the macro turns into 1-2 machine language instructions, depending on your CPU:
Get the value of fp out of memory and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
whereas the function equivalent would be a lot more machine language instructions, generally something like
Stick the value of fp on the stack
Call the function, which also puts the next (return) address on the stack
Maybe build a stack frame inside the function, depending on the CPU architecture and ABI convention
Get the value of fp off the stack and put it in a register
Take the value in the register, do a modulus (%) calculation by a fixed value, and leave that in the same register
Maybe take the value from the register and put it back on the stack, depending on CPU and ABI
If a stack frame was built, unwind it
Pop the return address off the stack and resume executing instructions there
A lot more code, eh? If you're doing something like rendering every one of the tens of thousands of pixels in a window in a GUI, things run an awful lot faster if you use the macro.
Personally, I prefer using C++ inline as being more readable and less error-prone, but inlines are also really more of a hint to the compiler which it doesn't have to take. Preprocessor macros are a sledge hammer the compiler can't argue with.
One important advantage of macro-based implementation is that it is not tied to any concrete parameter type. A function-like macro in C acts, in many respects, as a template function in C++ (templates in C++ were born as "more civilized" macros, BTW). In this particular case the argument of the macro has no concrete type. It might be absolutely anything that is convertible to type unsigned long. For example, if the user so pleases (and if they are willing to accept the implementation-defined consequences), they can pass pointer types to this macro.
Anyway, I have to admit that this macro is not the best example of the type-independent flexibility of macros, but in general that flexibility comes in handy quite often. Again, when certain functionality is implemented by a function, it is restricted to specific parameter types. In many cases, in order to apply a similar operation to different types, it is necessary to provide several functions with different parameter types (and different names, since this is C), while the same can be done with just one function-like macro. For example, the macro
#define ABS(x) ((x) >= 0 ? (x) : -(x))
works with all arithmetic types, while a function-based implementation has to provide quite a few of them (I'm referring to the standard abs, labs, llabs and fabs). (And yes, I'm aware of the traditionally mentioned dangers of such a macro.)
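A minimal usage sketch of that point:

#include <stdio.h>

#define ABS(x) ((x) >= 0 ? (x) : -(x))

int main(void)
{
    int    i = -3;
    long   l = -4L;
    double d = -5.5;

    /* one macro covers what abs(), labs() and fabs() need separate functions for */
    printf("%d %ld %f\n", ABS(i), ABS(l), ABS(d));
    return 0;
}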
Macros are not perfect, but the popular maxim about "function-like macros being no longer necessary because of inline functions" is just plain nonsense. In order to fully replace function-like macros, C is going to need function templates (as in C++) or at least function overloading (again, as in C++). Without that, function-like macros are, and will remain, an extremely useful mainstream tool in C.
On one hand, macros are bad because they're handled by the preprocessor, which doesn't understand anything about the language and just does text replacement. They usually have plenty of limitations. I can't see one above, but usually macros are ugly solutions.
On the other hand, they are at times even faster than a static inline method. I was heavily optimizing a short program and found that calling a static inline method takes about twice as much time (just overhead, not actual function body) as compared with a macro.
The most common (and most often wrong) reason people give for using macros (in "plain old C") is the efficiency argument. Using them for efficiency is fine if you have actually profiled your code and are optimizing a true bottleneck (or are writing a library function that might be a bottleneck for somebody someday). But most people who insist on using them have not actually analyzed anything and are just creating confusion where it adds no benefit.
Macros can also be used for some handy search-and-replace type substitutions which the regular C language is not capable of.
Some problems I have had in maintaining code written by macro abusers is that the macros can look quite like functions but do not show up in the symbol table, so it can be very annoying trying to trace them back to their origins in sprawling codesets (where is this thing defined?!). Writing macros in ALL CAPS is obviously helpful to future readers.
If they are more than fairly simple substitutions, they can also create some confusion if you have to step-trace through them with a debugger.
Your example is not really a function at all,
#define HASH(fp) (((unsigned long)fp)%NHASH)
// this is a cast ^^^^^^^^^^^^^^^
// this is your value 'fp' ^^
// this is a MOD operation ^^^^^^
I'd think this was just a way of writing more readable code, with the cast and mod operation wrapped into a single macro 'HASH(fp)'.
Now, if you decide to write a function for this, it would probably look like,
int hashThis(int fp)
{
return ((fp)%NHASH);
}
Quite an overkill for a function, as it:
introduces a call point
introduces call-stack setup and restore
The C preprocessor can be used to create inline functions. In your example, the code appears to call the function HASH, but it is really just inlined code.
The benefits of macro functions were largely eliminated when C++ introduced inline functions. Many older APIs like MFC and ATL still use macro functions for preprocessor tricks, but this just leaves the code convoluted and harder to read.
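For comparison, a minimal sketch of the inline-function equivalent of the original macro (hash_fp is a made-up name, and NHASH is assumed to be defined elsewhere, as in the question):

/* C99 or later: keeps the type checking and readability of a function while
 * still letting the compiler eliminate the call overhead. */
static inline unsigned long hash_fp(const void *fp)
{
    return (unsigned long)fp % NHASH;   /* mirrors the cast-and-mod in HASH(fp) */
}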