Annotate functions with macros - c

I'm looking into Flutter's embedder API because I might be calling it from Rust in an upcoming project, and I found this:
FLUTTER_EXPORT
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
The FLUTTER_EXPORT part is (from what I can see) a macro defined as:
#ifndef FLUTTER_EXPORT
#define FLUTTER_EXPORT
#endif // FLUTTER_EXPORT
I'm by no means a C guru, but I have done my fair share of C programming, and I have never had the opportunity to use something like this. What is it? I've tried to Google it, but I don't really know what to call it other than "annotation", which doesn't feel like a perfect fit.
Flutter embedder.h

As was also pointed out, this macro expands to nothing meaningful by default; for example, writing it any number of times will affect nothing.
Another nice use (helpful when testing code), from here:
You can pass -DFLUTTER_EXPORT=SOMETHING while using gcc. This is useful for running your code for testing purposes without touching the codebase. It can also be used to provide different kinds of expansion based on a compile-time parameter, which can be exploited in different ways.
A significant part of my answer concerns visibility via the empty macro: gcc also provides a way to achieve the same thing, as described here, using __attribute__ ((visibility ("default"))) (as IharobAlAsimi/nemequ mentioned), -fvisibility=hidden, etc. for the macro FLUTTER_EXPORT. The name also gives us an idea that it might be necessary to add the attribute __declspec(dllimport) (which means it will be imported from a DLL). An example of gcc's visibility support over there would be helpful.
It can also be useful for associating some kind of debug operation, like this (by this I mean that these empty macros can be used this way too, though the name suggests that this was not the intended use):
#ifdef FLUTTER_EXPORT
#define ...
#else
#define ...
#endif
Here the first #define would specify some printing or logging macro, and if the flag is not defined, the call would be replaced with a blank statement such as do{}while(0).
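A concrete version of that skeleton might look like this (a minimal sketch; ENABLE_TRACE and TRACE are made-up names, not anything Flutter defines):
#include <stdio.h>

#ifdef ENABLE_TRACE                 /* hypothetical flag, e.g. passed as -DENABLE_TRACE */
#define TRACE(msg) fprintf(stderr, "trace: %s\n", (msg))
#else
#define TRACE(msg) do {} while (0)  /* expands to a harmless no-op statement */
#endif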

This is going to be a bit rambling, but I wanted to talk about some stuff that came up in the comments in addition to your initial question.
On Windows, under certain situations the compiler needs to be explicitly told that certain symbols are to be publicly exposed by a DLL. In Microsoft's compiler (hereafter referred to as MSVC), this is done by adding a __declspec(dllexport) annotation to the function, so you would end up with something like
__declspec(dllexport)
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
Alas, the declspec syntax is non-standard. While GCC does actually support it (IIRC it's ignored on non-Windows platforms), other conformant compilers may not, so it should only be emitted for compilers which support it. The path that the Flutter devs have taken is one easy way to do this; if FLUTTER_EXPORT isn't defined elsewhere, they simply define it to nothing, so that on compilers where __declspec(dllexport) is unnecessary the prototype you've posted would become
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
But on MSVC you'll get the declspec.
Once you have the default value (nothing) set, you can start thinking about how to define special cases. There are a few ways to do this, but the most popular solutions would be to include a platform-specific header which defines the macros to the correct values for that platform, or to use the build system to pass a definition to the compiler (e.g., -DFLUTTER_EXPORT="__declspec(dllexport)"). I prefer to keep logic in the code instead of the build system when possible, to make it easier to reuse the code with different build systems, so I'll assume that's the method for the rest of the answer, but you should be able to see the parallels if you choose the build-system route.
As you can imagine, the fact that there is a default definition (which, in this case, is empty) makes maintenance easier; instead of having to define every macro in each platform-specific header, you only have to define it in the headers where a non-default value is required. Furthermore, if you add a new macro you needn't add it to every header immediately.
I think that's pretty much the end of the answer to your initial question.
Now, if we're not building Flutter, but instead using the header in a library which links to the Flutter DLL, __declspec(dllexport) isn't right. We're not exporting the FlutterEngineShutdown function in our code, we're importing it from a DLL. So, if we want to use the same header (which we do, otherwise we introduce the possibility of headers getting out of sync), we actually want to map FLUTTER_EXPORT to __declspec(dllimport). AFAIK this isn't usually necessary even on Windows, but there are situations where it is.
The solution here is to define a macro when we're building Flutter, but never define it in the public headers. Again, I like to use a separate header, something like
#define FLUTTER_COMPILING
#include "public-header.h"
I'd also throw some include guards in, and a check to make sure the public API wasn't accidentally included first, but I'm too lazy to type it here.
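A sketch of what that could look like (file and guard names here are illustrative, not Flutter's actual headers):
/* flutter-internal.h: include this instead of the public header when
   building Flutter itself. All names here are made up for illustration. */
#ifndef FLUTTER_INTERNAL_H
#define FLUTTER_INTERNAL_H

#ifdef PUBLIC_HEADER_H  /* the public header's own include guard */
#error "Include the internal header before the public one when building Flutter."
#endif

#define FLUTTER_COMPILING
#include "public-header.h"

#endif /* FLUTTER_INTERNAL_H */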
Then you can define FLUTTER_EXPORT with something like
#if defined(FLUTTER_COMPILING)
#define FLUTTER_EXPORT __declspec(dllexport)
#else
#define FLUTTER_EXPORT __declspec(dllimport)
#endif
You may also want to add a third case, where neither is defined, for situations where you're building the Flutter SDK into an executable instead of building Flutter as a shared library then linking to it from your executable. I'm not sure if Flutter supports that or not, but for now let's just focus on the Flutter SDK as a shared library.
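For example, with a made-up FLUTTER_STATIC flag for that third case, the whole thing might look like:
#if defined(FLUTTER_STATIC)       /* hypothetical: Flutter built into the executable */
#define FLUTTER_EXPORT
#elif defined(FLUTTER_COMPILING)  /* building the Flutter DLL itself */
#define FLUTTER_EXPORT __declspec(dllexport)
#else                             /* consuming the Flutter DLL */
#define FLUTTER_EXPORT __declspec(dllimport)
#endif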
Next, let's look at a related issue: visibility. Most compilers which aren't MSVC masquerade as GCC; they define __GNUC__, __GNUC_MINOR__, and __GNUC_PATCHLEVEL__ (and other macros) to the appropriate values and, more importantly, if they're pretending to be GCC ≥ 4.2 they support the visibility attribute.
Visibility isn't quite the same as dllexport/dllimport. Instead, it's more like telling the compiler whether the symbol is internal ("hidden") or publicly visible ("default"). This is a bit like the static keyword, but while static restricts a symbol's visibility to the current compilation unit (i.e., source file), hidden symbols can be used in other compilation units but they're not exposed by the linker.
Hiding symbols which needn't be public can be a huge performance win, especially for C++ (which tends to expose a lot more symbols than people think). Obviously having a smaller symbol table makes the linking a lot faster, but perhaps more significant is that the compiler can perform a lot of optimizations which it can't or doesn't for public symbols. Sometimes this is as simple as inlining a function which otherwise wouldn't be, but another huge performance gain can come from the compiler being able to assume data from a caller is aligned, which in turn allows vectorization without unnecessary shuffling. Another might allow the compiler to assume that a pointer isn't aliased, or maybe a function is never called and can be pruned. Basically, there are lots of optimizations the compiler can only do if it knows it can see all the invocations of a function, so if you care about runtime efficiency you should never expose more than is necessary.
This is also a good chance to note that FLUTTER_EXPORT isn't a very good name. Something like FLUTTER_API or FLUTTER_PUBLIC would be better, so let's use that from now on.
By default, symbols are publicly visible, but you can change this by passing -fvisibility=hidden to the compiler. Whether or not you do this will determine whether you need to annotate public functions with __attribute__((visibility("default"))) or private functions with __attribute__((visibility("hidden"))), but I would suggest passing the argument so that if you forget to annotate something you'll end up with an error when you try to use it from a different module instead of silently exposing it publicly.
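Putting the MSVC and visibility pieces together, a portable definition might look roughly like this (a sketch, not Flutter's actual code; the GCC version check mirrors the ≥ 4.2 requirement mentioned above):
#if defined(_WIN32) || defined(__CYGWIN__)
#if defined(FLUTTER_COMPILING)
#define FLUTTER_API __declspec(dllexport)
#else
#define FLUTTER_API __declspec(dllimport)
#endif
#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))
#define FLUTTER_API __attribute__((visibility("default")))
#else
#define FLUTTER_API
#endif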
Two other things also came up in the comments: debugging macros and function annotations. Because of where FLUTTER_EXPORT is used, we know it's not a debugging macro, but we can talk about them for a second anyway. The idea is that you can insert extra code into your compiled software depending on whether or not a macro is defined, something like this:
#if defined(DISABLE_LOGGING)
# define my_log_func(msg)
#else
# define my_log_func(msg) my_log_func_ex(msg)
#endif
This is actually a pretty bad idea; think about code like this:
if (foo)
    my_log_func("Foo is true!");
bar();
With the above definitions, if you compile with DISABLE_LOGGING defined, you'll end up with
if (foo)
    bar();
Which probably isn't what you wanted (unless you're entering an obfuscated C competition, or trying to insert a backdoor).
Instead, what you usually want is (as coderredoc mentioned) basically a no-op statement:
#if defined(DISABLE_LOGGING)
# define my_log_func(msg) do{}while(0)
#else
# define my_log_func(msg) my_log_func_ex(msg)
#endif
You may end up with a compiler error in some weird situations, but compile-time errors are vastly preferable to difficult-to-find bugs like you can end up with for the first version.
Annotations for static analysis are another thing that was mentioned in the comments, and something I'm a huge fan of. For example, let's say we have a function in our public API which takes a printf-style format string. We can add an annotation:
__attribute__((format(printf, 2, 3)))
void print_warning(Context* ctx, const char* fmt, ...);
Now if you try to pass an int to a %f, or forget an argument, the compiler can emit a diagnostic at compile time just like with printf itself. I'm not going to go into each and every one of these, but taking advantage of them is a great way to get the compiler to catch bugs before they make it into production code.
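To make that concrete, here is a small self-contained sketch (Context and print_warning are the hypothetical API from above):
typedef struct Context Context;  /* opaque type, for illustration */

__attribute__((format(printf, 2, 3)))
void print_warning(Context* ctx, const char* fmt, ...);

void demo(Context* ctx) {
    /* With -Wformat (part of -Wall), the compiler warns here: an int is
       passed where %f expects a double, just as it would for printf. */
    print_warning(ctx, "%f ms", 42);
}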
Now for some self-promotion. Pretty much all this stuff is highly platform-dependent; whether a feature is available and how to use it properly can depend on the compiler, compiler version, the OS, and more. If you like to keep your code portable you end up with a lot of preprocessor cruft to get it all right. To deal with this, I put together a project called Hedley a while back. It's a single header you can drop into your source tree which should make it a lot easier to take advantage of this type of functionality without making people's eyes bleed when they look at your headers.

Related

Is this a reasonable hack to inline functions across translation units?

I'm writing performance-sensitive code that really requires me to force certain function calls to be inlined.
For inline functions that are shared between translation units via a header, one would normally have to put the function definition in the header file. I don't want to do that. Some of these functions operate on complex data structures that should not be exposed in the header.
I've gotten around this by simply #including all the .h and .c files once each into a single .c file, so that there is only one translation unit. (That slows down re-compiles, but not by enough to matter.)
This would be "problem solved," but it eliminates getting an error when a function in one C file calls a function in another C file that is supposed to be private, and I want to get an error in that case. So, I have a separate Makefile entry that does a "normal" build, just to check for this case.
In order to force functions declared inline to play nicely in the "normal" build, I actually define a macro, may_inline, which is used where the inline attribute normally would be. It is defined as empty for a normal build and is defined as "inline" for an optimized build.
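A minimal sketch of such a macro, assuming the optimized unity build is signalled by a hypothetical OPTIMIZED_BUILD define (static inline is used here to sidestep C99 inline linkage rules):
#if defined(OPTIMIZED_BUILD)  /* hypothetical flag set by the unity-build makefile */
#define may_inline static inline
#else
#define may_inline            /* normal build: an ordinary external function */
#endif

may_inline int add_fast(int a, int b) { return a + b; }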
This seems like an acceptable solution. The only downside I can see is that I can't have private functions in different .c files that have the same prototype, but so far, that hasn't been much of an issue for me.
Another potential solution is to use GCC's Link-Time Optimization, which is supposed to allow inlining across translation units. It's a new feature, though, and I don't trust it to always inline things the way I would want. Furthermore, I can only get it working on trivial problems, not my actual code.
Is this an acceptable hack, or am I doing something incredibly stupid? The fact that I've never seen this done before makes me a bit nervous.
Unity build is an absolutely valid approach and has been widely used in industry since forever (see e.g. this post). Recent versions of Visual Studio even provide builtin support for them.
LTO has a downside of not being portable even across compilers for the same platform.

How to use the __attribute__ keyword in GCC C?

I am not clear on the use of the __attribute__ keyword in C. I have read the relevant GCC docs but am still not able to understand it. Can someone help me understand?
__attribute__ is not part of C, but is an extension in GCC that is used to convey special information to the compiler. The syntax of __attribute__ was chosen to be something that the C preprocessor would accept and not alter (by default, anyway), so it looks a lot like a function call. It is not a function call, though.
Like much of the information that a compiler can learn about C code (by reading it), the compiler can make use of the information it learns through __attribute__ data in many different ways -- even using the same piece of data in multiple ways, sometimes.
The pure attribute tells the compiler that a function is actually a mathematical function -- using only its arguments and the rules of the language to arrive at its answer with no other side effects. Knowing this the compiler may be able to optimize better when calling a pure function, but it may also be used when compiling the pure function to warn you if the function does do something that makes it impure.
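For instance, a function like this could be marked pure (a small sketch; count_ones is a made-up example):
/* The result depends only on the argument, with no side effects, so the
   compiler may fold repeated calls with the same argument into one. */
__attribute__((pure))
int count_ones(unsigned x) {
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}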
If you can keep in mind that (even though a few other compilers support them) attributes are a GCC extension and not part of C and their syntax does not fit into C in an elegant way (only enough to fool the preprocessor) then you should be able to understand them better.
You should try playing around with them. Take the ones that are more easily understood for functions and try them out. Do the same thing with data (it may help to look at the assembly output of GCC for this, but sizeof and checking the alignment will often help).
Think of it as a way to inject syntax into the source code, which is not standard C, but rather meant for consumption of the GCC compiler only. But, of course, you inject this syntax not for the fun of it, but rather to give the compiler additional information about the elements to which it is attached.
You may want to instruct the compiler to align a certain variable in memory at a certain alignment. Or you may want to declare a function deprecated so that the compiler will automatically generate a deprecated warning when others try to use it in their programs (useful in libraries). Or you may want to declare a symbol as a weak symbol, so that it will be linked in only as a last resort, if any other definitions are not found (useful in providing default definitions).
All of this (and more) can be achieved by attaching the right attributes to elements in your program. You can attach them to variables and functions.
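Small sketches of the three uses just described (names are illustrative):
/* Align a buffer on a 64-byte boundary (e.g. a cache line). */
static char buffer[256] __attribute__((aligned(64)));

/* Warn anyone who still calls this function. */
__attribute__((deprecated))
void old_api(void);

/* A default definition, used only if no strong definition is found. */
__attribute__((weak))
void on_event(void) { /* default: do nothing */ }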
Take a look at this whole bunch of other GCC extensions to C. The attribute mechanism is a part of these extensions.
There are too many attributes for there to be a single answer, but examples help.
For example __attribute__((aligned(16))) makes the compiler align that struct/function on a 16-byte boundary.
__attribute__((noreturn)) tells the compiler this function never returns to its caller (e.g. standard functions like exit(int)).
__attribute__((always_inline)) makes the compiler inline that function even if it wouldn't normally choose to (using the inline keyword suggests to the compiler that you'd like it inlining, but it's free to ignore you - this attribute forces it).
Essentially they're mostly about telling the compiler you know better than it does, or for overriding default compiler behaviour on a function by function basis.
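For instance (a sketch; die and square are made-up names):
/* The compiler may discard any code that follows a call to this function. */
__attribute__((noreturn))
void die(const char *msg);

/* Inlined even without optimization; the compiler errors out if it can't comply. */
__attribute__((always_inline)) static inline
int square(int x) { return x * x; }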
One of the best (but little known) features of GNU C is the attribute mechanism, which allows a developer to attach characteristics to function declarations to allow the compiler to perform more error checking. It was designed in a way to be compatible with non-GNU implementations, and we've been using this for years in highly portable code with very good results.
Note that __attribute__ is spelled with two underscores before and two after, and there are always two sets of parentheses surrounding the contents; there is a good reason for this (see the link below). GNU CC needs the -Wall compiler flag to enable most of these checks (yes, there is a finer degree of warnings control available, but we are very big fans of max warnings anyway).
For more information please go to http://unixwiz.net/techtips/gnu-c-attributes.html

Using Sparse to check C code

Does anyone have experience with Sparse? I seem unable to find any documentation, so the warnings, and errors it produces are unclear to me. I tried checking the mailing list and man page but there really isn't much in either.
For instance, I use INT_MAX in one of my files. This generates an error (undefined identifier) even though I #include limits.h.
Is there any place where the errors and warnings have been explained?
Sparse isn't intended to be a lint, per se. Sparse is intended to produce a parse tree of arbitrary code so that it can be further analyzed.
In your example, you want to define _GNU_SOURCE (which I believe turns on __GNUC__), which exposes the bits you need in limits.h.
I would avoid defining __GNUC__ on its own, as several things it activates might behave in an undefined way without all of the other switches that _GNU_SOURCE turns on being defined.
My point isn't to help you squash errors one by one; it's to reiterate that Sparse is mostly used as a library, not as a standalone static analysis tool.
From my copy of the README (not sure if I have the current version) :
This means that a user of the library will literally just need to do
struct string_list *filelist = NULL;
char *file;

action(sparse_initialize(argc, argv, filelist));

FOR_EACH_PTR_NOTAG(filelist, file) {
    action(sparse(file));
} END_FOR_EACH_PTR_NOTAG(file);
and he is now done - having a full C parse of the file he opened. The
library doesn't need any more setup, and once done does not impose any
more requirements. The user is free to do whatever he wants with the
parse tree that got built up, and needs not worry about the library ever
again. There is no extra state, there are no parser callbacks, there is
only the parse tree that is described by the header files. The action
function takes a pointer to a symbol_list and does whatever it likes with it.
The library also contains (as an example user) a few clients that do the
preprocessing, parsing and type evaluation and just print out the
results. These clients were done to verify and debug the library, and
also as trivial examples of what you can do with the parse tree once it
is formed, so that users can see how the tree is organized.
The included clients are more 'functional test suites and examples' than anything. It's a very useful tool, but you might consider another usage angle if you want to employ it. I like it because it doesn't use lex/bison, which makes it remarkably easier to hack.
If you look at limits.h you'll see that INT_MAX is defined inside this #if
/* If we are not using GNU CC we have to define all the symbols ourself.
Otherwise use gcc's definitions (see below). */
#if !defined __GNUC__ || __GNUC__ < 2
so to get it to work you should undefine __GNUC__ before including limits.h.
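In the affected file, that is simply (a two-line sketch):
/* Force limits.h to define the symbols itself instead of relying on
   GCC-internal definitions that Sparse doesn't supply. */
#undef __GNUC__
#include <limits.h>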

Why should #ifdef be avoided in .c files?

A programmer I respect said that in C code, #if and #ifdef should be avoided at all costs, except possibly in header files. Why would it be considered bad programming practice to use #ifdef in a .c file?
Hard to maintain. It's better to use interfaces to abstract platform-specific code than to abuse conditional compilation by scattering #ifdefs all over your implementation.
E.g.
void foo() {
#ifdef WIN32
    // do Windows stuff
#else
    // do Posix stuff
#endif
    // do general stuff
}
Is not nice. Instead have files foo_w32.c and foo_psx.c with
foo_w32.c:
void foo() {
    // windows implementation
}
foo_psx.c:
void foo() {
    // posix implementation
}
foo.h:
void foo(); // common interface
Then have two makefiles [1]: Makefile.win and Makefile.psx, each compiling the appropriate .c file and linking against the right object.
Minor amendment:
If foo()'s implementation depends on some code that appears in all platforms, e.g. common_stuff() [2], simply call that in your foo() implementations.
E.g.
common.h:
void common_stuff(); // May be implemented in common.c, or maybe has multiple
                     // implementations in common_{A, B, ...} for platforms
                     // { A, B, ... }. Irrelevant.
foo_{w32, psx}.c:
void foo() { // Win32/Posix implementation
    // Stuff
    ...
    if (bar) {
        common_stuff();
    }
}
While you may be repeating a function call to common_stuff(), you can't parameterize your definition of foo() per platform unless it follows a very specific pattern. Generally, platform differences require completely different implementations and don't follow such patterns.
[1] Makefiles are used here illustratively. Your build system may not use make at all, such as if you use Visual Studio, CMake, Scons, etc.
[2] Even if common_stuff() actually has multiple implementations, varying per platform.
(Somewhat off the asked question)
I saw a tip once suggesting the use of #if(n)def/#endif blocks for use in debugging/isolating code instead of commenting.
It was suggested to help avoid situations in which the section to be commented already had documentation comments and a solution like the following would have to be implemented:
/*  <-- begin debug cmnt
if (condition) /* comment */   <-- the inner comment ends the debug cmnt here
/*  <-- restart debug cmnt
{
    ....
}
*/  <-- end debug cmnt
Instead, this would be:
#ifdef IS_DEBUGGED_SECTION_X
if (condition) /* comment */
{
....
}
#endif
Seemed like a neat idea to me. Wish I could remember the source so I could link it :(
Because then, when you search the code, you don't know whether a match is actually compiled in or out without reading the surrounding conditionals.
Because they should be used for OS/Platform dependencies, and therefore that kind of code should be in files like io_win.c or io_macos.c
My interpretation of this rule:
Your (algorithmic) program logic should not be influenced by preprocessor defines. The flow of your code should always be clear. Any other form of logic (platform, debug) should be abstractable in header files.
This is more a guideline than a strict rule, IMHO.
But I agree that c-syntax based solutions are preferred over preprocessor magic.
The conditional compilation is hard to debug. One has to know all the settings in order to figure out which block of code the program will execute.
I once spent a week debugging a multi-threaded application that used conditional compilation. The problem was that the identifier was not spelled the same. One module used #if FEATURE_1 while the problem area used #if FEATURE1 (Notice the underscore).
I'm a big proponent of letting the makefile handle the configuration by including the correct libraries or objects. It makes the code more readable. Also, the majority of the code becomes configuration-independent, and only a few files are configuration-dependent.
A reasonable goal but not so great as a strict rule
The advice to try and keep preprocessor conditionals in header files is good, as it allows you to select interfaces conditionally but not litter the code with confusing and ugly preprocessor logic.
However, there is lots and lots and lots of code that looks like the made-up example below, and I don't think there is a clearly better alternative. I think you have cited a reasonable guideline but not a great gold-tablet-commandment.
#if defined(SOME_IOCTL)
case SOME_IOCTL:
...
#endif
#if defined(SOME_OTHER_IOCTL)
case SOME_OTHER_IOCTL:
...
#endif
#if defined(YET_ANOTHER_IOCTL)
case YET_ANOTHER_IOCTL:
...
#endif
CPP is a separate (non-Turing-complete) macro language on top of (usually) C or C++. As such, it's easy to get mixed up between it and the base language if you're not careful. That's the usual argument against macros instead of e.g. C++ templates, anyway. But #ifdef? Just try to read someone else's code you've never seen before that has a bunch of ifdefs.
e.g. try reading these Reed-Solomon multiply-a-block-by-a-constant-Galois-value functions:
http://parchive.cvs.sourceforge.net/viewvc/parchive/par2-cmdline/reedsolomon.cpp?revision=1.3&view=markup
If you didn't have the following hint, it would take you a while to figure out what's going on: there are two versions, one simple, and one with a pre-computed lookup table (LONGMULTIPLY). Even so, have fun tracing the #if BYTE_ORDER == __LITTLE_ENDIAN. I found it a lot easier to read when I rewrote that bit to use a le16_to_cpu function (whose definition was inside #if clauses), inspired by Linux's byteorder.h stuff.
If you need different low-level behaviour depending on the build, try to encapsulate that in low-level functions that provide consistent behaviour everywhere, instead of putting #if stuff right inside your larger functions.
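For example, a sketch of the le16_to_cpu idea, with the conditional confined to one small helper (using GCC's predefined byte-order macros):
#include <stdint.h>

/* All byte-order #if logic lives here; callers see one consistent function. */
static inline uint16_t le16_to_cpu(uint16_t v) {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return (uint16_t)((v >> 8) | (v << 8));  /* byte-swap on big-endian hosts */
#else
    return v;  /* little-endian: already in CPU order */
#endif
}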
By all means, favor abstraction over conditional compilation. As anyone who has written portable software can tell you, however, the number of environmental permutations is staggering. Some design discipline can help, but sometimes the choice is between elegance and meeting a schedule. In such cases, a compromise might be necessary.
Consider the situation where you are required to provide fully tested code, with 100% branch coverage etc. Now add in conditional compilation.
Each unique symbol used to control conditional compilation doubles the number of code variants you need to test. So, one symbol - you have two variants. Two symbols, you now have four different ways to compile your code. And so on.
And this only applies for boolean tests such as #ifdef. You can easily imagine the problem if a test is of the form #if VARIABLE == SCALAR_VALUE_FROM_A_RANGE.
If your code will be compiled with different C compilers, and you use compiler-specific features, then you may need to determine which predefined macros are available.
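A typical sketch of that kind of detection (MY_FORCEINLINE is a made-up name):
#if defined(_MSC_VER)
#define MY_FORCEINLINE __forceinline
#elif defined(__GNUC__)        /* GCC, and compilers masquerading as it */
#define MY_FORCEINLINE __attribute__((always_inline)) inline
#else
#define MY_FORCEINLINE inline  /* fall back to the standard hint */
#endif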
It's true that #if/#endif complicates reading the code. However, I have seen a lot of real-world code that uses it without issues and is still going strong. There may be better ways to avoid #if/#endif, but using it is not that bad if proper care is taken.

Large C macros. What's the benefit?

I've been working with a large codebase written primarily by programmers who no longer work at the company. One of the programmers apparently had a special place in his heart for very long macros. The only benefit I can see to using macros is being able to write functions that don't need to be passed in all their parameters (which is recommended against in a best practices guide I've read). Other than that I see no benefit over an inline function.
Some of the macros are so complicated I have a hard time imagining someone even writing them. I tried creating one in that spirit and it was a nightmare. Debugging is extremely difficult, as it collapses N+ lines of code into one in the debugger (e.g. there was a segfault somewhere in this large block of code; good luck!). I had to actually pull the macro out and run it un-macro-tized to debug it. The only way I could see the person having written these is by automatically generating them out of code written in a function after he had debugged it (or by being smarter than me and writing it perfectly the first time, which is always possible I guess).
Am I missing something? Am I crazy? Are there debugging tricks I'm not aware of? Please fill me in. I would really like to hear from the macro-lovers in the audience. :)
To me the best use of macros is to compress code and reduce errors. The downside is obviously in debugging, so they have to be used with care.
I tend to think that if the resulting code isn't an order of magnitude smaller and less prone to errors (meaning the macros take care of some bookkeeping details) then it wasn't worth it.
In C++, many uses like this can be replaced with templates, but not all. A simple example of macros that are useful is the event-handler macros in MFC; without them, creating event tables would be much harder to get right, and the code you'd have to write (and read) would be much more complex.
If the macros are extremely long, they probably make the code short but efficient. In effect, he might have used macros to explicitly inline code or remove decision points from the run-time code path.
It might be important to understand that, in the past, such optimizations weren't done by many compilers, and some things that we take for granted today, like fast function calls, weren't valid then.
To me, macros are evil. With so many side effects, and given that in C++ you can get the same performance gains with inline, they are not worth the risk.
For ex. see this short macro:
#define max(a, b) ((a)>(b)?(a):(b))
then try this call:
max(i++, j++)
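The call expands to the following, so whichever argument is larger gets incremented twice:
((i++)>(j++)?(i++):(j++))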
More. Say you have
#define PLANETS 8
#define SOCCER_MIDDLE_RIGHT 8
if an error is thrown, it will refer to '8', but not to either of its meaningful names.
I only know of two reasons for doing what you describe.
First is to force functions to be inlined. This is pretty much pointless, since the inline keyword usually does the same thing, and function inlining is often a premature micro-optimization anyway.
Second is to simulate nested functions in C or C++. This is related to your "writing functions that don't need to be passed in all their parameters" but can actually be quite a bit more powerful than that. Walter Bright gives examples of where nested functions can be useful.
There are other reasons to use of macros, such as using preprocessor-specific functionality (like including __FILE__ and __LINE__ in autogenerated error messages) or reducing boilerplate code in ways that functions and templates can't (the Boost.Preprocessor library excels here; see Boost.ScopeExit or this sample enum code for examples), but these reasons don't seem to apply for doing what you describe.
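The __FILE__/__LINE__ case is a good example of something a function can't replicate, because the expansion happens at the call site (a small sketch; LOG_ERROR is a made-up name):
#include <stdio.h>

/* A function would report its own location; the macro reports the caller's. */
#define LOG_ERROR(msg) \
    fprintf(stderr, "%s:%d: error: %s\n", __FILE__, __LINE__, (msg))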
Very long macros will have performance drawbacks, like increased compiled binary size, and there are certainly other reasons for not using them.
For the most problematic macros, I would consider running the code through the preprocessor, and replacing the macro output with function calls (inline if possible) or straight LOC. If the macros exist for compatibility with other architectures/OSes, you might be stuck though.
Part of the benefit is code replication without the eventual maintenance cost - that is, instead of copying code elsewhere you create a macro from it and only have to edit it once...
Of course, you could also just make a method to be called but that is sort of more work... I'm against much macro use myself, just trying to present a potential rationale.
There are a number of good reasons to write macros in C.
Some of the most important are for creating configuration tables using x-macros, for making function like macros that can accept multiple parameter types as inputs and converting tables from human readable/configurable/understandable values into computer used values.
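A small sketch of the x-macro technique, using a hypothetical configuration table:
/* The table is written once; each expansion of CONFIG_TABLE generates a
   different artifact from the same data, so the two stay in sync. */
#define CONFIG_TABLE(X) \
    X(BAUD_RATE, 9600)  \
    X(TIMEOUT_MS, 500)  \
    X(RETRIES, 3)

enum {  /* generate an enum of setting IDs... */
#define AS_ENUM(name, value) CFG_##name,
    CONFIG_TABLE(AS_ENUM)
#undef AS_ENUM
    CFG_COUNT
};

static const int config_defaults[] = {  /* ...and a parallel defaults array */
#define AS_VALUE(name, value) value,
    CONFIG_TABLE(AS_VALUE)
#undef AS_VALUE
};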
I can't really see a reason for people to write very long macros, except for the historical practice of forcing function inlining.
I would say that when debugging complex macros (when writing x-macros, etc.) I tend to preprocess the source file and substitute the preprocessed file for the original.
This allows you to see the C code generated, and gives you real lines to work with in the debugger.
I don't use macros at all. Inline functions serve every useful purpose a macro can. Macros allow you to do very weird and counterintuitive things, like assembling identifiers from pieces (how does someone search for the identifier then?).
I have also worked on a product where a legacy programmer (who thankfully is long gone) also had a special love affair with macros. His 'custom' scripting language is the height of sloppiness. This was compounded by the fact that he wrote his C++ classes in C, meaning all class functions and variables were public. Anyway, he wrote almost everything in macros and variadic functions (another hideous monstrosity foisted on the world). So instead of writing a proper template class he would use a macro instead! He also resorted to macros to create factory classes, instead of normal code... His code is pretty much unmaintainable.
From what I have seen, macros can be used when they are small, are used declaratively, and don't contain moving parts like loops and other program-flow expressions. It's OK if the macro is one or at most two lines long and it declares an instance of something: something that won't break during runtime. Also, macros should not contain class definitions or function definitions. If the macro contains code that needs to be stepped into with a debugger, then the macro should be removed and replaced with something else.
They can also be useful for wrapping custom tracing/debugging functionality, for instance when you want custom tracing in debug builds but not release builds.
Anyway, when you are working in legacy code like that, just be sure to remove the macro mess a bit at a time. If you keep it up, eventually you will remove them all and make life a bit easier for yourself. I have done this in the past with especially messy macros. What I do is turn on the compiler switch to have the preprocessor generate an output file (e.g. -E with GCC). Then I raid that file, copy the code, re-indent it, and replace the macro with the generated code. Thank goodness for that compiler feature.
Some of the legacy code I've worked with used macros very extensively in the place of methods. The reasoning was that the computer/OS/runtime had an extremely small stack, so that stack overflows were a common problem. Using macros instead of methods meant that there were fewer methods on the stack.
Luckily, most of that code was obsolete, so it is (mostly) gone now.
C89 did not have inline functions. If using a compiler with extensions disabled (which is a desirable thing to do for several reasons), then the macro might be the only option.
Although C99 came out in 1999, there was resistance to it for a long time; commercial compiler vendors didn't feel it was worth their time to implement C99. Some (e.g. MS) still haven't. So for many companies it was not a viable practical decision to use C99 conforming mode, even up to today in the case of some compilers.
I have used C89 compilers that did have an extension for inline functions, but the extension was buggy (e.g. multiple definition errors when there should not be), things like that may dissuade a programmer from using inline functions.
Another thing is that the macro version effectively forces the function to actually be inlined. The C99 inline keyword is only a compiler hint, and the compiler may still decide to generate a single instance of the function code which is linked like a non-inline function. (One compiler that I still use will do this if the function is not trivial and returns void.)
