Macro vs Function in C

I often see instances in which using a macro is better than using a function.
Could someone explain to me, with an example, the disadvantages of a macro compared to a function?

Macros are error-prone because they rely on textual substitution and do not perform type-checking. For example, this macro:
#define square(a) a * a
works fine when used with an integer:
square(5) --> 5 * 5 --> 25
but does very strange things when used with expressions:
square(1 + 2) --> 1 + 2 * 1 + 2 --> 1 + 2 + 2 --> 5
square(x++) --> x++ * x++ --> increments x twice
Putting parentheses around arguments helps but doesn't completely eliminate these problems.
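For example, a fully parenthesized variant (a common sketch of that fix) handles the precedence problem but still evaluates its argument twice:
#define square(a) ((a) * (a))
square(1 + 2) --> ((1 + 2) * (1 + 2)) --> 9
square(x++)   --> ((x++) * (x++))     --> still increments x twice (undefined behavior)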
When macros contain multiple statements, you can get in trouble with control-flow constructs:
#define swap(x, y) t = x; x = y; y = t;
if (x < y) swap(x, y); -->
if (x < y) t = x; x = y; y = t; --> if (x < y) { t = x; } x = y; y = t;
The usual strategy for fixing this is to put the statements inside a "do { ... } while (0)" block, which executes exactly once and behaves as a single statement.
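A minimal sketch of that fix (it still assumes a temporary t is in scope, as in the original macro):
#define swap(x, y) do { t = (x); (x) = (y); (y) = t; } while (0)
if (x < y) swap(x, y); /* now swaps only when x < y, as intended */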
If you have two structures that happen to contain a field with the same name but different semantics, the same macro might work on both, with strange results:
struct shirt
{
    int numButtons;
};

struct webpage
{
    int numButtons;
};
#define num_button_holes(shirt) ((shirt).numButtons * 4)
struct webpage page;
page.numButtons = 2;
num_button_holes(page) -> 8
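For comparison, a sketch of an inline-function version; the compiler would then reject the webpage struct with a type error instead of silently computing nonsense:
static inline int num_button_holes(const struct shirt *s) { return s->numButtons * 4; }
/* num_button_holes(&page) --> compile error: incompatible pointer type */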
Finally, macros can be difficult to debug: they produce weird syntax errors or runtime errors that you have to expand to understand (e.g. with gcc -E), and debuggers cannot step through macros. For example:
#define print(x, y) printf(x y) /* accidentally forgot comma */
print("foo %s", "bar") /* prints "foo %sbar" */
Inline functions and constants help to avoid many of these problems with macros, but aren't always applicable. Where macros are deliberately used to specify polymorphic behavior, unintentional polymorphism may be difficult to avoid. C++ has a number of features such as templates to help create complex polymorphic constructs in a typesafe way without the use of macros; see Stroustrup's The C++ Programming Language for details.

Macro features:
A macro is preprocessed
No type checking
Code length increases
Use of a macro can lead to side effects
Speed of execution is faster
Before compilation, the macro name is replaced by the macro body
Useful where the same small piece of code appears many times
A macro is not checked for compile errors
Function features:
A function is compiled
Type checking is done
Code length remains the same
No side effects
Speed of execution is slower
During a function call, transfer of control takes place
Useful where the same large piece of code appears many times
A function is checked for compile errors
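To make the contrast concrete (a small sketch): the macro is pure textual substitution, while the inline function is type-checked and evaluates its argument once, usually compiling to the same machine code:
#define SQUARE_MACRO(x) ((x) * (x))                       /* textual substitution, no type check */
static inline int square_func(int x) { return x * x; }    /* type-checked, argument evaluated once */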

Side-effects are a big one. Here's a typical case:
#define min(a, b) (a < b ? a : b)
min(x++, y)
gets expanded to:
(x++ < y ? x++ : y)
x gets incremented twice in the same statement, which is almost certainly not what the caller intended (and with macros that evaluate an argument twice without an intervening sequence point, such as square(x++), the result is undefined behavior).
Writing multi-line macros is also a pain:
#define foo(a,b,c) \
a += 10; \
b += 10; \
c += 10;
They require a \ at the end of each line.
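Wrapping the body in do { ... } while (0), as noted earlier, makes it expand to a single statement that also requires the trailing semicolon (a sketch):
#define foo(a, b, c) do { \
    (a) += 10;            \
    (b) += 10;            \
    (c) += 10;            \
} while (0)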
Macros can't "return" anything unless you make it a single expression:
int foo(int *a, int *b){
    side_effect0();
    side_effect1();
    return a[0] + b[0];
}
Can't do that in a macro unless you use GCC's statement expressions. (EDIT: You can use a comma operator though... overlooked that... But it might still be less readable.)
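For reference, the comma-operator version might look like this (a sketch; it stays a single expression but is arguably harder to read):
#define foo(a, b) (side_effect0(), side_effect1(), (a)[0] + (b)[0])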
Order of Operations: (courtesy of #ouah)
#define min(a,b) (a < b ? a : b)
min(x & 0xFF, 42)
gets expanded to:
(x & 0xFF < 42 ? x & 0xFF : 42)
But & has lower precedence than <. So 0xFF < 42 gets evaluated first.
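Parenthesizing every use of the arguments fixes the precedence issue (though not the double evaluation):
#define min(a, b) ((a) < (b) ? (a) : (b))
min(x & 0xFF, 42) --> ((x & 0xFF) < (42) ? (x & 0xFF) : (42))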

When in doubt, use functions (or inline functions).
However, the answers here mostly explain the problems with macros, instead of taking the simple view that macros are evil because silly accidents are possible. You can be aware of the pitfalls and learn to avoid them, and then use macros only when there is a good reason to.
There are certain exceptional cases where macros have advantages. These include:
Generic functions: as noted below, you can have a macro that can be used on different types of input arguments.
A variable number of arguments can map to different functions instead of using C's va_args, e.g.: https://stackoverflow.com/a/24837037/432509.
They can optionally include local info, such as debug strings (__FILE__, __LINE__, __func__), check pre/post conditions, assert on failure, or even static-assert so the code won't compile on improper use (mostly useful for debug builds).
Inspecting input arguments: you can run tests on input arguments, such as checking their type or sizeof, check that struct members are present before casting (useful for polymorphic types), or check that an array meets some length condition. See: https://stackoverflow.com/a/29926435/432509
While it's noted that functions do type checking, C will coerce values too (ints/floats, for example). In rare cases this may be problematic. It's possible to write macros which are more exacting than a function about their input arguments. See: https://stackoverflow.com/a/25988779/432509
Their use as wrappers to functions: in some cases you may want to avoid repeating yourself, e.g. func(FOO, "FOO"); you could define a macro that expands the string for you: func_wrapper(FOO);
When you want to manipulate variables in the caller's local scope: passing a pointer to a pointer works just fine normally, but in some cases it's still less trouble to use a macro (assignments to multiple variables, or per-pixel operations, are examples where you might prefer a macro over a function, though it still depends a lot on the context, since inline functions may be an option).
Admittedly, some of these rely on compiler extensions which aren't standard C. That means you may end up with less portable code, or have to #ifdef them in, so they're only used when the compiler supports them.
Avoiding multiple argument instantiation
Noting this since it's one of the most common causes of errors in macros (passing in x++ for example, where a macro may increment it multiple times).
It's possible to write macros that avoid side effects from multiple instantiation of arguments.
C11 Generic
If you'd like a square macro that works with various types and you have C11 support, you could do this...
inline float _square_fl(float a) { return a * a; }
inline double _square_dbl(double a) { return a * a; }
inline int _square_i(int a) { return a * a; }
inline unsigned int _square_ui(unsigned int a) { return a * a; }
inline short _square_s(short a) { return a * a; }
inline unsigned short _square_us(unsigned short a) { return a * a; }
/* ... long, char ... etc */
#define square(a) \
    _Generic((a), \
        float: _square_fl(a), \
        double: _square_dbl(a), \
        int: _square_i(a), \
        unsigned int: _square_ui(a), \
        short: _square_s(a), \
        unsigned short: _square_us(a))
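Usage then dispatches on the argument's type, for example:
int i = square(3);        /* calls _square_i */
double d = square(2.5);   /* calls _square_dbl */
float f = square(1.5f);   /* calls _square_fl */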
Statement expressions
This is a compiler extension supported by GCC, Clang, EKOPath & Intel C++ (but not MSVC):
#define square(a_) __extension__ ({ \
    typeof(a_) a = (a_); \
    (a * a); })
So the disadvantage of such macros is that you need to know about these techniques to begin with, and they aren't as widely supported.
One benefit is, in this case, you can use the same square function for many different types.

Example 1:
#define SQUARE(x) ((x)*(x))
int main() {
    int x = 2;
    int y = SQUARE(x++); // Undefined behavior even though it doesn't look
                         // like it here
    return 0;
}
whereas:
int square(int x) {
    return x * x;
}
int main() {
    int x = 2;
    int y = square(x++); // fine
    return 0;
}
Example 2:
struct foo {
    int bar;
};
#define GET_BAR(f) ((f)->bar)
int main() {
    struct foo f;
    int a = GET_BAR(&f); // fine
    int b = GET_BAR(&a); // error, but the message won't make much sense unless you
                         // know what the macro does
    return 0;
}
Compared to:
struct foo {
    int bar;
};
int get_bar(struct foo *f) {
    return f->bar;
}
int main() {
    struct foo f;
    int a = get_bar(&f); // fine
    int b = get_bar(&a); // error, but compiler complains about passing int* where
                         // struct foo* should be given
    return 0;
}

There is no type checking of parameters, and the code is repeated at every use, which can lead to code bloat. The macro syntax can also lead to any number of weird edge cases where semicolons or operator precedence get in the way. Here's a link that demonstrates some macro evil.

One drawback of macros is that debuggers read the source code, in which macros are not expanded, so running a debugger inside a macro is not particularly useful. Needless to say, you cannot set a breakpoint inside a macro like you can with functions.

Functions do type checking. This gives you an extra layer of safety.

Adding to this answer..
Macros are substituted directly into the program by the preprocessor (since they basically are preprocessor directives), so their code is duplicated at every use and they inevitably take more memory than a corresponding function. On the other hand, a function requires extra time to be called and to return its result, and this overhead can be avoided by using macros.
Also, macros have some special tools that can help with program portability across different platforms.
Macros don't need to be assigned a data type for their arguments, in contrast with functions.
Overall, they are a useful tool in programming, and both macros and functions can be used depending on the circumstances.

I did not notice, in the answers above, one advantage of functions over macros that I think is very important:
Functions can be passed as arguments, macros cannot.
Concrete example: You want to write an alternate version of the standard 'strpbrk' function that will accept, rather than an explicit list of characters to search for within another string, a (pointer to a) function that will return 0 until a character is found that passes some test (user-defined). One reason you might want to do this is so that you can exploit other standard library functions: instead of providing an explicit string full of punctuation, you could pass ctype.h's 'ispunct' instead, etc. If 'ispunct' was implemented only as a macro, this wouldn't work.
There are lots of other examples. For example, if your comparison is accomplished by macro rather than function, you can't pass it to stdlib.h's 'qsort'.
An analogous situation in Python is 'print' in version 2 vs. version 3 (non-passable statement vs. passable function).
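A rough sketch of the strpbrk-style idea described above (the name strpbrk_fn and its exact signature are made up for illustration):
#include <ctype.h>
#include <stddef.h>

/* like strpbrk, but takes a predicate instead of a list of characters */
char *strpbrk_fn(const char *s, int (*pred)(int)) {
    for (; *s != '\0'; s++)
        if (pred((unsigned char)*s))
            return (char *)s;
    return NULL;
}

/* usage: char *p = strpbrk_fn("hello, world", ispunct); */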

If you pass a function call as an argument to a macro, it may be evaluated multiple times.
For example, if you call one of the most popular macros:
#define MIN(a,b) ((a)<(b) ? (a) : (b))
like that
int min = MIN(functionThatTakeLongTime(1),functionThatTakeLongTime(2));
functionThatTakeLongTime will be evaluated three times (twice in the comparison and once more for the selected result), which can significantly hurt performance.
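An inline function avoids this, since each argument is evaluated exactly once (a sketch):
static inline int min_int(int a, int b) { return a < b ? a : b; }
int min = min_int(functionThatTakeLongTime(1), functionThatTakeLongTime(2)); /* two calls total */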

Related

Macro in C to call a function returning integer and then return a string

I have a function which returns an integer value. Now I want to write a macro which calls this function, gets the return value, prepends a string to it, and returns the resulting string.
I have tried this:
#define TEST(x) is_enabled(x)
I call this macro in the main function as:
int ret = 0;
ret = TEST(2);
printf("PORT-%d\n", ret);
This works perfectly. However, I want the macro to return the string PORT-x, where x is the return value of the called function. How can I do this?
EDIT :
I also tried writing it into multiple lines as:
#define TEST(x)\
{\
is_enabled(x);\
}
And called it in the main function as:
printf("PORT-%d\n", TEST(2));
But this gives a compile time error:
error: expected expression before '{' token
Use a function, not a macro. There is no good reason to use a macro here.
You can solve it by using sprintf(3), in conjunction with malloc or a buffer. See Creating C formatted strings (not printing them) or the man pages for details.
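For instance, a plain helper function along these lines would do it (a sketch; the name port_str is made up, it assumes is_enabled() is declared elsewhere, and the caller supplies the buffer):
#include <stdio.h>

const char *port_str(int x, char *buf, size_t len) {
    snprintf(buf, len, "PORT-%d", is_enabled(x));
    return buf;
}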
About your edit: You don't need to use braces {} in a macro, and they are causing your error as preprocessing would translate it to something like
printf("format%d", {
is_enabled(x);
});
To better understand macros, run gcc or clang with -E flag, or try to read this article: http://en.wikipedia.org/wiki/C_preprocessor
That's a bit of a pain since you need to ensure there's storage for the string. In all honesty, macros nowadays could be reserved for conditional compilation only.
Constants are better done with enumerated types, and macro functions are generally better as inline functions (with the knowledge that inline is a suggestion to the compiler, not a demand).
If you insist on using a macro, the storage could be done with static storage though that has problems with threads if you're using them, and delayed/multiple use of the returned string.
You could also dynamically allocate the string but then you have to free it when done, and handle out-of-memory conditions.
Perhaps the easiest way is to demand the macro user provide their own storage, along the lines of:
#include <stdio.h>
#define TEST2_STR(b,p) (sprintf(b,"PORT-%d",p),b)
int main (void) {
    char buff[20];
    puts (TEST2_STR(buff, 42));
    return 0;
}
which outputs:
PORT-42
In case the macro seems a little confusing, it makes use of the comma operator, in which the expression (a, b) evaluates both a and b, and has a result of b.
In this case, it evaluates the sprintf (which populates the buffer) then "returns" the buffer. And, even if you think you've never seen that before, you're probably wrong:
for (i = 0, j = 9; i < 10; i++, j--)
    xyzzy[i] = plugh[j];
Despite most people thinking that's a feature of for, it's very much a different construct that can be used in many different places:
int i, j, k;
i = 7, j = 4, k = 42;
while (puts("Hello, world"),sleep(1),1);
(and so on).

Can the following C macro cause problems?

I would like to create two macros. One of them will expand to a function prototype plus the function body, and the other one will expand to only the function prototype. I'm thinking of creating the following:
#ifdef SOME_CONDITION
#define MY_MACRO(prototype, content) prototype;
#else
#define MY_MACRO(prototype, content) prototype, content
#endif
As an example usage
MY_MACRO(int foo(int a, int b)
,
{
return a + b;
}
)
These macros seem to work fine. Do you think those macros are safe enough that they will work for every kind of C code as intended? Or do you see any pitfalls?
The first major pitfall: it doesn't work. When the second macro is used, it creates
int foo(int a, int b), { return a + b; }
which is not a valid function definition. To fix this, you must remove the , in the macro definition.
The second pitfall I see: usually C programmers don't use such fancy macros. It's simply confusing when you're used to reading C source code.
If you're worried about diverging prototype declarations and corresponding function definitions, I suggest using appropriate compiler flags or tools. See this question and answers, How to find C functions without a prototype?
There are a lot of pitfalls; it is simply too naive, I think.
Never have macros that change the grammatical parsing, here in particular ones that add a ; at the end. Nobody will be able to comprehend code that has function-like macro invocations at file scope without a terminating semicolon.
Your macro expects exactly two arguments. If the code block in the second argument contains an unprotected , operator, you are screwed.
Your second variant should definitely not have a , on the right-hand side.
This would work a bit better
#ifdef SOME_CONDITION
#define MY_MACRO(prototype, ...) prototype
#else
#define MY_MACRO(prototype, ...) prototype __VA_ARGS__ extern double dummyForMY_MACRO[]
#endif
you'd have to use that as
MY_MACRO(int foo(int a, int b), { return a + b; });
So this provides at least something visually more close to C code (well...) and handles the problem of the intermediate commas. The unused variable dummyForMY_MACRO should cause no harm, but "eats" the ; in the second form.
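For illustration, the usage above would then expand roughly as follows, depending on SOME_CONDITION:
/* SOME_CONDITION defined:     */ int foo(int a, int b);
/* SOME_CONDITION not defined: */ int foo(int a, int b) { return a + b; } extern double dummyForMY_MACRO[];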
Not that I'd suggest that you use such a thing untested like this.
Do you think are those macros safe enough so that they will work for every kind of C code as intended? Or do you see any pitfall?
Do not attempt to re-invent the C language. The people who read your code will be other C programmers. You can expect them to know C. You cannot expect them to know "the-home-brewed-garage-hacker-macro-language".
Strive to write code that is as simple as readable as possible. Avoid complexity, avoid confusion. Don't attempt to create solutions when there exists no problem to solve.
I actually did something similar quite recently and learned the pitfalls of passing in code as a macro argument.
For example try this seemingly correct code (using the first macro definition):
MY_MACRO(int foo(int a, int b)
,
{
int c = 1, d = 2;
return a + b + c + d;
}
)
You'll most likely see a compile error to the tune of the macro expecting only 2 arguments while you provided 3.
What happens is that the macros are expanded by the preprocessor, which doesn't care about C syntax. So in its eyes, each , is an argument separator. It thinks that the first argument to the macro is int foo(int a, int b), the second argument is { int c = 1, and the third argument is d = 2; return a + b + c + d; }.
So basically, the moral of the story is: never pass in code as an argument. Most macros that want to be flexible code-wise expand into an if or a for statement (e.g. list_for_each in http://lxr.free-electrons.com/source/include/linux/list.h#L407).
I'd suggest that you stick to standard #ifdefery in your situation. e.g.
#ifndef UNIT_TEST
int foo(...) {
    //actual implementation
}
#else
int foo(...) {
    return 0;
}
#endif
This sort of code is fairly standard: http://lxr.missinglinkelectronics.com/#linux+v3.13/include/linux/of.h (search for CONFIG_OF)
Below is an example which fulfils both of your requirements:
#ifdef SOME_CONDITION
#define MY_MACRO(a, b) \
    int foo(int a, int b);
#else
#define MY_MACRO(a, b) \
    int foo(int a, int b) \
    { \
        return 0; \
    }
#endif
You can use the first variant as MY_MACRO(a, b) and the second variant as MY_MACRO(a, b);

Is this function macro safe?

Can you tell me whether anything can go wrong with this C "function macro", and if so, what?
#define foo(P,I,X) do { (P)[I] = X; } while(0)
My goal is that foo behaves exactly like the following function foofunc for any POD data type T (i.e. int, float*, struct my_struct { int a,b,c; }):
static inline void foofunc(T* p, size_t i, T x) { p[i] = x; }
For example this is working correctly:
int i = 0;
float p;
foo(&p,i++,42.0f);
It can handle things like &p because P is put in parentheses, it increments i exactly once because I appears only once in the macro, and it requires a semicolon at the end of the line because of the do {} while(0).
Are there other situations of which I am not aware of and in which the macro foo would not behave like the function foofunc?
In C++ one could define foofunc as a template and would not need the macro. But I look for a solution which works in plain C (C99).
The fact that your macro works for arbitrary X arguments hinges on the details of operator precedence. I recommend using parentheses even if they happen not to be necessary here.
#define foo(P,I,X) do { (P)[I] = (X); } while(0)
This is a statement, not an expression, so it cannot be used everywhere foofunc(P,I,X) could be. Even if foofunc returns void, it can be used in comma expressions; foo can't. But you can easily define foo as an expression, with a cast to void if you don't want to risk using the result.
#define foo(P,I,X) ((void)((P)[I] = (X)))
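For example, the expression form can participate in a comma expression where the do/while form would be a syntax error (a small sketch, repeating the definition so it stands alone):
#include <stddef.h>

#define foo(P,I,X) ((void)((P)[I] = (X)))

static void clear_all(int *p, size_t n) {
    for (size_t i = 0; i < n; foo(p, i, 0), i++)  /* foo() used inside the for clause */
        ;
}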
With a macro instead of a function, all you lose is the error checking. For example, you can write foo(3, ptr, 42) instead of foo(ptr, 3, 42). In an implementation where size_t is smaller than ptrdiff_t, using the function may truncate I, but the macro's behavior is more intuitive. The type of X may be different from the type that P points to: an automatic conversion will take place, so in effect it is the type of P that determines which typed foofunc is equivalent.
In the important respects, the macro is safe. With appropriate parentheses, if you pass syntactically reasonable arguments, you get a well-formed expansion. Since each parameter is used exactly once, all side effects will take place. The order of evaluation between the parameters is undefined either way.
The do { ... } while(0) construct protects your result from any harm, your inputs P and I are protected by () and [], respectively. What is not protected, is X. So the question is, whether protection is needed for X.
Looking at the operator precedence table (http://en.wikipedia.org/wiki/Operators_in_C_and_C%2B%2B#Operator_precedence), we see that only two operators are listed as having lower precedence than = so that the assignment could steal their argument: the throw operator (this is C++ only) and the , operator.
Now, apart from being C++ only, the throw operator is uncritical because it does not have a left hand argument that could be stolen.
The , operator, on the other hand, would be a problem if X could contain it as a top level operator. But if you parse the statement
foo(array, index, x += y, y)
you see that the , operator would be interpreted to delimit a fourth argument, and
foo(array, index, (x += y, y))
already comes with the parentheses it requires.
To make a long story short:
Yes, your definition is safe.
However, your definition relies on the fact that stuff, more_stuff cannot be passed as one macro parameter without adding parentheses. I would prefer not to rely on such intricacies, and just write the obviously safe
#define foo(P, I, X) do { (P)[I] = (X); } while(0)

Benefits of pure function

Today I was reading about pure functions and got confused about their use:
A function is said to be pure if it returns same set of values for same set of inputs and does not have any observable side effects.
e.g. strlen() is a pure function while rand() is an impure one.
#include <stdio.h>

__attribute__ ((pure)) int fun(int i)
{
    return i*i;
}

int main()
{
    int i=10;
    printf("%d",fun(i)); //outputs 100
    return 0;
}
http://ideone.com/33XJU
The above program behaves in the same way as it would without the pure declaration.
What are the benefits of declaring a function as pure [if there is no change in output]?
pure lets the compiler know that it can make certain optimisations about the function: imagine a bit of code like
for (int i = 0; i < 1000; i++)
{
    printf("%d", fun(10));
}
With a pure function, the compiler can know that it needs to evaluate fun(10) once and once only, rather than 1000 times. For a complex function, that's a big win.
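Conceptually, the compiler can hoist the call out of the loop, as if you had written:
int tmp = fun(10);   /* evaluated once, because fun is pure */
for (int i = 0; i < 1000; i++)
{
    printf("%d", tmp);
}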
When you say a function is 'pure' you are guaranteeing that it has no externally visible side-effects (and as a comment says, if you lie, bad things can happen). Knowing that a function is 'pure' has benefits for the compiler, which can use this knowledge to do certain optimizations.
Here is what the GCC documentation says about the pure attribute:
pure
Many functions have no effects except the return value and their return
value depends only on the parameters and/or global variables.
Such a function can be subject to common subexpression elimination and
loop optimization just as an arithmetic operator would be. These
functions should be declared with the attribute pure. For example,
int square (int) __attribute__ ((pure));
Philip's answer already shows how knowing a function is 'pure' can help with loop optimizations.
Here is one for common sub-expression elimination (given foo is pure):
a = foo (99) * x + y;
b = foo (99) * x + z;
Can become:
_tmp = foo (99) * x;
a = _tmp + y;
b = _tmp + z;
In addition to possible run-time benefits, a pure function is much easier to reason about when reading code. Furthermore, it's much easier to test a pure function since you know that the return value only depends on the values of the parameters.
A non-pure function
int foo(int x, int y) // possible side-effects
is like an extension of a pure function
int bar(int x, int y) // guaranteed no side-effects
in which you have, besides the explicit function arguments x, y,
the rest of the universe (or anything your computer can communicate with) as an implicit potential input. Likewise, besides the explicit integer return value, anything your computer can write to is implicitly part of the return value.
It should be clear why it is much easier to reason about a pure function than a non-pure one.
Just as an add-on, I would like to mention that C++11 codifies things somewhat using the constexpr keyword. Example:
#include <iostream>
#include <cstring>

constexpr unsigned static_strlen(const char * str, unsigned offset = 0) {
    return (*str == '\0') ? offset : static_strlen(str + 1, offset + 1);
}

constexpr const char * str = "asdfjkl;";
constexpr unsigned len = static_strlen(str); // MUST be evaluated at compile time
// so, for example, this: int arr[len]; is legal, as len is a constant.

int main() {
    std::cout << len << std::endl << std::strlen(str) << std::endl;
    return 0;
}
The restrictions on the usage of constexpr make it so that the function is provably pure. This way, the compiler can more aggressively optimize (just make sure you use tail recursion, please!) and evaluate the function at compile time instead of run time.
So, to answer your question, is that if you're using C++ (I know you said C, but they are related), writing a pure function in the correct style allows the compiler to do all sorts of cool things with the function :-)
In general, pure functions have three advantages over impure functions that the compiler can take advantage of:
Caching
Let's say that you have a pure function f that is called 100000 times. Since it is deterministic and depends only on its parameters, the compiler can calculate its value once and reuse it when necessary.
Parallelism
Pure functions don't read or write any shared memory, and can therefore run in separate threads without any unexpected consequences.
Passing By Reference
A function f(struct t) receives its argument t by value; if f is declared pure, the compiler may instead pass t by reference, since it can guarantee that t will not change, which can give performance gains.
In addition to the compile-time considerations, pure functions can be tested fairly easily: just call them.
No need to construct objects or mock connections to DBs / file system.

Anonymous functions using GCC statement expressions

This question isn't terribly specific; it's really for my own C enrichment and I hope others can find it useful as well.
Disclaimer: I know many will have the impulse to respond with "if you're trying to do FP then just use a functional language". I work in an embedded environment that needs to link to many other C libraries, and doesn't have much space for many more large shared libs and does not support many language runtimes. Moreover, dynamic memory allocation is out of the question. I'm also just really curious.
Many of us have seen this nifty C macro for lambda expressions:
#define lambda(return_type, function_body) \
    ({ \
        return_type __fn__ function_body \
        __fn__; \
    })
And an example usage is:
int (*max)(int, int) = lambda (int, (int x, int y) { return x > y ? x : y; });
max(4, 5); // Example
Using gcc -std=c89 -E test.c, the lambda expands to:
int (*max)(int, int) = ({ int __fn__ (int x, int y) { return x > y ? x : y; } __fn__; });
So, these are my questions:
What precisely does the line int (*X); declare? Of course, int * X; is a pointer to an integer, but how do these two differ?
Taking a look at the expanded macro, what on earth does the final __fn__ do? If I write a test function void test() { printf("hello"); } test; that immediately throws an error. I do not understand that syntax.
What does this mean for debugging? (I'm planning to experiment myself with this and gdb, but others' experiences or opinions would be great). Would this screw up static analyzers?
This declaration (at block scope):
int (*max)(int, int) =
    ({
        int __fn__ (int x, int y) { return x > y ? x : y; }
        __fn__;
    });
is not C but is valid GNU C.
It makes use of two gcc extensions:
nested functions
statement expressions
Both nested functions (defining a function inside a compound statement) and statement expressions (({}), basically a block that yields a value) are not permitted in C and come from GNU C.
In a statement expression, the last expression statement is the value of the construct. This is why the nested function __fn__ appears as an expression statement at the end of the statement expression. A function designator (__fn__ in the last expression statement) in a expression is converted to a pointer to a function by the usual conversions. This is the value used to initialize the function pointer max.
Your lambda macro exploits two funky features. First, it uses nested functions to actually define the body of your function (so your lambda is not really anonymous; it just uses an implicit __fn__ variable, which should be renamed to something else, as double-leading-underscore names are reserved for the implementation, so maybe something like yourapp__fn__ would be better).
All of this is itself performed within a GCC compound statement (see http://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html#Statement-Exprs), the basic format of which goes something like:
({ ...; retval; })
the last statement of the compound statement being the address of the just-declared function. Now, int (*max)(int,int) simply gets assigned the value of the compound statement, which is now the pointer to the 'anonymous' function just declared.
Debugging macros are a royal pain of course.
As for the reason why test; fails: at least here, I get "test redeclared as different type of symbol", which I assume means GCC is treating it as a declaration and not a (useless) expression. Because untyped variables default to int, and because you have already declared test as a function (essentially, void (*)(void)), you get that error. But I could be wrong about that.
This is not portable by any stretch of the imagination though.
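The portable alternative is simply a named function assigned to the pointer, e.g. (a sketch of the same example in standard C):
static int max_impl(int x, int y) { return x > y ? x : y; }
int (*max)(int, int) = max_impl;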
Partial answer:
It isn't int (*X) you are interested in; it is int (*X)(y, z). That declares X as a pointer to a function which takes (y, z) and returns int.
For debugging, this will be really hard. Most debuggers can't trace through a macro. You would most likely have to debug the assembly.
int (*max)(int, int) is the type of variable you are declaring. It is defined as a function pointer named max which returns int, and takes two ints as parameters.
__fn__ is the name of the nested function defined inside the statement expression; its address is what ends up assigned to max.
I don't have an answer there. I would imagine you can step through it if you have run it through the preprocessor.
