Is it possible to check the value of a defined macro using the C preprocessor?

Let's say I have a #define RTR r2 statement. Is it possible to check the value of the RTR macro? I'm looking for something like this:
#if RTR == r1 || RTR == r2
It is router1 or router2!
#endif
I guess this is not possible...

Let's have a look at GCC documentation:
[In '#if'] expression is a C expression of integer type, subject to stringent restrictions. It may contain
[...]
Macros. All macros in the expression are expanded before actual computation of the expression's value begins.
Uses of the defined operator, which lets you check whether macros are defined in the middle of an ‘#if’.
Identifiers that are not macros, which are all considered to be the number zero. This allows you to write #if MACRO instead of #ifdef MACRO, if you know that MACRO, when defined, will always have a nonzero value. Function-like macros used without their function call parentheses are also treated as zero.
So, according to the last point, unless r1 and r2 are macros (or integer constants) themselves in your example, the condition
#if RTR == r1 || RTR == r2
is equivalent to
#if RTR == 0 || RTR == 0
which I guess isn't the desired behaviour. For this to work, you should assign RTR an integer constant value (or an expression that evaluates to an integer constant at compile time).
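For example, a minimal sketch of that approach (the values 1 and 2 are illustrative):
#define r1 1
#define r2 2
#define RTR r2
#if RTR == r1 || RTR == r2
/* It is router1 or router2! */
#endif
Since r1 and r2 now expand to distinct integer constants, the #if comparison works as intended.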
BTW, you should be very careful when giving a preprocessor macro such a short name as RTR, as it's very easy to clash with something.

Related

Working of conditional compilation #if and #else (and others) in C

I tried to write a program using some conditional compilation pre-processing directives instead of "if-else" as follows.
#include <stdio.h>
int main(void)
{
    int x;
    scanf("%d", &x);
#if (x==5)
    printf("x is 5");
#else
    printf("x not 5");
#endif
}
But the thing is, it always prints the else part even when the value of x is 5. My simple question is: why?
Is it possible to make this program work as intended (i.e., take the value of x from the user, check the condition using the #if directive, and print the statement under #if)?
During compilation it shows a warning: "'x' is not defined, evaluates to 0". But x seems defined to me. Does that mean x should be defined using #define? Please explain the concept behind conditional compilation.
x is not an integer literal or an integer literal expression (integer literals + operators) or a macro expanding to those, so in a conditional, the preprocessor replaces it with 0 (6.10.1p4). 0==5 is false, so the #else branch is taken.
The preprocessor doesn't know about C declarations, types and such. It only works with tokens (and macros that ultimately expand to those).
6.10.1p4:
After all replacements due to macro expansion and the defined unary operator have been performed, all remaining identifiers (including those lexically identical to keywords) are replaced with the pp-number 0, and then each preprocessing token is converted into a token.
Preprocessing takes place before compilation, so the preprocessor does not know anything about your C code or variables. You can't use any C variables in #if conditions.
Conditional compilation is meant for different purposes, for example compiling debug code in or out:
#define DEBUG
/* ....*/
#ifdef DEBUG
printf("Some debug value %d\n", val);
#endif
Operands in #if directives can only be constants, things defined with #define, and the special defined operator. Any other identifiers in the expression are replaced with 0. The x in your sample code is not defined with #define, so (x==5) becomes (0==5).
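For illustration, here's a sketch of how the #if form does work when the value is a build-time macro (the X macro is hypothetical; it could also come from the compiler command line, e.g. -DX=5):
#define X 5
#if (X==5)
/* this branch is kept by the preprocessor */
#endif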
In the C 2018 standard, clause 6.10.1 tells us that evaluation of the expression in an #if statement includes:
Preprocessor macros (things defined with #define) are replaced according to their definitions.
Uses of the defined operator are replaced with 0 or 1.
Any remaining identifiers are replaced with 0.
Because the x in your sample code is not defined with #define, it is replaced with 0 in the #if statement. This results in (0==5), which is false, so code between the #if and the #else is skipped.
In a preprocessor statement, you cannot evaluate variables based on values that will be set during program execution.
It's the "pre-processor". "Pre" means "before".
You're trying to use a runtime value during preprocessing! The preprocessor of course has no access to that information during the build.
This problem isn't limited to runtime values, but is more fundamental. Even if you were trying to use a (named) compile-time constant such as constexpr int x = 2, you couldn't do that. These are two languages interleaving, like generating HTML with PHP; the HTML has no knowledge of PHP variables, and the PHP has no knowledge of what widgets the user clicks on the page. These are completely different execution contexts with no built-in interaction or cross-compatibility.
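For completeness, here's a minimal sketch of the runtime fix: since x is only known while the program runs, an ordinary if statement is the right tool.
#include <stdio.h>
int main(void)
{
    int x;
    if (scanf("%d", &x) != 1)   /* read x at run time */
        return 1;
    if (x == 5)                 /* a runtime check, not #if */
        printf("x is 5");
    else
        printf("x not 5");
    return 0;
}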

C Preprocessor #if handling of non-integer constant

I have the following code snippet to allow me flip easily between double and float representations of floating point values:
#define FLOATINGPOINTSIZE 64
#if FLOATINGPOINTSIZE == 64
typedef double FP_TYPE;
#define FP_LIT_SUFFIX
#else
typedef float FP_TYPE;
#define FP_LIT_SUFFIX f
#endif
At another location I had written the following:
/* set floating point limits used when initialising values that will be subject
* to the MIN() MAX() functions
*
* uses values from float.h */
#if FP_TYPE == double
#define FPTYPE_MAX DBL_MAX
#define FPTYPE_MIN DBL_MIN
#else
#define FPTYPE_MAX FLT_MAX
#define FPTYPE_MIN FLT_MIN
#endif
whereas I think I should have written:
#if FLOATINGPOINTSIZE == 64
I have the -Wall compiler setting to give me plenty of warnings, but this didn't get flagged up as an issue. Possibly -Wall is completely independent of the preprocessor, though?
My question is how is the preprocessor interpreting:
#if FP_TYPE == double
The meaning is obvious to the programmer, but I'm not sure what the preprocessor makes of it?
It's got to be a bug, right?
I have the -Wall compiler setting to give me plenty of warnings, but this didn't get flagged up as an issue.
The code is valid, but you are right to be concerned.
My question is how is the preprocessor interpreting:
#if FP_TYPE == double
Good question.
The meaning is obvious to the programmer, but I'm not sure what the preprocessor makes of it?
The intended meaning seems obvious, and as the code's author, you know what you meant. But what you appear to have intended is indeed not how the preprocessor interprets that conditional.
The expression in a preprocessor conditional is interpreted as an integer constant expression. Just like in a C if statement, if the expression evaluates to 0 then the condition is considered false, and otherwise it is considered true. All macros in the expression are expanded before it is evaluated, and any remaining identifiers are replaced with 0. Details are presented in section 6.10.1 of the standard.
Supposing that there is no in-scope definition of a macro named either FP_TYPE or double (and a typedef is not a macro definition), your conditional is equivalent to
#if 0 == 0
, which is always true.
It's got to be a bug, right?
The preprocessing result will not be what you intended, so it's a bug in your code. The compiler, on the other hand, is correct to accept it.
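A minimal sketch of the corrected guard, reusing the integer macro already defined (DBL_MAX and friends come from float.h, as the original comment notes):
#include <float.h>
#if FLOATINGPOINTSIZE == 64
#define FPTYPE_MAX DBL_MAX
#define FPTYPE_MIN DBL_MIN
#else
#define FPTYPE_MAX FLT_MAX
#define FPTYPE_MIN FLT_MIN
#endif
Because FLOATINGPOINTSIZE expands to the integer constant 64, the preprocessor can evaluate the comparison meaningfully.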
The meaning is obvious to the programmer, but I'm not sure what the preprocessor makes of it?
It's got to be a bug, right?
It's a bug from a user's point of view but it is not a bug in the preprocessor.
#if FP_TYPE == double
is interpreted as
#if 0 == 0
since neither FP_TYPE nor double is a macro known to the preprocessor.
From https://gcc.gnu.org/onlinedocs/cpp/If.html#If:
Identifiers that are not macros, which are all considered to be the number zero. This allows you to write #if MACRO instead of #ifdef MACRO, if you know that MACRO, when defined, will always have a nonzero value. Function-like macros used without their function call parentheses are also treated as zero.

Nitpicking booleans in C

I was reading comp.lang.c's description of boolean values, pre-C99. It mentions that some people prefer to define their own boolean values as:
#define TRUE (1==1)
#define FALSE (!TRUE)
However, the standard defines the equality operator to yield a signed int with value 1 when the operands compare equal (C11 6.5.9), and the logical negation operator to yield an int with value 0 if its operand compares unequal to 0 (C11 6.5.3.3).
If this is the case and the above definitions use literals, won't the evaluation happen at compile time, making the resulting definitions equivalent to:
#define TRUE (1)
#define FALSE (0)
And a follow-up question. Is there any case where it makes sense to define the true and false labels to anything other than 1 and 0, respectively?
And pardon that I reference C11 when my question concerns C89 but I only have the C11 standard at hand.
(1==1) and (!TRUE) are useful definitions on some compilers (I don't have a concrete example off the top of my head) that track whether an integer came from a boolean comparison. This enables them to warn for
if (i)
while at the same time not warning for
if (i != 0)
and also not warning for
j = i != 0;
if (j)
even though in all three cases, the conditional is a non-constant int.
This way, no warning would be generated for int b = TRUE;...if (b), since b would be considered a truth-integer.
You can make a legitimate argument that such warnings are useless, but others can make an equally legitimate argument that such warnings do have use. It will have many false positives in common code, but it may make code more readable if it is written in a way that avoids such warnings.
At the same time, such definitions are harmless for other compilers that do not track this, since they just see constant expressions that evaluate to 1 and 0.
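As a quick sketch (a hypothetical test program), you can verify that both macros behave as ordinary int constants on a conforming compiler:
#include <stdio.h>
#define TRUE (1==1)
#define FALSE (!TRUE)
int main(void)
{
    /* == yields int 1 for equal operands; ! yields int 0 for a nonzero operand */
    printf("TRUE = %d, FALSE = %d\n", TRUE, FALSE);   /* prints TRUE = 1, FALSE = 0 */
    return 0;
}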


What is ":-!!" in C code?

I bumped into this strange macro code in /usr/include/linux/kernel.h:
/* Force a compilation error if condition is true, but also produce a
result (of value 0 and type size_t), so the expression can be used
e.g. in a structure initializer (or where-ever else comma expressions
aren't permitted). */
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
#define BUILD_BUG_ON_NULL(e) ((void *)sizeof(struct { int:-!!(e); }))
What does :-!! do?
This is, in effect, a way to check whether the expression e can be evaluated to be 0, and if not, to fail the build.
The macro is somewhat misnamed; it should be something more like BUILD_BUG_OR_ZERO, rather than ...ON_ZERO. (There have been occasional discussions about whether this is a confusing name.)
You should read the expression like this:
sizeof(struct { int: -!!(e); })
(e): Compute expression e.
!!(e): Logically negate twice: 0 if e == 0; otherwise 1.
-!!(e): Numerically negate the expression from step 2: 0 if it was 0; otherwise -1.
struct{int: -!!(0);} --> struct{int: 0;}: If it was zero, then we declare a struct with an anonymous integer bitfield that has width zero. Everything is fine and we proceed as normal.
struct{int: -!!(1);} --> struct{int: -1;}: On the other hand, if it isn't zero, then it will be some negative number. Declaring any bitfield with negative width is a compilation error.
So we'll either wind up with a zero-width bitfield in the struct, which is fine, or a negative-width bitfield, which is a compilation error. Then we take sizeof that struct, so we get a size_t with the appropriate value (which will be zero in the case where e is zero).
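A hedged usage sketch (GCC-flavored: strictly conforming C requires a struct to contain at least one named member, so this relies on the compiler accepting the kernel's idiom):
#define BUILD_BUG_ON_ZERO(e) (sizeof(struct { int:-!!(e); }))
/* contributes 0 to the initializer, but would break the build if the condition were true */
static const unsigned long ok = 0 + BUILD_BUG_ON_ZERO(sizeof(int) < 2);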
Some people have asked: Why not just use an assert?
keithmo's answer here has a good response:
These macros implement a compile-time test, while assert() is a run-time test.
Exactly right. You don't want to detect problems in your kernel at runtime that could have been caught earlier! It's a critical piece of the operating system. To whatever extent problems can be detected at compile time, so much the better.
The : declares a (here unnamed) bitfield. As for !!, that is double logical negation and so returns 0 for false or 1 for true. And the - is a minus sign, i.e. arithmetic negation.
It's all just a trick to get the compiler to barf on invalid inputs.
Consider BUILD_BUG_ON_ZERO. When -!!(e) evaluates to a negative value, that produces a compile error. Otherwise -!!(e) evaluates to 0, and a struct containing only a zero-width bitfield has size 0, so the macro evaluates to a size_t with value 0.
The name is weak in my view because the build in fact fails when the input is not zero.
BUILD_BUG_ON_NULL is very similar, but yields a pointer rather than an int.
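And a hedged sketch of the pointer variant (illustrative only): when the check passes, the macro expands to a null pointer constant, so it can appear in pointer contexts:
#define BUILD_BUG_ON_NULL(e) ((void *)sizeof(struct { int:-!!(e); }))
/* a null pointer if the condition is false; a compile error otherwise */
static void *p = BUILD_BUG_ON_NULL(sizeof(char) != 1);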
Some people seem to be confusing these macros with assert().
These macros implement a compile-time test, while assert() is a runtime test.
Well, I am quite surprised that the alternatives to this syntax have not been mentioned. Another common (but older) mechanism is to call a function that isn't defined and rely on the optimizer to compile the function call out if your assertion is correct.
#define MY_COMPILETIME_ASSERT(test) \
do { \
extern void you_did_something_bad(void); \
if (!(test)) \
you_did_something_bad(); \
} while (0)
While this mechanism works (as long as optimizations are enabled), it has the downside of not reporting an error until you link, at which time it fails to find the definition for the function you_did_something_bad(). That's why kernel developers started using tricks like negative bit-field widths and negative-sized arrays (the latter of which stopped breaking builds in GCC 4.4).
In sympathy for the need for compile-time assertions, GCC 4.3 introduced the error function attribute that allows you to extend upon this older concept, but generate a compile-time error with a message of your choosing -- no more cryptic "negative sized array" error messages!
#define MAKE_SURE_THIS_IS_FIVE(number) \
do { \
extern void this_isnt_five(void) __attribute__((error( \
"I asked for five and you gave me " #number))); \
if ((number) != 5) \
this_isnt_five(); \
} while (0)
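A usage sketch (the calls must appear inside a function body, and the success path relies on optimization eliminating the dead branch):
MAKE_SURE_THIS_IS_FIVE(5);   /* compiles: the call to this_isnt_five() is optimized away */
MAKE_SURE_THIS_IS_FIVE(4);   /* build error: "I asked for five and you gave me 4" */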
In fact, as of Linux 3.9, we now have a macro called compiletime_assert which uses this feature, and most of the macros in bug.h have been updated accordingly. Still, this macro can't be used as an initializer. However, using statement expressions (another GCC C extension), you can!
#define ANY_NUMBER_BUT_FIVE(number) \
({ \
typeof(number) n = (number); \
extern void this_number_is_five(void) __attribute__(( \
error("I told you not to give me a five!"))); \
if (n == 5) \
this_number_is_five(); \
n; \
})
This macro will evaluate its parameter exactly once (in case it has side-effects) and create a compile-time error that says "I told you not to give me a five!" if the expression evaluates to five or is not a compile-time constant.
So why aren't we using this instead of negative-sized bit-fields? Alas, there are currently many restrictions on the use of statement expressions, including their use as constant initializers (for enum constants, bit-field widths, etc.), even if the statement expression is completely constant itself (i.e., can be fully evaluated at compile time and otherwise passes the __builtin_constant_p() test). Further, they cannot be used outside of a function body.
Hopefully, GCC will amend these shortcomings soon and allow constant statement expressions to be used as constant initializers. The challenge here is the language specification defining what is a legal constant expression. C++11 added the constexpr keyword for just this type of thing, but no counterpart exists in C11. While C11 did get static assertions, which will solve part of this problem, it won't solve all of these shortcomings. So I hope that GCC can make constexpr functionality available as an extension via -std=gnu99 and -std=gnu11 or some such, and allow its use on statement expressions et al.
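For reference, here is a minimal sketch of the C11 facility mentioned above:
/* C11 compile-time assertion; fails the build with the given message */
_Static_assert(sizeof(int) >= 2, "int is too small");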
It's creating a zero-width bitfield if the condition is false, but a width -1 (-!!1) bitfield if the condition is true/nonzero. In the former case there is no error, and the struct gets an anonymous int bitfield. In the latter case there is a compile error (and no such thing as a width -1 bitfield is created, of course).
