I would like to do the following, but the compiler doesn't like it:
unsigned short foo = 1;
// do something with foo
#if sizeof(short) * CHAR_BIT > 16
foo &= 0xffff;
#endif
I know this expression can always be fully evaluated at compile time, but maybe it's only evaluated after the preprocessor does its thing? Is this possible in ANSI C, or do I just have to do the check at run time?
You can't use sizeof in a preprocessor expression. You might want to do something like this instead:
#include <limits.h>
#if SHRT_MAX > 32767
/* do something */
#endif
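Applied to the original example, the whole thing might look like this (a sketch; since foo is unsigned short, USHRT_MAX is the relevant limit, and the wrapper function is invented for illustration):

#include <limits.h>

void truncate_foo(void)
{
    unsigned short foo = 1;
    /* do something with foo */
#if USHRT_MAX > 0xffff  /* true only where short is wider than 16 bits */
    foo &= 0xffff;      /* mask only where it is actually needed */
#endif
    (void)foo;          /* silence unused-variable warnings in this sketch */
}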
If your aim is to stop compilation when a data type is the wrong size, the following technique is useful:
struct _check_type_sizes
{
int int_is_4_bytes[(sizeof(int) == 4) ? 1 : -1];
int short_is_2_bytes[(sizeof(short) == 2) ? 1 : -1];
};
(sizeof is evaluated here by the compiler, not the preprocessor; it is an operator, not a function.)
The main disadvantage of this method is that the compiler error isn't very obvious. Make sure you write a very clear comment.
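A common refinement (a sketch, not part of the answer above) is to wrap the trick in a macro so that the condition's name shows up in the diagnostic:

#define STATIC_ASSERT(cond, name) \
    typedef char static_assert_##name[(cond) ? 1 : -1]

STATIC_ASSERT(sizeof(int) == 4, int_is_4_bytes);
STATIC_ASSERT(sizeof(short) == 2, short_is_2_bytes);

The failing typedef's name then appears in the compiler error, which makes the cause much easier to spot.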
I wanted to define a compile-time input validity checker for the library I am developing. So I thought maybe I could #define a function-like macro containing a preprocessor #if, to be compiled with GCC, something like this:
#define VALIDITY_CHECK(x) {#if (x)>10
#error "input out of range"
#endif}
But it doesn't work. So what is the proper way of writing such compile time validity checker?
You can use a trick that provokes a compile time error if a condition is not met:
#define ASSERT(condition) (void)(sizeof (struct { int:-!(condition); }))
#define x1 23
#define x2 42
void f(void) {
ASSERT(x1 < 31);
ASSERT(x1 > 31);
ASSERT(x2 < 31);
ASSERT(x2 > 31);
}
It works by using the ! operator to evaluate the condition to 1 for false and 0 for true; negating this gives a width of -1 or 0, respectively, for an anonymous bit field declared in a struct. The anonymous struct is used only as the operand of sizeof, and the result is discarded as an unused expression.
Since a negative bit field size is not allowed, the compiler will output a diagnostic message if the condition is not met.
If the condition is met, the compiler will happily optimize the unused expression away and generate no code for the line.
You can augment the ASSERT() with a comment after it, if necessary; the diagnostic message will show it.
ASSERT(x1 < 31); // Bla bla bla
You can’t have preprocessor directives in the replacement text of a #define. This is because a # there is the "stringizing" operator, and it must be followed by a macro argument.
In the #define value, you can thus have #x because x is the argument of VALIDITY_CHECK, but you can’t have #if, #error, or #endif.
You are thus forced to write the check explicitly everywhere you need it:
#if (x)>10
#error "input out of range"
#endif
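If the check is needed in many places, one way to cut the repetition is to move the directives into a small checker header and re-include it with the input defined (a hypothetical sketch; the header and macro names are invented for illustration):

/* validity_check.h -- hypothetical checker header */
#ifndef VALIDITY_INPUT
#error "define VALIDITY_INPUT before including validity_check.h"
#endif
#if (VALIDITY_INPUT) > 10
#error "input out of range"
#endif
#undef VALIDITY_INPUT

/* usage, at each call site:
#define VALIDITY_INPUT 5
#include "validity_check.h"
*/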
Why I need to figure out the smallest type of a literal (Backstory)
I've written a set of macros to create and use fifos. Macros allow for a generic, yet still very fast implementation on all systems with static memory allocation, such as in small embedded systems. The guys over at codereview did not have any major concerns with my implementation either.
The data is put into anonymous structs, and all data is accessed through the identifier of that struct. Currently the function-like macros to create these structs look like this:
#define _fff_create(_type, _depth, _id) \
struct {uint8_t read; uint8_t write; _type data[_depth];} _id = {0,0,{}}
#define _fff_create_deep(_type, _depth, _id) \
struct {uint16_t read; uint16_t write; _type data[_depth];} _id = {0,0,{}}
What I'm looking for
Now I'd like to merge both of these into one macro. To do this, I have to figure out, at compile time, the minimum size required for read and write to index _depth elements. Parameter names starting with _ indicate that only a literal or a #define value may be passed; both are known at compile time.
Thus I hope to find a macro typeof_literal(arg) which returns uint8_t if arg<256 and uint16_t otherwise.
What I've tried
GCC 4.9.2 offers an extension called typeof(). However, when used with any literal it returns an int type, which is two bytes on my system.
Another feature of GCC 4.9.2 is the statement expression: typeof(({uint8_t u8 = 1; u8;})) will correctly return uint8_t. However, I could not figure out a way to put a condition for the type in that block:
typeof(({uint8_t u8 = 1; uint16_t u16 = 1; input ? u8 : u16;})) always returns uint16_t because of the type promotion of the ?: operator
if(...) can't be used either, as any statement inside it lives in its own "lower" block
Macros can't contain #if, which makes them unusable for this comparison as well.
Can't you just leave it like that?
I realize there might not be a solution to this problem. That's OK too; the current code is just a minor inconvenience. Yet I'd like to know if there's a tricky way around this. A solution could open up new possibilities for macros in general. If you are sure this isn't possible, please explain why.
I think the building block you are looking for is __builtin_choose_expr, which is a lot like the ternary operator, but does not convert its result to a common type. With
#define CHOICE(x) __builtin_choose_expr (x, (int) 1, (short) 2)
this
printf ("%zu %zu\n", sizeof (CHOICE (0)), sizeof (CHOICE (1)));
will print
2 4
as expected.
However, as Greg Hewgill points out, C++ has better facilities for that (but they are still difficult to use).
The macro I was looking for can indeed be written with __builtin_choose_expr as Florian suggested. My solution is attached below, it has been tested and is confirmed working. Use it as you wish!
#define typeof_literal(_literal) \
typeof(__builtin_choose_expr((_literal)>0, \
__builtin_choose_expr((_literal)<=UINT8_MAX, (uint8_t) 0, \
__builtin_choose_expr((_literal)<=UINT16_MAX, (uint16_t) 0, \
__builtin_choose_expr((_literal)<=UINT32_MAX, (uint32_t) 0, (uint64_t) 0))), \
__builtin_choose_expr((_literal)>=INT8_MIN, (int8_t) 0, \
__builtin_choose_expr((_literal)>=INT16_MIN, (int16_t) 0, \
__builtin_choose_expr((_literal)>=INT32_MIN, (int32_t) 0, (int64_t) 0)))))
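With that in place, the two creation macros from the question could plausibly be merged like this (a sketch; _fff_create_auto is an invented name, and {0} replaces the original {} initializer for portability):

#define _fff_create_auto(_type, _depth, _id) \
    struct { \
        typeof_literal(_depth) read;  /* index fields sized from _depth */ \
        typeof_literal(_depth) write; \
        _type data[_depth]; \
    } _id = {0, 0, {0}}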
I've been trying this for over a week with no success.
I'm creating a logger interface between two processors, and I need help with defining automated macros.
What do I mean?
Let's say I have a logger message defined as LOGGER_MSG_ID_2 that takes two parameters, of types uint8 and uint16.
I have an enum defined as:
typedef enum{
PARAM_NONE,
PARAM_SIZE_UINT8,
PARAM_SIZE_UINT16,
PARAM_SIZE_UINT32
}paramSize_e;
So LOGGER_MSG_ID_2 will have a bitmap defined as:
#define LOGGER_MSG_ID_2_BITMAP (PARAM_SIZE_UINT16 << 2 | PARAM_SIZE_UINT8)
This bitmap is 1 byte in size and each parameter uses 2 bits, so the maximum number of parameters is 4.
Later on I have a list that defines all parameter types according to message ID:
#define ID_2_P0_TYPE uint8 // first parameter
#define ID_2_P1_TYPE uint16 // 2nd parameter
#define ID_2_P2_TYPE 0 // 3rd parameter
#define ID_2_P3_TYPE 0 // 4th parameter
As I said, I have a limitation of 4 parameters, so I would like to define all of them and let the macro decide whether to use them or not. I defined the unused ones as 0, but it can be whatever works.
I have other macros that use the bitmap to get all kinds of attributes, such as the number of parameters and the message size.
Now for the tricky part: I want to build a macro that creates the bitmap from the types. The reason is that I don't want redundancy between the bitmap and the parameter definitions.
My problem is that everything I tried failed to compile.
Eventually I would like to have a macro such as:
#define GET_ENUM_FROM_TYPE(_type)
that gives me PARAM_SIZE_UINT8, PARAM_SIZE_UINT16 or PARAM_SIZE_UINT32 according to type.
Limitations: I'm using an ARM compiler on Windows (armcl.exe) and C99. I can't use C11 _Generic.
I tried the following:
#define GET_ENUM_FROM_TYPE(_type) \
(_type == uint8) ? PARAM_SIZE_UINT8 : \
((_type == uint16) ? PARAM_SIZE_UINT16 : \
((_type == uint32) ? PARAM_SIZE_UINT32 : PARAM_NONE))
Eventually I want to use it like:
#define LOGGER_MSG_ID_2_BITMAP \
(GET_ENUM_FROM_TYPE(ID_2_P3_TYPE) << 6 | \
GET_ENUM_FROM_TYPE(ID_2_P2_TYPE) << 4 | \
GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) << 2 | \
GET_ENUM_FROM_TYPE(ID_2_P0_TYPE))
But when I use it, it doesn't compile.
I have a table of bitmaps:
uint8 paramsSizeBitmap [] = {
LOGGER_MSG_ID_1_BITMAP, /* LOGGER_MSG_ID_1 */
LOGGER_MSG_ID_2_BITMAP, /* LOGGER_MSG_ID_2 */
LOGGER_MSG_ID_3_BITMAP, /* LOGGER_MSG_ID_3 */
LOGGER_MSG_ID_4_BITMAP, /* LOGGER_MSG_ID_4 */
LOGGER_MSG_ID_5_BITMAP, /* LOGGER_MSG_ID_5 */
LOGGER_MSG_ID_6_BITMAP, /* LOGGER_MSG_ID_6 */
LOGGER_MSG_ID_7_BITMAP, /* LOGGER_MSG_ID_7 */
LOGGER_MSG_ID_8_BITMAP, /* LOGGER_MSG_ID_8 */
LOGGER_MSG_ID_9_BITMAP, /* LOGGER_MSG_ID_9 */
LOGGER_MSG_ID_10_BITMAP, /* LOGGER_MSG_ID_10 */
};
And I get this error:
line 39: error #18: expected a ")"
line 39: error #29: expected an expression
(line 39 is LOGGER_MSG_ID_2_BITMAP)
Where did I go wrong?
----- Edit -----
For now I have a workaround that I don't really like.
I don't use uint64, so I made use of the sizeof operator, and now my macro looks like this:
#define GET_ENUM_FROM_TYPE(_type) \
(sizeof(_type) == sizeof(uint8)) ? PARAM_SIZE_UINT8 : \
((sizeof(_type) == sizeof(uint16)) ? PARAM_SIZE_UINT16 : \
((sizeof(_type) == sizeof(uint32)) ? PARAM_SIZE_UINT32 : PARAM_NONE))
and my parameters list is:
#define NO_PARAM uint64
#define ID_2_P0_TYPE uint8
#define ID_2_P1_TYPE uint16
#define ID_2_P2_TYPE NO_PARAM
#define ID_2_P3_TYPE NO_PARAM
It works fine but... you know...
I believe the solution is to use the token concatenation operator ## and helper defines.
// These must match your enum
#define HELPER_0 PARAM_NONE
#define HELPER_uint8 PARAM_SIZE_UINT8
#define HELPER_uint16 PARAM_SIZE_UINT16
#define HELPER_uint32 PARAM_SIZE_UINT32
// Indirection through a helper macro so that _type is expanded
// before pasting (otherwise you would get e.g. HELPER_ID_2_P0_TYPE)
#define CONCAT(a, b) a ## b
// Outer parentheses not strictly necessary here
#define GET_ENUM_FROM_TYPE(_type) (CONCAT(HELPER_, _type))
With that, GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) will expand to (PARAM_SIZE_UINT16) after preprocessing.
Note that the suffix in the HELPER_* defines has to match the content of the ID_*_P*_TYPE macros exactly. For example, HELPER_UINT8 won't work (wrong case). (Thanks @cxw)
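Putting it together, a minimal self-contained sketch (the enum and the type defines are copied from the question):

typedef enum{
    PARAM_NONE,
    PARAM_SIZE_UINT8,
    PARAM_SIZE_UINT16,
    PARAM_SIZE_UINT32
}paramSize_e;

#define HELPER_0      PARAM_NONE
#define HELPER_uint8  PARAM_SIZE_UINT8
#define HELPER_uint16 PARAM_SIZE_UINT16
#define CONCAT(a, b) a ## b
#define GET_ENUM_FROM_TYPE(_type) (CONCAT(HELPER_, _type))

#define ID_2_P0_TYPE uint8
#define ID_2_P1_TYPE uint16

/* expands to ((PARAM_SIZE_UINT16) << 2 | (PARAM_SIZE_UINT8)), i.e. 0x09 */
#define LOGGER_MSG_ID_2_BITMAP \
    (GET_ENUM_FROM_TYPE(ID_2_P1_TYPE) << 2 | GET_ENUM_FROM_TYPE(ID_2_P0_TYPE))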
The basic problem is that == is not supported for types, only for values. Given
uint8 foo;
you can say foo==42 but not foo == uint8. This is because types are not first-class values in C.
One hack would be to use the C preprocessor stringification operator # (gcc docs). However, this moves all your computation to runtime and may not be suitable for an embedded environment. For example:
#define GET_ENUM_FROM_TYPE(_type) ( \
(strcmp(#_type, "uint8")==0) ? PARAM_SIZE_UINT8 : \
((strcmp(#_type, "uint16")==0) ? PARAM_SIZE_UINT16 : \
((strcmp(#_type, "uint32")==0) ? PARAM_SIZE_UINT32 : PARAM_NONE)) \
)
With that definition,
GET_ENUM_FROM_TYPE(uint8)
expands to
( (strcmp("uint8", "uint8")==0) ? PARAM_SIZE_UINT8 : ((strcmp("uint8", "uint16")==0) ? PARAM_SIZE_UINT16 : ((strcmp("uint8", "uint32")==0) ? PARAM_SIZE_UINT32 : PARAM_NONE)) )
which should do what you want, although at runtime.
Sorry, this doesn't directly answer the question. But you should reconsider this whole code.
First of all, _Generic would have solved this elegantly.
The dirty alternative for untangling groups of macros like these would be to use so-called X macros, which are perfect for cases such as "I don't want redundancy between the bitmap and parameters definitions". You can likely rewrite your code with X macros and get rid of a lot of superfluous defines and macros; one possibility is sketched below. How readable it will end up is another story.
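For instance, the bitmap table at the end of the question could be generated from a single list (a hedged sketch; LOGGER_MESSAGES and AS_BITMAP are invented names):

/* The X macro lists each message exactly once... */
#define LOGGER_MESSAGES(X) \
    X(LOGGER_MSG_ID_1, LOGGER_MSG_ID_1_BITMAP) \
    X(LOGGER_MSG_ID_2, LOGGER_MSG_ID_2_BITMAP)
/* ...and so on for the remaining IDs. */

/* ...and each use re-expands the list for a different purpose: */
#define AS_BITMAP(id, bitmap) bitmap,

uint8 paramsSizeBitmap[] = {
    LOGGER_MESSAGES(AS_BITMAP)
};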
However, whenever you find yourself this deep in a macro meta-programming jungle, it is almost always an indication of poor program design. All of this smells like an artificial solution to a problem that could have been solved in much better ways: an "XY problem" (not to be confused with X macros :) ). The best solution most likely involves rewriting this entirely in simpler ways. Your case doesn't sound unique in any way; it seems that you just want to generate a bunch of bit masks.
Good programmers always try to make their code simpler, rather than making it more complex.
In addition, you may have more or less severe bugs all over the code, caused by the C language type system. It can be summarized as:
Bit shifts or other bitwise arithmetic should never be used on signed types. Doing so can lead to all manner of subtle bugs and poorly-defined behavior.
Enumeration constants are always of type int, which is signed. You should avoid mixing them with bitwise arithmetic. Avoid enums entirely for programs like this.
Small integer types such as uint8_t or uint16_t get implicitly promoted to int when used in an expression, meaning the C language will undermine most of your attempts to get the correct type and quietly use int anyway.
The resulting type of your macros will be int, which is not what you want.
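A minimal demonstration of that promotion (standard C, using uint8_t from <stdint.h> in place of the question's uint8):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 1, b = 2;
    /* a and b are promoted to int, so the sum has type int */
    printf("%zu\n", sizeof(a + b));  /* prints sizeof(int), e.g. 4 */
    return 0;
}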
As the question says: is the C preprocessor able to do it?
E.g.:
#define PI 3.1416
#define OP PI/100
#define OP2 PI%100
Is there any way OP and/or OP2 get calculated in the preprocessing phase?
Integer arithmetic? Run the following program to find out:
#include "stdio.h"
int main() {
#if 1 + 1 == 2
printf("1+1==2\n");
#endif
#if 1 + 1 == 3
printf("1+1==3\n");
#endif
}
Answer is "yes", there is a way to make the preprocessor perform integer arithmetic, which is to use it in a preprocessor condition.
Note however that your examples are not integer arithmetic. I just checked, and gcc's preprocessor fails if you try to make it do float comparisons. I haven't checked whether the standard ever allows floating point arithmetic in the preprocessor.
Regular macro expansion does not evaluate integer expressions; it leaves that to the compiler, as can be seen by preprocessing (-E in gcc) the following:
#define ONEPLUSONE (1 + 1)
#if ONEPLUSONE == 2
int i = ONEPLUSONE;
#endif
Result is int i = (1 + 1); (plus probably some stuff to indicate source file names and line numbers and such).
The code you wrote doesn't actually make the preprocessor do any calculation. A #define does simple text replacement, so with this defined:
#define PI 3.1416
#define OP PI/100
This code:
if (OP == x) { ... }
becomes
if (3.1416/100 == x) { ... }
and then it gets compiled. The compiler in turn may choose to evaluate such a constant expression at compile time and produce code equivalent to this:
if (0.031416 == x) { ... }
But this is the compiler, not the preprocessor.
To answer your question: yes, the preprocessor CAN do some arithmetic, but only integer arithmetic, and only inside conditional directives; a floating-point expression such as 3.141/100 is not allowed in an #if. This can be seen when you write something like this:
#if (314/100 == 3)
printf("yo");
#elif (3+3 == 6)
printf("hey");
#endif
YES, I mean: it can do arithmetic :)
As demonstrated in 99 bottles of beer.
Yes, it can be done with the Boost Preprocessor, and it is compatible with pure C, so you can use it in C-only compilations. Your code involves floating-point numbers, though, so I think that needs to be done indirectly.
#include <boost/preprocessor/arithmetic/div.hpp>
BOOST_PP_DIV(11, 5) // expands to 2
#define KB 1024
#define HKB BOOST_PP_DIV(KB,2)
#define REM(A,B) BOOST_PP_SUB(A, BOOST_PP_MUL(B, BOOST_PP_DIV(A,B)))
#define RKB REM(KB,2)
int div = HKB;
int rem = RKB;
This preprocesses to (check with gcc -E)
int div = 512;
int rem = 0;
Thanks to this thread.
Yes.
I can't believe that no one has yet linked to a certain obfuscated C contest winner. The guy implemented an ALU in the preprocessor via recursive includes. Here is the implementation, and here is something of an explanation.
Now, that said, you don't want to do what that guy did. It's fun and all, but look at the compile times in his hint file (not to mention the fact that the resulting code is unmaintainable). More commonly, people use the pre-processor strictly for text replacement, and evaluation of constant integer arithmetic happens either at compile time or run time.
As others noted however, you can do some arithmetic in #if statements.
Be careful when doing arithmetic: add parentheses.
#define SIZE4 4
#define SIZE8 8
#define TOTALSIZE SIZE4 + SIZE8
If you ever use something like:
unsigned int i = TOTALSIZE/4;
and expect i to be 3, you would get 4 + 2 = 6 instead.
Add parentheses:
#define TOTALSIZE (SIZE4 + SIZE8)
As an exercise, I'd like to write a macro which tells me if an integer variable is signed. This is what I have so far and I get the results I expect if I try this on a char variable with gcc -fsigned-char or -funsigned-char.
#define ISVARSIGNED(V) (V = -1, (V < 0) ? 1 : 0)
Is this portable? Is there a way to do this without destroying the value of the variable?
#define ISVARSIGNED(V) ((V)<0 || (-V)<0 || (V-1)<0)
doesn't change the value of V. The third test handles the case where V == 0.
On my compiler (gcc/cygwin) this works for int and long but not for char or short.
#define ISVARSIGNED(V) ((V)-1<0 || -(V)-1<0)
also does the job in two tests.
If you're using GCC you can use the typeof keyword to not overwrite the value:
#define ISVARSIGNED(V) ({ typeof (V) _V = -1; _V < 0 ? 1 : 0; })
This creates a temporary variable, _V, that has the same type as V.
As for portability, I don't know. It will work on a two's complement machine (a.k.a. everything your code will ever run on, in all probability), and I believe it will work on one's complement and sign-and-magnitude machines as well. As a side note, if you use typeof, you may want to cast -1 to typeof (V) to make it safer (i.e. less likely to trigger warnings).
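With that suggestion applied, the macro might look like this (still GCC-specific; a sketch):

#define ISVARSIGNED(V) ({ typeof (V) _V = (typeof (V))-1; _V < 0 ? 1 : 0; })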
#define ISVARSIGNED(V) ((-(V) < 0) != ((V) < 0))
Without destroying the variable's value. But doesn't work for 0 values.
What about:
#define ISVARSIGNED(V) (((V)-(V)-1) < 0)
This simple solution has no side effects and has the benefit of referring to v only once, which is important in a macro. We use the gcc extension typeof to get the type of v, and then cast -1 to this type:
#define IS_SIGNED_TYPE(v) ((typeof(v))-1 <= 0)
It's <= rather than just < to avoid compiler warnings for some cases (when enabled).
A different approach to all the "make it negative" answers:
#define ISVARSIGNED(V) (~(V^V)<0)
That way there's no need to have special cases for different values of V, since ∀ V ∈ ℤ, V^V = 0.
A distinguishing characteristic of signed/unsigned math is that when you right shift a signed number, the most significant bit is copied. When you shift an unsigned number, the new bits are 0.
#define HIGH_BIT(n) ((n) & (1 << (sizeof(n) * CHAR_BIT - 1)))
#define IS_SIGNED(n) (HIGH_BIT(n) ? HIGH_BIT((n) >> 1) != 0 : HIGH_BIT(~(n) >> 1) != 0)
So basically, this macro uses a conditional expression to determine whether the high bit of a number is set. If it's not, the macro sets it by bitwise negating the number. We can't do an arithmetic negation because -0 == 0. We then shift right by 1 bit and test whether sign extension occurred.
This assumes 2's complement arithmetic, but that's usually a safe assumption.
Why on earth do you need it to be a macro? Templates are great for this:
template <typename T>
bool is_signed(T) {
static_assert(std::numeric_limits<T>::is_specialized, "Specialize std::numeric_limits<T>");
return std::numeric_limits<T>::is_signed;
}
Which will work out-of-the-box for all fundamental integral types. It will also fail at compile-time on pointers, which the version using only subtraction and comparison probably won't.
EDIT: Oops, the question requires C. Still, templates are the nice way :P