When is it appropriate to include a type conversion in a symbolic constant/macro, like this:
#define MIN_BUF_SIZE ((size_t) 256)
Is it a good way to make it behave more like a real variable, with type checking?
When is it appropriate to use the L or U (or LL) suffixes:
#define NBULLETS 8U
#define SEEK_TO 150L
You need to do it any time the default type isn't appropriate. That's it.
Typing a constant can be important in places where the automatic conversions are not applied, in particular calls to functions with a variable argument list:
printf("my size is %zu\n", MIN_BUF_SIZE);
could easily crash on a platform where the widths of int and size_t differ, if you omitted the cast.
But your macro leaves room for improvement. I'd do that as
#define MIN_BUF_SIZE ((size_t)+256U)
(see the little + sign there?)
When written like that, the macro can still be used in preprocessor expressions (with #if). This is because in the preprocessor the unknown identifier size_t evaluates to 0, so the expression reads (0)+256U, and the result is an unsigned 256 there, too.
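Here's a minimal sketch of both uses (USE_LARGE_BUFFER is a made-up name for the example):
#include <stddef.h>
#include <stdio.h>

#define MIN_BUF_SIZE ((size_t)+256U)

/* In the preprocessor, the unknown identifier size_t evaluates
   to 0, so the condition reads ((0)+256U) > 128U: */
#if MIN_BUF_SIZE > 128U
#define USE_LARGE_BUFFER 1
#endif

int main(void)
{
    /* In the compiler, the same tokens are a cast applied to
       the unary-plus expression +256U, so the constant has
       type size_t: */
    printf("my size is %zu\n", MIN_BUF_SIZE);
    return 0;
}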
#define is just preprocessor token replacement.
Whatever you write in the #define, the preprocessor substitutes as replacement text for the macro name before compilation.
So either way is correct
#define A a
int main(void)
{
    int A; // A will be replaced by a before compilation
}
There are many variations of #define, such as variadic macros and multi-line macros, but the core mechanism of #define is the one explained above.
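As a sketch of those variations (SWAP and LOG are just illustrative names, not standard macros):
#include <stdio.h>

/* A multi-line macro, continued with backslashes and wrapped
   in do/while(0) so it behaves like a single statement: */
#define SWAP(a, b, T) do { \
    T tmp_ = (a);          \
    (a) = (b);             \
    (b) = tmp_;            \
} while (0)

/* A variadic macro forwarding its arguments to fprintf: */
#define LOG(fmt, ...) fprintf(stderr, fmt "\n", __VA_ARGS__)

int main(void)
{
    int x = 1, y = 2;
    SWAP(x, y, int);
    LOG("x=%d y=%d", x, y); /* prints "x=2 y=1" */
    return 0;
}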
Explicitly indicating the types in a constant was more relevant in Kernighan and Ritchie C (before ANSI/Standard C and its function prototypes came along).
Function prototypes like double fabs(double value); now allow the compiler to generate proper type conversions when needed.
You still want to explicitly indicate the constant sizes in some cases. The examples that come to my mind right now are bit masks:
#define VALUE_1 ((unsigned short) -1) might be 16 bits of ones while #define VALUE_2 ((unsigned char) -1) might be 8. Therefore, given a long x, x & VALUE_1 and x & VALUE_2 would give very different results. (With signed types, both constants would promote to an int -1 and the distinction would be lost.)
This would also be the case for the L or LL suffixes: the constants would use different numbers of bits.
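A quick sketch of that difference (MASK16 and MASK8 are illustrative names; this assumes a 16-bit short and an 8-bit char):
#include <stdio.h>

#define MASK16 ((unsigned short)-1) /* 0xFFFF if short is 16 bits */
#define MASK8  ((unsigned char)-1)  /* 0xFF if char is 8 bits */

int main(void)
{
    long x = 0x12345678L;
    printf("%lx\n", (unsigned long)(x & MASK16)); /* prints 5678 */
    printf("%lx\n", (unsigned long)(x & MASK8));  /* prints 78 */
    return 0;
}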
What do these two lines in C mean?
#define NN_DIGITS ( 4)
#define MM_MARKS_DONE (255)
I know what #define and #define() do, where #define() executes the macro in (), but I don't know this particular variant (with an integer).
Is it actually redundant to write () to define an integer value? Should these values be interpreted bitwise? What will happen if we don't write ()? Will 4 and 255 be taken as a string?
Keyword: "execute". This is the root of your misunderstanding.
Macros aren't executed. They are substituted. The preprocessor replaces the token NN_DIGITS with the token sequence ( 4). As a matter of fact, it would replace it with practically any token sequence. Even #define NN_DIGITS ( %34 (DDd ] is a valid macro definition (courtesy of my cat), despite the fact that we most certainly don't want to try to expand it.
Is it actually redundant to write () to define an integer value?
From a practical standpoint, yes, it's redundant. But some would probably do it to maintain consistency with other macros, where the resulting expressions can depend on the presence of parentheses.
Should these values be interpreted bitwise?
Everything is bitwise to a computer.
What will happen if we don't write ()? Will 4 and 255 be taken as a string?
No, they will just be the tokens 4 and 255 as opposed to the sequences ( 4) and (255) respectively. The preprocessor deals only in tokens; it knows practically nothing about the type system. If the macro appears in a program, say:
int a = NN_DIGITS;
It will be turned by the preprocessor into:
int a = ( 4);
And then compiled further by the other steps in the pipeline of turning a program into an executable.
The parentheses do absolutely nothing in this case; they're just noise.
There's a general rule of survival saying that function-like macros should always:
Wrap each occurrence of a macro parameter in parentheses, and
Wrap the whole macro body in an outer pair of parentheses.
That is:
#define ADD(x,y) x + y       // BAD: arguments can bind to neighboring operators
#define ADD(x,y) (x) + (y)   // BAD: the whole result still can, e.g. in ADD(1,2) * 3
#define ADD(x,y) ((x) + (y)) // correct
This is to dodge issues of operator precedence and will be addressed by any decent beginner-level learning material.
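For instance, here's a minimal sketch of the precedence issue the outer parentheses prevent (ADD_BAD and ADD_GOOD are illustrative names):
#include <stdio.h>

#define ADD_BAD(x, y)  (x) + (y)
#define ADD_GOOD(x, y) ((x) + (y))

int main(void)
{
    printf("%d\n", ADD_BAD(1, 2) * 3);  /* (1) + (2) * 3 == 7 */
    printf("%d\n", ADD_GOOD(1, 2) * 3); /* ((1) + (2)) * 3 == 9 */
    return 0;
}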
Overly pedantic people who've learned the above rules tend to apply them to all macros, not just function-like macros. But when the macro contains nothing but a single integer constant (a single pre-processor token), the parentheses achieve absolutely nothing.
Is it actually redundant to write () to define an integer value?
Yes, it just adds noise.
Should these values be interpreted bitwise?
Macros are mostly to be regarded as text replacement. What you do with the value in the calling code is no business of the macro.
What will happen if we don't write ()?
The code will get slightly easier to read.
Will 4 and 255 be taken as a string?
No, why would they be?
There is a specific case where the parentheses cause harm, though, and that is when you use macros to convert a pre-processor constant to a string. Suppose I have this program:
#include <stdio.h>

#define STR(x) #x
#define AGE(x) STR(x)
#define DOG_AGE 5

int main(void)
{
    puts("My dog is " AGE(DOG_AGE) " years old.");
}
AGE expands the macro DOG_AGE to 5 and then the next macro converts it to a string. So this prints My dog is 5 years old. because the # operator stringizes the pre-processor token exactly as it is given. If I add "useless noise parentheses" to the macro:
#define DOG_AGE (5)
Then the output becomes My dog is (5) years old. Not what I intended.
When using the #define directive in C, what are the maximum and minimum values the defined constant can have? For example, is
#define INT_MIN (pow(-2,31))
#define INT_MAX (pow(2,31))
an acceptable definition? I suppose a better way to ask is: what is the datatype of the defined value?
#define performs token substitution. If you don't know what tokens are, you can think of this as text substitution on complete words, much like your editor's "search and replace" function could do. Therefore,
#define FOO 123456789123456789123456789123456789123456789
is perfectly valid so far — that just means that the preprocessor will replace every instance of FOO with that long number. It would also be perfectly legal (as far as preprocessing goes) to do
#define FOO this is some text that does not make sense
because the preprocessor doesn't know anything about C, and just replaces FOO with whatever it is defined as.
But this is not the answer you're probably looking for.
After the preprocessor has replaced the macro, the compiler will have to compile whatever was left in its place, and it will almost certainly fail to compile either of the examples I posted here.
Integer constants can be as large as the largest integer type defined by your compiler, which is equivalent to uintmax_t (defined in <stdint.h>). For instance, if this type is 64 bits wide (very common case), the maximum valid integer constant is 18446744073709551615, i.e., 2 to the power of 64 minus 1.
This is independent of how this constant is written or constructed — whether it is done via a #define, written directly in the code, written in hexadecimal, it doesn't matter. The limit is the same, because it is given by the compiler, and the compiler runs after preprocessing is finished.
EDIT: as pointed out by @chux in the comments, in recent versions of C (starting with C99), decimal constants are signed by default unless they carry a suffix indicating otherwise (such as U/u, or a combined type/signedness suffix like ULL). In that case, the maximum valid unsuffixed constant is whatever fits in an intmax_t value (typically half the max of uintmax_t, rounded down); constants with unsigned suffixes can grow as large as a uintmax_t value can. (Note that C integer constants, signed or not, are never negative: -1 is a unary minus applied to the constant 1.)
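A small illustration, assuming a 64-bit uintmax_t as above (the variable names are mine):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Unsuffixed decimal constants are signed (C99 and later),
       so the largest one must fit in intmax_t: */
    intmax_t big_signed = 9223372036854775807;      /* 2^63 - 1 */

    /* With a u suffix the constant may use unsigned types up
       to uintmax_t: */
    uintmax_t big_unsigned = 18446744073709551615u; /* 2^64 - 1 */

    printf("%jd %ju\n", big_signed, big_unsigned);
    return 0;
}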
#define INT_MIN (pow(-2,31)) is not acceptable, as it forms a constant of the wrong type.
pow() returns a double.
Consider this: INT_MIN % 2 leads to invalid code, as % cannot be applied to a double.
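A sketch of that type problem (BAD_MIN and GOOD_MIN are illustrative names, not the standard macros):
#include <math.h>
#include <stdio.h>

#define BAD_MIN  (pow(-2, 31))     /* type double, evaluated at run time */
#define GOOD_MIN (-2147483647 - 1) /* int constant expression */

int main(void)
{
    /* int r = BAD_MIN % 2;  // does not compile: % needs integer operands */
    int r = GOOD_MIN % 2;    /* fine: both operands are int */
    printf("%d %.0f\n", r, BAD_MIN);
    return 0;
}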
Your definition is ill-advised for a number of reasons:
These macro names are used in the standard library header limits.h where they are correctly defined for the toolchain's target platform.
Macros are not part of the C language proper; rather, they cause replacement text to be inserted into the code for evaluation by the compiler. As such, your definition will cause the function pow() to be called everywhere these macros are used, evaluated at run-time (repeatedly) rather than being a compile-time constant.
The maximum value of a 32-bit two's complement integer is not 2^31 but 2^31 - 1.
The pow() function returns a double not an integer - your macro expressions therefore have type double.
Your macros assume the integer size of the platform to be 32 bits, which need not be the case; the definitions are not portable. This is possibly true also of those in limits.h, but there the entire library is platform-specific, and you'd use a different library/toolchain with each platform.
If you must (and you really shouldn't) define your own macros for this purpose, you should:
define them using distinct macro names,
without assumptions regarding the target platform integer width,
use a constant-expression,
use an expression having int type.
For example:
#define PLATFORM_INDEPENDENT_INT_MAX ((int)(~0u >> 1u))
#define PLATFORM_INDEPENDENT_INT_MIN ((int)~(~0u >> 1u))
Using these, the following code:
#include <stdio.h>
#include <limits.h>
#define PLATFORM_INDEPENDENT_INT_MAX ((int)(~0u >> 1u))
#define PLATFORM_INDEPENDENT_INT_MIN ((int)~(~0u >> 1u))
int main(void)
{
    printf("Standard: %d\t%d\n", INT_MIN, INT_MAX);
    printf("Mine: %d\t%d\n", PLATFORM_INDEPENDENT_INT_MIN, PLATFORM_INDEPENDENT_INT_MAX);
    return 0;
}
Outputs:
Standard: -2147483648 2147483647
Mine: -2147483648 2147483647
I switched to fixed-width integer types in my projects mainly because they help me think about integer sizes more clearly when using them. Including them via #include <inttypes.h> also brings in a bunch of other macros, like the printing macros PRIu32, PRIu64, ...
To assign a constant value to a fixed-width variable I can use macros like UINT32_C() and INT32_C(). I started using them whenever I assigned a constant value.
This leads to code similar to this:
uint64_t i;
for (i = UINT64_C(0); i < UINT64_C(10); i++) { ... }
Now I saw several examples which did not care about that. One is the stdbool.h include file:
#define bool _Bool
#define false 0
#define true 1
bool has a size of 1 byte on my machine, so it does not look like an int. But 0 and 1 should be integers, which the compiler should automatically turn into the right type. If I used that approach in my example, the code would be much easier to read:
uint64_t i;
for (i = 0; i < 10; i++) { ... }
So when should I use the fixed-width constant macros like UINT32_C(), and when should I leave that work to the compiler (I'm using GCC)? What if I were writing MISRA C code?
As a rule of thumb, you should use them when the type of the literal matters. There are two things to consider: the size and the signedness.
Regarding size:
The C standard guarantees that an int can hold values up to at least 32767. Since you can't get an integer literal with a smaller type than int, values up to 32767 don't need the macros. If you need larger values, then the type of the literal starts to matter and it is a good idea to use these macros.
Regarding signedness:
Integer literals with no suffix are usually of a signed type. This is potentially dangerous, as it can cause all manner of subtle bugs during implicit type promotion. For example (my_uint8_t + 1) << 31 would cause an undefined behavior bug on a 32 bit system, while (my_uint8_t + 1u) << 31 would not.
This is why MISRA has a rule stating that all integer literals should have a u/U suffix if the intention is to use unsigned types. So in my example above you could use my_uint8_t + UINT32_C(1), but you could just as well use 1u, which is perhaps the most readable. Either should be fine for MISRA.
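A minimal sketch of that promotion hazard, assuming a 32-bit int (the variable names are mine):
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint8_t b = 0;

    /* b + 1 promotes to signed int; shifting a 1 into the sign
       bit of a 32-bit int is undefined behavior: */
    /* uint32_t bad = (b + 1) << 31; */

    /* With the u suffix the arithmetic is unsigned throughout
       and well defined: */
    uint32_t ok = (b + 1u) << 31;
    printf("0x%" PRIX32 "\n", ok); /* prints 0x80000000 */
    return 0;
}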
As for why stdbool.h defines true/false to be 1/0, it is because the standard explicitly says so. Boolean conditions in C still use int type, and not bool type like in C++, for backwards compatibility reasons.
It is however considered good style to treat boolean conditions as if C had a true boolean type. MISRA-C:2012 has a whole set of rules regarding this concept, called essentially boolean type. This can give better type safety during static analysis and also prevent various bugs.
It's for using smallish integer literals where the context won't result in the compiler converting them to the correct size.
I've worked on an embedded platform where int is 16 bits and long is 32 bits. If you were trying to write portable code to work on platforms with either 16-bit or 32-bit int types, and wanted to pass a 32-bit "unsigned integer literal" to a variadic function, you'd need the cast:
#define BAUDRATE UINT32_C(38400)
printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);
On the 16-bit platform, the macro expands to 38400UL, and on the 32-bit platform to just 38400U. Those will match the PRIu32 macro of either "lu" or "u".
I think that most compilers would generate identical code for (uint32_t) X as for UINT32_C(X) when X is an integer literal, but that might not have been the case with early compilers.
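One practical difference from a cast: the macro's expansion is still valid in preprocessor arithmetic, which the standard requires of the INTn_C/UINTn_C macros. A sketch (FAST_UART is a made-up name, and the definition in the comment is only roughly what a 16-bit platform's <stdint.h> might use):
#include <inttypes.h>
#include <stdio.h>

/* On a 16-bit int platform, UINT32_C(c) might be defined
   roughly as  c ## UL,  appending the suffix as a token. */
#define BAUDRATE UINT32_C(38400)

/* This works in #if because 38400UL is a valid preprocessor
   constant; a cast like ((uint32_t)38400) would not be: */
#if BAUDRATE > 19200
#define FAST_UART 1
#endif

int main(void)
{
    printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);
    return 0;
}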
Any solution to this compiler ERROR?
#define TYPE_TOTAL 10
#define MAX_SIZE 20
#define NBITS2(n) ((n&2)?1:0)
#define NBITS4(n) ((n&(0xC))?(2+NBITS2(n>>2)):(NBITS2(n)))
#define NBITS8(n) ((n&0xF0)?(4+NBITS4(n>>4)):(NBITS4(n)))
#define NBITS16(n) ((n&0xFF00)?(8+NBITS8(n>>8)):(NBITS8(n)))
#define NBITS32(n) ((n&0xFFFF0000)?(16+NBITS16(n>>16)):(NBITS16(n)))
#define NBITS(n) (n==0?0:NBITS32(n)+1)
typedef struct StatsEntry_s
{
    uint32 type:NBITS(TYPE_TOTAL);
    uint32 subtype:NBITS(MAX_SIZE);
} StatsEntry_t;
The compiler flags the line uint32 type:NBITS(TYPE_TOTAL); with:
"this operator is not allowed in a constant expression".
Edit: John Bollinger commented correctly that your macros are actually valid constant expressions; indeed, the code compiles and runs with gcc 4.9.3, VS C++ (http://webcompiler.cloudapp.net/), and clang 3.5.1. We are not sure why you have problems -- which compiler are you using? But anyway, if you are stuck with a compiler that can't handle it:
I think you can substitute the conditions with arithmetic, the way I suggested in my comment for the simplest macro. For example for NBITS4(n):
The original is
#define NBITS4(n) ((n&(0xC))?(2+NBITS2(n>>2)):(NBITS2(n)))
which makes two things dependent on the 0xC bits: the bit shift count and adding 2. Let's see. If there is a match we want to add 2: (((n)&0xC) != 0)*2 should be 2 if a bit is set (note the parentheses around (n)&0xC: != binds tighter than &, so they are required). For the bit shift we consider that n>>0 is n, so that we can again compute 0 or 2 depending on n&0xC the same way as before. That should yield
#define NBITS4(n) ((((n)&0xC) != 0)*2 + NBITS2((n) >> ((((n)&0xC) != 0)*2)))
I'm not sure whether that's the easiest way, and I didn't test it, but it should be a start.
A technicality: always parenthesize your macro arguments; defines are text replacement and can expand to surprising expressions.
On a general note: Computations are much faster than jumps on modern CPUs. Sometimes using booleans as numbers instead of as conditions for branching gives surprising performance gains. The downside is that it can be absolutely unreadable, like here.
You could redefine your macros without the ternary operator. This is only good up to 255 but could be extended easily:
#define NBITS2(n) (((!(n&2))*(n&1))|(n&2))
#define NBITS4(n) (((!(n&4))*NBITS2(n))|(((n&4)>>2)*3))
#define NBITS8(n) (((!(n&8))*NBITS4(n))|(((n&8)>>3)*4))
#define NBITS16(n) (((!(n&16))*NBITS8(n))|(((n&16)>>4)*5))
#define NBITS32(n) (((!(n&32))*NBITS16(n))|(((n&32)>>5)*6))
#define NBITS64(n) (((!(n&64))*NBITS32(n))|(((n&64)>>6)*7))
#define NBITS(n) (((!(n&128))*NBITS64(n))|(((n&128)>>7)*8))
I have a couple of questions regarding the following snippet:
#include <stdio.h>

#define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0]))

int array[] = {23,34,12,17,204,99,16};

int main(void)
{
    int d;
    for (d = -1; d <= (TOTAL_ELEMENTS-2); d++)
        printf("%d\n", array[d+1]);
    return 0;
}
Here the output of the code does not print the array elements as expected. But when I add a typecast of (int) to the macro definition of TOTAL_ELEMENTS, as in
#define TOTAL_ELEMENTS (int) (sizeof(array) / sizeof(array[0]))
It displays all array elements as expected.
How does this typecast work?
Based on this I have a few questions:
Does it mean that if I have some macro definition like:
#define AA (-64)
then by default in C, all constants defined as macros are equivalent to signed int?
If yes, then: if I have to forcibly make some constant defined in a macro behave as an unsigned int, is there any constant suffix that I can use? (I tried UL and UD; neither worked.)
How can I define a constant in a macro definition to behave as unsigned int?
Look at this line:
for(d=-1;d <= (TOTAL_ELEMENTS-2);d++)
In the first iteration, you are checking whether
-1 <= (TOTAL_ELEMENTS-2)
The sizeof operator yields an unsigned value (of type size_t), so d is converted to unsigned for the comparison, and the check fails (signed -1 becomes 0xFFFFFFFF unsigned on 32-bit machines).
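You can see the conversion in isolation with a sketch like this (the compiler will likely warn about the signed/unsigned comparison, which is exactly the point):
#include <stdio.h>

int main(void)
{
    /* sizeof yields a size_t, so -1 is converted to unsigned
       for the comparison and becomes SIZE_MAX: */
    if (-1 <= sizeof(int) - 2)
        puts("-1 compared as a small value");
    else
        puts("-1 became a huge unsigned value; comparison failed");
    return 0;
}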
A simple change in the loop fixes the problem:
for (d = 0; d <= (TOTAL_ELEMENTS-1); d++)
    printf("%d\n", array[d]);
To answer your other questions: C macros are expanded textually; there is no notion of types. The C compiler sees your loop as this:
for(d=-1;d <= ((sizeof(array) / sizeof(array[0]))-2);d++)
If you want to define an unsigned constant in a macro, use the usual suffix (u for unsigned, ul for unsigned long).
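For example (illustrative names):
/* The suffix fixes the constant's type at the point of expansion. */
#define COUNT_U   64u   /* unsigned int */
#define COUNT_UL  64ul  /* unsigned long */
#define COUNT_ULL 64ull /* unsigned long long */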
sizeof yields the number of bytes as an unsigned value (a size_t). That's why you need the cast.
Regarding your question about
#define AA (-64)
See Macro definition and expansion in the C preprocessor:
Object-like macros were conventionally used as part of good programming practice to create symbolic names for constants, e.g.
#define PI 3.14159
... instead of hard-coding those numbers throughout one's code. However, both C and C++ provide the const qualifier, which offers another way to avoid hard-coding constants throughout the code.
Constants defined as macros have no associated type. Use const where possible.
Answering just one of your sub-questions:
To "define a constant in a macro" (this is a bit sloppy, you're not defining a "constant", merely doing some text-replacement trickery) that is unsigned, you should use the 'u' suffix:
#define UNSIGNED_FORTYTWO 42u
This will insert an unsigned int literal wherever you type UNSIGNED_FORTYTWO.
Likewise, you often see (in <math.h> for instance) suffixes used to set the exact floating-point type:
#define FLOAT_PI 3.14f
This inserts a float (i.e. "single precision") floating-point literal wherever you type FLOAT_PI in the code.