I would like to know whether it is possible to pass a parameter to a #define macro to select different output.
For example:
#define Row(1) LPC_GPIO0
#define Row(2) LPC_GPIO3
#define Row(3) LPC_GPIO2
Then in my code I create a loop for sending the parameter
Row(x)
This macro syntax doesn't exist.
Moreover, it can't possibly exist, because macros are expanded before the compiler compiles the code. If your x isn't a compile-time constant, there would be no way to determine what to substitute into the source code for the macro invocation.
If you need to index some values, just use an array, e.g. (assuming these constants are integers):
static int rows[] = { 0, LPC_GPIO0, LPC_GPIO3, LPC_GPIO2 };
Writing
rows[x]
would have the effect you seem to have expected from your invalid macro syntax.
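For instance, a minimal sketch (still assuming the LPC_GPIOx constants are integers; scan_rows() and use_row() are made-up names for illustration):
static int rows[] = { 0, LPC_GPIO0, LPC_GPIO3, LPC_GPIO2 };
static void scan_rows(void)
{
    for (int x = 1; x <= 3; x++) {
        use_row(rows[x]);   /* use_row() is a hypothetical helper */
    }
}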
If you want to use macros:
#define GPIOx(x) GPIO##x
and GPIOx(1) will expand to GPIO1
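A small sketch of what token pasting does and doesn't buy you (GPIO1 is assumed here to be some writable register object):
#define GPIOx(x) GPIO##x

GPIOx(1) = 0x01;   /* the preprocessor turns this into: GPIO1 = 0x01; */

/* but GPIOx(i), with a runtime variable i, just becomes the undeclared
   identifier GPIOi, so pasting cannot replace a runtime lookup */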
If you want these calculated at runtime, there is a way to do what you want:
#define Row(x) (x == 1 ? LPC_GPIO0 : (x == 2 ? LPC_GPIO3 : (x == 3 ? LPC_GPIO2 : ERROR_VALUE)))
Though this gets messy as the number of options increases
Also, even if you do want this evaluated at compile time, most optimizing compilers would do that for you as long as x is a constant
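As a rough illustration (LPC_GPIO3 and ERROR_VALUE again assumed to be integer constants):
int r = Row(2);   /* with a constant argument, an optimizing compiler typically
                     folds the whole ternary chain down to: int r = LPC_GPIO3; */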
Related
I am trying to use #if directives, defining the type of operation in order to invoke the right code. So I made a very simple example similar to what I am trying to do:
#include <stdio.h>
enum{ADD,SUB,MUL};
#define operation ADD
int main()
{
int a = 4;
int b = 2;
int c;
#if (operation == ADD)
c = a+b;
#endif
#if (operation == SUB)
c = a-b;
#endif
#if (operation == MUL)
c = a*b;
#endif
printf("result = %i",c);
return 0;
}
But unfortunately that does not work: I get result = 8. If I replace the operation names with numbers it works fine, but I want it to work as described above.
Any help?
The preprocessor is a step that is (in a way) done before the actual compiler sees the code. Therefore it has no idea about enumerations or their values, as they are set during compilation which happens after preprocessing.
You simply can't use preprocessor conditional compilation using enumerations.
The preprocessor replaces any identifier it does not know as a macro with 0, so it will always consider this as true (0 == 0):
#if IDENT == IDENT
It can only test for numeric values.
Simplify your code and feed it to the preprocessor:
enum {ADD,SUB,MUL};
#define operation ADD
int main()
{
(operation == ADD);
}
The preprocessor output is:
enum {ADD,SUB,MUL};
int main()
{
(ADD == ADD);
}
As you see, the enumeration value hasn't been replaced. In an #if directive, each remaining identifier is then replaced with 0, so every one of your conditions evaluates as true and all three branches get compiled.
So a workaround would be to replace your enum with a series of #defines:
#define ADD 1
#define SUB 2
#define MUL 3
Like this it works. The preprocessor output is now:
int main()
{
int a = 4;
int b = 2;
int c;
c = a+b;
# 28 "test.c"
printf("result = %i",c);
return 0;
}
So the solution is to:
either rely 100% on the preprocessor (as the workaround above suggests)
or rely 100% on the compiler (use enums and real if statements)
As others have said, the preprocessor performs its transformations at a very early phase in compilation, before enum values are known. So you can't do this test in #if.
However, you can just use an ordinary if statement. Any decent compiler with optimization enabled will detect that you're comparing constants, perform the tests at compile time, and throw out the code that will never be executed. So you'll get the same result that you were trying to achieve with #if.
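For example, a sketch of the question's program using a plain if (the enum and values are taken from the question; the dead branches are what an optimizer removes):
#include <stdio.h>

enum { ADD, SUB, MUL };
#define operation ADD

int main(void)
{
    int a = 4;
    int b = 2;
    int c;

    /* ordinary if: the compiler knows the enum values and, with
       optimization enabled, folds the tests and drops dead branches */
    if (operation == ADD)
        c = a + b;
    else if (operation == SUB)
        c = a - b;
    else
        c = a * b;

    printf("result = %i\n", c);
    return 0;
}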
But I want it to work as described above.
You seem to mean that you want the preprocessor to recognize the enum constants as such, and to evaluate the == expressions in that light. I'm afraid you're out of luck.
The preprocessor knows nothing about enums. It operates on a mostly-raw stream of tokens and whitespace. When it evaluates a directive such as
#if (operation == SUB)
it first performs macro expansion to produce
#if (ADD == SUB)
Then it must somehow convert the tokens ADD and SUB to numbers, but, again, it knows nothing about enums or the C significance of the preceding code. Its rule for interpreting such symbols as numbers is simple: it replaces each with 0. The result is that all three preprocessor conditionals in your code will always evaluate to true.
If you want the preprocessor to do this then you need to define the symbols to the preprocessor. Since you're not otherwise using the enum, you might as well just replace it altogether with
#define ADD 1
#define SUB 2
#define MUL 3
If you want the enum, too, then just use different symbols with the preprocessor than you use for the enum constants. You can use the same or different values, as you like, because never the twain shall meet.
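For instance, a minimal sketch of that split (the OP_* names are just made up for illustration):
#include <stdio.h>

/* the enum stays for the compiler... */
enum { ADD, SUB, MUL };

/* ...and separate macros exist for the preprocessor */
#define OP_ADD 1
#define OP_SUB 2
#define OP_MUL 3

#define operation OP_ADD

int main(void)
{
    int a = 4, b = 2, c = 0;
#if operation == OP_ADD
    c = a + b;
#elif operation == OP_SUB
    c = a - b;
#else
    c = a * b;
#endif
    printf("result = %i\n", c);
    return 0;
}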
Another solution would be to have the enum in an included header file.
How are the definitions in C processed? Are they processed in order of line numbers?
For example, will the following statements work?
#define ONE 1
#define TWO (ONE+1)
Could there be any problems with definitions that depend on previous definitions?
Yes, one #define can reference other #define substitutions and macros without any problem.
Moreover, an expression built from these constants remains a constant expression.
Your second definition is textually equivalent to placing (ONE+1) in the text, with no limit to the level of nesting. In other words, if you later define
#define THREE (TWO+1)
and then use it in an assignment i = THREE, you would get
i = ((ONE+1)+1)
after preprocessing.
If you are planning to use this trick with numeric values, a common alternative would be to use an enum with specific values, i.e.
enum {
ONE = 1
, TWO = ONE+1
, THREE = TWO+1
, ... // and so on
};
They're processed at the point where they're used, so your example and even this
#define TWO (ONE+1)
#define ONE 1
will work.
The best way is to check for yourself:
g++ test.cpp
gcc test.c
For a stricter compiler check:
gcc test.c -pedantic
And all worked for me!
test.c/test.cpp
#include <stdio.h>
#define A 9
#define B A
int main()
{
printf("%d\n",B);
return 0;
}
The preprocessor processes the #defines in the order they appear in the file, but it does not expand macros inside the replacement list of another #define at definition time. So, in your example:
#define ONE 1
#define TWO (ONE+1)
the replacement list of TWO is stored literally as (ONE+1). Only when TWO is actually used in the code is it replaced by (ONE+1), and that result is rescanned, so ONE then becomes 1 and you end up with (1+1).
The reverse example:
#define TWO (ONE+1)
#define ONE 1
will also work. Why? Because ONE only has to be defined by the time TWO is used in the code, not by the time TWO is defined.
I'd personally prefer the former order over the latter: it's plainly easier for a reader to follow.
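A minimal sketch (a hypothetical test file) showing that expansion happens at the point of use, so the reverse order still works:
#include <stdio.h>

#define TWO (ONE+1)   /* ONE is not defined yet; that's fine */
#define ONE 1

int main(void)
{
    printf("%d\n", TWO);   /* TWO -> (ONE+1) -> (1+1), prints 2 */
    return 0;
}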
I'm working on a C program and I came across a problem. I have this:
#define NUMBER_OF_OPTIONS 5
#define NAME_OPTION1 "Partida Rapida"
#define NAME_OPTION2 "Elige Nivel"
#define NAME_OPTION3 "Ranking"
#define NAME_OPTION4 "Creditos"
#define NAME_OPTION5 "Exit"
for (iterator = 1; iterator <= NUMBER_OF_OPTIONS; iterator++){
menu_options[iterator-1] = NAME_OPTION + iterator;
}
I want that "NAME_OPTION + iterator" takes the value of the corresponding #define. For example if the variable "iterator" is equal to one, I want menu_options[iterator-1] to take the value of NAME_OPTION1, which is "Partida Rapida".
How can I get this?
Essentially, you can't. #define macros are handled by the C Preprocessor and do textual substitution wherever that macro appears in the code. The macro NAME_OPTION has not been defined, so the compiler should complain. C does not allow appending numbers onto strings, or especially onto symbols like NAME_OPTION. Use an array of const char*, which you can then refer to with your iterator.
You can't use defines like this; you can do:
const char *menu_options[5] = {
"Partida Rapida",
"Elige Nivel",
"Ranking",
"Creditos",
"Exit"
};
If you use a #define macro, you just tell the preprocessor to replace every occurrence of the defined word with something else before the code is compiled into machine code.
In this case NUMBER_OF_OPTIONS will be replaced by 5, but there's no occurrence of NAME_OPTION on its own, so nothing will be replaced and you'll get a compile error about NAME_OPTION being undeclared.
Piere's solution shows how to do it, but there is no such thing as an iterator over a char * array, so you have to iterate over the given array using an integer index.
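For instance, a minimal sketch of the indexed approach (the array name menu_names is made up, standing in for the table above):
#include <stdio.h>

static const char *menu_names[] = {
    "Partida Rapida", "Elige Nivel", "Ranking", "Creditos", "Exit"
};
#define NUMBER_OF_OPTIONS (sizeof menu_names / sizeof menu_names[0])

int main(void)
{
    for (size_t i = 0; i < NUMBER_OF_OPTIONS; i++)
        printf("%zu: %s\n", i + 1, menu_names[i]);
    return 0;
}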
Is there any way to use a sizeof in a preprocessor macro?
For example, there have been a ton of situations over the years in which I wanted to do something like:
#if sizeof(someThing) != PAGE_SIZE
#error Data structure doesn't match page size
#endif
The exact thing I'm checking here is completely made up - the point is, I often like to put in these types of (size or alignment) compile-time checks to guard against someone modifying a data-structure which could misalign or re-size things which would break them.
Needless to say - I don't appear to be able to use a sizeof in the manner described above.
There are several ways of doing this.
The following snippets will produce no code if sizeof(someThing) equals PAGE_SIZE; otherwise they will produce a compile-time error.
1. C11 way
Starting with C11 you can use static_assert (requires #include <assert.h>).
Usage:
static_assert(sizeof(someThing) == PAGE_SIZE, "Data structure doesn't match page size");
2. Custom macro
If you just want to get a compile-time error when sizeof(something) is not what you expect, you can use following macro:
#define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
Usage:
BUILD_BUG_ON( sizeof(someThing) != PAGE_SIZE );
This article explains in detail why it works: when the condition is nonzero, the array size becomes 1 - 2 = -1, and an array with negative size is a compile-time error.
3. MS-specific
With the Microsoft C++ compiler you can use the C_ASSERT macro (requires #include <windows.h>), which uses a trick similar to the one described in section 2.
Usage:
C_ASSERT(sizeof(someThing) == PAGE_SIZE);
Is there any way to use a "sizeof" in a pre-processor macro?
No. The conditional directives take a restricted set of conditional expressions; sizeof is one of the things not allowed.
Preprocessing directives are evaluated before the source is parsed (at least conceptually), so there aren't any types or variables yet to get their size.
However, there are techniques for getting compile-time assertions in C (for example, see this page).
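One classic technique is a typedef whose array size turns negative when the condition fails; a minimal sketch (the macro and typedef names are illustrative):
#define COMPILE_TIME_ASSERT(cond) typedef char compile_time_assert_failed[(cond) ? 1 : -1]

COMPILE_TIME_ASSERT(sizeof(int) >= 2);    /* compiles: int is at least 16 bits */
/* COMPILE_TIME_ASSERT(sizeof(int) == 42);   would fail: negative array size */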
I know it's a late answer, but to add on to Mike's version, here's a version we use that doesn't allocate any memory. I didn't come up with the original size check, I found it on the internet years ago and unfortunately can't reference the author. The other two are just extensions of the same idea.
Because they are typedefs, nothing is allocated. With __LINE__ pasted into the name (it has to go through a concatenation helper macro to be expanded), the name is always different, so the check can be copied and pasted wherever needed. This works in MS Visual Studio C compilers and GCC ARM compilers. It does not work in CodeWarrior: CW complains about redefinition, not making use of the __LINE__ preprocessor construct.
/* paste helpers so that __LINE__ actually ends up in the identifier */
#define P_CAT_(a, b) a##b
#define P_CAT(a, b)  P_CAT_(a, b)
//Check overall structure size
typedef char P_CAT(p, __LINE__)[ (sizeof(PARS) == 4184) ? 1 : -1];
//check 8 byte alignment for flash write or similar
typedef char P_CAT(p, __LINE__)[ ((sizeof(PARS) % 8) == 0) ? 1 : -1];
//check offset in structure to ensure a piece didn't move (offsetof needs <stddef.h>)
typedef char P_CAT(p, __LINE__)[ (offsetof(PARS, SUB_PARS) == 912) ? 1 : -1];
I know this thread is really old but...
My solution:
extern char __CHECK__[1/!(<<EXPRESSION THAT SHOULD COME TO ZERO>>)];
As long as that expression equates to zero, it compiles fine. Anything else and it blows up right there. Because the variable is extern'd it will take up no space, and as long as no-one references it (which they won't) it won't cause a link error.
Not as flexible as the assert macro, but I couldn't get that to compile in my version of GCC and this will... and I think it will compile just about anywhere.
The existing answers just show how to achieve the effect of "compile-time assertions" based on the size of a type. That may meet the OP's needs in this particular case, but there are other cases where you really need a preprocessor conditional based on the size of a type. Here's how to do it:
Write yourself a little C program like:
/* you could call this sizeof_int.c if you like... */
#include <stdio.h>
/* 'int' is just an example, it could be any other type */
int main(void) { printf("%zu", sizeof(int)); }
Compile that. Write a script in your favorite scripting language, which runs the above C program and captures its output. Use that output to generate a C header file. For example, if you were using Ruby, it might look like:
sizeof_int = `./sizeof_int`
File.open('include/sizes.h','w') { |f| f.write(<<HEADER) }
/* COMPUTER-GENERATED, DO NOT EDIT BY HAND! */
#define SIZEOF_INT #{sizeof_int}
/* others can go here... */
HEADER
Then add a rule to your Makefile or other build script, which will make it run the above script to build sizes.h.
Include sizes.h wherever you need to use preprocessor conditionals based on sizes.
Done!
(Have you ever typed ./configure && make to build a program? What configure scripts do is basically just like the above...)
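Usage might then look like this (SIZEOF_INT coming from the generated sizes.h; the 4-byte requirement is just an example):
#include "sizes.h"

#if SIZEOF_INT != 4
#error This code assumes a 4-byte int
#endif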
What about the following macro:
/*
* Simple compile time assertion.
* Example: CT_ASSERT(sizeof foo <= 16, foo_can_not_exceed_16_bytes);
*/
#define CT_ASSERT(exp, message_identifier) \
struct compile_time_assertion { \
char message_identifier : 8 + !(exp); \
}
For the example in the comment, MSVC reports something like:
test.c(42) : error C2034: 'foo_can_not_exceed_16_bytes' : type of bit field too small for number of bits
Just as a reference for this discussion, I report that some compilers evaluate sizeof() at preprocessing time.
JamesMcNellis' answer is correct, but some compilers work around this limitation (this probably violates strict ANSI C).
As a case of this, I refer to the IAR C compiler (probably the leading one for professional microcontroller/embedded programming).
#define SIZEOF(x) ((char*)(&(x) + 1) - (char*)&(x)) might work
In C11 the _Static_assert keyword was added. It can be used like:
_Static_assert(sizeof(someThing) == PAGE_SIZE, "Data structure doesn't match page size");
In my portable C++ code (http://www.starmessagesoftware.com/cpcclibrary/) I wanted to put a safeguard on the sizes of some of my structs or classes.
Instead of finding a way for the preprocessor to throw an error (which cannot work with sizeof(), as is stated here), I found a solution here that causes the compiler to throw the error:
http://www.barrgroup.com/Embedded-Systems/How-To/C-Fixed-Width-Integers-C99
I had to adapt that code to make it throw an error in my compiler (Xcode):
static union
{
char int8_t_incorrect[sizeof( int8_t) == 1 ? 1: -1];
char uint8_t_incorrect[sizeof( uint8_t) == 1 ? 1: -1];
char int16_t_incorrect[sizeof( int16_t) == 2 ? 1: -1];
char uint16_t_incorrect[sizeof(uint16_t) == 2 ? 1: -1];
char int32_t_incorrect[sizeof( int32_t) == 4 ? 1: -1];
char uint32_t_incorrect[sizeof(uint32_t) == 4 ? 1: -1];
};
After trying out the macros mentioned here, this fragment seems to produce the desired result (t.h):
#include <sys/cdefs.h>
#define STATIC_ASSERT(condition) typedef char __CONCAT(_static_assert_, __LINE__)[ (condition) ? 1 : -1]
STATIC_ASSERT(sizeof(int) == 4);
STATIC_ASSERT(sizeof(int) == 42);
Running cc -E t.h:
# 1 "t.h"
...
# 2 "t.h" 2
typedef char _static_assert_3[ (sizeof(int) == 4) ? 1 : -1];
typedef char _static_assert_4[ (sizeof(int) == 42) ? 1 : -1];
Running cc -o t.o t.h:
% cc -o t.o t.h
t.h:4:1: error: '_static_assert_4' declared as an array with a negative size
STATIC_ASSERT(sizeof(int) == 42);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
t.h:2:84: note: expanded from macro 'STATIC_ASSERT'
...typedef char __CONCAT(_static_assert_, __LINE__)[ (condition) ? 1 : -1]
^~~~~~~~~~~~~~~~~~~~
1 error generated.
42 isn't the answer to everything after all...
To check at compile time the size of data structures against their constraints I've used this trick.
#if defined(__GNUC__)
{ char c1[sizeof(x)-MAX_SIZEOF_X-1]; } // brackets limit c1's scope
#else
{ char c1[sizeof(x)-MAX_SIZEOF_X]; }
#endif
If x's size is greater than or equal to its limit MAX_SIZEOF_X, then gcc will complain with a 'size of array is too large' error. VC++ will issue either error C2148 ('total size of array must not exceed 0x7fffffff bytes') or C4266 'cannot allocate an array of constant size 0'.
The two definitions are needed because gcc will allow a zero-sized array to be defined this way (sizeof x - n).
The sizeof operator is not available to the preprocessor, but you can transfer sizeof to the compiler and check the condition at runtime:
#define elem_t double
#define compiler_size(x) sizeof(x)
elem_t n;
if (compiler_size(elem_t) == sizeof(int)) {
printf("%d",(int)n);
} else {
printf("%lf",(double)n);
}
I'm trying to decide whether to implement certain operations as macros or as functions.
Say, just as an example, I have the following code in a header file:
extern int x_GLOB;
extern int y_GLOB;
#define min(x,y) ((x_GLOB = (x)) < (y_GLOB = (y))? x_GLOB : y_GLOB)
the intent is to have each single argument computed only once (min(x++,y++) would not cause any problem here).
The question is: do the two global variables pose any issue in terms of code re-entrancy or thread safeness?
I would say no but I'm not sure.
And what about:
#define min(x,y) ((x_GLOB = (x)), \
(y_GLOB = (y)), \
((x_GLOB < y_GLOB) ? x_GLOB : y_GLOB))
would it be a different case?
Please note that the question is not "in general" but is related to the fact that those global variables are used just within that macro and for the evaluation of a single expression.
Anyway, looking at the answers below, I think I can summarize as follows:
It's not thread-safe, as nothing guarantees that a thread won't be suspended "in the middle" of evaluating an expression (as I had hoped it couldn't be).
The "state" those globals represent is, at the very least, the internal state of the "min" operation, and preserving that state would, again at least, require imposing restrictions on how the function can be called (e.g. avoid min(min(1,2),min(3,1))).
I don't think using non-portable constructs is a good idea either, so I guess the only option is to stay on the safe side and implement those cases as regular functions.
Silently modifying global variables inside a macro like min is a very bad idea. You're much better off
accepting that min can evaluate its arguments multiple times (in which case I'd call it MIN to make it look less like a regular C function), or
write a regular or inline function for min, or
use gcc extensions to make the macro safe.
If you do any of these things, you eliminate the global variables entirely and hence any questions about reentrancy and multi-threading.
Option 1:
#define MIN(x, y) ((x) < (y) ? (x) : (y))
Option 2:
int min(int x, int y) { return x < y ? x : y; }
Of course, this has the problem that you can't use this version of min for different argument types.
Option 3:
#define min(x, y) ({ \
typeof(x) x__ = (x); \
typeof(y) y__ = (y); \
x__ < y__ ? x__ : y__; \
})
The globals are not thread-safe. You can make them so by using thread-local storage.
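A rough sketch of the thread-local variant, using the C11 _Thread_local keyword (GCC's older __thread is similar); note it still doesn't help with nested calls like min(min(a,b),c) within one thread:
/* sketch: one private copy of the scratch variables per thread */
_Thread_local int x_GLOB;
_Thread_local int y_GLOB;

#define min(x, y) ((x_GLOB = (x)) < (y_GLOB = (y)) ? x_GLOB : y_GLOB)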
Personally, I'd use an inline function instead of a macro to avoid the problem altogether!
Personally I don't think this code is safe at all, even if we don't consider multithreaded scenarios.
Take the following evaluation:
int a = min(min(1, 2), min(3, 4));
How will this expand properly and evaluate?
Evaluate min(1, 2) and assign value to x_GLOB (outer macro):
x_GLOB = 1
y_GLOB = 2
evaluate min, result is 1, assign to x_GLOB (for outer macro)
Evaluate min(3, 4) and assign value to y_GLOB (outer macro):
x_GLOB = 3 (uh-oh, 1 from the first expansion is now clobbered)
y_GLOB = 4
evaluate min, result is 3, assign to y_GLOB (for outer macro)
Evaluate min(a, b) where a is min(1, 2) and b is min(3, 4)
evaluate min of x_GLOB (which is 3) and y_GLOB (which is 3)
result of total evaluation is 3
Or am I missing something here?
Generally speaking, anytime you use global variables, your subroutine (or macro) is not re-entrant.
Anytime two threads access the same data or shared resource, you have to use synchronization primitives (mutexes, semaphores, etc) to coordinate access to the "critical section".
See Wikipedia reentrant page
It's not thread-safe. To demonstrate this, suppose that thread A calls min(1,2) and threadB calls min(3,4). Suppose that thread A is running first, and is interrupted by the scheduler right at the question mark of the macro, because its time slice has expired.
Then x_GLOB is 1, and y_GLOB is 2.
Now suppose threadB runs for a while, and completes the contents of the macro:
x_GLOB is 3, y_GLOB is 4.
Now suppose threadA resumes. It will return x_GLOB (3) instead of the right answer (1).
Obviously I've simplified things a bit - being interrupted "at the question mark" is pretty handwavy, and threadA doesn't necessarily return 3, on some compilers with some optimisation levels it might keep the values in registers and not read back from x_GLOB at all. So the code emitted might happen to work with that compiler and options. But hopefully my example makes the point - you can't be sure.
I recommend you write a static inline function. If you're trying to be very portable, do it like this:
#define STATIC_INLINE static
STATIC_INLINE int min_func(int x, int y) { return x < y ? x : y; }
#define min(a,b) min_func((a),(b))
Then if you have a performance problem on some particular platform, and it turns out to be because that compiler fails to inline it, you can worry about whether on that compiler you need to define STATIC_INLINE differently (to static inline in C99, or static __inline for Microsoft, or whatever), or perhaps even implement the min macro differently. That level of optimisation ("does it get inlined?") isn't something you can do much about in portable code.
That code is not safe for multithreading.
Just avoid macros, unless indispensable ("Efficient C++" books have examples of those cases).