I want to concatenate a number of macros in order to pass them as parameters in a struct array. To be more specific, I have this struct:
static struct
{
unsigned int num_irqs;
volatile __int_handler *_int_line_handler_table;
}_int_handler_table[INTR_GROUPS];
and I want to pass, as the num_irqs parameter, a series of macros:
AVR32_INTC_NUM_IRQS_PER_GRP1
AVR32_INTC_NUM_IRQS_PER_GRP2
...
First I thought of using this code:
for (int i=0;i<INTR_GROUPS;i++)
{
_int_handler_table[i].num_irqs = TPASTE2(AVR32_INTC_NUM_IRQS_PER_GRP,i);
}
but it pastes the literal character i rather than the value of i on each iteration. I also saw that there is an MREPEAT macro defined in preprocessor.h, but I do not understand from the examples how it is used.
Can anyone explain the use of MREPEAT, or another way to do the above?
Keep in mind that the preprocessor (which handles macros) runs before the compiler. It is meant to produce the final source code that is submitted to the compiler.
Hence, it has no idea what value a variable holds. For the preprocessor, i means the token i.
What you are trying to do is a bit complex, especially keeping in mind that the preprocessor cannot generate preprocessor directives.
But it can generate constants.
Speaking of which, for your use case, I would prefer a table of constants, such as:
const int AVR32_INTC_NUM_IRQS_PER_GRP[] = { 1, 2, 3, 4, 5 };
for (int i=0;i<INTR_GROUPS;i++)
{
_int_handler_table[i].num_irqs = AVR32_INTC_NUM_IRQS_PER_GRP[i];
}
C doesn't work like that.
Macros are just text replacement, which happens at compile time. You can't write code that constructs a macro name at run time; that doesn't make sense, because the compiler is no longer around when your code runs.
You probably should just do it manually, unless the amount of code is very large (in which case code-generation is a common solution).
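For example, here is a minimal sketch of the manual approach, assuming INTR_GROUPS is 4 and the macros are named as in the question (adjust the list to however many groups your part actually has); the loop goes inside whatever init function you already have:
static const unsigned int num_irqs_per_grp[INTR_GROUPS] = {
    AVR32_INTC_NUM_IRQS_PER_GRP1,
    AVR32_INTC_NUM_IRQS_PER_GRP2,
    AVR32_INTC_NUM_IRQS_PER_GRP3,
    AVR32_INTC_NUM_IRQS_PER_GRP4,
};

for (int i = 0; i < INTR_GROUPS; i++)
{
    _int_handler_table[i].num_irqs = num_irqs_per_grp[i];  /* each value was fixed at compile time */
}
The preprocessor expands each macro into the table initializer, and the loop then reads ordinary constants at run time.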
Which one of the statements below is better to use in C?
static const int var = 5;
or
#define var 5
or
enum { var = 5 };
It depends on what you need the value for. To compare the three alternatives side by side:
static const int var = 5;
#define var 5
enum { var = 5 };
Ignoring issues about the choice of name, then:
If you need to pass a pointer around, you must use (1).
Since (2) is apparently an option, you don't need to pass pointers around.
Both (1) and (3) have a symbol in the debugger's symbol table - that makes debugging easier. It is more likely that (2) will not have a symbol, leaving you wondering what it is.
(1) cannot be used as a dimension for arrays at global scope; both (2) and (3) can.
(1) cannot be used as a dimension for static arrays at function scope; both (2) and (3) can.
Under C99, all of these can be used for local arrays. Technically, using (1) would imply the use of a VLA (variable-length array), though the dimension referenced by 'var' would of course be fixed at size 5.
(1) cannot be used in places like switch statements; both (2) and (3) can.
(1) cannot be used to initialize static variables; both (2) and (3) can.
(2) can change code that you didn't want changed because it is used by the preprocessor; both (1) and (3) will not have unexpected side-effects like that.
You can detect whether (2) has been set in the preprocessor; neither (1) nor (3) allows that.
So, in most contexts, prefer the 'enum' over the alternatives. Otherwise, the first and last bullet points are likely to be the controlling factors — and you have to think harder if you need to satisfy both at once.
If you were asking about C++, then you'd use option (1) — the static const — every time.
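A short, self-contained C sketch (identifiers invented for illustration) tying several of these bullet points together:
#define VAR_MACRO 5               /* (2) */
enum { VAR_ENUM = 5 };            /* (3) */
static const int var_const = 5;   /* (1) */

static int a[VAR_MACRO];          /* OK: macro expands to a constant */
static int b[VAR_ENUM];           /* OK: enumerators are constant expressions */
/* static int c[var_const]; */    /* error in C: not a constant expression at file scope */

static const int *where = &var_const;   /* only (1) lets you take an address */

void classify(int x)
{
    switch (x)
    {
    case VAR_ENUM:                /* OK; "case var_const:" would not compile in C */
        break;
    default:
        break;
    }
}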
Generally speaking:
static const
Because it respects scope and is type-safe.
The only caveat I can see: you may want the value to be definable on the command line. Even then, there is an alternative:
#ifdef VAR // Very bad name, not long enough, too general, etc..
static int const var = VAR;
#else
static int const var = 5; // default value
#endif
Whenever possible, instead of macros / ellipsis, use a type-safe alternative.
If you really NEED to go with a macro (for example, if you want __FILE__ or __LINE__), then you'd better name your macro VERY carefully. In its naming convention, Boost recommends all upper-case names beginning with the name of the project (here BOOST_); if you peruse the library you will notice this is (generally) followed by the name of the particular area (library), and then a meaningful name.
It generally makes for lengthy names :)
In C, specifically? In C the correct answer is: use #define (or, if appropriate, enum).
While it is beneficial to have the scoping and typing properties of a const object, in reality const objects in C (as opposed to C++) are not true constants and are therefore useless in many practical cases.
So, in C the choice should be determined by how you plan to use your constant. For example, you can't use a const int object as a case label (while a macro will work). You can't use a const int object as a bit-field width (while a macro will work). In C89/90 you can't use a const object to specify an array size (while a macro will work). Even in C99 you can't use a const object to specify an array size when you need a non-VLA array.
If this is important for you then it will determine your choice. Most of the time, you'll have no choice but to use #define in C. And don't forget another alternative, that produces true constants in C - enum.
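For instance, here is a minimal sketch of the bit-field restriction mentioned above (the identifiers are invented for illustration):
#define WIDTH_MACRO 3
enum { WIDTH_ENUM = 3 };
static const int width_const = 3;

struct flags {
    unsigned a : WIDTH_MACRO;        /* OK: the macro expands to the constant 3 */
    unsigned b : WIDTH_ENUM;         /* OK: enumerators are integer constant expressions */
    /* unsigned c : width_const; */  /* error in C: a const int is not a constant expression */
};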
In C++ const objects are true constants, so in C++ it is almost always better to prefer the const variant (no need for explicit static in C++ though).
The difference between static const and #define is that the former uses memory for storage and the latter does not. Secondly, you cannot take the address of a #define, whereas you can take the address of a static const. Which one to use really depends on the circumstances: each is at its best in different situations. Please don't assume that one is always better than the other... :-)
If that had been the case, Dennis Ritchie would have kept only the best one... hahaha... :-)
In C, #define is much more popular. You can use such values for declaring array sizes, for example:
#define MAXLEN 5
void foo(void) {
int bar[MAXLEN];
}
ANSI C doesn't allow you to use static consts in this context as far as I know. In C++ you should avoid macros in these cases. You can write
const int maxlen = 5;
void foo() {
int bar[maxlen];
}
and even leave out static because internal linkage is implied by const already [in C++ only].
Another drawback of const in C is that you can't use the value in initializing another const.
static int const NUMBER_OF_FINGERS_PER_HAND = 5;
static int const NUMBER_OF_HANDS = 2;
// initializer element is not constant, this does not work.
static int const NUMBER_OF_FINGERS = NUMBER_OF_FINGERS_PER_HAND
* NUMBER_OF_HANDS;
Even this does not work with a const since the compiler does not see it as a constant:
static uint8_t const ARRAY_SIZE = 16;
static int8_t const lookup_table[ARRAY_SIZE] = {
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; // ARRAY_SIZE not a constant!
I'd be happy to use a typed const in these cases, but C simply doesn't allow it.
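As a follow-up, here is a sketch of the enum workaround for both limitations above (same names as before; <stdint.h> is assumed for int8_t):
#include <stdint.h>

enum
{
    NUMBER_OF_FINGERS_PER_HAND = 5,
    NUMBER_OF_HANDS = 2,
    NUMBER_OF_FINGERS = NUMBER_OF_FINGERS_PER_HAND * NUMBER_OF_HANDS  /* constant expression: fine */
};

static int8_t const lookup_table[NUMBER_OF_FINGERS] = {
    1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };   /* NUMBER_OF_FINGERS is now usable as an array size */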
If you can get away with it, static const has a lot of advantages. It obeys the normal scope principles, is visible in a debugger, and generally obeys the rules that variables obey.
However, at least in the original C standard, it isn't actually a constant. If you use #define var 5, you can write int foo[var]; as a declaration, but you can't do that (except as a compiler extension) with static const int var = 5;. This is not the case in C++, where the static const version can be used anywhere the #define version can, and I believe this is also the case with C99.
However, never name a #define constant with a lowercase name. It will override any possible use of that name until the end of the translation unit. Macro constants should be in what is effectively their own namespace, which is traditionally all capital letters, perhaps with a prefix.
#define var 5 will cause you trouble if you have things like mystruct.var.
For example,
struct mystruct {
int var;
};
#define var 5
int main() {
struct mystruct foo;
foo.var = 1;
return 0;
}
The preprocessor will replace it and the code won't compile. For this reason, traditional coding style suggest all constant #defines uses capital letters to avoid conflict.
It is ALWAYS preferable to use const instead of #define. That's because const is handled by the compiler and #define by the preprocessor. It is as if the #define itself were not part of the code (roughly speaking).
Example:
#define PI 3.1416
The symbolic name PI may never be seen by compilers; it may be removed by the preprocessor before the source code even gets to a compiler. As a result, the name PI may not get entered into the symbol table. This can be confusing if you get an error during compilation involving the use of the constant, because the error message may refer to 3.1416, not PI. If PI were defined in a header file you didn’t write, you’d have no idea where that 3.1416 came from.
This problem can also crop up in a symbolic debugger, because, again, the name you’re programming with may not be in the symbol table.
Solution:
const double PI = 3.1416; //or static const...
I wrote a quick test program to demonstrate one difference:
#include <stdio.h>
enum {ENUM_DEFINED=16};
enum {ENUM_DEFINED=32};
#define DEFINED_DEFINED 16
#define DEFINED_DEFINED 32
int main(int argc, char *argv[]) {
printf("%d, %d\n", DEFINED_DEFINED, ENUM_DEFINED);
return(0);
}
This compiles with these errors and warnings:
main.c:6:7: error: redefinition of enumerator 'ENUM_DEFINED'
enum {ENUM_DEFINED=32};
^
main.c:5:7: note: previous definition is here
enum {ENUM_DEFINED=16};
^
main.c:9:9: warning: 'DEFINED_DEFINED' macro redefined [-Wmacro-redefined]
#define DEFINED_DEFINED 32
^
main.c:8:9: note: previous definition is here
#define DEFINED_DEFINED 16
^
Note that the enum gives an error where the define gives only a warning.
The definition
const int const_value = 5;
does not always define a constant value. Some compilers (for example tcc 0.9.26) just allocate memory identified by the name "const_value". Using the identifier "const_value" you cannot modify this memory, but you could still modify it through another identifier (this is undefined behavior, but it may go unnoticed):
const int const_value = 5;
int *mutable_value = (int*) &const_value;
*mutable_value = 3;
printf("%i", const_value); // The output may be 5 or 3, depending on the compiler.
This means the definition
#define CONST_VALUE 5
is the only way to define a constant value which cannot be modified by any means.
Although the question was about integers, it's worth noting that #define and enums are useless if you need a constant structure or string. These are both usually passed to functions as pointers. (With strings it's required; with structures it's much more efficient.)
As for integers, if you're in an embedded environment with very limited memory, you might need to worry about where the constant is stored and how accesses to it are compiled. The compiler might add two consts at run time, but add two #defines at compile time. A #define constant may be converted into one or more MOV [immediate] instructions, which means the constant is effectively stored in program memory. A const constant will be stored in the .const section in data memory. In systems with a Harvard architecture, there could be differences in performance and memory usage, although they'd likely be small. They might matter for hard-core optimization of inner loops.
I don't think there's an answer for "which is always best", but, as Matthieu said,
static const
is type safe. My biggest pet peeve with #define, though, is when debugging in Visual Studio you cannot watch the variable. It gives an error that the symbol cannot be found.
Incidentally, an alternative to #define, which provides proper scoping but behaves like a "real" constant, is "enum". For example:
enum {number_ten = 10};
In many cases, it's useful to define enumerated types and create variables of those types; if that is done, debuggers may be able to display variables according to their enumeration name.
One important caveat with doing that, however: in C++, enumerated types have limited compatibility with integers. For example, the result of arithmetic on them is an int, and it cannot be assigned back to an enum variable without a cast. I find that to be a curious default behavior for enums; while it would have been nice to have a "strict enum" type, given the desire to have C++ generally compatible with C, I would think the default behavior of an "enum" type should be interchangeable with integers.
A simple difference:
At pre-processing time, the constant is replaced with its value.
So you cannot apply the address-of operator (&) to a define, but you can apply it to a variable.
As you might suppose, a define can be marginally faster than a static const, since no memory access is needed for it (although in practice an optimizing compiler usually removes the difference).
For example, having:
#define mymax 100
you cannot do printf("address of constant is %p", &mymax);.
But having
const int mymax_var = 100;
you can do printf("address of constant is %p", &mymax_var);.
To be more clear, the define is replaced by its value at the pre-processing stage, so we do not have any variable stored in the program. We have just the code from the text segment of the program where the define was used.
However, for static const we have a variable that is allocated somewhere. For gcc, static const variables are allocated in the text segment of the program.
We looked at the produced assembler code on the MBF16X... Both variants result in the same code for arithmetic operations (ADD Immediate, for example).
So const int is preferred for the type checking, while #define is old style. It may be compiler-specific, though, so check the assembler code your compiler produces.
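If you want to repeat that comparison yourself with a gcc-style toolchain, one way (file names, function, and constant are invented for illustration) is:
/* with_define.c */
#define LIMIT 100
int add_limit(int x) { return x + LIMIT; }

/* with_const.c */
static const int limit = 100;
int add_limit(int x) { return x + limit; }
Compile both to assembly and compare the two .s files:
gcc -O2 -S with_define.c with_const.c
diff with_define.s with_const.s
With optimization enabled, both variants typically reduce to the same add-immediate instruction.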
I am not sure if I am right, but in my opinion using a #defined value is faster than reading a normally declared variable (or const value).
That is because when the program is running and needs a normally declared variable, it has to fetch that variable from its place in memory.
In contrast, when it uses a #defined value, the program does not need to go to any allocated memory; it just uses the value directly. If you write #define myValue 7 and the program uses myValue, it behaves exactly the same as if it just used 7.
Sometimes, in C, you do this:
typedef struct foo {
unsigned int some_data;
} foo; /* btw, foo_t is discouraged */
To use this new type in an OO-sort-of-way, you might have alloc/free pairs like these:
foo *foo_alloc(/* various "constructor" params */);
void foo_free(foo *bar);
Or, alternatively init/clear pairs (perhaps returning error-codes):
int foo_init(foo *bar, /* and various "constructor" params */);
int foo_clear(foo *bar);
I have seen the following idiom used, in particular in the MPFR library:
struct foo {
unsigned int some_data;
};
typedef struct foo foo[1]; /* <- notice, 1-element array */
typedef struct foo *foo_ptr; /* let's create a ptr-type */
The alloc/free and init/clear pairs now read:
foo_ptr foo_alloc(/* various "constructor" params */);
void foo_free(foo_ptr bar);
int foo_init(foo_ptr bar, /* and various "constructor" params */);
int foo_clear(foo_ptr bar);
Now you can use it all like this (for instance, the init/clear pairs):
int main()
{
foo bar; /* constructed but NOT initialized yet */
foo_init(bar); /* initialize bar object, alloc stuff on heap, etc. */
/* use bar */
foo_clear(bar); /* clear bar object, free stuff on heap, etc. */
}
Remarks: The init/clear pair seems to allow for a more generic way of initializing and clearing out objects. Compared to the alloc/free pair, the init/clear pair requires that a "shallow" object has already been constructed. The "deep" construction is done using init.
Question: Are there any non-obvious pitfalls of the 1-element array "type-idiom"?
This is very clever (but see below).
It encourages the misleading idea that C function arguments can be passed by reference.
If I see this in a C program:
foo bar;
foo_init(bar);
I know that the call to foo_init does not modify the value of bar. I also know that the code passes the value of bar to a function when it hasn't initialized it, which is very probably undefined behavior.
Unless I happen to know that foo is a typedef for an array type. Then I suddenly realize that foo_init(bar) is not passing the value of bar, but the address of its first element. And now every time I see something that refers to type foo, or to an object of type foo, I have to think about how foo was defined as a typedef for a single-element array before I can understand the code.
It is an attempt to make C look like something it's not, not unlike things like:
#define BEGIN {
#define END }
and so forth. And it doesn't result in code that's easier to understand because it uses features that C doesn't support directly. It results in code that's harder to understand (especially to readers who know C well), because you have to understand both the customized declarations and the underlying C semantics that make the whole thing work.
If you want to pass pointers around, just pass pointers around, and do it explicitly. See, for example, the use of FILE* in the various standard functions defined in <stdio.h>. There is no attempt to hide pointers behind macros or typedefs, and C programmers have been using that interface for decades.
If you want to write code that looks like it's passing arguments by reference, define some function-like macros, and give them all-caps names so knowledgeable readers will know that something odd is going on.
I said above that this is "clever". I'm reminded of something I did when I was first learning the C language:
#define EVER ;;
which let me write an infinite loop as:
for (EVER) {
/* ... */
}
At the time, I thought it was clever.
I still think it's clever. I just no longer think that's a good thing.
The only advantage to this method is nicer looking code and easier typing. It allows the user to create the struct on the stack without dynamic allocation like so:
foo bar;
However, the structure can still be passed to functions that require a pointer type, without requiring the user to convert to a pointer with &bar every time.
foo_init(bar);
Without the 1-element array, it would require either an alloc function as you mentioned, or using & at every call site:
foo_init(&bar);
The only pitfall I can think of is the normal concerns associated with direct stack allocation. If this is in a library used by other code, updates to the struct may break client code in the future, which would not happen when using an alloc/free pair.
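For reference, here is a minimal compilable sketch of the idiom from the question, with trivial bodies for foo_init/foo_clear (invented for illustration), just to show that array-to-pointer decay is what lets foo_init modify bar:
#include <stdio.h>

struct foo {
    unsigned int some_data;
};
typedef struct foo foo[1];      /* 1-element array type, as in the question */
typedef struct foo *foo_ptr;

static int foo_init(foo_ptr bar)  { bar->some_data = 42; return 0; }
static int foo_clear(foo_ptr bar) { bar->some_data = 0;  return 0; }

int main(void)
{
    foo bar;                          /* an array of one struct, on the stack */
    foo_init(bar);                    /* bar decays to &bar[0], so foo_init can modify it */
    printf("%u\n", bar->some_data);   /* prints 42; bar decays to a pointer here too */
    foo_clear(bar);
    return 0;
}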
Is there any free library that lets a user easily build a C mathematical expression at run time, which can then be used like any other function? I mean a C expression/function which could be as quick as an 'inline' mathematical expression and could be used many times in the program.
I think it can be done in C somehow, but can anybody tell me whether it is also feasible if it has to be a CUDA device function?
There are a few options. I assume you want something that the user can "call" several times, like this:
void *s = make_func("2 * x + 7");
...
printf("%lf\n", call_func(s, 3.0)); // prints 13
...
printf("%lf\n", call_func(s, 5.0)); // prints 17
...
free_func(s);
One option is to implement this as a recursive structure holding function pointers and constants. Something like:
enum item_type { VAR, CONST, FUNC };
struct var {
    enum item_type type;
    int id;
};
struct constant {
    enum item_type type;
    double value;
};
struct func {
    enum item_type type;
    double (*func)(double, double);
    enum item_type *a, *b;   /* each points at the leading enum of an operand node */
};
Then make_func would parse the above string into something like:
(struct func *){ FUNC, &plus,
(struct func *){ FUNC, &times,
(struct constant *){ CONST, 2 },
(struct var *){ VAR, 'x' } }
(struct constant *){ CONST, 7 } }
If you can understand that: the enum item_type pointers in struct func are used to point to the next node in the tree (or rather, to the first element of that node, which is the enum), and the enum is what our code uses to find out what kind of item it is looking at. Then, when we use the call_func(void *, ...) function, it counts how many variables there are (this is how many extra arguments call_func should have been passed), replaces the variables with the values we've called it with, and then does the calculations.
The other option (which will probably be considerably faster and easier to extend) is to use something like libjit to do most of that work for you. I've never used it, but a JIT compiler gives you some basic building blocks (like add, multiply, etc. "instructions") that you can string together as you need them, and it compiles them down to actual assembly code (so no going through a constructed syntax tree calling function pointers like we had to before) so that when you call it it's as fast and dynamic as possible.
I don't know libjit's API, but it looks easily capable of doing what you seem to need. The make_func and free_func could all be pretty much the same as they are above (you might have to alter your calls to call_func) and would basically construct, use, and destroy a JIT object based on how it parses the user's string. The same as above, really, but you wouldn't need to define the syntax tree, data types, etc. yourself.
Hope that is somewhat helpful.
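To make the first option concrete, here is a simplified, self-contained sketch (not the answer's exact layout): it uses a single tagged node type instead of three separate structs, skips the string-parsing step (make_func) entirely, and just hand-builds the tree for 2 * x + 7 and evaluates it:
#include <stdio.h>

enum item_type { VAR, CONST, FUNC };

struct node {
    enum item_type type;
    double value;                    /* used when type == CONST */
    double (*func)(double, double);  /* used when type == FUNC */
    struct node *a, *b;              /* operands, used when type == FUNC */
};

static double plus(double l, double r)  { return l + r; }
static double times(double l, double r) { return l * r; }

static double eval(const struct node *n, double x)
{
    switch (n->type) {
    case VAR:   return x;             /* a single variable keeps the sketch short */
    case CONST: return n->value;
    case FUNC:  return n->func(eval(n->a, x), eval(n->b, x));
    }
    return 0.0;
}

int main(void)
{
    /* hand-built tree for 2 * x + 7; a real make_func would build this by parsing the string */
    struct node two  = { CONST, 2.0, NULL, NULL, NULL };
    struct node xvar = { VAR,   0.0, NULL, NULL, NULL };
    struct node mul  = { FUNC,  0.0, times, &two, &xvar };
    struct node sev  = { CONST, 7.0, NULL, NULL, NULL };
    struct node add  = { FUNC,  0.0, plus,  &mul, &sev };

    printf("%f\n", eval(&add, 3.0));  /* prints 13.000000 */
    printf("%f\n", eval(&add, 5.0));  /* prints 17.000000 */
    return 0;
}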
libtcc (from TCC) can be used as a very small and fast JIT; see libtcc_test.c for sample usage.
I was just curious to know if it is possible to have a pointer referring to a #define constant. If yes, how do I do it?
The #define directive is a directive to the preprocessor, meaning that it is invoked by the preprocessor before anything is even compiled.
Therefore, if you type:
#define NUMBER 100
And then later you type:
int x = NUMBER;
What your compiler actually sees is simply:
int x = 100;
It's basically as if you had opened up your source code in a word processor and did a find/replace to replace each occurrence of "NUMBER" with "100". So your compiler has no idea about the existence of NUMBER. Only the pre-compilation preprocessor knows what NUMBER means.
So, if you try to take the address of NUMBER, the compiler will think you are trying to take the address of an integer literal constant, which is not valid.
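To make that concrete (using NUMBER as defined above):
#define NUMBER 100

int x = NUMBER;          /* fine: the compiler sees int x = 100; */
/* int *p = &NUMBER; */  /* would become int *p = &100; and fail to compile */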
No, because #define is for text replacement, so it's not a variable you can get a pointer to -- what you're seeing is actually replaced by the definition of the #define before the code is passed to the compiler, so there's nothing to take the address of. If you need the address of a constant, define a const variable instead (C++).
It's generally considered good practice to use constants instead of macros, because of the fact that they actually represent variables, with their own scoping rules and data types. Macros are global and typeless, and in a large program can easily confuse the reader (since the reader isn't seeing what's actually there).
#define defines a macro. A macro just causes one sequence of tokens to be replaced by a different sequence of tokens. Pointers and macros are totally distinct things.
If by "#define constant" you mean a macro that expands to a numeric value, the answer is still no, because anywhere the macro is used it is just replaced with that value. There's no way to get a pointer, for example, to the number 42.
No, it's not possible in C/C++.
You can use the #define directive to give a meaningful name to a constant in your program.
It can be used in two forms. Please see this link:
http://msdn.microsoft.com/en-us/library/teas0593%28VS.80%29.aspx
The #define directive can contain an object-like definition or a function-like definition.
I'm sorry I'm unable to post one more link; please also see the IBM documentation on #define.
You can get the full details from those two sources.
There is a way to overcome this issue:
#define ROW 2
void foo()
{
int tmpInt = ROW;
int *rowPointer = &tmpInt;
// ...
}
Or, if you know its type, you can even do this:
void getDefinePointer(int * pointer)
{
*pointer = ROW;
}
And use it:
int rowPointer2 = 0;
getDefinePointer(&rowPointer2);
printf("ROW==%d\n", rowPointer2);
and you have a pointer to (a copy of) the #define constant's value.