I'm reviewing some code and I stumbled across this:
In a header file we have this MAGIC_ADDRESS defined
#define ANOTHER_ADDRESS ((uint8_t*)0x40024000)
#define MAGIC_ADDRESS (ANOTHER_ADDRESS + 4u)
And then peppered throughout the code in various files we have things like this:
*(uint32_t*)MAGIC_ADDRESS = 0;
and
*(uint32_t*)MAGIC_ADDRESS = SOME_OTHER_DEFINE;
This compiles, apparently works, and it throws no linter errors. MAGIC_ADDRESS = 0; without the cast does not compile, as I would expect.
So my questions are:
Why in the world would we ever want to do this rather than just making a uint32_t in the first place?
How does this actually work? I thought preprocessor defines were untouchable, how are we managing to cast one?
Why in the world would we ever want to do this rather than just making a uint32_t in the first place?
That's a fair question. One possibility is that ANOTHER_ADDRESS is used as a base address for more than one kind of data, but the code fragments presented do not show any reason why ANOTHER_ADDRESS should not be defined to expand to an expression of type uint32_t *. Note, however, that if that change were made then the definition of MAGIC_ADDRESS would need to be changed to (ANOTHER_ADDRESS + 1u).
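For illustration, here is a minimal sketch of that alternative; the _W names are mine, just to keep it distinct from the original definitions. With a uint32_t * base, the offset is counted in 4-byte words, so the +4u becomes +1u to reach the same address:
#define ANOTHER_ADDRESS_W ((uint32_t*)0x40024000)
#define MAGIC_ADDRESS_W (ANOTHER_ADDRESS_W + 1u)
/* *MAGIC_ADDRESS_W = 0; would then need no cast: it is already a 4-byte access */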
How does this actually work? I thought preprocessor defines were untouchable, how are we managing to cast one?
Where an in-scope macro identifier appears in C source code, the macro's replacement text is substituted. Simplifying a bit, if the replacement text itself contains macro identifiers, those are then replaced with their replacement text, and so on. Nowhere in your code fragments is a macro being cast, per se, but the fully-expanded result expresses some casts.
For example, this ...
*(uint32_t*)MAGIC_ADDRESS = 0;
... expands to ...
*(uint32_t*)(ANOTHER_ADDRESS + 4u) = 0;
... and then on to ...
*(uint32_t*)(((uint8_t*)0x40024000) + 4u) = 0;
There are no casts of macros there, but there are (valid) casts of the macros' replacement text.
It's not the cast that allows the assignment to work, it's the * dereferencing operator. The macro expands to a pointer constant, and you can't reassign a constant. But since it's a pointer you can assign to the memory it points to. So if you wrote
*MAGIC_ADDRESS = 0;
you wouldn't get an error.
The cast is necessary to assign to a 4-byte field at that address, rather than just a single byte, since the macro expands to a uint8_t*. Casting it to uint32_t* makes it a 4-byte assignment.
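As a small sketch of the difference, using the same definitions as in the header (demo is just a placeholder function):
#include <stdint.h>
#define ANOTHER_ADDRESS ((uint8_t*)0x40024000)
#define MAGIC_ADDRESS (ANOTHER_ADDRESS + 4u)
void demo(void)
{
    *MAGIC_ADDRESS = 0;              /* 1-byte store: the macro expands to a uint8_t* */
    *(uint32_t*)MAGIC_ADDRESS = 0;   /* 4-byte store after the cast */
}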
#define ANOTHER_ADDRESS ((uint8_t*)0x40024000)
#define MAGIC_ADDRESS (ANOTHER_ADDRESS + 4u)
And then peppered throughout the code in various files we have things like this:
*(uint32_t*)MAGIC_ADDRESS = 0;
That's the problem - you don't want anything repetitive peppered throughout. Instead, this is what more-or-less idiomatic embedded C code would look like:
// Portable to compilers without void* arithmetic extension
#define BASE_ADDRESS ((uint8_t*)0x40024000)
#define REGISTER1 (*(uint32_t*)(BASE_ADDRESS + 4u))
You can then write REGISTER1 = 42 or if (REGISTER1 != 42) etc. As you may imagine, this is normally used for memory-mapped peripheral control registers.
If you're using gcc or clang, there's another layer of type safety available as an extension: you don't really want the compiler to allow *BASE_ADDRESS to compile, since presumably you only want to access registers - the *BASE_ADDRESS expression shouldn't pass a code review. And thus:
// gcc, clang, icc, and many others but not MSVC
#define BASE_ADDRESS ((void*)0x40024000)
#define REGISTER1 (*(uint32_t*)(BASE_ADDRESS + 4u))
Arithmetic on void* is a gcc extension adopted by most compilers that don't come from Microsoft, and it's handy: the *BASE_ADDRESS expression won't compile, and that's a good thing.
I imagine that BASE_ADDRESS is the address of the battery-backed RAM on an STM32 MCU, in which case the "REGISTER" interpretation is incorrect: all you want is to persist some application data, and since you're using C, not assembly language, there's this handy thing we call structures - absolutely use a structure instead of this ugly hack. The things being stored in that non-volatile area aren't registers, they are just fields in a structure, and the structure itself is stored in a non-volatile fashion:
#define BKPSRAM_BASE_ ((void*)0x40024000)
#define nvstate (*(NVState*)BKPSRAM_BASE_)
enum NVLayout { NVVER_1 = 1, NVVER_2 = 2 };
typedef struct {
// Note: This structure is persisted in NVRAM.
// Do not reorder the fields.
enum NVLayout layout;
// NVVER_1 fields
uint32_t value1;
uint32_t value2;
...
/* sometime later after a release */
// NVVER_2 fields
uint32_t valueA;
uint32_t valueB;
} NVState;
Use:
if (nvstate.layout >= NVVER_1) {
nvstate.value1 = ...;
if (nvstate.value2 != 42) ...
}
And here we come to the crux of the problem: your code review was focused on the minutiae, but you should also have shared the big picture. If my big-picture guess is correct - that it's all about sticking some data in battery-backed RAM - then an actual data structure should be used, not macro hackery and manual offset management. Yuck.
And yes, you'll need that layout field for forward compatibility unless the entire NVRAM area is pre-initialized to zeroes, and you're OK with zeroes as default values.
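A minimal sketch of what that startup check could look like, assuming the NVState/nvstate definitions above (nv_init is a made-up name): on boot, detect an older layout and default-initialize only the fields added in NVVER_2.
void nv_init(void)
{
    if (nvstate.layout < NVVER_2)
    {
        nvstate.valueA = 0;       /* NVVER_2 fields get explicit defaults */
        nvstate.valueB = 0;
        nvstate.layout = NVVER_2;
    }
}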
This approach easily allows you to copy the NVRAM state, e.g. if you wanted to send it over the wire for diagnostic purposes - you don't have to worry about how much data is there, just use sizeof(NVState) for passing it to functions such as fwrite, and you can even use a working copy of that NV data - all without a single memcpy:
NVState wkstate = nvstate;
/* user manipulates the state here */
if (OK_pressed)
nvstate = wkstate;
else if (Cancel_pressed)
wkstate = nvstate;
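And a sketch of the diagnostic dump mentioned above, again assuming the definitions from earlier (nv_dump is a made-up name):
#include <stdio.h>
void nv_dump(FILE *out)
{
    NVState copy = nvstate;                  /* plain struct copy, no memcpy */
    fwrite(&copy, sizeof(NVState), 1, out);  /* the size comes from the type */
}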
If you need to assign values to a specific place in memory, using macros allows you to do so in a way that is relatively easy to read (and if you need to use another address later, you just change the macro definition).
The macro is translated by the preprocessor to a value. When you then dereference it, you get access to the memory, which you can read or write. This has nothing to do with the name itself, which is just a label used by the preprocessor.
Both definitions are wrong, I'm afraid (or at least not completely correct).
They should be defined as pointers to volatile values if the pointers reference hardware registers:
#define ANOTHER_POINTER ((volatile uint8_t*)0x40024000)
#define MAGIC_POINTER (ANOTHER_POINTER + 4u)
It was defined as a uint8_t * pointer probably because the author wanted pointer arithmetic to be done at the byte level.
Related
I was going through a programming manual for one of the microcontrollers I came across and it had the preprocessor definition as follows:
#define SCICTL1A (volatile unsigned int *)0x7051
and a statement in the source file as follows:
*SCICTL1A = 0X0003;
My question is, what is the pointer variable here and what is it pointing to, (I have never come across pointer definitions in preprocessor directives before since I am a beginner to C programming) and what does the assignment statement do?
There are no variables here. The macro expands as text in place, so the 2nd excerpt becomes
*(volatile unsigned int *)0x7051 = 0X0003;
It casts the unsigned integer 0x7051 into a pointer to volatile unsigned integer, then dereferences it in the assignment. Essentially it stores 0x0003 into the unsigned-integer-wide piece of memory that starts at address 0x7051 (or however the integer-to-pointer conversion happens to work on your target platform).
volatile is required so that the compiler does not just optimize the assignment out - it must be strictly evaluated and considered a side effect (see as-if rule).
As for the actual reason why this is done - it is probably some memory-mapped device, check the microcontroller datasheets for more information.
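To illustrate why the volatile matters, here is a small sketch (the function name and the first value written are made up): without volatile, the compiler could fold the two stores into one under the as-if rule; with volatile, both accesses are performed.
#define SCICTL1A (volatile unsigned int *)0x7051
void sci_reset(void)
{
    *SCICTL1A = 0x0000;  /* kept: each volatile access counts as a side effect */
    *SCICTL1A = 0x0003;
}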
There is no variable there. Only the pointer.
The *SCICTL1A = 0X0003; is replaced by the preprocessor with:
*(volatile unsigned int *)0x7051 = 0x0003;
You just write to the memory location at address 0x7051. What that means depends on your implementation.
I'm assuming you're using a TMS320F2803x Piccolo microcontroller: http://www.ti.com/lit/ds/sprs584l/sprs584l.pdf
According to this document, address 0x7051 is Control Register 1 for the Serial Communications Interface (SCI) Module.
According to this document, https://www.swarthmore.edu/NatSci/echeeve1/Ref/embedRes/DL/28069TechRefManual_spruh18d.pdf, you're able to do the following with this register:
SCICTL1 controls the receiver/transmitter enable, TXWAKE and SLEEP
functions, and the SCI software reset.
I have read a few questions on the topic:
Should I use #define, enum or const?
How does Enum allocate Memory on C?
What makes a better constant in C, a macro or an enum?
What is the size of an enum in C?
"static const" vs "#define" vs "enum"
static const vs. #define in c++ - differences in executable size
and I understand that enums are usually preferred over #define macros for better encapsulation and/or readability. Plus they allow the compiler to check types, preventing some errors.
const declarations are somewhat in between, allowing type checking and encapsulation, but are messier.
Now I work in Embedded applications with very limited memory space (we often have to fight for byte saving). My first ideas would be that constants take more memory than enums. But I realised that I am not sure how constants will appear in the final firmware.
Example:
enum { standby, starting, active, stopping } state;
Question
In a resource limited environment, how does the enum vs #define vs static const compare in terms of execution speed and memory imprint?
To try to get some substantial elements to the answer, I made a simple test.
Code
I wrote a simple C program main.c:
#include <stdio.h>
#include "constants.h"
// Define states
#define STATE_STANDBY 0
#define STATE_START 1
#define STATE_RUN 2
#define STATE_STOP 3
// Common code
void wait(unsigned int n)
{
unsigned long int vLoop;
for ( vLoop=0 ; vLoop<n*LOOP_SIZE ; ++vLoop )
{
if ( (vLoop % LOOP_SIZE) == 0 ) printf(".");
}
printf("\n");
}
int main ( int argc, char *argv[] )
{
int state = 0;
int loop_state;
for ( loop_state=0 ; loop_state<MACHINE_LOOP ; ++loop_state)
{
if ( state == STATE_STANDBY )
{
printf("STANDBY ");
wait(10);
state = STATE_START;
}
else if ( state == STATE_START )
{
printf("START ");
wait(20);
state = STATE_RUN;
}
else if ( state == STATE_RUN )
{
printf("RUN ");
wait(30);
state = STATE_STOP;
}
else // ( state == STATE_STOP )
{
printf("STOP ");
wait(20);
state = STATE_STANDBY;
}
}
return 0;
}
while constants.h contains
#define LOOP_SIZE 10000000
#define MACHINE_LOOP 100
And I considered three variants to define the state constants. The macro as above, the enum:
enum {
STATE_STANDBY=0,
STATE_START,
STATE_RUN,
STATE_STOP
} possible_states;
and the const:
static const int STATE_STANDBY = 0;
static const int STATE_START = 1;
static const int STATE_RUN = 2;
static const int STATE_STOP = 3;
while the rest of the code was kept identical.
Tests and Results
Tests were made on a 64 bits linux machine and compiled with gcc
Global Size
gcc main.c -o main gives
macro: 7310 bytes
enum: 7349 bytes
const: 7501 bytes
gcc -O2 main.c -o main gives
macro: 7262 bytes
enum: 7301 bytes
const: 7262 bytes
gcc -Os main.c -o main gives
macro: 7198 bytes
enum: 7237 bytes
const: 7198 bytes
When optimization is turned on, both the const and the macro variants come to the same size. The enum is always slightly larger. Using gcc -S I can see that the difference is a possible_states,4,4 entry in .comm. So the enum is always larger than the macro, and the const can be larger but can also be optimized away.
Section size
I checked a few sections of the programs using objdump -h main: .text, .data, .rodata, .bss, .dynamic. In all cases, .bss has 8 bytes, .data 16 bytes, and .dynamic 480 bytes.
.rodata has 31 bytes, except for the non-optimized const version (47 bytes).
.text goes from 620 bytes up to 780 bytes, depending on the optimisation, with the unoptimised const version being the only one that differs for a given flag.
Execution speed
I ran the program a few times, but I did not notice a substantial difference between the different versions. Without optimisation, it ran for about 50 seconds. Down to 20 seconds with -O2 and up to more than 3 minutes with -Os. I measured the time with /usr/bin/time.
RAM usage
Using time -f %M, I get about 450k in each case, and when using valgrind --tool=massif --pages-as-heap=yes I get 6242304 in all cases.
Conclusion
Whenever some optimisation has been activated, the only notable difference is about 40 bytes more for the enum case. But no RAM or speed difference.
Remains other arguments about scope, readability... personal preferences.
and I understand that enums are usually preferred over #define macros for better encapsulation and/or readability
Enums are preferred mainly for better readability, but also because they can be declared at local scope and they add a tiny bit more of type safety (particularly when static analysis tools are used).
const declarations are somewhat in between, allowing type checking and encapsulation, but are messier.
Not really, it depends on scope. "Global" const can be messy, but they aren't as bad practice as global read/write variables and can be justified in some cases. One major advantage of const over the other forms is that such variables tend to be allocated in .rodata and you can view them with a debugger, something that isn't always possible with macros and enums (depends on how good the debugger is).
Note that a #define is always global, while an enum may or may not be.
My first ideas would be that constants take more memory than enums
This is incorrect. enum variables are usually of type int, though they can be of smaller types (since their size can vary, they are bad for portability). Enumeration constants, however (that is, the things inside the enum declaration), are always int, which is a type of at least 16 bits.
A const on the other hand, is exactly as large as the type you declared. Therefore const is preferred over enum if you need to save memory.
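A small sketch of that size difference (the identifiers are made up for the example):
#include <stdint.h>
#include <stdio.h>

enum run_state { STANDBY_E, START_E, RUN_E, STOP_E };  /* constants have type int */
static const uint8_t STATE_STANDBY_C = 0;              /* exactly one byte */

int main(void)
{
    enum run_state s = STANDBY_E;
    printf("%zu %zu\n", sizeof s, sizeof STATE_STANDBY_C);  /* typically prints "4 1" */
    return 0;
}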
In a resource limited environment, how does the enum vs #define vs static const compare in terms of execution speed and memory imprint?
Execution speed will probably not differ - it is impossible to say since it is so system-specific. However, since enums tend to give 16 bit or larger values, they are a bad idea when you need to save memory. And they are also a bad idea if you need an exact memory layout, as is often the case in embedded systems. The compiler may however of course optimize them to a smaller size.
Misc advice:
Always use the stdint.h types in embedded systems, particularly when you need exact memory layout.
Enums are fine unless you need them as part of some memory layout, like a data protocol. Don't use enums for such cases.
const is ideal when you need something to be stored in flash. Such variables get their own address and are easier to debug.
In embedded systems, it usually doesn't make much sense to optimize code in order to reduce flash size, but rather to optimize to reduce RAM size.
#define will always end up in .text flash memory, while enum and const may end up either in RAM, .rodata flash or .text flash.
When optimizing for size (RAM or flash) on an embedded system, keep track of your variables in the map file (linker output file) and look for things that stand out there, rather than running around and manually optimizing random things at a whim. This way you can also detect if some variables that should be const have ended up in RAM by mistake (a bug).
enums, #defines and static const will generally give exactly the same code (and therefore the same speed and size), with certain assumptions.
When you declare an enum type and enumeration constants, these are just names for integer constants. They don't take any space or time. The same applies to #define'd values (though these are not limited to int's).
A "static const" declaration may take space, usually in a read-only section in flash. Typically this will only be the case when optimisation is not enabled, but it will also happen if you forget to use "static" and write a plain "const", or if you take the address of the static const object.
For code like the sample given, the results will be identical with all versions, as long as at least basic optimisation is enabled (so that the static const objects are optimised). However, there is a bug in the sample. This code:
enum {
STATE_STANDBY = 0,
STATE_START,
STATE_RUN,
STATE_STOP
} possible_states;
not only creates the enumeration constants (taking no space) and an anonymous enum type, it also creates an object of that type called "possible_states". This has global linkage, and has to be created by the compiler because other modules can refer to it - it is put in the ".comm" common section. What should have been written is one of these:
// Define just the enumeration constants
enum { STATE_STANDBY, ... };
// Define the enumeration constants and the
// type "enum possible_states"
enum possible_states { STATE_STANDBY, ... };
// Define the enumeration constants and the
// type "possible_states"
typedef enum { STATE_STANDBY, ... } possible_states;
All of these will give optimal code generation.
When comparing the sizes generated by the compiler here, be careful not to include the debug information! The examples given show a bigger object file for the enumeration version partly because of the error above, but mainly because it is a new type and leads to more debug information.
All three methods work for constants, but they all have their peculiarities. A "static const" can be any type, but you can't use it for things like case labels or array sizes, or for initialising other objects. An enum constant is limited to type "int". A #define macro contains no type information.
For this particular case, however, an enumerated type has a few big advantages. It collects the states together in one definition, which is clearer, and it lets you make them a type (once you get the syntax correct :-) ). When you use a debugger, you should see the actual enum constants, not just a number, for variables of the enum type. ("state" should be declared of this type.) And you can get better static error checking from your tools. Rather than using a series of "if" / "else if" statements, use a switch and use the "-Wswitch" warning in gcc (or equivalent for other compilers) to warn you if you have forgotten a case.
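One possible shape of that last suggestion, using the states from the question rewritten as a switch so that gcc's -Wswitch (enabled by -Wall) can flag a forgotten case:
typedef enum { STATE_STANDBY, STATE_START, STATE_RUN, STATE_STOP } possible_states;

possible_states next_state(possible_states state)
{
    switch (state)   /* a missing case triggers a -Wswitch warning */
    {
    case STATE_STANDBY: return STATE_START;
    case STATE_START:   return STATE_RUN;
    case STATE_RUN:     return STATE_STOP;
    case STATE_STOP:    return STATE_STANDBY;
    }
    return STATE_STANDBY;  /* not reached for valid states */
}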
I used an OpenGL library which defined most constants in an enum, while the default OpenGL header defines them as #defines. So as a user of the header/library there is no big difference; it is just an aspect of design. When using plain C, I see no need for static const unless it is something that might change or is a big string, like an (extern or static) const char version[];. For myself I avoid using macros, so in special cases the identifier can be reused. When porting C code to C++, the enums are even scoped and type-checked.
Aspect: character usage & readability:
#define MY_CONST1 10
#define MY_CONST2 20
#define MY_CONST3 30
//...
#define MY_CONSTN N0
vs.
enum MyConsts {
MY_CONST1 = 10,
MY_CONST2 = 20,
MY_CONST3 = 30,
//...
MY_CONSTN = N0 // this already gives an error, which does not happen with (unused) #defines, because N0 is an identifier that is probably not defined
};
I have a struct which represents a set of hardware registers. Some parts are reserved and must be neither written nor read. Is there a placeholder or something similar, instead of using an obvious variable name?
typedef volatile struct RegisterStruct
{
uint8 BDH;
uint8 BDL;
...
uint8 IR;
uint8 RESERVED0; // this area should not be accessed
...
} RegisterStruct;
Obvious naming would be the right thing to use, as there's no "reserved" feature in C.
You can use arrays of byte-sized integers to correctly pad to the right length:
typedef volatile struct RegisterStruct
{
uint8_t BDH;
uint8_t BDL;
uint8_t IR;
uint8_t __RESERVED[num_of_reserved_bytes]; // this area should not be accessed
uint8_t NEXT_REGISTER_NAME;
} RegisterStruct;
The problem with using structs for register mapping in general (or similarly, for data communication protocol mapping), is that a struct may contain padding bytes anywhere.
If you use a struct (or union) for such purposes, you have to ensure that padding is disabled, by adding a line like for example
_Static_assert(sizeof(RegisterStruct) == sizeof(uint8_t)*4, "Padding detected");
This will prevent padding bugs, as it will block structs with padding from compiling.
Unfortunately, you cannot disable struct padding in a portable manner; most of the time you don't want to disable it anyway, because doing so will make the program slower at best, and in the worst case you'll get hardware exceptions for misaligned access, all depending on the CPU.
The most common non-standard extension to disable padding is #pragma pack(1), but it is non-standard and non-portable.
In my opinion, the best way to avoid all such problems is to avoid structs entirely for the actual mapping. Instead, just declare everything as plain volatile variables. (Or by using macros, which is unfortunately the only way you can map something to a specific memory location in standard C).
And when you have gotten that far, there's no need to use any "reserved" place holders. Simply don't map anything to those reserved memory locations.
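A sketch of that approach for the registers named in the question; the addresses and the REG_ prefix are made up for illustration. Each register gets its own macro, and the reserved byte simply gets no macro at all, so nothing in the code can touch it:
#include <stdint.h>
#define REG_BDH  (*(volatile uint8_t *)0x4006A000u)
#define REG_BDL  (*(volatile uint8_t *)0x4006A001u)
#define REG_IR   (*(volatile uint8_t *)0x4006A006u)
/* 0x4006A007 is reserved: intentionally not mapped */
#define REG_NEXT (*(volatile uint8_t *)0x4006A008u)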
There's actually really no sound reason why you would want to have a number of hardware registers in a struct, even though it is for some reason mighty popular to do so among embedded compilers. You'll find that register maps written for such compilers are unreadable and also extremely non-standard.
For communication protocols it makes more sense to have structs, but then you would typically write serialize/de-serialize routines to fill up the struct.
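A minimal sketch of such a de-serialize routine, with a made-up two-field frame and little-endian wire order assumed; reading byte by byte keeps the result independent of struct padding and host endianness:
#include <stdint.h>

struct frame {
    uint16_t id;
    uint32_t value;
};

void deserialize_frame(const uint8_t buf[6], struct frame *f)
{
    f->id    = (uint16_t)(buf[0] | ((uint16_t)buf[1] << 8));
    f->value =  (uint32_t)buf[2]
             | ((uint32_t)buf[3] << 8)
             | ((uint32_t)buf[4] << 16)
             | ((uint32_t)buf[5] << 24);
}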
There isn't anything in C to declare a placeholder/hole without a name in a structure, nor a member whose name cannot be accessed (const could help, but only with write protection). And I don't see anything in gcc's extensions that could help here.
But you could additionally scramble the name by using the preprocessor, e.g.:
#define GLUE(X,Y,Z) X ## Y ## Z
#ifdef __GNUC__
#define SCRAMBLE(X) GLUE(X,_,__COUNTER__)
#else
#define SCRAMBLE(X) GLUE(X,_,__LINE__)
#endif
typedef volatile struct
{
uint8 BDH;
uint8 BDL;
// ...
uint8 IR;
uint8 SCRAMBLE(RESERVED0);
// ...
} RegisterStruct;
Which one is better to use among the below statements in C?
static const int var = 5;
or
#define var 5
or
enum { var = 5 };
It depends on what you need the value for. You (and everyone else so far) omitted the third alternative:
static const int var = 5;
#define var 5
enum { var = 5 };
Ignoring issues about the choice of name, then:
If you need to pass a pointer around, you must use (1).
Since (2) is apparently an option, you don't need to pass pointers around.
Both (1) and (3) have a symbol in the debugger's symbol table - that makes debugging easier. It is more likely that (2) will not have a symbol, leaving you wondering what it is.
(1) cannot be used as a dimension for arrays at global scope; both (2) and (3) can.
(1) cannot be used as a dimension for static arrays at function scope; both (2) and (3) can.
Under C99, all of these can be used for local arrays. Technically, using (1) would imply the use of a VLA (variable-length array), though the dimension referenced by 'var' would of course be fixed at size 5.
(1) cannot be used in places like switch statements; both (2) and (3) can.
(1) cannot be used to initialize static variables; both (2) and (3) can.
(2) can change code that you didn't want changed because it is used by the preprocessor; both (1) and (3) will not have unexpected side-effects like that.
You can detect whether (2) has been set in the preprocessor; neither (1) nor (3) allows that.
So, in most contexts, prefer the 'enum' over the alternatives. Otherwise, the first and last bullet points are likely to be the controlling factors — and you have to think harder if you need to satisfy both at once.
If you were asking about C++, then you'd use option (1) — the static const — every time.
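A short sketch illustrating a few of those bullet points (file-scope array dimension, case label, preprocessor detection, taking an address); the identifiers are made up:
#include <stdio.h>

#define VAR_MACRO 5
enum { VAR_ENUM = 5 };
static const int var_const = 5;

int table_a[VAR_MACRO];       /* OK: (2) works as a global array dimension */
int table_b[VAR_ENUM];        /* OK: (3) works too */
/* int table_c[var_const]; */ /* error in C: (1) is not a constant expression */

#ifdef VAR_MACRO              /* only (2) is visible to the preprocessor */
#endif

int classify(int x)
{
    switch (x)
    {
    case VAR_ENUM:  return 1; /* (3) is a valid case label; (1) is not */
    default:        return 0;
    }
}

int main(void)
{
    const int *p = &var_const; /* only (1) has an address you can take */
    printf("%d %d\n", classify(VAR_MACRO), *p);
    return 0;
}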
Generally speaking:
static const
Because it respects scope and is type-safe.
The only caveat I could see: if you want the variable to be possibly defined on the command line. There is still an alternative:
#ifdef VAR // Very bad name, not long enough, too general, etc..
static int const var = VAR;
#else
static int const var = 5; // default value
#endif
Whenever possible, instead of macros / ellipsis, use a type-safe alternative.
If you really NEED to go with a macro (for example, you want __FILE__ or __LINE__), then you'd better name your macro VERY carefully: in its naming convention Boost recommends all upper-case, beginning by the name of the project (here BOOST_), while perusing the library you will notice this is (generally) followed by the name of the particular area (library) then with a meaningful name.
It generally makes for lengthy names :)
In C, specifically? In C the correct answer is: use #define (or, if appropriate, enum)
While it is beneficial to have the scoping and typing properties of a const object, in reality const objects in C (as opposed to C++) are not true constants and therefore are usually useless in most practical cases.
So, in C the choice should be determined by how you plan to use your constant. For example, you can't use a const int object as a case label (while a macro will work). You can't use a const int object as a bit-field width (while a macro will work). In C89/90 you can't use a const object to specify an array size (while a macro will work). Even in C99 you can't use a const object to specify an array size when you need a non-VLA array.
If this is important for you then it will determine your choice. Most of the time, you'll have no choice but to use #define in C. And don't forget another alternative, that produces true constants in C - enum.
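For example, a bit-field width (one of the contexts mentioned above) accepts a macro or an enum constant but not a const object; the struct below is made up for illustration:
#define WIDTH_MACRO 4
enum { WIDTH_ENUM = 4 };
static const int width_const = 4;

struct flags {
    unsigned int mode  : WIDTH_MACRO;     /* OK */
    unsigned int level : WIDTH_ENUM;      /* OK: enum gives a true constant */
    /* unsigned int err : width_const; */ /* error in C: not a constant expression */
    unsigned int spare : 4;
};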
In C++ const objects are true constants, so in C++ it is almost always better to prefer the const variant (no need for explicit static in C++ though).
The difference between static const and #define is that the former uses memory for storage and the latter does not. Secondly, you cannot take the address of a #define, whereas you can take the address of a static const. Which one to use depends on the circumstances: each is at its best in different situations. Please don't assume that one is better than the other... :-)
If that were the case, Dennis Ritchie would have kept only the best one... hahaha... :-)
In C #define is much more popular. You can use those values for declaring array sizes for example:
#define MAXLEN 5
void foo(void) {
int bar[MAXLEN];
}
ANSI C doesn't allow you to use static consts in this context as far as I know. In C++ you should avoid macros in these cases. You can write
const int maxlen = 5;
void foo() {
int bar[maxlen];
}
and even leave out static because internal linkage is implied by const already [in C++ only].
Another drawback of const in C is that you can't use the value in initializing another const.
static int const NUMBER_OF_FINGERS_PER_HAND = 5;
static int const NUMBER_OF_HANDS = 2;
// initializer element is not constant, this does not work.
static int const NUMBER_OF_FINGERS = NUMBER_OF_FINGERS_PER_HAND
* NUMBER_OF_HANDS;
Even this does not work with a const since the compiler does not see it as a constant:
static uint8_t const ARRAY_SIZE = 16;
static int8_t const lookup_table[ARRAY_SIZE] = {
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}; // ARRAY_SIZE not a constant!
I'd be happy to use typed const in these cases, otherwise...
If you can get away with it, static const has a lot of advantages. It obeys the normal scope principles, is visible in a debugger, and generally obeys the rules that variables obey.
However, at least in the original C standard, it isn't actually a constant. If you use #define var 5, you can write int foo[var]; as a declaration, but you can't do that (except as a compiler extension) with static const int var = 5;. This is not the case in C++, where the static const version can be used anywhere the #define version can, and I believe this is also the case with C99.
However, never name a #define constant with a lowercase name. It will override any possible use of that name until the end of the translation unit. Macro constants should be in what is effectively their own namespace, which is traditionally all capital letters, perhaps with a prefix.
#define var 5 will cause you trouble if you have things like mystruct.var.
For example,
struct mystruct {
int var;
};
#define var 5
int main() {
struct mystruct foo;
foo.var = 1;
return 0;
}
The preprocessor will replace it and the code won't compile. For this reason, traditional coding style suggest all constant #defines uses capital letters to avoid conflict.
It is ALWAYS preferable to use const, instead of #define. That's because const is treated by the compiler and #define by the preprocessor. It is like #define itself is not part of the code (roughly speaking).
Example:
#define PI 3.1416
The symbolic name PI may never be seen by compilers; it may be removed by the preprocessor before the source code even gets to a compiler. As a result, the name PI may not get entered into the symbol table. This can be confusing if you get an error during compilation involving the use of the constant, because the error message may refer to 3.1416, not PI. If PI were defined in a header file you didn’t write, you’d have no idea where that 3.1416 came from.
This problem can also crop up in a symbolic debugger, because, again, the name you’re programming with may not be in the symbol table.
Solution:
const double PI = 3.1416; //or static const...
I wrote quick test program to demonstrate one difference:
#include <stdio.h>
enum {ENUM_DEFINED=16};
enum {ENUM_DEFINED=32};
#define DEFINED_DEFINED 16
#define DEFINED_DEFINED 32
int main(int argc, char *argv[]) {
printf("%d, %d\n", DEFINED_DEFINED, ENUM_DEFINED);
return(0);
}
This compiles with these errors and warnings:
main.c:6:7: error: redefinition of enumerator 'ENUM_DEFINED'
enum {ENUM_DEFINED=32};
^
main.c:5:7: note: previous definition is here
enum {ENUM_DEFINED=16};
^
main.c:9:9: warning: 'DEFINED_DEFINED' macro redefined [-Wmacro-redefined]
#define DEFINED_DEFINED 32
^
main.c:8:9: note: previous definition is here
#define DEFINED_DEFINED 16
^
Note that enum gives an error when define gives a warning.
The definition
const int const_value = 5;
does not always define a constant value. Some compilers (for example tcc 0.9.26) just allocate memory identified with the name "const_value". Using the identifier "const_value" you can not modify this memory. But you still could modify the memory using another identifier:
const int const_value = 5;
int *mutable_value = (int*) &const_value;
*mutable_value = 3;
printf("%i", const_value); // Undefined behavior: the output may be 5 or 3, depending on the compiler.
This means the definition
#define CONST_VALUE 5
is the only way to define a constant value which cannot be modified by any means.
Although the question was about integers, it's worth noting that #define and enums are useless if you need a constant structure or string. These are both usually passed to functions as pointers. (With strings it's required; with structures it's much more efficient.)
As for integers, if you're in an embedded environment with very limited memory, you might need to worry about where the constant is stored and how accesses to it are compiled. The compiler might add two consts at run time, but add two #defines at compile time. A #define constant may be converted into one or more MOV [immediate] instructions, which means the constant is effectively stored in program memory. A const constant will be stored in the .const section in data memory. In systems with a Harvard architecture, there could be differences in performance and memory usage, although they'd likely be small. They might matter for hard-core optimization of inner loops.
Don't think there's an answer for "which is always best" but, as Matthieu said
static const
is type safe. My biggest pet peeve with #define, though, is when debugging in Visual Studio you cannot watch the variable. It gives an error that the symbol cannot be found.
Incidentally, an alternative to #define, which provides proper scoping but behaves like a "real" constant, is "enum". For example:
enum {number_ten = 10};
In many cases, it's useful to define enumerated types and create variables of those types; if that is done, debuggers may be able to display variables according to their enumeration name.
One important caveat with doing that, however: in C++, enumerated types have limited compatibility with integers. For example, by default, one cannot perform arithmetic upon them. I find that to be a curious default behavior for enums; while it would have been nice to have a "strict enum" type, given the desire to have C++ generally compatible with C, I would think the default behavior of an "enum" type should be interchangeable with integers.
A simple difference:
At pre-processing time, the constant is replaced with its value.
So you cannot apply the address-of operator to a #define, but you can apply it to a variable.
As you would suppose, the #define is faster than static const.
For example, having:
#define mymax 100
you can not do printf("address of constant is %p",&mymax);.
But having
const int mymax_var=100;
you can do printf("address of constant is %p",&mymax_var);.
To be more clear, the define is replaced by its value at the pre-processing stage, so we do not have any variable stored in the program. We have just the code from the text segment of the program where the define was used.
However, for static const we have a variable that is allocated somewhere. For gcc, static const are allocated in the text segment of the program.
We looked at the produced assembler code on the MBF16X... Both variants result in the same code for arithmetic operations (ADD Immediate, for example).
So const int is preferred for the type check while #define is old style. Maybe it is compiler-specific. So check your produced assembler code.
I am not sure if I am right, but in my opinion using a #defined value is much faster than using any other normally declared variable (or const value).
It's because when the program is running and it needs a normally declared variable, it has to jump to the exact place in memory to fetch that variable.
In contrast, when it uses a #defined value, the program doesn't need to jump to any allocated memory; it just takes the value. If #define myValue 7 and the program uses myValue, it behaves exactly the same as if it just used 7.
I was just curious to know if it is possible to have a pointer referring to a #define constant. If yes, how to do it?
The #define directive is a directive to the preprocessor, meaning that it is invoked by the preprocessor before anything is even compiled.
Therefore, if you type:
#define NUMBER 100
And then later you type:
int x = NUMBER;
What your compiler actually sees is simply:
int x = 100;
It's basically as if you had opened up your source code in a word processor and did a find/replace to replace each occurrence of "NUMBER" with "100". So your compiler has no idea about the existence of NUMBER. Only the pre-compilation preprocessor knows what NUMBER means.
So, if you try to take the address of NUMBER, the compiler will think you are trying to take the address of an integer literal constant, which is not valid.
No, because #define is for text replacement, so it's not a variable you can get a pointer to -- what you're seeing is actually replaced by the definition of the #define before the code is passed to the compiler, so there's nothing to take the address of. If you need the address of a constant, define a const variable instead (C++).
It's generally considered good practice to use constants instead of macros, because of the fact that they actually represent variables, with their own scoping rules and data types. Macros are global and typeless, and in a large program can easily confuse the reader (since the reader isn't seeing what's actually there).
#define defines a macro. A macro just causes one sequence of tokens to be replaced by a different sequence of tokens. Pointers and macros are totally distinct things.
If by "#define constant" you mean a macro that expands to a numeric value, the answer is still no, because anywhere the macro is used it is just replaced with that value. There's no way to get a pointer, for example, to the number 42.
No, it's not possible in C/C++.
You can use the #define directive to give a meaningful name to a constant in your program. It can be used in two forms: the #define directive can contain an object-like definition or a function-like definition.
See this link: http://msdn.microsoft.com/en-us/library/teas0593%28VS.80%29.aspx
There is a way to overcome this issue:
#define ROW 2
void foo()
{
int tmpInt = ROW;
int *rowPointer = &tmpInt;
// ...
}
Or if you know it's type you can even do that:
void getDefinePointer(int * pointer)
{
*pointer = ROW;
}
And use it:
int rowValue = 0;
getDefinePointer(&rowValue);
printf("ROW==%d\n", rowValue);
and you have a pointer to the value of a #define constant.