#define or enum? [duplicate] - c

Possible Duplicate:
Why use enum when #define is just as efficient?
When programming in C, is it better practice to use #define statements or enums for states in a state machine?

Technically it doesn't matter. The compiler will most likely even create identical machine code for either case, but an enumeration has three advantages:
Using the right compiler+debugger combination, the debugger will print enumeration variables by their enumeration name and not by their number. So "StateBlahBlup" reads much nicer than "41", doesn't it?
You don't have to explicitly give every state a number; the compiler does the numbering for you if you let it. Suppose you already have 20 states and want to add a new state in the middle: with defines you have to do all the renumbering yourself, whereas with an enumeration you can just add the state and the compiler renumbers all states below the new one for you.
You can tell the compiler to warn you if a switch statement does not handle all the possible enum values, e.g. because you forgot to handle some values or because the enum was extended but you forgot to also update the switch statements handling enum values (it will not warn if there's a default case though, as all values not handled explicitly end up in the default case).
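For example, a minimal sketch (the state and handler names here are invented): with GCC or Clang, -Wall enables -Wswitch, which flags the missing case below.
enum state { STATE_IDLE, STATE_RUNNING, STATE_DONE };

void step(enum state s)
{
    /* -Wswitch warns: enumeration value 'STATE_DONE' not handled
       in switch -- there is no default case to swallow it. */
    switch (s) {
    case STATE_IDLE:
        /* start the machine */
        break;
    case STATE_RUNNING:
        /* keep running */
        break;
    }
}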

Since the states are related elements, I think it is better to have an enum defining them.

There's no definitive answer. enum offers you scoping and automatic value assignment, but gives no control over the constant's type (always signed int). #define ignores scoping, but allows better typing facilities: it lets you choose the constant's type, either via suffixes or by including an explicit cast in the definition.
So, choose for yourself what is more important to you. For a state machine, enum might be a better choice, unless you have a good reason to control the type.
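To illustrate the type-control point, a small sketch (the constant names are made up):
#include <stdint.h>

/* enum constants are always plain signed int; with #define you
   can pick the type yourself: */
#define TIMEOUT_TICKS 5000UL        /* unsigned long, via suffix */
#define DMA_CHANNEL ((uint8_t) 3)   /* explicit cast in the definition */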

I prefer enum. Enums are more compact and 'safer'. You can also imply an order in an enum, which might be helpful in a state machine. #defines should be avoided where possible, since the preprocessor blindly substitutes them throughout the source, which can lead to unintended effects that are difficult to debug.

If enum is supported by your compiler, then that is preferred. Failing that, by all means use #define. All C++ compilers and modern C compilers support enum, but some older compilers (particularly ones targeting embedded platforms) may not.
If you must use #define, make sure to define your constants with parentheses, to avoid precedence surprises when a macro expands inside a larger expression:
#define RED_STATE (1)
#define YELLOW_STATE (2)
#define GREEN_STATE (3)
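The parentheses start to matter once a macro's replacement text is an expression rather than a single token; a classic illustration (hypothetical macros building on the states above):
#define LAST_STATE_BAD GREEN_STATE + 1    /* unparenthesized expression */
#define LAST_STATE (GREEN_STATE + 1)      /* parenthesized, safe */

int a = LAST_STATE_BAD * 2;   /* expands to (3) + 1 * 2 == 5, not 8 */
int b = LAST_STATE * 2;       /* expands to ((3) + 1) * 2 == 8 */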

#define directives can have lots of unintended consequences and don't follow common scoping rules. Use enums when you have related data.
More information: http://www.embedded.com/columns/programmingpointers/9900402?_requestid=341945 [C++ material, but still marginally relevant]

You can use this trick to make the compiler check the type of a #define value:
#define VALUE_NAME ((TYPE_NAME) 12)
However, the real problem with #define is that it can be redefined elsewhere in the application code (though the compiler will warn you about it).

enum is great when you have exclusive options, but it is less convenient for bitfield flags, like this:
#define SQ_DEFAULT 0x0
#define SQ_WITH_RED 0x1
#define SQ_WITH_BLUE 0x2
void paint_square(int flags);
Then you can paint a red-blue square with:
paint_square(SQ_WITH_RED | SQ_WITH_BLUE);
...which is less natural with an enum, since the combined value is no longer one of the named constants.
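For what it's worth, in C you can express the same flags with an enum (shown here as a standalone alternative to the #defines above); the OR of two enumerators is an ordinary int, so the call still compiles. What you lose is that the combined value is not itself a named enumerator:
enum sq_flags {
    SQ_DEFAULT = 0x0,
    SQ_WITH_RED = 0x1,
    SQ_WITH_BLUE = 0x2
};

void paint_square(int flags);

/* still valid: the result of the | is just an int */
paint_square(SQ_WITH_RED | SQ_WITH_BLUE);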

You can use whatever you want and like.
Still, like everyone else here, I would add my vote for enums.
Enums should always be preferred when you are dealing with related data, as in a state machine; you can also define an order in the enum, which helps when implementing the state machine.
Enums also keep your program safer, since every enum constant belongs to its own type, which avoids possible confusion.
#define should not be used for a state machine or other related data. That is my suggestion anyway; there is no hard and fast rule.
One more point: enums add readability and understandability to your code, whether it is revisited in the future or read by someone else. That matters when you have a very large program containing many #defines besides the ones used for your state machine.

Related

Best practice on writing constant parameters for embedded systems

This is a case of "static const" vs "#define" in C, for embedded systems.
On large or mid-sized projects with passed-down code and modules, what is the best practice for writing the constant parameters in your include files, modules, etc.?
In passed-down code, you don't know whether the names you choose are already defined in some other included file, or might be referenced via extern or as macros in some other file that includes yours.
Having these 3 options:
static const int char_height = 12;
#define CHAR_HEIGHT 12
enum { char_height = 12 };
which one would be better (on an embedded system with unknown memory constraints)?
The original code mainly uses #defines for this, but these kinds of constants are implemented haphazardly in several ways (even at different locations within the same files), since several people apparently developed this demo software for a certain device.
Specifically, this is a demo code, showing off every hardware and SDK feature of a certain device.
Most of the data I'm thinking about is the kind used to configure the environment: screen dimensions, charset characteristics, things that improve the readability of the code, not values the compiler and preprocessor could derive automatically. But since there's a lot of code in there and I'm afraid of global name conflicts, I'm reluctant to use #defines.
Currently, I'm considering that it would be better to rewrite the project from scratch and re-implement most of the already written functions to get their constants from just one c file or reorganize the constants' implementation to just one style.
But:
This is a one-person project (so it would take a lot of time to re-implement everything)
The already implemented code works and it has been revised several times. (If it's not broken...)
Always consider readability and memory constraints. Also, remember that macros are simply copy/paste operations that occur before compilation. With that said, I like to do the following:
I declare all constant variables as static const if they are used in only one C file (i.e. not globally accessible across multiple files). A file-scope const object is typically placed in ROM on embedded toolchains. Obviously you cannot change these variables after they're initialized.
I define all other constant values using #define.
I use enumerations where they add readability. Any place where you have a fixed range of values, I prefer an enumeration, to explicitly state the intent.
Try to approach the project with an object-oriented perspective (even though C isn't OO): hide private functions (don't create a prototype in the header), do not use globals if you can avoid it, and mark variables that should only reside in one C module (file) as static; a small sketch follows.
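A minimal sketch of that module style (file and function names are illustrative):
/* timer.c -- anything not declared in timer.h stays private here */

static const int tick_hz = 1000;  /* file-scope const: usually placed
                                     in ROM by embedded toolchains */

static int ticks;                 /* module state, not a global */

static void advance(void)         /* private: no prototype in the header */
{
    ticks++;
}

void timer_poll(void)             /* public: declared in timer.h */
{
    advance();
}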
They are 3 different things that should be used in 3 different situations.
#define should be used for constants that need to be evaluated at compile time. One typical example is the size of a statically allocated array, i.e.
#define N 10
int x[N];
It is also fine to use #define for all constants where it doesn't matter how or where the constant is allocated. People who claim it is bad practice to do so are only voicing their own personal, subjective opinions.
But of course, for such cases you can also use const variables. There is no important difference between #define and const, except for the following cases:
const should be used where it matters at which memory address a constant is allocated. It should also be used for variables that the programmer is likely to change often, because with const you can easily move the variable to a memory segment in EEPROM or data flash (though if you do so, you need to declare it volatile).
Another slight advantage of const is that you get stronger type safety than a #define. For the #define to get equal type safety, you have to add explicit type casts in the macro, which might get a bit harder to read.
And then of course, since const objects are real variables (and enums obey block scope), you can reduce their scope with the static keyword. This is good practice, since such variables do not clutter up the global namespace. That said, the true source of name conflicts in the global namespace is, in 99% of all cases, poor naming policies, or no naming policy at all. If you follow no coding standard, then that is the real source of the problem.
So generally it is fine to make constants global when needed, it is rather harmless practice as long as you have a sane naming policy (preferably all items belonging to one code module should share the same naming prefix). This shouldn't be confused with the practice of making regular variables global, which is always a very bad idea.
Enums should only be used when you have several constant values that are related to each other and you want to create a special type, such as:
typedef enum
{
    OK,
    ERROR_SOMETHING,
    ERROR_SOMETHING_ELSE
} error_t;
One advantage of the enum is that you can use a classic trick to get the number of enumerated items as another compile-time constant "free of charge":
typedef enum
{
    OK,
    ERROR_SOMETHING,
    ERROR_SOMETHING_ELSE,
    ERRORS_N // the number of constants in this enum
} error_t;
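The counter constant then lets the compiler size related tables and keeps them in sync with the enum; a typical (hypothetical) use:
/* one string per enumerator; the array length follows the enum */
static const char *error_names[ERRORS_N] = {
    "OK",
    "ERROR_SOMETHING",
    "ERROR_SOMETHING_ELSE",
};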
But there are various pitfalls with enums, so they should always be used with caution.
The major disadvantage of enum is that it isn't type safe, nor is it "type sane". First of all, enumeration constants (like OK in the above example) are always of type int, which is signed.
The enumerated type itself (error_t in my example), however, can be of any type compatible with char or with signed or unsigned int. Take a guess: it is implementation-defined and non-portable. Therefore you should avoid enums, particularly as part of data byte mappings or arithmetic operations; a small demonstration follows.
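To see the problem concretely, a small test program (the output is implementation-defined and varies by compiler and options):
#include <stdio.h>

enum tiny { TINY_A = 1 };       /* fits in a char */
enum wide { WIDE_A = 100000 };  /* needs more than 16 bits */

int main(void)
{
    /* The constants are int, but the enumerated types themselves
       may have different sizes on different compilers. */
    printf("sizeof(enum tiny) = %zu\n", sizeof(enum tiny));
    printf("sizeof(enum wide) = %zu\n", sizeof(enum wide));
    return 0;
}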
I agree with bblincoe...+1
I wonder if you understand what the differences are in that syntax and how it can/might affect implementation. Some folks may not care about implementation but if you are moving into embedded perhaps you should.
Regarding bblincoe's mention of ROM instead of RAM:
static const int char_height = 12;
That should, ideally, consume .text real estate and pre-initialize that real estate with the value you specified. Being const, you won't change it, but it does occupy a placeholder. Now why would you need a placeholder for a constant? Think about it: you might want to hack the binary down the road to turn something on or off, or to change a board-specific tuning parameter.
Without a volatile, though, the compiler doesn't have to always use that .text location; it can optimize and put the value directly into instructions as an immediate, or, worse, optimize the math and remove some operations entirely.
The define and the enum do not consume storage; they are constants that the compiler chooses how to implement. Ultimately those bits, if they are not optimized away, land somewhere in .text (sometimes in several places in .text), depending on the instruction set, how its immediates work, the specific constant, and so on.
So define vs enum is basically: do you want to pick all the values yourself, or do you want the compiler to pick some for you? Use define if you want to control the values, enum if you want the compiler to choose them.
So it really isn't a best-practice question at all; it is a case of determining what your program needs to do and choosing the appropriate programming solution for that situation.
Depending on the compiler and the target processor, choosing volatile static const int versus not doing so can affect ROM consumption. But that is a very specific optimization, not a general answer (and it has nothing to do with embedded per se, just with compiling in general).
Dan Saks explains why he prefers the enumeration constant in these articles: Symbolic Constants, and Enumeration Constants vs Constant Objects. In summary, avoid macros because they don't observe the usual scope rules and their symbolic names are typically not preserved for symbolic debuggers. And prefer enumeration constants because they are not susceptible to a performance penalty that may affect constant objects. There are many more details in the linked articles.
Another thing to consider is performance. A #define constant can usually be accessed faster than a const variable (for integers), since the const must be fetched from ROM (or RAM) while the #define value will usually end up as an immediate instruction argument, fetched along with the instruction itself (no extra cycles).
As for naming conflicts, I like to use prefixes like MOD_OPT_ where MOD is the module name OPT means that the define is a compile-time option, etc. Also only include the #defines in your header files if they're part of the public API, otherwise use an .inc file if they're needed in multiple source files or define them in the source file itself if they're only specific to that file.

How to give readable names to elements of an array in C?

I'm inexperienced with C, and working on a microcontroller with messages stored in arrays where each byte does something different. How do I give each element of the array a human-readable name instead of referencing them as msg[1], msg[2], etc.?
Is this what structs are for? But "you cannot make assumptions about the binary layout of a structure, as it may have padding between fields."
Should I just use macros like this? (I know "macros are bad", but the code is already full of them)
#define MSG_ID msg[0]
#define MSG_COMMAND msg[1]
Oh! Or I guess I could just define the indexes instead:
#define MSG_ID 0
#define MSG_COMMAND 1
msg[MSG_ID];
That's probably better, if a little uglier.
If you want to go that route, use a macro, for sure, but make them better than what you suggest:
#define MSG_ID(x) (x)[0]
#define MSG_COMMAND(x) (x)[1]
Which will allow the code to name the arrays in ways that make sense, instead of ways that work with the macro.
Otherwise, you can define constants for the indexes instead (sorry I could not come up with better names for them...):
#define IDX_MSG_ID 0
#define IDX_MSG_COMMAND 1
And macros are not bad if they are used responsibly. This kind of "simple aliasing" is one of the cases where macros help make the code easier to read and understand, provided the macros are named appropriately and well documented.
Edit: per @Lundin's comments, the best way to improve readability and safety of the code is to introduce a type and a set of accessor functions, like so (assuming you store messages in char and a message is MESSAGE_SIZE long):
typedef char MESSAGE[MESSAGE_SIZE];
char get_message_id(MESSAGE msg) { return msg[0]; }
char get_message_command(MESSAGE msg) { return msg[1]; }
This method, though it brings some level of type safety and lets you abstract the storage away from its use, also introduces call overhead, which in the microcontroller world might be problematic. The compiler may alleviate some of this by inlining the functions, which you can encourage by adding the inline keyword to the definitions; a sketch follows.
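A sketch of that variant (C99 or later; MESSAGE_SIZE is a hypothetical placeholder):
#define MESSAGE_SIZE 8  /* hypothetical message length */

typedef char MESSAGE[MESSAGE_SIZE];

/* static inline lets the compiler substitute the body at the call
   site, so the accessors usually cost nothing when optimizing */
static inline char get_message_id(const MESSAGE msg) { return msg[0]; }
static inline char get_message_command(const MESSAGE msg) { return msg[1]; }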
The most natural concept for naming a set of integers in C are enumerations:
enum msg_pos { msg_id, msg_command, };
By default they start counting at 0 and increment by one. You would then access a field by msg[msg_id] for example.
It's fine to use a struct if you take the time to figure out how your compiler lays them out, and structs can be very useful in embedded programming. The compiler will always lay out the members in order, but there may be padding unless you are on an 8-bit micro. GCC has a "packed" attribute you can apply to the struct to prohibit padding, and some other compilers have a similar feature; a sketch follows.
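As a hedged sketch of that approach with GCC's packed attribute (the field names are invented; verify the layout against your actual message format, and beware alignment when overlaying buffers):
#include <stdint.h>

/* packed: no padding, so fields line up byte-for-byte with the
   message buffer (GCC/Clang extension; other compilers differ) */
struct __attribute__((packed)) msg_layout {
    uint8_t id;
    uint8_t command;
    uint16_t payload_len;
};

/* usage, assuming msg is a received byte buffer:
   struct msg_layout *m = (struct msg_layout *) msg;  */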

Is using enums safe in all compilers?

In a discussion, a colleague told me that he never uses enum because he has run into C compilers that don't handle the enum statement correctly.
He couldn't remember exactly which compiler had problems, but among them were errors when doing something like
enum my_enum {
    my_enum_first = 5,
    my_enum_second = 10
};
i.e. initializing enum values instead of letting the compiler do the automatic assignment. Another was that the compiler decides for itself how big the enum is, so you could get unpredictable behavior for sizeof my_enum when compiling your code on various platforms.
To get around that, he told me it is better to use #defines for the constant elements. But especially with doxygen it's quite handy to have an enum (e.g. as a function parameter), because in the generated documentation you can simply click on my_enum and jump directly to its description.
Another example is code completion, where your IDE tells you which values are valid parameters for a function. I know that, as long as you compile the code as C, there's no real type safety (i.e. I could also pass 5 instead of my_enum_first), so the use of an enum seems to be more of a cosmetic thing.
The question is: do you know any compilers that have limitations regarding the usage of enum?
Edit 1:
Regarding the environment: we are developing for various embedded platforms, so there could also be a compiler for some obscure micro-controller...
Edit 2:
He did recall that the KEIL C51 compiler didn't play well with enums. Does anyone have experience with current versions of the C51 compiler?
Compilers are free to choose the size of an enum based on its range of possible values. This only really becomes an issue if you're exposing enums in your API, and users of your code may be using a different compiler or build options.
In this case, confusion can be caused by the calling code passing in a 16-bit value, for example, and the receiving code expecting it to be 32 bits. If the top 16 bits of the passed-in value are left uninitialized, then bad things will happen.
You can work around this kind of issue by including a dummy entry in your enum to enforce a minimum size.
For example:
typedef enum {
    FirstValue = 12,
    SecondValue = 25,
    DummyValue = 65536 // force enum to be greater than 16 bits
} MyEnum;
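If you can rely on C11, you can also make the size assumption explicit instead of trusting the dummy entry silently; a sketch:
#include <assert.h>  /* static_assert in C11 */

/* fails at compile time if this compiler still picks a 16-bit type */
static_assert(sizeof(MyEnum) > 2, "MyEnum must be wider than 16 bits");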
I'm pretty sure that a compiler that doesn't play nice with enum is not a conforming compiler: enum is specified in the standard, so failing to implement it means the tool shouldn't technically be used to compile C. (For the record, the scope of enumeration types is discussed in 6.2.1 of C99, and enumerations are defined as types in 6.2.5, so one can assume they are a valid part of the standard from there on.)
So no, I don't know of any such compilers.

#define vs. enums for addressing peripherals

I have to program peripheral registers in an ARM9-based microcontroller.
For instance, for the USART, I store the relevant memory addresses in an enum:
enum USART
{
    US_BASE = (int) 0xFFFC4000,
    US_BRGR = US_BASE + 0x16,
    //...
};
Then, I use pointers in a function to initialize the registers:
void init_usart (void)
{
    vuint* pBRGR = (vuint*) US_BRGR;
    *pBRGR = 0x030C;
    //...
}
But my teacher says I'd better use #defines, such as:
#define US_BASE (0xFFFC4000)
#define US_BRGR (US_BASE + 0x16)
#define pBRGR ((vuint*) US_BRGR)
void init_usart (void)
{
    *pBRGR = 0x030C;
}
That way, he says, you don't have the overhead of allocating pointers on the stack.
Personally, I don't like #defines much, nor other preprocessor directives.
So the question is, in this particular case, are #defines really worth using instead of enums and stack-allocated pointers?
Related question: Want to configure a particular peripheral register in ARM9 based chip
The approach I've always preferred is to first define a struct reflecting the peripheral's register layout:
typedef volatile unsigned int reg32; // or other appropriate 32-bit integer type

typedef struct USART
{
    reg32 pad1;
    reg32 pad2;
    reg32 pad3;
    reg32 pad4;
    reg32 brgr;
    // any other registers
} USART;

USART *p_usart0 = (USART * const) 0xFFFC4000;
Then in code I can just use
p_usart0->brgr = 0x030C;
This approach is much cleaner when you have multiple instances of the same sort of peripheral:
USART *p_usart1 = (USART * const) 0xFFFC5000;
USART *p_usart2 = (USART * const) 0xFFFC6000;
User sbass provided a link to an excellent column by Dan Saks that gives much more detail on this technique, and points out its advantages over other approaches.
If you're lucky enough to be using C++, you can add methods for all the common operations on the peripheral and neatly encapsulate the device's peculiarities.
I am afraid that enums are a dead end for such a task. The standard defines enumeration constants to be of type int, so in general they are not compatible with pointers.
One day, on an architecture with 32-bit int and 64-bit pointers, you might have a constant that doesn't fit into an int, and what happens then is not well defined.
On the other hand, the argument that an enum would allocate something on the stack is not valid: enumeration constants are compile-time constants and have nothing to do with the function stack, any more than constants specified through macros do.
Dan Saks has written a number of columns on this for Embedded Systems Programming. Here's one of his latest ones. He discusses C, C++, enums, defines, structs, classes, etc. and why you might one over another. Definitely worth reading and always good advice.
In my experience, one big reason to use #define for this kind of thing is that it's more of the standard idiom used in the embedded community.
Using enums instead of #define will generate questions/comments from instructors (and in the future, colleagues), even when using other techniques might have other advantages (like not stomping on the global identifier namespace).
I personally like using enums for numeric constants, but sometimes you need to do what is customary for what and where you're working.
However, performance shouldn't be an issue.
The answer is: always do whatever the teacher wants and pass the class; then, on your own, question everything, find out whether their reasons were valid, and form your own opinions. You can't win against the school; it's not worth it.
In this case it is easy to compile to assembler or disassemble to see the difference if any between the enum and define.
I would recommend the define over the enum, having had compiler discomfort with enums. I highly discourage using pointers the way you are using them; I have seen every compiler fail, at some point, to accurately generate the desired instructions. It is rare, but when it happens you will wonder how your last decades of coding ever worked. Pointing structs at registers, or at anything else, is considerably worse. I often get flamed for this, and expect to this time around, but too many miles around the block, and too much broken code fixed, to ignore the root cause.
I wouldn't necessarily say that either way is better. It is just personal preference. As for your professor's argument, it is really a moot point. Allocating variables on the stack is one instruction, no matter how many there are, usually in the form sub esp, 10h. So if you have one local or 20, it is still one instruction to allocate the space for all of them.
I would say that the one advantage of the #define is that if, for some reason down the road, you wanted to change how that pointer is accessed, you would only need to change it in one location.
I would tend towards using an enum, for potential future compatibility with C++ code. I say this because at my job, we have a lot of C header files shared between projects, some of which use C code and some of which use C++. For those using C++, we'd often like to wrap the definitions in a namespace, to prevent symbol masking, but you can't assign a #define to a namespace.

difference between #define and enum{} in C [duplicate]

What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)?
For example, when should one use
enum {BUFFER = 1234};
over
#define BUFFER 1234
enum defines a syntactical element.
#define is a preprocessor directive, executed before the compiler sees the code, and is therefore not a language element of C itself.
Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior, for example one piece of code can redefine a #define made by another. This can be hard to track down.
#define statements are handled by the pre-processor before the compiler gets to see the code so it's basically a text substitution (it's actually a little more intelligent with the use of parameters and such).
Enumerations are part of the C language itself and have the following advantages.
1/ They may have type and the compiler can type-check them.
2/ Since they are available to the compiler, symbol information on them can be passed through to the debugger, making debugging easier.
Enums are generally preferred over #define wherever it makes sense to use an enum:
Debuggers can show you the symbolic name of an enum's value ("openType: OpenExisting" rather than "openType: 2").
You get a bit more protection from name clashes, though this isn't as bad as it used to be (most compilers warn about macro redefinition).
The biggest difference is that you can use enums as types:
// Yeah, dumb example
enum OpenType {
    OpenExisting,
    OpenOrCreate,
    Truncate
};
void OpenFile(const char* filename, OpenType openType, int bufferSize);
This gives you type-checking of parameters (you can't mix up openType and bufferSize as easily), and makes it easy to find what values are valid, making your interfaces much easier to use. Some IDEs can even give you intellisense code completion!
Define is a preprocessor command; it works just like doing "replace all" in your editor: it replaces one string with another, and then the result is compiled.
Enum is a special case of type, for example, if you write:
enum ERROR_TYPES
{
    REGULAR_ERR = 1,
    OK = 0
};
there exists a new type called ERROR_TYPES.
It is true that REGULAR_ERR evaluates to 1, but converting a plain int to this type may produce a warning (if you configure your compiler to high verbosity).
Summary:
they are both alike, but with an enum you benefit from type checking, while with defines you are simply replacing strings in the code.
It's always better to use an enum if possible. Using an enum gives the compiler more information about your source code, a preprocessor define is never seen by the compiler and thus carries less information.
For implementing e.g. a bunch of modes, using an enum makes it possible for the compiler to catch missing case-statements in a switch, for instance.
enum can group multiple elements in one category:
enum fruits{ apple=1234, orange=12345};
while #define can only create unrelated constants:
#define apple 1234
#define orange 12345
#define is a preprocessor command, enum is in the C or C++ language.
It is always better to use enums over #define for these kinds of cases. One reason is type safety. Another is that when you have a sequence of values, you only have to give the beginning of the sequence in the enum; the other values then get consecutive values automatically.
enum {
    ONE = 1,
    TWO,
    THREE,
    FOUR
};
instead of
#define ONE 1
#define TWO 2
#define THREE 3
#define FOUR 4
As a side-note, there are still some cases where you may have to use #define (typically for some kind of macro, if you need to be able to construct an identifier that contains the constant; see the sketch below), but that's macro black magic, and very, very rarely the way to go. If you go to such extremities you should probably use a C++ template (but if you're stuck with C...).
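A sketch of the kind of macro black magic meant here, building an identifier out of a constant with the ## paste operator (all names invented):
/* REG(2) expands to the identifier uart2_base; enumerators cannot
   do this because they are invisible to the preprocessor */
#define REG(n) uart##n##_base

extern volatile unsigned int uart2_base;

unsigned int read_uart2(void) { return REG(2); }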
If you only want this single constant (say for buffersize) then I would not use an enum, but a define. I would use enums for stuff like return values (that mean different error conditions) and wherever we need to distinguish different "types" or "cases". In that case we can use an enum to create a new type we can use in function prototypes etc., and then the compiler can sanity check that code better.
Besides all the things already written, one point has been mentioned but not shown, and it is interesting. E.g.
enum action { DO_JUMP, DO_TURNL, DO_TURNR, DO_STOP };
//...
void do_action( enum action anAction, info_t x );
Considering action as a type makes things clearer. Using define, you would have written:
void do_action(int anAction, info_t x);
For integral constant values I've come to prefer enum over #define. There seem to be no disadvantages to using enum (discounting the minuscule disadvantage of a bit more typing), and you have the advantage that an enum can be scoped, while #define identifiers have global scope that tramples everything.
Using #define isn't usually a problem, but since there are no drawbacks to enum, I go with that.
In C++ I also generally prefer enum to const int, even though in C++ a const int can be used in place of a literal integer value (unlike in C), because enum is portable to C (which I still work in a lot).
If you have a group of constants (like "Days of the Week") enums would be preferable, because it shows that they are grouped; and, as Jason said, they are type-safe. If it's a global constant (like version number), that's more what you'd use a #define for; although this is the subject of a lot of debate.
In addition to the good points listed above, you can limit the scope of enums to a class, struct or namespace. Personally, I like to have the minimum number of relevant symbols in scope at any one time, which is another reason for using enums rather than #defines.
Another advantage of an enum over a list of defines is that compilers (gcc at least) can generate a warning when not all values are checked in a switch statement. For example:
enum {
    STATE_ONE,
    STATE_TWO,
    STATE_THREE
};
...
switch (state) {
case STATE_ONE:
    handle_state_one();
    break;
case STATE_TWO:
    handle_state_two();
    break;
}
In the previous code, the compiler is able to generate a warning that not all values of the enum are handled in the switch. If the states were done as #define's, this would not be the case.
enums are more suited to enumerating some kind of set, like the days of the week. If you need just one constant number, const int (or double, etc.) would definitely be better than an enum. I personally do not like #define (at least not for defining constants) because it gives me no type safety, but you can of course use it if it suits you better.
Creating an enum creates not only the constants but also the type that groups those constants: this adds semantics to your code that the compiler is able to check.
Moreover, when using a debugger, you have access to the symbolic values of enum constants. This is not always the case with #define.
While several answers above recommend to use enum for various reasons, I'd like to point out that using defines has an actual advantage when developing interfaces. You can introduce new options and you can let software use them conditionally.
For example:
#define OPT_X1 1 /* introduced in version 1 */
#define OPT_X2 2 /* introduced in version 2 */
Then software which can be compiled against either version can do:
#ifdef OPT_X2
int flags = OPT_X2;
#else
int flags = 0;
#endif
With an enumeration, this isn't possible without a run-time feature detection mechanism.
Enum:
1. Generally used for multiple related values.
2. An enum entry has two parts, a name and a value; the names must be distinct, but the values may repeat. If no values are given, the first name gets 0, the second 1, and so on, unless values are explicitly specified.
3. Enums have a type, and the compiler can type-check them.
4. They make debugging easier.
5. Their scope can be limited, e.g. to a class.
Define:
1. Used when you have to define only one value.
2. It simply replaces one string with another.
3. Its scope is global; you cannot limit it.
Overall, we should prefer the enum.
There is little difference. The C Standard says that enumerations have integral type and that enumeration constants are of type int, so both may be freely intermixed with other integral types, without errors. (If, on the other hand, such intermixing were disallowed without explicit casts, judicious use of enumerations could catch certain programming errors.)
Some advantages of enumerations are that the numeric values are automatically assigned, that a debugger may be able to display the symbolic values when enumeration variables are examined, and that they obey block scope. (A compiler may also generate nonfatal warnings when enumerations are indiscriminately mixed, since doing so can still be considered bad style even though it is not strictly illegal.) A disadvantage is that the programmer has little control over those nonfatal warnings; some programmers also resent not having control over the sizes of enumeration variables.
