Does using typedef enum { VALUE_1 = 0x00, ... } typeName; have any more overhead in C (specifically, compiling using AVR-GCC for an AVR MCU) than doing typedef unsigned char typeName; and then just defining each value with #define VALUE_1 0x00?
My specific application is status codes that can be returned and checked by functions. It seems neater to me to use the typedef enum style, but I wanted to be sure that it wasn't going to add any significant overhead to the compiled application.
I would assume no, but I wasn't really sure. I tried to look for similar questions but most of them pertained to C++ and got more specific answers to C++.
An enum declaration creates an enumerated type. Such a type is compatible with (and therefore has the same size and representation as) some predefined integer type, but the compiler chooses which one.
But the enumeration constants are always of type int. (This differs from C++, where the constants are of the enumerated type.)
So typedef unsigned char ... vs. typedef enum ... will likely change the size and representation of the type, which can matter if you define objects of the type or functions that return the type, but the constants VALUE_1 et al will be of type int either way.
It's probably best to use the enum type; that way the compiler can decide what representation is best. Your alternative of specifying unsigned char will minimize storage, but depending on the platform it might actually slow down access to objects relative to, say, using something compatible with int.
Incidentally, the typedef isn't strictly necessary. If you prefer, you can use a tag:
enum typeName { Value_1 = 0x00, ... };
But then you have to refer to the type as enum typeName rather than just typeName. The advantage of typedef is that it lets you give the type a name that's just a single identifier.
Related
Setting:
I define an enum in C99:
enum MY_ENUM {TEST_ENUM_ITEM1, TEST_ENUM_ITEM2, TEST_ENUM_ITEM_MAX};
I ensure with compile time asserts that TEST_ENUM_ITEM_MAX does not exceed UINT16_MAX. I assume little endian as byte order.
I have a serialize-into-buffer function with following parameters:
PutIntoBuffer(uint8_t* src, uint32_t count);
I serialize a variable holding an enum value into a buffer. For this task I access the variable like this:
enum MY_ENUM testVar = TEST_ENUM_ITEM1;
PutIntoBuffer((uint8_t*) &testVar, sizeof(uint16_t));
Question: Is it legitimate to access the enum (which is an int) in this way? Does C standard guarantee the intended behaviour?
It is legitimate as in "it will work if int is 16 bits". It does not violate any pointer aliasing rules either, as long as you use a character type like uint8_t. (De-serializing is another story though.)
However, the code is not portable. In case int is 32 bits, the enumeration constants will turn 32 bits too, as may the enum variable itself. Then the code becomes endianness-dependent and you might end up reading garbage. Checking TEST_ENUM_ITEM_MAX against UINT16_MAX doesn't solve this.
The proper way to serialize an enum is to use a pre-generated read-only look-up table which is guaranteed to be 8 bits, like this:
#include <stdint.h>
enum MY_ENUM {TEST_ENUM_ITEM1, TEST_ENUM_ITEM2, TEST_ENUM_ITEM_MAX};
static const uint8_t MY_ENUM8 [] =
{
[TEST_ENUM_ITEM1] = TEST_ENUM_ITEM1,
[TEST_ENUM_ITEM2] = TEST_ENUM_ITEM2,
};
int main (void)
{
_Static_assert(sizeof(MY_ENUM8)==TEST_ENUM_ITEM_MAX, "Something went wrong");
}
The designated initializer syntax improves the integrity of the data, should the enum be updated during maintenance. Similarly, the static assert will ensure that the list contains the right number of items.
I have an enum like this
typedef enum {
FIRST,
SECOND,
THIRD = 0X80000001,
FOURTH,
FIFTH,
} STATUS;
I am getting a pedantic warning since I am compiling my files with the option -Wpedantic:
warning: ISO C restricts enumerator values to range of 'int' [-Wpedantic]
I found that it occurs because the hex value 0X80000001 exceeds the range of a (signed) int. My purpose is to have continuous hex values as the status codes in the enum, without this warning.
I cannot use the macros since this will defy the purpose of having the enums in the first place. What code change will avoid this warning?
Enumeration constants are guaranteed to have type (signed) int. Apparently your system uses 32-bit int, so a hex literal larger than 0x7FFFFFFF will not fit.
So the warning is not just "pedantic", it hints at a possibly severe bug. Note that -pedantic in GCC does not mean "be picky and give me unimportant warnings" but rather "ensure that my code actually follows the C standard".
It appears that you want a list of bit masks or hardware addresses, or some other hardware-related values. enum is unsuitable for such tasks, because in hardware-related programming you rarely want signed types, but almost always unsigned ones.
If you must have a safe and portable program, then there is no elegant way to do this. C is a language with a lot of flaws, the way enum is defined by the standard is one of them.
One work-around is to use some sort of "poor man's enum", such as:
typedef uint32_t STATUS;
#define THIRD 0X80000001
If you must also have the increased type safety of an enum, then you could possibly use a struct:
typedef struct
{
uint32_t value;
} STATUS;
Or alternatively, just declare an array of constants and use an enum to define the array index. Probably the cleanest solution, though it adds a little overhead:
typedef enum {
FIRST,
SECOND,
THIRD,
FOURTH,
FIFTH,
STATUS_N
} STATUS;
const uint32_t STATUS_DATA [STATUS_N] =
{
0,
1,
0X80000001,
0X80000002,
0X80000003
};
I recently saw this in an answer that was posted for me:
typedef enum
{
NO_OP,
ADDITION,
} operator_t;
int main()
{
operator_t operator = NO_OP;
}
What is typedef enum and why should we use it? I googled and found the following:
http://www.programiz.com/c-programming/c-enumeration
Right now it sounds slightly too technical for me so I don't think I understand what is going on or why anyone would use that.
Bonus (optional): What type of variable is the operator_t?
It's definitely not "too technical".
"typedef" and "enum" are two completely different things.
The basic reason to have "enums" is to avoid "magic numbers":
Let's say you have three "states": STOP, CAUTION and GO. How do you represent them in your program?
One way is to use the string literals "STOP", "CAUTION" and "GO". But that has a lot of problems - including the fact that you can't use them in a C "switch/case" block.
Another way is to "map" them to the integer values "0", "1" and "2". This has a lot of benefits, but seeing "STOP" in your code is a lot more meaningful than seeing a "0". Using "0" like that is an example of a "magic number". Magic numbers are bad: you want to use a "meaningful name" instead.
Before enums were introduced in the language, C programmers used macros:
#define STOP 0
#define CAUTION 1
#define GO 2
A better, cleaner approach in modern C/C++ is to use an enum instead:
enum traffic_light_states {
STOP,
CAUTION,
GO
};
Using a "typedef" just simplifies declaring a variable of this type:
typedef enum {
STOP,
CAUTION,
GO
} traffic_light_states_t ;
typedef is used to define an alternative name for an existing type. The enum could have been declared like this:
enum operator_t
{
NO_OP,
ADDITION,
};
and then you could declare a variable of this type like so:
enum operator_t x = NO_OP;
This is kind of verbose so you would use typedef to define a shorter alias for this type:
typedef enum operator_t operator_t;
This defines operator_t to mean the type enum operator_t allowing you to initialize a variable like so:
operator_t x = NO_OP;
This syntax:
typedef enum
{
NO_OP,
ADDITION,
} operator_t;
does the whole process in one step, so it defines an (untagged or tagless) enum type and gives it the alias operator_t.
Bonus: operator_t is an enum data type; read more about it here: https://en.wikipedia.org/wiki/Enumerated_type
What is typedef enum and why should we use it?
There are two different things going on there: a typedef and an enumerated type (an "enum"). A typedef is a mechanism for declaring an alternative name for a type. An enumerated type is an integer type with an associated set of symbolic constants representing the valid values of that type.
Taking the enum first, the full form of an enum declaration consists of the enum keyword, followed by a tag by which that particular enum will be identified, followed by the symbolic enum constants in curly brackets. By default, the enum constants correspond to consecutive integer values, starting at zero. For example:
enum operator {
NO_OP,
ADDITION
};
As you can see, it has some similarities to a struct declaration, and like a struct declaration, variables of that enumerated type can be declared in the same statement:
enum operator {
NO_OP,
ADDITION
} op1, op2, op3;
or they can be declared later, by referencing the enum's tag:
enum operator op4, op5;
Also like a struct declaration, the tag can be omitted, in which case the enumerated type cannot be referenced elsewhere in the source code (but any declared variables of that type are still fine):
enum {
NO_OP,
ADDITION
} op1, op2, op3;
Now we get to the typedef. As I already wrote, a typedef is a means to declare an alternative name for a type. It works by putting the typedef keyword in front of something that would otherwise be a variable declaration; the symbol that would have been the variable name is then the alternative name for the type. For instance this ...
typedef unsigned long long int ull_t;
declares ull_t to be an alternative name for type unsigned long long int. The two type names can thereafter be used interchangeably (within the scope of the typedef declaration).
In your case, you have
typedef enum
{
NO_OP,
ADDITION,
} operator_t;
which declares operator_t as an alias for the tagless enumerated type given. Declaring a typedef in this way makes the enum usable elsewhere, via the typedef name, even though the enum is tagless. This is a fairly common mechanism for declaring a shorthand name for an enumerated type, and an analogous technique is common for structs, too.
Bonus (optional): What type of variable is the operator_t?
As I explained, the operator_t is not a variable, it is a type. In particular, it is an enumerated type, and the symbols NO_OP and ADDITION represent values of that type.
Typedefs for enums, structs and unions are a complete waste. They hide important information for the questionable benefit of saving a few characters to type.
Don't use them in new code.
Technically, a typedef introduces an alias, i.e. a new name for something that already exists. This means typedefs are not new types. The type system will treat them just like the aliased type.
The downvoters may please educate themselves by, for example, reading the wonderful Peter van der Linden book Expert C Programming where the case against typedefs for enum/struct/union is made.
typedef and enum are two different concepts. You can rewrite the code like this:
enum operator
{
NO_OP,
ADDITION
};
typedef enum operator operator_t;
The first statement declares an enumeration called operator, with two values. The second statement declares that the enumeration operator is now also to be known as the type operator_t.
The syntax also allows combining these two statements into one:
typedef enum operator
{
NO_OP,
ADDITION,
} operator_t;
And finally you can omit the tag for the enumeration, as the typedef name identifies it anyway:
typedef enum
{
NO_OP,
ADDITION,
} operator_t;
Wikipedia has a good discussion of what a typedef is
typedef is a keyword in the C and C++ programming languages. The purpose of typedef is to form complex types from more-basic machine types and assign simpler names to such combinations. They are most often used when a standard definition or declaration is cumbersome, potentially confusing, or likely to vary from one implementation to another.
See this page for a detailed discussion of Typedef in Wikipedia
Enumerated Types allow us to create our own symbolic names for a list of related ideas.
Given the example you gave I'm guessing you can use enum to select which arithmetic operation to use for a particular set of variables.
The following example code should give you a good idea on what enum is useful for.
enum ARITHMETIC_OPERATION {ADD, SUBTRACT, MULTIPLY};
int do_arithmetic_operation(int a, int b, enum ARITHMETIC_OPERATION operation){
if(operation == ADD)
return a+b;
if(operation == SUBTRACT)
return a-b;
if(operation == MULTIPLY)
return a*b;
return 0; /* fall-back for an unrecognized operation; without it, falling off the end leaves the caller with an indeterminate value */
}
If you didn't have enum, you would do something like this instead:
#define ADD 0
#define SUBTRACT 1
#define MULTIPLY 2
int do_arithmetic_operation(int a, int b, int operation);
This alternative is less readable, because operation is not really an integer but a symbolic type that represents an arithmetic operation that is either ADD, MULTIPLY, or SUBTRACT.
The following links provide good discussions and sample code that uses Enum.
http://www.cs.utah.edu/~germain/PPS/Topics/C_Language/enumerated_types.html
http://www.cprogramming.com/tutorial/enum.html
http://cplus.about.com/od/introductiontoprogramming/p/enumeration.htm
See the simple example below. When a function returning one enum is assigned to a variable of a different enum I don't get any warning even with gcc -Wall -pedantic. Why is it not possible for a C compiler to do type checking on enums? Or is it gcc specific? I don't have access to any other compiler right now to try it out..
enum fruit {
APPLE,
ORANGE
};
enum color {
RED,
GREEN
};
static inline enum color get_color(void) {
return RED;
}
int main() {
enum fruit ftype;
ftype = get_color();
}
This declaration:
enum fruit {
apple,
orange
};
declares three things: a type called enum fruit, and two enumerators called apple and orange.
enum fruit is actually a distinct type. It's compatible with some implementation-defined integer type; for example, enum fruit might be compatible with int, with char, or even with unsigned long long if the implementation chooses, as long as the chosen type can represent all the values.
The enumerators, on the other hand, are constants of type int. In fact, there's a common trick of using a bare enum declaration to declare int constants without using the preprocessor:
enum { MAX = 1000 };
Yes, that means that the constant apple, even though it was declared as part of the definition of enum fruit, isn't actually of type enum fruit. The reasons for this are historical. And yes, it would probably have made more sense for the enumerators to be constants of the type.
In practice, this inconsistency rarely matters much. In most contexts, discrete types (i.e., integer and enumeration types) are largely interchangeable, and the implicit conversions usually do the right thing.
enum fruit { apple, orange };
enum fruit obj; /* obj is of type enum fruit */
obj = orange; /* orange is of type int; it's
implicitly converted to enum fruit */
if (obj == orange) { /* operands are converted to a common type */
/* ... */
}
But the result is that, as you've seen, the compiler isn't likely to warn you if you use a constant associated with one enumerated type when you mean to use a different one.
One way to get strong type-checking is to wrap your data in a struct. (Note that the struct tags must differ from the enum tags, since struct, union, and enum tags share a single name space in C.)
enum fruit { /* ... */ };
enum color { /* ... */ };
struct fruit_s { enum fruit f; };
struct color_s { enum color c; };
struct fruit_s and struct color_s are distinct and incompatible types with no implicit (or explicit) conversion between them. The drawback is that you have to refer to the .f or .c member explicitly. (Most C programmers just count on their ability to get things right in the first place -- with mixed results.)
(typedef doesn't give you strong type checking; despite the name, it creates an alias for an existing type, not a new type.)
(The rules in C++ are a little different.)
Probably most of us understand the underlying causes ("the spec says it must work"), but we also agree that this is a cause of a lot of programming errors in "C" land and that the struct wrapping workaround is gross. Ignoring add-on checkers such as lint, here's what we have:
gcc (4.9): No warning available.
microsoft cl (18.0): No warning available.
clang (3.5): YES -Wenum-conversion
gcc decided not to warn (as does clang) but icc (Intel compiler) would warn in this situation. If you want some additional type checking for enum types, you can pass your code to some static code checker software like Lint that is able to warn in such cases.
gcc decided it was not useful to warn for implicit conversions between enum types, but note also that C doesn't require the implementation to issue a diagnostic for an assignment between two different enum types. This is the same as for assignments between any arithmetic types: a diagnostic is not required by C. For example, gcc would also not warn if you assign a long long to a char or a short to a long.
That's because enums in C are simply a group of named integer constants that save you from having to #define a whole bunch of constants. It's not like C++ where the enums you create are of a specific type. That's just how C is.
It's also worth noting that the actual size used to represent enum values depends on the compiler.
10 years after this question was asked, GCC can do it now:
gcc -Wextra main.c
main.c: In function ‘main’:
main.c:17:11: warning: implicit conversion from ‘enum color’ to ‘enum fruit’ [-Wenum-conversion]
17 | ftype = get_color();
An enum in C is basically handled like an integer. It's just a nicer way to use constants.
// this would work as well
ftype = 1;
You can also specify the values:
enum color {
RED=0,GREEN,BLUE
} mycolor;
mycolor = 1; // GREEN
gcc guys always have a reason not to do something.
Use clang with options -Wenum-conversion -Wassign-enum.
How does one define different types of ints?
I have the following
struct movCommand
{
uint8_t type;
uint8_t order;
int16_t height;
uint16_t distance;
int16_t yaw;
};
and need to define these according to the types they are.
What is the correct syntax for #define when selecting the type for the define?
EDIT :
It looks like my question has been misunderstood.
I want to do this #define LANDING_COMMAND "2"
But I want to set the type of the landing command because it needs to be int16_t
You do not use #define for this. You #include <stdint.h>
Rather than using the #define directive, I'd use a typedef, which is how the standard library defines them inside <stdint.h> (at least on a C99-compatible platform). If you look in that header, you'll see how they're defined for your specific platform. Typical typedefs will be:
typedef unsigned char uint8_t;
typedef signed char int8_t;
typedef unsigned short uint16_t;
typedef signed short int16_t;
typedef unsigned int uint32_t;
typedef int int32_t;
//... etc., etc.
There's a lot more typedef's defined inside the header file, including 64-bit types, etc.
If you are working with C99, you can use the typedefs from <stdint.h> or <inttypes.h> (and <inttypes.h> might be available even if <stdint.h> is not - in non-C99 compilers).
If they are available (they usually are), all the types you show will be provided by those headers.
In general, a typedef is preferable to a #define.
With regards to your new question, the #define is replaced literally with the text you provide. So
#define LANDING_COMMAND "2";
Will replace all uses of LANDING_COMMAND with "2"; in the program text. This is probably not what you want.
First, preprocessing directives are not part of the C language, they're part of the preprocessor. Since they're not part of C, they're not statements, so they don't end with ;. If you leave that in, it will likely cause problems if you intend to do things like func(LANDING_COMMAND);.
Second, "2" is of type char *, which is not convertible to int16_t with any safety. You need to use a literal 2 for the numeric value.
Lastly, to make it type int16_t, you'll need to provide a cast: ((int16_t)2). The macro INT16_C(2) comes close, but it expands to a literal (with the appropriate suffix) of type int_least16_t, which is close but no cigar: stdint.h only provides macros to make integer constant literals of the [u]int_leastN_t types and the [u]intmax_t types, not more generally for the [u]intN_t or [u]int_fastN_t types. Why they don't is beyond me.
Including <stdint.h> gives you 8-, 16-, 32-, and 64-bit signed and unsigned types.
http://en.wikipedia.org/wiki/Stdint.h
You can't do what you describe. Other answers have indicated workarounds. As for your specific question, from the MSDN site:
Expressions must have integral type
and can include only integer
constants, character constants, and
the defined operator.
The expression cannot use sizeof or a
type-cast operator.
#define doesn't have a type. It's exactly the same as find/replace in your editor.
You can do
#define LANDING_COMMAND 2
...
my_movCommand.yaw = LANDING_COMMAND;
The compiler will do the right type conversions for you, but if you insist on a type int16_t then
#define LANDING_COMMAND ((int16_t)2)