How to give readable names to elements of an array in C?

I'm inexperienced with C, and working on a microcontroller with messages stored in arrays where each byte does something different. How do I give each element of the array a human-readable name instead of referencing them as msg[1], msg[2], etc.?
Is this what structs are for? But "you cannot make assumptions about the binary layout of a structure, as it may have padding between fields."
Should I just use macros like this? (I know "macros are bad", but the code is already full of them)
#define MSG_ID msg[0]
#define MSG_COMMAND msg[1]
Oh! Or I guess I could just do
#define MSG_ID 0
#define MSG_COMMAND 1
msg[MSG_ID];
That's probably better, if a little uglier.

If you want to go that route, use macros, for sure, but make them better than what you suggest:
#define MSG_ID(x) (x)[0]
#define MSG_COMMAND(x) (x)[1]
This allows the code to name the arrays in ways that make sense, instead of ways that happen to work with the macros.
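For instance (the buffer name here is hypothetical), such macros work both for reading and for assignment, since they expand to plain array accesses:

unsigned char rx_buf[8]; /* hypothetical message buffer */

void example(void)
{
    MSG_ID(rx_buf) = 0x01;              /* write the ID byte */
    if (MSG_COMMAND(rx_buf) == 0x10) {  /* read the command byte */
        /* handle the command */
    }
}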
Otherwise, you can define constants for the indexes instead (sorry I could not come up with better names for them...):
#define IDX_MSG_ID 0
#define IDX_MSG_COMMAND 1
And macros are not bad if they are used responsibly. This kind of "simple aliasing" is one of the cases where macros help make the code easier to read and understand, provided the macros are named appropriately and well documented.
Edit: per @Lundin's comments, the best way to improve readability and safety of the code is to introduce a type and a set of functions, like so (assuming you store in char and a message is MESSAGE_SIZE long):
typedef char MESSAGE[MESSAGE_SIZE];
char get_message_id(MESSAGE msg) { return msg[0]; }
char get_message_command(MESSAGE msg) { return msg[1]; }
This method, though it brings some level of type safety and allows you to abstract the storage away from the use, also introduces call overhead, which in the microcontroller world might be problematic. The compiler may alleviate some of this by inlining the functions (which you can encourage by adding the inline keyword to the definitions).
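For illustration, a minimal sketch of the inline variant (C99 or later; MESSAGE_SIZE here is a placeholder value):

#define MESSAGE_SIZE 8 /* hypothetical message length */
typedef char MESSAGE[MESSAGE_SIZE];

static inline char get_message_id(const MESSAGE msg)      { return msg[0]; }
static inline char get_message_command(const MESSAGE msg) { return msg[1]; }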

The most natural concept for naming a set of integers in C are enumerations:
enum msg_pos { msg_id, msg_command, };
By default they start counting at 0 and increment by one. You would then access a field by msg[msg_id] for example.
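A minimal usage sketch (buffer size and contents are hypothetical):

enum msg_pos { msg_id, msg_command };

void example(void)
{
    unsigned char msg[4]; /* hypothetical message buffer */
    msg[msg_id] = 0x42;   /* equivalent to msg[0] = 0x42; */
}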

It's fine to use a struct if you take the time to figure out how your compiler lays them out, and structs can be very useful in embedded programming. The compiler will always lay out the members in order, but there may be padding if you are not on an 8-bit micro. GCC has a "packed" attribute you can apply to the struct to prohibit padding, and some other compilers have a similar feature.
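A minimal sketch of the GCC approach, with hypothetical field names; note that overlaying a packed struct directly on a raw buffer can fall foul of alignment and aliasing rules, so copying the bytes is the safer route:

#include <stdint.h>
#include <string.h>

struct message {
    uint8_t  id;      /* byte 0 */
    uint8_t  command; /* byte 1 */
    uint16_t payload; /* bytes 2-3; byte order is up to the protocol */
} __attribute__((packed));

void decode(const uint8_t *raw, struct message *m)
{
    /* copy rather than cast, to sidestep alignment and aliasing rules */
    memcpy(m, raw, sizeof *m);
}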

Related

Use of #define to alias structure members

This is a subjective question, so I will accept 'there is no answer', but read fully, as this is specifically about a system where the code is safety critical.
I've adopted some embedded C code for a safety critical system, where the original author has (in random places) used syntax like this:
#include <stdio.h>
typedef struct tag_s2 {
    int a;
} s2;

typedef struct tag_s1 {
    s2 a;
} s1;

s1 inst;

#define X_myvar inst.a.a

int main(int argc, char **argv)
{
    X_myvar = 10;
    printf("myvar = %d\n", X_myvar + 1);
    return 0;
}
Effectively using a #define to alias and obscure a deep structure member. Mostly two or three, but occasionally four deep.
BTW: This is a simple example, the real code is far more complicated but I can't publish any part of that here.
The use of this is not consistent: in some places the variable is used directly, in others through its alias, and some parts of the code are not aliased at all.
IMO this is bad practice, as it obscures the code with no gain, reducing maintainability and readability and leading to future errors and misunderstanding.
If the style was 100% consistent then perhaps I would be more happy with it.
However, being safety critical a change is costly. So not wanting to fix 'wot aint broke' I am open to other arguments.
Should I fix it or leave well alone?
Is there any guidance (e.g. Generic C, MISRA or DO178B style guides) that would have an opinion on this?
However, being safety critical a change is costly. So not wanting to fix 'wot aint broke' I am open to other arguments.
It's a paradoxical death spiral: the most critical code gets the least attention because people are afraid to change it.
That you are hesitant to make a simple, rote refactoring to this code tells me the code either has no tests or you don't trust the tests. When you're afraid to improve code because you might break it, that delays improvements to the code. You're likely to do the smallest possible thing which will make the code even more brittle and unsafe.
I'd advise that the first thing is to get some tests in place, along with a staging environment for trials. Then all changes become safer. There might be some gaffes initially while you find all the weird and dangerous things this code is doing, but that's what the staging area is for. In the medium and long term everyone will improve this code faster and with more confidence. Making code easier and safer to change allows it to be made easier and safer to change; the spiral then goes up, not down.
The technique of making a macro seem like a single variable is a technique I've seen before in the Perl 5 code base. It is written more in C macros than in C. For example, here's a bit of manipulating the Perl call stack.
#define SP sp
#define MARK mark
#define TARG targ
#define PUSHMARK(p) \
    STMT_START { \
        I32 * mark_stack_entry; \
        if (UNLIKELY((mark_stack_entry = ++PL_markstack_ptr) \
                     == PL_markstack_max)) \
            mark_stack_entry = markstack_grow(); \
        *mark_stack_entry = (I32)((p) - PL_stack_base); \
        DEBUG_s(DEBUG_v(PerlIO_printf(Perl_debug_log, \
                "MARK push %p %" IVdf "\n", \
                PL_markstack_ptr, (IV)*mark_stack_entry))); \
    } STMT_END

#define TOPMARK S_TOPMARK(aTHX)
#define POPMARK S_POPMARK(aTHX)

#define INCMARK \
    STMT_START { \
        DEBUG_s(DEBUG_v(PerlIO_printf(Perl_debug_log, \
                "MARK inc %p %" IVdf "\n", \
                (PL_markstack_ptr+1), (IV)*(PL_markstack_ptr+1)))); \
        PL_markstack_ptr++; \
    } STMT_END

#define dSP SV **sp = PL_stack_sp
#define djSP dSP
#define dMARK SV **mark = PL_stack_base + POPMARK
#define dORIGMARK const I32 origmark = (I32)(mark - PL_stack_base)
#define ORIGMARK (PL_stack_base + origmark)
#define SPAGAIN sp = PL_stack_sp
#define MSPAGAIN STMT_START { sp = PL_stack_sp; mark = ORIGMARK; } STMT_END
#define GETTARGETSTACKED targ = (PL_op->op_flags & OPf_STACKED ? POPs : PAD_SV(PL_op->op_targ))
#define dTARGETSTACKED SV * GETTARGETSTACKED
These are macros upon macros upon macros. The Perl 5 source is riddled with them. There is a lot of opaque magic happening there. Some of them need to be macros to allow assignment, but many could be inline functions. Despite being part of a public API they are indifferently documented in part because they are macros and not functions.
This style is very clever and useful if you're already very familiar with the Perl 5 source code. For everyone else it has made the Perl 5 internals extremely difficult to work with. While some compilers will provide stack traces for macro expansions, others will only report on the expanded macro leaving one scratching their head what the hell const I32 origmark = (I32)(mark - PL_stack_base) is because it never appears in your source.
Like many macro hacks, while the technique is very clever it is also mind-bending and unfamiliar to many programmers. Mind-bending is not what you want in safety critical code. You want simple, boring code. That alone is the simplest argument to replace it with well named getter and setter functions. Trust the compiler to optimize them.
A good example of this is GLib which carefully uses well-documented function-like macros to make generic data structures. For example, adding a value to an array.
#define g_array_append_val(a,v)
While this is a macro, it acts like and is documented like a function. It is a macro solely as a mechanism to create a safe, type-generic array. It hides no variables. You can safely use it without ever being aware it's a macro.
In conclusion, yes, change it. But instead of simply replacing X_myvar with inst.a.a consider creating functions that continue to provide encapsulation.
void s1_set_a( s1 *s, int val ) {
    s->a.a = val;
}

int s1_get_a( s1 *s ) {
    return s->a.a;
}

s1_set_a(&inst, 10);
printf("myvar = %d\n", s1_get_a(&inst) + 1);
The internals of s1 are hidden making it easier to change the internals later (for example, changing s1.a to a pointer to save memory). What variable you're working with is clear making the overall code easier to understand. The function names provide a clear explanation of what's happening. Because they're functions they have an obvious place for documentation. Trust the compiler to know how best to optimize it.
Yeah you should get rid of it. Obscure macros are dangerous.
In older C it was common to avoid spelling out deep nesting by doing things like
#define a inst.a
In which case you only had to type a.a instead of inst.a.a. Although this is questionable practice, macros like these were used to repair a shortcoming of the language, namely the lack of anonymous structs. Modern C supports those from C11 though. We can use anonymous structures to get rid of unnecessarily nested structs:
typedef struct {
    struct
    {
        int a;
    };
} s1;
But MISRA-C:2012 doesn't support C11 so that might not be an option.
Another trick you can use to get rid of long names is something like this:
int* x = &longstructname.somemember.anotherstruct.x;
// use *x from here on
x is a local variable with limited scope. That's much more readable than the obscure macros and it gets optimized away in the machine code.
From a maintenance perspective, yes, this is definitely code that needs to be fixed.
However, it is only from that perspective that the code needs to be fixed. It does not harm program correctness, and if the code is correct as-is, that is the paramount consideration.
That's why code like this should never be fixed unless a thorough unit test and regression test regimen is already in place. You should only fix code like this if you can be certain that you don't break correctly-functioning code in the process.

#define vs. enums for addressing peripherals

I have to program peripheral registers in an ARM9-based microcontroller.
For instance, for the USART, I store the relevant memory addresses in an enum:
enum USART
{
    US_BASE = (int) 0xFFFC4000,
    US_BRGR = US_BASE + 0x16,
    //...
};
Then, I use pointers in a function to initialize the registers:
void init_usart (void)
{
    vuint* pBRGR = (vuint*) US_BRGR;
    *pBRGR = 0x030C;
    //...
}
But my teacher says I'd better use #defines, such as:
#define US_BASE (0xFFFC4000)
#define US_BRGR (US_BASE + 0x16)
#define pBRGR ((vuint*) US_BRGR)
void init_usart (void)
{
    *pBRGR = 0x030C;
}
Like so, he says, you don't have the overhead of allocating pointers on the stack.
Personally, I don't like #defines much, nor other preprocessor directives.
So the question is, in this particular case, are #defines really worth using instead of enums and stack-allocated pointers?
Related question: Want to configure a particular peripheral register in ARM9 based chip
The approach I've always preferred is to first define a struct reflecting the peripheral's register layout:
typedef volatile unsigned int reg32; // or other appropriate 32-bit integer type

typedef struct USART
{
    reg32 pad1;
    reg32 pad2;
    reg32 pad3;
    reg32 pad4;
    reg32 brgr;
    // any other registers
} USART;

USART *p_usart0 = (USART * const) 0xFFFC4000;
Then in code I can just use
p_usart0->brgr = 0x030C;
This approach is much cleaner when you have multiple instances of the same sort of peripheral:
USART *p_usart1 = (USART * const) 0xFFFC5000;
USART *p_usart2 = (USART * const) 0xFFFC6000;
User sbass provided a link to an excellent column by Dan Saks that gives much more detail on this technique, and points out its advantages over other approaches.
If you're lucky enough to be using C++, then you can add methods for all the common operations on the peripheral and nicely encapsulate the device's peculiarities.
I am afraid that enums are a dead end for such a task. The standard defines enumeration constants to be of type int, so in general they are not compatible with pointers.
One day, on an architecture with 32-bit int and 64-bit pointers, you might have a constant that doesn't fit into an int, and it is not well defined what will happen.
On the other hand, the argument that an enum would allocate something on the stack is not valid. Enumeration constants are compile-time constants and have nothing to do with the function stack, any more than constants specified through macros do.
Dan Saks has written a number of columns on this for Embedded Systems Programming. Here's one of his latest ones. He discusses C, C++, enums, defines, structs, classes, etc. and why you might one over another. Definitely worth reading and always good advice.
In my experience, one big reason to use #define for this kind of thing is that it's more of the standard idiom used in the embedded community.
Using enums instead of #define will generate questions/comments from instructors (and in the future, colleagues), even when using other techniques might have other advantages (like not stomping on the global identifier namespace).
I personally like using enums for numeric constants, but sometimes you need to follow the customs of where, and on what, you're working.
However, performance shouldn't be an issue.
The answer is: always do whatever the teacher wants and pass the class; then, on your own, question everything, find out whether their reasons were valid, and form your own opinions. You can't win against the school; it's not worth it.
In this case it is easy to compile to assembler or disassemble to see the difference if any between the enum and define.
I would recommend the define over the enum; I have had compiler discomfort with enums. I highly discourage using pointers the way you are using them: I have seen every compiler fail to accurately generate the desired instructions. It is rare, but when it happens you will wonder how your last decades of coding ever worked. Pointing structs at registers, or at anything else, is considerably worse. I often get flamed for this, and expect to this time around. Too many miles around the block; I have fixed too much broken code with these problems to ignore the root cause.
I wouldn't necessarily say that either way is better. It is just personal preference. As for your professor's argument, it is really a moot point. Allocating variables on the stack is one instruction, no matter how many there are, usually in the form sub esp, 10h. So if you have one local or 20, it is still one instruction to allocate the space for all of them.
I would say that the one advantage of the #define is that if, for some reason, down the road you wanted to change how that pointer is accessed, you would only need to change it in one location.
I would tend towards using an enum, for potential future compatibility with C++ code. I say this because at my job, we have a lot of C header files shared between projects, some of which use C code and some of which use C++. For those using C++, we'd often like to wrap the definitions in a namespace, to prevent symbol masking, but you can't assign a #define to a namespace.

difference between #define and enum{} in C [duplicate]

What's the difference between using a define statement and an enum statement in C/C++ (and is there any difference when using them with either C or C++)?
For example, when should one use
enum {BUFFER = 1234};
over
#define BUFFER 1234
enum defines a syntactical element.
#define is a preprocessor directive, executed before the compiler sees the code, and therefore not a language element of C itself.
Generally enums are preferred as they are type-safe and more easily discoverable. Defines are harder to locate and can have complex behavior; for example, one piece of code can redefine a #define made by another, which can be hard to track down.
#define statements are handled by the pre-processor before the compiler gets to see the code so it's basically a text substitution (it's actually a little more intelligent with the use of parameters and such).
Enumerations are part of the C language itself and have the following advantages.
1/ They may have type and the compiler can type-check them.
2/ Since they are available to the compiler, symbol information on them can be passed through to the debugger, making debugging easier.
Enums are generally preferred over #define wherever it makes sense to use an enum:
Debuggers can show you the symbolic name of an enum's value ("openType: OpenExisting" rather than "openType: 2").
You get a bit more protection from name clashes, but this isn't as bad as it used to be (most compilers warn about redefinition).
The biggest difference is that you can use enums as types:
// Yeah, dumb example
enum OpenType {
    OpenExisting,
    OpenOrCreate,
    Truncate
};

void OpenFile(const char* filename, OpenType openType, int bufferSize);
This gives you type-checking of parameters (you can't mix up openType and bufferSize as easily), and makes it easy to find what values are valid, making your interfaces much easier to use. Some IDEs can even give you intellisense code completion!
Define is a preprocessor command; it's just like doing "replace all" in your editor: it replaces one string with another, and then the result is compiled.
Enum is a special case of type, for example, if you write:
enum ERROR_TYPES
{
    REGULAR_ERR = 1,
    OK = 0
};
there exists a new type called ERROR_TYPES.
It is true that REGULAR_ERR evaluates to 1, but casting from this type to int should produce a warning (if you configure your compiler for high verbosity).
Summary:
they are both alike, but when using an enum you profit from type checking, while with defines you simply replace strings in the code.
It's always better to use an enum if possible. Using an enum gives the compiler more information about your source code, a preprocessor define is never seen by the compiler and thus carries less information.
For implementing e.g. a bunch of modes, using an enum makes it possible for the compiler to catch missing case-statements in a switch, for instance.
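A minimal sketch of that point (mode names hypothetical; with GCC, -Wswitch or -Wall enables the check):

enum mode { MODE_IDLE, MODE_RUN, MODE_STOP };

int describe(enum mode m)
{
    switch (m) { /* compiler can warn: MODE_STOP is not handled */
    case MODE_IDLE: return 0;
    case MODE_RUN:  return 1;
    }
    return -1;
}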
enum can group multiple elements in one category:
enum fruits{ apple=1234, orange=12345};
while #define can only create unrelated constants:
#define apple 1234
#define orange 12345
#define is a preprocessor command, enum is in the C or C++ language.
It is always better to use enums over #define for these kinds of cases. One advantage is type safety. Another is that when you have a sequence of values, you only have to give the beginning of the sequence in the enum; the other members get consecutive values.
enum {
    ONE = 1,
    TWO,
    THREE,
    FOUR
};
instead of
#define ONE 1
#define TWO 2
#define THREE 3
#define FOUR 4
As a side note, there are still some cases where you may have to use #define (typically for some kind of macro, if you need to be able to construct an identifier that contains the constant), but that's kind of macro black magic, and very, very rarely the way to go. If you go to these extremes you should probably use a C++ template (but if you're stuck with C...).
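A sketch of that rare case, using token pasting to construct identifiers (all names here are hypothetical):

/* build an identifier that contains the constant */
#define DECLARE_HANDLER(n) void handler_##n(void)

DECLARE_HANDLER(1) { /* expands to: void handler_1(void) { ... } */ }
DECLARE_HANDLER(2) { /* expands to: void handler_2(void) { ... } */ }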
If you only want this single constant (say for buffersize) then I would not use an enum, but a define. I would use enums for stuff like return values (that mean different error conditions) and wherever we need to distinguish different "types" or "cases". In that case we can use an enum to create a new type we can use in function prototypes etc., and then the compiler can sanity check that code better.
Besides everything already written, there is one point that has been stated but not shown, and it is interesting. For example:
enum action { DO_JUMP, DO_TURNL, DO_TURNR, DO_STOP };
//...
void do_action( enum action anAction, info_t x );
Considering action as a type makes things clearer. Using defines, you would have written:
void do_action(int anAction, info_t x);
For integral constant values I've come to prefer enum over #define. There seem to be no disadvantages to using enum (discounting the minuscule disadvantage of a bit more typing), and you have the advantage that enum can be scoped, while #define identifiers have global scope that tromps everything.
Using #define isn't usually a problem, but since there are no drawbacks to enum, I go with that.
In C++ I also generally prefer enum to const int, even though in C++ a const int can be used in place of a literal integer value (unlike in C), because enum is portable to C (which I still work in a lot).
If you have a group of constants (like "Days of the Week") enums would be preferable, because it shows that they are grouped; and, as Jason said, they are type-safe. If it's a global constant (like version number), that's more what you'd use a #define for; although this is the subject of a lot of debate.
In addition to the good points listed above, you can limit the scope of enums to a class, struct or namespace. Personally, I like to have the minimum number of relevent symbols in scope at any one time which is another reason for using enums rather than #defines.
Another advantage of an enum over a list of defines is that compilers (gcc at least) can generate a warning when not all values are checked in a switch statement. For example:
enum state {
    STATE_ONE,
    STATE_TWO,
    STATE_THREE
};
...
enum state state; /* the switch operand must have the enum type for the warning to apply */

switch (state) {
case STATE_ONE:
    handle_state_one();
    break;
case STATE_TWO:
    handle_state_two();
    break;
}
In the previous code, the compiler is able to generate a warning that not all values of the enum are handled in the switch. If the states were done as #define's, this would not be the case.
enums are more suited to enumerating some kind of set, like the days of the week. If you need just one constant number, const int (or double, etc.) would definitely be better than an enum. I personally do not like #define (at least not for defining constants), because it does not give me type safety, but you can of course use it if it suits you better.
Creating an enum creates not only literals but also the type that groups them: this adds semantics to your code that the compiler can check.
Moreover, when using a debugger, you have access to the values of enum literals. This is not always the case with #define.
While several answers above recommend to use enum for various reasons, I'd like to point out that using defines has an actual advantage when developing interfaces. You can introduce new options and you can let software use them conditionally.
For example:
#define OPT_X1 1 /* introduced in version 1 */
#define OPT_X2 2 /* introduced in version 2 */
Then software which can be compiled against either version can do:
#ifdef OPT_X2
int flags = OPT_X2;
#else
int flags = 0;
#endif
With an enumeration this isn't possible without a run-time feature-detection mechanism.
Enum:
1. Generally used for multiple related values.
2. An enum pairs names with values; the names must be distinct, but the values may repeat. If no values are given, the first name gets 0, the next 1, and so on, unless values are specified explicitly.
3. Enums have a type, and the compiler can type-check them.
4. They make debugging easier.
5. Their scope can be limited to a class.
Define:
1. Generally used when you have to define only one value.
2. It simply replaces one string with another.
3. Its scope is global; it cannot be limited.
Overall, we should prefer enum.
There is little difference. The C Standard says that enumerations have integral type and that enumeration constants are of type int, so both may be freely intermixed with other integral types, without errors. (If, on the other hand, such intermixing were disallowed without explicit casts, judicious use of enumerations could catch certain programming errors.)
Some advantages of enumerations are that the numeric values are automatically assigned, that a debugger may be able to display the symbolic values when enumeration variables are examined, and that they obey block scope. (A compiler may also generate nonfatal warnings when enumerations are indiscriminately mixed, since doing so can still be considered bad style even though it is not strictly illegal.) A disadvantage is that the programmer has little control over those nonfatal warnings; some programmers also resent not having control over the sizes of enumeration variables.

#define or enum? [duplicate]

Possible Duplicate:
Why use enum when #define is just as efficient?
When programming in C, is it better practice to use #define statements or enums for states in a state machine?
Technically it doesn't matter. The compiler will most likely even create identical machine code for either case, but an enumeration has three advantages:
Using the right compiler+debugger combination, the debugger will print enumeration variables by their enumeration name and not by their number. So "StateBlahBlup" reads much nicer than "41", doesn't it?
You don't have to give every state a number explicitly; the compiler does the numbering for you if you let it. Suppose you already have 20 states and want to add a new state in the middle: with defines you have to do all the renumbering yourself, whereas with an enumeration you can just add the state and the compiler renumbers all states below the new one for you (see the sketch after this list).
You can tell the compiler to warn you if a switch statement does not handle all the possible enum values, e.g. because you forgot to handle some values or because the enum was extended but you forgot to also update the switch statements handling enum values (it will not warn if there's a default case though, as all values not handled explicitly end up in the default case).
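A sketch of the renumbering point from the list above (state names are hypothetical):

enum state {
    STATE_INIT,     /* 0 */
    STATE_CONNECT,  /* 1 */
    STATE_TRANSFER, /* new state slotted in; everything below shifts automatically */
    STATE_CLOSE     /* was 2, now 3, with no manual renumbering */
};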
Since the states are related elements, I think it is better to have an enum defining them.
There's no definitive answer. enum offers you scoping and automatic value assignment, but does not give you any control over the constant's type (always signed int). #define ignores scoping, but allows better control over typing: it lets you choose the constant's type (either by using suffixes or by including an explicit cast in the definition).
So, choose for yourself what is more important to you. For a state machine, enum might be a better choice, unless you have a good reason to control the type.
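For instance, a small sketch of controlling the constant's type with #define (names are hypothetical):

#include <stdint.h>

#define TIMEOUT_MS 500UL            /* suffix: an unsigned long constant */
#define BAUD_RATE  ((uint32_t)9600) /* explicit cast: exactly uint32_t */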
I prefer enum. Enums are more compact and 'safer'. You can also imply an order in an enum, which might be helpful in a state machine. #defines should be avoided if possible, since they are substituted throughout the source, which can lead to unintended effects that are difficult to debug.
If enum is supported by your compiler, then that would be preferred. Failing that, by all means, use #define. All C++ compilers and modern C compilers should support enum, but older compilers (particularly ones targeting embedded platforms) may not support enum.
If you must use #define make sure to define your constants with parentheses, to avoid preprocessor errors:
#define RED_STATE (1)
#define YELLOW_STATE (2)
#define GREEN_STATE (3)
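The parentheses start to matter once a definition is an expression rather than a bare literal; a minimal sketch (names are hypothetical):

#define BASE_STATE 1
#define NEXT_STATE (BASE_STATE + 1)   /* parenthesized */
#define NEXT_STATE_BAD BASE_STATE + 1 /* unparenthesized */

int a = NEXT_STATE * 2;     /* (1 + 1) * 2 == 4, as intended */
int b = NEXT_STATE_BAD * 2; /* expands to 1 + 1 * 2 == 3 */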
#define directives can have lots of unintended consequences and don't follow common scoping rules. Use enums when you have related data.
More information: http://www.embedded.com/columns/programmingpointers/9900402?_requestid=341945 [C++ material, but still marginally relevant]
You can use this trick to make the compiler check the type of a #define value:
#define VALUE_NAME ((TYPE_NAME) 12)
However, the real problem with #define is that it can be redefined in application code (of course, the compiler will warn you about it).
enum is great when you have exclusive options, but you can't use it to define bit-field flags like these:
#define SQ_DEFAULT 0x0
#define SQ_WITH_RED 0x1
#define SQ_WITH_BLUE 0x2
void paint_square(int flags);
Then you can paint red-blue square with:
paint_square(SQ_WITH_RED | SQ_WITH_BLUE);
...which you can't with enum.
You can use whatever you want and like.
Still, as everyone else is saying, I would like to add my vote for enums.
Enums should always be preferred when you have related data, as in the case of a state machine; you can also define an order in enums, which helps when implementing the state machine.
Further, enums keep your program safer, since all the values are of the enum's own type, which avoids possible confusion.
#define should not be used for a state machine or related data. Anyway, that's my suggestion, but there is no hard and fast rule.
I would also add that enums make your code more readable and understandable if it is maintained in the future or read by someone else. This is an important point when you have a very large program with a lot of #defines beyond the ones used for your state machine.

Why use C typedefs rather than #defines?

What advantage (if any) is there to using typedef in place of #define in C code?
As an example, is there any advantage to using
typedef unsigned char UBYTE;
over
#define UBYTE unsigned char
when both can be used as
UBYTE func(void)
{
    UBYTE byte_value = 0;
    /* Do some stuff */
    return byte_value;
}
Obviously the pre-processor will try to expand a #define wherever it sees one, which wouldn't happen with a typedef, but that doesn't seem to me to be any particular advantage or disadvantage; I can't think of a situation where either use wouldn't result in a build error if there was a problem.
If you do a typedef of an array type, you'll see the difference:
typedef unsigned char UCARY[3];
struct example { UCARY x, y, z; };
Doing that with a #define... no, let's not go there.
[EDIT]: Another advantage is that debuggers usually know about typedefs but not #defines.
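To make the difference concrete, a small sketch (names are hypothetical; the #define variant shows the closest text substitution can get):

/* with the typedef, each member really is an unsigned char[3] */
typedef unsigned char UCARY2[3];
struct ex_typedef { UCARY2 x, y, z; };

/* with a #define you can only substitute the element type,
   so the [3] has to be repeated on every member */
#define UCARY_D unsigned char
struct ex_define { UCARY_D x[3], y[3], z[3]; };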
1) Probably the greatest advantage is cleaner code.
Abusing macros usually transforms the code into an unmaintainable mess, known as 'macro soup'.
2) By using a typedef you define a new type. Using a macro you actually substitute text. The compiler is surely more helpful when dealing with typedef errors.
Well, coming from a C++ perspective, a C++ programmer using your code might have something like:
template<typename T> class String
{
    typedef T char_type;
    // ...
};
Now, if in your C code, you've written something like:
#define char_type uint32_t // because I'm using UTF-32
Well, you are going to cause serious trouble for the users of your header file. With typedefs, you can give a name different definitions within different scopes... while scopes aren't respected by #defines.
I know that you've labeled this C, but C programmers and C++ programmers need to realize that their headers might be used by each other... and this is one of those things to keep in mind.
With #define all you get is string substitution during preprocessing. typedef introduces a new type. This makes it easier to find possible problems in your code and in case of any the compiler might be able to give you more detailed information.
- Debuggers and compiler error messages become more helpful if the compiler/debugger knows about the type (this is also why you should use constants and not defines where possible).
- Arrays, as others have shown.
- You can restrict typedefs to a smaller scope (say, a function); see the sketch below. Even more true in C++.
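A minimal sketch of the scoping point from the list above (names are hypothetical):

void parse(void)
{
    typedef unsigned char byte; /* visible only inside parse() */
    byte b = 0;
    (void)b; /* silence unused-variable warnings in this sketch */
}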
