How does one define different types of ints?
I have the following
struct movCommand
{
uint8_t type;
uint8_t order;
int16_t height;
uint16_t distance;
int16_t yaw;
};
and need to define these according to the types they are.
What is the correct syntax for #define when selecting the type for the define?
EDIT:
It looks like my question has been misunderstood.
I want to do this: #define LANDING_COMMAND "2"
But I want to set the type of the landing command, because it needs to be int16_t.
You do not use #define for this. You #include <stdint.h>
Rather than using the #define directive, I'd use a typedef, which is how the standard library defines these types inside <stdint.h> (at least on a C99-compatible platform). If you look in that header, you'll see how they're defined for that specific platform. Typical typedefs will be:
typedef unsigned char uint8_t;
typedef signed char int8_t;
typedef unsigned short uint16_t;
typedef signed short int16_t;
typedef unsigned int uint32_t;
typedef int int32_t;
//... etc., etc.
There are many more typedefs defined inside the header file, including 64-bit types, etc.
If you are working with C99, you can use the typedefs from <stdint.h> or <inttypes.h> (and <inttypes.h> might be available even if <stdint.h> is not - in non-C99 compilers).
If they are available (they usually are), all the types you show will be provided by those headers.
In general, a typedef is preferable to a #define.
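For instance, a minimal sketch of pulling the fixed-width types from those headers (the values are made up for illustration):
#include <stdint.h> /* <inttypes.h> also provides these types */

int16_t height = -120;   /* exactly 16 bits, signed */
uint16_t distance = 500; /* exactly 16 bits, unsigned */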
Regarding your new question: the #define is replaced literally with the text you provide. So
#define LANDING_COMMAND "2";
will replace all uses of LANDING_COMMAND with "2"; in the program text. This is probably not what you want.
First, preprocessing directives are not part of the C language, they're part of the preprocessor. Since they're not part of C, they're not statements, so they don't end with ;. If you leave that in, it will likely cause problems if you intend to do things like func(LANDING_COMMAND);.
Second, "2" is of type char *, which is not convertible to int16_t with any safety. You need to use a literal 2 for the numeric value.
Lastly, to make it type int16_t, you'll need to provide a cast: ((int16_t)2). The macro INT16_C(2) looks tempting, but it expands to a literal (with the appropriate suffix) of type int_least16_t, which is close but no cigar: stdint.h only provides macros to make integer constant literals of the [u]int_leastN_t types and the [u]intmax_t types, not more generally for the [u]intN_t or [u]int_fastN_t types. Why they don't is beyond me.
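A minimal sketch of the cast approach, reusing the movCommand struct from the question (which field receives the command is a guess here, purely for illustration):
#include <stdint.h>

#define LANDING_COMMAND ((int16_t)2)

struct movCommand {
    uint8_t type;
    uint8_t order;
    int16_t height;
    uint16_t distance;
    int16_t yaw;
};

void land(struct movCommand *cmd) {
    cmd->yaw = LANDING_COMMAND; /* assigns the int16_t constant 2 */
}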
#include <stdint.h> gives you 8-, 16-, 32-, and 64-bit signed and unsigned types.
http://en.wikipedia.org/wiki/Stdint.h
You can't do what you describe. Other answers have indicated workarounds. As for your specific question, from the MSDN site:
Expressions must have integral type and can include only integer constants, character constants, and the defined operator. The expression cannot use sizeof or a type-cast operator.
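A quick illustration of that restriction: since sizeof is unavailable in preprocessor expressions, width checks are done with <limits.h> constants instead (my_int32 is a hypothetical name):
#include <limits.h>

/* #if sizeof(int) == 2 ... would NOT work: sizeof is not allowed here */
#if INT_MAX == 32767
typedef long my_int32; /* int is 16 bits on this platform, long is at least 32 */
#else
typedef int my_int32;
#endif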
#define doesn't have a type. It's exactly the same as find/replace in your editor.
You can do
#define LANDING_COMMAND 2
...
my_movCommand.yaw = LANDING_COMMAND;
The compiler will do the right type conversions for you, but if you insist on a type int16_t then
#define LANDING_COMMAND ((int16_t)2)
As far as I know, the suffix _t in uint32_t denotes a type name, but I'd like to know what the C in UINT32_C is and what the difference between the two is.
UINT32_C is a macro which defines an integer constant of type uint_least32_t. For example:
UINT32_C(123) // Might expand to 123UL on system where uint_least32_t is unsigned long
// or just 123U, if uint_least32_t is unsigned int.
7.20.4.1 Macros for minimum-width integer constants
The macro INTN_C(value) shall expand to an integer constant expression
corresponding to the type int_leastN_t. The macro UINTN_C(value) shall expand
to an integer constant expression corresponding to the type uint_leastN_t. For
example, if uint_least64_t is a name for the type unsigned long long int,
then UINT64_C(0x123) might expand to the integer constant 0x123ULL.
It is thus possible that this constant occupies more than 32 bits on some rare systems.
But if you are on a system where exact-width two's complement types are defined (most modern systems) and uint32_t exists, then this creates a 32-bit constant.
They all are defined in stdint.h, and have been part of the C standard since C99.
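A typical place where this matters is a sketch like the following: a plain 1 has type int, so on a platform where int is 16 bits the shift would be undefined, while UINT32_C(1) is guaranteed to be at least 32 bits wide:
#include <stdint.h>

uint32_t high_bit = UINT32_C(1) << 31; /* safe even where int is 16 bits */
/* uint32_t bad = 1 << 31;  -- undefined behaviour on a 16-bit-int target */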
UINT32_C is a macro for writing a constant of type uint_least32_t. Such a constant is suitable e.g. for initializing a uint32_t variable. I found, for example, the following definition in avr-libc (this is for the AVR target, just as an example):
#define UINT32_C(value) __CONCAT(value, UL)
So, when you write
UINT32_C(25)
it's expanded to
25UL
UL is the suffix for an unsigned long integer constant. The macro is useful because there is no standard suffix for uint32_t, so you can use it without knowing that, on your particular target, uint32_t is a typedef for e.g. unsigned long. On other targets, it will be defined in a different way.
These constants are defined something like this:
#define UINT32_C(value) (value##UL)
You can only put constant values as the macro argument; otherwise it won't compile.
UINT32_C(10); // compiles
uint32_t x = 10;
UINT32_C(x); // does not compile: it expands to (xUL), an undeclared identifier
Don't know about Keil, but at least on Linux, UINT32_C is a macro to create a uint32_t literal.
And as mentioned by others, uint32_t is a type defined as of C99 in stdint.h.
It is a macro that appends the suffix to create the literal, for example: #define UINT32_C(c) c##UL
I'm writing a Chip-8 emulator in C, and my goal is to have it be compatible with as many operating systems, new and old, as possible. I realize that a lot of different types for representing exact bit widths have been added over the years, so is something like this reasonable, both to create a shortcut (so I don't have to write lots of unsigned chars/longs) and to account for compilers that already have the types defined? If not, is there a better/more efficient way to do this?
#ifdef __uint8_t_defined
typedef uint8_t uchar;
typedef int8_t schar;
typedef uint16_t ushort;
typedef int16_t sshort;
#else
typedef unsigned char uchar;
typedef signed char schar;
typedef unsigned short ushort;
typedef signed short sshort;
#endif
You shouldn't make any assumptions about the sizes of the primitive types, not even that char has 8 bits. Check this discussion:
What does the C++ standard state the size of int, long type to be?
I think standard integer types are pretty well-supported. If you don't have stdint.h, then your chances of cross-compatibility seem very dim to me. Expecting stdint.h to be available for the compiler seems like a reasonable pre-condition.
Yes, this is a very good idea! Never assume sizes for primitive types. If you are writing portable code this is a must for maintainability! This little trick will save tons of time and will help create a good foundation for maintaining a portable code base.
The overall idea - writing portable code - is reasonable. The OP's approach is not.
__uint8_t is not defined by the C spec. Using that to steer compilation for portable code can lead to unspecified behavior. Better to use ..._MAX definitions.
Code is creating types based on fixed-width types on one platform and non-specified widths on another. Not a good plan. Where code needs fixed-width types, use fixed-width types like uint8_t, etc. Where code wants a short-hand uchar for unsigned char, etc., use #define uchar unsigned char or, better, typedef unsigned char uchar;.
Attempting to create portable integer code without <stdint.h> is folly. Even compilers that do not natively provide the file have easily findable online look-alikes.
If user still wants to create uchar and friends like originally posted, suggest the more portable:
#include <stdint.h>
#ifdef UINT_LEAST8_MAX
typedef uint_least8_t uchar;
#else
typedef unsigned char uchar;
#endif
#ifdef UINT_LEAST16_MAX
typedef uint_least16_t ushort;
...
Does using typedef enum { VALUE_1 = 0x00, ... } typeName; have any more overhead in C (specifically, compiling using AVR-GCC for an AVR MCU) than doing typedef unsigned char typeName; and then just defining each value with #define VALUE_1 0x00?
My specific application is status codes that can be returned and checked by functions. It seems neater to me to use the typedef enum style, but I wanted to be sure that it wasn't going to add any significant overhead to the compiled application.
I would assume no, but I wasn't really sure. I tried to look for similar questions but most of them pertained to C++ and got more specific answers to C++.
An enum declaration creates an enumerated type. Such a type is compatible with (and therefore has the same size and representation as) some predefined integer type, but the compiler chooses which one.
But the enumeration constants are always of type int. (This differs from C++, where the constants are of the enumerated type.)
So typedef unsigned char ... vs. typedef enum ... will likely change the size and representation of the type, which can matter if you define objects of the type or functions that return the type, but the constants VALUE_1 et al will be of type int either way.
It's probably best to use the enum type; that way the compiler can decide what representation is best. Your alternative of specifying unsigned char will minimize storage, but depending on the platform it might actually slow down access to objects relative to, say, using something compatible with int.
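A small sketch of that size difference (all names are hypothetical):
#include <stdio.h>

typedef enum { STATUS_OK = 0x00, STATUS_ERROR = 0x01 } status_t;

int main(void) {
    printf("%zu\n", sizeof(status_t));  /* the compiler's choice, often 4 */
    printf("%zu\n", sizeof(STATUS_OK)); /* same as sizeof(int): enum constants are int in C */
    return 0;
}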
Incidentally, the typedef isn't strictly necessary. If you prefer, you can use a tag:
enum typeName { Value_1 = 0x00, ... };
But then you have to refer to the type as enum typeName rather than just typeName. The advantage of typedef is that it lets you give the type a name that's just a single identifier.
Is there any difference between uint and unsigned int?
I'm looking in this site, but all questions refer to C# or C++.
I'd like an answer about the C language.
If it is relevant, note that I'm using GCC under Linux.
uint isn't a standard type - unsigned int is.
Some systems may define uint as a typedef.
typedef unsigned int uint;
For those systems, they are the same. But uint is not a standard type, so not every system supports it, and thus it is not portable.
I am extending a bit the answers by Erik, Teoman Soygul and taskinoor.
uint is not a standard type.
Hence, using your own shorthand like this is discouraged:
typedef unsigned int uint;
If you are after platform specificity instead (e.g. you need to specify the number of bits your int occupies), then including stdint.h:
#include <stdint.h>
will expose the following standard categories of integers:
Integer types having certain exact widths
Integer types having at least certain specified widths
Fastest integer types having at least certain specified widths
Integer types wide enough to hold pointers to objects
Integer types having greatest width
For instance,
Exact-width integer types
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's-complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
and defines, among others:
int8_t
int16_t
int32_t
uint8_t
uint16_t
uint32_t
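For instance, a couple of those exact-width types in use:
#include <stdint.h>

uint32_t mask = 0xFFFF0000u; /* exactly 32 bits on any platform that provides it */
int8_t delta = -7;           /* exactly 8 bits, two's complement */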
All of the answers here fail to mention the real reason for uint.
It's obviously a typedef of unsigned int, but that doesn't explain its usefulness.
The real question is,
Why would someone want to typedef a fundamental type to an abbreviated
version?
To save on typing?
No, they did it out of necessity.
Consider the C language; a language that does not have templates.
How would you go about stamping out your own vector that can hold any type?
You could do something with void pointers,
but a closer emulation of templates would have you resorting to macros.
So you would define your template vector:
#define define_vector(type) \
typedef struct vector_##type { \
impl \
} vector_##type;
Declare your types:
define_vector(int)
define_vector(float)
define_vector(unsigned int)
And upon generation, realize that the type names ought to be a single token:
typedef struct vector_int { impl } vector_int;
typedef struct vector_float { impl } vector_float;
typedef struct vector_unsigned int { impl } vector_unsigned int; /* syntax error */
So the typedef was born of necessity: a multi-word type name cannot survive token pasting, but a single-token alias like uint can.
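This is where the single-token alias pays off; a sketch of the fix, using the same hypothetical define_vector macro:
typedef unsigned int uint;
define_vector(uint) /* expands to: typedef struct vector_uint { impl } vector_uint; */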
unsigned int is a built-in (standard) type, so if you want your project to be cross-platform, always use unsigned int, as it is guaranteed to be supported by all compilers (hence being the standard).
uint is a possible and proper abbreviation for unsigned int. It is more readable. But: it is not standard C. You can define and use it (like all other defines) on your own responsibility.
But unfortunately, some system headers define uint too. I have found this in a sys/types.h from a current compiler (ARM):
# ifndef _POSIX_SOURCE
//....
typedef unsigned short ushort; /* System V compatibility */
typedef unsigned int uint; /* System V compatibility */
typedef unsigned long ulong; /* System V compatibility */
# endif /*!_POSIX_SOURCE */
It seems to be a concession to legacy sources programmed to the Unix System V standard. To switch off this undesired behaviour (because I want to
#define uint unsigned int
myself), I first set
#define _POSIX_SOURCE
A system header should not define things which are not standard. But there are many such things defined there, unfortunately.
See also my web page https://www.vishia.org/emc/html/Base/int_pack_endian.html#truean-uint-problem-admissibleness-of-system-definitions and https://www.vishia.org/emc.
I wonder: are typedef and #define the same in C?
typedef obeys scoping rules just like variables, whereas #define stays valid until the end of the compilation unit (or until a matching #undef).
Also, some things can be done with typedef that cannot be done with #define.
For example:
typedef int* int_p1;
int_p1 a, b, c; // a, b, c are all int pointers
#define int_p2 int*
int_p2 a, b, c; // only the first is a pointer, because int_p2
// is replaced with int*, producing: int* a, b, c
// which should be read as: int *a, b, c
typedef int a10[10];
a10 a, b, c; // create three 10-int arrays
typedef int (*func_p) (int);
func_p fp; // func_p is a pointer to a function that
// takes an int and returns an int
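The scoping point made at the start can be shown with a short sketch:
void f(void) {
    typedef int local_t; /* visible only inside f */
    local_t x = 0;
    (void)x;
}
/* local_t is not visible here; a #define would still be in effect */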
No.
#define is a preprocessor token: the compiler itself will never see it.
typedef is a compiler token: the preprocessor does not care about it.
You can use one or the other to achieve the same effect, but it's better to use the proper one for your needs
#define MY_TYPE int
typedef int My_Type;
When things get "hairy", using the proper tool makes it right
#define FX_TYPE void (*)(int)
typedef void (*stdfx)(int);
void fx_typ(stdfx fx); /* ok */
void fx_def(FX_TYPE fx); /* error */
No, they are not the same. For example:
#define INTPTR int*
...
INTPTR a, b;
After preprocessing, that line expands to
int* a, b;
Hopefully you see the problem; only a will have the type int *; b will be declared a plain int (because the * is associated with the declarator, not the type specifier).
Contrast that with
typedef int *INTPTR;
...
INTPTR a, b;
In this case, both a and b will have type int *.
There are whole classes of typedefs that cannot be emulated with a preprocessor macro, such as pointers to functions or arrays:
typedef int (*CALLBACK)(void);
typedef int *(*(*OBNOXIOUSFUNC)(void))[20];
...
CALLBACK aCallbackFunc; // aCallbackFunc is a pointer to a function
// returning int
OBNOXIOUSFUNC anObnoxiousFunc; // anObnoxiousFunc is a pointer to a function
// returning a pointer to a 20-element array
// of pointers to int
Try doing that with a preprocessor macro.
#define defines macros.
typedef defines types.
Now saying that, here are a few differences:
With #define you can define constants that can be used at compile time. The constants can be used with #ifdef to check how the code is compiled, and to specialize certain code according to compile parameters.
You can also use #define to declare miniature find-and-replace macro functions.
typedef can be used to give aliases to types (which you could probably do with #define as well), but it's safer given the find-and-replace nature of #define.
Besides that, you can use forward declaration with typedef, which allows you to declare a type that will be used but isn't yet defined at that point in the file you're writing (see the sketch below).
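A sketch of that forward-declaration idiom (node and node_t are hypothetical names):
typedef struct node node_t; /* struct node is not defined yet */

node_t *make_node(void);    /* the alias is already usable in prototypes */

struct node {
    int value;
    node_t *next; /* and inside the eventual definition too */
};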
Preprocessor macros ("#define's") are a lexical replacement tool a la "search and replace". They are entirely agnostic of the programming language and have no understanding what you're trying to do. You can think of them as a glorified copy/paste mechanic -- occasionally that's useful, but you should use it with care.
Typedefs are a C language feature that lets you create aliases for types. This is extremely useful to make complicated compound types (like structs and function pointers) readable and handlable (in C++ there are even situations where you must typedef a type).
You should always prefer language features over preprocessor macros when that's possible! So always use typedefs for types, and constant values for constants. That way, the compiler can actually interact with you meaningfully. Remember that the compiler is your friend, so you should tell it as much as possible. Preprocessor macros do the exact opposite by hiding your semantics from the compiler.
They are very different, although they are often used to implement custom data types (which is what I am assuming this question is all about).
As pmg mentioned, #define is handled by the pre-processor (like a cut-and-paste operation) before the compiler sees the code, and typedef is interpreted by the compiler.
One of the main differences (at least when it comes to defining data types) is that typedef gives the compiler, debugger and other tools a real name to work with. For example,
#define defType int
typedef int tdType;
defType x;
tdType y;
Here, the compiler sees variable x simply as an int, but variable y as a tdType. Note that in C a typedef only creates an alias, so tdType remains fully compatible with int and the compiler will accept a plain int wherever a tdType parameter is expected; the benefit is that the typedef name survives preprocessing, documents intent, and shows up in diagnostics and debug information, while defType is gone before the compiler ever runs.
Also, some debuggers have the ability to handle typedefs, which can be much more useful than having all custom types listed as their underlying primitive types (as it would be if #define was used instead).
No. typedef is a C keyword that creates an alias for a type. #define is a preprocessor instruction that performs text replacement prior to compilation. When the compiler gets to the code, the original "#defined" word is no longer there. #define is mostly used for macros and global constants.
AFAIK, no.
typedef helps you set up an "alias" to an existing data type, e.g. typedef char chr;.
#define is a preprocessor directive used to define macros or general pattern substitutions, e.g. #define MAX 100, which substitutes all occurrences of MAX with 100.
As mentioned above, there is a key difference between #define and typedef. The right way to think about that is to view a typedef as being a complete "encapsulated" type. It means that you cannot add to it after you have declared it.
You can extend a macro typename with other type specifiers, but not a typedef'd typename:
#define fruit int
unsigned fruit i; // works fine
typedef int fruit;
unsigned fruit i; // illegal
Also, a typedef'd name provides the type for every declarator in a declaration.
#define fruit int *
fruit apple, banana;
After macro expansion, the second line becomes:
int *apple, banana;
Here, apple is a pointer to an int, while banana is a plain int. In comparison, a typedef like this:
typedef char *fruit;
fruit apple, banana;
declares both apple and banana to be the same. The name on the front is different, but they are both pointers to a char.
Another reason to use typedef (which has only been mentioned briefly in other answers, and yet I think is the entire reason typedef was created) is to make debugging easier when using libraries that have custom types. As an example, I'll use a type-conversion error. Both snippets below will produce a compile-time diagnostic saying that a char is not comparable to a string, but in different ways.
typedef char letter;
letter el = 'e';
if(el == "hello");
The above code will print something like: variable "el" of type letter (aka "char") is not compatible with type "char *".
#define letter char
letter el = 'e';
if(el == "hello");
This code will instead print: variable "el" of type char is not compatible with type "char *".
This may seem silly because I'm defining "letter" as "char", but in more complex libraries this can be extremely confusing because pointers to objects like buttons, windows, sound servers, images, and lots of other things are defined as unsigned char *, which would only be debuggable as exactly that when using the #define method.
As everyone said above, they aren't the same. Most of the answers indicate typedef to be more advantageous than #define.
But let me put in a plus point for #define: when your code is extremely big and scattered across many files, it can be better to use #define; it helps readability - you can simply preprocess all the code (e.g. with gcc -E) to see the actual type definition of a variable at the place of its declaration itself.