Difference between UINT32_C and uint32_t

As far as I know, the suffix t in uint32_t denotes a type name, but I'd like to know what the C in UINT32_C stands for and what the difference is.

UINT32_C is a macro which expands to an integer constant of type uint_least32_t. For example:
UINT32_C(123) // might expand to 123UL on a system where uint_least32_t is unsigned long,
// or to just 123U, if uint_least32_t is unsigned int.
7.20.4.1 Macros for minimum-width integer constants
The macro INTN_C(value) shall expand to an integer constant expression
corresponding to the type int_leastN_t. The macro UINTN_C(value) shall expand
to an integer constant expression corresponding to the type uint_leastN_t. For
example, if uint_least64_t is a name for the type unsigned long long int,
then UINT64_C(0x123) might expand to the integer constant 0x123ULL.
It is thus possible that this constant is wider than 32 bits on some rare systems.
But on a system where the integer types are multiples of 8 bits with two's-complement representation (most modern systems), and where uint32_t exists, this creates a 32-bit constant.
They are all defined in stdint.h and have been part of the C standard since C99.

UINT32_C is a macro for writing a constant of type uint_least32_t. Such a constant is suitable, e.g., for initializing a uint32_t variable. I found, for example, the following definition in avr-libc (this is for the AVR target, just as an example):
#define UINT32_C(value) __CONCAT(value, UL)
So, when you write
UINT32_C(25)
it's expanded to
25UL
UL is the suffix for an unsigned long integer constant. The macro is useful because there is no standard suffix for uint32_t: it lets you write the constant without knowing that, on your target, uint32_t happens to be a typedef for, e.g., unsigned long. On other targets it will be defined in a different way.
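As a small illustration (my own sketch, not part of the answer above), the macro matters when a plain 1 might be too narrow for a shift:
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    /* Plain 1 has type int, which may be only 16 bits wide on small targets;
       UINT32_C(1) is an unsigned constant of at least 32 bits, so this shift
       is well defined everywhere uint32_t exists. */
    uint32_t top_bit = UINT32_C(1) << 31;
    printf("0x%" PRIX32 "\n", top_bit); /* prints 0x80000000 */
    return 0;
}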

These constants are defined something like this:
#define UINT32_C(value) (value##UL)
You can only put constant values as the macro argument; otherwise it won't compile:
UINT32_C(10); // compiles
uint32_t x = 10;
UINT32_C(x); // does not compile

I don't know about Keil, but at least on Linux UINT32_C is a macro to create a uint32_t literal.
And as mentioned by others, uint32_t is a type defined as of C99 in stdint.h.

It is a macro that appends a suffix to create the constant, for example: #define UINT32_C(c) c##UL

Related

Are exact-width integers in Cython actually platform dependent?

In Cython one can use exact-width integral types by importing them from stdint, e.g.
from libc.stdint cimport int32_t
Looking through stdint.pxd, we see that int32_t is defined as
cdef extern from "<stdint.h>" nogil:
    ...
    ctypedef signed int int32_t
Does this mean that if I use int32_t in my Cython code, this type is just an alias for signed int (int), which might in fact be only 16 bits wide?
The issue is the same for all the other integral types.
They should be fine.
The typedefs that are really used come from the C stdint.h header, which are almost certainly right.
The Cython typedef
ctypedef signed int int32_t
Is really just so that Cython understands that the type is an integer and that it's signed. It isn't what's actually used in the generated C code. Since it's in a cdef extern block it's telling Cython "a typedef like this exists" rather than acting as the real definition.
No.
On a platform where signed int is not 32 bits wide, int32_t would be typedef'ed to a type that actually is 32 bits wide.
If no such type is available -- e.g. on a platform where the maximum int width is 16, or where all ints are 64 bits, or where CHAR_BIT does not equal 8 -- the exact-width types would not be defined. (Yes, the exact-width types are optional. That is why there are least-width types as well.)
Disclaimer: This is speaking from a purely C perspective. I have no experience with Cython whatsoever. But it would be very surprising (and a bug) if this would not be covered adequately in Cython as well.
And as @JörgWMittag points out in his comment, the alternative is of course to simply not support any platform where signed int isn't 32 bits wide.
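From the C side, the guarantee can be made explicit; this is a minimal sketch of mine (it assumes a C11 compiler for _Static_assert), not Cython code:
#include <stdint.h>
#include <limits.h>

/* If int32_t is defined at all, the standard requires it to be exactly
   32 bits wide with no padding bits, so this assertion cannot fail on a
   conforming implementation. */
_Static_assert(sizeof(int32_t) * CHAR_BIT == 32, "int32_t is not exactly 32 bits");

int main(void) { return 0; }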

Why do we have "EventMaskType" twice in the last line?

I found the following C code in OS source code files. These 3 lines were in three different source files. Why do we have "EventMaskType" twice in the last line? It gives me the same output even if I remove the first occurrence of "EventMaskType" from the third line.
typedef unsigned long uint64_t;
typedef uint64_t EventMaskType;
#define EVENT_MASK_OsEvent1 EventMaskType((EventMaskType)1u<<1)
The first line defines the type unsigned long to also be known under the name uint64_t. As pointed out in the comments, this type should really come from the standard header stdint.h.
The second line defines EventMaskType to be an alias of the type uint64_t (and as such transitively an alias of unsigned long).
The third line defines a symbol macro for the preprocessor, so that every (following) occurrence of the token EVENT_MASK_OsEvent1 will be replaced by EventMaskType((EventMaskType)1u<<1).
That in turn is the value 1u (an unsigned int), which is cast using a "classic C-style cast" to a value of type EventMaskType (which is an unsigned long, so instead of the cast one could also write 1ul). This value is then shifted one bit to the left (result 2ul) and finally cast using a functional-style cast to EventMaskType ...
EventMaskType( /* ... */)
... which doesn't change anything, because it's already of that type.
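A minimal C-only sketch of my own (the outer EventMaskType(...) is a C++-style functional cast and is not valid in plain C, so only the C-style cast is kept here):
#include <stdint.h>

typedef uint64_t EventMaskType;

/* 1u cast to EventMaskType and shifted left by one, i.e. the value 2 */
#define EVENT_MASK_OsEvent1 ((EventMaskType)1u << 1)

int main(void) {
    EventMaskType pending = 0;
    pending |= EVENT_MASK_OsEvent1; /* sets bit 1 */
    return pending == 2 ? 0 : 1;    /* returns 0 */
}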

Why was the boolean data type not implemented in C

One of my friends asked why there is no Boolean data type in the C programming language. I did a bit of searching and reading, and found a few questions and answers on Stack Overflow saying that:
All data types should be addressable, and a bit cannot be addressed.
The basic data structure at the hardware level of mainstream CPUs is a byte. Operating on bits in these CPUs require additional processing.
We can use a bool in this manner
#define bool int
#define TRUE 1
#define FALSE 0
or use typedefs.
But my question is this: why wasn't it implemented as a data type in C, even after so many years? Doesn't it make sense to implement a one-byte data type to store a boolean value rather than using int or short explicitly?
That's not true any more. The built-in boolean type, aka _Bool, has been available since C99. If you include stdbool.h, its alias bool is also there for you.
_Bool is a true native type, not an alias of int. As for its size, the standard only specifies it's large enough to store 0 and 1. But in practice, most compilers do make its size 1:
For example, this code snippet on ideone outputs 1:
#include <stdio.h>
#include <stdbool.h>
int main(void) {
    bool b = true;
    printf("size of b: %zu\n", sizeof(b));
    return 0;
}
C99 added support for the boolean type _Bool; it is not simply a typedef and it does not have to be the same size as int. From the draft C99 standard, section 6.2.5 Types:
An object declared as type _Bool is large enough to store the values 0 and 1.
We have convenience macros through the stdbool.h header. We can see this by going to the draft C99 standard, section 7.16 Boolean type and values, which says:
The header defines four macros.
The macro
bool
expands to _Bool.
The remaining three macros are suitable for use in #if preprocessing directives. They are
true
which expands to the integer constant 1,
false
which expands to the integer constant 0, and
__bool_true_false_are_defined
which expands to the integer constant 1.
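To illustrate the last point with a small sketch of my own (the message strings are made up): since these macros expand to integer constants, the preprocessor can test them directly:
#include <stdio.h>
#include <stdbool.h>

/* true, false and __bool_true_false_are_defined expand to integer constants,
   so they can be used in #if directives. */
#if __bool_true_false_are_defined && true
#define BOOL_MSG "stdbool.h macros are available"
#else
#define BOOL_MSG "no stdbool.h macros"
#endif

int main(void) {
    printf("%s\n", BOOL_MSG);
    return 0;
}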
Since the data types are predefined, we cannot use a "bool" data type, as it is not present in the documentation.

Integer types in C

Suppose I wish to write a C program (C99 or C2011) that I want to be completely portable and not tied to a particular architecture.
It seems that I would then want to make a clean break from the old integer types (int, long, short) and friends and use only int8_t, uint8_t, int32_t and so on (perhaps using the least and fast versions as well).
What then is the return type of main? Or must we stick with int? Is it required by the standard to be int?
GCC-4.2 allows me to write
#include <stdint.h>
#include <stdio.h>
int32_t main() {
    printf("Hello\n");
    return 0;
}
but I cannot use uint32_t or even int8_t because then I get
hello.c:3: warning: return type of ‘main’ is not ‘int’
This is because of a typedef, no doubt. It seems this is one case where we are stuck with having to use the unspecified-size types, since it's not truly portable unless we leave the return type up to the target architecture. Is this interpretation correct? It seems odd to have "just one" plain old int in the code base, but I am happy to be pragmatic.
Suppose I wish to write a C program (C99 or C2011) that I want to be completely portable and not tied to a particular architecture.
It seems that I would then want to make a clean break from the old integer types (int, long, short) and friends and use only int8_t, uint8_t, int32_t and so on (perhaps using the least and fast versions as well).
These two statements are contradictory. That's because whether uint32_t, uint8_t, and so on are available at all is actually implementation-defined (C11, 7.20.1.1/3: Exact-width integer types).
If you want your program to be truly portable, you must use the built-in types (int, long, etc.) and stick to the minimum ranges defined in the C standard (i.e., C11, 5.2.4.2.1: Sizes of integer types).
For example, the standard says that both short and int must cover at least the range -32767 to 32767. So if you want to store a larger value, say 42000, you'd use a long instead.
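A small sketch of my own showing that rule, choosing a type by its guaranteed minimum range rather than by its assumed width:
#include <stdio.h>

int main(void) {
    /* int is only guaranteed to hold -32767..32767, so 42000 might not fit;
       long is guaranteed to hold at least -2147483647..2147483647. */
    long altitude = 42000L;
    printf("%ld\n", altitude);
    return 0;
}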
The return type of main is required by the Standard to be int in C89, C99 and in C11.
Now, the exact-width integer types are aliases for standard integer types. So if you use the right alias for int, it will still be valid.
For example:
int32_t main(void)
if int32_t is a typedef for int.

#define SOMETHING of type int16_t

How does one define different types of ints?
I have the following
struct movCommand
{
    uint8_t type;
    uint8_t order;
    int16_t height;
    uint16_t distance;
    int16_t yaw;
};
and need to define these according to the types they are.
What is the correct syntax for #define when selecting the type for the define?
EDIT :
It looks like my question has been misunderstood.
I want to do this #define LANDING_COMMAND "2"
But I want to set the type of the landing command because it needs to be int16_t
You do not use #define for this. You #include <stdint.h>.
Rather than using the #define directive, I'd use a typedef, which is how the standard library defines them inside <stdint.h> (at least on a C99-compatible platform). If you look in that header, you'll see how they're defined for that specific platform. Typical typedefs will be:
typedef unsigned char uint8_t;
typedef signed char int8_t;
typedef unsigned short uint16_t;
typedef signed short int16_t;
typedef unsigned int uint32_t;
typedef int int32_t;
//... etc., etc.
There are many more typedefs defined inside the header file, including 64-bit types, etc.
If you are working with C99, you can use the typedefs from <stdint.h> or <inttypes.h> (and <inttypes.h> might be available even if <stdint.h> is not - in non-C99 compilers).
If they are available (they usually are), all the types you show will be provided by those headers.
In general, a typedef is preferable to a #define.
With regard to your new question: a #define is replaced literally with the text you provide. So
#define LANDING_COMMAND "2";
will replace all uses of LANDING_COMMAND with "2"; in the program text. This is probably not what you want.
First, preprocessing directives are not part of the C language, they're part of the preprocessor. Since they're not part of C, they're not statements, so they don't end with ;. If you leave that in, it will likely cause problems if you intend to do things like func(LANDING_COMMAND);.
Second, "2" is of type char *, which is not convertible to int16_t with any safety. You need to use a literal 2 for the numeric value.
Lastly, to make it type int16_t, you'll need to provide a cast: ((int16_t)2). The macro INT16_C(2) could also be used, but it expands to a constant (with the appropriate suffix) corresponding to the type int_least16_t, which is close but no cigar. stdint.h only provides macros for constants of the [u]int_leastN_t types and the [u]intmax_t types, not more generally for the [u]intN_t or [u]int_fastN_t types. Why it doesn't is beyond me.
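A short sketch of my own contrasting the two forms (the LANDING_COMMAND name is taken from the question; everything else is illustrative):
#include <stdint.h>

#define LANDING_COMMAND ((int16_t)2)       /* a constant value of type int16_t, via a cast */
#define LANDING_COMMAND_LEAST INT16_C(2)   /* a constant corresponding to int_least16_t */

int main(void) {
    int16_t yaw = LANDING_COMMAND;   /* value 2 */
    yaw = LANDING_COMMAND_LEAST;     /* also 2; int_least16_t is at least 16 bits wide */
    return yaw == 2 ? 0 : 1;         /* returns 0 */
}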
Including stdint.h gives you 8-, 16-, 32-, and 64-bit signed and unsigned integer types.
http://en.wikipedia.org/wiki/Stdint.h
You can't do what you describe. Other answers have indicated workarounds. As for your specific question, from the MSDN site:
Expressions must have integral type and can include only integer constants, character constants, and the defined operator.
The expression cannot use sizeof or a type-cast operator.
#define doesn't have a type. It's exactly the same as find/replace in your editor.
You can do
#define LANDING_COMMAND 2
...
my_movCommand.yaw = LANDING_COMMAND;
The compiler will do the right type conversions for you, but if you insist on the type int16_t, then:
#define LANDING_COMMAND ((int16_t)2)
