When to use stdint.h's scalars in driver code - c

It has come to my attention that there seems to be no consistency or best practice on when to use the native scalar types (int, short, char) versus the ones provided by stdint.h: uint32_t, uint16_t, uint8_t.
This is bugging me a lot, because drivers form an essential part of a kernel that needs to be maintainable, consistent and stable.
Here is an illustrative example in GCC (used for a hobby project on the Raspberry Pi):
// using native scalars
struct fbinfo {
    unsigned width, height;
    unsigned vwidth, vheight;
    unsigned pitch, bits;
    int x, y;
    void *ptr;
    unsigned size;
} __attribute__((aligned(16)));
// using stdint scalars
struct fbinfo {
    uint32_t width, height;
    uint32_t vwidth, vheight;
    uint32_t pitch, bits;
    int32_t x, y;
    uint32_t ptr; // convert to void* in order to use it
    uint32_t size;
} __attribute__((aligned(16)));
To me, the first example seems more logical, because this piece of code is only intended to run on a Raspberry Pi; it would be pointless to run it on other hardware.
The second example seems more practical, because it is more descriptive: C does not guarantee much about the size of the native integer types. An int may be 16 bits or something else. uint32_t, uint_fast32_t and their variants do make guarantees about the exact or approximate size, e.g. exactly N or at least N bits.
The operating system development community tends to use the stdint types, while the Linux kernel uses several different conventions: u32, __u32, and endian-specific types like __le32.
What considerations should be taken into account when to choose a scalar type and when to use a typedef'd scalar type? Is it better to use native scalar types in the provided example or use stdint.h's?

1. Fixed width vs. fundamental types
Fixed width types are sometimes awkward to use. E.g. the printf() specifier for int32_t is PRIi32 and requires splitting the format string:
printk("foo=%" PRIi32 ", bar=%" PRIi32 "\n", foo, bar);
Fixed width types should/must be used when hardware is accessed directly; e.g. when writing DMA descriptors. But for simple register accesses, writel() or readl() functions can be used which work with fundamental types.
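For illustration, here is a minimal sketch of both situations, assuming Linux-style readl()/writel() accessors; the register offset, the enable bit and the descriptor layout are invented for this example:
#include <linux/io.h>
#include <linux/types.h>

#define REG_CTRL    0x00           /* hypothetical control register offset */
#define CTRL_ENABLE 0x1            /* hypothetical enable bit */

/* descriptor whose layout is dictated by (imaginary) hardware: fixed width types */
struct my_dma_desc {
    u32 src_addr;
    u32 dst_addr;
    u32 len;
    u32 flags;
};

static void my_enable_device(void __iomem *base)
{
    /* simple register access: a fundamental type is fine here */
    unsigned int ctrl = readl(base + REG_CTRL);

    writel(ctrl | CTRL_ENABLE, base + REG_CTRL);
}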
As a rule of thumb, when a certain memory layout is assumed (like the __attribute__((__aligned__(16))) in your example), fixed width types should be used.
Signed fixed width types (int32_t x,y in your example) might need double checking, whether their representation matches the hardware expectations.
NOTE that in your example, the second structure is architecture dependent because of
uint32_t ptr; // convert to void* in order to use it
Writing such a thing in plain C would be uintptr_t ptr, and in the kernel it is common to write
unsigned long ptr;
Alternatively, dma_addr_t might be a better type.
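Putting that together, a sketch of how the second struct from the question might look (note that uintptr_t changes the field's size on a 64-bit target, so this only matches the original layout where pointers are 32 bits wide):
#include <stdint.h>

struct fbinfo {
    uint32_t width, height;
    uint32_t vwidth, vheight;
    uint32_t pitch, bits;
    int32_t x, y;
    uintptr_t ptr;   /* or unsigned long / dma_addr_t inside the kernel */
    uint32_t size;
} __attribute__((aligned(16)));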
2. uint32_t vs. __u32
More than 10 years ago, Linus Torvalds objected to uint32_t because, at that time, non-C99 compilers were common and using such types in (exported) Linux headers would pollute the namespace.
But now, uint32_t and similar types are available everywhere (you cannot compile the kernel with a non-C99 compiler) and kernel header export has been improved significantly, so these arguments are gone.
It is a matter of personal preference whether to use the standard types or typedef'ed variants (which are framework dependent and differ from one framework to another).
3. uint_fastX_t and variants
They are not used in the kernel and I would avoid them. They combine the disadvantages of uint32_t (awkward usage) and int (variable width).
4. __le32 vs. __u32
Use the endian types when specification explicitly requires them (e.g. in network protocol implementations). This makes it easy to detect wrong usage (e.g. assignments like endian_variable = native_variable).
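A minimal sketch of that pattern with the kernel's helpers (the header fields and the magic value are made up; cpu_to_le32()/le32_to_cpu() do the conversions, and tools like sparse can then flag plain assignments between endian and native variables):
#include <linux/types.h>
#include <asm/byteorder.h>

struct proto_hdr {                 /* wire format: fields are little endian */
    __le32 magic;
    __le32 payload_len;
};

static void fill_hdr(struct proto_hdr *hdr, u32 len)
{
    hdr->magic = cpu_to_le32(0xCAFEF00D);
    hdr->payload_len = cpu_to_le32(len);
}

static u32 get_payload_len(const struct proto_hdr *hdr)
{
    return le32_to_cpu(hdr->payload_len);
}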
Do not use them e.g. for filling processor structures (e.g. DMA descriptors); some processors can run in both little and big endian mode, and native data types are usually the right way to write such information.

Related

Determine the fastest unsigned integer type for basic arithmetic calculations

I'm writing some code for calculating with arbitrarily large unsigned integers. This is just for fun and training, otherwise I'd use libgmp. My representation uses an array of unsigned integers, and for choosing the "base type", I use a typedef:
#include <limits.h>
#include <stdint.h>
typedef unsigned int hugeint_Uint;
typedef struct hugeint hugeint;
#define HUGEINT_ELEMENT_BITS (CHAR_BIT * sizeof(hugeint_Uint))
#define HUGEINT_INITIAL_ELEMENTS (256 / HUGEINT_ELEMENT_BITS)
struct hugeint
{
    size_t s;         // <- maximum number of elements
    size_t n;         // <- number of significant elements
    hugeint_Uint e[]; // <- elements of the number starting with least significant
};
The code is working fine, so I only show the part relevant to my question here.
I would like to pick a better "base type" than unsigned int, so the calculations are the fastest possible on the target system (e.g. a 64bit type when targeting x86_64, a 32bit type when targeting i686, an 8bit type when targeting avr_attiny, ...)
I thought that uint_fast8_t should do what I want. But I found out it doesn't, see e.g. here the relevant part of stdint.h from MinGW:
/* 7.18.1.3 Fastest minimum-width integer types
* Not actually guaranteed to be fastest for all purposes
* Here we use the exact-width types for 8 and 16-bit ints.
*/
typedef signed char int_fast8_t;
typedef unsigned char uint_fast8_t;
The comment is interesting: for which purpose would an unsigned char be faster than an unsigned int on win32? Well, the important thing is: uint_fast8_t will not do what I want.
So is there some good and portable way to find the fastest unsigned integer type?
It's not quite that black and white; processors may have different/specialized registers for certain operations, like AVX registers on x86_64, may operate most efficiently on half-sized registers or not have registers at all. The choice of the "fastest integer type" thus depends heavily on the actual calculation you need to perform.
Having said that, C99 defines uintmax_t which is meant to represent the maximum width unsigned integer type, but beware, it could be 64 bit simply because the compiler is able to emulate 64-bit math.
If you target commodity processors, size_t usually provides a good approximation for the "bitness" of the underlying hardware because it is directly tied to the memory addressing capability of the machine, and as such is most likely to be the most optimal size for integer math.
In any case you're going to have to test your solution on all hardware that you're planning to support.
It's a good idea to start your code with the largest integer type the platform has, uintmax_t. As has already been pointed out, this is not necessarily the fastest, but it most probably is. I'd rather say there are exceptions where this is not the case, but as a default, it is probably your best bet.
Be very careful to build the size granularity into expressions that the compiler can resolve at compile time, rather than runtime, for speed.
It is most probably a good idea to define the base type as something like
#define LGINT_BASETYPE uintmax_t
#define LGINT_GRANUL sizeof(LGINT_BASETYPE)
This will allow you to change the base type in one single place and adapt to different platforms quickly. That results in code that is easily moved to a new platform, but can still be adapted to the exceptional cases where the largest int type is not the most performant one (after you have proven that by measurement).
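For example, derived constants such as the bits per element, and sizes computed from them, can be expressed so that the compiler folds them at compile time. A sketch reusing the macros above (repeated here so the snippet stands alone; LGINT_ELEM_BITS and LGINT_ELEMS_FOR_BITS are my own names):
#include <limits.h>
#include <stdint.h>

#define LGINT_BASETYPE uintmax_t
#define LGINT_GRANUL sizeof(LGINT_BASETYPE)
#define LGINT_ELEM_BITS (CHAR_BIT * LGINT_GRANUL)

/* elements needed to hold `bits` bits; a constant expression whenever `bits` is one */
#define LGINT_ELEMS_FOR_BITS(bits) (((bits) + LGINT_ELEM_BITS - 1) / LGINT_ELEM_BITS)

/* the array dimension is resolved entirely at compile time */
static LGINT_BASETYPE scratch[LGINT_ELEMS_FOR_BITS(256)];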
As always, it does not make a lot of sense to think about optimal performance when designing your code. Start with a reasonable balance of "designed for optimization" and "designed for maintainability"; you might easily find out that the choice of base type is not really the most CPU-eating part of your code. In my experience, I was nearly always in for some surprises when comparing my guesses about where CPU time is spent to my measurements. Don't fall into the premature optimization trap.

When should I just use "int" versus more sign-specific or size-specific types?

I have a little VM for a programming language implemented in C. It supports being compiled under both 32-bit and 64-bit architectures as well as both C and C++.
I'm trying to make it compile cleanly with as many warnings enabled as possible. When I turn on CLANG_WARN_IMPLICIT_SIGN_CONVERSION, I get a cascade of new warnings.
I'd like to have a good strategy for when to use int versus either explicitly unsigned types, and/or explicitly sized ones. So far, I'm having trouble deciding what that strategy should be.
It's certainly true that mixing them—using mostly int for things like local variables and parameters and using narrower types for fields in structs—causes lots of implicit conversion problems.
I do like using more specifically sized types for struct fields because I like the idea of explicitly controlling memory usage for objects in the heap. Also, for hash tables, I rely on unsigned overflow when hashing, so it's nice if the hash table's size is stored as uint32_t.
But, if I try to use more specific types everywhere, I find myself in a maze of twisty casts everywhere.
What do other C projects do?
Just using int everywhere may seem tempting, since it minimizes the need for casting, but there are several potential pitfalls you should be aware of:
An int might be shorter than you expect. Even though, on most desktop platforms, an int is typically 32 bits, the C standard only guarantees a minimum length of 16 bits. Could your code ever need numbers larger than 2^15 − 1 = 32,767, even for temporary values? If so, don't use an int. (You may want to use a long instead; a long is guaranteed to be at least 32 bits.)
Even a long might not always be long enough. In particular, there is no guarantee that the length of an array (or of a string, which is a char array) fits in a long. Use size_t (or ptrdiff_t, if you need a signed difference) for those.
In particular, a size_t is defined to be large enough to hold any valid array index, whereas an int or even a long might not be. Thus, for example, when iterating over an array, your loop counter (and its initial / final values) should generally be a size_t, at least unless you know for sure that the array is short enough for a smaller type to work. (But be careful when iterating backwards: size_t is unsigned, so for(size_t i = n-1; i >= 0; i--) is an infinite loop! Using i != SIZE_MAX or i != (size_t) -1 should work, though; or use a do/while loop, but beware of the case n == 0!)
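For instance, a sketch of the i != SIZE_MAX idiom mentioned above; it also handles n == 0 correctly, because the initializer then wraps straight to SIZE_MAX and the loop body never runs:
#include <stddef.h>
#include <stdint.h>

void process_backwards(int *a, size_t n)
{
    /* i wraps to SIZE_MAX after the i == 0 iteration, which ends the loop */
    for (size_t i = n - 1; i != SIZE_MAX; i--) {
        /* ... use a[i] ... */
    }
}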
An int is signed. In particular, this means that int overflow is undefined behavior. If there's ever any risk that your values might legitimately overflow, don't use an int; use an unsigned int (or an unsigned long, or uintNN_t) instead.
Sometimes, you just need a fixed bit length. If you're interfacing with an ABI, or reading / writing a file format, that requires integers of a specific length, then that's the length you need to use. (Of course, in such situations, you may also need to worry about things like endianness, and so may sometimes have to resort to manually packing data byte-by-byte anyway.)
All that said, there are also reasons to avoid using the fixed-length types all the time: not only is int32_t awkward to type all the time, but forcing the compiler to always use 32-bit integers is not always optimal, particularly on platforms where the native int size might be, say, 64 bits. You could use, say, C99 int_fast32_t, but that's even more awkward to type.
Thus, here are my personal suggestions for maximum safety and portability:
Define your own integer types for casual use in a common header file, something like this:
#include <limits.h>
typedef int i16;
typedef unsigned int u16;
#if UINT_MAX >= 4294967295U
typedef int i32;
typedef unsigned int u32;
#else
typedef long i32;
typedef unsigned long u32;
#endif
Use these types for anything where the exact size of the type doesn't matter, as long as they're big enough. The type names I've suggested are both short and self-documenting, so they should be easy to use in casts where needed, and minimize the risk of errors due to using a too-narrow type.
Conveniently, the u32 and u16 types defined as above are guaranteed to be at least as wide as unsigned int, and thus can be used safely without having to worry about them being promoted to int and causing undefined overflow behavior.
Use size_t for all array sizes and indexing, but be careful when casting between it and any other integer types. Optionally, if you don't like to type so many underscores, typedef a more convenient alias for it too.
For calculations that assume overflow at a specific number of bits, either use uintNN_t, or just use u16 / u32 as defined above and explicit bitmasking with &. If you choose to use uintNN_t, make sure to protect yourself against unexpected promotion to int; one way to do that is with a macro like:
#define u(x) (0U + (x))
which should let you safely write e.g.:
uint32_t a = foo(), b = bar();
uint32_t c = u(a) * u(b); /* this is always unsigned multiply */
For external ABIs that require a specific integer length, again define a specific type, e.g.:
typedef int32_t fooint32; /* foo ABI needs 32-bit ints */
Again, this type name is self-documenting, with regard to both its size and its purpose.
If the ABI might actually require, say, 16- or 64-bit ints instead, depending on the platform and/or compile-time options, you can change the type definition to match (and rename the type to just fooint) — but then you really do need to be careful whenever you cast anything to or from that type, because it might overflow unexpectedly.
If your code has its own structures or file formats that require specific bitlengths, consider defining custom types for those too, exactly as if it was an external ABI. Or you could just use uintNN_t instead, but you'll lose a little bit of self-documentation that way.
For all these types, don't forget to also define the corresponding _MIN and _MAX constants for easy bounds checking. This might sound like a lot of work, but it's really just a couple of lines in a single header file.
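For instance, a sketch of such companion limits for the i32 / u32 typedefs above, using the same #if condition so the constants always match the chosen underlying type (the macro names are my own):
#include <limits.h>

#if UINT_MAX >= 4294967295U
#define I32_MIN INT_MIN
#define I32_MAX INT_MAX
#define U32_MAX UINT_MAX
#else
#define I32_MIN LONG_MIN
#define I32_MAX LONG_MAX
#define U32_MAX ULONG_MAX
#endif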
Finally, remember to be careful with integer math, especially overflows.
For example, keep in mind that the difference of two n-bit signed integers may not fit in an n-bit int. (It will fit into an n-bit unsigned int, if you know it's non-negative; but remember that you need to cast the inputs to an unsigned type before taking their difference to avoid undefined behavior!)
Similarly, to find the average of two integers (e.g. for a binary search), don't use avg = (lo + hi) / 2, but rather e.g. avg = lo + (hi + 0U - lo) / 2; the former will break if the sum overflows.
You seem to know what you are doing, judging from the linked source code, which I took a glance at.
You said it yourself - using "specific" types makes you have more casts. That's not an optimal route to take anyway. Use int as much as you can, for things that do not mandate a more specialized type.
The beauty of int is that it is abstracted over the types you speak of. It is optimal in all cases where you need not expose the construct to a system unaware of int. It is your own tool for abstracting the platform for your program(s). It may also yield speed, size and alignment advantages, depending on the platform.
In all other cases, e.g. where you want to deliberately stay close to machine specifications, int can and sometimes should be abandoned. Typical cases include network protocols where the data goes on the wire, and interoperability facilities - bridges of sorts between C and other languages, or kernel assembly routines accessing C structures. But don't forget that sometimes you would in fact want to use int even in these cases, as it follows the platform's own "native" or preferred word size, and you might want to rely on that very property.
With platform types like uint32_t, a kernel might want to use these (although it may not have to) in its data structures if these are accessed from both C and assembler, as the latter doesn't typically know what int is supposed to be.
To sum up, use int as much as possible and resort to moving from more abstract types to "machine" types (bytes/octets, words, etc) in any situation which may require so.
As to size_t and other "usage-suggestive" types - as long as the syntax follows the semantics inherent to the type - say, using size_t for, well, size values of all kinds - I would not contest it. But I would not liberally apply it to anything just because it is guaranteed to be the largest type (regardless of whether that is actually true). That's a hidden pitfall you don't want to step on later. Code has to be self-explanatory to the degree possible, I would say - having a size_t where none is naturally expected would raise eyebrows, for a good reason. Use size_t for sizes. Use offset_t for offsets. Use [u]intN_t for octets, words, and such things. And so on.
This is about applying semantics inherent in a particular C type, to your source code, and about the implications on the running program.
Also, as others have illustrated, don't shy away from typedef, as it gives you the power to efficiently define your own types, an abstraction facility I personally value. A good program's source code may not even expose a single raw int, relying instead on int aliased behind a multitude of purpose-defined types. I am not going to cover typedef here; the other answers hopefully will.
Keep large numbers that are used to access members of arrays, or to control buffers, as size_t.
For an example of a project that makes use of size_t, refer to GNU's dd.c, line 155.
Here are a few things I do. Not sure they're for everyone but they work for me.
Never use int or unsigned int directly. There always seems to be a more appropriately named type for the job.
If a variable needs to be a specific width (e.g. for a hardware register or to match a protocol) use a width-specific type (e.g. uint32_t).
For array iterators, where I want to access array elements 0 through n, this should also be unsigned (there is no reason to access any index less than 0) and I use one of the fast types (e.g. uint_fast16_t), selecting the type based on the minimum size required to access all array elements. For example, if I have a for loop that will iterate through 24 elements max, I'll use uint_fast8_t and let the compiler (or stdint.h, depending on how pedantic we want to get) decide which is the fastest type for that operation (see the sketch after this list).
Always use unsigned variables unless there is a specific reason for them to be signed.
If your unsigned variables and signed variables need to play together, use explicit casts and be aware of the consequences. (Luckily this will be minimized if you avoid using signed variables except where absolutely necessary.)
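Putting the iterator rule into code, a small sketch (the array, its length and all names are invented for illustration):
#include <stdint.h>

#define NUM_CHANNELS 24u                 /* hypothetical: fits easily in 8 bits */

static uint16_t channel_raw[NUM_CHANNELS];

uint32_t sum_channels(void)
{
    uint32_t sum = 0;
    /* 24 elements max, so uint_fast8_t is wide enough; stdint.h picks
       whatever width is fastest on this target */
    for (uint_fast8_t i = 0; i < NUM_CHANNELS; i++)
        sum += channel_raw[i];
    return sum;
}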
If you disagree with any of those or have recommended alternatives please let me know in the comments! That's the life of a software developer... we keep learning or we become irrelevant.
Always.
Unless you have specific reasons for using a more specific type, for example you're on a 16-bit platform and need integers greater than 32,767, or you need to ensure proper byte order and signedness for data exchange over a network or in a file (and unless you're resource constrained, consider transferring data in "plain text," meaning ASCII or UTF-8 if you prefer).
My experience has shown that "just use 'int'" is a good maxim to live by and makes it possible to turn out working, easily maintained, correct code quickly every time. But your specific situation may differ, so take this advice with a bit of well-deserved scrutiny.
Most of the time, using int is not ideal. The main reason is that int is signed, and signed overflow is undefined behaviour; signed integers can also be negative, something you don't need for most integers. Prefer unsigned integers. Secondly, data types reflect meaning and are a (very limited) way to document the range of values a variable may hold. If you use int, you imply that you expect the variable to sometimes hold negative values, and that these values probably do not always fit into 8 bits but always fit into INT_MAX, which can be as low as 32,767. Do not assume an int is 32 bits.
Always think about the possible values of a variable and choose the type accordingly. I use the following rules:
Use unsigned integers except when you need to be able to handle negative numbers.
If you want to index an array from the start, use size_t except when there are good reasons not to. Almost never use int for it: an int can be too small, and there is a high chance of creating a UB bug that isn't found during testing because you never tested with arrays large enough.
Same for array sizes and the sizes of other objects: prefer size_t.
If you need to index an array with negative indices, which you may need for image processing, prefer ptrdiff_t. But be aware that ptrdiff_t can be too small, although that is rare.
If you have arrays that never exceed a certain size, you may use uint_fastN_t, uintN_t, or uint_leastN_t types. This can make a lot of sense especially on a 8 bit microcontroller.
Sometimes, unsigned int can be used instead of uint_fast16_t, similarly int for int_fast16_t.
To handle the value of a single byte (or character, though this is not a real character, because with UTF-8 and Unicode a character sometimes needs more than one code point), use int. An int can store -1 if you need an indicator for "error" or "not set", and a character literal is of type int. (This is true for C; for C++ you may use a different strategy.) There is the extremely rare possibility that a machine uses sizeof(int)==1 && CHAR_MIN==0, where a byte cannot be handled with an int, but I have never seen such a machine.
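The classic instance of that rule is the standard I/O interface itself: getchar() returns an int so that every byte value can be distinguished from the negative sentinel EOF. A minimal example:
#include <stdio.h>

int main(void)
{
    long count = 0;
    int c;                        /* must be int, not char, or EOF may be missed */

    while ((c = getchar()) != EOF)
        count++;

    printf("%ld bytes\n", count);
    return 0;
}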
It can make sense to define your own types for different purposes.
Use explicit cast where casts are needed. This way the code is well defined and has the least amount of unexpected behaviour.
After a certain size, a project needs a list/enum of the native integer data types. You can use macros with the _Generic expression from C11, which only needs to handle bool, signed char, short, int, long, long long and their unsigned counterparts to get the underlying native type from a typedef'd one. This way your parsers and similar parts only need to handle 11 integer types, and not 56 standard integer types (if I counted correctly) plus a bunch of other non-standard ones.
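A sketch of such a _Generic-based classifier, reduced to a handful of types to keep it short (a real one would list all eleven; the enum and macro names are my own):
#include <stdbool.h>
#include <stdint.h>

enum native_kind { NK_BOOL, NK_INT, NK_UINT, NK_LONG, NK_ULONG, NK_OTHER };

/* maps any typedef back to a tag describing its underlying native type */
#define NATIVE_KIND(x) _Generic((x), \
    bool:          NK_BOOL,          \
    int:           NK_INT,           \
    unsigned int:  NK_UINT,          \
    long:          NK_LONG,          \
    unsigned long: NK_ULONG,         \
    default:       NK_OTHER)

typedef uint32_t my_id_t;   /* hypothetical project typedef */

/* NATIVE_KIND((my_id_t)0) evaluates to NK_UINT or NK_ULONG, depending on
   which native type uint32_t is on this platform */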

Software exposed Bit width in C

I have two questions:
Is there any method to specify or limit the bit widths used for integer variables in a C program?
Is there any way to monitor the actual bit usage for a variable in a C program? What I mean by bit usage is, in some programs when a register is allocated for a variable not all the bits of that register are used for calculations. Hence when a program is executed can we monitor how many bits in a register have been actually changed through out the execution of a program?
You can use fixed width (or guaranteed-at-least-this-many-bits) types in C as of the 1999 standard; see e.g. Wikipedia or any decent C reference. They are defined in the stdint.h header (cstdint in C++), and inttypes.h (cinttypes in C++) additionally provides format-specifier macros for them.
You certainly can check, for each computation, what the values of the variables could be, and limit the variables accordingly. But unless you are seriously strapped for space, I'd just forget about this. In many cases, using "just large enough" data types wastes space (and computation time) by having to cast small values to the natural widths for computation, and then cast back. Beware of premature optimization, and even more of optimizing the wrong code (measure whether the performance is enough, and if not, where modifications are worthwhile, before digging in to make the code "better").
You have limited control if you use <stdint.h>.
On most systems, it will provide:
int8_t and uint8_t for 8-bit integers.
int16_t and uint16_t for 16-bit integers.
int32_t and uint32_t for 32-bit integers.
int64_t and uint64_t for 64-bit integers.
You don't usually get other choices. However, you might be able to use a bit-field to get a more arbitrary size value:
typedef struct int24_t
{
    signed int b24:24;
} int24_t;
This might occupy more than 24 bits (probably 32 bits), but arithmetic will end up being 24-bit. You're not constrained to a power of 2, or even a multiple of 2:
typedef struct int13_t
{
    signed int b13:13;
} int13_t;
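A short usage sketch showing the effect (note that storing an out-of-range value into a signed bit-field is implementation-defined, so the wrap-around in the comment is what common compilers do, not a guarantee):
#include <stdio.h>

typedef struct int24_t
{
    signed int b24:24;
} int24_t;

int main(void)
{
    int24_t v = { 0x7FFFFF };   /* largest positive 24-bit value */
    v.b24 += 1;                 /* typically wraps to -0x800000 (-8388608) */
    printf("%d\n", v.b24);
    return 0;
}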

Using structs with attribute packed when parsing binary data

I have seen various code around where one reads data into a char or void buffer and then casts it to a struct. An example is parsing of file formats where data has fixed offsets.
Example:
struct some_format {
    char magic[4];
    uint32_t len;
    uint16_t foo;
};
struct some_format *sf = (struct some_format*) buf;
To be sure this is always valid, one needs to pack the struct by using __attribute__((packed)).
struct test {
    uint8_t a;
    uint8_t b;
    uint32_t c;
    uint8_t d[128];
} __attribute__((packed));
When reading big and complex file formats this surely makes things much simpler - typically reading media formats with structs having 30+ members, etc.
It is also easy to read in a huge buffer and cast to the proper type, for example:
struct mother {
    uint8_t a;
    uint8_t b;
    uint32_t offset_child;
};
struct child {
    ...
};
m = (struct mother*) buf;
c = (struct child*) ((uint8_t*)buf + m->offset_child);
Or:
read_big_buf(buf, 4096);
a = (struct a*) buf;
b = (struct b*) (buf + sizeof(struct a));
c = (struct c*) (buf + SOME_DEF);
...
It would also be easy to quickly write such structures to file.
My question is how good or bad this way of coding is. I am looking at various data
structures and would use the best way to handle this.
Is this how it is done? (As in: is this common practice.)
Is __attribute__((packed)) always safe?
Is it better to use sscanf? (What was I thinking about? Thanks @Amardeep.)
Is it better to make functions where one initializes the structure with casts and bit shifting?
etc.
As of now I use this mainly in data inspection tools, like listing all structures of a certain type with their values in a file format such as a media stream - information dumping tools.
It is how it is sometimes done. Packed is safe as long as you use it correctly. Using sscanf() would imply you are reading text data, which is a different use case than a binary image from a structure.
If your code does not require portability across compilers and/or platforms (CPU architectures), and your compiler has support for packed structures, then this is a perfectly legitimate way of accessing serialized data.
However, problems may arise if you try to generate data on one platform and use it on another due to:
Host Byte Order (Little Endian/Big Endian)
Different sizes for language primitive types (long can be 32 or 64 bits for example)
Code changes on one side but not the other.
There are libraries that simplify serialization/deserialization and handle most of these issues. The overhead of such operations is easier justified on systems that must span processes and hosts. However, if your structures are very complex, using a ser/des library may be justified simply due to ease of maintenance.
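One cheap guard against the "different primitive sizes" and "code changes on one side" problems is a compile-time check of the packed struct's size (C11 _Static_assert; the expected value 10 is simply 4 + 4 + 2 for the struct from the question):
#include <stdint.h>

struct some_format {
    char magic[4];
    uint32_t len;
    uint16_t foo;
} __attribute__((packed));

/* fails to compile if padding or a type size ever changes the wire layout */
_Static_assert(sizeof(struct some_format) == 10, "some_format layout changed");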
Is this how it is done?
I don't understand this question. Edit: you'd like to know if this is a common idiom. In codebases where dependency on GNU extensions is acceptable, yes, this is used quite frequently, since it's convenient.
is __attribute__((packed)) always safe?
For this use case, pretty much yes, except when it's unavailable.
Is it better to use sscanf.
No. Don't use scanf().
Is it better to make functions where one initiates structure with casts and bit shifting.
It's more portable. __attribute__((packed)) is a GNU extension, and not all compilers support it (although I'm wondering who cares about compilers other than GCC and Clang, but theoretically, this still is an issue).
One of my gripes about C language standards to date is that they impose enough rules about how compilers have to lay out structures and bit fields to preclude what might otherwise be useful optimizations [e.g. on a system with power-of-two integer sizes, a compiler would be forbidden from packing eight three-bit fields into three bytes], but do not provide any means by which a programmer can specify an explicit struct layout. I used to frequently use byte pointers to read out data from structures, but I don't favor such techniques now as much as I used to. When speed isn't critical, I prefer nowadays to use a family of functions which read or write multi-byte types at multiple consecutive memory locations using whatever endianness is needed [e.g. void storeI32LE(uint8_t **dest, int32_t dat) or int32_t readI32LE(uint8_t const **src)]. Such code will not be as efficient as what a compiler might be able to write in cases where the processor has the correct endianness and either the structure members are aligned or the processor supports unaligned accesses, but code using such methods may easily be ported to any processor regardless of its native alignment and endianness.
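A sketch of what such a pair of functions might look like; the names and the pointer-to-pointer signatures come from the paragraph above, and I read the double pointer as a cursor that advances past the bytes just read or written:
#include <stdint.h>

/* write a 32-bit value as four little-endian bytes and advance *dest */
void storeI32LE(uint8_t **dest, int32_t dat)
{
    uint32_t u = (uint32_t)dat;

    (*dest)[0] = (uint8_t)(u);
    (*dest)[1] = (uint8_t)(u >> 8);
    (*dest)[2] = (uint8_t)(u >> 16);
    (*dest)[3] = (uint8_t)(u >> 24);
    *dest += 4;
}

/* read four little-endian bytes as a 32-bit value and advance *src;
   the final conversion to int32_t is implementation-defined for values
   above INT32_MAX, but behaves as expected on two's-complement targets */
int32_t readI32LE(uint8_t const **src)
{
    uint32_t u = (uint32_t)(*src)[0]
               | (uint32_t)(*src)[1] << 8
               | (uint32_t)(*src)[2] << 16
               | (uint32_t)(*src)[3] << 24;

    *src += 4;
    return (int32_t)u;
}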

Primitive data type in C to represent the WORD size of a CPU-arch

I observed that the size of long is always equal to the WORD size of any given CPU architecture. Is that true for all architectures? I am looking for a portable way to represent a WORD-sized variable in C.
C doesn't deal with instructions. In C99, you can copy any size struct using a single assignment:
struct huge { int data[1 << 20]; };
struct huge a, b;
a = b;
With a smart compiler, this should generate the fastest (single-threaded, though in the future hopefully multi-threaded) code to perform the copy.
You can use the int_fast8_t type if you want the "fastest possible" integer type as defined by the vendor. This will likely correspond with the word size, but it's certainly not guaranteed to even be single-instruction-writable.
I think your best option would be to default to one type (e.g. int) and use the C preprocessor to optimize for certain CPU's.
No. In fact, the scalar and vector units often have different word sizes. And then there are string instructions and built-in DMA controllers with oddball capabilities.
If you want to copy data fast, memcpy from the platform's standard C library is usually the fastest.
Under Windows, sizeof(long) is 4, even on 64-bit versions of Windows.
I think the nearest answers you'll get are...
int and unsigned int often (but not always) match the register width of the machine.
there's a type which is an integer-the-same-size-as-a-pointer, spelled intptr_t and available from stdint.h. This should obviously match the address width for your architecture, though I don't know that there's any guarantee.
However, there often really isn't a single word-size for the architecture - there can be registers with different widths (e.g. the "normal" vs. MMX registers in Intel x86), the register width often doesn't match the bus width, addresses and data may be different widths and so on.
No, the standard has no such type (one that maximizes memory throughput).
But it does say that int should have the natural size suggested by the architecture, which usually makes it the fastest type for ALU operations.
Things get more complicated in the embedded world. AFAIK, the C51 is an 8-bit processor, but in Keil C for the C51, long is 4 bytes. I think it's compiler dependent.
