I am developing a program for an STM32Fx cortex-M3 series processor. In stdint.h the following are defined:
typedef unsigned int uint_fast32_t;
typedef uint32_t uint_least32_t;
typedef unsigned long uint32_t;
As I understand it:
[u]int_fast[n]_t will give you the fastest data type of at least n bits.
[u]int_least[n]_t will give you the smallest data type of at least n bits.
[u]int[n]_t will give you the data type of exactly n bits.
Also, as far as I know, sizeof(unsigned int) <= sizeof(unsigned long) and UINT_MAX <= ULONG_MAX, always.
Thus I would expect uint_fast32_t to be a data type with a size equal to or greater than the size of uint32_t.
In the case of the cortex-M3 sizeof(unsigned int) == sizeof(unsigned long) == 4. So the above definitions are 'correct' in terms of size.
But why are they not defined in a way that is consistent with the names and logical sizes of the underlying data types i.e.
typedef unsigned long uint_fast32_t;
typedef unsigned int uint_least32_t;
typedef uint_fast32_t uint32_t;
Can someone please clarify the selection of the underlying types?
Given that 'long' and 'int' are the same size, why not use the same data type for all three definitions?
typedef unsigned int uint_fast32_t;
typedef unsigned int uint_least32_t;
typedef unsigned int uint32_t;
The thing is, it is only guaranteed that
sizeof(long) >= sizeof(int)
and it is not guaranteed that long is actually any longer. On a lot of systems, int is as big as long.
See my answer to your other question.
Basically, it doesn't matter which type is used. Given that int and long are the same size and have the same representation and other characteristics, the implementer can choose either type for int32_t, int_fast32_t, and int_least32_t, and likewise for the corresponding unsigned versions.
(It's possible that the particular choices could be influenced by a perceived need to use the same header for implementations with different sizes for int and long, but I don't see how the particular definitions you quoted would achieve that.)
As long as the types are the right size and meet all the other requirements imposed by the standard, and as long as you don't write code that depends on, for example, int32_t being compatible with int, or with long, it doesn't matter.
The particular choices made were likely an arbitrary whim of the implementer -- which is perfectly acceptable. Or perhaps that header file was modified by two or more developers who had different ideas about which type is best.
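To make the compatibility caveat concrete, here is a minimal sketch (take_uint is a made-up helper): depending on whether the implementation defines uint32_t as unsigned int or unsigned long, the call below either compiles cleanly or draws an incompatible-pointer-type diagnostic, even though both types are 32 bits wide.
#include <stdint.h>
#include <stdio.h>

static void take_uint(unsigned int *p) { printf("%u\n", *p); }

int main(void)
{
    uint32_t x = 42;
    take_uint(&x); /* fine if uint32_t is unsigned int; a diagnostic
                      if it is unsigned long, despite identical size */
    return 0;
}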
Related
I want to use the function strtoull(...), but do I really have to type out unsigned long long whenever I use it?
The largest integer typedefs I could find were uint64_t and size_t, but neither is the same as unsigned long long.
I believe unsigned long long takes up too much space. Is there some sort of official, recognized shortcut for it among the community?
Do I have to make my own type? What would be a good name for it? u128int_t or uLLong?
You say you "need a type for unsigned long long int", but unsigned long long int is a type.
Apparently your concern is that the name unsigned long long int is too long to type. You have a point, but defining an alias for it is likely to cause more confusion than it's worth. Every knowledgeable C programmer knows what unsigned long long int means. Nobody knows what your alias means without looking it up, and even then they can't be sure the meaning won't change as the software evolves. If you want to use unsigned long long int, it's best to use unsigned long long int (or unsigned long long).
You can define your own typedef. Remember that typedef doesn't define a new type. It only defines a new name for an existing type.
uint64_t, defined in <stdint.h>, may or may not be an alias for unsigned long long int, depending on the implementation. unsigned long long int is guaranteed to be at least 64 bits, but it could be wider (though I know of no implementations where it's not exactly 64 bits wide). Similarly, uintmax_t is likely to be unsigned long long int, but that's not guaranteed either.
You can define an alias if you like, but I wouldn't recommend it unless you need a name for some type that just happens to be defined as unsigned long long int. If you can give your typedef a name that has meaning within your application, that's probably a good idea. If its only meaning is "a shorter name for unsigned long long", I'd advise against it.
If you need a 64-bit unsigned integer type, uint64_t already exists; just use that. (And if the implementation doesn't provide a type with the required characteristics, then it won't define uint64_t and the error message when you try to use it will tell you that.)
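For illustration, a minimal sketch of calling strtoull while simply spelling out the type it returns; nothing about it requires an alias:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *text = "18446744073709551615";
    unsigned long long value = strtoull(text, NULL, 10);
    printf("%llu\n", value);
    return 0;
}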
Do I have to make my own type?
You can typedef whatever name you wish for whatever type (within the defined naming rules, of course). Keep in mind that this only creates an alias for the original type; hiding predefined types behind typedefs is not universally accepted, though it is legal.
u128int_t or uLLong?
Taking the above paragraph into consideration, uLLong is perfectly fine. As of today there is no primitive 128-bit-wide type in C, so u128int_t would be misleading; I would avoid it.
uint64_t is guaranteed to have exactly 64 bits; unsigned long long int is not. It must have at least 64 bits, but no rule guarantees that it has only that.
Some compilers (for example gcc) support 128-bit integers as an extension. For example:
__int128_t mul(__int128_t x, __int128_t y)
{
    return x * y;
}
https://godbolt.org/z/c8bW1xYdh
I read that stdint.h is used for portability, but I'm confused.
If I write a program on a 32-bit system, uint32_t (unsigned int) is 4 bytes.
But when this program is built for a 16-bit system, int is 2 bytes, so uint32_t (unsigned int) would be 2 bytes.
I think portability is not guaranteed in this case.
Is there anything I am understanding wrong?
Is there anything I am understanding wrong?
Yes.
Type uint32_t is always an unsigned integer type with exactly 32 bits, none of them padding bits. On many modern systems that corresponds to type unsigned int, but on others it might correspond to a different type, such as unsigned long int. On systems that do not have a type with the correct properties, it is not defined at all.
The point of uint32_t and the other explicit-width data types from stdint.h is exactly to address the issue you raise, that (for example) unsigned int has a different size on different machines.
A uint32_t and a uint16_t are types distinct from int.
While the size of int may vary, a uint32_t is always 32 bit and a uint16_t is always 16 bit.
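Because uint32_t may map to different basic types on different platforms, <inttypes.h> supplies matching printf format macros. A minimal sketch:
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 4000000000u; /* always fits: exactly 32 bits */
    uint16_t b = 65535;       /* always fits: exactly 16 bits */
    printf("a = %" PRIu32 ", b = %" PRIu16 "\n", a, b);
    return 0;
}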
I came across the data type int32_t in a C program recently. I know that it stores 32 bits, but don't int and int32 do the same?
Also, I want to use char in a program. Can I use int8_t instead? What is the difference?
To summarize: what is the difference between int32, int, int32_t, int8 and int8_t in C?
Between int32 and int32_t (and likewise between int8 and int8_t) the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32 -- the latter (if they exist at all) probably come from some other header or library, and most likely predate the addition of int8_t and int32_t in C99.
Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and although a 64-bit implementation could make it 64 bits, most keep it at 32).
On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.
Oops -- missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits), but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need to ensure that it is either signed or unsigned, you need to specify that explicitly.
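A minimal sketch of that oddity; the value printed for c depends on the compiler (and on flags such as gcc's -funsigned-char), while sc and uc are fixed:
#include <stdio.h>

int main(void)
{
    char          c  = (char)0xFF;        /* sign is implementation-defined */
    signed char   sc = (signed char)0xFF; /* -1 on two's complement */
    unsigned char uc = 0xFF;              /* always 255 */
    printf("c = %d, sc = %d, uc = %d\n", c, sc, uc);
    return 0;
}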
The _t types are typedefs defined in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists; int, on the other hand, is guaranteed to exist.
Always keep in mind that 'size' is variable if not explicitly specified, so if you declare
int i = 10;
On some systems the compiler may give you a 16-bit integer, and on others a 32-bit integer (or a 64-bit integer on newer systems).
In embedded environments this may lead to weird results (especially while handling memory-mapped I/O; or consider a simple array situation), so it is highly recommended to use fixed-size variables. On legacy systems you may come across
typedef short INT16;
typedef int INT32;
typedef long INT64;
Starting from C99, the designers added the stdint.h header file, which provides a standard set of similar typedefs.
On a Windows-based system, you may see entries in the stdint.h header file like
typedef signed char int8_t;
typedef signed short int16_t;
typedef signed int int32_t;
typedef unsigned char uint8_t;
There is quite a bit more to it, such as the minimum-width and exact-width integer types; I think it is not a bad idea to explore stdint.h for a better understanding.
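A short sketch contrasting the three families; on a typical 32-bit target all three may print the same size, but only the exact-width one is guaranteed to:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t       exact = 1; /* exactly 32 bits (optional in the standard) */
    int_least32_t least = 2; /* smallest type with at least 32 bits */
    int_fast32_t  fast  = 3; /* "fastest" type with at least 32 bits */
    printf("%u %u %u\n", (unsigned)sizeof exact,
           (unsigned)sizeof least, (unsigned)sizeof fast);
    return 0;
}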
I am trying to port to a SPARC architecture a C program that has the following type declarations:
#include <stdint.h>
typedef uint32_t WORD ;
typedef uint64_t DWORD ;
The trouble is that the compiler tells me that stdint.h can't be found. Hence,
I redefined those data types as follows:
typedef unsigned int WORD;
typedef unsigned long DWORD;
This seems to me the straightforward declaration, but the program is not behaving as it should. Did I maybe miss something?
Thanks
<stdint.h> and the types uint32_t and uint64_t are "new" in ISO/IEC 9899:1999. Your compiler may only conform to the previous version of the standard.
If you are sure that unsigned int and unsigned long are 32-bit and 64-bit respectively, then you shouldn't have any problems (at least not ones due to the typedefs themselves). As it is, this may not be the case. Do you know (or can you find out) whether your compiler supports unsigned long long?
I'm guessing that unsigned int is probably 32-bit; how old is your SPARC?
If your compiler/OS does not have <stdint.h>, then the best thing to do is implement your own version of this header rather than modify the code you're trying to port. You probably only need a subset of the types that are normally defined in <stdint.h>, e.g.:
//
// stdint.h
//
typedef int int32_t;                  /* signed 32-bit int */
typedef unsigned long long uint64_t;  /* unsigned 64-bit int */
(obviously you need to know the sizes of the various integer types on your particular platform to do this correctly).
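Since a pre-C99 compiler will also lack _Static_assert, one way to catch a wrong guess at compile time is the classic negative-size-array trick. A sketch, assuming the sizes chosen above (the assert_* names are made up):
/* Each array gets a negative size, and compilation fails,
   if the corresponding typedef has the wrong width. */
typedef char assert_int32_size[(sizeof(int32_t) == 4) ? 1 : -1];
typedef char assert_uint64_size[(sizeof(uint64_t) == 8) ? 1 : -1];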
So, you need an integer that's 32 bit and another that's 64 bit.
It might be that int and long are the same size on your architecture; if your compiler supports long long, that might be 64-bit while int might be 32-bit. Check your compiler docs for what it supports and whether it has any extensions (e.g. some compilers provide an __int64 type). This could be what you need:
typedef unsigned int WORD;
typedef unsigned long long DWORD;
Anyway, I'd write a small program to verify the sizes of the integer types on your host so you can pick the correct ones; that is, print sizeof(int), sizeof(long), and so on. (On a SPARC host CHAR_BIT will be 8, so everything is at least a multiple of 8 bits.)
Also, since you're porting to a SPARC host, make sure your code is not messing up somewhere regarding endianness.
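A sketch of such a probe program, kept to C89 features since the compiler evidently predates C99 (remove the long long line if the compiler rejects it):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int probe = 1;
    unsigned char *p = (unsigned char *)&probe;

    printf("CHAR_BIT          = %d\n", CHAR_BIT);
    printf("sizeof(short)     = %u\n", (unsigned)sizeof(short));
    printf("sizeof(int)       = %u\n", (unsigned)sizeof(int));
    printf("sizeof(long)      = %u\n", (unsigned)sizeof(long));
    printf("sizeof(long long) = %u\n", (unsigned)sizeof(long long));
    printf("byte order: %s-endian\n", (*p == 1) ? "little" : "big");
    return 0;
}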
I want to know how to declare an int so that it's 4 bytes, or a short so that it's 2 bytes, no matter the platform. Does C99 have rules about this?
C99 doesn't say much about this, but you can check whether sizeof(int) == 4, or you can use fixed-size types like uint32_t (a 32-bit unsigned integer). They are defined in stdint.h.
If you are using C99 and require integer types of a given size, include stdint.h. It defines types such as uint32_t for an unsigned integer of exactly 32-bits, and uint_fast32_t for an unsigned integer of at least 32 bits and “fast” on the target machine by some definition of fast.
Edit: Remember that you can also use bitfields to get a specific number of bits (though it may not give the best performance, especially with “strange” sizes, and most aspects are implementation-defined):
typedef struct {
    unsigned four_bytes:32;
    unsigned two_bytes:16;
    unsigned three_bits:3;
    unsigned five_bits:5;
} my_message_t;
Edit 2: Also remember that sizeof returns the number of chars. It's theoretically possible (though very unlikely these days) that char is not 8 bits; the number of bits in a char is defined as CHAR_BIT in limits.h.
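So, to get a bit count rather than a char count, a one-line sketch:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* sizeof counts chars; CHAR_BIT converts that to bits */
    printf("int is %u bits\n", (unsigned)(sizeof(int) * CHAR_BIT));
    return 0;
}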
Try the INT_MAX constant in limits.h
Do you want to require it to be 4 bytes?
If you just want to see the size of int as it is compiled on each platform then you can just do sizeof(int).
sizeof (int) will return the number of bytes an int occupies in memory on the current system.
I assume you want something beyond just the obvious sizeof (int) == 4 check. Likely you want some compile-time check.
In C++, you could use BOOST_STATIC_ASSERT.
In C, you can make compile-time assertions by writing code that tries to create negatively-sized arrays on failure or that tries to create switch statements with redefined cases. See this stackoverflow question for examples: Ways to ASSERT expressions at build time in C
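A minimal sketch of the negative-size-array trick mentioned above (BUILD_ASSERT is a made-up macro name); C11 later standardized _Static_assert for the same job:
/* Compiles only if cond holds; otherwise the array size is -1. */
#define BUILD_ASSERT(cond, name) typedef char name[(cond) ? 1 : -1]

BUILD_ASSERT(sizeof(int) == 4, int_is_4_bytes);

/* C11 equivalent:
   _Static_assert(sizeof(int) == 4, "int must be 4 bytes"); */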
You can use sizeof(int), but you can never assume how large an int is. The C specification doesn't pin down the size of an int, beyond requiring it to hold at least a 16-bit range and to be at least as large as a short (which in turn must be at least as large as a char).
Often the size of an int aligns to the underlying hardware. This means an int is typically the same as a word, where a word is the functional size of data fetched off the memory bus (or sometimes the CPU register width). It doesn't have to be the same as a word, but the earliest notes I have indicated it should be the preferred size for memory transfer (which is typically a word).
In the past, there have been 12-bit machines (PDP-8) and 18-bit machines (PDP-15). There have been architectures with 36-bit word sizes (PDP-10), but I can't recall what their int sizes turned out to be.
On Linux platforms, you can peek in
#include <sys/types.h>
to see how the various integer types are actually defined for your architecture.
I found last night that Visual Studio 2008 doesn't support C99 well, and it doesn't supply stdint.h. But it has its own types. Here is an example:
#ifdef _MSC_VER
typedef __int8 int8_t;
typedef unsigned __int8 uint8_t;
typedef __int16 int16_t;
typedef unsigned __int16 uint16_t;
typedef __int32 int32_t;
typedef unsigned __int32 uint32_t;
typedef __int64 int64_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif