This is related to the following question:
How to Declare a 32-bit Integer in C
Several people mentioned int is always 32-bit on most platforms. I am curious if this is true.
Do you know any modern platforms with int of a different size? Ignore dinosaur platforms with 8-bit or 16-bit architectures.
NOTE: I already know how to declare a 32-bit integer from the other question. This one is more like a survey to find out which platforms (CPU/OS/compiler) support integers of other sizes.
As several people have stated, there are no guarantees that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' mandated by the C99 specification:
int8_t
uint8_t
int32_t
uint32_t
etc...
They are generally of the form [u]intN_t, where the 'u' specifies that you want an unsigned quantity and N is the number of bits.
The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using these allows you to write nice, portable code :-)
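For example, here is a minimal sketch (the flags variable is just illustrative) of why fixed-width types matter for bit manipulation: the width, and therefore the behaviour of the shift, is the same on every platform that provides uint32_t.
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t flags = 0;           /* exactly 32 bits wherever uint32_t exists */
    flags |= UINT32_C(1) << 31;   /* setting the top bit is well defined here */
    printf("flags = 0x%08" PRIX32 "\n", flags);  /* prints flags = 0x80000000 */
    return 0;
}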
"is always 32-bit on most platforms" - what's wrong with that snippet? :-)
The C standard does not mandate the sizes of many of its integral types. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, ones' complement, and sign/magnitude).
If you want a specific-sized variable, you need to use one suitable for the platform you're running on, for example with #ifdefs, something like:
#ifdef LONG_IS_32BITS
    typedef long int32;
#else
    #ifdef INT_IS_32BITS
        typedef int int32;
    #else
        #error No 32-bit data type available
    #endif
#endif
Alternatively, C99 and above allows for exact width integer types intN_t and uintN_t:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
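Because the exact-width types are optional, portable code sometimes guards on the corresponding limit macro, which is defined exactly when the type is provided. A minimal sketch (the typedef name my_int32 is just illustrative):
#include <stdint.h>

#ifdef INT32_MAX
/* int32_t exists: exactly 32 bits, no padding, two's complement */
typedef int32_t my_int32;
#else
#error "This implementation provides no exact-width 32-bit integer"
#endif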
At this moment in time, most desktop and server platforms use 32-bit integers, and even many embedded platforms (think handheld ARM or x86) use 32-bit ints. To get to a 16-bit int you have to get very small indeed: think "Berkeley mote" or some of the smaller Atmel Atmega chips. But they are out there.
No. Small embedded systems use 16 bit integers.
It vastly depends on your compiler. Some compile them as 64-bit on 64-bit machines, some compile them as 32-bit. Embedded systems are their own little special ball of wax.
Best thing you can do to check:
printf("%d\n", sizeof(int));
Note that sizeof will print out bytes. Do sizeof(int)*CHAR_BIT to get bits.
Code to print the number of bits for various types:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("short is %zu bits\n", CHAR_BIT * sizeof(short));
    printf("int is %zu bits\n", CHAR_BIT * sizeof(int));
    printf("long is %zu bits\n", CHAR_BIT * sizeof(long));
    printf("long long is %zu bits\n", CHAR_BIT * sizeof(long long));
    return 0;
}
TI are still selling OMAP boards with the C55x DSPs on them, primarily used for video decoding. I believe the supplied compiler for this has a 16 bit int. It is hardly dinosaur (the Nokia 770 was released in 2005), although you can get 32 bit DSPs.
Most code you write, you can safely assume it won't ever be run on a DSP. But perhaps not all.
Well, most ARM-based processors can run Thumb code, which is a 16-bit mode. That includes the yet-only-rumored Android notebooks and the bleeding-edge smartphones.
Also, some graphing calculators use 8-bit processors, and I'd call those fairly modern as well.
If you are also interested in the actual Max/Min Value instead of the number of bits, limits.h contains pretty much everything you want to know.
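For example, a small program along these lines (nothing platform-specific is assumed) prints the actual limits in use:
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("INT_MIN  = %d\n",  INT_MIN);
    printf("INT_MAX  = %d\n",  INT_MAX);
    printf("UINT_MAX = %u\n",  UINT_MAX);
    printf("LONG_MIN = %ld\n", LONG_MIN);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}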
Related
I notice that modern C and C++ code seems to use size_t instead of int/unsigned int pretty much everywhere - from parameters for C string functions to the STL. I am curious as to the reason for this and the benefits it brings.
The size_t type is the unsigned integer type that is the result of the sizeof operator (and the offsetof operator), so it is guaranteed to be big enough to contain the size of the biggest object your system can handle (e.g., a static array of 8 GB).
The size_t type may be bigger than, equal to, or smaller than an unsigned int, and your compiler might make assumptions about it for optimization.
You may find more precise information in the C99 standard, section 7.17, a draft of which is available on the Internet in pdf format, or in the C11 standard, section 7.19, also available as a pdf draft.
Classic C (the early dialect of C described by Brian Kernighan and Dennis Ritchie in The C Programming Language, Prentice-Hall, 1978) didn't provide size_t. The C standards committee introduced size_t to eliminate a portability problem.
Explained in detail at embedded.com (with a very good example)
In short, size_t is never negative, and it maximizes performance because it's typedef'd to be the unsigned integer type that's big enough -- but not too big -- to represent the size of the largest possible object on the target platform.
Sizes should never be negative, and indeed size_t is an unsigned type. Also, because size_t is unsigned, you can store numbers that are roughly twice as big as in the corresponding signed type, because the sign bit is used to represent magnitude, like all the other bits in the unsigned integer. Gaining one more bit roughly doubles the range of numbers we can represent.
So, you ask, why not just use an unsigned int? It may not be able to hold big enough numbers. In an implementation where unsigned int is 32 bits, the biggest number it can represent is 4294967295. Some processors, such as the IP16L32, can copy objects larger than 4294967295 bytes.
So, you ask, why not use an unsigned long int? It exacts a performance toll on some platforms. Standard C requires that a long occupy at least 32 bits. An IP16L32 platform implements each 32-bit long as a pair of 16-bit words. Almost all 32-bit operators on these platforms require two instructions, if not more, because they work with the 32 bits in two 16-bit chunks. For example, moving a 32-bit long usually requires two machine instructions -- one to move each 16-bit chunk.
Using size_t avoids this performance toll. According to this fantastic article, "Type size_t is a typedef that's an alias for some unsigned integer type, typically unsigned int or unsigned long, but possibly even unsigned long long. Each Standard C implementation is supposed to choose the unsigned integer that's big enough--but no bigger than needed--to represent the size of the largest possible object on the target platform."
The size_t type is the type returned by the sizeof operator. It is an unsigned integer capable of expressing the size in bytes of any memory range supported on the host machine. It is (typically) related to ptrdiff_t in that ptrdiff_t is a signed integer value such that sizeof(ptrdiff_t) and sizeof(size_t) are equal.
When writing C code you should always use size_t whenever dealing with memory ranges.
The int type on the other hand is basically defined as the size of the (signed) integer value that the host machine can use to most efficiently perform integer arithmetic. For example, on many older PC-type computers the value sizeof(size_t) would be 4 (bytes) but sizeof(int) would be 2 (bytes). 16-bit arithmetic was faster than 32-bit arithmetic, though the CPU could handle a (logical) memory space of up to 4 GiB.
Use the int type only when you care about efficiency as its actual precision depends strongly on both compiler options and machine architecture. In particular the C standard specifies the following invariants: sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) placing no other limitations on the actual representation of the precision available to the programmer for each of these primitive types.
Note: This is NOT the same as in Java (which actually specifies the bit precision for each of the types 'char', 'byte', 'short', 'int' and 'long').
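To illustrate the "use size_t for memory ranges" advice, here is a minimal sketch; the helper name count_byte is made up for the example:
#include <stdio.h>
#include <string.h>

/* Count occurrences of a byte in a buffer. Both the length parameter and
   the result use size_t, so the function works for any object the platform
   can represent, regardless of how wide int happens to be. */
static size_t count_byte(const unsigned char *buf, size_t len, unsigned char value)
{
    size_t count = 0;
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == value)
            count++;
    }
    return count;
}

int main(void)
{
    const char *text = "hello world";
    printf("%zu\n", count_byte((const unsigned char *)text, strlen(text), 'l'));
    return 0;
}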
Type size_t must be big enough to store the size of any possible object. Unsigned int doesn't have to satisfy that condition.
For example, on 64-bit systems int and unsigned int may be 32 bits wide, but size_t must be big enough to store object sizes bigger than 4 GB.
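A quick way to see this on a given system is to print both maxima; on a typical LP64 Linux box this sketch prints 4294967295 for UINT_MAX and 18446744073709551615 for SIZE_MAX:
#include <limits.h>
#include <stdint.h>   /* SIZE_MAX */
#include <stdio.h>

int main(void)
{
    printf("UINT_MAX = %u\n",  UINT_MAX);
    printf("SIZE_MAX = %zu\n", (size_t)SIZE_MAX);
    return 0;
}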
This excerpt from the glibc manual 0.02 may also be relevant when researching the topic:
There is a potential problem with the size_t type and versions of GCC prior to release 2.4. ANSI C requires that size_t always be an unsigned type. For compatibility with existing systems' header files, GCC defines size_t in stddef.h to be whatever type the system's sys/types.h defines it to be. Most Unix systems that define size_t in sys/types.h define it to be a signed type. Some code in the library depends on size_t being an unsigned type, and will not work correctly if it is signed.
The GNU C library code which expects size_t to be unsigned is correct. The definition of size_t as a signed type is incorrect. We plan that in version 2.4, GCC will always define size_t as an unsigned type, and the fixincludes script will massage the system's sys/types.h so as not to conflict with this.
In the meantime, we work around this problem by telling GCC explicitly to use an unsigned type for size_t when compiling the GNU C library. configure will automatically detect what type GCC uses for size_t and arrange to override it if necessary.
If my compiler is set to 32 bit, size_t is nothing other than a typedef for unsigned int. If my compiler is set to 64 bit, size_t is nothing other than a typedef for unsigned long long.
In practice, size_t is usually the same size as a pointer (though the standard does not strictly guarantee this).
So in 32-bit mode, or the common ILP32 (int, long, pointer all 32-bit) model, size_t is 32 bits,
and in 64-bit mode, or the common LP64 (long and pointer 64-bit) model, size_t is 64 bits (int is still 32 bits).
There are other models, but these are the ones that g++ uses (at least by default).
According to C99 §5.2.4.2.1-1 the following types have sizes which are implementation-dependent. What it says is that they are equal or greater in magnitude than these values:
short >= 8 bits
int >= 16 bits
long >= 32 bits
long long >= 64 bits
I have always heard that long is always 32-bits, and that it is strictly equivalent to int32_t which looks wrong.
What is true?
On my computer long is 64 bits in Linux.
Windows is the only major platform that uses 32-bit longs in 64-bit mode, exactly because of the false assumptions being widespread in the existing code. This made it difficult to change the size of long on Windows, so on 64-bit x86 processors longs are still 32 bits in Windows to keep all sorts of existing code and definitions compatible.
The standard is by definition correct, and the way you interpret it is correct. The sizes of some types may vary. The standard only states the minimum width of these types. Usually (but not necessarily) the type int has the same width as the target processor.
This goes back to the old days when performance was a very important aspect. So whenever you used an int, the compiler could choose the fastest type that still holds at least 16 bits.
Of course, this approach is not very good today. It's just something we have to live with. And yes, it can break code. So if you want to write fully portable code, use the types defined in stdint.h like int32_t and such instead. Or at the very least, never use int if you expect the variable to hold a number not in the range [−32,767; 32,767].
I have always heard that long is always 32-bits and it is strictly equivalent to int32_t which looks wrong.
I'm curious where you heard that. It's absolutely wrong.
There are plenty of systems (mostly either 16-bit or 32-bit systems or 64-bit Windows, I think) where long is 32 bits, but there are also plenty of systems where long is 64 bits.
(And even if long is 32 bits, it may not be the same type as int32_t. For example, if int and long are both 32 bits, they're still distinct types, and int32_t is probably defined as one or the other.)
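To illustrate that distinct-types point, here is a sketch (the function takes_int32 is hypothetical): even when the widths match, int, long, and the type behind int32_t remain incompatible for purposes such as pointer conversions.
#include <stdint.h>

void takes_int32(int32_t *p);   /* hypothetical function expecting int32_t* */

void demo(void)
{
    long x = 0;
    /* Even on a platform where long is 32 bits, the call below would draw a
       diagnostic when int32_t is a typedef for int, because long * and
       int * are incompatible pointer types: */
    /* takes_int32(&x); */
    (void)x;   /* silence the unused-variable warning */
}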
$ cat c.c
#include <stdio.h>
#include <limits.h>
int main(void) {
printf("long is %zu bits\n", sizeof (long) * CHAR_BIT);
}
$ gcc -m32 c.c -o c && ./c
long is 32 bits
$ gcc -m64 c.c -o c && ./c
long is 64 bits
$
The requirements for the sizes of the integer types are almost as you stated in your question (you had the wrong size for short). The standard actually states its requirements in terms of ranges, not sizes, but that along with the requirement for a binary representation implies minimal sizes in bits. The requirements are:
char, unsigned char, signed char : 8 bits
short, unsigned short: 16 bits
int, unsigned int: 16 bits
long, unsigned long: 32 bits
long long, unsigned long long: 64 bits
Each signed type has a range that includes the range of the previous type in the list. There are no upper bounds.
It's common for int and long to be 32 and 64 bits, respectively, particularly on non-Windows 64-bit systems. (POSIX requires int to be at least 32 bits.) long long is exactly 64 bits on every system I've seen, though it can be wider.
Note that the Standard doesn't specify sizes for types like int, short, long, etc., but rather a minimum range of values that those types must be able to represent.
For example, an int must be able to represent at least the range [-32767..32767] (see note 1 below), meaning it must be at least 16 bits wide. It may be wider for two reasons:
The platform offers more value bits to store a wider range (e.g., x86 uses 32 bits to store integer values and 64 bits for long integers);
The platform uses padding bits that store something other than part of the value.
As an example of the latter, suppose you have a platform with 9-bit bytes and 18-bit words. If an int occupies an 18-bit word, two of the bits are padding bits and are not used to store part of the value; the type is still only required to store [-32767..32767], even though it's wider than 16 bits.
The <stdint.h> header does define integer types with specific, fixed sizes (16-bit, 32-bit), but they may not be available everywhere.
Note 1: C does not mandate two's complement representation for signed integers, which is why the required range is [-32767..32767] rather than [-32768..32767].
Is it always true that long int (which as far as I understand is a synonym for long) is 4 bytes?
Can I rely on that? If not, could it be true for a POSIX based OS?
The standards say nothing regarding the exact size of any integer types aside from char. Typically, long is 32-bit on 32-bit systems and 64-bit on 64-bit systems.
The standard does however specify a minimum size. From section 5.2.4.2.1 of the C Standard:
1 The values given below shall be replaced by constant expressions
suitable for use in #if preprocessing directives. Moreover,
except for CHAR_BIT and MB_LEN_MAX, the following shall be
replaced by expressions that have the same type as would an
expression that is an object of the corresponding type converted
according to the integer promotions. Their implementation-defined
values shall be equal or greater in magnitude (absolute value) to
those shown, with the same sign.
...
minimum value for an object of type long int
LONG_MIN -2147483647 // −(2^31−1)
maximum value for an object of type long int
LONG_MAX +2147483647 // 2^31−1
This says that a long int must be a minimum of 32 bits, but may be larger. On a machine where CHAR_BIT is 8, this gives a minimum size of 4 bytes. However, on a machine with, e.g., CHAR_BIT equal to 16, a long int could be 2 bytes long.
Here's a real-world example. For the following code:
#include <stdio.h>
int main ()
{
printf("sizeof(long) = %zu\n", sizeof(long));
return 0;
}
Output on Debian 7 i686:
sizeof(long) = 4
Output on CentOS 7 x64:
sizeof(long) = 8
So no, you can't make any assumptions on size. If you need a type of a specific size, you can use the types defined in stdint.h. It defines the following types:
int8_t: signed 8-bit
uint8_t: unsigned 8-bit
int16_t: signed 16-bit
uint16_t: unsigned 16-bit
int32_t: signed 32-bit
uint32_t: unsigned 32-bit
int64_t: signed 64-bit
uint64_t: unsigned 64-bit
The stdint.h header is described in section 7.20 of the standard, with exact width types in section 7.20.1.1. The standard states that these typedefs are optional, but they exist on most implementations.
No, neither the C standard nor POSIX guarantees this, and in fact most Unix-like 64-bit platforms have a 64-bit (8 byte) long.
Use sizeof(long int) and check the size; it will give you the size of long int in bytes on the system you're currently working on. The answer to your question in particular is no: it is not guaranteed anywhere, neither in C nor in POSIX.
As pointed out by @delnan, POSIX implementations leave the sizes of long and int unspecified, and they often differ between 32-bit and 64-bit systems.
The length of long is mostly hardware-related (often matching the size of the CPU's data registers), though it is sometimes driven by software concerns such as OS design and ABI interfacing.
To ease your mind, sizeof isn't a function but an operator that is usually evaluated at compile time*, so your code isn't doing extra work when using sizeof - it's the same as writing a number, only it's portable.
use:
sizeof(long int)
* As Dave pointed out in the comments, sizeof will be computed at runtime when it's impossible to compute the value during compilation, such as when using variable length arrays.
Also, as pointed out in another comment, sizeof takes into consideration the padding and alignment used by the implementation, meaning that the actual bytes in use could be different from the size in memory (this could be important when bit shifting).
If you're looking for specific byte-sized variables, consider using a byte array or (I would assume to be supported) the types defined by C99 in stdint.h - as suggested by @dbush.
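As a small sketch of the footnote about variable length arrays: with a C99 VLA the operand's size isn't known until run time, so sizeof is evaluated then rather than folded into a constant.
#include <stdio.h>

int main(void)
{
    int n = 0;
    if (scanf("%d", &n) != 1 || n <= 0)
        return 1;

    int vla[n];                        /* C99 variable length array */
    printf("vla occupies %zu bytes\n", sizeof vla);   /* evaluated at run time */
    return 0;
}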
When we first implemented C on ICL Series 39 hardware, we took the standard at its word and mapped the data types to the natural representation on that machine architecture, which was short = 32 bits, int = 64 bits, long = 128 bits.
But we found that no serious C applications worked; they all assumed the mapping short = 16, int = 32, long = 64, and we had to change the compiler to support that.
So whatever the official standard says, for many years everyone has converged on long = 64 bits and it's not likely to change.
The standard says nothing about the size of long int, so it is dependent on the environment which you are using.
To get the size of long int on your environment you can use the sizeof operator and get the size of long int. Something like
sizeof(long int)
The C standard only requires the following about the sizes of the integer types:
int >= 16 bits,
long >= 32 bits,
long long (since C99) >= 64 bits
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
sizeof(char) == 1
CHAR_BIT >= 8
The rest is implementation-defined, so it's not surprising to encounter systems where int has 18/24/36/60 bits, uses ones' complement for signed values, or has sizeof(char) == sizeof(short) == sizeof(int) == sizeof(long) == 4, a 48-bit long, or a 9-bit char (see "Exotic architectures the standards committees care about" and "List of platforms supported by the C standard").
The point about long int above is completely wrong. Most Linux/Unix implementations define long as a 64-bit type, but it's only 32 bits in Windows because they use different data models (have a look at the table in the 64-bit computing article), and this is regardless of whether the OS is a 32-bit or 64-bit version.
Source
The compiler determines the size based on the type of hardware and OS.
So, assumptions should not be made regarding the size.
No, you can't assume that since the size of the “long” data type varies from compiler to compiler.
Check out this article for more details.
From Usrmisc's Blog:
The standard leaves it completely up to the compiler, which also means the same compiler can make it depend on options and target architecture.
So you can't.
Incidentally, long int could even be the same size as plain int.
Short answer: No! You cannot make fixed assumptions about the size of long int. The standard (the C standard or POSIX) does not pin down the size of long int (as repeatedly emphasized). Just to provide a counterexample to your belief, most 64-bit systems have a 64-bit long! To maximize portability, use sizeof appropriately.
Use sizeof(long int) to check the size; it returns the size of long in bytes. The value is system- or environment-dependent: the compiler determines the size based on the hardware and OS.
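If some piece of code genuinely cannot cope with an unexpected width, another option is to turn the assumption into a build failure instead of a run-time surprise. A minimal sketch, assuming a C11 compiler for _Static_assert (the 64-bit requirement is just an example):
#include <limits.h>

/* Fail the build early if the assumption is wrong, instead of misbehaving later. */
_Static_assert(sizeof(long) * CHAR_BIT == 64,
               "this code assumes a 64-bit long");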
We have two kinds of remote systems in our university; we can connect to them remotely and work. I wrote a C program on one of the systems, where the size of a void pointer and of size_t is 8 bytes. But when I connected to the other system, my program started working differently. I spent a lot of time debugging and finally found that it was happening because of the architecture differences between the two systems.
My questions are:
On what factors does the size of primitive types depend?
How to know the size of primitive types before we start programming?
How to write cross platform code in C?
Question:
On what factors does the size of primitive types depend?
The CPU and the compiler.
Question:
How to know the size of primitive types before we start programming?
You can't. However, you can write a small program to get the sizes of the primitive types.
#include <stdio.h>
int main(void)
{
    printf("Size of short: %zu\n", sizeof(short));
    printf("Size of int: %zu\n", sizeof(int));
    printf("Size of long: %zu\n", sizeof(long));
    printf("Size of long long: %zu\n", sizeof(long long));
    printf("Size of size_t: %zu\n", sizeof(size_t));
    printf("Size of void*: %zu\n", sizeof(void*));
    printf("Size of float: %zu\n", sizeof(float));
    printf("Size of double: %zu\n", sizeof(double));
    return 0;
}
Question:
How to write cross platform code in C?
Minimize code that depends on the sizes of primitive types.
When exchanging data between platforms, use text files for persistent data as much as possible.
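As a sketch of the text-file approach (the file name and helper names are made up): the value survives a change of integer width, padding, or byte order because only its decimal text is stored.
#include <stdio.h>

/* Write and read a counter as text instead of raw bytes. */
static int save_count(long count)
{
    FILE *f = fopen("count.txt", "w");
    if (!f)
        return -1;
    fprintf(f, "%ld\n", count);
    return fclose(f);
}

static int load_count(long *count)
{
    FILE *f = fopen("count.txt", "r");
    if (!f)
        return -1;
    int ok = (fscanf(f, "%ld", count) == 1);
    fclose(f);
    return ok ? 0 : -1;
}

int main(void)
{
    long value = 0;
    if (save_count(12345L) == 0 && load_count(&value) == 0)
        printf("read back: %ld\n", value);
    return 0;
}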
In general, the size of an integer on a processor depends on how many bits the ALU can operate on in a single cycle.
For example:
i) On the 8051 architecture the data bus is only 8 bits wide, yet 8051 compilers (such as Keil C51) still define int as 16 bits, since the C standard requires at least that range; the 8-bit ALU simply handles it in multiple steps.
ii) For the 32-bit ARM architecture, the data bus is 32 bits wide, so the size of an int is 32 bits.
You should always refer to the compiler documentation for the correct sizes of data types.
Almost all compilers declare their name/version as a predefined macro; you can use these in your header file like this:
#ifdef COMPILER_1
typedef char S8;
typedef int  S16;
typedef long S32;
/* ... */
#elif defined(COMPILER_2)
typedef int  S32;
typedef long S64;
/* ... */
#endif
Then in your code you can declare variables like
S32 Var1;
How to write cross platform code in C?
If you need to marshal data in a platform-independent way (e.g. on a filesystem, or over a network), you should be consistent in (at least) these things:
Datatype sizes - Rely on the types from <stdint.h>. For example, if you need a two-byte unsigned integer, use uint16_t.
Datatype alignment/padding - Be aware of how members in a struct are packed/padded. The default alignment of a member may change from one system to another, which means a member may be at different byte offsets, depending on the compiler. When marshalling data, use __attribute__((packed)) (on GCC), or similar.
Byte order - Multi-byte integers can be stored with their bytes in either order: Little-endian systems store the least-significant byte at the lowest address/offset, while Big-endian systems start with the most-significant. Luckily, everyone has agreed that bytes are sent as big-endian over the network. For this, we use htons/ntohs to convert byte order when sending/receiving multi-byte integers over network connections.
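A minimal sketch of the byte-order point, assuming a POSIX system for the arpa/inet.h header:
#include <arpa/inet.h>   /* htons/ntohs (POSIX) */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t host_port = 8080;
    uint16_t net_port  = htons(host_port);   /* host order -> network (big-endian) */

    /* On a little-endian machine the two values differ; on a big-endian
       machine they are identical. */
    printf("host: 0x%04X  network: 0x%04X\n", host_port, net_port);
    printf("round trip: %u\n", (unsigned)ntohs(net_port));
    return 0;
}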
Question:
On what factors does the size of primitive types depend, and how can we know the size of primitive types before we start programming?
Short Answer:
The CPU and the compiler.
Long Answer
To understand primitive types, note that they fall into two groups:
1. Integer Types
The integer data types range in size from at least 8 bits to at least 32 bits. The C99 standard extends this range to include integer sizes of at least 64 bits. The sizes and ranges listed for these types are minimums; depending on your computer platform, these sizes and ranges may be larger.
signed char : 8-bit integer values in the range of −128 to 127.
unsigned char : 8-bit integer values in the range of 0 to 255.
char : Depending on your system, the char data type is defined as having the same range as either the signed char or the unsigned char data type
short int : 16-bit integer values in the range of −32,768 to 32,767
unsigned short int : 16-bit integer values in the range of 0 to 65,535
int : 32-bit integer values in the range of −2,147,483,648 to 2,147,483,647
long int : 32-bit integer range of at least −2,147,483,648 to 2,147,483,647 (Depending on your system, this data type might be 64-bit)
unsigned long int : 32-bit integer values in the range of at least 0 to 4,294,967,295 (Depending on your system, this data type might be 64-bit)
long long int : 64-bit Integer values in the range of −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. (This type is not part of C89, but is both part of C99 and a GNU C extension. )
unsigned long long int: 64-bit integer values in the range of at least 0 to 18,446,744,073,709,551,615 (This type is not part of C89, but is both part of C99 and a GNU C extension. )
2. Real Number Types
float : the float data type is the smallest of the three floating-point types, if they differ in size at all. Its minimum value is stored in FLT_MIN and should be no greater than 1e-37. Its maximum value is stored in FLT_MAX and should be no less than 1e37.
double : The double data type is at least as large as the float type. Its minimum value is stored in DBL_MIN, and its maximum value is stored in DBL_MAX.
long double : the long double data type is at least as large as the double type, and it may be larger. Its minimum value is stored in LDBL_MIN, and its maximum value is stored in LDBL_MAX.
Question:
How to write cross platform code in C?
To write cross-platform code, keep the following in mind:
Use standard 'C' types, not platform specific types
Use only built in #ifdef compiler flags, do not invent your own
Try to use reusable, cross-platform "base" libraries to hide platform-specific code
Don’t use 3rd party "Application Frameworks" or "Runtime Environments"
Jonathan Leffler basically covered this in his comments, but you technically cannot know how large primitive types will be across systems/architectures. If a system follows the C standard, then you know you will have a minimum number of bytes for every variable type, but you may be given more than that value.
For example, if I am writing a program that uses a signed long, I can reliably know that I will be given at least 4 bytes and that I can store numbers up to 2,147,483,647; however, some systems could give me more than 4 bytes.
Unfortunately, a developer cannot know ahead of time (without testing) how many bytes a system will return, and thus good code should be dynamic enough to account for this.
The exceptions to this rule are int8_t, int16_t, int32_t, int64_t and their unsigned counterparts (uintN_t). With these variable types you are guaranteed exactly N bits - no more, no less.
C has a standard sizeof() operator.
I ask this question of every developer I hire.
My guess is you are doing this:
struct A {
    int X;    // 2, 4, or 8 bytes
    short Y;  // 2 bytes
};
On a 32-bit computer you get a structure that is 48 bits: 32 for the int, 16 for the short.
On a 64-bit computer you get a structure that is 80 bits: 64 for the int, 16 for the short.
(Yes, I know, all kinds of esoteric stuff might happen here, but the goal is solve the problem, not to confuse the questioner.)
The problem comes about when you try to use this struct on one machine to read what was written by the other.
You need a structure that will marshall correctly.
struct A {
    long X;   // 4 bytes
    short Y;  // 2 bytes
};
Now both sides will read and write the data correctly in most cases, unless you have monkeyed with the flags.
If you are sending stuff across the wire you must use char, short, long, etc. If you are not, then you can just use int and let the compiler figure it out.
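Here is a sketch of the same idea using exact-width types instead (field names are illustrative, and __attribute__((packed)) assumes GCC or Clang); byte order still has to be agreed on separately:
#include <stdint.h>
#include <string.h>

/* A fixed-layout record for the wire: exact widths, no implicit padding. */
struct wire_record {
    int32_t x;
    int16_t y;
} __attribute__((packed));

/* Serialize by copying the packed struct into a 6-byte buffer. */
static void serialize(const struct wire_record *r, unsigned char out[6])
{
    memcpy(out, r, sizeof *r);
}

int main(void)
{
    struct wire_record r = { 42, -7 };
    unsigned char buf[6];
    serialize(&r, buf);
    return 0;
}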
I have read that int32_t is exactly 32 bits long and int_least32_t only at least 32 bits, but they both have the same typedefs in my stdint.h:
typedef int int_least32_t;
and
typedef int int32_t;
So where is the difference? They are exactly the same...
int32_t is a signed integer type with a width of exactly 32 bits, no padding bits, and a two's complement representation for negative values.
int_least32_t is the smallest signed integer type with a width of at least 32 bits.
These are provided only if the implementation directly supports the type.
The typedefs that you are seeing simply means that in your environment both these requirements are satisfied by the standard int type itself. This need not mean that these typedefs are the same on a different environment.
Why do you think that on another computer, with a different processor, a different OS, and a different version of the C standard library, you would see exactly these typedefs?
These two types are exactly what you wrote. One of them is exactly 32 bits; the other is at least 32 bits. So one possible situation is that both of them are 32 bits, and in your particular case that is what you see in stdint.h. On another system you may see that they are different.