According to C99 §5.2.4.2.1-1, the sizes of the following types are implementation-defined. All the standard says is that they must be greater than or equal to these minimum widths:
short >= 8 bits
int >= 16 bits
long >= 32 bits
long long >= 64 bits
I have always heard that long is always 32 bits and that it is strictly equivalent to int32_t, which now looks wrong.
What is true?
On my computer long is 64 bits in Linux.
Windows is the only major platform that uses 32-bit longs in 64-bit mode, precisely because such false assumptions are widespread in existing code. That made it difficult to change the size of long on Windows, so on 64-bit x86 processors longs are still 32 bits there, to keep all sorts of existing code and definitions compatible.
The standard is by definition correct, and the way you interpret it is correct. The sizes of some types may vary. The standard only states the minimum width of these types. Usually (but not necessarily) the type int has the same width as the target processor.
This goes back to the old days when performance was a very important aspect. So whenever you used an int, the compiler could choose the fastest type that still holds at least 16 bits.
Of course, this approach is not very good today. It's just something we have to live with, and yes, it can break code. So if you want to write fully portable code, use the types defined in stdint.h, like int32_t and such, instead. Or at the very least, never use int if you expect the variable to hold a number outside the range [−32,767; 32,767].
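For example, here is a minimal sketch of the difference (the variable names are just for illustration):
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Exactly 32 bits wherever int32_t exists, so 100000 always fits. */
    int32_t wide_enough = 100000;

    /* Only guaranteed to hold [-32767, 32767]; 100000 may not fit in a
       plain int on a 16-bit target. */
    int maybe_too_small = 100000;

    printf("%ld %d\n", (long)wide_enough, maybe_too_small);
    return 0;
}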
I have always heard that long is always 32 bits and it is strictly equivalent to int32_t, which looks wrong.
I'm curious where you heard that. It's absolutely wrong.
There are plenty of systems (mostly either 16-bit or 32-bit systems or 64-bit Windows, I think) where long is 32 bits, but there are also plenty of systems where long is 64 bits.
(And even if long is 32 bits, it may not be the same type as int32_t. For example, if int and long are both 32 bits, they're still distinct types, and int32_t is probably defined as one or the other.)
$ cat c.c
#include <stdio.h>
#include <limits.h>
int main(void) {
printf("long is %zu bits\n", sizeof (long) * CHAR_BIT);
}
$ gcc -m32 c.c -o c && ./c
long is 32 bits
$ gcc -m64 c.c -o c && ./c
long is 64 bits
$
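To illustrate the point above that int and long stay distinct types even when they have the same width, here is a sketch using C11's _Generic (assuming your compiler supports it); it also reveals which of the two your int32_t is defined as:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* _Generic selects on the type itself, not its size, so int and long
       are told apart even on an ILP32 target where both are 32 bits. */
    printf("int32_t is defined as: %s\n",
           _Generic((int32_t)0,
                    int:     "int",
                    long:    "long",
                    default: "some other type"));
    return 0;
}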
The requirements for the sizes of the integer types are almost as you stated in your question (you had the wrong size for short). The standard actually states its requirements in terms of ranges, not sizes, but that along with the requirement for a binary representation implies minimal sizes in bits. The requirements are:
char, unsigned char, signed char : 8 bits
short, unsigned short: 16 bits
int, unsigned int: 16 bits
long, unsigned long: 32 bits
long long, unsigned long long: 64 bits
Each signed type has a range that includes the range of the previous type in the list. There are no upper bounds.
It's common for int and long to be 32 and 64 bits, respectively, particularly on non-Windows 64-bit systems. (POSIX requires int to be at least 32 bits.) long long is exactly 64 bits on every system I've seen, though it can be wider.
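If your code relies on more than these guaranteed minimums, one defensive option is to state the assumption explicitly so the build fails on an unexpected platform. A sketch using C11's _Static_assert (older compilers need a different trick):
#include <limits.h>

/* Fail compilation instead of silently misbehaving if the platform does
   not match what this code base assumes. */
_Static_assert(CHAR_BIT * sizeof(int) >= 32, "int is assumed to be at least 32 bits");
_Static_assert(CHAR_BIT * sizeof(long long) == 64, "long long is assumed to be exactly 64 bits");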
Note that the Standard doesn't specify sizes for types like int, short, long, etc., but rather a minimum range of values that those types must be able to represent.
For example, an int must be able to represent at least the range [-32767..32767]¹, meaning it must be at least 16 bits wide. It may be wider for two reasons:
The platform offers more value bits to store a wider range (e.g., x86 uses 32 bits for int, and x86-64 Linux uses 64 bits for long);
The platform uses padding bits that store something other than part of the value.
As an example of the latter, suppose you have a platform with 9-bit bytes and 18-bit words, where an int occupies one 18-bit word but only 16 of those bits are value bits. The other two bits are padding bits and are not used to store part of the value - the type can still only store [-32767..32767], even though it's wider than 16 bits.
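You can see the difference between storage width and value bits on your own platform by counting the value bits of unsigned int at run time and comparing them with the storage size. A sketch (on mainstream platforms the two numbers match, because there are no padding bits):
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int max = UINT_MAX;
    int value_bits = 0;

    /* UINT_MAX is 2^N - 1, where N is the number of value bits. */
    while (max != 0) {
        value_bits++;
        max >>= 1;
    }

    printf("unsigned int: %zu storage bits, %d value bits\n",
           CHAR_BIT * sizeof(unsigned int), value_bits);
    return 0;
}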
The <stdint.h> header does define integer types with specific, fixed sizes (16-bit, 32-bit), but they may not be available everywhere.
¹ C does not mandate 2's complement representation for signed integers, which is why the required range doesn't go from -32768 to 32767.
Related
I would like to know the best data type in C that would allow me to store a NATURAL 64-bit number.
“Natural number” may mean either positive integer or non-negative integer.
For the latter, the optional type uint64_t declared in <stdint.h> is an exact match for a type that can represent all 64-bit binary numerals and takes no more space than 64 bits. Support for this type is optional; the C standard does not require an implementation to provide it. There is no exact match for the former (positive integers), as all integer types in C can represent zero.
However, the question asks for the “best” type but does not define what “best” means in this question. Some possibilities are (a short sketch comparing the candidates follows this list):
If “best” means a type that can represent 64-bit binary numerals and does not take any more than 64 bits, then uint64_t is best.
If “best” means a type that can represent 64-bit binary numerals and does not take any more space than other types in the implementation that can do that, then uint_least64_t is best.
If “best” means a type that can represent 64-bit binary numerals and is usually fastest, then uint_fast64_t is best.
If “best” means a type that can represent 64-bit binary numerals and is definitely supported, then either uint_least64_t or uint_fast64_t is best, according to whether a secondary concern is space or time, respectively.
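A short sketch comparing the three candidates; the printed widths are whatever your implementation chose, the only guarantee being that each is at least 64 bits:
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t       exact = 0;  /* exactly 64 bits; optional                    */
    uint_least64_t least = 0;  /* smallest type of at least 64 bits; required  */
    uint_fast64_t  fast  = 0;  /* "fastest" type of at least 64 bits; required */

    printf("uint64_t:       %zu bits\n", CHAR_BIT * sizeof exact);
    printf("uint_least64_t: %zu bits\n", CHAR_BIT * sizeof least);
    printf("uint_fast64_t:  %zu bits\n", CHAR_BIT * sizeof fast);
    return 0;
}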
The size of a data type depends on several things, such as the system architecture, the compiler, and even the language. On 64-bit Linux and other LP64 systems, unsigned long and unsigned long long are both 8 bytes (on 64-bit Windows, unsigned long is still 4 bytes), so they can hold a 64-bit natural number (which is never negative).
Please visit this link: Size of data type
From <stdint.h>,
#if __WORDSIZE == 64
typedef unsigned long int uint64_t;
#else
__extension__
typedef unsigned long long int uint64_t;
#endif
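Either way the typedef resolves, using the type looks the same; the PRIu64 macro from <inttypes.h> supplies the matching printf conversion, so this sketch is portable across both branches:
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t n = UINT64_MAX;            /* largest 64-bit natural number */
    printf("n = %" PRIu64 "\n", n);     /* PRIu64 expands to the right format specifier */
    return 0;
}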
It is possible to find architectures where the char data type is represented on 8 bytes, so 64 bits, the same as long long, while at the same time the Standard requires CHAR_MIN and CHAR_MAX to be bounded -- see 5.2.4.2.1 Sizes of integer types <limits.h> in the ISO 9899 Standard.
I cannot figure out why these architectures would choose to represent char that way, or how char values would be represented in such a large space. So how are char values represented in such a case?
sizeof(char)=1 all the time. My question is, what is the value of sizeof(long long) and sizeof(int) on such an architecture ?
It is possible to find architectures where the char data type is represented on 8 bytes
No. That's because a char is defined to be a byte *). But a byte doesn't necessarily have 8 bits. That's why the term octet is sometimes used to refer to a unit of 8 bits. There are architectures using more than 8 bits in a byte, but I doubt there's one with a 64-bit byte, although this would be theoretically possible.
Another thing to consider is that char (as opposed to many other integer types) isn't allowed to have padding bits, so if you ever found an architecture with 64-bit chars, that would mean CHAR_MIN and CHAR_MAX would be "very large" ;)
*) In fact, a byte is defined to be the unit of memory used to represent an encoded character, which is normally also the smallest addressable unit of the system. 8 bits are common; the Wikipedia article mentions that byte sizes up to 48 bits have been used. This might not be the best source, but still, finding a 64-bit byte is very unlikely.
It is possible to find architectures where the char data type is represented on 8 bytes,
I don't know of any. By the way, it is not only a matter of architecture but also of ABI. Also, you don't define what a byte is, and the bit size of char matters much more.
(IIRC, someone coded a weird implementation of C in Common Lisp on Linux/x86-64 which has 32-bit chars; of course its ABI is not the usual Linux one!)
sizeof(char)=1 all the time. My question is, what is the value of sizeof(long long) and sizeof(int) on such an architecture ?
It would probably also be 1 (assuming char, int, and long long all have 64 bits), unless long long is e.g. 128 bits (which is possible but unusual).
Notice that the C standard only imposes minimal bounds and bit sizes (read n1570). E.g. long long could be wider than 64 bits. I have never heard of such a C implementation (and I hope that by the time 128-bit processors become common, C will be dead).
But your question is theoretical. I know of no practical C implementation with 64-bit chars or a long long wider than 64 bits. In practice, assuming that chars are 8 bits (but they could be signed or unsigned, and both exist) is a reasonable, but not universal, assumption.
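If your code depends on that reasonable-but-not-universal assumption, a small sketch like this makes the build fail loudly on an exotic target instead of misbehaving quietly:
#include <limits.h>

#if CHAR_BIT != 8
#error "this code assumes 8-bit bytes"
#endif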
Notice that C is not a universal programming language. You won't be able to code a C compiler for a ternary machine like Setun.
The C standard specifies that integer operands smaller than int are promoted to int before any arithmetic operations are performed on them. As a consequence, operations on two unsigned values that are smaller than int are performed with signed rather than unsigned math. In cases where it is important to ensure that an operation on 32-bit operands is performed using unsigned math (e.g. multiplying two numbers whose product could exceed 2⁶³), is use of the type uint_fast32_t guaranteed by any standard to yield unsigned semantics without any Undefined Behavior? If not, is there any other unsigned type which is guaranteed to be at least 32 bits and at least as large as int?
No, it's not. In any case, I would advise against using the [u]int_fastN_t types at all. On real-world systems they're misdefined; for example, uint_fast32_t is usually defined as a 64-bit type on x86_64, despite 64-bit operations being at best (addition, subtraction, logical ops) identical speed to 32-bit ones and at worst much slower (division, and loads/stores since you use twice as many cache lines).
The C standard only requires int to be at least 16 bits and places no upper bound on its width, so uint_fast32_t could be narrower than int, or the same width, or wider.
For example, a conforming implementation could make int 64 bits and uint_fast32_t a typedef for a 32-bit unsigned short. Or, conversely, int could be 16 bits and uint_fast32_t, as the name implies, must be at least 32 bits.
One interesting consequence is that this:
uint_fast32_t x = UINT_FAST32_MAX;
uint_fast32_t y = UINT_FAST32_MAX;
x * y;
could overflow, resulting in undefined behavior. For example, if short is 32 bits and int is 64 bits, then uint_fast32_t could be a typedef for unsigned short, which would promote to signed int before being multiplied; the result, which is nearly 2⁶⁴, is too big to be represented as an int.
POSIX requires int and unsigned int to be at least 32 bits, but the answer to your question doesn't change even for POSIX-compliant implementations. uint_fast32_t and int could still be either 32 and 64 bits respectively, or 64 and 32 bits. (The latter would imply that a 64-bit type is faster than int, which is odd given that int is supposed to have the "natural size suggested by the architecture", but it's permitted.)
In practice, most compiler implementers will tend to try to cover 8, 16, 32, and 64-bit integers with the predefined types, which is possible only if int is no wider than 32 bits. The only compilers I've seen that don't follow this were for Cray vector machines. (Extended integer types could work around this, but I haven't seen a compiler that takes advantage of that.)
If not, is there any other unsigned type which is guaranteed to be at least 32 bits and at least as large as int?
Yes: unsigned long (and unsigned long long, which is at least 64 bits).
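So one way to force unsigned semantics for the multiplication in the question is to convert an operand to unsigned long (or to unsigned long long when you need the full 64-bit product) before multiplying. A sketch:
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0xFFFFFFFFu;
    uint32_t b = 0xFFFFFFFFu;

    /* unsigned long never promotes to signed int, so this multiplication
       uses unsigned (wrap-around) arithmetic -- no undefined behavior. */
    unsigned long wrapped = (unsigned long)a * b;

    /* To keep the mathematically exact product, widen to unsigned long
       long, which is at least 64 bits. */
    unsigned long long full = (unsigned long long)a * b;

    printf("product modulo ULONG_MAX+1: %lu\n", wrapped);
    printf("exact 64-bit product:       %llu\n", full);
    return 0;
}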
I have read that int32_t is exactly 32 bits long and int_least32_t is only at least 32 bits, but they are both defined as the same typedef in my stdint.h:
typedef int int_least32_t;
and
typedef int int32_t;
So where is the difference? They are exactly the same...
int32_t is a signed integer type with a width of exactly 32 bits, no padding bits, and a two's complement representation for negative values.
int_least32_t is the smallest signed integer type with a width of at least 32 bits.
The exact-width type is provided only if the implementation directly supports it; the least-width type is always required.
The typedefs that you are seeing simply mean that in your environment both of these requirements are satisfied by the standard int type itself. This need not mean that these typedefs are the same in a different environment.
Why do you think that on another computer with a different processor, a different OS, or a different version of the C standard library you will see exactly these typedefs?
These two types are exactly what you wrote: one of them is exactly 32 bits, the other is at least 32 bits. So one possible situation is that both of them are 32 bits, and in your particular case you see that in stdint.h. On another system you may see that they are different.
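If you ever need to cope with an implementation that omits the optional exact-width type, you can test for its limit macro, which is defined exactly when the type exists. A sketch (my_i32 is just an illustrative name):
#include <stdint.h>

#ifdef INT32_MAX
typedef int32_t my_i32;        /* the exact-width type is available     */
#else
typedef int_least32_t my_i32;  /* fall back to the always-present type  */
#endif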
This is related to the following question:
How to Declare a 32-bit Integer in C
Several people mentioned int is always 32-bit on most platforms. I am curious if this is true.
Do you know any modern platforms with int of a different size? Ignore dinosaur platforms with 8-bit or 16-bit architectures.
NOTE: I already know how to declare a 32-bit integer from the other question. This one is more like a survey to find out which platforms (CPU/OS/Compiler) supporting integers with other sizes.
As several people have stated, there are no guarantees that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' mandated by the C99 specification:
int8_t
uint8_t
int32_t
uint32_t
etc...
They are generally of the form [u]intN_t, where the 'u' specifies that you want an unsigned quantity and N is the number of bits.
The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using these allows you to write nice, portable code :-)
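For example, bit manipulation is much easier to reason about on an exact-width type, because the shift widths and the wrap-around behaviour are the same everywhere the type exists. A sketch (rotl32 is just an illustrative helper, not a standard function):
#include <stdint.h>
#include <stdio.h>

/* Rotate a 32-bit value left by n bits. */
static uint32_t rotl32(uint32_t x, unsigned n) {
    n &= 31u;                                   /* keep the shift count in range */
    return (x << n) | (x >> ((32u - n) & 31u));
}

int main(void) {
    printf("0x%08X\n", (unsigned)rotl32(0x80000001u, 1));  /* prints 0x00000003 */
    return 0;
}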
"is always 32-bit on most platforms" - what's wrong with that snippet? :-)
The C standard does not mandate the sizes of many of its integral types. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, ones' complement, and sign/magnitude).
If you want a specific sized variable, you need to use one suitable for the platform you're running on, such as the use of #ifdef's, something like:
#ifdef LONG_IS_32BITS
typedef long int32;
#else
#ifdef INT_IS_32BITS
typedef int int32;
#else
#error No 32-bit data type available
#endif
#endif
Alternatively, C99 and later allow for the exact-width integer types intN_t and uintN_t:
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
At this moment in time, most desktop and server platforms use 32-bit integers, and even many embedded platforms (think handheld ARM or x86) use 32-bit ints. To get to a 16-bit int you have to get very small indeed: think "Berkeley mote" or some of the smaller Atmel Atmega chips. But they are out there.
No. Small embedded systems use 16 bit integers.
It vastly depends on your compiler. Some compile them as 64-bit on 64-bit machines, some compile them as 32-bit. Embedded systems are their own little special ball of wax.
Best thing you can do to check:
printf("%d\n", sizeof(int));
Note that sizeof yields a size in bytes. Use sizeof(int) * CHAR_BIT to get bits.
Code to print the number of bits for various types:
#include <limits.h>
#include <stdio.h>
int main(void) {
printf("short is %d bits\n", CHAR_BIT * sizeof( short ) );
printf("int is %d bits\n", CHAR_BIT * sizeof( int ) );
printf("long is %d bits\n", CHAR_BIT * sizeof( long ) );
printf("long long is %d bits\n", CHAR_BIT * sizeof(long long) );
return 0;
}
TI are still selling OMAP boards with the C55x DSPs on them, primarily used for video decoding. I believe the supplied compiler for this has a 16-bit int. It is hardly a dinosaur (the Nokia 770 was released in 2005), although you can get 32-bit DSPs.
Most code you write, you can safely assume it won't ever be run on a DSP. But perhaps not all.
Well, most ARM-based processors can run Thumb code, which is a 16-bit instruction encoding. That includes the yet-only-rumored Android notebooks and the bleeding-edge smartphones.
Also, some graphing calculators use 8-bit processors, and I'd call those fairly modern as well.
If you are also interested in the actual Max/Min Value instead of the number of bits, limits.h contains pretty much everything you want to know.
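For example, here is a quick sketch that prints the actual ranges on whatever platform you compile it for:
#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("int:  [%d, %d]\n", INT_MIN, INT_MAX);
    printf("long: [%ld, %ld]\n", LONG_MIN, LONG_MAX);
    printf("unsigned long long max: %llu\n", ULLONG_MAX);
    return 0;
}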