C Variable Definition

In C, integer and short integer variables are identical: both range from -32768 to 32767, and both require the same number of bytes, namely 2.
So why are two different types necessary?

Basic integer types in the C language do not have strictly defined ranges; they only have minimum range requirements specified by the language standard. That means your assertion about int and short having the same range is generally incorrect.
Even though the minimum range requirements for int and short are the same, in a typical modern implementation the range of int is usually greater than the range of short.

As far as I remember, the standard only guarantees sizeof(short) <= sizeof(int) <= sizeof(long). So short and int can be the same size, but they don't have to be. 32-bit compilers usually have a 2-byte short and a 4-byte int.
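A quick way to see what a particular compiler actually does (a minimal sketch, not taken from any standard text) is to print the sizes directly:

#include <stdio.h>

int main(void)
{
    /* Only sizeof(short) <= sizeof(int) <= sizeof(long) is guaranteed;
       the concrete values are implementation-defined. */
    printf("sizeof(short) = %zu\n", sizeof(short));
    printf("sizeof(int)   = %zu\n", sizeof(int));
    printf("sizeof(long)  = %zu\n", sizeof(long));
    return 0;
}

On a typical 32-bit or 64-bit desktop compiler this prints 2 and 4 for short and int (with long at 4 or 8); on a 16-bit target, short and int may well both be 2.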

The C++ standard says (the C standard has a very similar paragraph; this quote is from the n3337 draft of C++11):
Section 3.9.1, point 2:
There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
Different architectures have different "natural" integer sizes, so a 16-bit architecture will naturally calculate a 16-bit value, whereas a 32- or 64-bit architecture will use 32- or 64-bit ints. It's a choice for the compiler producer (or for whoever defines the ABI for a particular architecture, which tends to be a decision formed by a combination of the OS and the "main" compiler producer for that architecture).
In modern C and C++, there are types along the lines of int32_t that are guaranteed to be exactly 32 bits. This helps portability. If these types aren't sufficient (or the project is using a not-so-modern compiler), it is a good idea NOT to use int in a data structure or type that needs a particular precision/size, but to define a uint32 or int32 or something similar that can be used in all places where the size matters.
In a lot of code, the size of a variable isn't critical, because the numbers involved stay within such a range that a few thousand is far more than you will ever need. For example, the number of characters in a filename is defined by the OS, and I'm not aware of any OS where a filename/path is more than 4K characters, so a 16-, 32- or 64-bit value that can go to at least 32K would be perfectly fine for counting that. It doesn't really matter what size it is, so here we SHOULD use int rather than trying to use a specific size. In a compiler, int should be a type that is "efficient", so it should help give good performance: some architectures will run slower if you use short, and 16-bit architectures will certainly run slower using long.
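As a rough sketch of that split (the struct and function names here are made up for illustration): fixed-width types where the layout matters, plain int where it doesn't:

#include <stdint.h>

/* Layout-sensitive data: exact-width types so the size is the same everywhere. */
struct record_header {
    uint32_t magic;
    int32_t  payload_length;
};

/* Counting characters in a path: the count easily fits in any int, so plain
   int is fine and lets the compiler pick its "natural", efficient size. */
static int count_dots(const char *path)
{
    int n = 0;
    for (int i = 0; path[i] != '\0'; i++)
        if (path[i] == '.')
            n++;
    return n;
}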

The guaranteed minimum ranges of int and short are the same. However, an implementation is free to define short with a smaller range than int (as long as it still meets the minimum), which means it may be expected to take the same or smaller storage space than int.[1] The standard says of int that:
A ‘‘plain’’ int object has the natural size suggested by the
architecture of the execution environment.
Taken together, this means that (for values that fall into the range -32767 to 32767) portable code should prefer int in almost all cases. The exception would be where a very large number of values are being stored, such that the potentially smaller storage space occupied by short is a consideration.
[1] Of course, a pathological implementation is free to define a short that has a larger size in bytes than int, as long as it still has equal or lesser range; there is no good reason to do so, however.

The two types happen to be identical on a 16-bit IBM-compatible PC, but there is no guarantee that they are identical on other hardware.
A VAX-type system (VAX stands for Virtual Address eXtension) treats the two types differently: it uses 2 bytes for a short integer and 4 bytes for an integer.
That is why we have two distinct types that merely happen to coincide on some platforms. For general-purpose work on desktops and laptops, we use int.

Related

Different size of C data types on 32-bit and 64-bit systems

Why is there a different size of C data types on a 32-bit and a 64-bit system?
For example: int is 4 bytes on a 32-bit system and 8 bytes on a 64-bit one.
What is the reason for doubling the size of data types on 64-bit? As far as my knowledge goes, there is no performance issue if we use the same sizes on a 64-bit system as on a 32-bit one.
Why is there a different size of C data types on a 32-bit and a 64-bit system?
The sizes of C's basic data types are implementation-dependent. They are not necessarily dependent on machine architecture, and counterexamples abound. For example, 32-bit and 64-bit implementations of GCC use the same size data types for x86 and x86_64.
What is the reason for doubling the size of data types on 64-bit?
The reasons for implementation decisions vary with implementors and implementation characteristics. int is often, but not always, chosen to have a size that is natural for the target machine in some sense. That might mean that operations on it are fast, or that it is efficient to load from and store to memory, or other things. These are the kinds of considerations involved.
The C language definition does not mandate a specific size for most data types; instead, it specifies the range of values each type must be able to represent. char must be large enough to represent a single character of the execution character set (at least the range [-127,127]), short and int must be large enough to represent at least the range [-32767,32767], etc.
Traditionally, the size of int was the same as the "natural" word size for a given architecture, on the premise that a) it would be easier to implement, and b) operations on that type would be the most efficient. Whether that's still true today, I'm not qualified to say (not a hardware guy).
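Those minimum ranges are exposed through <limits.h>, so (as a small illustrative program, not mandated by anything quoted above) you can print what your implementation actually provides:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only promises minimum magnitudes; the actual limits
       printed here are whatever this implementation chose. */
    printf("char:  [%d, %d] (CHAR_BIT = %d)\n", CHAR_MIN, CHAR_MAX, CHAR_BIT);
    printf("short: [%d, %d]\n", SHRT_MIN, SHRT_MAX);
    printf("int:   [%d, %d]\n", INT_MIN, INT_MAX);
    printf("long:  [%ld, %ld]\n", LONG_MIN, LONG_MAX);
    return 0;
}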

Why an extra integer type among short/int/long?

Until recently I believed that 'long' was the same thing as 'int', for historical reasons and because desktop processors all have at least 32 bits (and I had only ever run into that "dupe" because I develop only on 32-bit machines).
Reading this, I discovered that, in fact, the C standard defines the int type to be at least an int16, while 'long' is supposed to be at least an int32.
In fact in the list
Short signed integer type: capable of containing at least the [−32767, +32767] range.
Basic signed integer type: capable of containing at least the [−32767, +32767] range.
Long signed integer type: capable of containing at least the [−2147483647, +2147483647] range.
Long long signed integer type: capable of containing at least the [−9223372036854775807, +9223372036854775807] range.
there are always non-empty intersections, and therefore a duplicate, whatever implementation the compiler and platform choose.
Why did the standard committee introduce an extra type among what could be as simple as char/short/int/long (or int_k, int_2k, int_4k, int_8k)?
Was that for historical reasons, like, gcc x.x implemented int as 32 bits while another compiler implemented it as 16, or is there a real technical reason I'm missing?
The central point is that int/unsigned is not just another step on the char, short, int, long, long long ladder of integer sizes. int is special: it is the size that all narrower types promote to, and so it typically works "best" on a given processor. Whether int should match short, match long, or sit distinctly between short and long is highly platform dependent.
C is designed to accommodate a wide range of processors. That C is 40+ years old is testament to a successful strategy.
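The "all narrower types promote to int" point can be made concrete with a small program (just an illustration of the usual promotion rules, nothing platform-specific assumed):

#include <stdio.h>

int main(void)
{
    short a = 1000, b = 2000;
    /* Both operands are promoted to int before the addition, so the
       expression a + b has type int, not short. */
    printf("sizeof(a)     = %zu\n", sizeof a);
    printf("sizeof(a + b) = %zu\n", sizeof(a + b));  /* equals sizeof(int) */
    return 0;
}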

What's the difference between "int" and "int_fast16_t"?

As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.
Isn't that exactly what the C99 definition of int_fast16_t is too?
Maybe they put it in there just for consistency, since the other int_fastXX_t are needed?
Update
To summarize discussion below:
My question was wrong in many ways. The C standard does not specify bitness for int. It gives a range [-32767,32767] that it must contain.
I realize at first most people would say, "but that range implies at least 16 bits!" But C doesn't require two's-complement storage of integers. If the standard had said "16-bit", there might be some platforms with 1 parity bit, 1 sign bit, and 14 magnitude bits that would still be "meeting the standard" but not satisfy that range.
The standard does not say anything about int being the most efficient type. Aside from size requirements above, int can be decided by the compiler developer based on whatever criteria they deem most important. (speed, size, backward compatibility, etc)
On the other hand, int_fast16_t is like providing a hint to the compiler that it should use a type that is optimum for performance, possibly at the expense of any other tradeoff.
Likewise, int_least16_t would tell the compiler to use the smallest type that's >= 16-bits, even if it would be slower. Good for preserving space in large arrays and stuff.
Example: MSVC on x86-64 has a 32-bit int, even on 64-bit systems. MS chose to do this because too many people assumed int would always be exactly 32-bits, and so a lot of ABIs would break. However, it's possible that int_fast32_t would be a 64-bit number if 64-bit values were faster on x86-64. (Which I don't think is actually the case, but it just demonstrates the point)
int is a "most efficient type" in speed/size, but that is not specified by the C spec. It must be 16 or more bits.
int_fast16_t is most efficient type in speed with at least the range of a 16 bit int.
Example: A given platform may have decided that int should be 32-bit for many reasons, not only speed. The same system may find a different type is fastest for 16-bit integers.
Example: On a 64-bit machine, where one would expect int to be 64-bit, a compiler may use a 32-bit int compilation mode for compatibility. In this mode, int_fast16_t could be 64-bit, as that is natively the fastest width and avoids alignment issues, etc.
int_fast16_t is guaranteed to be the fastest int with a size of at least 16 bits. int has no guarantee of its size except that:
sizeof(char) = 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).
And that it can hold the range of -32767 to +32767.
(7.20.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
As I understand it, the C specification says that type int is supposed to be the most efficient type on target platform that contains at least 16 bits.
Here's what the standard actually says about int: (N1570 draft, section 6.2.5, paragraph 5):
A "plain" int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).
The reference to INT_MIN and INT_MAX is perhaps slightly misleading; those values are chosen based on the characteristics of type int, not the other way around.
And the phrase "the natural size" is also slightly misleading. Depending on the target architecture, there may not be just one "natural" size for an integer type.
Elsewhere, the standard says that INT_MIN must be at most -32767, and INT_MAX must be at least +32767, which implies that int is at least 16 bits.
Here's what the standard says about int_fast16_t (7.20.1.3):
Each of the following types designates an integer type that is usually
fastest to operate with among all integer types that have at least the
specified width.
with a footnote:
The designated type is not guaranteed to be fastest for all purposes;
if the implementation has no clear grounds for choosing one type over
another, it will simply pick some integer type satisfying the
signedness and width requirements.
The requirements for int and int_fast16_t are similar but not identical -- and they're similarly vague.
In practice, the size of int is often chosen based on criteria other than "the natural size" -- or that phrase is interpreted for convenience. Often the size of int for a new architecture is chosen to match the size for an existing architecture, to minimize the difficulty of porting code. And there's a fairly strong motivation to make int no wider than 32 bits, so that the types char, short, and int can cover sizes of 8, 16, and 32 bits. On 64-bit systems, particularly x86-64, the "natural" size is probably 64 bits, but most C compilers make int 32 bits rather than 64 (and some compilers even make long just 32 bits).
The choice of the underlying type for int_fast16_t is, I suspect, less dependent on such considerations, since any code that uses it is explicitly asking for a fast 16-bit signed integer type. A lot of existing code makes assumptions about the characteristics of int that go beyond what the standard guarantees, and compiler developers have to cater to such code if they want their compilers to be used.
The difference is that the fast types are allowed to be wider than their counterparts (without fast) for efficiency/optimization purposes. But the C standard by no means guarantees they are actually faster.
C11, 7.20.1.3 Fastest minimum-width integer types
1 Each of the following types designates an integer type that is usually fastest 262) to operate with among all integer types that have at least the specified width.
2 The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N.
262) The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
Another difference is that the fast and least types are required, whereas the exact-width types are optional:
3 The following types are required: int_fast8_t int_fast16_t int_fast32_t int_fast64_t uint_fast8_t uint_fast16_t uint_fast32_t uint_fast64_t. All other types of this form are optional.
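That difference is easy to see in code: the following sketch compiles anywhere that has <stdint.h>, while the exact-width branch is guarded because int16_t itself is allowed to be missing:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Always available in C99 and later: */
    printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
    printf("int_fast16_t:  %zu bytes\n", sizeof(int_fast16_t));
#ifdef INT16_MAX
    /* Exact-width types (and their limit macros) are optional. */
    printf("int16_t:       %zu bytes\n", sizeof(int16_t));
#else
    puts("int16_t: not provided by this implementation");
#endif
    return 0;
}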
From the C99 rationale, 7.8 Format conversion of integer types <inttypes.h> (a document that accompanies the Standard), emphasis mine:
C89 specifies that the language should support four signed and
unsigned integer data types, char, short, int and long, but places
very little requirement on their size other than that int and short be
at least 16 bits and long be at least as long as int and not smaller
than 32 bits. For 16-bit systems, most implementations assign 8, 16,
16 and 32 bits to char, short, int, and long, respectively. For 32-bit
systems, the common practice is to assign 8, 16, 32 and 32 bits to
these types. This difference in int size can create some problems for
users who migrate from one system to another which assigns different
sizes to integer types, because Standard C’s integer promotion rule
can produce silent changes unexpectedly. The need for defining an
extended integer type increased with the introduction of 64-bit
systems.
The purpose of <inttypes.h> is to provide a set of integer types whose
definitions are consistent across machines and independent of
operating systems and other implementation idiosyncrasies. It defines,
via typedef, integer types of various sizes. Implementations are free
to typedef them as Standard C integer types or extensions that they
support. Consistent use of this header will greatly increase the
portability of a user’s program across platforms.
The main difference between int and int_fast16_t is that the latter is likely to be free of these "implementation idiosyncrasies". You may think of it as something like:
I don't care about current OS/implementation "politics" of int size. Just give me whatever the fastest signed integer type with at least 16 bits is.
On some platforms, using 16-bit values may be much slower than using 32-bit values [e.g. an 8-bit or 16-bit store would require performing a 32-bit load, modifying the loaded value, and writing back the result]. Even if one could fit twice as many 16-bit values in a cache as 32-bit values (the normal situation where 16-bit values would be faster than 32-bit values on 32-bit systems), the need to have every write preceded by a read would negate any speed advantage that this could produce unless a data structure was read far more often than it was written. On such platforms, a type like int_fast16_t would likely be 32 bits.
That having been said, the Standard unfortunately does not allow what would be the most helpful semantics for a compiler, which would be to allow variables of type int_fast16_t whose address is not taken to behave arbitrarily as 16-bit types or as larger types, depending upon what is convenient. Consider, for example, the function:
#include <stdint.h>  /* int32_t, int_fast16_t */

int32_t blah(int32_t x)
{
    int_fast16_t y = x;  /* may be narrowed if int_fast16_t is only 16 bits wide */
    return y;            /* converted back to int32_t */
}
On many platforms, 16-bit integers stored in memory can often be manipulated just like those stored in registers, but there are no instructions to perform 16-bit operations on registers. If an int_fast16_t variable stored in memory is only capable of holding -32768 to +32767, that same restriction would apply to int_fast16_t variables stored in registers. Since coercing oversized values into signed integer types too small to hold them is implementation-defined behavior, that would compel the above code to add instructions to sign-extend the lower 16 bits of x before returning it; if the Standard allowed for it, a flexible "at least 16 bits, but more if convenient" type could eliminate the need for such instructions.
An example of how the two types might be different: suppose there’s an architecture where 8-bit, 16-bit, 32-bit and 64-bit arithmetic are equally fast. (The i386 comes close.) Then, the implementer might use a LLP64 model, or better yet allow the programmer to choose between ILP64, LP64 and LLP64, since there’s a lot of code out there that assumes long is exactly 32 bits, and that sizeof(int) <= sizeof(void*) <= sizeof(long). Any 64-bit implementation must violate at least one of these assumptions.
In that case, int would probably be 32 bits wide, because that will break the least code from other systems, but uint_fast16_t could still be 16 bits wide, saving space.
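A quick way to check which of those assumptions your own platform breaks (a C11 sketch; one of the assertions is expected to fail on any 64-bit data model, and that failure is the point):

#include <limits.h>

/* LLP64 (e.g. 64-bit Windows) keeps long at 32 bits but breaks the pointer
   ordering; LP64 (most 64-bit Unix-likes) keeps the ordering but makes long
   64 bits. A 64-bit model cannot satisfy both assertions at once. */
_Static_assert(sizeof(long) * CHAR_BIT == 32,
               "long is not exactly 32 bits here");
_Static_assert(sizeof(int) <= sizeof(void *) && sizeof(void *) <= sizeof(long),
               "sizeof(int) <= sizeof(void*) <= sizeof(long) does not hold here");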

What is the difference between intXX_t and int_fastXX_t?

I have recently discovered the existence of the standard "fastest" types, mainly int_fast32_t and int_fast64_t.
I was always told that, for normal use on a mainstream architecture, one is better off using the classical int and long, which should always match the processor's default word size and so avoid useless numeric conversions.
In the C99 Standard, it says in §7.18.1.3p2 :
"The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
And there is also a quote about it in §7.18.1.3p1 :
"The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."
It's unclear to me what "fastest" really means. I do not understand when I should use these types and when I should not.
I have googled a little on this and found that some open-source projects have changed some of their functions to them, but not all of them. They didn't really explain why they changed a part, and only a part, of their code.
Do you know the specific cases/usages in which int_fastXX_t is really faster than the classical types?
In the C99 Standard, 7.18.1.3 Fastest minimum-width integer types.
(7.18.1.3p1) "Each of the following types designates an integer type that is usually fastest225) to operate with among all integer types that have at least the specified width."
225) "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."
and
(7.18.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer
type with a width of at least N."
The types int_fastN_t and uint_fastN_t are counterparts to the exact-width integer types intN_t and uintN_t. The implementation guarantees that they are at least N bits wide, but it may choose a wider type if that enables better optimization; the only guarantee is the minimum width of N bits.
For example, on a 32-bit machine, uint_fast16_t could be defined as unsigned int rather than as unsigned short, because working with types of machine word size would be more efficient.
Another reason for their existence is that the exact-width integer types are optional in C, while the fastest minimum-width integer types and the minimum-width integer types (int_leastN_t and uint_leastN_t) are required.
GNU libc defines {int,uint}_fast{16,32}_t as 64-bit when compiling for 64-bit CPUs and as 32-bit otherwise. Operations on 64-bit integers are faster on Intel and AMD 64-bit x86 CPUs than the same operations on 32-bit integers.
There will probably not be a difference except on exotic hardware where int32_t and int16_t don't even exist.
In that case you might use int_least16_t to get the smallest type that can contain 16 bits. Could be important if you want to conserve space.
On the other hand, using int_fast16_t might get you another type, larger than int_least16_t but possibly faster for "typical" integer use. The implementation will have to consider what is faster and what is typical. Perhaps this is obvious for some special purpose hardware?
On most common machines these 16-bit types will all be a typedef for short, and you don't have to bother.
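One place where the distinction can matter is bulk storage versus working values; a rough sketch (the array size and names are made up for illustration):

#include <stdint.h>
#include <stddef.h>

#define NSAMPLES 10000

/* Large array: use the least-width type to keep the storage compact. */
static int_least16_t samples[NSAMPLES];

/* Working value: use a fast type and let it be wider if that is quicker. */
int_fast32_t sum_samples(void)
{
    int_fast32_t total = 0;
    for (size_t i = 0; i < NSAMPLES; i++)
        total += samples[i];
    return total;
}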
IMO they are pretty pointless.
The compiler doesn't care what you call a type, only what size it is and what rules apply to it. So if int, int32_t and int_fast32_t are all 32 bits on your platform, they will almost certainly all perform the same.
The theory is that the implementers of the language should choose based on what is fastest on their hardware, but the standard's authors never pinned down a clear definition of "fastest". Add to that the fact that platform maintainers are reluctant to change the definition of such types (because it would be an ABI break), and the definitions end up arbitrarily picked at the start of a platform's life (or inherited from other platforms the C library was ported from) and never touched again.
If you are at the level of micro-optimisation where you think variable size may make a significant difference, then benchmark the different options with your code on your processor. Otherwise don't worry about it. The "fast" types don't add anything useful, IMO.

Size of C data types on different machines & sizeof(complex long double)

Does anybody know of a website or a paper where the sizes of C data types are compared across different machines? I'm interested in the values on some 'big' machines like a System z or the like.
And:
Is there an upper bound on the number of bytes that the biggest native data type on any machine can have, and is it always complex long double?
Edit: I'm not sure, but does SIMD register data also take advantage of the CPU's cache? Data types that are stored in a special unit and do not use the L1/L2/L3 cache are outside my interest. Only the types {char, short, int, long, long long, float, double, long double, _Bool, void *} (and their _Complex variants) will be examined.
The C data type size does not depend on the machine platform. It depends on the compiler implementation. Two different compilers on the same hardware platform might implement basic types differently, resulting in completely different sizes.
You should also take into account the fact that such standard types as size_t are not guaranteed to be represented by user-accessible types like unsigned int. It is quite legal and possible that size_t might be implemented through an implementation-specific unsigned type of unknown size and range.
Also, theoretically (and pedantically) the C language has no "biggest" type in terms of size. The C language specification makes absolutely no guarantees about the relative sizes of the basic types; it only makes guarantees about the relative ranges of the representable values of each type. For example, the language guarantees that the range of int is not smaller than the range of short. However, since [almost] any type can contain an arbitrary number of padding bits, theoretically the object size of type short might be greater than that of type int. This would be, of course, a very exotic situation.
In practice though, you can expect that long long int is the biggest integral type and long double is the biggest floating point type. You can also include the complex types into the consideration, if you wish to do so.
You mention "native" datatypes, but note that complex was not part of C originally; it was only added (as _Complex) in C99. The original native types for C are char, int, float, double, and void.
The size of a data type is generally determined by the underlying platform as well as the compiler. The C standard defines the minimum range for these types and defines a few relative relationships (long int must be at least as long as a regular int, etc). There's no easy way to determine the absolute size of any type without testing it.
When working with a new platform whose data type sizes I don't know, I write up a short app that dumps the result of sizeof for all the standard C types. There are also headers like stdint.h that give you data types that you can trust to be a certain size.
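Something along these lines (a minimal version of that kind of dump program, covering the types listed in the question):

#include <stdio.h>

int main(void)
{
    printf("char:        %zu\n", sizeof(char));
    printf("short:       %zu\n", sizeof(short));
    printf("int:         %zu\n", sizeof(int));
    printf("long:        %zu\n", sizeof(long));
    printf("long long:   %zu\n", sizeof(long long));
    printf("float:       %zu\n", sizeof(float));
    printf("double:      %zu\n", sizeof(double));
    printf("long double: %zu\n", sizeof(long double));
    printf("_Bool:       %zu\n", sizeof(_Bool));
    printf("void *:      %zu\n", sizeof(void *));
    return 0;
}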
There is no upper bound on the size of a data type. The C standard requires char to be "large enough to store any member of the execution character set". This partially binds the size of the native types to the machine architecture, so a theoretical machine whose execution characters were up to 1 MB in size would have 1 MB chars (sizeof(char) would still be 1 by definition, but CHAR_BIT would be enormous). Practically speaking, you probably won't find a machine where this is the case.
The tricky thing about native types is that some architectures and compilers may extend them. For example, most compilers targeting modern Intel hardware offer a __m128 datatype, which is the width of a SIMD register. AVX has a 256-bit SIMD width, Larrabee 512-bit, and each has a corresponding native type in the compiler.
The long double type on x86 machines is 80 bits (10 bytes), http://en.wikipedia.org/wiki/Long_double, although this is a more-or-less legacy format now; 64-bit Windows, for example, maps long double to plain double.
