in this article, taken from the book "Linux kernel development":
http://www.makelinux.net/books/lkd2/ch19lev1sec2
it says:
The size of the C long type is guaranteed to be the machine's word size. On the downside, however, code cannot assume that the standard C types have any specific size. Furthermore, there is no guarantee that an int is the same size as a long
My question is: I thought int, not long, was the same size as the machine word, and I couldn't find any official standard that makes this guarantee.
Any thoughts?
Sometimes, people on the Internet are wrong. The sizes are fixed by the ABI. Linux ports don't necessarily create an original ABI (usually another platform or manufacturer recommendation is followed), so there's nobody making guarantees about int and long. The term "machine word" is also very ill-defined.
The size of the C long type is guaranteed to be the machine's word size.
This is wrong for a lot of platforms. For example, in the embedded world, 8-bit MCUs (e.g., the HC08) have an 8-bit word size and 16-bit MCUs (e.g., the MSP430) have a 16-bit word size, but long is 32 bits on these platforms. On Windows x64 (with the MSVC compiler), the word size is 64 bits but long is 32 bits.
The C standard does not know what a word is, and a C implementation might do things in unusual ways, so your book is wrong (for example, some C implementation might use a 64-bit long on an 8-bit microcontroller).
However, the C99 standard defines the <stdint.h> header with types like intptr_t (an integer type capable of holding a void* pointer converted to an integer and back) or int64_t (an exactly 64-bit integer), etc.
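For instance, here is a minimal sketch (assuming a hosted C99-or-later implementation that provides the optional exact-width and pointer-capable types) showing what those <stdint.h> types give you:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t big = 1;            /* exactly 64 bits wide, if the implementation provides it */
    int x = 42;
    intptr_t p = (intptr_t)&x;  /* integer type that can hold a converted object pointer */

    printf("sizeof(int64_t)  = %zu\n", sizeof big);
    printf("sizeof(intptr_t) = %zu\n", sizeof p);
    printf("sizeof(void *)   = %zu\n", sizeof(void *));
    return 0;
}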
See also this question, and Wikipedia's page on C data types.
Is it correct that a long in C has a size of 4 bytes for a 32-bit platform and 8 bytes for a 64-bit platform?
The size of long (and the sizes of objects generally) is determined by the C implementation, not the platform programs execute on.
Generally speaking, a C implementation is a compiler plus the libraries and other supporting software needed to run C programs.1 There can be more than one C implementation for a platform. In fact, one compiler can implement multiple C implementations by using different switches to request various configurations.
A general C implementation typically uses sizes for short, int, and long that work well with the target processor model (or models) and give the programmer good choices. However, C implementations can be designed for special purposes, such as supporting older code that was intended for a specific size of long. Generally speaking, a C compiler can write instructions for whatever size of long it defines.
The C standard imposes some lower limits on the sizes of objects. The number of bits in a character, CHAR_BIT, must be at least eight. short and int must be capable of representing values from −32767 to +32767, and long must be capable of representing −2147483647 to +2147483647. It also requires that long be capable of representing all int values, that int be capable of representing all short values, and short be capable of representing all signed char values. Other than that, the C standard imposes few requirements. It does not require that int or long be a particular size on particular platforms. And operating systems have no say in what happens inside a programming language. An operating system sets requirements for running programs and interfacing with the system, but, inside a program, software can do anything it wants. So a compiler can call 17 bits an int if it wants, and the operating system has no control over that.
Footnote
1 The C 2011 standard (draft N1570) defines an implementation, in clause 3.12, as a “particular set of software, running in a particular translation environment under particular control options, that performs translation of programs for, and supports execution of functions in, a particular execution environment.”
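To see what a particular implementation actually chose (often wider than those minimums), here is a quick sketch using nothing beyond <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only fixes lower bounds; these show what this
       particular implementation actually chose. */
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("SHRT_MAX = %d\n", SHRT_MAX);
    printf("INT_MAX  = %d\n", INT_MAX);
    printf("LONG_MAX = %ld\n", LONG_MAX);
    return 0;
}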
No. It is up to the implementation!
The only rules are that char must be exactly CHAR_BIT bits wide, that the sizes satisfy char <= short <= int <= long <= long long, and that char is at least 8 bits, short and int at least 16 bits, long at least 32 bits, and long long at least 64 bits.
So actually all integer types (except long long) could be 32 bits wide, and the C standard is perfectly fine with that as long as CHAR_BIT is set to 32.
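If your code does rely on a particular layout anyway, it is safer to state that assumption explicitly so the build fails on an implementation where it doesn't hold. A sketch using C11's _Static_assert (the 32-bit-int line is just an example of a project assumption, not something the standard promises):

#include <limits.h>

/* Relationships the standard does guarantee -- these can never fire. */
_Static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
_Static_assert(SHRT_MAX <= INT_MAX, "int can represent every short");
_Static_assert(INT_MAX <= LONG_MAX, "long can represent every int");

/* NOT guaranteed -- an explicit assumption, now verified at compile time. */
_Static_assert(sizeof(int) * CHAR_BIT == 32, "this code assumes a 32-bit int");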
In C, int and short int variables are identical: both range from -32768 to 32767, and both require the same number of bytes, namely 2.
So why are two different types necessary?
Basic integer types in the C language do not have strictly defined ranges. They only have minimum range requirements specified by the language standard. That means that your assertion about int and short having the same range is generally incorrect.
Even though the minimum range requirements for int and short are the same, in a typical modern implementation the range of int is usually greater than the range of short.
As far as I remember, the standard only guarantees sizeof(short) <= sizeof(int) <= sizeof(long). So short and int can be the same size, but they don't have to be. 32-bit compilers usually have a 2-byte short and a 4-byte int.
The C++ standard says (the C standard has a very similar paragraph; this quote is from the n3337 version of the C++11 draft specification):
Section 3.9.1, point 2:
There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
Different architectures have different sizes of "natural" integers, so a 16-bit architecture will naturally calculate with 16-bit values, whereas a 32- or 64-bit architecture will use 32- or 64-bit ints. It's a choice for the compiler producer (or for the definer of the ABI for a particular architecture, which tends to be a decision made jointly by the OS and the "main" compiler producer for that architecture).
In modern C and C++, there are types along the lines of int32_t that are guaranteed to be exactly 32 bits, which helps portability. If these types aren't sufficient (or the project is using a not-so-modern compiler), it is a good idea NOT to use int in a data structure or type that needs a particular precision/size, but to define a uint32 or int32 or something similar that can be used in all the places where the size matters.
In a lot of code, the size of a variable isn't critical, because the values involved are in such a range that a few thousand is far more than you will ever need. For example, the number of characters in a filename is defined by the OS, and I'm not aware of any OS where a filename/path is more than 4K characters, so a 16-, 32- or 64-bit value that can go to at least 32K would be perfectly fine for counting that; it doesn't really matter what size it is. So here we SHOULD use int rather than trying to pick a specific size: int should be the type the compiler treats as "efficient" and should give good performance, whereas some architectures will run slower if you use short, and 16-bit architectures will certainly run slower using long.
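A sketch of that split (the struct and function here are made-up examples): fixed-width types where the layout matters, plain int where any reasonable size will do.

#include <stdint.h>

/* Hypothetical on-disk record header: every field has a fixed width, so
   the field sizes don't change from one compiler to the next (padding and
   byte order are separate concerns). */
struct record_header {
    uint32_t magic;
    uint32_t payload_length;
    int64_t  timestamp;
};

/* Counting slashes in a path: the count stays small, so the "natural",
   efficient int is the right choice -- no fixed width needed. */
static int count_slashes(const char *path)
{
    int n = 0;
    for (const char *p = path; *p != '\0'; ++p)
        if (*p == '/')
            n++;
    return n;
}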
The guaranteed minimum ranges of int and short are the same. However, an implementation is free to define short with a smaller range than int (as long as it still meets the minimum), which means it may be expected to take the same or smaller storage space than int.1 The standard says of int that:
A "plain" int object has the natural size suggested by the architecture of the execution environment.
Taken together, this means that (for values that fall into the range -32767 to 32767) portable code should prefer int in almost all cases. The exception would be where a very large number of values is being stored, such that the potentially smaller storage space occupied by short is a consideration.
1. Of course a pathological implementation is free to define a short that has a larger size in bytes than int, as long as it still has equal or lesser range - there is no good reason to do so, however.
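To make the storage consideration above concrete, here's a small sketch (the ten-million-element figure is arbitrary) comparing the footprint of short and int arrays at whatever sizes your implementation uses:

#include <stdio.h>

#define SAMPLE_COUNT (10 * 1000 * 1000)

static short samples_short[SAMPLE_COUNT];  /* ten million small readings */
static int   samples_int[SAMPLE_COUNT];    /* the same data as plain int */

int main(void)
{
    printf("short array: %zu bytes\n", sizeof samples_short);
    printf("int array:   %zu bytes\n", sizeof samples_int);
    return 0;
}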
They are identical on a 16-bit IBM-compatible PC. However, it is not certain that they will be identical on other hardware as well.
On a VAX (Virtual Address eXtension) type of system, the two are treated differently: a short integer occupies 2 bytes and an integer occupies 4 bytes.
So that is the reason we have two different types that can nonetheless have identical properties.
For general-purpose use on desktops and laptops, we use int.
AFAIK, in Pascal the size of Integer depends on the platform (on 32-bit computers it has 32 bits, and on 64-bit computers it has 64 bits).
Is this the same in C (I mean, on 32-bit computers its size is 32 bits, and on 64-bit it is 64)?
Pretty much, but the compiler has control. Use the sizeof operator if you just want to check what is happening in your environment. If you need the number of bytes to be fixed rather than left up to the environment/compiler, use fixed-width types like int64_t, which are declared in <stdint.h>.
It's not only dependent on the processor architecture, but on the operating system as well. The C language specification does not fix the exact sizes of the integer types (only minimum ranges), so that choice is left to the implementor of the language. Look at the top-voted answer here for more background:
What is the bit size of long on 64-bit Windows?
In summary, on both Linux and Windows, 'int' will be 32-bit. For other platforms, you'll have to check their specifications in their C compiler documentation. The best practice, however, is to use the types found in <inttypes.h> -- uint32_t, int32_t, uint64_t, int64_t.
On Windows, it's a bit tougher; inttypes.h is part of C99, for which Visual C++ historically didn't claim compliance. You can get inttypes.h from projects like http://code.google.com/p/msinttypes/, or use <windows.h> -- INT32, INT64, UINT32, UINT64. There are also the Microsoft extensions __int32 and __int64 (with unsigned __int32 and unsigned __int64), which don't need any additional header file.
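Where <inttypes.h> is available (it is required by C99 and, as far as I know, ships with recent versions of Visual C++), the PRI* macros keep the printf format in sync with the fixed-width types; a brief sketch:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t  big  = INT64_C(9000000000);   /* bigger than any 32-bit value */
    uint32_t mask = UINT32_C(0xDEADBEEF);

    /* PRId64 / PRIX32 expand to the right conversion specifiers for this
       implementation, whatever the underlying types happen to be. */
    printf("big  = %" PRId64 "\n", big);
    printf("mask = 0x%" PRIX32 "\n", mask);
    return 0;
}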
C does not define the size of its integer types. You have to read the compiler manual.
The only ordering rule is sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).
That decision is made by the compiler; you can see what the size of an int is in bytes in your specific case by running: printf("%d", (int)sizeof(int));
I highly suggest however that you do not write code that is dependent on the size of int being a specific amount.
That's right. It depends on the platform.
However, the usual practice these days is to make an int 32 bits on either a 32-bit or a 64-bit computer. The long int type is usually 64 bits on a 64-bit computer (though 64-bit Windows keeps it at 32 bits), and the long long int type is 64 bits even on a 32-bit computer.
As #E_Net4 correctly observes, the C++ standard permits considerable variation from the above answer, which speaks only to today's usual practice. (The C++ standard permits such variation because it wishes to leave a compiler free to define int and long int, not to mention short int, in ways that maximize a particular processor's performance.)
#include <stdio.h>
int main()
{
    int c;
    return 0;
}   // on Intel architecture

#include <stdio.h>
int main()
{
    int c;
    return 0;
}   // on AMD architecture
/*
Here I have the same code on two different machines, and I want to know: is the size of the data types dependent on the machine?
*/
See here:
size guarantee for integral/arithmetic types in C and C++
Fundamental C type sizes depend on the implementation (compiler) and architecture; however, they have some guaranteed lower bounds. One should therefore never hardcode type sizes, and should instead use sizeof(TYPENAME) to get a type's length in bytes.
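A short illustration of that advice (nothing beyond the standard library assumed): let sizeof supply every byte count instead of hardcoding one.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t n = 100;

    /* Allocate and zero the buffer using sizeof, never a hardcoded "4" or "8". */
    long *values = malloc(n * sizeof *values);
    if (values == NULL)
        return EXIT_FAILURE;
    memset(values, 0, n * sizeof *values);

    printf("each element is %zu bytes on this implementation\n", sizeof *values);

    free(values);
    return 0;
}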
Quick answer: Yes, mostly, but ...
The sizes of types in C are dependent on the decisions of compiler writers, subject to the requirements of the standard.
The decisions of compiler writers tend to be strongly influenced by the CPU architecture. For example, the C standard says:
A "plain" int object has the natural size suggested by the
architecture of the execution environment.
though that leaves a lot of room for judgement.
Such decisions can also be influenced by other considerations, such as compatibility with compilers from the same vendor for other architectures and the convenience of having types for each supported size. For example, on a 64-bit system, the obvious "natural size" for int is 64 bits, but many compilers still have 32-bit int. (With 8-bit char and 64-bit int, short would probably be either 16 or 32 bits, and you couldn't have fundamental integer types covering both sizes.)
(C99 introduces "extended integer types", which could solve the issue of covering all the supported sizes, but I don't know of any compiler that implements them.)
Yes. The size of the basic datatypes depends on the underlying CPU architecture. ISO C (and C++) guarantees only minimum sizes for datatypes.
But it's not consistent across compiler vendors for the same CPU. Consider that there are compilers with 32-bit long ints for Intel 386 CPUs, and other compilers that give you 64-bit longs.
And don't forget about the decade or so of pain that MS programmers had to deal with during the era of the Intel 286 machines, what with all of the different "memory models" that compilers forced on us. 16-bit pointers versus 32-bit segmented pointers. I for one am glad that those days are gone.
It usually does, for performance reasons. The C standard defines the minimum value ranges for all types like char, short, int, long, long long and their unsigned counterparts.
However, x86 CPUs from Intel and AMD are essentially the same hardware to most x86 compilers. At least, they expose the same registers and instructions to the programmer and most of them operate identically (if we consider what's officially defined and documented).
At any rate, it's up to the compiler or its developer(s) to use some other size, not necessarily matching the natural operand size of the target hardware, as long as that size agrees with the C standard.
Does the ANSI C specification call for the size of int to be equal to the word size (32-bit / 64-bit) of the system?
In other words, can I decipher the word size of the system based on the space allocated to an int?
The size of the int type is implementation-dependent, but cannot be shorter than 16 bits. See the Minimum Type Limits section here.
This Linux kernel development site claims that the size of the long type is guaranteed to be the machine's word size, but that statement is likely to be false: I couldn't find any confirmation of that in the standard, and long is only 32 bits wide on Win64 systems (since these systems use the LLP64 data model).
The language specification recommends that int should have the natural "word" size for the hardware platform. However, it is not strictly required. If you've noticed, to simplify the 32-bit-to-64-bit code transition, some modern implementations prefer to keep int as a 32-bit type even if the underlying hardware platform has a 64-bit word size.
And as Frederic already noted, in any case the size may not be smaller than 16 value-forming bits.
The original intention was that int would be the word size - the most efficient data-processing size. Still, what tends to happen is that massive amounts of code are written that assume the size of int is X bits, and when the hardware that code runs on moves to a larger word size, the carelessly written code breaks. Compiler vendors have to keep their customers happy, so they say "OK, we'll leave int sized as before, but we'll make long bigger now". Or, "ahhh... too many people complained about us making long bigger, so we'll create a long long type while leaving sizeof(int) == sizeof(long)". So, these days, it's all a mess.
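In practice that mess has settled into a handful of named data models: ILP32 on 32-bit systems, LP64 on most 64-bit Unix-like systems, and LLP64 on 64-bit Windows. A small sketch that reports which one a given compiler uses:

#include <stdio.h>

int main(void)
{
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    printf("void *:    %zu bytes\n", sizeof(void *));

    /* Typical results: 4/4/8/4 = ILP32, 4/8/8/8 = LP64, 4/4/8/8 = LLP64. */
    return 0;
}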
Does the ANSI C specification call for the size of int to be equal to the word size (32-bit / 64-bit) of the system?
Pretty much the idea, but it doesn't insist on it.
In other words, can I decipher the word size of the system based on the space allocated to an int?
Not in practice.
You should check your system-provided limits.h header file. The INT_MAX declaration should help you back-calculate the minimum size an integer must have. For details, look at http://www.opengroup.org/onlinepubs/009695399/basedefs/limits.h.html
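A sketch of that back-calculation, assuming only <limits.h>: count how many value bits INT_MAX provides and add one for the sign.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Halve INT_MAX until it reaches zero to count the value bits of int;
       padding bits, if any, are not visible this way. */
    int value_bits = 0;
    for (unsigned long max = INT_MAX; max != 0; max >>= 1)
        value_bits++;

    printf("INT_MAX = %d, so int has at least %d bits\n", INT_MAX, value_bits + 1);
    return 0;
}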