size of int variable - c

How is the size of int decided?
Is it true that the size of int depends on the processor, so that on a 32-bit machine it will be 32 bits and on a 16-bit machine it will be 16?
On my machine it shows as 32 bits, even though the machine has a 64-bit processor and 64-bit Ubuntu installed.

It depends on the implementation. The only thing the C standard guarantees is that
sizeof(char) == 1
and
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
and also minimum ranges of representable values for the types, which imply that char is at least 8 bits, int is at least 16 bits, etc.
So it must be decided by the implementation (compiler, OS, ...) and be documented.
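A minimal sketch to see what a particular implementation chose; the output depends entirely on the compiler and ABI you build with:

#include <stdio.h>

int main(void)
{
    /* Sizes are implementation-defined; this only reports what the
       compiler used to build it actually chose. */
    printf("char:      %zu byte(s)\n", sizeof(char));
    printf("short:     %zu byte(s)\n", sizeof(short));
    printf("int:       %zu byte(s)\n", sizeof(int));
    printf("long:      %zu byte(s)\n", sizeof(long));
    printf("long long: %zu byte(s)\n", sizeof(long long));
    return 0;
}

On x86-64 Linux with GCC this typically prints 1, 2, 4, 8, 8; on 64-bit Windows, long is typically 4 bytes instead.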

It depends on the compiler.
For example, try an old Turbo C compiler and it will give a size of 16 bits for an int, because the word size (the size the processor can address with the least effort) was 16 bits at the time that compiler was written.

Making int as wide as possible is not the best choice. (The choice is made by the ABI designers.)
A 64bit architecture like x86-64 can efficiently operate on int64_t, so it's natural for long to be 64 bits. (Microsoft kept long as 32bit in their x86-64 ABI, for various portability reasons that make sense given the existing codebases and APIs. This is basically irrelevant because portable code that actually cares about type sizes should be using int32_t and int64_t instead of making assumptions about int and long.)
Having int be int32_t actually makes for better, more efficient code in many cases. An array of int uses only 4 bytes per element, half the cache footprint of an array of int64_t. Also, specific to x86-64, 32bit operand-size is the default, so 64bit instructions need an extra code byte for a REX prefix. So code density is better with 32bit (or 8bit) integers than with 16 or 64bit. (See the x86 wiki for links to docs / guides / learning resources.)
If a program requires 64bit integer types for correct operation, it won't use int. (Storing a pointer in an int instead of an intptr_t is a bug, and we shouldn't make the ABI worse to accommodate broken code like that.) A programmer writing int probably expected a 32bit type, since most platforms work that way. (The standard of course only guarantees 16bits).
Since there's no expectation that int will be 64bit in general (e.g. on 32bit platforms), and making it 64bit will make some programs slower (and almost no programs faster), int is 32bit in most 64bit ABIs.
Also, there needs to be a name for a 32bit integer type, for int32_t to be a typedef for.
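For illustration, a minimal sketch of the fixed-width approach this answer recommends; the variable names are made up, only the <stdint.h>/<inttypes.h> types and macros are standard:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t sample_count = 48000;          /* exactly 32 bits on every conforming platform */
    int64_t file_offset  = 5000000000LL;   /* exactly 64 bits, independent of long's size   */
    intptr_t where = (intptr_t)&sample_count;  /* wide enough to hold a pointer (optional
                                                  type, but present on mainstream platforms) */

    printf("count = %" PRId32 ", offset = %" PRId64 "\n", sample_count, file_offset);
    printf("address as integer: %" PRIdPTR "\n", where);
    return 0;
}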

It depends primarily on the compiler.
If you use the Turbo C compiler, the integer size is 2 bytes.
If you use the GNU GCC compiler, the integer size is 4 bytes.
It depends only on the implementation of the C compiler.

The size of int basically depends on the architecture of your system.
Generally, if you have a 16-bit machine, your compiler will provide an int of size 2 bytes.
If your system is 32-bit, the compiler will provide 4 bytes for an int.
In more detail, the concept of the data bus comes into the picture: 16-bit or 32-bit refers to the width of the data bus in your system. The data bus width matters because its purpose is to deliver data to the processor, and the maximum it can deliver in a single fetch is the size compilers have traditionally preferred for int.
x86 -> 16-bit -> DOS -> Turbo C -> size of int -> 2 bytes
x386 -> 32-bit -> Windows/Linux -> GCC -> size of int -> 4 bytes

Yes, the size of int depends on the compiler.
For a 16-bit int the range is -32768 to 32767. With a 32- or 64-bit compiler it increases accordingly.

Related

different size of c data type in 32 and 64 bit

Why is there a different size for C data types on 32-bit and 64-bit systems?
For example: int is 4 bytes on 32-bit and 8 bytes on 64-bit.
What is the reason for doubling the size of data types on 64-bit? As far as my knowledge is concerned, there is no performance issue if we use the same sizes on a 64-bit system as on a 32-bit one.
Why is there a different size for C data types on 32-bit and 64-bit systems?
The sizes of C's basic data types are implementation-dependent. They are not necessarily dependent on machine architecture, and counterexamples abound. For example, 32-bit and 64-bit implementations of GCC use the same size data types for x86 and x86_64.
What is the reason for doubling the size of data types on 64-bit?
The reasons for implementation decisions vary with implementors and implementation characteristics. int is often, but not always, chosen to have a size that is natural for the target machine in some sense. That might mean that operations on it are fast, or that it is efficient to load from and store to memory, or other things. These are the kinds of considerations involved.
The C language definition does not mandate a specific size for most data types; instead, it specifies the range of values each type must be able to represent. char must be large enough to represent a single character of the execution character set (at least the range [-127,127]), short and int must be able to represent at least the range [-32767,32767], etc.
Traditionally, the size of int was the same as the "natural" word size for a given architecture, on the premise that a) it would be easier to implement, and b) operations on that type would be the most efficient. Whether that's still true today, I'm not qualified to say (not a hardware guy).
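To see what an implementation actually provides beyond the guaranteed minimum ranges, a small sketch reading the limits from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only mandates minimum ranges; <limits.h> reports the
       actual ranges of this particular implementation. */
    printf("short: %d .. %d\n", SHRT_MIN, SHRT_MAX);
    printf("int:   %d .. %d\n", INT_MIN, INT_MAX);
    printf("long:  %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}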

C Variable Definition

In C integer and short integer variables are identical: both range from -32768 to 32767, and the required bytes of both are also identical, namely 2.
So why are two different types necessary?
Basic integer types in C language do not have strictly defined ranges. They only have minimum range requirements specified by the language standard. That means that your assertion about int and short having the same range is generally incorrect.
Even though the minimum range requirements for int and short are the same, in a typical modern implementation the range of int is usually greater than the range of short.
The standard only guarantees sizeof(short) <= sizeof(int) <= sizeof(long) as far as I remember. So short and int can be the same size, but they don't have to be. 32-bit compilers usually have a 2-byte short and a 4-byte int.
The C++ standard (the C standard has a very similar paragraph; the quote below is from the n3337 draft of C++11):
Section 3.9.1, point 2:
There are five standard signed integer types: “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.
Different architectures have different "natural" integer sizes, so a 16-bit architecture will naturally calculate a 16-bit value, where a 32- or 64-bit architecture will use 32- or 64-bit ints. It's a choice for the compiler producer (or the definer of the ABI for a particular architecture, which tends to be a decision formed by a combination of the OS and the "main" compiler producer for that architecture).
In modern C and C++, there are types along the lines of int32_t that are guaranteed to be exactly 32 bits. This helps portability. If these types aren't sufficient (or the project is using a not-so-modern compiler), it is a good idea NOT to use int in a data structure or type that needs a particular precision/size, but to define a uint32 or int32 or something similar that can be used in all places where the size matters.
In a lot of code, the size of a variable isn't critical, because the numbers involved are within such a range that a few thousand is far more than you will ever need. For example, the number of characters in a filename is defined by the OS, and I'm not aware of any OS where a filename/path is more than 4K characters, so a 16-, 32- or 64-bit value that can reach at least 32K would be perfectly fine for counting that. It doesn't really matter what size it is, so here we SHOULD use int rather than a specific size. int should be a type the compiler treats as "efficient", so it should give good performance; some architectures will run slower if you use short, and 16-bit architectures will certainly run slower using long.
The guaranteed minimum ranges of int and short are the same. However, an implementation is free to define short with a smaller range than int (as long as it still meets the minimum), which means that it may be expected to take the same or smaller storage space than int¹. The standard says of int that:
A ‘‘plain’’ int object has the natural size suggested by the
architecture of the execution environment.
Taken together, this means that (for values that fall into the range -32767 to 32767) portable code should prefer int in almost all cases. The exception would be where a very large number of values is being stored, such that the potentially smaller storage space occupied by short is a consideration.
1. Of course a pathological implementation is free to define a short that has a larger size in bytes than int, as long as it still has equal or lesser range - there is no good reason to do so, however.
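A small sketch of that storage trade-off; the element count is arbitrary and only serves to show that the saving from short scales with the number of values stored, not with a single scalar:

#include <stdio.h>

#define COUNT 1000000  /* arbitrary illustrative element count */

int main(void)
{
    static short samples_short[COUNT];
    static int   samples_int[COUNT];

    /* The saving from short only matters when many values are stored;
       for a single scalar or loop counter, prefer int. */
    printf("array of short: %zu bytes\n", sizeof samples_short);
    printf("array of int:   %zu bytes\n", sizeof samples_int);
    return 0;
}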
They are identical on a 16-bit IBM-compatible PC. However, it is not guaranteed that they will be identical on other hardware.
A VAX-type system (VAX stands for Virtual Address eXtension) treats these two types differently: it uses 2 bytes for a short integer and 4 bytes for an integer.
That is why we have two different, though sometimes identical, types with their own properties.
For general purposes on desktops and laptops, we use int.

What is the historical context for long and int often being the same size?

According to numerous answers here, long and int are both 32 bits in size on common platforms in C and C++ (Windows & Linux, 32 & 64 bit.) (I'm aware that there is no standard, but in practice, these are the observed sizes.)
So my question is, how did this come about? Why do we have two types that are the same size? I previously always assumed long would be 64 bits most of the time, and int 32. I'm not saying it "should" be one way or the other, I'm just curious as to how we got here.
From the C99 rationale (PDF) on section 6.2.5:
[...] In the 1970s, 16-bit C (for the PDP-11) first represented file information with 16-bit integers, which were rapidly obsoleted by disk progress. People switched to a 32-bit file system, first using int[2] constructs which were not only awkward, but also not efficiently portable to 32-bit hardware.
To solve the problem, the long type was added to the language, even though this required C on the PDP-11 to generate multiple operations to simulate 32-bit arithmetic. Even as 32-bit minicomputers became available alongside 16-bit systems, people still used int for efficiency, reserving long for cases where larger integers were truly needed, since long was noticeably less efficient on 16-bit systems. Both short and long were added to C, making short available for 16 bits, long for 32 bits, and int as convenient for performance. There was no desire to lock the numbers 16 or 32 into the language, as there existed C compilers for at least 24- and 36-bit CPUs, but rather to provide names that could be used for 32 bits as needed.
PDP-11 C might have been re-implemented with int as 32-bits, thus avoiding the need for long; but that would have made people change most uses of int to short or suffer serious performance degradation on PDP-11s. In addition to the potential impact on source code, the impact on existing object code and data files would have been worse, even in 1976. By the 1990s, with an immense installed base of software, and with widespread use of dynamic linked libraries, the impact of changing the size of a common data object in an existing environment is so high that few people would tolerate it, although it might be acceptable when creating a new environment. Hence, many vendors, to avoid namespace conflicts, have added a 64-bit integer to their 32-bit C environments using a new name, of which long long has been the most widely used. [...]
Historically, most of the sizes and types in C can be traced back to the PDP-11 architecture. That had bytes, words (16 bits) and doublewords (32 bits). When C and UNIX were moved to another machine (the Interdata 832 I think), the word length was 32 bits. To keep the source compatible, long and int were defined so that, strictly
sizeof(short) ≤ sizeof(int) ≤ sizeof(long).
Most machines now end up with sizeof(int) = sizeof(long) because 16 bits is no longer convenient, but we have long long to get 64 bits if needed.
Update: strictly, I should have said "compilers", because different compiler implementors can make different decisions for the same instruction set architecture (GCC and Microsoft, for example).
Back in the late 70s and early 80s many architectures were 16 bit, so typically char was 8 bit, int was 16 bit and long was 32 bit. In the late 80s there was a general move to 32 bit architectures and so int became 32 bits but long remained at 32 bits.
Over the last 10 years there has been a move towards 64 bit computing and we now have a couple of different models, the most common being LP64, where ints are still 32 bits and long is now 64 bits.
Bottom line: don't make any assumptions about the sizes of different integer types (other than what's defined in the standard of course) and if you need fixed size types then use <stdint.h>.
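One hedged sketch of how to make such an assumption explicit rather than silent, using C11's _Static_assert together with a fixed-width typedef (the sample_t alias is hypothetical, purely for illustration):

#include <stdint.h>

/* If some piece of code genuinely relies on int being 32 bits, state it and
   let the build fail where that is not true, instead of misbehaving silently. */
_Static_assert(sizeof(int) == 4, "this code assumes a 32-bit int");

/* Better still, use a fixed-width type and drop the assumption entirely. */
typedef int32_t sample_t;

int main(void)
{
    sample_t x = 42;     /* exactly 32 bits wherever int32_t exists */
    return (int)x - 42;  /* returns 0; keeps the example self-contained */
}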
As I understand it, the C standard requires that a long be at least 32 bits long, and be at least as long as an int. An int, on the other hand, is always (I think) equal to the native word size of the architecture.
Bear in mind that, when the standards were drawn up, 32-bit machines were not common; originally, an int would probably have been the native 16-bits, and a long would have been twice as long at 32-bits.
In 16-bit operating systems, int was 16-bit and long was 32-bit. After moving to Win32, both became 32 bits. When moving to a 64-bit OS, it is a good idea to keep the size of long unchanged; this doesn't break existing code when it is compiled as 64-bit. New types (like the Microsoft-specific __int64, size_t, etc.) may be used in 64-bit programs.
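A rough sketch that guesses the data model from the sizes of long and pointers; it is a heuristic for the common cases, not an exhaustive classification:

#include <stdio.h>

int main(void)
{
    /* Common data models: ILP32 (32-bit targets), LP64 (typical 64-bit Unix),
       LLP64 (64-bit Windows, where long stays 32-bit). */
    if (sizeof(void *) == 4)
        puts("4-byte pointers: looks like ILP32");
    else if (sizeof(long) == 8)
        puts("8-byte pointers and 8-byte long: looks like LP64");
    else
        puts("8-byte pointers but 4-byte long: looks like LLP64");
    return 0;
}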

integer size in c depends on what?

Size of the integer depends on what?
Is the size of an int variable in C dependent on the machine or the compiler?
It's implementation-dependent. The C standard only requires that:
char has at least 8 bits
short has at least 16 bits
int has at least 16 bits
long has at least 32 bits
long long has at least 64 bits (added in 1999)
sizeof(char) ≤ sizeof(short) ≤ sizeof(int) ≤ sizeof(long) ≤ sizeof(long long)
In the 16/32-bit days, the de facto standard was:
int was the "native" integer size
the other types were the minimum size allowed
However, 64-bit systems generally did not make int 64 bits, which would have created the awkward situation of having three 64-bit types and no 32-bit type. Some compilers expanded long to 64 bits.
Formally, representations of all fundamental data types (including their sizes) are compiler-dependent and only compiler-dependent. The compiler (or, more properly, the implementation) can serve as an abstraction layer between the program and the machine, completely hiding the machine from the program or distorting it in any way it pleases.
But in practice compilers are designed to generate the most efficient code for given machine and/or OS. In order to achieve that the fundamental data types should have natural representation for the given machine and/or OS. In that sense, these representations are indirectly dependent on the machine and/or OS.
In other words, from the abstract, formal and pedantic point of view the compiler is free to completely ignore the data type representations specific to the machine. But it makes no practical sense. In practice compilers make full use of data type representations provided by the machine.
Still, if some data type is not supported by the machine, the compiler can still provide that data type to programs by implementing its support at the compiler level ("emulating" it). For example, 64-bit integer types are normally available in 32-bit compilers for 32-bit machines, even though they are not directly supported by the machine. Back in the day, compilers would often provide compiler-level support for floating-point types on machines that were not equipped with a floating-point co-processor (and therefore did not support floating-point types directly).
It depends primarily on the compiler. For example, if you have a 64-bit x86 processor, you can use an old 16-bit compiler and get 16-bit ints, a 32-bit compiler and get 32-bit ints, or a 64-bit compiler and get 64-bit ints.
It depends on the processor to the degree that the compiler targets a particular processor, and (for example) an ancient 16-bit processor simply won't run code that targets a shiny new 64-bit processor.
The C and C++ standards do guarantee some minimum size (indirectly by specifying minimum supported ranges):
char: 8 bits
short: 16 bits
int: 16 bits
long: 32 bits
long long: 64 bits
They also guarantee that the sizes/ranges are non-decreasing in the following order: char, short, int, long, and long long¹.
1. long long is specified in C99 and C++0x, but some compilers (e.g., gcc, Intel, Comeau) allow it in C++03 code as well. If you want to, you can persuade most (if not all) of them to reject long long in C++03 code.
As MAK said, it's implementation dependent. That means it depends on the compiler. Typically, a compiler targets a single machine so you can also think of it as machine dependent.
AFAIK, the size of data types is implementation dependent. This means that it is entirely up to the implementer (i.e. the guy writing the compiler) to choose what it will be.
So, in short it depends on the compiler. But often it is simpler to just use whatever size it is easiest to map to the word size of the underlying machine - so the compiler often uses the size that fits the best with the underlying machine.
It depends on the running environment, no matter what hardware you have. If you are using a 16-bit OS like DOS, int will be 2 bytes. On a 32-bit OS like Windows or Unix, it is 4 bytes, and so on. Even if you run a 32-bit OS on a 64-bit processor, the size will be 4 bytes. I hope this helps.
It depends on both the architecture (machine, executable type) and the compiler. C and C++ only guarantee certain minimums. (I think those are char: 8 bits, int: 16 bits, long: 32 bits)
C99 includes certain known width types like uint32_t (when possible). See stdint.h
Update: Addressed Conrad Meyer's concerns.
The size of an integer variable depends upon the compiler:
with a 16-bit compiler:
int is 2 bytes
char is 1 byte
float is 4 bytes
with a 32-bit compiler:
int grows to 4 bytes
char stays 1 byte (sizeof(char) is 1 by definition)
float stays 4 bytes
So the sizes do not simply double; only some types (notably int, long and pointers) change as the compiler's target model changes, and the same applies to 64-bit compilers.

Size of an Integer in C

Does the ANSI C specification call for size of int to be equal to the word size (32 bit / 64 bit) of the system?
In other words, can I decipher the word size of the system based on the space allocated to an int?
The size of the int type is implementation-dependent, but cannot be shorter than 16 bits. See the Minimum Type Limits section here.
This Linux kernel development site claims that the size of the long type is guaranteed to be the machine's word size, but that statement is likely to be false: I couldn't find any confirmation of that in the standard, and long is only 32 bits wide on Win64 systems (since these systems use the LLP64 data model).
The language specification recommends that int should have the natural "word" size for the hardware platform. However, it is not strictly required. If you noticed, to simplify 32-bit-to-64-bit code transition some modern implementations prefer to keep int as 32-bit type even if the underlying hardware platform has 64-bit word size.
And as Frederic already noted, in any case the size may not be smaller than 16 value-forming bits.
The original intention was that int would be the word size, the most efficient data-processing size. Still, what tends to happen is that massive amounts of code are written that assume the size of int is X bits, and when the hardware that code runs on moves to a larger word size, the carelessly written code would break. Compiler vendors have to keep their customers happy, so they say "OK, we'll leave int sized as before, but we'll make long bigger now". Or, "ahhh... too many people complained about us making long bigger, so we'll create a long long type while leaving sizeof(int) == sizeof(long)". So, these days, it's all a mess:
Does the ANSI C specification call for size of int to be equal to the word size (32 bit / 64 bit) of the system?
Pretty much the idea, but it doesn't insist on it.
In other words, can I decipher the word size of the system based on the space allocated to an int?
Not in practice.
You should check the limits.h header file provided by your system. The INT_MAX declaration should help you back-calculate the minimum size an integer must have. For details, see http://www.opengroup.org/onlinepubs/009695399/basedefs/limits.h.html
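A small sketch along those lines, combining CHAR_BIT with sizeof and counting the value bits of INT_MAX:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Storage width in bits: bytes occupied times bits per byte. */
    printf("int occupies %zu bits of storage\n", sizeof(int) * CHAR_BIT);

    /* Value bits can also be counted from INT_MAX, independent of any padding bits. */
    unsigned bits = 0;
    for (int max = INT_MAX; max != 0; max >>= 1)
        bits++;
    printf("INT_MAX requires %u value bits (plus one sign bit)\n", bits);
    return 0;
}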
