Windows: How big is a BOOL?

How big (in bits) is a Windows BOOL data type?
Microsoft defines the BOOL data type as:
BOOL Boolean variable (should be TRUE or FALSE).
This type is declared in WinDef.h as follows:
typedef int BOOL;
Which converts my question into:
How big (in bits) is an int data type?
Edit: In before K&R.
Edit 2: Something to think about
Pretend we're creating a typed programming language, and compiler. You have a type that represents something logically being True or False. If your compiler can also link to Windows DLLs, and you want to call an API that requires a BOOL data type, what data type from your language would you pass/return?
In order to interop with the Windows BOOL data type, you have to know how large a BOOL is. The question gets converted to how big an int is. But that's a C/C++ int, not the Integer data type in our pretend language.
So I need to either find, or create, a data type that is the same size as an int.
Note: In my original question I'm not creating a compiler. I'm calling Windows from a language that isn't C/C++, so I need to find a data type that is the same size as what Windows expects.

int is officially "an integral type that is larger than or equal to the size of type short int, and shorter than or equal to the size of type long." It can be any size, and is implementation specific.
It is 4 bytes (32 bits), on Microsoft's current compiler implementation (this is compiler specific, not platform specific). You can see this on the Fundamental Types (C++) page of MSDN (near the bottom).
Sizes of Fundamental Types
Type Size
======================= =========
int, unsigned int 4 bytes

It is platform-dependent, but easy to find out:
sizeof(int)*8

In terms of code, you can always work out the size in bits of any type via:
#include <windows.h>
#include <limits.h>
sizeof (BOOL) * CHAR_BIT
However, from a semantic point of view, the number of bits in a BOOL is supposed to be one. That is to say, all non-zero values of BOOL should be treated equally, including the value of TRUE. FALSE (which is 0) is the only other value that should have a distinguished meaning. To follow this rule strictly actually requires a bit of thought. For example, to cast a BOOL down to a char you have to do the following:
char a_CHAR_variable = (char) (0 != b_A_BOOL_variable);
(If you were to just cast directly, then values like (1 << 8) would be interpreted as FALSE instead of TRUE.) Or, if you just want to avoid the multi-value issues altogether:
char a_CHAR_variable = !!b_A_BOOL_variable;
If you are trying to use the various different values of BOOL for some other purpose, chances are what you are doing is either wrong or at the very least will lead to something unmaintainable.
It's because of these complications that the C++ language actually added a bona fide bool type.
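Putting the pieces above together, a minimal Windows-only sketch (assuming the usual <windows.h> definitions) that prints the width of BOOL and demonstrates the normalization trick might look like this:
#include <windows.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("BOOL is %u bits wide\n", (unsigned)(sizeof(BOOL) * CHAR_BIT));

    BOOL b = 1 << 8;                  /* non-zero, therefore logically TRUE */
    char narrowed  = (char)(0 != b);  /* safe: collapses to 0 or 1 first */
    char truncated = (char)b;         /* unsafe: 256 typically truncates to 0, i.e. FALSE */

    printf("narrowed = %d, truncated = %d\n", narrowed, truncated);
    return 0;
}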

It depends on the implementation. (Even on the same OS.) Use sizeof(int) to find the size of int on the implementation that you're currently using. You should never hard-code this into your C program.
Better yet, use sizeof(BOOL) so you don't have to worry if MS ever changes their definition of BOOL.

They are both 32 bits big (4 bytes).
In languages that have a native boolean type, booleans are usually 8 bits big (1 byte), not 1 bit as I once thought.

It's as big as sizeof(int) says it is?
(That's in bytes so multiply by 8.)

On Windows 7 x64 with C# 2010, sizeof(bool) gives an answer of 1, whereas sizeof(int) gives an answer of 4.
Therefore the answer to "how big in bits is a bool" is 8 there, and it is not the same as an int.

Related

When should I use UINT32_C(), INT32_C(),... macros in C?

I switched to fixed-length integer types in my projects mainly because they help me think about integer sizes more clearly when using them. Including them via #include <inttypes.h> also includes a bunch of other macros like the printing macros PRIu32, PRIu64,...
To assign a constant value to a fixed length variable I can use macros like UINT32_C() and INT32_C(). I started using them whenever I assigned a constant value.
This leads to code similar to this:
uint64_t i;
for (i = UINT64_C(0); i < UINT64_C(10); i++) { ... }
Now I saw several examples which did not care about that. One is the stdbool.h include file:
#define bool _Bool
#define false 0
#define true 1
bool has a size of 1 byte on my machine, so it does not look like an int. But 0 and 1 should be integers which are turned automatically into the right type by the compiler. If I used that in my example, the code would be much easier to read:
uint64_t i;
for (i = 0; i < 10; i++) { ... }
So when should I use the fixed-length constant macros like UINT32_C(), and when should I leave that work to the compiler (I'm using GCC)? What if I were writing code under MISRA C?
As a rule of thumb, you should use them when the type of the literal matters. There are two things to consider: the size and the signedness.
Regarding size:
The C standard guarantees that an int can hold values up to at least 32767. Since you can't get an integer literal with a smaller type than int, values up to 32767 should not need the macros. If you need larger values, then the type of the literal starts to matter and it is a good idea to use those macros.
Regarding signedness:
Integer literals with no suffix are usually of a signed type. This is potentially dangerous, as it can cause all manner of subtle bugs during implicit type promotion. For example (my_uint8_t + 1) << 31 would cause an undefined behavior bug on a 32 bit system, while (my_uint8_t + 1u) << 31 would not.
This is why MISRA has a rule stating that all integer literals should have a u/U suffix if the intention is to use unsigned types. So in my example above you could use my_uint8_t + UINT32_C(1), but you could just as well use 1u, which is perhaps the most readable. Either should be fine for MISRA.
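To illustrate the promotion hazard mentioned above, here is a small sketch (assuming a 32-bit int; the names are purely illustrative):
#include <stdint.h>

uint32_t shift_bad(uint8_t flags)
{
    /* flags promotes to (signed) int, so the shift by 31 overflows a
       32-bit signed int: undefined behavior. */
    return (flags + 1) << 31;
}

uint32_t shift_ok(uint8_t flags)
{
    /* 1u forces the addition (and hence the shift) into unsigned
       arithmetic, which is defined to wrap around. */
    return (flags + 1u) << 31;
}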
As for why stdbool.h defines true/false to be 1/0, it is because the standard explicitly says so. Boolean conditions in C still use int type, and not bool type like in C++, for backwards compatibility reasons.
It is however considered good style to treat boolean conditions as if C had a true boolean type. MISRA-C:2012 has a whole set of rules regarding this concept, called essentially boolean type. This can give better type safety during static analysis and also prevent various bugs.
It's for using smallish integer literals where the context won't result in the compiler casting it to the correct size.
I've worked on an embedded platform where int is 16 bits and long is 32 bits. If you were trying to write portable code to work on platforms with either 16-bit or 32-bit int types, and wanted to pass a 32-bit "unsigned integer literal" to a variadic function, you'd need the cast:
#define BAUDRATE UINT32_C(38400)
printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);
On the 16-bit platform, the cast creates 38400UL and on the 32-bit platform just 38400U. Those match the PRIu32 macro, which expands to either "lu" or "u".
I think that most compilers would generate identical code for (uint32_t) X as for UINT32_C(X) when X is an integer literal, but that might not have been the case with early compilers.
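For completeness, a self-contained version of the snippet above might look like this (the baud rate is just an illustrative value):
#include <inttypes.h>
#include <stdio.h>

#define BAUDRATE UINT32_C(38400)

int main(void)
{
    /* PRIu32 expands to the length modifier that matches the type
       produced by UINT32_C() on this platform. */
    printf("Set baudrate to %" PRIu32 "\n", BAUDRATE);
    return 0;
}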

int v/s. long in C

On my system, I get:
sizeof ( int ) = 4
sizeof ( long ) = 4
When I checked with a C program, both int & long overflowed to the negative after:
a = 2147483647;
a++;
If both can represent the same range of numbers, why would I ever use the long keyword?
int has a minimum range of -32767 to 32767, whereas long has a minimum range of -2147483647 to 2147483647.
If you are writing portable code that may have to compile on different C implementations, then you should use long if you need that range. If you're only writing non-portable code for one specific implementation, then you're right - it doesn't matter.
Because sizeof(int) == sizeof(long) isn't always true. int normally represents the platform's natural, fastest size and is at least 16 bits; long, on the other hand, is at least 32 bits.
C defines a number of integer types and specifies the relation of their sizes. Basically, what it says is that sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(short) >= sizeof(char), and that sizeof(char) == 1.
But the actual sizes are not defined, and depend on the architecture you are running on. On a 32-bit PC, int and long are typically four bytes and long long is 8 bytes. But on a 64-bit system, long is typically 8 bytes, and thus different from int.
There is also a type called uintptr_t (and intptr_t) that is guaranteed to be able to hold the value of any data pointer.
The important thing to remember is to not assume that you can, for example, store pointer values in a long or an int. Being portable is probably more important than you think, and it is likely that you will want to compile your code on a 64-bit system in the near future.
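If you want to see what your own implementation does, a quick sketch like this prints the sizes (the numbers will differ between platforms and compilers):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    printf("int:       %u bytes\n", (unsigned)sizeof(int));
    printf("long:      %u bytes\n", (unsigned)sizeof(long));
    printf("long long: %u bytes\n", (unsigned)sizeof(long long));
    printf("void *:    %u bytes\n", (unsigned)sizeof(void *));
    printf("intptr_t:  %u bytes\n", (unsigned)sizeof(intptr_t));
    return 0;
}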
I think it's more of a compiler issue nowadays, since computers have become much faster and programs demand larger numbers than they used to.
On different platform or with a different compiler, the int and long may be different.
If you don't plan to port your code to anything else or use a different machine, then pick the one you want, it won't make a difference.
It depends on the compiler, and you might want to check this out: What does the C++ standard state the size of int, long type to be?
The size of built-in data types varies with the C implementation, but they all have minimum ranges. Nowadays, int is typically 4 bytes (32 bits) because most operating systems are at least 32-bit. Note that char is always 1 byte.
The size of a data type depends upon the compiler. Different compilers have different sizes for int and other data types.
So if you write code that is going to run on different machines, you should use long, or choose based on the range of values your variable may hold.

Should you always use 'int' for numbers in C, even if they are non-negative?

I always use unsigned int for values that should never be negative. But today I
noticed this situation in my code:
void CreateRequestHeader( unsigned bitsAvailable, unsigned mandatoryDataSize,
                          unsigned optionalDataSize )
{
    if ( bitsAvailable - mandatoryDataSize >= optionalDataSize ) {
        // Optional data fits, so add it to the header.
    }
    // BUG! The above includes the optional part even if
    // mandatoryDataSize > bitsAvailable.
}
Should I start using int instead of unsigned int for numbers, even if they
can't be negative?
One thing that hasn't been mentioned is that interchanging signed/unsigned numbers can lead to security bugs. This is a big issue, since many of the functions in the standard C library take/return unsigned numbers (fread, memcpy, malloc, etc. all take size_t parameters).
For instance, take the following innocuous example (from real code):
//Copy a user-defined structure into a buffer and process it
char* processNext(char* data, short length)
{
    char buffer[512];
    if (length <= 512) {
        memcpy(buffer, data, length);
        process(buffer);
        return data + length;
    } else {
        return NULL;
    }
}
Looks harmless, right? The problem is that length is signed, but is converted to unsigned when passed to memcpy. Thus setting length to SHRT_MIN will pass the <= 512 test, but cause memcpy to copy far more than 512 bytes to the buffer - this allows an attacker to overwrite the function return address on the stack and (after a bit of work) take over your computer!
You may naively be saying, "It's so obvious that length needs to be size_t or checked to be >= 0, I could never make that mistake". Except, I guarantee that if you've ever written anything non-trivial, you have. So have the authors of Windows, Linux, BSD, Solaris, Firefox, OpenSSL, Safari, MS Paint, Internet Explorer, Google Picasa, Opera, Flash, Open Office, Subversion, Apache, Python, PHP, Pidgin, Gimp, ... on and on and on ... - and these are all bright people whose job is knowing security.
In short, always use size_t for sizes.
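As a sketch of that advice, the function above might be rewritten with size_t roughly like this (process() is assumed to exist elsewhere):
#include <stddef.h>
#include <string.h>

void process(const char *buffer);   /* assumed to be defined elsewhere */

char *processNext(char *data, size_t length)
{
    char buffer[512];
    if (length <= sizeof buffer) {
        memcpy(buffer, data, length);   /* length can never be negative */
        process(buffer);
        return data + length;
    }
    return NULL;
}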
Man, programming is hard.
Should I always ...
The answer to "Should I always ..." is almost certainly 'no'; there are a lot of factors that dictate whether you should use a given data type, and consistency is important.
But this is a highly subjective question, and it's really easy to mess up with unsigned types:
for (unsigned int i = 10; i >= 0; i--);
results in an infinite loop.
This is why some style guides, including Google's C++ Style Guide, discourage unsigned data types.
In my personal opinion, I haven't run into many bugs caused by these problems with unsigned data types — I'd say use assertions to check your code and use them judiciously (and less when you're performing arithmetic).
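For reference, one common way to write the descending loop above so that it terminates correctly with an unsigned index is this small sketch:
#include <stdio.h>

int main(void)
{
    for (unsigned int i = 10; i-- > 0; ) {
        /* the body sees i = 9, 8, ..., 0 and the loop terminates normally */
        printf("%u\n", i);
    }
    return 0;
}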
Some cases where you should use unsigned integer types are:
You need to treat a datum as a pure binary representation.
You need the semantics of modulo arithmetic you get with unsigned numbers.
You have to interface with code that uses unsigned types (e.g. standard library routines that accept/return size_t values).
But for general arithmetic, the thing is, when you say that something "can't be negative," that does not necessarily mean you should use an unsigned type. You can put a negative value in an unsigned variable; it just becomes a really large value when you go to get it out. So, if you mean that negative values are forbidden, such as for a basic square root function, then you are stating a precondition of the function, and you should assert it. And you can't assert that what cannot be, is; you need a way to hold out-of-band values so you can test for them (this is the same sort of logic behind getchar() returning an int and not a char).
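A tiny illustration of the assert-the-precondition approach (my_sqrt is just a made-up name for this sketch):
#include <assert.h>
#include <math.h>

double my_sqrt(int value)
{
    /* negative input violates the stated precondition */
    assert(value >= 0);
    return sqrt((double)value);
}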
Additionally, the choice of signed vs. unsigned can have practical repercussions on performance as well. Take a look at the (contrived) code below:
#include <stdbool.h>
bool foo_i(int a) {
return (a + 69) > a;
}
bool foo_u(unsigned int a)
{
return (a + 69u) > a;
}
Both foo's are the same except for the type of their parameter. But, when compiled with c99 -fomit-frame-pointer -O2 -S, you get:
.file "try.c"
.text
.p2align 4,,15
.globl foo_i
.type foo_i, #function
foo_i:
movl $1, %eax
ret
.size foo_i, .-foo_i
.p2align 4,,15
.globl foo_u
.type foo_u, #function
foo_u:
movl 4(%esp), %eax
leal 69(%eax), %edx
cmpl %eax, %edx
seta %al
ret
.size foo_u, .-foo_u
.ident "GCC: (Debian 4.4.4-7) 4.4.4"
.section .note.GNU-stack,"",#progbits
You can see that foo_i() is more efficient than foo_u(). This is because unsigned arithmetic overflow is defined by the standard to "wrap around," so (a + 69u) may very well be smaller than a if a is very large, and thus there must be code for this case. On the other hand, signed arithmetic overflow is undefined, so GCC will go ahead and assume signed arithmetic doesn't overflow, and so (a + 69) can't ever be less than a. Choosing unsigned types indiscriminately can therefore unnecessarily impact performance.
The answer is Yes. The "unsigned" int type of C and C++ is not an "always positive integer", no matter what the name of the type looks like. The behavior of C/C++ unsigned ints makes no sense if you try to read the type as "non-negative"... for example:
The difference of two unsigned values is an unsigned number (which makes no sense if you read it as "the difference between two non-negative numbers is non-negative").
The addition of an int and an unsigned int is unsigned.
There is an implicit conversion from int to unsigned int (if you read unsigned as "non-negative", it's the opposite conversion that would make sense).
If you declare a function accepting an unsigned parameter and someone passes a negative int, it simply gets implicitly converted to a huge positive value; in other words, using an unsigned parameter type doesn't help you find errors either at compile time or at runtime.
Indeed unsigned numbers are very useful for certain cases because they are elements of the ring "integers-modulo-N" with N being a power of two. Unsigned ints are useful when you want to use that modulo-n arithmetic, or as bitmasks; they are NOT useful as quantities.
Unfortunately, in C and C++ unsigned was also used to represent non-negative quantities, to be able to use all 16 bits back when integers were that small... at that time, being able to count to 32k or 64k was considered a big difference. I'd classify it basically as a historical accident... you shouldn't try to read logic into it, because there was none.
By the way, in my opinion that was a mistake... if 32k is not enough, then quite soon 64k won't be enough either; abusing the modulo integer just for one extra bit was, in my opinion, too high a price to pay. Of course it would have been reasonable if a proper non-negative type had been present or defined... but the unsigned semantics are just wrong for a non-negative type.
Sometimes you may find people who say that unsigned is good because it "documents" that you only want non-negative values... however, that documentation is of value only to people who don't actually know how unsigned works in C or C++. For me, seeing an unsigned type used for non-negative values simply means that whoever wrote the code didn't understand that part of the language.
If you really understand and want the "wrapping" behavior of unsigned ints, then they're the right choice (for example, I almost always use "unsigned char" when I'm handling bytes); if you're not going to use the wrapping behavior (and that behavior is just going to be a problem for you, as in the case of the difference you showed), then this is a clear indicator that the unsigned type is a poor choice and you should stick with plain ints.
Does this mean that the return type of C++ std::vector<>::size() is a bad choice? Yes... it's a mistake. But if you say so, be prepared to be called names by those who don't understand that "unsigned" is just a name... what counts is the behavior, and that is "modulo-n" behavior (and no one would consider a "modulo-n" type a sensible choice for the size of a container).
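A small illustration of the modulo-2^N behavior described above (the exact printed value assumes a 32-bit unsigned int):
#include <stdio.h>

int main(void)
{
    unsigned int a = 3, b = 5;
    printf("%u\n", a - b);          /* 4294967294: the "difference" wraps around */
    printf("%d\n", (int)(a - b));   /* typically -2 on two's-complement machines */
    return 0;
}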
Bjarne Stroustrup, creator of C++, warns about using unsigned types in his book The C++ Programming Language:
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.
I seem to be in disagreement with most people here, but I find unsigned types quite useful, but not in their raw historic form.
If you consistently stick to the semantics that a type represents for you, then there should be no problem: use size_t (unsigned) for array indices, data offsets, etc., and off_t (signed) for file offsets. Use ptrdiff_t (signed) for differences of pointers. Use uint8_t for small unsigned integers and int8_t for signed ones. And you avoid at least 80% of portability problems.
And don't use int, long, unsigned, or char unless you must. They belong in the history books. (Sometimes you must, e.g. for error returns or bit fields.)
And to come back to your example:
bitsAvailable - mandatoryDataSize >= optionalDataSize
can be easily rewritten as
bitsAvailable >= optionalDataSize + mandatoryDataSize
which doesn't avoid the problem of a potential overflow (assert is your friend) but gets you a bit nearer to the idea of what you want to test, I think.
if (bitsAvailable >= optionalDataSize + mandatoryDataSize) {
// Optional data fits, so add it to the header.
}
Bug-free, so long as mandatoryDataSize + optionalDataSize can't overflow the unsigned integer type -- the naming of these variables leads me to believe this is likely to be the case.
You can't fully avoid unsigned types in portable code, because many typedefs in the standard library are unsigned (most notably size_t), and many functions return those (e.g. std::vector<>::size()).
That said, I generally prefer to stick to signed types wherever possible for the reasons you've outlined. It's not just the case you bring up - in case of mixed signed/unsigned arithmetic, the signed argument is quietly promoted to unsigned.
From the comments on one of Eric Lippert's blog posts (see here):
Jeffrey L. Whitledge
I once developed a system in which negative values made no sense as a parameter, so rather than validating that the parameter values were non-negative, I thought it would be a great idea to just use uint instead. I quickly discovered that whenever I used those values for anything (like calling BCL methods), they had to be converted to signed integers. This meant that I had to validate that the values didn't exceed the signed integer range on the top end, so I gained nothing. Also, every time the code was called, the ints that were being used (often received from BCL functions) had to be converted to uints. It didn't take long before I changed all those uints back to ints and took all that unnecessary casting out. I still have to validate that the numbers are not negative, but the code is much cleaner!
Eric Lippert
Couldn't have said it better myself. You almost never need the range of a uint, and they are not CLS-compliant. The standard way to represent a small integer is with "int", even if there are values in there that are out of range. A good rule of thumb: only use "uint" for situations where you are interoperating with unmanaged code that expects uints, or where the integer in question is clearly used as a set of bits, not a number. Always try to avoid it in public interfaces.
Eric
The situation where (bitsAvailable - mandatoryDataSize) produces an 'unexpected' result when the types are unsigned and bitsAvailable < mandatoryDataSize is a reason that sometimes signed types are used even when the data is expected to never be negative.
I think there's no hard and fast rule - I typically 'default' to using unsigned types for data that has no reason to be negative, but then you have to take care to ensure that arithmetic wrapping doesn't expose bugs.
Then again, if you use signed types, you still have to sometimes consider overflow:
INT_MAX + 1
The key is that you have to take care when performing arithmetic for these kinds of bugs.
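For example, a sketch of checking for signed overflow before it happens (since INT_MAX + 1 itself would already be undefined behavior):
#include <limits.h>
#include <stdbool.h>

bool safe_add(int a, int b, int *result)
{
    /* reject the addition if the mathematical sum falls outside [INT_MIN, INT_MAX] */
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
        return false;
    }
    *result = a + b;
    return true;
}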
No, you should use the type that is right for your application. There is no golden rule. For example, on small microcontrollers it is sometimes faster and more memory-efficient to use 8- or 16-bit variables wherever possible, as that is often the native datapath size, but that is a very special case. I also recommend using stdint.h wherever possible. If you are using Visual Studio, you can find BSD-licensed versions of it.
If there is a possibility of overflow, then assign the values to the next larger data type during the calculation, e.g.:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
    signed __int64 available = bitsAvailable;
    signed __int64 mandatory = mandatoryDataSize;
    signed __int64 optional  = optionalDataSize;

    if ( (mandatory + optional) <= available ) {
        // Optional data fits, so add it to the header.
    }
}
Otherwise, just check the values individually instead of calculating:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
    if ( bitsAvailable < mandatoryDataSize ) {
        return;
    }
    bitsAvailable -= mandatoryDataSize;

    if ( bitsAvailable < optionalDataSize ) {
        return;
    }
    bitsAvailable -= optionalDataSize;

    // Optional data fits, so add it to the header.
}
You'll need to look at the results of the operations you perform on the variables to check if you can get over/underflows - in your case, the result being potentially negative. In that case you are better off using the signed equivalents.
I don't know if it's possible in C, but in this case I would just cast the X - Y expression to an int.
If your numbers should never be less than zero, but have a chance to be < 0, by all means use signed integers and sprinkle assertions or other runtime checks around. If you're actually working with 32-bit (or 64, or 16, depending on your target architecture) values where the most significant bit means something other than "-", you should only use unsigned variables to hold them. It's easier to detect integer overflows where a number that should always be positive is very negative than when it's zero, so if you don't need that bit, go with the signed ones.
Suppose you need to count from 1 to 50000. You can do that with a two-byte unsigned integer, but not with a two-byte signed integer (if space matters that much).

doubt regarding operations on "int" flavors

I have the following doubt regarding "int" flavors (unsigned int, long int, long long int).
When we do some operations (*, /, +, -) between int and its flavors (let's say long int) on 32-bit and 64-bit systems, does an implicit typecast happen for "int"?
For example:
int x;
long long int y = 2000;
x = y;   // the larger type is assigned to the smaller one, so data truncation may happen
I expected the compiler to give a warning for this, but I am not getting any such warning.
Is this due to an implicit typecast happening for "x" here?
I am using gcc with the -Wall option. Will the behavior change between 32-bit and 64-bit systems?
Thanks
Arpit
-Wall does not activate all possible warnings; -Wextra enables further warnings. Anyway, what you are doing is a perfectly "legal" operation, and since the compiler can't always know at compile time the value that could be "truncated", it is fine that it does not warn: the programmer should already be aware that a "large" integer may not fit into a "small" integer, so it is usually left up to the programmer. If you think your program was written without awareness of this, add -Wconversion.
Conversion without an explicit cast operator is perfectly legal in C, but the result may be implementation-defined. In your case, int x; is signed, so if you try to store a value in it that's outside the range of int, the result is implementation-defined (or an implementation-defined signal is raised). On the other hand, if x were declared as unsigned x;, the behavior is well-defined: the conversion is reduction modulo UINT_MAX+1.
As for arithmetic, when you perform arithmetic between integers of different types, the 'smaller' type is promoted to the 'larger' type prior to the arithmetic. The compiler is free to optimize out this promotion of course if it does not affect the results, which leads to idioms like casting a 32bit integer to 64bit before multiplying to get a full 64bit result. Promotion gets a bit confusing and can have unexpected results when signed and unsigned values are mixed. You should look it up if you care to know since it's hard to explain informally.
If you are worried, you can include <stdint.h> and use types with defined lengths, such as uint16_t for a 16-bit unsigned integer.
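The cast-before-multiplying idiom mentioned above looks roughly like this (a sketch using fixed-width types):
#include <stdint.h>

int64_t full_product(int32_t a, int32_t b)
{
    /* a is converted to int64_t first, so b is converted too and the
       multiplication is carried out in 64 bits with no 32-bit overflow */
    return (int64_t)a * b;
}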
Your code is perfectly valid (as already said by others). If you want to program in a portable way, in most cases you should not use the bare C types int, long or unsigned int, but types that say a bit more about what you are planning to do with them.
E.g. for array indices, always use size_t. Regardless of whether you are on a 32- or 64-bit system, this will be the right type. Or if you want the integer of maximal width on whatever platform you happen to land on, use intmax_t or uintmax_t.
See http://gcc.gnu.org/ml/gcc-help/2003-06/msg00086.html -- the code is perfectly valid C/C++.
You might want to look at static analysis tools (sparse, llvm, etc.) to check for this type of truncation.

What is the size of an enum in C?

I'm creating a set of enum values, but I need each enum value to be 64 bits wide. If I recall correctly, an enum is generally the same size as an int; but I thought I read somewhere that (at least in GCC) the compiler can make the enum any width they need to be to hold their values. So, is it possible to have an enum that is 64 bits wide?
Taken from the current C Standard (C99): http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1256.pdf
6.7.2.2 Enumeration specifiers
[...]
Constraints
The expression that defines the value of an enumeration constant shall be an integer
constant expression that has a value representable as an int.
[...]
Each enumerated type shall be compatible with char, a signed integer type, or an
unsigned integer type. The choice of type is implementation-defined, but shall be
capable of representing the values of all the members of the enumeration.
Not that compilers are any good at following the standard, but essentially: if your enum holds anything other than an int, you're in deep "unsupported behavior that may come back to bite you in a year or two" territory.
Update: The latest publicly available draft of the C Standard (C11): http://www.open-std.org/JTC1/SC22/WG14/www/docs/n1570.pdf contains the same clauses. Hence, this answer still holds for C11.
An enum is only guaranteed to be large enough to hold int values. The compiler is free to choose the actual type used based on the enumeration constants defined so it can choose a smaller type if it can represent the values you define. If you need enumeration constants that don't fit into an int you will need to use compiler-specific extensions to do so.
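Since the standard doesn't portably give you 64-bit enumerators, one common workaround (my suggestion, not taken from the answers here) is a fixed-width typedef plus named constants:
#include <stdint.h>

typedef uint64_t flags64_t;   /* stands in for the 64-bit "enum" */

#define FLAG_NONE   UINT64_C(0x0000000000000000)
#define FLAG_FIRST  UINT64_C(0x0000000000000001)
#define FLAG_LAST   UINT64_C(0x8000000000000000)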
While the previous answers are correct, some compilers have options to break the standard and use the smallest type that will contain all values.
Example with GCC (documentation in the GCC Manual):
enum ord {
FIRST = 1,
SECOND,
THIRD
} __attribute__ ((__packed__));
STATIC_ASSERT( sizeof(enum ord) == 1 )
Just set the last value of the enum to a value large enough to make it the size you would like the enum to be; it should then be that size:
enum value{a=0,b,c,d,e,f,g,h,i,j,l,m,n,last=0xFFFFFFFFFFFFFFFF};
We have no control over the size of an enum variable; it depends entirely on the implementation. The compiler simply gives us a way to name integer constants with enum, so an enum follows the size of an integer.
In C, an enum is guaranteed to be the size of an int. There is a compile-time option (-fshort-enums) to make it as short as possible (this is mainly useful when the values are not more than 64K). There is no compile-time option to increase its size to 64 bits.
Consider this code:
enum value{a,b,c,d,e,f,g,h,i,j,l,m,n};
value s;
cout << sizeof(s) << endl;
It will give 4 as output. So no matter how many elements an enum contains, its size is always fixed on a given implementation.
