Define a 64-bit-wide integer in Linux - C

I am trying to define a 64-bit-wide integer in C on Ubuntu 9.10. 9223372036854775808 is 2^63:
long long max=9223372036854775808;
long max=9223372036854775808;
When I compile it, the compiler gives these warnings:
binary.c:79:19: warning: integer constant is so large that it is unsigned
binary.c: In function ‘bitReversal’:
binary.c:79: warning: this decimal constant is unsigned only in ISO C90
binary.c:79: warning: integer constant is too large for ‘long’ type
Is the long type 64 bits wide?
Best Regards,

long long max = 9223372036854775807LL; // note the LL suffix (9223372036854775808 is one past LLONG_MAX)
// long max = 9223372036854775807L;    // even with the L suffix this will not fit a 32-bit long
A long long type is at least 64 bits and a long type is at least 32 bits; the actual width depends on the compiler and target platform.
Use int64_t and int32_t (from <stdint.h>) when you need exact 64-/32-bit widths.
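For example, a minimal sketch using the fixed-width types; it assumes a C99 environment where <stdint.h> and <inttypes.h> are available:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int64_t max = INT64_MAX;            /* 9223372036854775807, i.e. 2^63 - 1 */
    printf("max = %" PRId64 "\n", max); /* PRId64 is the portable format for int64_t */
    return 0;
}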

The problem you are having is that the number you're using (9223372036854775808) is 2^63, and the maximum value that your long long can hold (as a 64-bit signed 2's complement type) is one less than that: 2^63 - 1, or 9223372036854775807 (that's 63 binary 1s in a row).
Constants that are too large for long long (as in this case) are given type unsigned long long, which is why you get the warning "integer constant is so large that it is unsigned".
The long type is at least 32 bits, and the long long type at least 64 bits (both including the sign bit).
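If the goal is simply "the largest value", it is usually cleaner to take it from <limits.h> than to spell out the digits; a small sketch (assumes the usual 64-bit long long, so LLONG_MAX is 2^63 - 1):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    long long max_ll = LLONG_MAX;             /* 2^63 - 1 on a 64-bit long long */
    unsigned long long max_ull = ULLONG_MAX;  /* 2^64 - 1 */
    printf("%lld %llu\n", max_ll, max_ull);
    return 0;
}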

I'm not sure it will fix your problem (the LL solution looks good), but here is a recommendation:
You should use hexadecimal notation to write the maximum value of something.
unsigned char max = 0xFF;
unsigned short max = 0xFFFF;
uint32_t max = 0xFFFFFFFF;
uint64_t max = 0xFFFFFFFFFFFFFFFF;
As you can see, it's readable.
To display the value:
printf("%llu\n", value_of_64b_int); /* %llu for an unsigned 64-bit value, %lld for a signed one */

I think it depends on what platform you use. If you need to know exactly what kind of integer type you are using, you should use the types from stdint.h, if that header is available on your system.

Related

I am trying to store a value in a long long int but it gives the wrong result

I am trying to store "10000000000000000000"(19 zeros) inside long long int data type.
#if __WORDSIZE == 64
typedef long long int intmaximum_t;
#else
__extension__
typedef unsigned long int intmaximum_t;
#endif
const intmaximum_t x = 10000000000000000000;
But it gives the output "-8446744073709551616" (a negative value).
I have a 64-bit machine running Ubuntu. How can I store this value?
The largest possible value for a 64-bit long long int is 9,223,372,036,854,775,807. Your number is bigger than that. (Note that the value you got is exactly 2^64 less than the value you asked for - that's not a coincidence, but a consequence of the value wrapping modulo 2^64 on a 2's complement machine.)
It would fit in an unsigned long long int, but if you need a signed type you'll need to use a large number library for your number, or a 128-bit type such as __int128 if your compiler supports one.
The value 10000000000000000000 is too large to fit in a signed 64-bit integer but will fit in an unsigned 64-bit integer. So when you attempt to assign the value it gets converted in an implementation defined way (typically by just assigning the binary representation directly), and it prints as negative because you are most likely using %d or %ld as your format specifier.
You need to declare your variable as unsigned long long and print it with the %llu format specifier.
unsigned long long x = 10000000000000000000ULL;
printf("x=%llu\n", x);

Can the type difference between constants 32768 and 0x8000 make a difference?

The Standard specifies that hexadecimal constants like 0x8000 (too large to fit in a signed int) are unsigned (just like octal constants), whereas decimal constants like 32768 have type (signed) long. (The exact types assume a 16-bit int and a 32-bit long.) However, in regular C environments both will have the same representation, in binary 1000 0000 0000 0000.
Is a situation possible where this difference really produces a different outcome? In other words, is a situation possible where this difference matters at all?
Yes, it can matter. If your processor has a 16-bit int and a 32-bit long type, 32768 has the type long (since 32767 is the largest positive value fitting in a signed 16-bit int), whereas 0x8000 (since hexadecimal constants are also considered for unsigned int) still fits in a 16-bit unsigned int.
Now consider the following program:
int main(int argc, char *argv[])
{
    volatile long long_dec = ((long)~32768);
    volatile long long_hex = ((long)~0x8000);
    return 0;
}
When 32768 is treated as a long, the bitwise complement inverts 32 bits, giving the representation 0xFFFF7FFF with type long; the cast is superfluous.
When 0x8000 is treated as an unsigned int, the bitwise complement inverts 16 bits, giving the representation 0x7FFF with type unsigned int; the cast then zero-extends it to a long value of 0x00007FFF.
Look at H&S5, section 2.7.1 page 24ff.
It is best to augment the constants with U, UL or L as appropriate.
On a platform with a 32-bit int and a 64-bit long, a and b in the following code will have different values:
int x = 2;
long a = x * 0x80000000; /* multiplication done in unsigned int -> 0 */
long b = x * 2147483648; /* multiplication done in long -> 0x100000000 */
Another example not yet given: compare (with greater-than or less-than operators) -1 to both 32768 and to 0x8000. Or, for that matter, try comparing each of them for equality with an 'int' variable equal to -32768.
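To make that comparison concrete on current hardware, here is a sketch using the 32-bit analogues of those constants; it assumes a platform with a 32-bit int and a 64-bit long (for example, typical x86-64 Linux), where 2147483648 and 0x80000000 play the roles of 32768 and 0x8000:

#include <stdio.h>

int main(void)
{
    /* 2147483648 does not fit in int, so it gets type long (signed);
       -1 is converted to long and the comparison is done in long. */
    printf("-1 < 2147483648 -> %d\n", -1 < 2147483648);  /* prints 1 */

    /* 0x80000000 does not fit in int but does fit in unsigned int;
       -1 is converted to unsigned int (0xFFFFFFFF), so the result flips. */
    printf("-1 < 0x80000000 -> %d\n", -1 < 0x80000000);  /* prints 0 (and -Wextra warns) */

    return 0;
}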
Assuming int is 16 bits and long is 32 bits (which is actually fairly unusual these days; int is more commonly 32 bits):
printf("%ld\n", 32768); // prints "32768"
printf("%ld\n", 0x8000); // has undefined behavior
In most contexts, a numeric expression will be implicitly converted to an appropriate type determined by the context. (That's not always the type you want, though.) This doesn't apply to non-fixed arguments to variadic functions, such as any argument to one of the *printf() functions following the format string.
Another difference: if you add a value to the constant while it has a 16-bit type, the result can exceed the bounds of that type and wrap, whereas while it has a 32-bit long type you can add any number less than 2^16 to it without overflow.

Hexadecimal constant in C is unsigned even though it has an L suffix

I know this is a simple question but I'm confused. I have a fairly typical gcc warning that's usually easy to fix:
warning: comparison between signed and unsigned integer expressions
Whenever I have a hexadecimal constant with the most significant bit set, like 0x80000000L, the compiler interprets it as unsigned. For example, compiling this code with -Wextra will cause the warning (gcc 4.4x, 4.5x):
#include <stdio.h>

int main()
{
    long test = 1;
    long *p = &test;
    if (*p != 0x80000000L) printf("test");
}
I've specifically suffixed the constant as long, so why is this happening?
The answer to "Unsigned hexadecimal constant in C?" is relevant. A hex constant with an L suffix will have the first of the following types that can hold its value:
long
unsigned long
long long
unsigned long long
See section 6.4.4.1 of the C99 draft for details.
On your platform, long is probably 32 bits, so it is not large enough to hold the (positive) constant 0x80000000. So your constant has type unsigned long, which is the next type on the list and is sufficient to hold the value.
On a platform where long was 64 bits, your constant would have type long.
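If you want to see which type your compiler actually picks, C11's _Generic can report it; a small sketch (assumes a C11 compiler, and the strings are just labels for illustration):

#include <stdio.h>

int main(void)
{
    const char *t = _Generic(0x80000000L,
                             long: "long",
                             unsigned long: "unsigned long",
                             long long: "long long",
                             unsigned long long: "unsigned long long",
                             default: "other");
    printf("0x80000000L has type %s (long is %zu bytes here)\n", t, sizeof(long));
    return 0;
}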
Because your compiler uses 32-bit longs (and presumably 32-bit ints as well), and 0x80000000 won't fit in a 32-bit signed integer, the compiler interprets it as unsigned. How to work around this depends on what you're trying to do.
According to the C standard, hex constants that do not fit in the signed candidate types take the corresponding unsigned type.
It's an unsigned long then. I'm guessing the compiler decides that a hex literal like that is most likely desired to be unsigned. Try casting it: (unsigned long)0x80000000L.
Hex constants in C/C++ become unsigned whenever they are too large for the signed candidate types. But you may use an explicit typecast to suppress the warning.
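For completeness, here is a sketch of two common ways to silence the warning in the original snippet, assuming long is 32 bits and the bit pattern 0x80000000 is really what you want (the variable names mirror the question's code):

#include <stdio.h>

int main(void)
{
    /* Option 1: make the variable unsigned so both sides have the same signedness. */
    unsigned long test = 1;
    unsigned long *p = &test;
    if (*p != 0x80000000UL)
        printf("test\n");

    /* Option 2: keep the variable signed and cast the constant; note that
       converting 0x80000000 to a 32-bit long is implementation-defined. */
    long stest = 1;
    if (stest != (long)0x80000000L)
        printf("test\n");

    return 0;
}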

What is the difference between short signed int and signed int?

I was referring to a tutorial on C; it said that the signed int and short signed int ranges are both -32768 to 32767 and both are 2 bytes. Is there any difference? If not, why are two kinds of declarations used?
It's platform specific - all that you can be sure of in this context is that sizeof(int) >= sizeof(short), and that short is at least 16 bits.
The best answer to your question can be found in the ANSI standard for C, section 2.2.4.2 - Numerical Limits. I reproduce the relevant parts of that section here for your convenience:
2.2.4.2 Numerical limits

A conforming implementation shall document all the limits specified in this section, which shall be specified in the headers <limits.h> and <float.h>.

"Sizes of integral types <limits.h>"

The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

CHAR_BIT     8             maximum number of bits for smallest object that is not a bit-field (byte)
SCHAR_MIN    -127          minimum value for an object of type signed char
SCHAR_MAX    +127          maximum value for an object of type signed char
UCHAR_MAX    255           maximum value for an object of type unsigned char
CHAR_MIN     see below     minimum value for an object of type char
CHAR_MAX     see below     maximum value for an object of type char
MB_LEN_MAX   1             maximum number of bytes in a multibyte character, for any supported locale
SHRT_MIN     -32767        minimum value for an object of type short int
SHRT_MAX     +32767        maximum value for an object of type short int
USHRT_MAX    65535         maximum value for an object of type unsigned short int
INT_MIN      -32767        minimum value for an object of type int
INT_MAX      +32767        maximum value for an object of type int
UINT_MAX     65535         maximum value for an object of type unsigned int
LONG_MIN     -2147483647   minimum value for an object of type long int
LONG_MAX     +2147483647   maximum value for an object of type long int
ULONG_MAX    4294967295    maximum value for an object of type unsigned long int
C99 (not so widely implemented at the time) adds the following limits for the long long types:

LLONG_MIN    -9223372036854775807    minimum value for an object of type long long int    // -(2^63 - 1)
LLONG_MAX    +9223372036854775807    maximum value for an object of type long long int    // 2^63 - 1
ULLONG_MAX   18446744073709551615    maximum value for an object of type unsigned long long int    // 2^64 - 1
A couple of other answers have correctly quoted the C standard, which places minimum ranges on the types. However, as you can see, those minimum ranges are identical for short int and int - so the question remains: Why are short int and int distinct? When should I choose one over the other?
The reason that int is provided is to provide a type that is intended to match the "most efficient" integer type on the hardware in question (that still meets the minimum required range). int is what you should use in C as your general purpose small integer type - it should be your default choice.
If you know that you'll need more range than -32767 to 32767, you should instead choose long int or long long int. If you are storing a large number of small integers, such that space efficiency is more important than calculation efficiency, then you can instead choose short (or even signed char, if you know that your values will fit into the -127 to 127 range).
C and C++ only make minimum size guarantees on their objects. There is no exact size guarantee that is made. You cannot rely on type short being exactly 2 bytes, only that it can hold values in the specified range (so it is at least two bytes). Type int is at least as large as short and is often larger. Note that signed int is a long-winded way to say int while signed short int is a long-winded way to say short int which is a long-winded way to say short. With the exception of type char (which some compilers will make unsigned), all the builtin integral types are signed by default. The types short int and long int are longer ways to say short and long, respectively.
A signed int is at least as large as a short signed int. On most modern hardware a short int is 2 bytes (as you saw), and a regular int is 4 bytes. Older architectures generally had a 2-byte int which may have been the cause of your confusion.
There is also a long int which is usually either 4 or 8 bytes, depending on the compiler.
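If you want to see the actual widths on your own platform, a tiny sketch (requires only a C99 compiler for the %zu specifier):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("short: %zu bytes, int: %zu bytes, long: %zu bytes, long long: %zu bytes\n",
           sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
    printf("SHRT_MAX=%d INT_MAX=%d LONG_MAX=%ld\n", SHRT_MAX, INT_MAX, LONG_MAX);
    return 0;
}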
Please read the following explanation for signed char first; then we will talk about signed/unsigned int.

First I want to prepare some background for your question.

The char data type comes in two flavours (both are integral data types):
unsigned char
signed char

Explained as per different books:
char           1 byte   -128 to 127 (i.e. by default signed char)
signed char    1 byte   -128 to 127
unsigned char  1 byte   0 to 255

One more thing: 1 byte = 8 bits (bit 0 to bit 7). In a signed type the most significant bit (bit 7) acts as the sign bit (0 = positive, 1 = negative):
-37 will be represented as 1101 1011 (the most significant bit is 1),
+37 will be represented as 0010 0101 (the most significant bit is 0).

Similarly, for char the top bit is by default treated as a sign bit. Why? Because char mostly holds the ASCII codes of particular characters (e.g. A = 65), and those need only 7 bits.

When you do not need the sign, declaring the type unsigned (unsigned char or unsigned int) gives the value range one extra bit.

Thanks for the question. The same reasoning applies to 2-byte and 4-byte ints, which is why both signed int and unsigned int exist.
It depends on the platform.
int is 32 bits wide on a typical 32-bit system, and it is usually still 32 bits wide on a 64-bit system (I am not sure that this is always the case).
I was referring to a tutorial on C; it said that the signed int and short signed int ranges are -32768 to 32767 and both are 2 bytes.
That's a very old tutorial. The modern C standard is as per Paul R's answer. On a 32 bit architecture, normally:
short int is 16 bits
int is 32 bits
long int is 32 bits
long long int is 64 bits
The size of an int would normally only be 16 bits on a 16-bit machine; 16-bit machines are presumably limited to embedded devices these days.
On a 16-bit machine, sizes may be like this:
short int is 16 bits
int is 16 bits
long int is 32 bits
long long int is 64 bits

Is the type "long long" always 64 bits?

I'm trying to implement George Marsaglia's Complementary Multiply-With-Carry algorithm in C. It seems to work great under Win7 64 bit and Linux 32 bit, but seems to behave strangely under Win 7 32 bit. The random number it returns is 32 bit, but there's a temporary value used internally that's supposed to be 64 bits, and it's declared:
unsigned long long t;
I suspect this might be the cause of the misbehaviour, so my question is:
Is the type "long long" 64 bits? Is it supported in 32 bit Windows?
If your compiler has stdint.h I would suggest using uint64_t instead.
The type long long is guaranteed to be at least 64 bits (although the guarantee is formally in the form of the range of values it must be able to represent).
The following is in §5.2.4.2.1 of the C99 standard (link to draft):
— maximum value for an object of type unsigned long long int
  ULLONG_MAX 18446744073709551615 // 2^64 - 1
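If you would rather fail at build time than misbehave at run time, you can assert the width; a minimal sketch (assumes a C11 compiler for _Static_assert; alternatively just use uint64_t from <stdint.h> and sidestep the question):

#include <stdint.h>
#include <limits.h>

/* Fail the build if unsigned long long is narrower than 64 bits. */
_Static_assert(sizeof(unsigned long long) * CHAR_BIT >= 64,
               "unsigned long long must be at least 64 bits");

uint64_t t;  /* exact-width alternative to 'unsigned long long t;' */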
