Sum of INT_MAX and INT_MAX - c

If I declare two max integers in C:
int a = INT_MAX;
int b = INT_MAX;
and sum them into another int:
int c = a+b;
I know there is an integer overflow here but I am not sure how to handle it.

This causes undefined behavior since you are using signed integers (which cause undefined behavior if they overflow).
You will need to find a way to avoid the overflow, or if possible, switch to unsigned integers (which use wrapping overflow).
One possible solution is to switch to a wider integer type such as long long, so that the sum cannot overflow.
Another possibility is checking for the overflow first:
if (b > INT_MAX - a) {
// Will overflow, do something else
}
Note: I'm assuming here that you don't actually know the exact values of a and b.
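A minimal sketch combining both suggestions. The helper names add_checked and add_wide are hypothetical, and the wider-type variant assumes long long is wider than int (true on mainstream platforms, though not strictly guaranteed); the pre-check variant also covers negative operands:

#include <limits.h>

/* Pre-check variant: only add when the sum fits in an int. */
int add_checked(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return 0;            /* would overflow, caller must handle it */
    *result = a + b;
    return 1;
}

/* Wider-type variant: long long (assumed wider than int) holds INT_MAX + INT_MAX. */
long long add_wide(int a, int b)
{
    return (long long)a + b; /* a is converted before the addition */
}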

For the calculation to be meaningful, you would have to use a type large enough to hold the result. Apart from that, overflow is only a problem for signed int. If you use unsigned types, then you don't get undefined overflow, but well-defined wrap-around.
In this specific case the solution is trivial:
unsigned int c = (unsigned int)a + (unsigned int)b; // 4294967294, roughly 4.29 billion
Otherwise, if you truly wish to know the signed equivalent of the raw binary value, you can do:
int c = (unsigned int)a + (unsigned int)b;
As long as the calculation is carried out on unsigned types there's no danger (and the value will fit in this case - it won't wrap-around). The result of the addition is implicitly converted through assignment to the signed type of the left operand of =. This conversion is implementation-defined, as in the result depends on signed format used. On 2's complement mainstream computers you will very likely get the value -2.
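A tiny check of that claim, assuming 32-bit int and two's complement (the mainstream case the answer describes):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int a = INT_MAX;
    int b = INT_MAX;
    unsigned int u = (unsigned int)a + (unsigned int)b; /* 4294967294 */
    int c = (int)u;          /* implementation-defined; -2 on two's complement */
    printf("%u %d\n", u, c); /* typically prints: 4294967294 -2 */
    return 0;
}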

Related

Can I overflow uint32_t in a temporary result?

Basically what happens on a 32bit system when I do this:
uint32_t test (void)
{
uint32_t myInt;
myInt = ((0xFFFFFFFF * 0xFFFFFFFF) % 256u );
return myInt;
}
Let's assume that int is 32 bits.
0xFFFFFFFF will have type unsigned int. There are special rules that explain this, but because it is a hexadecimal constant and it doesn't fit in int, but does fit in unsigned int, it ends up as unsigned int.
0xFFFFFFFF * 0xFFFFFFFF will first go through the usual arithmetic conversions, but since both sides are unsigned int, nothing happens. The result of the multiplication is 0xfffffffe00000001, which is reduced to unsigned int modulo 2^32, resulting in the value 1 with type unsigned int.
(unsigned int)1 % 256u is equal to 1 and has type unsigned int. Usual arithmetic conversions apply here too, but again, both operands are unsigned int so nothing happens.
The result is converted to uint32_t, but it's already unsigned int which has the same range.
However, let's instead suppose that int is 64 bits.
0xFFFFFFFF will have type int.
0xFFFFFFFF * 0xFFFFFFFF will overflow! This is undefined behavior. At this point we stop trying to figure out what the program does, because it could do anything. Maybe the compiler would decide not to emit code for this function, or something equally absurd.
This would happen in a so-called "ILP64" or "SILP64" architecture. These architectures are rare but they do exist. We can avoid these portability problems by using 0xFFFFFFFFu.
Unsigned integer overflow means that you can try to store a value greater than the range the type can hold, but the value wraps: it is reduced modulo UINT32_MAX+1. That is also what happens in this case, provided you append the U or u suffix to the integer literals. Otherwise, on a platform where 0xFFFFFFFF fits in int, the literals are signed (since you didn't specify anything), and the multiplication then causes signed integer overflow, which is undefined behavior.
Back to the explanation: when you multiply the two values (having made sure they are unsigned), the result wraps modulo UINT32_MAX+1, meaning that if it is bigger than what uint32_t can hold, it is reduced modulo UINT32_MAX+1. The modulo operation with 256u is then applied, and that result is stored in a uint32_t and returned from the function. (Note that if the multiplication wraps, it is first reduced modulo UINT_MAX+1.)
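For reference, a sketch of the portable variant both answers point at, using the u suffix so the constants are unsigned on every architecture:

#include <stdint.h>

uint32_t test(void)
{
    /* With the u suffix the operands are unsigned everywhere: the product
       either wraps modulo 2^32 (32-bit int) or simply fits (64-bit int).
       Either way there is no signed overflow, and the result is 1. */
    uint32_t myInt = (uint32_t)((0xFFFFFFFFu * 0xFFFFFFFFu) % 256u);
    return myInt;
}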

is it safe to subtract between unsigned integers?

The following C code displays the result correctly, -1.
#include <stdio.h>
int main(void)
{
unsigned x = 1;
unsigned y=x-2;
printf("%d", y );
}
But in general, is it always safe to do subtraction involving
unsigned integers?
The reason I ask the question is that I want to do some conditioning
as follows:
unsigned x = 1; // x was defined by someone else as unsigned,
// which I had better not to change.
for (int i=-5; i<5; i++){
if (x+i<0) continue
f(x+i); // f is a function
}
Is it safe to do so?
How are unsigned integers and signed integers different in
representing integers? Thanks!
1: Yes, it is safe to subtract unsigned integers. The definition of arithmetic on unsigned integers includes that if an out-of-range value would be generated, then that value should be adjusted modulo the maximum value for the type, plus one. (This definition is equivalent to truncating high bits).
Your posted code has a bug though: printf("%d", y); causes undefined behaviour because %d expects an int, but you supplied unsigned int. Use %u to correct this.
2: When you write x+i, the i is converted to unsigned. The result of the whole expression is a well-defined unsigned value. Since an unsigned can never be negative, your test will always fail.
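A small illustration, assuming 32-bit unsigned int:

unsigned x = 1;
int i = -5;
/* i is converted to unsigned, so x + i is 1 + 4294967291 = 4294967292,
   not -4; an unsigned value is never negative, so the test is always false. */
if (x + i < 0) {
    /* never reached */
}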
You also need to be careful using relational operators because the same implicit conversion will occur. Before I give you a fix for the code in section 2, what do you want to pass to f when x is UINT_MAX or close to it? What is the prototype of f ?
3: Unsigned integers use a "pure binary" representation.
Signed integers have three options. Two can be considered obsolete; the most common one is two's complement. All options require that a positive signed integer value has the same representation as the equivalent unsigned integer value. In two's complement, a negative signed integer is represented the same as the unsigned integer generated by adding UINT_MAX+1, etc.
If you want to inspect the representation, then do unsigned char *p = (unsigned char *)&x; printf("%02X%02X%02X%02X", p[0], p[1], p[2], p[3]);, depending on how many bytes are needed on your system.
It's always safe to subtract unsigned as in
unsigned x = 1;
unsigned y=x-2;
y will take on the value of -1 mod (UINT_MAX + 1) or UINT_MAX.
Is it always safe to do subtraction, addition, and multiplication involving unsigned integers? Yes - no UB. The answer will always be the expected mathematical result reduced modulo UINT_MAX+1.
But do not do printf("%d", y ); - that is UB. Instead printf("%u", y);
C11 §6.2.5 9 "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
When unsigned and int are used in +, the int is converted to an unsigned. So x+i has an unsigned result and never is that sum < 0. Safe, but now if (x+i<0) continue is pointless. f(x+i); is safe, but need to see f() prototype to best explain what may happen.
Unsigned integers are always 0 to power(2,N)-1 and have well defined "overflow" results. Signed integers are 2's complement, 1's complement, or sign-magnitude and have UB on overflow. Some compilers take advantage of that and assume it never occurs when making optimized code.
Rather than really answering your questions directly, which has already been done, I'll make some broader observations that really go to the heart of your questions.
The first is that using unsigned in loop bounds where there's any chance that a signed value might crop up will eventually bite you. I've done it a bunch of times over 20 years and it has ultimately bitten me every time. I'm now generally opposed to using unsigned for values that will be used for arithmetic (as opposed to being used as bitmasks and such) without an excellent justification. I have seen it cause too many problems when used, usually with the simple and appealing rationale that “in theory, this value is non-negative and I should use the most restrictive type possible”.
I understand that x, in your example, was decided to be unsigned by someone else, and you can't change it, but you want to do something involving x over an interval potentially involving negative numbers.
The “right” way to do this, in my opinion, is first to assess the range of values that x may take. Suppose that the length of an int is 32 bits. Then the length of an unsigned int is the same. If it is guaranteed to be the case that x can never be larger than 2^31-1 (as it often is), then it is safe in principle to cast x to a signed equivalent and use that, i.e. do this:
int y = (int)x;
// Do your stuff with *y*
x = (unsigned)y;
If you have a long that is longer than unsigned, then even if x uses the full unsigned range, you can do this:
long y = (long)x;
// Do your stuff with *y*
x = (unsigned)y;
Now, the problem with either of these approaches is that before assigning back to x (e.g. x=(unsigned)y; in the immediately preceding example), you really must check that y is non-negative. However, these are exactly the cases where working with the unsigned x would have bitten you anyway, so there's no harm at all in something like:
long y = (long)x;
// Do your stuff with *y*
assert( y >= 0L );
x = (unsigned)y;
At least this way, you'll catch the problems and find a solution, rather than having a strange bug that takes hours to find because a loop bound is four billion unexpectedly.
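Applied to the loop from the question, that strategy might look like the following sketch; it assumes long is wider than unsigned int, and f is the function from the question, whose prototype we haven't seen:

long y = (long)x;             /* work in a signed type wide enough for x */
for (int i = -5; i < 5; i++) {
    if (y + i < 0) continue;  /* now a meaningful signed comparison */
    f((unsigned)(y + i));     /* convert back only when calling f */
}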
No, it's not safe.
Integers usually are 4 bytes long, which equals to 32 bits. Their difference in representation is:
As far as signed integers are concerned, the most significant bit is used for the sign, so they can represent values between -2^31 and 2^31 - 1
Unsigned integers don't use any bit for sign, so they represent values from 0 to 2^32 - 1.
Part 2 isn't safe either for the same reason as Part 1. As int and unsigned types represent integers in a different way, in this case where negative values are used in the calculations, you can't know what the result of x + i will be.
No, it's not safe. Trying to represent negative numbers with unsigned ints smells like a bug. Also, you should use %u to print unsigned ints.
If we slightly modify your code to put %u in printf:
#include <stdio.h>
int main(void)
{
unsigned x = 1;
unsigned y=x-2;
printf("%u", y );
}
The number printed is 4294967295
The reason the result looked correct is that C doesn't do any overflow checks and you are printing the value as a signed int (%d). This, however, does not mean it is safe practice. If you print it as it really is (%u) you won't get the correct answer.
An Unsigned integer type should be thought of not as representing a number, but as a member of something called an "abstract algebraic ring", specifically the equivalence class of integers congruent modulo (MAX_VALUE+1). For purposes of examples, I'll assume "unsigned int" is 16 bits for numerical brevity; the principles would be the same with 32 bits, but all the numbers would be bigger.
Without getting too deep into the abstract-algebraic nitty-gritty, when assigning a number to an unsigned type [abstract algebraic ring], zero maps to the ring's additive identity (so adding zero to a value yields that value), and one maps to the ring's multiplicative identity (so multiplying a value by one yields that value). Adding a positive integer N to a value is equivalent to adding the multiplicative identity, N times; adding a negative integer -N, or subtracting a positive integer N, will yield the value which, when added to +N, would yield the original value.
Thus, assigning -1 to a 16-bit unsigned integer yields 65535, precisely because adding 1 to 65535 will yield 0. Likewise -2 yields 65534, etc.
Note that in an abstract algebraic sense, every integer can be uniquely mapped into algebraic rings of the indicated form, and a ring member can be uniquely mapped into a smaller ring whose modulus is a factor of its own [e.g. a 16-bit unsigned integer maps uniquely to one 8-bit unsigned integer], but ring members are not uniquely convertible to larger rings or to integers. Unfortunately, C sometimes pretends that ring members are integers, and implicitly converts them; that can lead to some surprising behavior.
Subtracting a value, signed or unsigned, from an unsigned value which is no smaller than int, and no smaller than the value being subtracted, will yield a result according to the rules of algebraic rings, rather than the rules of integer arithmetic. Testing whether the result of such computation is less than zero will be meaningless, because ring values are never less than zero. If you want to operate on unsigned values as though they are numbers, you must first convert them to a type which can represent numbers (i.e. a signed integer type). If the unsigned type can be outside the range that is representable with the same-sized signed type, it will need to be upcast to a larger type.

Is arithmetic overflow equivalent to modulo operation?

I need to do modulo 256 arithmetic in C. So can I simply do
unsigned char i;
i++;
instead of
int i;
i=(i+1)%256;
No. There is nothing that guarantees that unsigned char has eight bits. Use uint8_t from <stdint.h>, and you'll be perfectly fine. This requires an implementation which supports stdint.h: any C99 compliant compiler does, but older compilers may not provide it.
Note: unsigned arithmetic never overflows, and behaves as "modulo 2^n". Signed arithmetic overflows with undefined behavior.
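A minimal sketch of that suggestion:

#include <stdint.h>

uint8_t i = 255;
i++;   /* i is now 0: uint8_t arithmetic wraps modulo 256 */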
Yes, the behavior of both of your examples is the same. See C99 6.2.5 §9 :
A computation involving unsigned operands can never overflow,
because a result that cannot be represented by the resulting unsigned integer type is
reduced modulo the number that is one greater than the largest value that can be
represented by the resulting type.
unsigned char c = UCHAR_MAX;
c++;
Basically yes, there is no overflow, but not because c is of an unsigned type. There is a hidden promotion of c to int here and an integer conversion from int to unsigned char and it is perfectly defined.
For example,
signed char c = SCHAR_MAX;
c++;
is also not undefined behavior, because it is actually equivalent to:
c = (int) c + 1;
and the conversion from int to signed char is implementation-defined here (see c99, 6.3.1.3p3 on integer conversions). To simplify CHAR_BIT == 8 is assumed.
For more information on the example above, I suggest to read this post:
"The Little C Function From Hell"
http://blog.regehr.org/archives/482
Very probably yes, but the reasons for it in this case are actually fairly complicated.
unsigned char i = 255;
i++;
The i++ is equivalent to i = i + 1.
(Well, almost. i++ yields the value of i before it was incremented, so it's really equivalent to (tmp=i; i = i + 1; tmp). But since the result is discarded in this case, that doesn't raise any additional issues.)
Since unsigned char is a narrow type, an unsigned char operand to the + operator is promoted to int (assuming int can hold all possible values in the range of unsigned char). So if i == 255, and UCHAR_MAX == 255, then the result of the addition is 256, and is of type (signed) int.
The assignment implicitly converts the value 256 from int back to unsigned char. Conversion to an unsigned type is well defined; the result is reduced modulo MAX+1, where MAX is the maximum value of the target unsigned type.
If i were declared as an unsigned int:
unsigned int i = UINT_MAX;
i++;
there would be no type conversion, but the semantics of the + operator for unsigned types also specify reduction modulo MAX+1.
Keep in mind that the value assigned to i is mathematically equivalent to (i+1) % (UCHAR_MAX+1). UCHAR_MAX is usually 255, and is guaranteed to be at least 255, but it can legally be bigger.
There could be an exotic system on which UCHAR_MAX is too big to be stored in a signed int object. This would require UCHAR_MAX > INT_MAX, which means the system would have to have at least 16-bit bytes. On such a system, the promotion would be from unsigned char to unsigned int. The final result would be the same. You're not likely to encounter such a system. I think there are C implementations for some DSPs that have bytes bigger than 8 bits. The number of bits in a byte is specified by CHAR_BIT, defined in <limits.h>.
CHAR_BIT > 8 does not necessarily imply UCHAR_MAX > INT_MAX. For example, you could have CHAR_BIT == 16 and sizeof (int) == 2 (i.e., 16-bit bytes and 32-bit ints).
There's another alternative that hasn't been mentioned, if you don't want to use another data type.
unsigned int i;
// ...
i = (i+1) & 0xFF; // 0xFF == 255
This works because the modulo element == 2^n, meaning the range will be [0, 2^n-1] and thus a bitmask will easily keep the value within your desired range. It's possible this method would not be much or any less efficient than the unsigned char/uint8_t version, either, depending on what magic your compiler does behind the scenes and how the targeted system handles non-word loads (for example, some RISC architectures require additional operations to load non-word-size values). This also assumes that your compiler won't detect the usage of power-of-two modulo arithmetic on unsigned values and substitute a bitmask for you, of course, as in cases like that the modulo usage would have greater semantic value (though using that as the basis for your decision is not exactly portable, of course).
An advantage of this method is that you can use it for powers of two that are not also the size of a data type, e.g.
i = (i+1) & 0x1FF; // i %= 512
i = (i+1) & 0x3FF; // i %= 1024
// etc.
This should work fine because it should just overflow back to 0. As was pointed out in a comment on a different answer, you should only do this when the value is unsigned, as you may get undefined behavior with a signed value.
It is probably best to leave this using modulo, however, because the code will be better understood by other people maintaining the code, and a smart compiler may be doing this optimization anyway, which may make it pointless in the first place. Besides, the performance difference will probably be so small that it wouldn't matter in the first place.
It will work if the number of bits used to represent the value equals the number of bits in the binary (unsigned) representation of the divisor (100000000) minus one, which in this case is 9 - 1 = 8 bits (a char).

Two's complement addition overflow in C

I saw some buggy code in C which was used to check whether an addition results in overflow or not. It works fine with char, but gives an incorrect answer when the arguments are int, and I couldn't figure out why.
Here's the code with short arguments.
short add_ok( short x, short y ){
short sum = x+y;
return (sum-x==y) && (sum-y==x);
}
This version works fine; problems arise when you change the arguments to int (you can check it with INT_MAX).
Can you see what's wrong in here ?
Because in 2s complement, the integers can be arranged into a circle (in the sense of modulo arithmetic). Adding y and then subtracting y always gets you back where you started (undefined behaviour notwithstanding).
In your code, the addition does not overflow unless int is the same size as short. Due to default promotions, x+y is performed on the values of x and y promoted to int, and then the result is truncated to short in an implementation-defined manner.
Why not do simply: return x+y<=SHRT_MAX && x+y>=SHRT_MIN;
In the C programming language, signed integers, when converted to smaller signed integers, say char (for the sake of simplicity), are converted in an implementation-defined manner. Even though many systems and programmers assume wrap-around overflow, it is not mandated by the standard. So what is wrap-around overflow?
Wrap-around overflow in two's complement systems works like this: when a value can no longer be represented in the current type, it wraps around past the highest or lowest number that can be represented. What does this mean? Take a look.
In signed char, the highest value that can be represented is 127 and the lowest is -128. What happens when we do "char i = 128" is that the value stored in i becomes -128. Because the value was larger than the signed type can hold, it wrapped around to the lowest value; and if it were "char i = 129", then i would contain -127. Can you see it? Whenever the value passes one end of the range, it wraps around to the other end. Vice versa, if "char i = -129", then i will contain 127, and if it is "char i = -130", it will contain 126, because it passed the minimum and wrapped around to the highest value.
(highest) 127, 126, 125, ... , -126, -127, -128 (lowest)
If the value is very large, it keeps wrapping around until it reaches a value that can be represented in its range.
UPDATE: the reason why int doesn't work, as opposed to char and short, is that when the two numbers are added there is always a possibility of overflow (whether they are int, short, or char, keeping integral promotion in mind), but because short and char are smaller than int and are promoted to int in expressions, they are represented again without truncation in this line:
return (sum-x==y) && (sum-y==x);
So any overflow is detected, as explained below in detail, but with int there is no promotion to a wider type, so overflow happens. For instance, if I do INT_MAX+1, the result is INT_MIN, and if I test for overflow with INT_MIN-1 == INT_MAX, the result is TRUE! This is because short and char get promoted to int, evaluated, and then truncated (overflowed). However, int overflows first and is then evaluated, because it is not promoted to a larger size.
Think of the char type without promotion, and try to create overflows and check them using the illustration above. You will find that adding or subtracting values that cause the overflow returns you to where you were. However, this is not what happens in C, because char and short are promoted to int, so the overflow is detected; this is not true for int, because it is not promoted to a larger size.
END OF UPDATE
For your question, I checked your code in MinGW and on Ubuntu 12.04, and it seems to work fine. I found later that the code actually works on systems where short is smaller than int, as long as the values don't exceed the int range. This line:
return (sum-x==y) && (sum-y==x);
is true, because "sum-x" and "y" are evaluated as int, so no wrap-around happens there, whereas it did happen in the previous line (when assigned):
short sum = x+y;
Here is a test. If I entered 32767 for the first and 2 for the second, then when:
short sum = x+y;
sum will contain -32767, because of the wrap-around. However, when:
return (sum-x==y) && (sum-y==x);
"sum-x" (-32767 - 32767) would only be equal to y (2) (the buggy case) if wrap-around occurred, but because of integral promotion it never happens that way: "sum-x" becomes -65534, which is not equal to y, and this leads to a correct detection.
Here is the code I used:
#include <stdio.h>
short add_ok( short x, short y ){
short sum = x+y;
return (sum-x==y) && (sum-y==x);
}
int main(void) {
short i, ii;
scanf("%hd %hd", &i, &ii);
getchar();
printf("%hd", add_ok(i, ii));
return 0;
}
You need to provide the architecture you are working on and the experimental values you tested with, because not everyone sees what you describe, and because of the implementation-defined nature of your question.
Reference: C99 6.3.1.3 and the GNU C Manual.
The compiler probably just replaces all calls to this expression with 1 because it's true in every case. The optimizing routine will perform copy propagation on sum and get
return (y==y) && (x==x);
and then:
return 1;
It's true in every case because signed integer overflow is undefined behavior- hence, the compiler is free to guarantee that x+y-y == x and y+x-x == y.
If this were an unsigned operation it would fail similarly: since unsigned overflow is just a modulo operation, it is fairly easy to prove that
((x + y) mod (USHRT_MAX+1) - y) mod (USHRT_MAX+1) == x
and similarly for the reverse case.
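For completeness, a sketch of a check that does work for int: do the arithmetic in a wider type so the sum itself cannot overflow, then compare against the int range. The name add_ok_int is hypothetical, and the approach assumes long long is wider than int (true on mainstream platforms):

#include <limits.h>

int add_ok_int(int x, int y)
{
    long long sum = (long long)x + y;    /* exact; cannot overflow long long */
    return sum >= INT_MIN && sum <= INT_MAX;
}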

How to cast or convert an unsigned int to int in C?

My apologies if the question seems weird. I'm debugging my code and this seems to be the problem, but I'm not sure.
Thanks!
It depends on what you want the behaviour to be. An int cannot hold many of the values that an unsigned int can.
You can cast as usual:
int signedInt = (int) myUnsigned;
but this will cause problems if the unsigned value is past the max int can hold. This means half of the possible unsigned values will result in erroneous behaviour unless you specifically watch out for it.
You should probably reexamine how you store values in the first place if you're having to convert for no good reason.
EDIT: As mentioned by ProdigySim in the comments, the maximum value is platform dependent. But you can access it with INT_MAX and UINT_MAX.
For the usual 4-byte types:
4 bytes = (4*8) bits = 32 bits
If all 32 bits are used, as in unsigned, the maximum value will be 2^32 - 1, or 4,294,967,295.
A signed int effectively sacrifices one bit for the sign, so the maximum value will be 2^31 - 1, or 2,147,483,647. Note that this is half of the other value.
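For instance (a sketch; the exact limits are platform-dependent, which is why <limits.h> provides them as macros):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("INT_MAX  = %d\n", INT_MAX);   /* typically 2147483647 */
    printf("UINT_MAX = %u\n", UINT_MAX);  /* typically 4294967295 */
    return 0;
}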
An unsigned int can be converted to signed (or vice-versa) by a simple cast, as shown below:
unsigned int z;
int y=5;
z= (unsigned int)y;
Though not targeted at the question, you might like to read the following links:
signed to unsigned conversion in C - is it always safe?
performance of unsigned vs signed integers
Unsigned and signed values in C
What type-conversions are happening?
IMHO this question is an evergreen. As stated in various answers, the assignment of an unsigned value that is not in the range [0,INT_MAX] is implementation defined and might even raise a signal. If the unsigned value is considered to be a two's complement representation of a signed number, the probably most portable way is IMHO the way shown in the following code snippet:
#include <limits.h>
unsigned int u;
int i;
if (u <= (unsigned int)INT_MAX)
i = (int)u; /*(1)*/
else if (u >= (unsigned int)INT_MIN)
i = -(int)~u - 1; /*(2)*/
else
i = INT_MIN; /*(3)*/
Branch (1) is obvious and cannot invoke overflow or traps, since it
is value-preserving.
Branch (2) goes through some pains to avoid signed integer overflow by taking the one's complement of the value with bitwise NOT, casting it to 'int' (which cannot overflow now), negating the value, and subtracting one, which also cannot overflow here.
Branch (3) provides the poison we have to take on one's complement or
sign/magnitude targets, because the signed integer representation
range is smaller than the two's complement representation range.
This is likely to boil down to a simple move on a two's complement target; at least I've observed such with GCC and CLANG. Also branch (3) is unreachable on such a target -- if one wants to limit the execution to two's complement targets, the code could be condensed to
#include <limits.h>
unsigned int u;
int i;
if (u <= (unsigned int)INT_MAX)
i = (int)u; /*(1)*/
else
i = -(int)~u - 1; /*(2)*/
The recipe works with any signed/unsigned type pair, and the code is best put into a macro or inline function so the compiler/optimizer can sort it out. (In which case rewriting the recipe with a ternary operator is helpful. But it's less readable and therefore not a good way to explain the strategy.)
And yes, some of the casts to 'unsigned int' are redundant, but they might help the casual reader, and some compilers issue warnings on signed/unsigned compares, because the implicit cast causes some non-intuitive behavior by language design.
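As an example of that suggestion, the recipe might be wrapped into an inline function using the ternary form (a sketch only; the name u_to_i is hypothetical):

#include <limits.h>

static inline int u_to_i(unsigned int u)
{
    return (u <= (unsigned int)INT_MAX) ? (int)u          /* (1) */
         : (u >= (unsigned int)INT_MIN) ? -(int)~u - 1    /* (2) */
         : INT_MIN;                                       /* (3) */
}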
If you have a variable unsigned int x;, you can convert it to an int using (int)x.
It's as simple as this:
unsigned int foo;
int bar = 10;
foo = (unsigned int)bar;
Or vice versa...
If an unsigned int and a (signed) int are used in the same expression, the signed int gets implicitly converted to unsigned. This is a rather dangerous feature of the C language, and one you therefore need to be aware of. It may or may not be the cause of your bug. If you want a more detailed answer, you'll have to post some code.
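A classic illustration of that implicit conversion:

unsigned int u = 1;
int i = -1;
if (i < u) {
    /* never reached: i is converted to unsigned and becomes UINT_MAX,
       so the comparison is UINT_MAX < 1, which is false */
}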
Some explanation from C++ Primer, 5th edition, page 35:
If we assign an out-of-range value to an object of unsigned type, the result is the remainder of the value modulo the number of values the target type can hold.
For example, an 8-bit unsigned char can hold values from 0 through 255, inclusive. If we assign a value outside the range, the compiler assigns the remainder of that value modulo 256.
unsigned char c = -1; // assuming 8-bit chars, c has value 255
If we assign an out-of-range value to an object of signed type, the result is undefined. The program might appear to work, it might crash, or it might produce garbage values.
Page 160:
If any operand is an unsigned type, the type to which the operands are converted depends on the relative sizes of the integral types on the machine.
...
When the signedness differs and the type of the unsigned operand is the same as or larger than that of the signed operand, the signed operand is converted to unsigned.
The remaining case is when the signed operand has a larger type than the unsigned operand. In this case, the result is machine dependent. If all values in the unsigned type fit in the large type, then the unsigned operand is converted to the signed type. If the values don't fit, then the signed operand is converted to the unsigned type.
For example, if the operands are long and unsigned int, and int and long have the same size, the long will be converted to unsigned int. If the long type has more bits, then the unsigned int will be converted to long.
I found reading this book is very helpful.

Resources