Suppose, I'm trying to subtract 2 unsigned integers:
247 = 1111 0111
135 = 1000 0111
If we subtract these 2 binary numbers, we get 0111 0000.
Is this an underflow, since we only need 7 bits now? Or how does that work?
Underflow in unsigned subtraction c = a - b occurs whenever b is larger than a.
However, that's somewhat of a circular definition, because the way many kinds of machines perform the a < b comparison is by subtracting the operands using wraparound arithmetic, and then detecting the overflow from the two operands and the result.
Note also that in C we don't speak about "overflow", because there is no error condition: C unsigned integers provide that wraparound arithmetic that is commonly found in hardware.
So, given that we have the wraparound arithmetic, we can detect whether wraparound (or overflow, depending on point of view) has taken place in a subtraction.
What we need is the most significant bits from a, b and c. Let's call them A, B and C. From these, the overflow V is calculated like this:
A B C | V
------+--
0 0 0 | 0
0 0 1 | 1
0 1 0 | 1
0 1 1 | 1
1 0 0 | 0
1 0 1 | 0
1 1 0 | 0
1 1 1 | 1
This simplifies to
A'B + A'C + BC
In other words, overflow in the unsigned subtraction c = a - b happens whenever:
the msb of a is 0 and that of b is 1;
or the msb of a is 0 and that of c is 1;
or the msb of b is 1 and that of c is also 1.
Subtracting 247 - 135 = 112 is clearly not overflow, since 247 is larger than 135. Applying the rules above, A = 1, B = 1 and C = 0. The 1 1 0 row of the table has a 0 in the V column: no overflow.
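As a small sketch (my addition, not part of the original answer), the same rule can be checked in C for 8-bit operands; sub_overflow_from_msbs is a hypothetical helper name:

#include <stdio.h>
#include <stdint.h>

/* Evaluate V = A'B + A'C + BC from the most significant bits of
   a, b and c = a - b (8-bit example). */
static int sub_overflow_from_msbs(uint8_t a, uint8_t b, uint8_t c)
{
    int A = a >> 7, B = b >> 7, C = c >> 7;
    return (!A && B) || (!A && C) || (B && C);
}

int main(void)
{
    uint8_t a = 247, b = 135, c = a - b;
    printf("%u - %u = %u, overflow = %d\n", a, b, c,
           sub_overflow_from_msbs(a, b, c)); /* prints overflow = 0 */
    return 0;
}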
Generally, “underflow” means the ideal mathematical result of a calculation is below what the type can represent. If 7 is subtracted from 5 in unsigned arithmetic, the ideal mathematical result would be −2, but an unsigned type cannot represent −2, so the operation underflows. Or, in an eight-bit signed type that can represent numbers from −128 to +127, subtracting 100 from −100 would ideally produce −200, but this cannot be represented in the type, so the operation underflows.
In C, unsigned arithmetic is said not to underflow or overflow because the C standard defines the operations to be performed using modulo arithmetic instead of real-number arithmetic. For example, with 32-bit unsigned arithmetic, subtracting 7 from 5 would produce 4,294,967,294 (hexadecimal FFFFFFFE), because it has wrapped modulo 2^32 = 4,294,967,296. People may nonetheless use the terms “underflow” or “overflow” when discussing these operations, intending to refer to the mathematical issues rather than the defined C behavior.
In other words, for whatever type you are using for arithmetic there is some lower limit L and some upper limit U that the type can represent. If the ideal mathematical result of an operation is less than L, the operation underflows. If the ideal mathematical result of an operation is greater than U, the operation overflows. “Underflow” and “overflow” mean the operation has gone out of the bounds of the type. “Overflow” may also be used to refer to any exceeding of the bounds of the type, including in the low direction.
It does not mean that fewer bits are needed to represent the result. When 10000111₂ is subtracted from 11110111₂, the result, 01110000₂ = 1110000₂, is within bounds, so there is no overflow or underflow. The fact that it needs fewer bits to represent is irrelevant.
(Note: For integer arithmetic, “underflow” or “overflow” is defined relative to the absolute bounds L and U. For floating-point arithmetic, these terms have somewhat different meanings. They may be defined relative to the magnitude of the result, neglecting the sign, and they are defined relative to the finite non-zero range of the format. A floating-point format may be able to represent 0, then various finite non-zero numbers, then infinity. Certain results between 0 and the smallest non-zero number the format can represent are said to underflow even though they are technically inside the range of representable numbers, which is from 0 to infinity in magnitude. Similarly, certain results above the greatest representable finite number are said to overflow even though they are inside the representable range, since they are less than infinity.)
Long story short, this is what happens when you have:
#include <stdio.h>

int main(void)
{
    unsigned char n = 255; /* highest possible value for an unsigned char */
    n = n + 1; /* now n "overflows" (although the terminology is not strictly correct) to 0 */
    printf("overflow: 255 + 1 = %u\n", n);
    n = n - 1; /* n now "underflows" from 0 back to 255 */
    printf("underflow: 0 - 1 = %u\n", n);
    n *= 2; /* n will now be (255 * 2) % 256 = 254; when the result is too
               high, modulo 2 to the power of 8 is used, since unsigned char
               is an 8-bit variable */
    printf("large overflow: 255 * 2 = %u\n", n);
    n = n * (-2) + 100; /* n should now be -408, which is 104 as an unsigned char
                           (logic: 408 % 256 = 152; 256 - 152 = 104) */
    printf("large underflow: 254 * (-2) + 100 = %u\n", n);
    return 0;
}
The result of that is (compiled with gcc 11.1, flags -Wall -Wextra -std=c99):
overflow: 255 + 1 = 0
underflow: 0 - 1 = 255
large overflow: 255 * 2 = 254
large underflow: 254 * (-2) + 100 = 104
Now the scientific version: The comments above represent just a mathematical model of what is going on. To better understand what is actually happening, the following rules apply:
Integer promotion:
Integer types smaller than int are promoted when an operation is
performed on them. If all values of the original type can be
represented as an int, the value of the smaller type is converted to
an int; otherwise, it is converted to an unsigned int.
So what actually happens in memory when the computer does n = 255; n = n + 1; is this:
First, the right side is evaluated as an int (signed), because all values of an unsigned char can be represented in a signed int, per the integer promotion rule. So the right side of the expression becomes, in binary: 0b00000000000000000000000011111111 + 0b00000000000000000000000000000001 = 0b00000000000000000000000100000000 (a 32-bit int).
Truncation
The 32-bit int loses the most significant 24 bits when being assigned
back to an 8-bit number.
So, when 0b00000000000000000000000100000000 is assigned to variable n, which is an unsigned char, the 32-bit value is truncated to an 8-bit value (only the right-most 8 bits are copied) => n becomes 0b00000000.
The same thing happens for each operation: the expression on the right side evaluates to a signed int, then it is truncated to 8 bits.
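A tiny experiment (my own sketch, not from the answer) makes both rules visible: sizeof shows that n + 1 has the size of int, and the assignment back shows the truncation:

#include <stdio.h>

int main(void)
{
    unsigned char n = 255;
    /* Integer promotion: n + 1 is computed as int, so no wraparound here. */
    printf("sizeof n = %zu, sizeof (n + 1) = %zu\n", sizeof n, sizeof(n + 1));
    printf("n + 1 as an int: %d\n", n + 1); /* 256 */
    n = n + 1; /* truncation: only the low 8 bits are kept */
    printf("after assignment back to n: %u\n", n); /* 0 */
    return 0;
}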
Related
I was reading this question: Why is the sum of two large integers a negative number in the C language?. Then I tried the following code:
#include <stdio.h>

int main() {
    unsigned int x, y, s;
    x = 4294967295;
    y = 4294967295;
    s = x + y;
    printf("%u", s);
    return 0;
}
Output : 4294967294
1 ) How does it calculate sum (s)? I am a bit confused by the explanation given in the link.
When I increase the values of x and y beyond the range of unsigned int, the result always seems to stay within range; in fact, it seems like the result keeps decreasing. It does give the following warning.
sample.c:7:9: warning: large integer implicitly truncated to unsigned type [-Woverflow]
2) Can I brute-force this program so that whenever the values of x and y exceed the unsigned int range, the program throws an error?
The C standard has well-defined behavior for unsigned integer overflow. When this happens, the mathematical result is reduced modulo the maximum representable value + 1. In layman's terms, this means that the values wrap around.
In the case of adding 4294967295 and 4294967295, this wraparound behavior results in 4294967294.
Throwing an error would be in violation of the standard.
1 ) How does it calculate sum (s)?
See @dbush's good answer.
2) Can I brute-force this program so that whenever the values of x and y exceed the unsigned int range, the program throws an error?
Code can detect the mathematical overflow easily with unsigned math: overflow has occurred if the sum is less than either operand of the addition. Testing against only one of x or y is sufficient.
unsigned int x,y,s;
x = 4294967295;
y = 4294967295;
s = x+y;
printf("%u\n",s);
if (s < x) puts("Math overflow");
// or
if (s < y) puts("Math overflow");
return 0;
For signed int tests, see Test if arithmetic operation will cause undefined behavior
In your case unsigned int is 32 bits; 4294967295 + 4294967295 == 8589934590, which has a 33-bit binary value:
1 1111 1111 1111 1111 1111 1111 1111 1110
^
Carry bit
The carry bit is lost because the representation has only 32 bits, and the resulting value:
1111 1111 1111 1111 1111 1111 1111 1110 = 4294967294 decimal
You must either detect an overflow before it happens, or store the result in a larger type and test its value.
if( UINT_MAX - x < y )
{
puts("Math overflow") ;
}
else
{
s = x + y ;
printf( "%u\n", s ) ;
}
Or:
unsigned long long s = (unsigned long long)x + y ; /* widen an operand first, or x + y wraps in unsigned int */
if( s > UINT_MAX )
{
puts("Math overflow") ;
}
else
{
printf( "%u\n", (unsigned int)s ) ;
}
According to kias.dyndns.org:
Most modern computers store memory in units of 8 bits, called a "byte"
(also called an "octet"). Arithmetic in such computers can be done in
bytes, but is more often done in larger units called "(short)
integers" (16 bits), "long integers" (32 bits) or "double integers"
(64 bits). Short integers can be used to store numbers between 0 and
2^16 - 1, or 65,535. Long integers can be used to store numbers between
0 and 2^32 - 1, or 4,294,967,295, and double integers can be used to
store numbers between 0 and 2^64 - 1, or 18,446,744,073,709,551,615.
(Check these!)
[...]
When a computer performs an unsigned integer arithmetic operation,
there are three possible problems which can occur:
if the result is too large to fit into the number of bits assigned to
it, an "overflow" is said to have occurred. For example if the result
of an operation using 16 bit integers is larger than 65,535, an
overflow results.
in the division of two integers, if the result is not itself an
integer, a "truncation" is said to have occurred: 10 divided by 3 is
truncated to 3, and the extra 1/3 is lost. This is not a problem, of
course, if the programmer's intention was to ignore the remainder!
any division by zero is an error, since division by zero is not
possible in the context of arithmetic.
[Original emphasis removed; emphasis added.]
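For illustration (my own sketch, not part of the quoted text), the truncation and division-by-zero cases look like this in C:

#include <stdio.h>

int main(void)
{
    unsigned int a = 10, b = 3, d = 0;
    /* Integer division truncates: the extra 1/3 is lost. */
    printf("10 / 3 = %u, remainder = %u\n", a / b, a % b); /* 3 and 1 */
    /* Division by zero is an error; guard against it explicitly. */
    if (d != 0)
        printf("10 / d = %u\n", a / d);
    else
        printf("division by zero avoided\n");
    return 0;
}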
This is my implementation to detect if an unsigned int overflow has occurred when trying to add two numbers.
The max value of unsigned int (UINT_MAX) on my system is 4294967295.
int check_addition_overflow(unsigned int a, unsigned int b) {
    if (a > 0 && b > (UINT_MAX - a)) {
        printf("overflow has occurred\n");
    }
    return 0;
}
This seems to work with the values I've tried.
Any rogue cases? What do you think are the pros and cons?
You could use
if((a + b) < a)
The point is that if a + b overflows, the result wraps around and must be lower than a.
Consider the case with hypothetical bound range of 0 -> 9 (overflows at 10):
b can be 9 at the most. For any value a such that a + b >= 10, (a + 9) % 10 < a.
For any values a, b such that a + b < 10, since b is not negative, a + b >= a.
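As an aside (not part of the original answer): on GCC and Clang you can also let the compiler do this check with the type-generic builtin __builtin_add_overflow, which performs the addition and returns true if it wrapped:

#include <stdio.h>

int main(void)
{
    unsigned int x = 4294967295u, y = 4294967295u, s;
    if (__builtin_add_overflow(x, y, &s)) /* GCC/Clang extension */
        printf("overflow, wrapped result = %u\n", s);
    else
        printf("no overflow, sum = %u\n", s);
    return 0;
}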
I believe the OP was referring to carry-out, not overflow. Overflow occurs when the addition/subtraction of two signed numbers doesn't fit into the type's bit width minus one (excluding the sign bit). For example, if an integer type has 32 bits, then
adding 2147483647 (0x7FFFFFFF) and 1 gives us -2147483648 (0x80000000).
So, the result fits into 32 bits and there is no carry-out. The true result should be 2147483648, but this doesn't fit into 31 bits. The CPU has no idea of signed/unsigned values, so it simply adds the bits together, where 0x7FFFFFFF + 1 = 0x80000000. The carry out of bit 30 was added to bit 31 (1 + 0 = 1), which is actually the sign bit, changing the result from + to -.
Since the sign changed, the CPU sets the overflow flag to 1 and the carry flag to 0.
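To make the carry/overflow distinction concrete, here is a small sketch of mine that recomputes both flags for a 32-bit addition in plain C (assuming two's complement):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t a = 0x7FFFFFFF, b = 1;
    uint32_t r = a + b;   /* raw binary addition */
    int carry    = r < a; /* carry-out: unsigned wraparound */
    int overflow = (~(a ^ b) & (a ^ r)) >> 31; /* signed overflow: operands agree
                                                  in sign, result differs */
    printf("0x%08X + 0x%08X = 0x%08X, carry = %d, overflow = %d\n",
           (unsigned)a, (unsigned)b, (unsigned)r, carry, overflow);
    /* prints carry = 0, overflow = 1, matching the answer above */
    return 0;
}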
Looking at round-trip conversion in C from unsigned to float to unsigned, I was a bit surprised to see that an unsigned value like:
0x80000080
is rounded down to (float)0x80000000 instead of up to (float)0x80000100. Note that since we have 1+23 effective mantissa bits available in a float, we can exactly represent any unsigned value in this range whose lowest 8 bits are clear. Yes, both of these rounding candidates are 128 away from 0x80000080, so it could be argued that this choice is arbitrary.
However, consider the rounding of the full range of values in the 256 bit region starting at 0x80000000
#include <stdio.h>
int main()
{
    unsigned i ;
    for ( i = 0 ; i < 256 ; i++ )
    {
        unsigned v = 0x80000000 + i ;
        int roundUpDiff = 256 - i ;
        float f = (float)v ;
        unsigned r = (unsigned)f ;
        printf( "0x%08X = 0x80000000 + %d = 0x80000100 - %d -> 0x%08X\n", v, i, roundUpDiff, r ) ;
    }
    return 0 ;
}
A subset of the output from this is:
0x8000007C = 0x80000000 + 124 = 0x80000100 - 132 -> 0x80000000
0x8000007D = 0x80000000 + 125 = 0x80000100 - 131 -> 0x80000000
0x8000007E = 0x80000000 + 126 = 0x80000100 - 130 -> 0x80000000
0x8000007F = 0x80000000 + 127 = 0x80000100 - 129 -> 0x80000000
0x80000080 = 0x80000000 + 128 = 0x80000100 - 128 -> 0x80000000
0x80000081 = 0x80000000 + 129 = 0x80000100 - 127 -> 0x80000100
0x80000082 = 0x80000000 + 130 = 0x80000100 - 126 -> 0x80000100
0x80000083 = 0x80000000 + 131 = 0x80000100 - 125 -> 0x80000100
If the direction of the rounding choice for all values is counted, we see that there is rounding down of all the values in the 0x80000000-0x80000080 range (i.e. 129 of the 256 values rounded down), and rounding up of all the values in the range 0x80000081-0x800000FF (i.e. 127 of the 256 values rounded up).
Using a decimal rounding analogy, if we were rounding to the nearest ten, this seems like a decision to round values:
9,8,7,6
up towards ten, but to round the digits:
5,4,3,2,1,0
down towards zero?
What's the motivation for a rounding mode like this (I presume it's the default rounding mode since I've not explicitly specified otherwise)?
Typical FP rounding mode (which can be controlled) is round to nearest, ties to even.
Integer rounding is toward 0 for compliant C compilers. Also see @Pascal Cuoq's comment.
[Edit] First post was signed. Changed to unsigned per OP.
Example: uint32_t to float to uint32_t
8000007F 0x1.000000p+31 80000000 Nearer to lower value, round down
80000080 0x1.000000p+31 80000000 Tie, round down as it's "even"
80000081 0x1.000002p+31 80000100 Nearer to higher value, round up
8000017F 0x1.000002p+31 80000100 Nearer to lower value, round down
80000180 0x1.000004p+31 80000200 Tie, round up as it's "even"
80000181 0x1.000004p+31 80000200 Nearer to higher value, round up
"Even" in this context means of the 2 choices to rounded to, choose the one with the least significant bit of the float set to 0.
Ref
The C11 draft Annex F.3 (normative, IEC 60559 floating-point arithmetic) says
— The conversions from integer to floating types provide the IEC 60559 conversions from integer to floating point.
— The conversions from floating to integer types provide IEC 60559-like conversions but always round toward zero.
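The table above can be reproduced with a short program (my sketch); %a prints the converted float in the same hexadecimal-significand notation used above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t tests[] = { 0x8000007F, 0x80000080, 0x80000081,
                         0x8000017F, 0x80000180, 0x80000181 };
    size_t i;
    for (i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        float f = (float)tests[i];   /* round to nearest, ties to even */
        uint32_t back = (uint32_t)f; /* round toward zero (exact here) */
        printf("%08X %a %08X\n", (unsigned)tests[i], f, (unsigned)back);
    }
    return 0;
}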
Due to the way conversions and operations are defined in C, it seems to rarely matter whether you use a signed or an unsigned variable:
uint8_t u; int8_t i;
u = -3; i = -3;
u *= 2; i *= 2;
u += 15; i += 15;
u >>= 2; i >>= 2;
printf("%u",u); // -> 2
printf("%u",i); // -> 2
So, is there a set of rules to tell under which conditions the signedness of a variable really makes a difference?
It matters in these contexts (each is demonstrated in the sketch after this list):
division and modulo: -2/2 == -1 but -2u/2 == UINT_MAX/2; -3 % 4 == -3 but -3u % 4 == 1
shifts. For negative signed values, the results of >> and << are implementation-defined or undefined, respectively. For unsigned values, they are always defined.
relationals -2 < 0, -2u > 0
overflows. x+1 > x may be assumed by the compiler to be always true iff x has signed type.
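A short program (my sketch) exercises each of the contexts listed above:

#include <stdio.h>

int main(void)
{
    printf("%d %u\n", -2 / 2, -2u / 2);      /* division: -1 vs UINT_MAX/2 */
    printf("%d %u\n", -3 % 4, -3u % 4);      /* modulo: -3 vs 1 */
    printf("%u %u\n", 250u >> 2, 250u << 2); /* unsigned shifts: always defined */
    printf("%d %d\n", -2 < 0, -2u > 0);      /* relationals: both print 1 */
    return 0;
}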
Yes. Signedness will affect the result of Greater Than and Less Than operators in C. Consider the following code:
unsigned int a = -5;
unsigned int b = 7;
if (a < b)
printf("Less");
else
printf("More");
In this example, "More" is output even though -5 is clearly less than 7, because the -5 assigned to the unsigned variable is converted to a very large positive number.
This will also affect your arithmetic with different sized variables. Again, consider this example:
unsigned char a = -5;
signed short b = 12;
printf("%d", a+b);
The printed result is 263, not the expected 7. This is because -5 is actually stored as 251 in the unsigned char. Wraparound makes your operations work out for same-sized variables, but when widening, the compiler does not sign-extend unsigned variables, so they keep their original positive value in the larger type. Study how two's complement works and you'll see where this result comes from.
It affects the range of values that you can store in the variable.
It is relevant mainly in comparison (here u is an unsigned int and i an int, each holding the value 2; note that a uint8_t operand would first be promoted to int, hiding the effect):
printf("%d", (u-3) < 0); // -> 0
printf("%d", (i-3) < 0); // -> 1
Overflow on unsigned integers just wraps around. On signed values it is undefined behavior; anything can happen.
The signedness of 2's complement numbers is simply a matter of how you interpret the number. Imagine the 3-bit numbers:
000
001
010
011
100
101
110
111
If you think of 000 as zero and count upward the way that is natural to humans, you would interpret them like this:
000: 0
001: 1
010: 2
011: 3
100: 4
101: 5
110: 6
111: 7
This is called an "unsigned integer". You see everything as a number greater than or equal to zero.
Now, what if you want to have some numbers as negative? Well, 2's complement comes to the rescue. 2's complement is known to most people as just a formula, but in truth it's just congruence modulo 2^n, where n is the number of bits in your number.
Let me give you a few examples of congruency:
2 = 5 = 8 = -1 = -4 modulo 3
-2 = 6 = 14 modulo 8
Now, just for convenience, let's say you decide to have the leftmost bit of a number as its sign. So you want to have:
000: 0
001: positive
010: positive
011: positive
100: negative
101: negative
110: negative
111: negative
Viewing your numbers congruent modulo 2^3 (= 8), you know that:
4 = -4
5 = -3
6 = -2
7 = -1
Therefore, you view your numbers as:
000: 0
001: 1
010: 2
011: 3
100: -4
101: -3
110: -2
111: -1
As you can see, the actual bits for -3 and 5 (for example) are the same (if the number has 3 bits). Therefore, writing x = -3 or x = 5 gives you the same result.
Interpreting numbers congruent modulo 2^n has other benefits. If you sum 2 numbers, one negative and one positive, it could happen on paper that you have a carry that would be thrown away, yet the result is still correct. Why? That carry was a 2^n which is congruent to 0 modulo 2^n! Isn't that convenient?
Overflow is also another case of congruence. In our example, if you sum the two unsigned numbers 5 and 6, you get 3 rather than the true sum 11, because 11 is congruent to 3 modulo 8.
So, why do you use signed and unsigned? For the CPU there is actually very little difference. For you however:
If the number has n bits, the unsigned represents numbers from 0 to 2^n-1
If the number has n bits, the signed represents numbers from -2^(n-1) to 2^(n-1)-1
So, for example, if you assign -1 to an unsigned number, it's the same as assigning 2^n-1 to it.
As per your example, that's exactly what you are doing. You are assigning -3 to a uint8_t, which is outside its range, but the value is converted modulo 2^8, so as far as the CPU is concerned you are assigning 253 to it. Then all the rest of the operations are the same for both types and you end up getting the same result.
There is however a point that your example misses: operator >> on a signed number extends the sign when shifting. Since the result of both of your operations is 9 before shifting, you don't notice this. If you didn't have the +15, you would have -6 in i and 250 in u, and then >> 2 would result in -2 in i (printed with %u you would typically see 4294967294, since the value is promoted to int before being reinterpreted) and 62 in u. (See Peter Cordes' comment below for a few technicalities.)
To understand this better, take this example:
(signed)101011 (-21) >> 3 ----> 111101 (-3)
(unsigned)101011 ( 43) >> 3 ----> 000101 ( 5)
If you notice, floor(-21/8) is actually -3 and floor(43/8) is 5. However, -3 and 5 are not equal (and are not congruent modulo 64; 64 because these are 6-bit numbers).
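This difference is easy to observe in 8 bits (a sketch of mine; note that right-shifting a negative signed value is implementation-defined in C, so the -3 result assumes the common arithmetic-shift behavior):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int8_t  s = -21; /* bits 11101011 in 8-bit two's complement */
    uint8_t u = 235; /* the same bit pattern, read as unsigned */
    /* Arithmetic shift copies the sign bit; logical shift fills with 0. */
    printf("signed:   %d >> 3 = %d\n", s, s >> 3);                     /* typically -3 */
    printf("unsigned: %u >> 3 = %u\n", (unsigned)u, (unsigned)(u >> 3)); /* 29 */
    return 0;
}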
There are two unsigned ints (x and y) that need to be subtracted. x is always larger than y. However, both x and y can wrap around; for example, if they were both bytes, after 0xff comes 0x00. The problem case is if x wraps around, while y does not. Now x appears to be smaller than y. Luckily, x will not wrap around twice (only once is guaranteed). Assuming bytes, x has wrapped and is now 0x2, whereas y has not and is 0xFE. The right answer of x - y is supposed to be 0x4.
Maybe,
( x > y) ? (x-y) : (x+0xff-y);
But I think there is another way, something involving 2's complement? And in this embedded system, x and y are the largest unsigned int types, so adding 0xff... is not possible.
What is the best way to write the statement (target language is C)?
Assuming two unsigned integers:
If you know that one is supposed to be "larger" than the other, just subtract. It will work provided you haven't wrapped around more than once (obviously, if you have, you won't be able to tell).
If you don't know that one is larger than the other, subtract and cast the result to a signed int of the same width. It will work provided the difference between the two is in the range of the signed int (if not, you won't be able to tell).
To clarify: the scenario described by the original poster seems to be confusing people, but is typical of monotonically increasing fixed-width counters, such as hardware tick counters, or sequence numbers in protocols. The counter goes (e.g. for 8 bits) 0xfc, 0xfd, 0xfe, 0xff, 0x00, 0x01, 0x02, 0x03 etc., and you know that of the two values x and y that you have, x comes later. If x==0x02 and y==0xfe, the calculation x-y (as an 8-bit result) will give the correct answer of 4, assuming that subtraction of two n-bit values wraps modulo 2^n - which C99 guarantees for subtraction of unsigned values. (Note: the C standard does not guarantee this behaviour for subtraction of signed values.)
Here's a little more detail of why it 'just works' when you subtract the 'smaller' from the 'larger'.
A couple of things going into this…
1. In hardware, subtraction uses addition: The appropriate operand is simply negated before being added.
2. In two’s complement (which pretty much everything uses), an integer is negated by inverting all the bits then adding 1.
Hardware does this more efficiently than it sounds from the above description, but that’s the basic algorithm for subtraction (even when values are unsigned).
So, let's figure 2 - 250 using 8-bit unsigned integers. In binary we have
0 0 0 0 0 0 1 0
- 1 1 1 1 1 0 1 0
We negate the operand being subtracted and then add. Recall that to negate we invert all the bits then add 1. After inverting the bits of the second operand we have
0 0 0 0 0 1 0 1
Then after adding 1 we have
0 0 0 0 0 1 1 0
Now we perform addition...
0 0 0 0 0 0 1 0
+ 0 0 0 0 0 1 1 0
= 0 0 0 0 1 0 0 0 = 8, which is the result we wanted from 2 - 250
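The same computation can be checked in C (a small sketch of mine); both the invert-and-add-one negation and the plain unsigned subtraction give 8:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t a = 2, b = 250;
    uint8_t neg_b = (uint8_t)(~b + 1);    /* invert the bits, add 1: 6 */
    uint8_t sum   = (uint8_t)(a + neg_b); /* 2 + 6 = 8 (any carry out of bit 7 is discarded) */
    printf("2 - 250 as uint8_t: %u\n", (unsigned)(uint8_t)(a - b)); /* 8 */
    printf("2 + negate(250):    %u\n", (unsigned)sum);              /* 8 */
    return 0;
}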
Maybe I don't understand, but what's wrong with:
unsigned r = x - y;
The question, as stated, is confusing. You said that you are subtracting unsigned values. If x is always larger than y, as you said, then x - y cannot possibly wrap around or overflow. So you just do x - y (if that's what you need) and that's it.
This is an efficient way to determine the amount of free space in a circular buffer or do sliding window flow control.
Use unsigned ints for head and tail - increment them and let them wrap!
Buffer length has to be a power of 2.
free = ((head - tail) & size_mask), where size_mask is 2^n - 1 and 2^n is the buffer or window size.
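A minimal sketch (mine, assuming a power-of-two size and the convention that head counts writes and tail counts reads) of that accounting:

#include <stdio.h>

#define BUF_SIZE  256u               /* must be a power of 2 */
#define SIZE_MASK (BUF_SIZE - 1u)

int main(void)
{
    unsigned head = 0, tail = 0;     /* free-running counters; wrapping is fine */
    head += 300;                     /* producer has written 300 items */
    tail += 290;                     /* consumer has read 290 items */
    unsigned used = (head - tail) & SIZE_MASK; /* 10, even if head has wrapped */
    unsigned space = BUF_SIZE - used;
    printf("used = %u, free = %u\n", used, space);
    return 0;
}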
Just to put the already correct answer into code:
If you know that x is the smaller value, the following calculation just works:
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint8_t x = 0xff;
    uint8_t y = x + 20; /* y wraps to 19 */
    uint8_t res = y - x;
    printf("Expect 20: %d\n", res); // res is 20
    return 0;
}
If you do not know which one is smaller:
#include <stdio.h>
#include <stdint.h>

int main()
{
    uint8_t x = 0xff;
    uint8_t y = x + 20;
    int8_t res1 = (int8_t)x - y; // -1 - 19 = -20
    int8_t res2 = (int8_t)y - x; // 19 - 255 = -236; stored into int8_t as 20 on two's complement machines
    printf("Expect -20 and 20: %d and %d\n", res1, res2);
    return 0;
}
Where the difference must be inside the range of int8_t in this case.
The code experiment helped me to understand the solution better.
The problem should be stated as follows:
Let's assume the position (angle) of two pointers a and b of a clock is given by a uint8_t. The whole circumference is divided into the 256 values of a uint8_t. How can the smaller distance between the two pointers be calculated efficiently?
A solution is:
uint8_t smaller_distance = abs( (int8_t)( a - b ) );
I suspect there is nothing more efficient, as that would imply there is something more efficient than abs().
To echo everyone else replying, if you just subtract the two and interpret the result as unsigned you'll be fine.
Unless you have an explicit counterexample: if, say, x were 0x2 and y were 0x14 with no wraparound relationship between them, then x - y would result in 0xEE, not 0x4, unless you have more constraints on the math that are unstated.
Yet another answer, and hopefully easy to understand:
SUMMARY:
It's assumed the OP's x and y are assigned values from a counter, e.g., from a timer.
(x - y) will always give the value desired, even if the counter wraps.
This assumes the counter is incremented less than 2^N times between y and x,
for N-bit unsigned ints.
DESCRIPTION:
A counter variable is unsigned and it can wrap around.
A uint8 counter would have values:
0, 1, 2, ..., 255, 0, 1, 2, ..., 255, ...
The number of counter tics between two points can be calculated as shown below.
This assumes the counter is incremented less than 256 times, between y and x.
uint8 x, y, counter, counterTics;
<initialize the counter>
<do stuff while the counter increments>
y = counter;
<do stuff while the counter increments>
x = counter;
counterTics = x - y;
EXPLANATION:
For uint8, and the counter-tics from y to x is less than 256 (i.e., less than 2^8):
If (x >= y) then: the counter did not wrap, counterTics == x - y
If (x < y) then: the counter wrapped, counterTics == (256-y) + x
(256-y) is the number of tics before wrapping.
x is the number of tics after wrapping.
Note: if those calculations are made in the order shown, no negative numbers are involved.
This equation holds for both cases: counterTics == (256+x-y) mod 256
For uintN, where N is the number of bits:
counterTics == ((2^N)+x-y) mod (2^N)
The last equation also describes the result in C when subtracting unsigned int's, in general.
This is not to say the compiler or processor uses that equation when subtracting unsigned int's.
RATIONALE:
The explanation is consistent with what is described in this ACM paper:
"Understanding Integer Overflow in C/C++", by Dietz, et al.
HARDWARE INTEGER ARITHMETIC
When an n-bit addition or subtraction operation on unsigned or two’s complement integers overflows, the result “wraps around,” effectively subtracting 2^n from, or adding 2^n to, the true mathematical result. Equivalently, the result can be considered to occupy n+1 bits; the lower n bits are placed into the result register and the highest-order bit is placed into the processor’s carry flag.
INTEGER ARITHMETIC IN C AND C++
3.3. Unsigned Overflow
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
Thus, the semantics for unsigned overflow in C/C++ are precisely the same as the semantics of processor-level unsigned overflow as described in Section 2. As shown in Table I, UINT_MAX+1 must evaluate to zero in a conforming C and C++ implementation.
Also, it's easy to write a C program to test that the cases shown work as described.
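For example, a brute-force check over all uint8_t start values and elapsed counts (a sketch of mine) confirms that the plain subtraction matches the (2^N + x - y) mod 2^N formula:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned failures = 0, y, elapsed;
    for (y = 0; y < 256; y++) {
        for (elapsed = 0; elapsed < 256; elapsed++) {
            uint8_t x = (uint8_t)(y + elapsed); /* counter after 'elapsed' tics */
            uint8_t tics = (uint8_t)(x - y);    /* the subtraction under test */
            if (tics != (256 + x - y) % 256)
                failures++;
        }
    }
    printf("failures: %u\n", failures); /* prints 0 */
    return 0;
}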