In a hash table implementation in C, I am relying on the mod operator to transform my hash of the key by the capacity of the hash table as follows:
int i = compute_index(key, hash, capacity);
while(table[i].occupied && (strcmp(table[i].key, key) != 0))
{
...
The hash seems to be computed correctly: in my debugger I can call the hash function on the key, and it returns -724412585.
The strange thing is that when I compute this number mod the capacity on a regular calculator, I get 7, but in the debugger and in my code the result is -1. This is very confusing and I would appreciate some help.
My compute_index() simply does this:
int compute_index(const char* key, int (*hash)(const char*), int cap)
{
return hash(key) % cap;
}
Would appreciate some guidance, thank you.
There are two common conventions for the sign of a % b: in one case, the sign is the same as b (i.e. always positive when b is positive), and in the other case it's the same as a (i.e. negative when a is negative). Older versions of C allowed either behavior, but C99 and later require the second behavior (which goes hand-in-hand with requiring truncating division, rather than leaving the choice of truncating or flooring division up to the compiler).
The upshot of this is that if your hash function can return a negative number, then hash(key) % cap will also be negative sometimes. The simplest thing you can do is to alter hash to return unsigned int instead of int.
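For illustration, a minimal sketch of that fix (assuming cap is positive): doing the arithmetic in unsigned means the remainder can never be negative:

/* Sketch: compute the index in unsigned arithmetic; a negative hash
   wraps to a large positive value, and h % cap lands in [0, cap). */
int compute_index(const char* key, int (*hash)(const char*), int cap)
{
    unsigned int h = (unsigned int)hash(key);
    return (int)(h % (unsigned int)cap);
}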
I agree w/ everything @hobbs said, except for the following:
I believe the easiest thing you can do is one of the following:
change the return type of hash() to unsigned int,
cast the result of hash() to unsigned int within the body of compute_index() before the modulo operation, or
add the modulus back in if the result of the % operation is < 0, to get the abstract-algebraic sense of "integer mod q" (see the sketch below).
The behavior of % in C for negative numbers can be confusing to those with a background in abstract algebra, particularly for one who works with the rings of integers mod q.
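For illustration, the third option might look like this (a sketch, assuming cap > 0):

/* Sketch: fix up a negative remainder after the fact, giving the
   mathematical "mod cap" result. */
int compute_index(const char* key, int (*hash)(const char*), int cap)
{
    int r = hash(key) % cap;
    if (r < 0)
        r += cap;  /* shift a negative remainder into [0, cap) */
    return r;
}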
Consider the following program (C99):
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
int main(void)
{
printf("Enter int in range %jd .. %jd:\n > ", INTMAX_MIN, INTMAX_MAX);
intmax_t i;
if (scanf("%jd", &i) == 1)
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
}
Now as I understand it, this contains easily triggerable undefined behaviour, like this:
Enter int in range -9223372036854775808 .. 9223372036854775807:
> -9223372036854775808
Result: |-9223372036854775808| = -9223372036854775808
Questions:
Is this really undefined behaviour, as in "the code is allowed to take any code path, executing whatever strikes the compiler's fancy", when the user enters the bad number? Or is it some other flavor of not-completely-defined?
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
(There are a few related questions, but I didn't find one which answers question 2 above, so if you suggest duplicate, please make sure it answers that.)
If the result of imaxabs cannot be represented, which can happen when using two's complement, then the behavior is undefined.
7.8.2.1 The imaxabs function
The imaxabs function computes the absolute value of an integer j. If the result cannot
be represented, the behavior is undefined. 221)
221) The absolute value of the most negative number cannot be represented in two’s complement.
The check that makes no assumptions and is always defined is:
intmax_t i = ... ;
if( i < -INTMAX_MAX )
{
//handle error
}
(This if statement cannot be taken if using one's complement or sign-magnitude representation, so the compiler might give an unreachable code warning. The code itself is still defined and valid.)
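For instance, wiring that guard into the program from the question might look like this (a sketch, not the only way to structure it):

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    printf("Enter int in range %jd .. %jd:\n > ", INTMAX_MIN, INTMAX_MAX);
    intmax_t i;
    /* note: scanf itself is only well-defined if the input fits in intmax_t;
       this mirrors the question's program */
    if (scanf("%jd", &i) == 1)
    {
        if (i < -INTMAX_MAX)
            printf("|%jd| is not representable in intmax_t\n", i);
        else
            printf("Result: |%jd| = %jd\n", i, imaxabs(i));
    }
}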
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
One method is to use unsigned integers. The overflow behaviour of unsigned integers is well-defined as is the behaviour when converting from a signed to an unsigned integer.
So I think the following should be safe (it turns out it's horribly broken on some really obscure systems; see later in the post for an improved version):
uintmax_t j = i;
if (j > (uintmax_t)INTMAX_MAX) {
j = -j;
}
printf("Result: |%jd| = %ju\n", i, j);
So how does this work?
uintmax_t j = i;
This converts the signed integer into an unsigned one. If it's positive, the value stays the same; if it's negative, the value increases by 2^n (where n is the number of bits), which converts it to a large number (larger than INTMAX_MAX).
if (j > (uintmax_t)INTMAX_MAX) {
If the original number was positive (and hence less than or equal to INTMAX_MAX) this does nothing. If the original number was negative the inside of the if block is run.
j = -j;
The number is negated. The result of the negation is clearly negative and so cannot be represented as an unsigned integer, so it is increased by 2^n.
So algebraically, the result for negative i looks like
j = -(i + 2^n) + 2^n = -i
Clever, but this solution makes assumptions. It fails if INTMAX_MAX == UINTMAX_MAX, which is allowed by the C Standard.
Hmm, let's look at this (I'm reading https://busybox.net/~landley/c99-draft.html which is apparently the last C99 draft prior to standardisation; if anything changed in the final standard, please do tell me).
When typedef names differing only in the absence or presence of the initial u are defined, they shall denote corresponding signed and unsigned types as described in 6.2.5; an implementation shall not provide a type without also providing its corresponding type.
In 6.2.5 I see
For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.
In 6.2.6.2 I see
#1
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N-1), so that objects of that type shall be capable of representing values from 0 to 2^N - 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.39)
#2
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M<=N). If the sign bit is zero, it shall not affect the resulting value.
So yes, it seems you are right: while the signed and unsigned types have to be the same size, it does seem to be valid for the unsigned type to have one more padding bit than the signed type.
Ok, based on the analysis above revealing a flaw in my first attempt, I've written a more paranoid variant. This has two changes from my first version.
I use i < 0 rather than j > (uintmax_t)INTMAX_MAX to check for negative numbers. This means that the algorithm produces correct results for numbers greater than or equal to -INTMAX_MAX even when INTMAX_MAX == UINTMAX_MAX.
I add handling for the error case where INTMAX_MAX == UINTMAX_MAX, INTMAX_MIN == -INTMAX_MAX - 1 and i == INTMAX_MIN. This will result in j == 0 inside the if condition, which we can easily test for.
It can be seen from the requirements in the C standard that INTMAX_MIN cannot be smaller than -INTMAX_MAX -1 since there is only one sign bit and the number of value bits must be the same or lower than in the corresponding unsigned type. There are simply no bit patterns left to represent smaller numbers.
uintmax_t j = i;
if (i < 0) {
j = -j;
if (j == 0) {
printf("your platform sucks\n");
exit(1);
}
}
printf("Result: |%jd| = %ju\n", i, j);
@plugwash I think 2501 is correct. For example, the value -UINTMAX_MAX becomes 1: (-UINTMAX_MAX + (UINTMAX_MAX + 1)), and is not caught by your if. – hyde
Umm,
assuming INTMAX_MAX == UINTMAX_MAX and i = -INTMAX_MAX
uintmax_t j = i;
after this command j = -INTMAX_MAX + (UINTMAX_MAX + 1) = 1
if (i < 0) {
i is less than zero so we run the commands inside the if
j = -j;
after this command j = -1 + (UINTMAX_MAX + 1) = UINTMAX_MAX
which is the correct answer, so no need to trap it in an error case.
On two's complement systems, getting the absolute value of the most negative value is indeed undefined behavior, as the absolute value would be out of range. And it's nothing the compiler can help you with, as the UB happens at run-time.
The only way to protect against that is to compare the input against the most negative value for the type (INTMAX_MIN in the code you show).
So calculating the absolute value of an integer invokes undefined behaviour in one single case. Indeed, while the undefined behaviour can be avoided, it is impossible to give a correct result in that case.
Now consider multiplication of an integer by 3: here we have a much more serious problem. This operation invokes undefined behaviour in two thirds of all cases! For two thirds of all int values x, finding an int with the value 3x is simply impossible. That's a much more serious problem than the absolute value problem.
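To make that concrete, here is a sketch of the guard you would need before computing 3*x; it rejects two thirds of the int range, and the check itself cannot overflow because it only uses division:

#include <limits.h>

/* Returns nonzero iff 3*x would overflow int. */
int times3_would_overflow(int x)
{
    return x > INT_MAX / 3 || x < INT_MIN / 3;
}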
You may want to use some bit hacks. The sign mask is all ones for negative v and zero otherwise; doing the add and the XOR in unsigned arithmetic avoids the signed overflow that the naive version has for v == INT_MIN:

#include <limits.h>  /* for CHAR_BIT */

int v; // we want to find the absolute value of v
unsigned int r; // the result goes here
unsigned int const mask = (unsigned int)(v >> (sizeof(int) * CHAR_BIT - 1)); // all ones iff v < 0
r = ((unsigned int)v + mask) ^ mask;

This works for all values of v, including v == INT_MIN, where r becomes 2^31 (representable in unsigned int) without causing undefined behavior. (Right-shifting a negative signed value is implementation-defined; this relies on the common arithmetic-shift behavior.)
You can also use bitwise operations to handle this on ones' complement and sign-magnitude systems.
Reference: https://graphics.stanford.edu/~seander/bithacks.html#IntegerAbs
According to this: http://linux.die.net/man/3/imaxabs
Notes
Trying to take the absolute value of the most negative integer is not defined.
To handle the full range you could add something like this to your code
if (i != INTMAX_MIN) {
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
} else { /* Code around undefined abs(INTMAX_MIN) */
printf("Result: |%jd| = %jd%jd\n", i, -(i/10), -(i%10));
}
edit: As abs(INTMAX_MIN) cannot be represented on a 2's complement machine, 2 values within the representable range are concatenated on output as a string.
Tested with gcc, though printf required %lld as %jd was not a supported format.
Is this really undefined behaviour, as in "the code is allowed to take any code path, executing whatever strikes the compiler's fancy", when the user enters the bad number? Or is it some other flavor of not-completely-defined?
The behaviour of the program is only undefined when the bad number is successfully read in and passed to imaxabs(), which on a typical 2's complement system returns a negative result, as you observed.
That is the undefined behaviour in this case; the implementation would also be allowed to terminate the program with an overflow error if the ALU set status flags.
The reason for "undefined behaviour" in C is so compiler writers don't have to guard against overflow, so programs can run more efficiently. While it is within the C standard for every C program using abs() to try to kill your first born just because you call it with too negative a value, writing such code into the object file would simply be perverse.
The real problem with these undefined behaviours is that an optimising compiler can reason away naive checks, as in code like:
r = (i < 0) ? -i : i;
if (r < 0) { // This code may be pointless
// Do overflow recovery
doRecoveryProcessing();
} else {
printf("%jd", r);
}
As the compiler optimiser can reason that negative values are negated, it could in principle determine that (r < 0) is always false, so the attempt to trap the problem fails.
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
By far the best way is simply to ensure that the program works on a valid range, so in this case validating the input suffices (disallow INTMAX_MIN).
Programs printing tables of abs() ought to avoid INT*_MIN and so on.
if (i != INTMAX_MIN) {
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
} else { /* Code around undefined abs(INTMAX_MIN) */
printf("Result: |%jd| = %jd%jd\n", i, -(i/10), -(i%10));
}
This appears to write out abs(INTMAX_MIN) by fakery, allowing the program to live up to its promise to the user.
I was given this function in an interview and asked to describe its output in comments.
unsigned int d2(unsigned int a)
{
__int64 q = (__int64)a * 0x0AAAAAAAB; // (2^33+1) / 3
return (unsigned int)(q >> 33);
}
I have checked other questions on Stack Overflow related to division by 3, but none seems so fast and small.
Can anybody help me explain how the function produces the output described in the comments?
The function divides a 32-bit unsigned number by 3.
If you multiply by 2^33 and then divide by 2^33 (by right-shifting), you get the original number back. But if you instead multiply by (2^33)/3 and then divide by 2^33, you effectively divide by three.
The last digit of the constant is B instead of A so that it is rounded up; this compensates for the truncation of the final shift and makes the result exact for every 32-bit input.
There is no need to actually write this in your code, because the compiler will usually do this for you when you divide by a constant. Try it and see. (Furthermore, for a signed input the compiler can safely generate a signed right shift, but the C language itself does not define such an operation.)
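If you want to convince yourself, here is a quick test harness (my own sketch; it uses uint64_t rather than __int64, since for large a the product can exceed the signed 64-bit range):

#include <stdio.h>
#include <stdint.h>

/* Same trick with the portable unsigned 64-bit type. */
static unsigned int d2(unsigned int a)
{
    uint64_t q = (uint64_t)a * 0x0AAAAAAABu;  /* (2^33 + 1) / 3 */
    return (unsigned int)(q >> 33);
}

int main(void)
{
    unsigned int tests[] = { 0u, 1u, 2u, 3u, 100u, 4294967295u };
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("%u / 3 = %u (expected %u)\n",
               tests[i], d2(tests[i]), tests[i] / 3);
    return 0;
}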
When trying to store short integer values above 32,767 in C, just to see what happens, I notice that the result that gets printed to the screen is the number I am trying to store, minus 65,536. E.g. if I try to store 65,536 in a short variable, the number printed to the screen is 0. If I try to store 32,768, I get -32,768 printed to the screen. And if I try to store 65,546 and print it, I get 10. I think you get the picture. Why am I seeing these results?
Integer values are commonly stored using two's complement. In two's complement, the range of possible values is -2^(n-1) to 2^(n-1) - 1, where n is the number of bits used for storage. Because of the way the storage is done, when you go above 2^(n-1) - 1, you wrap around back to -2^(n-1).
Shorts typically use 16 bits (15 bits for the numeric value, with the final bit being the sign).
EDIT: Keep in mind that this behavior is NOT guaranteed to occur. Programming languages may act completely differently. Technically, going above or below the max/min value is undefined behavior and it should be treated as such. (Thanks to Eric Postpischil for keeping me on my toes)
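With that caveat, on a typical wrapping implementation you can predict the printed value by reducing modulo 2^16 and re-centering into the signed range:

stored = ((v + 32768) mod 65536) - 32768

65536 -> (98304 mod 65536) - 32768 = 32768 - 32768 = 0
32768 -> (65536 mod 65536) - 32768 = 0 - 32768 = -32768
65546 -> (98314 mod 65536) - 32768 = 32778 - 32768 = 10

which matches the values observed in the question.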
When a value is converted to a signed integer type but cannot be represented in that type, overflow occurs, and the behavior is undefined. It is common to see results as if a two’s complement encoding is used and as if the low bits are stored (or, equivalently, the value is wrapped modulo an appropriate power of two). However, you cannot rely on this behavior. The C standard says that, when signed integer overflow occurs, the behavior is undefined. So a compiler may act in surprising ways.
Consider this code, compiled for a target where short int is 16 bits:
#include <stdio.h>

void foo(int a, int b)
{
if (a)
{
short int x = b;
printf("x is %hd.\n", x);
}
else
{
printf("x is not assigned.\n");
}
}
void bar(int a, int b)
{
if (b == 65536)
foo(a, b);
}
Observe that foo is a perfectly fine function on its own, provided b is within range of a short int. And bar is a perfectly fine function, as long as it is called only with a equal to zero or b not equal to 65536.
While the compiler is inlining foo into bar, it may deduce from the fact that b must be 65536 at this point that there would be an overflow in short int x = b;. This implies either that this path is never taken (i.e., a must be zero) or that any behavior is permitted (because the behavior upon overflow is undefined by the C standard). In either case, the compiler is free to omit this code path. Thus, for optimization, the compiler could omit this path and generate code only for printf("x is not assigned.\n");. If you then executed code containing bar(1, 65536), the output would be “x is not assigned.”!
Compilers do make optimizations of this sort: The observation that one code path has undefined behavior implies the compiler may conclude that code path is never used.
To an observer, it looks like the effect of assigning a too-large value to a short int is to cause completely different code to be executed.
This is a rather silly question but why is int commonly used instead of unsigned int when defining a for loop for an array in C or C++?
for(int i = 0; i < arraySize; i++){}
for(unsigned int i = 0; i < arraySize; i++){}
I recognize the benefits of using int when doing something other than array indexing and the benefits of an iterator when using C++ containers. Is it just because it does not matter when looping through an array? Or should I avoid it all together and use a different type such as size_t?
Using int is more correct from a logical point of view for indexing an array.
unsigned semantics in C and C++ don't really mean "not negative"; they're more like "bitmask" or "modulo integer".
To understand why unsigned is not a good type for a "non-negative" number please consider these totally absurd statements:
Adding a possibly negative integer to a non-negative integer you get a non-negative integer
The difference of two non-negative integers is always a non-negative integer
Multiplying a non-negative integer by a negative integer you get a non-negative result
Obviously none of the above statements makes any sense... but it's how C and C++ unsigned semantics indeed work, as the snippet below illustrates.
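For instance (my sketch; the printed values assume a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 3;
    printf("%u\n", a - 5u);  /* 4294967294: a "non-negative" difference of 3 and 5 */
    printf("%u\n", a + -4);  /* 4294967295: adding a negative, result "non-negative" */
    printf("%u\n", a * -1);  /* 4294967293: multiplying by a negative, result "non-negative" */
    return 0;
}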
Actually using an unsigned type for the size of containers is a design mistake of C++ and unfortunately we're now doomed to use this wrong choice forever (for backward compatibility). You may like the name "unsigned" because it's similar to "non-negative" but the name is irrelevant and what counts is the semantic... and unsigned is very far from "non-negative".
For this reason when coding most loops on vectors my personally preferred form is:
for (int i=0,n=v.size(); i<n; i++) {
...
}
(of course assuming the size of the vector is not changing during the iteration and that I actually need the index in the body as otherwise the for (auto& x : v)... is better).
This running away from unsigned as soon as possible, and using plain integers, has the advantage of avoiding the traps that are a consequence of the unsigned size_t design mistake. For example, consider:
// draw lines connecting the dots
for (size_t i=0; i<pts.size()-1; i++) {
drawLine(pts[i], pts[i+1]);
}
the code above will have problems if the pts vector is empty, because pts.size()-1 is a huge nonsense number in that case. Dealing with expressions where a < b-1 is not the same as a+1 < b, even for commonly used values, is like dancing in a minefield.
Historically the justification for having size_t unsigned is for being able to use the extra bit for the values, e.g. being able to have 65535 elements in arrays instead of just 32767 on 16-bit platforms. In my opinion even at that time the extra cost of this wrong semantic choice was not worth the gain (and if 32767 elements are not enough now then 65535 won't be enough for long anyway).
Unsigned values are great and very useful, but NOT for representing container size or for indexes; for size and index regular signed integers work much better because the semantic is what you would expect.
Unsigned values are the ideal type when you need the modulo arithmetic property or when you want to work at the bit level.
This is a more general phenomenon: often people don't use the correct types for their integers. Modern C has semantic typedefs that are much preferable to the primitive integer types. E.g. everything that is a "size" should just be typed as size_t. If you use the semantic types systematically for your application variables, loop variables come much more naturally with these types, too.
And I have seen several bugs that were difficult to detect that came from using int or so: code that all of a sudden crashed on large matrices and the like. Just coding correctly with the correct types avoids that.
It's purely laziness and ignorance. You should always use the right types for indices, and unless you have further information that restricts the range of possible indices, size_t is the right type.
Of course if the dimension was read from a single-byte field in a file, then you know it's in the range 0-255, and int would be a perfectly reasonable index type. Likewise, int would be okay if you're looping a fixed number of times, like 0 to 99. But there's still another reason not to use int: if you use i%2 in your loop body to treat even/odd indices differently, i%2 is a lot more expensive when i is signed than when i is unsigned...
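A sketch of the difference (this describes what compilers typically emit, not a language guarantee):

/* Unsigned: i % 2 can be compiled to a single AND, i.e. i & 1. */
unsigned odd_u(unsigned i) { return i % 2; }

/* Signed: C99 requires (-3) % 2 == -1, not 1, so unless the compiler
   can prove i >= 0 it must emit extra instructions to fix up the sign
   of the remainder. */
int odd_s(int i) { return i % 2; }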
Not much difference. One benefit of int is that it's signed. Thus int i < 0 makes sense, while unsigned i < 0 doesn't.
If indexes are calculated, that may be beneficial (for example, you might get cases where you will never enter a loop if some result is negative).
And yes, it is less to write :-)
Using int to index an array is legacy, but still widely adopted. int is just a generic number type and does not correspond to the addressing capabilities of the platform. In case it happens to be shorter or longer than that, you may encounter strange results when trying to index a very large array that goes beyond its range.
On modern platforms, off_t, ptrdiff_t and size_t guarantee much more portability.
Another advantage of these types is that they give context to someone who reads the code. When you see the above types you know that the code will do array subscripting or pointer arithmetic, not just any calculation.
So, if you want to write bullet-proof, portable and context-sensible code, you can do it at the expense of a few keystrokes.
GCC even supports a typeof extension which relieves you from typing the same typename all over the place:
typeof(arraySize) i;
for (i = 0; i < arraySize; i++) {
...
}
Then, if you change the type of arraySize, the type of i changes automatically.
It really depends on the coder. Some coders prefer type perfectionism, so they'll use whatever type they're comparing against. For example, if they're iterating through a C string, you might see:
size_t sz = strlen("hello");
for (size_t i = 0; i < sz; i++) {
...
}
While if they're just doing something 10 times, you'll probably still see int:
for (int i = 0; i < 10; i++) {
...
}
I use int because it requires less physical typing and it doesn't matter: they take up the same amount of space, and unless your array has a few billion elements you won't overflow, provided you're not using a 16-bit compiler, which I'm usually not.
Because unless you have an array bigger than two gigabytes of type char, or 4 gigabytes of type short, or 8 gigabytes of type int, etc., it doesn't really matter if the variable is signed or not.
So, why type more when you can type less?
Aside from the issue that it's shorter to type, the reason is that it allows negative numbers.
Since we can't say in advance whether a value can ever be negative, most functions that take integer arguments take the signed variety. Since most functions use signed integers, it is often less work to use signed integers for things like loops. Otherwise, you have the potential of having to add a bunch of typecasts.
As we move to 64-bit platforms, the positive range of a signed integer should be more than enough for most purposes. In these cases, there's not much reason not to use a signed integer.
Consider the following simple example:
int max = some_user_input; // or some_calculation_result
for(unsigned int i = 0; i < max; ++i)
do_something;
If max happens to be a negative value, say -1, the -1 will be regarded as UINT_MAX (when two integers with the same rank but different signedness are compared, the signed one is converted to unsigned). On the other hand, the following code would not have this issue:
int max = some_user_input;
for(int i = 0; i < max; ++i)
do_something;
Given a negative max input, the loop will safely be skipped.
Using a signed int is - in most cases - a mistake that could easily result in potential bugs as well as undefined behavior.
Using size_t usually matches the system's word size (64 bits on 64-bit systems and 32 bits on 32-bit systems), always allowing the correct range for the loop and minimizing the risk of an integer overflow.
The int recommendation comes to solve an issue where reverse for loops were often written incorrectly by inexperienced programmers (of course, int might not be in the correct range for the loop):
/* a correct reverse for loop */
for (size_t i = count; i > 0;) {
--i; /* note that this is not part of the `for` statement */
/* code for loop where i is for zero based `index` */
}
/* an incorrect reverse for loop (bug on count == 0) */
for (size_t i = count - 1; i > 0; --i) {
/* if count == 0, count - 1 wraps around to SIZE_MAX and the loop runs with huge out-of-range indices */
}
In general, signed and unsigned variables shouldn't be mixed together, so at times using an int is unavoidable. However, the correct type for a for loop is, as a rule, size_t.
There's a nice talk about this misconception that signed variables are better than unsigned variables, you can find it on YouTube (Signed Integers Considered Harmful by Robert Seacord).
TL;DR: Signed variables are more dangerous and require more code than unsigned variables (which should be preferred in almost all cases and definitely whenever negative values aren't logically expected).
With unsigned variables the only concern is the overflow boundary which has a strictly defined behavior (wrap-around) and uses clearly defined modular mathematics.
This allows a single edge case test to catch an overflow and that test can be performed after the mathematical operation was executed.
However, with signed variables the overflow behavior is undefined (UB) and the negative range is actually larger than the positive range - things that add edge cases that must be tested for and explicitly handled before the mathematical operation can be executed.
i.e., how much is INT_MIN * -1? (The preprocessor will protect you for constant expressions, but without it you're in a jam.)
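To illustrate that asymmetry, here is a sketch contrasting the two styles (the helper names are my own):

#include <limits.h>

/* Signed: the edge case must be rejected BEFORE the operation,
   because -INT_MIN overflows (undefined behavior) on two's complement. */
int safe_negate(int x, int *out)
{
    if (x == INT_MIN)
        return 0;              /* result not representable */
    *out = -x;
    return 1;
}

/* Unsigned: wrap-around is defined, so overflow of a + b can be
   detected AFTER the addition with a single comparison. */
int add_check(unsigned a, unsigned b, unsigned *out)
{
    *out = a + b;
    return *out >= a;          /* 0 means the sum wrapped around */
}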
P.S.
As for the example offered by @6502 in their answer, the whole thing is again an issue of trying to cut corners, and a simple missing if statement.
When a loop assumes at least 2 elements in an array, this assumption should be tested beforehand. i.e.:
// draw lines connecting the dots - forward loop
if(pts.size() > 1) { // first make sure there's enough dots
for (size_t i=0; i < pts.size()-1; i++) { // then loop
drawLine(pts[i], pts[i+1]);
}
}
// or test against i + 1 : which tests the desired pts[i+1]
for (size_t i = 0; i + 1 < pts.size(); i++) { // then loop
drawLine(pts[i], pts[i+1]);
}
// or start i at 1 : but note that `-` is slower than `+`
for (size_t i = 1; i < pts.size(); i++) { // then loop
drawLine(pts[i - 1], pts[i]);
}
Let us have
int a, b, c; // may be char or float, anything actually
c = a + b;
let the int type be represented by 4 bytes. Let's say a+b requires 1 bit more than 4 bytes (i.e., let's say the result is 1 followed by 32 zeroes, in binary). This would result in c = 0, and I am sure the computer's microprocessor would set some kind of overflow flag. Is there any built-in method to check this in C?
I am actually working on building a number type that is 1024 bits long (for example, int is a built-in number type that is 32 bits long). I have attempted this using unsigned char arrays with 128 elements. I also need to define addition and subtraction operations on these numbers. I have written the code for addition, but I am having problems with subtraction. I don't need to worry about getting negative results, because the way I call the subtraction function always ensures that the result is positive; but to implement the subtraction function I need to somehow get the 2's complement of the subtrahend, which is itself my custom 1024-bit number.
I am sorry if my description is difficult to understand. If needed, I will elaborate more. I am including my code for the addition function and the incomplete subtraction function. NUM_OF_WORDS is a constant declared as
#define NUM_OF_WORDS 128
Please let me know if you did not understand my question or any part of my code.
PS: I don't see how to upload attachments in this forum so I am directing you to another website. My code may be found there
click on download in this page
Incidentally, I found this
I intend to replace INT_MAX by UCHAR_MAX, as my 1024-bit numbers consist of arrays of char type (8-bit variables).
Is this check sufficient for all cases?
Update:
Yes I am working on Cryptography.
I need to implement a Montgomery Multiplication routine for 1024 bit size integers.
I had also considered using GMP library but couldn't find out how to use it.
I looked up a tutorial and after a few small modifications I was able to build the GMP project file in VC++ 6 which resulted in a lot of .obj files, but now I am not sure what to do with them.
Still, it would be good if I could write my own data types, as that gives me complete control over how the arithmetic operations on my custom data type work, and I also need to be able to extend them from 1024 bits to larger sizes in the future.
If you're adding unsigned numbers then you can do this
c = a+b;
if (c<a) {
// you'll get here if and only if overflow has occurred
}
and you may even find that your compiler is clever enough to implement it by checking the overflow or carry flag instead of doing an extra comparison. For instance, I just fed this to gcc -O3 -S:
unsigned int g(void), h(void);  /* defined elsewhere */

unsigned int foo() {
  unsigned int x=g(), y=h();
  unsigned int z = x+y;
  return z<x ? 0 : z;  /* z<x is true exactly when the addition wrapped */
}
and got this for the key bit of the code:
movl $0, %edx
addl %ebx, %eax
cmovb %edx, %eax
where you'll notice there's no extra comparison instruction.
Contrary to popular belief, an int overflow results in undefined behavior. This means that once a + b overflows, it doesn't make sense to use this value (or do anything else, for that matter). The wrap-around is just what most machines happen to do in case of overflow, but they might as well explode.
To check whether an int overflow will occur when adding two non-negative integers a and b, you can do the following:
if (INT_MAX - b < a) {
/* int overflow when evaluating a+b */
}
This is due to the fact that if a + b > INT_MAX, then INT_MAX - b < a, and INT_MAX - b cannot overflow.
You will have to pay special attention to the case where b is negative, which is left as an exercise for the reader ;)
Regarding your actual goal: 1024-bit numbers suffer from exactly the same overall issues as 32-bit numbers. It might be more promising to choose a completely different approach, e.g. representing numbers as, say, linked lists of digits, using a very large base B. Usually, B is chosen such that B = sqrt(INT_MAX), so multiplication of digits doesn't overflow the machine's int type.
This way, you can represent arbitrarily large numbers, where "arbitrary" means "only limited by the amount of main memory available".
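A minimal sketch of that representation (arrays rather than linked lists, and a small illustrative base of 10000; B would normally be chosen near sqrt(INT_MAX)):

#include <stdio.h>

#define BASE    10000  /* illustrative; digit sums stay far below INT_MAX */
#define NDIGITS 8      /* arbitrary fixed width for this sketch */

/* Add two little-endian base-BASE numbers digit by digit,
   propagating the carry. No intermediate value can overflow int. */
void big_add(const int a[NDIGITS], const int b[NDIGITS], int sum[NDIGITS])
{
    int carry = 0;
    for (int i = 0; i < NDIGITS; i++) {
        int t = a[i] + b[i] + carry;
        sum[i] = t % BASE;
        carry  = t / BASE;
    }
    /* a nonzero carry here would mean the result needs one more digit */
}

int main(void)
{
    int a[NDIGITS] = { 9999, 9999 };  /* 99,999,999 in base 10000 */
    int b[NDIGITS] = { 1 };           /* 1 */
    int s[NDIGITS];
    big_add(a, b, s);
    for (int i = NDIGITS - 1; i >= 0; i--)  /* most significant digit first */
        printf("%04d", s[i]);
    printf("\n");  /* prints 100000000 with leading zeros */
}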
If you are working with unsigned numbers, then if a <= UINT_MAX, b <= UINT_MAX, and a + b > UINT_MAX, then c = (a + b) mod (UINT_MAX + 1) will always be smaller than both a and b. And this is the only case where that can happen.
So you can detect overflow this way.
int add_return_overflow(unsigned int a, unsigned int b, unsigned int* c) {
*c = a + b;
return *c < a && *c < b;
}
Information which may be useful on this subject:
Secure Coding in C and C++
IntSafe library
You can base a solution on a particular feature of the C language. According to the specification, when you add two unsigned ints, "the result value is congruent modulo 2^n to the true mathematical result" ("C - A Reference Manual" by Harbison and Steele). This means you can use a simple arithmetic check to detect overflow:
#include <stdio.h>
int main() {
unsigned int a, b, c;
char *overflow;
a = (unsigned int)-1;
for (b = 0; b < 3; b++) {
c = a + b;
overflow = (c < a) ? "yes" : "no"; /* after unsigned wrap-around, the sum is smaller than either operand */
printf("%u + %u = %u, %s overflow\n", a, b, c, overflow);
}
return 0;
}
Look at the sign bits (MSBs). For signed addition, overflow has occurred exactly when both operands have the same sign and the result's sign differs from it.
Note that a carry out alone will not tell you whether the result is correct: the result can be correct even with a carry. For instance, 3 + (-1) is 2 with a carry out, but with no overflow.
So in order to detect signed overflow, you need to check that the operands had the same sign (XOR of their MSBs is 0) and that the result's MSB differs.
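Stated as code, the usual form of that sign-bit test looks like the following sketch; the sum is computed on unsigned copies so that the very overflow being detected does not itself invoke undefined behavior:

#include <limits.h>

/* Overflow of a + b occurred iff a and b have the same sign
   and the (wrapped) sum's sign differs from both. */
int add_overflows(int a, int b)
{
    unsigned ua = (unsigned)a, ub = (unsigned)b;
    unsigned us = ua + ub;   /* wraps modulo 2^n, never undefined */
    return (int)(((ua ^ us) & (ub ^ us)) >> (sizeof(int) * CHAR_BIT - 1));
}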
Once you add 1 to INT_MAX, you typically end up with INT_MIN (i.e. the value wraps around, although the C standard leaves this undefined).
In C, there's no portable way to test an overflow flag, because all 32 bits are used to represent the integer and no status flag is visible from the language. You can only test to see whether the number you get falls within a valid range, as in your link.
You'll get answers suggesting that you can test if (c < a); note that this works for a single unsigned addition, but if you are accumulating several values, the running sum can wrap and still end up greater than a even though an overflow occurred along the way.