Checking overflow in C

Let us have
int a, b, c; // may be char or float, anything actually
c = a + b;
Let the int type be represented by 4 bytes (32 bits). Say a+b needs one bit more than 32, i.e. the true result is a 1 followed by 32 zeroes in binary. This would result in c = 0, and I am sure the computer's microprocessor sets some kind of overflow flag. Is there any built-in way to check this in C?
I am actually working on building a number type that is 1024 bits long (for comparison, int is a built-in number type that is 32 bits long). I have attempted this using arrays of unsigned char with 128 elements. I also need to define addition and subtraction operations on these numbers. I have written the code for addition but I am having problems with subtraction. I don't need to worry about getting negative results, because the way I call the subtracting function always ensures that the result of the subtraction is positive, but to implement the subtraction function I need to somehow get the 2's complement of the subtrahend, which is itself my custom 1024-bit number.
I am sorry if my description is difficult to follow; if needed I will elaborate. I am including my code for the adding function and the incomplete subtracting function. NUM_OF_WORDS is a constant declared as
#define NUM_OF_WORDS 128
Please let me know if you did not understand my question or any part of my code.
PS: I don't see how to upload attachments in this forum, so I am directing you to another website; my code may be found there.
Click on download on that page.
Incidentally, I found this
I intend to replace INT_MAX with UCHAR_MAX, as my 1024-bit numbers consist of arrays of char types (8-bit variables).
Is this check sufficient for all cases?
Update:
Yes I am working on Cryptography.
I need to implement a Montgomery Multiplication routine for 1024 bit size integers.
I had also considered using the GMP library but couldn't figure out how to use it.
I looked up a tutorial and, after a few small modifications, I was able to build the GMP project file in VC++ 6, which produced a lot of .obj files, but now I am not sure what to do with them.
Still, it would be good if I could write my own data types, as it will give me complete control over how the arithmetic operations on my custom data type work, and I also need to be able to extend it from 1024 bits to larger sizes in the future.
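For reference, here is a minimal sketch of the kind of limb-by-limb arithmetic described above. It assumes little-endian limb order (least significant byte in element 0), NUM_OF_WORDS unsigned char limbs, that unsigned int is wider than unsigned char (true on all common platforms), and that the caller guarantees a >= b before subtracting; the function names are illustrative, not taken from the linked code.

#include <limits.h>

#define NUM_OF_WORDS 128   /* 128 limbs of CHAR_BIT bits = 1024 bits when CHAR_BIT is 8 */

/* result = a + b; returns the final carry (1 if the sum did not fit in 1024 bits) */
unsigned add_big(unsigned char result[NUM_OF_WORDS],
                 const unsigned char a[NUM_OF_WORDS],
                 const unsigned char b[NUM_OF_WORDS])
{
    unsigned carry = 0;
    for (int i = 0; i < NUM_OF_WORDS; i++) {
        unsigned sum = (unsigned)a[i] + b[i] + carry;
        result[i] = (unsigned char)(sum & UCHAR_MAX);  /* keep the low CHAR_BIT bits */
        carry = sum >> CHAR_BIT;                       /* 1 if this limb overflowed, else 0 */
    }
    return carry;
}

/* result = a - b, valid only when a >= b; computed as a + ~b + 1,
   i.e. by adding the two's complement of b limb by limb */
void sub_big(unsigned char result[NUM_OF_WORDS],
             const unsigned char a[NUM_OF_WORDS],
             const unsigned char b[NUM_OF_WORDS])
{
    unsigned carry = 1;  /* the "+1" of the two's complement */
    for (int i = 0; i < NUM_OF_WORDS; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned char)~b[i] + carry;
        result[i] = (unsigned char)(sum & UCHAR_MAX);
        carry = sum >> CHAR_BIT;
    }
    /* the final carry out is discarded; it is always 1 when a >= b */
}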

If you're adding unsigned numbers then you can do this
c = a+b;
if (c<a) {
// you'll get here if and only if overflow has occurred
}
and you may even find that your compiler is clever enough to implement it by checking the overflow or carry flag instead of doing an extra comparison. For instance, I just fed this to gcc -O3 -S:
unsigned int g(void), h(void);   /* defined elsewhere */
unsigned int foo() {
unsigned int x=g(), y=h();
unsigned int z = x+y;
return z<x ? 0 : z;
}
and got this for the key bit of the code:
movl $0, %edx
addl %ebx, %eax
cmovb %edx, %eax
where you'll notice there's no extra comparison instruction.

Contrary to popular belief, an int overflow results in undefined behavior. This means that once a + b overflows, it doesn't make sense to use this value (or do anything else, for that matter). The wrap-around is just what most machines happen to do in case of overflow, but they might as well explode.
To check whether an int overflow will occur when adding two non-negative integers a and b, you can do the following:
if (INT_MAX - b < a) {
/* int overflow when evaluating a+b */
}
This is due to the fact that if a + b > INT_MAX, then INT_MAX - b < a, and INT_MAX - b cannot overflow because b is non-negative.
You will have to pay special attention to the case where b is negative, which is left as an exercise for the reader ;)
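For completeness, here is one way to handle both signs (a sketch, not the only possible formulation):

#include <limits.h>

/* Returns 1 if a + b would overflow or underflow int, 0 otherwise. */
int add_would_overflow(int a, int b)
{
    if (b > 0)
        return a > INT_MAX - b;   /* would exceed INT_MAX */
    else
        return a < INT_MIN - b;   /* would go below INT_MIN (b <= 0, so INT_MIN - b cannot overflow) */
}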
Regarding your actual goal: 1024-bit numbers suffer from exactly the same overall issues as 32-bit numbers. It might be more promising to choose a completely different approach, e.g. representing numbers as, say, linked lists of digits, using a very large base B. Usually, B is chosen such that B = sqrt(INT_MAX), so multiplication of digits doesn't overflow the machine's int type.
This way, you can represent arbitrarily large numbers, where "arbitrary" means "only limited by the amount of main memory available".

If you are working with unsigned numbers, then if a <= UINT_MAX, b <= UINT_MAX, and a + b > UINT_MAX, then c = a + b (which wraps modulo UINT_MAX + 1) will always be smaller than both a and b. And this is the only case where that can happen.
So you can detect overflow this way.
int add_return_overflow(unsigned int a, unsigned int b, unsigned int* c) {
*c = a + b;
return *c < a && *c < b;
}

Information which may be useful on this subject:
Secure Coding in C and C++
IntSafe library

You can base a solution on a particular feature of the C language. According to the specification, when you add two unsigned ints, "the result value is congruent mod 2^n to the true result" ("C - A reference manual" by Harbison and Steele). This means you can use some simple arithmetic checks to detect overflow:
#include <stdio.h>
int main() {
unsigned int a, b, c;
char *overflow;
a = (unsigned int)-1;
for (b = 0; b < 3; b++) {
c = a + b;
overflow = (c < a) ? "yes" : "no";
printf("%u + %u = %u, %s overflow\n", a, b, c, overflow);
}
return 0;
}

Just XOR the MSBs of both operands and of the result; the result of this operation is the overflow flag.
But that will not show you whether the result is correct or not (it might be a correct result even with overflow); for instance, 3 + (-1) is 2, with overflow.
In order to figure that out using signed arithmetic you need to check whether both operands were of the same sign (XOR of the MSBs).
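For reference, here is a common formulation of that sign-bit test in code. This is only a sketch; it assumes the usual case of no padding bits and an unsigned int with the same width as int, and it deliberately does the addition on unsigned copies so the wrap-around itself is well defined:

#include <limits.h>

/* Signed overflow of a + b happens exactly when a and b have the same sign
   and the (wrapped) sum has the opposite sign. */
int signed_add_overflows(int a, int b)
{
    unsigned int ua = (unsigned int)a;
    unsigned int ub = (unsigned int)b;
    unsigned int us = ua + ub;   /* well-defined wrap-around */
    return (int)(((ua ^ us) & (ub ^ us)) >> (sizeof(int) * CHAR_BIT - 1));
}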

Once you add 1 to INT_MAX, you typically end up with INT_MIN (i.e. the value wraps around).
In C, there's no reliable way to test for overflow after the fact, because all 32 bits are used to represent the integer (there is no separate status flag). You can only test whether the numbers you are about to operate on would take you outside the valid range, as in your link.
You'll get answers suggesting that you can test if (c < a); however, note that you could overflow the values of a and/or b to the point where their addition forms a number greater than a (but still overflowed).

Related

Using the mod operator correctly in C

In a hash table implementation in C, I am relying on the mod operator to reduce my hash of the key modulo the capacity of the hash table, as follows:
int i = compute_index(key, hash, capacity);
while(table[i].occupied && (strcmp(table[i].key, key) != 0))
{
...
The hash seems to be computed correctly, because in my debugger, I am able to utilize the hash function and the key to output -724412585.
The strange thing is, when I mod this number by the capacity using a regular calculator, I get 7. But within the debugger, and my code, the integer -1 is returned. This is very confusing and I would appreciate some help.
My compute_index() simply does this:
int compute_index(const char* key, int (*hash)(const char*), int cap)
{
return hash(key) % cap;
}
Would appreciate some guidance, thank you.
There are two common conventions for the sign of a % b: in one case, the sign is the same as b (i.e. always positive when b is positive), and in the other case it's the same as a (i.e. negative when a is negative). Older versions of C allowed either behavior, but C99 and later require the second behavior (which goes hand-in-hand with requiring truncating division, rather than leaving the choice of truncating or flooring division up to the compiler).
The upshot of this is that if your hash function can return a negative number, then hash(key) % cap will also be negative sometimes. The simplest thing you can do is to alter hash to return unsigned int instead of int.
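A sketch of that fix, using the question's own names; note that the function-pointer parameter has to change type along with hash():

/* hash() now returns unsigned int, so the remainder is always in [0, cap), assuming cap > 0 */
int compute_index(const char* key, unsigned int (*hash)(const char*), int cap)
{
    return (int)(hash(key) % (unsigned int)cap);
}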
I agree with everything @hobbs said, except for the following:
I believe the easiest thing you can do is either:
change the output of hash() to unsigned int,
or cast the output of hash() to unsigned int within the body of compute_index() before the modulo operation, or
add the modulus back in if the result of the % operation is < 0 (to get the abstract-algebraic sense of "integer mod q"); a sketch of this option follows below.
The behavior of % in C for negative numbers can be confusing to those with a background in abstract algebra, particularly for one who works with the rings of integers mod q.
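A sketch of that last option, keeping hash() signed and normalising the remainder afterwards (assuming cap > 0):

int compute_index(const char* key, int (*hash)(const char*), int cap)
{
    int i = hash(key) % cap;   /* may be negative when hash(key) is negative */
    if (i < 0)
        i += cap;              /* shift into the range [0, cap) */
    return i;
}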

C safely taking absolute value of integer

Consider following program (C99):
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>
int main(void)
{
printf("Enter int in range %jd .. %jd:\n > ", INTMAX_MIN, INTMAX_MAX);
intmax_t i;
if (scanf("%jd", &i) == 1)
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
}
Now as I understand it, this contains easily triggerable undefined behaviour, like this:
Enter int in range -9223372036854775808 .. 9223372036854775807:
> -9223372036854775808
Result: |-9223372036854775808| = -9223372036854775808
Questions:
Is this really undefined behaviour, as in "the code is allowed to take any code path that strikes the compiler's fancy", when the user enters the bad number? Or is it some other flavor of not-completely-defined?
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
(There are a few related questions, but I didn't find one which answers question 2 above, so if you suggest duplicate, please make sure it answers that.)
If the result of imaxabs cannot be represented, which can happen when two's complement is used, then the behavior is undefined.
7.8.2.1 The imaxabs function
The imaxabs function computes the absolute value of an integer j. If the result cannot
be represented, the behavior is undefined. 221)
221) The absolute value of the most negative number cannot be represented in two’s complement.
The check that makes no assumptions and is always defined is:
intmax_t i = ... ;
if( i < -INTMAX_MAX )
{
//handle error
}
(This if statement cannot be taken when one's complement or sign-magnitude representation is used, so the compiler might give an unreachable-code warning. The code itself is still defined and valid.)
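Put together, a small wrapper using that check might look like the sketch below; what to do in the error branch is the caller's choice:

#include <inttypes.h>

/* Stores |i| in *out and returns 0 on success; returns -1 if |i| cannot be represented. */
int safe_imaxabs(intmax_t i, intmax_t *out)
{
    if (i < -INTMAX_MAX)
        return -1;             /* |i| would exceed INTMAX_MAX */
    *out = imaxabs(i);
    return 0;
}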
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
One method is to use unsigned integers. The overflow behaviour of unsigned integers is well-defined as is the behaviour when converting from a signed to an unsigned integer.
So I think the following should be safe (it turns out it's horribly broken on some really obscure systems; see later in the post for an improved version):
uintmax_t j = i;
if (j > (uintmax_t)INTMAX_MAX) {
j = -j;
}
printf("Result: |%jd| = %ju\n", i, j);
So how does this work?
uintmax_t j = i;
This converts the signed integer into an unsigned one. If it's positive, the value stays the same; if it's negative, the value increases by 2^n (where n is the number of bits). This converts it to a large number (larger than INTMAX_MAX).
if (j > (uintmax_t)INTMAX_MAX) {
If the original number was positive (and hence less than or equal to INTMAX_MAX) this does nothing. If the original number was negative the inside of the if block is run.
j = -j;
The number is negated. The result of a negation is clearly negative and so cannot be represented as an unsigned integer, so it is increased by 2^n.
So algebraically the result for negative i looks like
j = -(i + 2^n) + 2^n = -i
Clever, but this solution makes assumptions. It fails if INTMAX_MAX == UINTMAX_MAX, which is allowed by the C Standard.
Hmm, let's look at this (I'm reading https://busybox.net/~landley/c99-draft.html, which is apparently the last C99 draft prior to standardisation; if anything changed in the final standard please do tell me).
When typedef names differing only in the absence or presence of the initial u are defined, they shall denote corresponding signed and unsigned types as described in 6.2.5; an implementation shall not provide a type without also providing its corresponding type.
In 6.2.5 I see
For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.
In 6.2.6.2 I see
#1
For unsigned integer types other than unsigned char, the bits of the object representation shall be divided into two groups: value bits and padding bits (there need not be any of the latter). If there are N value bits, each bit shall represent a different power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of representing values from 0 to 2^N − 1 using a pure binary representation; this shall be known as the value representation. The values of any padding bits are unspecified.39)
#2
For signed integer types, the bits of the object representation shall be divided into three groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M<=N). If the sign bit is zero, it shall not affect the resulting value.
So yes, it seems you are right: while the signed and unsigned types have to be the same size, it does seem to be valid for the unsigned type to have one more padding bit than the signed type.
OK, based on the analysis above revealing a flaw in my first attempt, I've written a more paranoid variant. This has two changes from my first version.
I use i < 0 rather than j > (uintmax_t)INTMAX_MAX to check for negative numbers. This means that the algorithm produces correct results for numbers greater than or equal to -INTMAX_MAX even when INTMAX_MAX == UINTMAX_MAX.
I add handling for the error case where INTMAX_MAX == UINTMAX_MAX, INTMAX_MIN == -INTMAX_MAX - 1 and i == INTMAX_MIN. This will result in j == 0 inside the if, which we can easily test for.
It can be seen from the requirements in the C standard that INTMAX_MIN cannot be smaller than -INTMAX_MAX - 1, since there is only one sign bit and the number of value bits must be the same as or lower than in the corresponding unsigned type. There are simply no bit patterns left to represent smaller numbers.
uintmax_t j = i;
if (i < 0) {
j = -j;
if (j == 0) {
printf("your platform sucks\n");
exit(1);
}
}
printf("Result: |%jd| = %ju\n", i, j);
@plugwash I think 2501 is correct. For example, the value -UINTMAX_MAX becomes 1: (-UINTMAX_MAX + (UINTMAX_MAX + 1)), and is not caught by your if. – hyde
Umm,
assuming INTMAX_MAX == UINTMAX_MAX and i = -INTMAX_MAX
uintmax_t j = i;
after this command j = -INTMAX_MAX + (UINTMAX_MAX + 1) = 1
if (i < 0) {
i is less than zero so we run the commands inside the if
j = -j;
after this command j = -1 + (UINTMAX_MAX + 1) = UINTMAX_MAX
which is the correct answer, so no need to trap it in an error case.
On two's complement systems, taking the absolute value of the most negative value is indeed undefined behavior, as the absolute value would be out of range. And it's nothing the compiler can help you with, as the UB happens at run time.
The only way to protect against that is to compare the input against the most negative value for the type (INTMAX_MIN in the code you show).
So calculating the absolute value of an integer invokes undefined behaviour in one single case. Actually, while the undefined behaviour can be avoided, it is impossible to give the correct result in one case.
Now consider multiplication of an integer by 3: Here we have a much more serious problem. This operation invokes undefined behaviour in 2/3rds of all cases! And for two thirds of all int values x, finding an int with the value 3x is just impossible. That's a much more serious problem than the absolute value problem.
You may want to use some bit hacks:
int v; // we want to find the absolute value of v
unsigned int r; // the result goes here
int const mask = v >> (sizeof(int) * CHAR_BIT - 1); // all ones if v is negative (assuming arithmetic right shift), all zeros otherwise
r = (v + mask) ^ mask;
This works well when INT_MIN < v <= INT_MAX. When v == INT_MIN, the intermediate v + mask overflows, which is formally undefined behavior; on typical two's complement machines the result simply keeps the INT_MIN bit pattern.
You can also use bitwise operations to handle this on ones' complement and sign-magnitude systems.
Reference: https://graphics.stanford.edu/~seander/bithacks.html#IntegerAbs
According to this: http://linux.die.net/man/3/imaxabs
Notes
Trying to take the absolute value of the most negative integer is not defined.
To handle the full range you could add something like this to your code
if (i != INTMAX_MIN) {
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
} else { /* Code around undefined abs(INTMAX_MIN) */
printf("Result: |%jd| = %jd%jd\n", i, -(i/10), -(i%10));
}
Edit: As abs(INTMAX_MIN) cannot be represented on a 2's complement machine, two values within the representable range are concatenated on output as a string.
Tested with gcc, though printf required %lld as %jd was not a supported format.
Is this really undefined behaviour, as in "the code is allowed to take any code path that strikes the compiler's fancy", when the user enters the bad number? Or is it some other flavor of not-completely-defined?
The behaviour of the program is only undefined when the bad number is successfully input and passed to imaxabs(), which on a typical 2's complement system returns a negative result, as you observed.
That is the undefined behaviour in this case; the implementation would also be allowed to terminate the program with an overflow error if the ALU set status flags.
The reason for "undefined behaviour" in C is so compiler writers don't have to guard against overflow, so programs can run more efficiently. Whilst it is within the C standard for every C program using abs() to try to kill your first-born just because you call it with too negative a value, writing such code into the object file would simply be perverse.
The real problem with these undefined behaviours is that an optimising compiler can reason away naive checks, so code like:
intmax_t r = (i < 0) ? -i : i;
if (r < 0) { // This code may be pointless
// Do overflow recovery
doRecoveryProcessing();
} else {
printf("%jd", r);
}
As a compiler optimiser can reason that negative values are negated, it could in principle determine that (r < 0) is always false, so the attempt to trap the problem fails.
How would a pedantic programmer go about guarding against this, without making any assumptions not guaranteed by standard?
By far the best way, is simply to ensure that the program works on a valid range, so in this case validating the input suffices (disallow INTMAX_MIN).
Programs printing tables of abs() ought to avoid INT*_MIN and so on.
if (i != INTMAX_MIN) {
printf("Result: |%jd| = %jd\n", i, imaxabs(i));
} else { /* Code around undefined abs(INTMAX_MIN) */
printf("Result: |%jd| = %jd%jd\n", i, -(i/10), -(i%10));
}
Appears to write out abs(INTMAX_MIN) by fakery, allowing the program to live up to its promise to the user.

Short int values above 32767

When trying to store short integer values above 32,767 in C, just to see what happens, I notice the result that gets printed to the screen is the number I am trying to store minus 65,536. E.g. if I try to store 65,536 in a short variable, the number which is printed to the screen is 0. If I try to store 32,768 I get -32,768 printed to the screen. And if I try to store 65,546 and print it to the screen I get 10. I think you get the picture. Why am I seeing these results?
Integer values are commonly stored using two's complement. In two's complement, the range of possible values is -2^(n-1) to 2^(n-1) - 1, where n is the number of bits used for storage. Because of the way the storage works, when you go above 2^(n-1) - 1, you wrap around to -2^(n-1).
Shorts typically use 16 bits (15 bits for numeric storage, with the final bit being the sign).
EDIT: Keep in mind that this behavior is NOT guaranteed to occur. Programming languages may act completely differently. Technically, going above or below the max/min value is undefined behavior and it should be treated as such. (Thanks to Eric Postpischil for keeping me on my toes)
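With those caveats in mind, here is a tiny demonstration of the wrap-around you will typically observe on a two's complement machine with a 16-bit short; the exact results of these out-of-range conversions are not guaranteed by the standard:

#include <stdio.h>

int main(void)
{
    short s;
    s = (short)65536;   /* typically becomes 0      (65536 mod 65536) */
    printf("%d\n", s);
    s = (short)32768;   /* typically becomes -32768                   */
    printf("%d\n", s);
    s = (short)65546;   /* typically becomes 10     (65546 mod 65536) */
    printf("%d\n", s);
    return 0;
}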
When a value is converted to a signed integer type but cannot be represented in that type, overflow occurs, and the behavior is undefined. It is common to see results as if a two’s complement encoding is used and as if the low bits are stored (or, equivalently, the value is wrapped modulo an appropriate power of two). However, you cannot rely on this behavior. The C standard says that, when signed integer overflow occurs, the behavior is undefined. So a compiler may act in surprising ways.
Consider this code, compiled for a target where short int is 16 bits:
#include <stdio.h>
void foo(int a, int b)
{
if (a)
{
short int x = b;
printf("x is %hd.\n", x);
}
else
{
printf("x is not assigned.\n");
}
}
void bar(int a, int b)
{
if (b == 65536)
foo(a, b);
}
Observe that foo is a perfectly fine function on its own, provided b is within range of a short int. And bar is a perfectly fine function, as long as it is called only with a equal to zero or b not equal to 65536.
While the compiler is in-lining foo in bar, it may deduce from the fact that b must be 65536 at this point, that there would be an overflow in short int x = b;. This implies either that this path is never taken (i.e., a must be zero) or that any behavior is permitted (because the behavior upon overflow is undefined by the C standard). In either case, the compiler is free to omit this code path. Thus, for optimization, the compiler could omit this path and generate code only for printf("x is not assigned.\n");. If you then executed code containing bar(1, 65536), the output would be “x is not assigned.”!
Compilers do make optimizations of this sort: The observation that one code path has undefined behavior implies the compiler may conclude that code path is never used.
To an observer, it looks like the effect of assigning a too-large value to a short int is to cause completely different code to be executed.

Programmatically determining max value of a signed integer type

This related question is about determining the max value of a signed type at compile-time:
C question: off_t (and other signed integer types) minimum and maximum values
However, I've since realized that determining the max value of a signed type (e.g. time_t or off_t) at runtime seems to be a very difficult task.
The closest thing to a solution I can think of is:
uintmax_t x = (uintmax_t)1<<CHAR_BIT*sizeof(type)-2;
while ((type)x<=0) x>>=1;
This avoids any looping as long as type has no padding bits, but if type does have padding bits, the cast invokes implementation-defined behavior, which could be a signal or a nonsensical implementation-defined conversion (e.g. stripping the sign bit).
I'm beginning to think the problem is unsolvable, which is a bit unsettling and would be a defect in the C standard, in my opinion. Any ideas for proving me wrong?
Let's first see how C defines "integer types". Taken from ISO/IEC 9899, §6.2.6.2:
6.2.6.2 Integer types
1 For unsigned integer types other than unsigned char, the bits of the object
representation shall be divided into two groups: value bits and padding bits (there need
not be any of the latter). If there are N value bits, each bit shall represent a different
power of 2 between 1 and 2^(N−1), so that objects of that type shall be capable of
representing values from 0 to 2^N − 1 using a pure binary representation; this shall be
known as the value representation. The values of any padding bits are unspecified.44)
2 For signed integer types, the bits of the object representation shall be divided into three
groups: value bits, padding bits, and the sign bit. There need not be any padding bits;
there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as the same bit in the object representation of the corresponding unsigned type (if there are M value bits in the signed type and N in the unsigned type, then M ≤ N). If the sign bit
is zero, it shall not affect the resulting value. If the sign bit is one, the value shall be
modified in one of the following ways:
— the corresponding value with sign bit 0 is negated (sign and magnitude);
— the sign bit has the value −(2^N) (two’s complement);
— the sign bit has the value −(2^N − 1) (ones’ complement).
Which of these applies is implementation-defined, as is whether the value with sign bit 1
and all value bits zero (for the first two), or with sign bit and all value bits 1 (for ones’
complement), is a trap representation or a normal value. In the case of sign and
magnitude and ones’ complement, if this representation is a normal value it is called a
negative zero.
Hence we can conclude the following:
~(int)0 may be a trap representation, i.e. setting all bits to 1 is a bad idea
There might be padding bits in an int that have no effect on its value
The order of the bits actually representing powers of two is undefined; so is the position of the sign bit, if it exists.
The good news is that:
there's only a single sign bit
there's only a single bit that represents the value 1
With that in mind, there's a simple technique to find the maximum value of an int. Find the sign bit, then set it to 0 and set all other bits to 1.
How do we find the sign bit? Consider int n = 1;, which is strictly positive and guaranteed to have only the ones-bit (and maybe some padding bits) set to 1. Then, for each of the other bits in turn, if it is currently 0, set it to 1 and see if the resulting value is negative. If it's not, revert it back to 0. Otherwise, we've found the sign bit.
Now that we know the position of the sign bit, we take our int n, set the sign bit to zero and all other bits to 1, and tadaa, we have the maximum possible int value.
Determining the int minimum is slightly more complicated and left as an exercise to the reader.
Note that the C standard humorously doesn't require two different ints to behave the same. If I'm not mistaken, there may be two distinct int objects that have e.g. their respective sign bits at different positions.
EDIT: while discussing this approach with R.. (see comments below), I have become convinced that it is flawed in several ways and, more generally, that there is no solution at all. I can't see a way to fix this posting (except deleting it), so I let it unchanged for the comments below to make sense.
Mathematically, if you have a finite set X of size n (n a positive integer) and a transitive comparison operator (x, y, z in X; x <= y and y <= z implies x <= z), it's a very simple problem to find the maximum value. (Also, it exists.)
The easiest way to solve this problem, but the most computationally expensive, is to generate an array with all possible values, then find the max.
Part 1. For any type with a finite member set, there's a finite number of bits (m) which can be used to uniquely represent any given member of that type. We just make an array which contains all possible bit patterns, where any given bit pattern is represented by a given value in the specific type.
Part 2. Next we'd need to convert each binary number into the given type. This task is where my programming inexperience makes me unable to speak to how this may be accomplished. I've read some about casting, maybe that would do the trick? Or some other conversion method?
Part 3. Assuming that the previous step was finished, we now have a finite set of values in the desired type and a comparison operator on that set. Find the max.
But what if...
...we don't know the exact number of members of the given type? Then we over-estimate. If we can't produce a reasonable over-estimate, then there should be physical bounds on the number. Once we have an over-estimate, we check all of those possible bit patterns to confirm which bit patterns represent members of the type. After discarding those which aren't used, we now have a set of all possible bit patterns which represent some member of the given type. This most recently generated set is what we'd use now in part 1.
...we don't have a comparison operator in that type? Then the specific problem is not only impossible, but logically irrelevant. That is, if our program has no way to produce a meaningful result when comparing two values of our given type, then our given type has no ordering in the context of our program. Without an ordering, there's no such thing as a maximum value.
...we can't convert a given binary number into a given type? Then the method breaks. But, similar to the previous exception, if you can't convert types, then our tool-set seems logically very limited.
Technically, you may not need to convert between binary representations and a given type. The entire point of the conversion is to ensure the generated list is exhaustive.
...we want to optimize the problem? Then we need some information about how the given type maps from binary numbers. For example, unsigned int, signed int (2's complement), and signed int (1's complement) each map from bits to numbers in a very documented and simple way. Thus, if we wanted the highest possible value for an unsigned int and we knew we were working with m bits, we could simply fill each bit with a 1, convert the bit pattern to decimal, then output the number.
This relates to optimization because the most expensive part of this solution is listing all possible answers. If we have some previous knowledge of how the given type maps from bit patterns, we can instead generate only the potential candidates rather than every possibility.
Good luck.
Update: Thankfully, my previous answer below was wrong, and there seems to be a solution to this question.
intmax_t x;
for (x=INTMAX_MAX; (T)x!=x; x/=2);
This program either yields x containing the max possible value of type T, or generates an implementation-defined signal.
Working around the signal case may be possible but difficult and computationally infeasible (as in having to install a signal handler for every possible signal number), so I don't think this answer is fully satisfactory. POSIX signal semantics may give enough additional properties to make it feasible; I'm not sure.
The interesting part, especially if you're comfortable assuming you're not on an implementation that will generate a signal, is what happens when (T)x results in an implementation-defined conversion. The trick of the above loop is that it does not rely at all on the implementation's choice of value for the conversion. All it relies upon is that (T)x==x is possible if and only if x fits in type T, since otherwise the value of x is outside the range of possible values of any expression of type T.
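As a concrete (hedged) illustration, here is the loop applied to time_t, one of the types mentioned in the question; it assumes time_t is a signed integer type on the system at hand and that the conversion does not raise a signal:

#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    intmax_t x;
    /* halve until the value survives a round trip through time_t */
    for (x = INTMAX_MAX; (time_t)x != x; x /= 2)
        ;
    printf("max time_t on this system: %jd\n", x);
    return 0;
}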
Old idea, wrong because it does not account for the above (T)x==x property:
I think I have a sketch of a proof that what I'm looking for is impossible:
Let X be a conforming C implementation and assume INT_MAX>32767.
Define a new C implementation Y identical to X, but where the values of INT_MAX and INT_MIN are each divided by 2.
Prove that Y is a conforming C implementation.
The essential idea of this outline is that, due to the fact that everything related to out-of-bound values with signed types is implementation-defined or undefined behavior, an arbitrary number of the high value bits of a signed integer type can be considered as padding bits without actually making any changes to the implementation except the limit macros in limits.h.
Any thoughts on if this sounds correct or bogus? If it's correct, I'd be happy to award the bounty to whoever can do the best job of making it more rigorous.
I might just be writing stupid things here, since I'm relatively new to C, but wouldn't this work for getting the max of a signed?
unsigned x = ~0;
signed y=x/2;
This might be a dumb way to do it, but as far as I've seen unsigned max values are signed max*2+1. Won't it work backwards?
Sorry for the time wasted if this proves to be completely inadequate and incorrect.
Shouldn't something like the following pseudo code do the job?
signed_type_of_max_size test_values =
[(1<<7)-1, (1<<15)-1, (1<<31)-1, (1<<63)-1];
for test_value in test_values:
signed_foo_t a = test_value;
signed_foo_t b = a + 1;
if (b < a):
print "Max positive value of signed_foo_t is ", a
Or much simpler, why shouldn't the following work?
signed_foo_t signed_foo_max = (1<<(sizeof(signed_foo_t)*8-1))-1;
For my own code, I would definitely go for a build-time check defining a preprocessor macro, though.
Assuming modifying padding bits won't create trap representations, you could use an unsigned char * to loop over and flip individual bits until you hit the sign bit. If your initial value was ~(type)0, this should get you the maximum:
type value = ~(type)0;
assert(value < 0);
unsigned char *bytes = (void *)&value;
size_t i = 0;
for(; i < sizeof value * CHAR_BIT; ++i)
{
bytes[i / CHAR_BIT] ^= 1 << (i % CHAR_BIT);
if(value > 0) break;
bytes[i / CHAR_BIT] ^= 1 << (i % CHAR_BIT);
}
assert(value != ~(type)0);
// value == TYPE_MAX
Since you allow this to be at runtime you could write a function that de facto does an iterative left shift of (type)3. If you stop once the value is fallen below 0, this will never give you a trap representation. And the number of iterations - 1 will tell you the position of the sign bit.
Remains the problem of the left shift. Since just using the operator << would lead to an overflow, this would be undefined behavior, so we can't use the operator directly.
The simplest solution to that is not to use a shifted 3 as above but to iterate over the bit positions and to add always the least significant bit also.
type x;
unsigned char *B = (unsigned char *)&x;
size_t signbit = 7;
for(;;++signbit) {
size_t bpos = signbit / CHAR_BIT;
size_t apos = signbit % CHAR_BIT;
x = 1;
B[bpos] |= (1 << apos);
if (x < 0) break;
}
(The start value 7 is the minimum width that a signed type must have, I think).
Why would this present a problem? The size of the type is fixed at compile time, so the problem of determining the runtime size of the type reduces to the problem of determining the compile-time size of the type. For any given target platform, a declaration such as off_t offset will be compiled to use some fixed size, and that size will then always be used when running the resulting executable on the target platform.
ETA: You can get the size of the type type via sizeof(type). You could then compare against common integer sizes and use the corresponding MAX/MIN preprocessor define. You might find it simpler to just use:
uintmax_t bitWidth = sizeof(type) * CHAR_BIT;
intmax_t sizeMax = (((intmax_t)1 << (bitWidth - 2)) - 1) * 2 + 1; /* 2^(bitWidth-1) - 1; beware that ^ in C is XOR, not exponentiation */
intmax_t sizeMin = -sizeMax; /* or -sizeMax - 1 on two's complement */
Just because a value is representable by the underlying "physical" type does not mean that value is valid for a value of the "logical" type. I imagine the reason max and min constants are not provided is that these are "semi-opaque" types whose use is restricted to particular domains. Where less opacity is desirable, you will often find ways of getting the information you want, such as the constants you can use to figure out how big an off_t is that are mentioned by the SUSv2 in its description of <unistd.h>.
For an opaque signed type for which you don't have a name of the associated unsigned type, this is unsolvable in a portable way, because any attempt to detect whether there is a padding bit will yield implementation-defined behavior or undefined behavior. The best thing you can deduce by testing (without additional knowledge) is that there are at least K padding bits.
BTW, this doesn't really answer the question, but can still be useful in practice: If one assumes that the signed integer type T has no padding bits, one can use the following macro:
#define MAXVAL(T) (((((T) 1 << (sizeof(T) * CHAR_BIT - 2)) - 1) * 2) + 1)
This is probably the best that one can do. It is simple and does not need to assume anything else about the C implementation.
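For example, under the no-padding assumption the macro reproduces the limit macros; the snippet below repeats the definition so it is self-contained and uses C11 _Static_assert as a quick sanity check:

#include <limits.h>

#define MAXVAL(T) (((((T) 1 << (sizeof(T) * CHAR_BIT - 2)) - 1) * 2) + 1)

/* valid only when the types have no padding bits */
_Static_assert(MAXVAL(int) == INT_MAX, "int");
_Static_assert(MAXVAL(long long) == LLONG_MAX, "long long");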
Maybe I'm not getting the question right, but since C gives you 3 possible representations for signed integers (http://port70.net/~nsz/c/c11/n1570.html#6.2.6.2):
sign and magnitude
ones' complement
two's complement
and the max in any of these should be 2^(N-1) - 1, you should be able to get it by taking the max of the corresponding unsigned type, shifting it right by one (>>1) and casting the result to the proper type (into which it should fit).
I don't know how to get the corresponding minimum if trap representations get in the way, but if they don't, the min should be either (Tp)((Tp)-1|(Tp)TP_MAX(Tp)) (all bits set) or (Tp)~TP_MAX(Tp), and which one it is should be simple to find out.
Example:
#include <limits.h>
#define UNSIGNED(Tp,Val) \
_Generic((Tp)0, \
_Bool: (_Bool)(Val), \
char: (unsigned char)(Val), \
signed char: (unsigned char)(Val), \
unsigned char: (unsigned char)(Val), \
short: (unsigned short)(Val), \
unsigned short: (unsigned short)(Val), \
int: (unsigned int)(Val), \
unsigned int: (unsigned int)(Val), \
long: (unsigned long)(Val), \
unsigned long: (unsigned long)(Val), \
long long: (unsigned long long)(Val), \
unsigned long long: (unsigned long long)(Val) \
)
#define MIN2__(X,Y) ((X)<(Y)?(X):(Y))
#define UMAX__(Tp) ((Tp)(~((Tp)0)))
#define SMAX__(Tp) ((Tp)( UNSIGNED(Tp,~UNSIGNED(Tp,0))>>1 ))
#define SMIN__(Tp) ((Tp)MIN2__( \
(Tp)(((Tp)-1)|SMAX__(Tp)), \
(Tp)(~SMAX__(Tp)) ))
#define TP_MAX(Tp) ((((Tp)-1)>0)?UMAX__(Tp):SMAX__(Tp))
#define TP_MIN(Tp) ((((Tp)-1)>0)?((Tp)0): SMIN__(Tp))
int main()
{
#define STC_ASSERT(X) _Static_assert(X,"")
STC_ASSERT(TP_MAX(int)==INT_MAX);
STC_ASSERT(TP_MAX(unsigned int)==UINT_MAX);
STC_ASSERT(TP_MAX(long)==LONG_MAX);
STC_ASSERT(TP_MAX(unsigned long)==ULONG_MAX);
STC_ASSERT(TP_MAX(long long)==LLONG_MAX);
STC_ASSERT(TP_MAX(unsigned long long)==ULLONG_MAX);
/*STC_ASSERT(TP_MIN(unsigned short)==USHRT_MIN);*/
STC_ASSERT(TP_MIN(int)==INT_MIN);
/*STC_ASSERT(TP_MIN(unsigned int)==UINT_MIN);*/
STC_ASSERT(TP_MIN(long)==LONG_MIN);
/*STC_ASSERT(TP_MIN(unsigned long)==ULONG_MIN);*/
STC_ASSERT(TP_MIN(long long)==LLONG_MIN);
/*STC_ASSERT(TP_MIN(unsigned long long)==ULLONG_MIN);*/
STC_ASSERT(TP_MAX(char)==CHAR_MAX);
STC_ASSERT(TP_MAX(signed char)==SCHAR_MAX);
STC_ASSERT(TP_MAX(short)==SHRT_MAX);
STC_ASSERT(TP_MAX(unsigned short)==USHRT_MAX);
STC_ASSERT(TP_MIN(char)==CHAR_MIN);
STC_ASSERT(TP_MIN(signed char)==SCHAR_MIN);
STC_ASSERT(TP_MIN(short)==SHRT_MIN);
}
For all real machines (two's complement and no padding):
type tmp = ((type)1)<< (CHAR_BIT*sizeof(type)-2);
max = tmp + (tmp-1);
With C++, you can calculate it at compile time.
template <class T>
struct signed_max
{
static const T max_tmp = T(T(1) << sizeof(T)*CHAR_BIT-2u);
static const T value = max_tmp + T(max_tmp -1u);
};

Should you always use 'int' for numbers in C, even if they are non-negative?

I always use unsigned int for values that should never be negative. But today I
noticed this situation in my code:
void CreateRequestHeader( unsigned bitsAvailable, unsigned mandatoryDataSize,
unsigned optionalDataSize )
{
if ( bitsAvailable - mandatoryDataSize >= optionalDataSize ) {
// Optional data fits, so add it to the header.
}
// BUG! The above includes the optional part even if
// mandatoryDataSize > bitsAvailable.
}
Should I start using int instead of unsigned int for numbers, even if they
can't be negative?
One thing that hasn't been mentioned is that interchanging signed/unsigned numbers can lead to security bugs. This is a big issue, since many of the functions in the standard C-library take/return unsigned numbers (fread, memcpy, malloc etc. all take size_t parameters)
For instance, take the following innocuous example (from real code):
//Copy a user-defined structure into a buffer and process it
char* processNext(char* data, short length)
{
char buffer[512];
if (length <= 512) {
memcpy(buffer, data, length);
process(buffer);
return data + length;
} else {
return -1;
}
}
Looks harmless, right? The problem is that length is signed, but is converted to unsigned when passed to memcpy. Thus setting length to SHRT_MIN will validate the <= 512 test, but cause memcpy to copy more than 512 bytes to the buffer - this allows an attacker to overwrite the function return address on the stack and (after a bit of work) take over your computer!
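A sketch of the repaired version, using size_t for the length and rejecting anything that does not fit the buffer; processNext and process are the hypothetical names from the example above:

#include <stddef.h>
#include <string.h>

void process(char *buffer);   /* defined elsewhere in the example */

/* length is now size_t, so a caller cannot smuggle in a negative value */
char* processNext(char* data, size_t length)
{
    char buffer[512];
    if (length > sizeof buffer)
        return NULL;          /* also avoids the original's bogus "return -1" from a char* function */
    memcpy(buffer, data, length);
    process(buffer);
    return data + length;
}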
You may naively be saying, "It's so obvious that length needs to be size_t or checked to be >= 0, I could never make that mistake". Except, I guarantee that if you've ever written anything non-trivial, you have. So have the authors of Windows, Linux, BSD, Solaris, Firefox, OpenSSL, Safari, MS Paint, Internet Explorer, Google Picasa, Opera, Flash, Open Office, Subversion, Apache, Python, PHP, Pidgin, Gimp, ... on and on and on ... - and these are all bright people whose job is knowing security.
In short, always use size_t for sizes.
Man, programming is hard.
Should I always ...
The answer to "Should I always ..." is almost certainly 'no', there are a lot of factors that dictate whether you should use a datatype- consistency is important.
But, this is a highly subjective question, it's really easy to mess up unsigneds:
for (unsigned int i = 10; i >= 0; i--);
results in an infinite loop.
This is why some style guides including Google's C++ Style Guide discourage unsigned data types.
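One way to write that count-down safely is sketched below, using the post-decrement in the loop condition so the test happens before i wraps:

#include <stdio.h>

int main(void)
{
    /* prints 10, 9, ..., 0 and then stops, even though i is unsigned */
    for (unsigned int i = 11; i-- > 0; )
        printf("%u\n", i);
    return 0;
}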
In my personal opinion, I haven't run into many bugs caused by these problems with unsigned data types — I'd say use assertions to check your code and use them judiciously (and less when you're performing arithmetic).
Some cases where you should use unsigned integer types are:
You need to treat a datum as a pure binary representation.
You need the semantics of modulo arithmetic you get with unsigned numbers.
You have to interface with code that uses unsigned types (e.g. standard library routines that accept/return size_t values).
But for general arithmetic, the thing is, when you say that something "can't be negative," that does not necessarily mean you should use an unsigned type. Because you can put a negative value in an unsigned, it's just that it will become a really large value when you go to get it out. So, if you mean that negative values are forbidden, such as for a basic square root function, then you are stating a precondition of the function, and you should assert. And you can't assert that what cannot be, is; you need a way to hold out-of-band values so you can test for them (this is the same sort of logic behind getchar() returning an int and not char.)
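A sketch of what "state the precondition and assert it" looks like for the square-root example; the function name is made up for illustration:

#include <assert.h>
#include <math.h>

/* precondition: x must be non-negative; a signed parameter lets a bad
   (negative) argument actually be seen and asserted on */
double my_sqrt(int x)
{
    assert(x >= 0);
    return sqrt((double)x);
}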
Additionally, the choice of signed-vs.-unsigned can have practical repercussions on performance, as well. Take a look at the (contrived) code below:
#include <stdbool.h>
bool foo_i(int a) {
return (a + 69) > a;
}
bool foo_u(unsigned int a)
{
return (a + 69u) > a;
}
Both foo's are the same except for the type of their parameter. But, when compiled with c99 -fomit-frame-pointer -O2 -S, you get:
.file "try.c"
.text
.p2align 4,,15
.globl foo_i
.type foo_i, #function
foo_i:
movl $1, %eax
ret
.size foo_i, .-foo_i
.p2align 4,,15
.globl foo_u
.type foo_u, #function
foo_u:
movl 4(%esp), %eax
leal 69(%eax), %edx
cmpl %eax, %edx
seta %al
ret
.size foo_u, .-foo_u
.ident "GCC: (Debian 4.4.4-7) 4.4.4"
.section .note.GNU-stack,"",#progbits
You can see that foo_i() is more efficient than foo_u(). This is because unsigned arithmetic overflow is defined by the standard to "wrap around," so (a + 69u) may very well be smaller than a if a is very large, and thus there must be code for this case. On the other hand, signed arithmetic overflow is undefined, so GCC will go ahead and assume signed arithmetic doesn't overflow, and so (a + 69) can't ever be less than a. Choosing unsigned types indiscriminately can therefore unnecessarily impact performance.
The answer is Yes. The "unsigned" int type of C and C++ is not an "always positive integer", no matter what the name of the type looks like. The behavior of C/C++ unsigned ints has no sense if you try to read the type as "non-negative"... for example:
The difference of two unsigned is an unsigned number (makes no sense if you read it as "The difference between two non-negative numbers is non-negative")
The addition of an int and an unsigned int is unsigned
There is an implicit conversion from int to unsigned int (if you read unsigned as "non-negative" it's the opposite conversion that would make sense)
If you declare a function accepting an unsigned parameter when someone passes a negative int you simply get that implicitly converted to a huge positive value; in other words using an unsigned parameter type doesn't help you finding errors neither at compile time nor at runtime.
Indeed unsigned numbers are very useful for certain cases because they are elements of the ring "integers-modulo-N" with N being a power of two. Unsigned ints are useful when you want to use that modulo-n arithmetic, or as bitmasks; they are NOT useful as quantities.
Unfortunately in C and C++ unsigned was also used to represent non-negative quantities, to be able to use all 16 bits when integers were that small... at that time being able to use 32k or 64k was considered a big difference. I'd classify it basically as a historical accident... you shouldn't try to read a logic into it because there was no logic.
By the way in my opinion that was a mistake... if 32k are not enough then quite soon 64k won't be enough either; abusing the modulo integer just because of one extra bit in my opinion was a cost too high to pay. Of course it would have been reasonable to do if a proper non-negative type was present or defined... but the unsigned semantic is just wrong for using it as non-negative.
Sometimes you may find people who say that unsigned is good because it "documents" that you only want non-negative values... however that documentation is of value only to people who don't actually know how unsigned works in C or C++. For me, seeing an unsigned type used for non-negative values simply means that whoever wrote the code didn't understand the language on that point.
If you really understand and want the "wrapping" behavior of unsigned ints then they're the right choice (for example, I almost always use "unsigned char" when I'm handling bytes); if you're not going to use the wrapping behavior (and that behavior is just going to be a problem for you, as in the case of the difference you showed) then this is a clear indicator that the unsigned type is a poor choice and you should stick with plain ints.
Does this mean that the C++ std::vector<>::size() return type is a bad choice? Yes... it's a mistake. But if you say so, be prepared to be called bad names by those who don't understand that the "unsigned" name is just a name... what counts is the behavior, and that is a "modulo-n" behavior (and no one would consider a "modulo-n" type for the size of a container a sensible choice).
Bjarne Stroustrup, creator of C++, warns about using unsigned types in his book The C++ programming language:
The unsigned integer types are ideal
for uses that treat storage as a bit
array. Using an unsigned instead of an
int to gain one more bit to represent
positive integers is almost never a
good idea. Attempts to ensure that
some values are positive by declaring
variables unsigned will typically be
defeated by the implicit conversion
rules.
I seem to be in disagreement with most people here, but I find unsigned types quite useful, but not in their raw historic form.
If you consequently stick to the semantic that a type represents for you, then there should be no problem: use size_t (unsigned) for array indices, data offsets etc. off_t (signed) for file offsets. Use ptrdiff_t (signed) for differences of pointers. Use uint8_t for small unsigned integers and int8_t for signed ones. And you avoid at least 80% of portability problems.
And don't use int, long, unsigned, or char if you don't have to. They belong in the history books. (Sometimes you must: error returns, bit fields, etc.)
And to come back to your example:
bitsAvailable - mandatoryDataSize >= optionalDataSize
can be easily rewritten as
bitsAvailable >= optionalDataSize + mandatoryDataSize
which doesn't avoid the problem of a potential overflow (assert is your friend) but gets you a bit nearer to the idea of what you want to test, I think.
if (bitsAvailable >= optionalDataSize + mandatoryDataSize) {
// Optional data fits, so add it to the header.
}
Bug-free, so long as mandatoryDataSize + optionalDataSize can't overflow the unsigned integer type -- the naming of these variables leads me to believe this is likely to be the case.
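If that overflow cannot be ruled out, a hedged variant keeps the values unsigned but guards the subtraction instead of the addition:

#include <stdbool.h>

bool optionalFits(unsigned bitsAvailable, unsigned mandatoryDataSize,
                  unsigned optionalDataSize)
{
    /* the first test guards the subtraction so it cannot wrap below zero */
    return bitsAvailable >= mandatoryDataSize &&
           bitsAvailable - mandatoryDataSize >= optionalDataSize;
}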
You can't fully avoid unsigned types in portable code, because many typedefs in the standard library are unsigned (most notably size_t), and many functions return those (e.g. std::vector<>::size()).
That said, I generally prefer to stick to signed types wherever possible for the reasons you've outlined. It's not just the case you bring up - in case of mixed signed/unsigned arithmetic, the signed argument is quietly promoted to unsigned.
From the comments on one of Eric Lippert's blog posts (see here):
Jeffrey L. Whitledge
I once developed a system in which
negative values made no sense as a
parameter, so rather than validating
that the parameter values were
non-negative, I thought it would be a
great idea to just use uint instead. I
quickly discovered that whenever I
used those values for anything (like
calling BCL methods), they had be
converted to signed integers. This
meant that I had to validate that the
values didn't exceed the signed
integer range on the top end, so I
gained nothing. Also, every time the
code was called, the ints that were
being used (often received from BCL
functions) had to be converted to
uints. It didn't take long before I
changed all those uints back to ints
and took all that unnecessary casting
out. I still have to validate that the
numbers are not negative, but the code
is much cleaner!
Eric Lippert
Couldn't have said it better myself.
You almost never need the range of a
uint, and they are not CLS-compliant.
The standard way to represent a small
integer is with "int", even if there
are values in there that are out of
range. A good rule of thumb: only use
"uint" for situations where you are
interoperating with unmanaged code
that expects uints, or where the
integer in question is clearly used as
a set of bits, not a number. Always
try to avoid it in public interfaces.
Eric
The situation where (bitsAvailable – mandatoryDataSize) produces an 'unexpected' result when the types are unsigned and bitsAvailable < mandatoryDataSize is a reason that sometimes signed types are used even when the data is expected to never be negative.
I think there's no hard and fast rule - I typically 'default' to using unsigned types for data that has no reason to be negative, but then you have to take care to ensure that arithmetic wrapping doesn't expose bugs.
Then again, if you use signed types, you still have to sometimes consider overflow:
INT_MAX + 1
The key is that you have to take care when performing arithmetic for these kinds of bugs.
No, you should use the type that is right for your application. There is no golden rule. Sometimes on small microcontrollers it is, for example, faster and more memory-efficient to use 8- or 16-bit variables wherever possible, as that is often the native datapath size, but that is a very special case. I also recommend using stdint.h wherever possible. If you are using Visual Studio you can find BSD-licensed versions.
If there is a possibility of overflow, then assign the values to the next highest data type during the calculation, ie:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
signed __int64 available = bitsAvailable;
signed __int64 mandatory = mandatoryDataSize;
signed __int64 optional = optionalDataSize;
if ( (mandatory + optional) <= available ) {
// Optional data fits, so add it to the header.
}
}
Otherwise, just check the values individually instead of calculating:
void CreateRequestHeader( unsigned int bitsAvailable, unsigned int mandatoryDataSize, unsigned int optionalDataSize )
{
if ( bitsAvailable < mandatoryDataSize ) {
return;
}
bitsAvailable -= mandatoryDataSize;
if ( bitsAvailable < optionalDataSize ) {
return;
}
bitsAvailable -= optionalDataSize;
// Optional data fits, so add it to the header.
}
You'll need to look at the results of the operations you perform on the variables to check if you can get over/underflows - in your case, the result being potentially negative. In that case you are better off using the signed equivalents.
I don't know if it's possible in C, but in this case I would just cast the X-Y thing to an int.
If your numbers should never be less than zero, but have a chance to be < 0, by all means use signed integers and sprinkle assertions or other runtime checks around. If you're actually working with 32-bit (or 64, or 16, depending on your target architecture) values where the most significant bit means something other than "-", you should only use unsigned variables to hold them. It's easier to detect integer overflows where a number that should always be positive is very negative than when it's zero, so if you don't need that bit, go with the signed ones.
Suppose you need to count from 1 to 50000. You can do that with a two-byte unsigned integer, but not with a two-byte signed integer (if space matters that much).
