Detect integer overflow - c

I am working with a large C library where some array indices are computed using int.
I need a way to trap integer overflows at runtime, in such a way that I can narrow down the problematic line of code. The libc manual states:
FPE_INTOVF_TRAP
Integer overflow (impossible in a C program unless you enable overflow trapping in a hardware-specific fashion).
However, the gcc option -ffpe-trap suggests that those traps only apply to floating-point numbers?
So how do I enable an integer overflow trap? My system is Xeon/Core2, gcc-4.x, Linux 2.6.
I have looked through similar questions, but they all boil down to modifying the code. I need to know which code is problematic in the first place, however.
If Xeons can't trap overflows, which processors can? I have access to non-emt64 machines as well.
In the meantime I have found a tool designed for llvm: http://embed.cs.utah.edu/ioc/
There doesn't seem to be an equivalent for gcc/icc, however?

OK, I may have to answer my own question.
I found that gcc has the -ftrapv option; a quick test confirms that, at least on my system, overflow is trapped. I will post more detailed info as I learn more, since it seems a very useful tool.
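For reference, here is the sort of quick test I mean - a minimal sketch, assuming a typical x86-64 Linux setup; exactly how the trap manifests (usually an abort via libgcc's overflow helpers) is platform-dependent:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    int y = x + 1;          /* signed overflow: UB, but trapped with -ftrapv */
    printf("%d\n", y);      /* not reached if the trap fires */
    return 0;
}

Compiled with gcc -ftrapv and run, this aborts on my system instead of printing a wrapped value; without -ftrapv it typically prints -2147483648, the wrapped result.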

Unsigned integer arithmetic does not overflow, of course.
With signed integer arithmetic, overflow leads to undefined behaviour; anything could happen. And optimizers are getting aggressive about optimizing code that overflows. So your best bet is to avoid the overflow rather than trapping it when it happens. Consider using the CERT 'Secure Integer Library' (the URL referenced there seems to have gone AWOL/404; I'm not sure what's happened yet) or Google's 'Safe Integer Operation' library.
If you must trap overflow, you are going to need to specify which platform you are interested in (OS including version, compiler including version), because the answer will be very platform-specific.
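To illustrate the 'avoid rather than trap' approach, here is a minimal sketch of a pre-condition check (the function name checked_add is mine, not taken from either library):

#include <limits.h>
#include <stdbool.h>

/* Stores a + b in *result and returns true only when the addition cannot
 * overflow; the check itself never performs an overflowing operation. */
bool checked_add(int a, int b, int *result)
{
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b))
        return false;            /* would overflow; caller decides what to do */
    *result = a + b;
    return true;
}

The same pattern extends to subtraction and multiplication, which is essentially what the libraries mentioned above package up for you.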

Do you know exactly which line the overflow is occurring on? If so, you might be able to look at the CPU's carry/overflow flags to see whether the operation in question overflowed. These are the flags the CPU uses for multi-word arithmetic and, while not available at the C level, they might help you debug the problem - or at least give you a chance to do something.
BTW, I found this link for gcc (-ftrapv) that talks about an integer trap. It might be what you are looking for.

You can use inline assembler in gcc to issue an instruction that might generate an overflow, and then test the overflow flag to see whether it actually did:
int addo(int a, int b)
{
    /* Do the add in a scratch register (eax) so the input operands are not
       clobbered, and jump to the overflow label if the overflow flag is set. */
    asm goto("movl %0, %%eax; addl %1, %%eax; jo %l[overflow]"
             : : "r"(a), "r"(b)
             : "eax", "cc"
             : overflow);
    return a + b;    /* no overflow: redo the add in C and return it */
overflow:
    return 0;
}
In this case, it tries to add a and b, and if that overflows, it jumps to the overflow label. If there's no overflow, it continues, doing the add again and returning it.
This runs into the GCC limitation that an inline asm block cannot both output a value and maybe branch -- if it weren't for that, you wouldn't need a second add to actually get the result.
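A portable alternative - a sketch assuming a compiler that provides __builtin_add_overflow (GCC 5+ or a recent Clang; the gcc-4.x in the question does not have it):

int addo_builtin(int a, int b)
{
    int result;
    /* __builtin_add_overflow returns nonzero if the signed addition overflowed;
     * otherwise the mathematically correct sum is stored in result. */
    if (__builtin_add_overflow(a, b, &result))
        return 0;
    return result;
}

This avoids both the inline asm and the second add, since the builtin delivers the result and the overflow status together.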

Related

What could happen practically on undefined behavior in C [closed]

I've read a lot of articles about undefined behavior (UB), but they all talk about theory. I am wondering what could happen in practice, because programs containing UB may actually run.
My question relates to unix-like systems, not embedded systems.
I know that one should not write code that relies on undefined behavior. Please do not send answers like these:
Everything could happen
Daemons can fly out of your nose
Computer could jump and catch fire
Especially for the first one, it is not true. You obviously cannot get root by doing a signed integer overflow. I'm asking this for educational purposes only.
Question A)
Source
implementation-defined behavior: unspecified behavior where each implementation documents how the choice is made
Is the implementation the compiler?
Question B)
*"abc" = '\0';
For something other than a segfault to happen, do I need my system to be broken? What could actually happen, even if it is not predictable? Could the first byte be set to zero? What else, and how?
Question C)
int i = 0;
foo(i++, i++, i++);
This is UB because the order in which the parameters are evaluated is undefined. Right. But, when the program runs, who decides in what order the parameters are evaluated: is it the compiler, the OS, or something else?
Question D)
Source
$ cat test.c
#include <stdio.h>
#include <limits.h>
int main (void)
{
    printf ("%d\n", (INT_MAX+1) < 0);
    return 0;
}
$ cc test.c -o test
$ ./test
Formatting root partition, chomp chomp
According to other SO users, this is possible. How could this happen? Do I need a broken compiler?
Question E)
Use the same code as above. What could actually happen, apart from the expression (INT_MAX+1) yielding a random value?
Question F)
Does the GCC -fwrapv option define the behavior of signed integer overflow, or does it only make GCC assume that it will wrap around, while at runtime it could in fact not wrap around?
Question G)
This one concerns embedded systems. Of course, if the PC jumps to an unexpected place, two outputs could be wired together and create a short-circuit (for example).
But, when executing code similar to this:
*"abc" = '\0';
Wouldn't the PC be vectored to the general exception handler? Or what am I missing?
In practice, most compilers use undefined behavior in either of the following ways:
Print a warning at compile time, to inform the user that they probably made a mistake
Infer properties on the values of variables and use those to simplify code
Perform unsafe optimizations, as long as they only break the expected semantics of code whose behavior is undefined
Compilers are usually not designed to be malicious. The main reason to exploit undefined behavior is usually to get some performance benefit from it. But sometimes that can involve total dead code elimination.
A) Yes. The compiler should document which behavior it chose. For UB, though, the consequences are usually hard to predict or explain.
B) If the string is actually instantiated in memory and sits in a writable page (by default it will be in a read-only page), then its first character might become a null character. Most probably, the entire expression will be thrown out as dead code, because it produces a temporary value that is never used.
C) Usually, the order of evaluation is decided by the compiler. Here it might decide to transform the calls into i += 3 (or into i = undef if it is being silly). The CPU could reorder instructions at run-time, but it must preserve the order chosen by the compiler whenever reordering would break the semantics of its instruction set (the compiler usually cannot forward the C semantics further down). An increment of a register cannot commute with, or be executed in parallel with, another increment of that same register.
D) You would need a silly compiler that prints "Formatting root partition, chomp chomp" when it detects undefined behavior. Most probably, it will print a warning at compile time, replace the expression by a constant of its choice, and produce a binary that simply performs the print with that constant.
E) It is a syntactically correct program, so the compiler will certainly produce a "working" binary. That binary could, in theory, have the same behavior as any binary you could download from the internet and run. Most probably, you will get a binary that exits straight away, or one that prints the aforementioned message and then exits straight away.
F) It tells GCC to give signed integers wraparound (two's complement) semantics, so it must produce a binary that actually wraps around at run-time. That is rather easy, because most architectures have that semantics anyway. The reason C leaves signed overflow undefined is so that compilers can assume a + 1 > a, which is critical for proving that loops terminate and/or for predicting branches. That's why using a signed integer as a loop induction variable can lead to faster code, even though it is mapped to the exact same instructions in hardware.
G) Undefined behavior is undefined behavior. The produced binary could indeed run any instructions, including a jump to an unspecified place... or cleanly trigger an interrupt. Most probably, your compiler will simply get rid of that unnecessary operation.
You obviously cannot get root by doing a signed integer overflow.
Why not?
If you assume that signed integer overflow can only yield some particular value, then you're unlikely to get root that way. But the thing about undefined behavior is that an optimizing compiler can assume that it doesn't happen, and generate code based on that assumption.
Operating systems have bugs. Exploiting those bugs can, among other things, invoke privilege escalation.
Suppose you use signed integer arithmetic to compute an index into an array. If the computation overflows, you could accidentally clobber some arbitrary chunk of memory outside the intended array. That could cause your program to do arbitrarily bad things.
If a bug can be exploited deliberately (and the existence of malware clearly indicates that that's possible), it's at least possible that it could be exploited accidentally.
Also, consider this simple contrived program:
#include <stdio.h>
#include <limits.h>
int main(void) {
    int x = INT_MAX;
    if (x < x + 1) {
        puts("Code that gets root");
    }
    else {
        puts("Code that doesn't get root");
    }
}
On my system, it prints
Code that doesn't get root
when compiled with gcc -O0 or gcc -O1, and
Code that gets root
with gcc -O2 or gcc -O3.
I don't have concrete examples of signed integer overflow triggering a security flaw (and I wouldn't post such an example if I had one), but it's clearly possible.
Undefined behavior can in principle make your program accidentally do anything that a program starting with the same privileges could do deliberately. Unless you're using a bug-free operating system, that could include privilege escalation, erasing your hard drive, or sending a nasty e-mail message to your boss.
To my mind, the worst thing that can happen in the face of undefined behavior is something different tomorrow.
I enjoy programming, but I also enjoy finishing a program, and going on to work on something else. I do not delight in continuously tinkering with my already-written programs, to keep them working in the face of bugs they spontaneously develop as hardware, compilers, or other circumstances keep changing.
So when I write a program, it is not enough for it to work. It has to work for the right reasons. I have to know that it works, and that it will keep working next week and next month and next year. It can't just seem to work, to have given apparently correct answers on the -- necessarily finite -- set of test cases I've run it on so far.
And that's why undefined behavior is so pernicious: it might do something perfectly fine today, and then do something completely different tomorrow, when I'm not around to defend it. The behavior might change because someone ran it on a slightly different machine, or with more or less memory, or on a very different set of inputs, or after recompiling it with a different compiler.
See also the third part of this other answer (the part starting with "And now, one more thing, if you're still with me").
It used to be that you could count on the compiler to do something "reasonable". More and more often, though, compilers are truly taking advantage of their license to do weird things when you write undefined code. In the name of efficiency, these compilers are introducing very strange optimizations, which don't do anything close to what you probably want.
Read these posts:
Linus Torvalds describes a kernel bug that was much worse than it could have been given that gcc took advantage of undefined behavior
LLVM blog post on undefined behavior (first of three parts, also two, three)
another great blog post by John Regehr (also first of three parts: two, three)

Why do C variables maximum and minimum values touch?

I am working on a tutorial for binary numbers. Something I have wondered for a while is why all the integer maximum and minimum values touch. For example for an unsigned byte 255 + 1 = 0 and 0 - 1 = 255. I understand all the binary math that goes into it, but why was the decision made to have them work this way instead of a straight number line that gives an error when the extremes are breached?
Since your example is unsigned, I assume it's OK to limit the scope to unsigned types.
Allowing wrapping is useful. For example, it's what allows you (and the compiler) to always reorder (and constant-fold) a sequence of additions and subtractions. Even something such as x + 3 - 1 could not be optimized to x + 2 if the language required trapping, because that changes the conditions under which the expression would trap. Wrapping also mixes better with bit manipulation: the interpretation of an unsigned number as a vector of bits makes very little sense if there's trapping. That applies especially to shifts, but addition, subtraction and even multiplication also make sense on bitvectors and combine usefully with the usual bitwise operations.
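As a small illustration of the reordering point (this sketch assumes a 32-bit unsigned int, which is typical but not guaranteed):

#include <stdio.h>

int main(void)
{
    unsigned int x = 0xFFFFFFFEu;      /* UINT_MAX - 1 for a 32-bit unsigned int */
    /* Unsigned arithmetic wraps modulo 2^32, so x + 3 - 1 and x + 2 give the
     * same value even though x + 3 wraps through zero; a compiler is free to
     * fold the former into the latter. With trapping, they would differ. */
    printf("%u\n", x + 3u - 1u);       /* prints 0 */
    printf("%u\n", x + 2u);            /* prints 0 */
    return 0;
}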
The algebraic structure you get when allowing wrapping, Z/2^k Z, is fairly nice (perhaps not as nice as modulo a prime, but that would interact badly with the bitvector interpretation, and it doesn't match typical hardware) and well known, so nothing particularly unexpected or weird will happen; a wrapped result is not a "uselessly arbitrary" result.
And of course testing the carry flag (or whatever may be required) after just about every operation has a big direct overhead as well.
Trapping on "unsigned overflow" is both expensive and undesirable, at least if it is the default behaviour.
Why not "give an error when the extremes are breached"?
Error handling is one of the hardest things in software development. When an error happens, there are many things the software could be required to do:
Show an annoying message to the user? Like Hey user, you have just tried to add 1 to this variable, which is already too big. Stop that! - there is often no context to show the user that would be of any help.
Throw an exception? (BTW C has support for that) - that would show a stack trace if you happened to execute your code in a debugger. Otherwise, it would just crash - not terrible (it won't corrupt your files) but not good either (it can be exploited as a denial-of-service attack).
Write it to a log file? - sometimes it's the best thing to do - record the error and move on, so it can be debugged later.
The right thing to do depends on your code. So a generic programming language like C doesn't want to restrict you by providing any mandatory behavior.
Instead, C provides two guidelines:
For unsigned types like unsigned int or uint8_t or (usually) char - it provides silent wraparound, for best performance.
For signed types like int - it provides "undefined behavior", which makes it possible to "choose", in a very limited way, what will happen on overflow
Trap (abort the program) if using -ftrapv in gcc
Silent wraparound if using -fwrapv in gcc
By default (no fancy command-line options) - the compiler may assume it will never happen, which may help it produce optimized code
The idea here is that you (the programmer) should think about where checking for overflow is worth doing, and how to recover from overflow (if the language provided a standard error-handling mechanism, it would deny you the latter choice). This approach has maximum flexibility, (potentially) maximum performance, and is (usually) the hardest to get right - which fits the philosophy of C.
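As one example of choosing a recovery policy yourself, here is a minimal sketch of a saturating signed addition (the name sat_add is mine; clamping to INT_MAX/INT_MIN is just one of many possible policies):

#include <limits.h>

/* Adds a and b, clamping the result to INT_MAX or INT_MIN instead of
 * overflowing; the overflow itself is avoided by checking first. */
int sat_add(int a, int b)
{
    if (b > 0 && a > INT_MAX - b)
        return INT_MAX;
    if (b < 0 && a < INT_MIN - b)
        return INT_MIN;
    return a + b;
}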

finding errors in a given c code

I am interested to know which things I need to concentrate on when debugging C code without a debugger. What are the things to look for?
Generally I look for the following:
Check whether the correct value and type are being passed to a function.
Look for unallocated and uninitialized variables
Check function syntax, and that each function is used in the right way.
Check for return values
Check that locks are used in the right way.
Check for string termination
Returning a variable in stack memory from a function
Off by one errors
Normal syntax errors
Function declaration errors
Any structured approach is very much appreciated.
Most of these errors will be picked up by passing the appropriate warning flags to the compiler.
However, from the original list, points 1, 5, 6, 7 and 8 are very much worth checking as a human; some compiler/flag combinations will nonetheless pick up on unhandled values, pointers to automatic memory, off-by-one errors in array indexing, etc.
You may want to take a look at such things as mudflap, valgrind, efence and others to catch runtime cases you're unaware of. You might also try splint, to augment your static analysis.
For the unautomated side of things, try statically following the flow of your program for particular cases, especially corner cases, and verify to yourself that it appears to do the right thing. Try writing unit tests/test scripts. Be sure to use some automated checking as discussed above.
If your emphasis is on testing without any test execution, splint might very well be the best place to start. The technique you want to research is called static code analysis.
I recommend trying one of the many static code analyzers. Those that I used personally and can recommend:
cppcheck - free and open-source, has cmd-line program and windows gui
Clang Static Analyzer - Apple's free and open-source, best supported on mac, also built in recent XCode versions
Visual Studio's static checker, only available in Premium and Ultimate (i.e. expensive) versions
Coverity - expensive
If you want more details, you can read an article I wrote on that subject.
A big one you left out is integer overflow. This includes both undefined behavior from overflow of signed expressions, and the well-defined but possibly dangerous behavior of unsigned overflow being reduced mod TYPE_MAX+1. In particular, things like foo=malloc(count*sizeof *foo); can be very dangerous if count came from a potentially untrusted source (like a data file), especially if sizeof *foo is large.
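A minimal sketch of guarding that allocation (the helper name malloc_array is mine, not a standard function):

#include <stdlib.h>
#include <stdint.h>

/* Allocates count elements of size bytes each, refusing the request when
 * count * size would wrap around, instead of silently allocating a block
 * that is too small. */
void *malloc_array(size_t count, size_t size)
{
    if (size != 0 && count > SIZE_MAX / size)
        return NULL;             /* multiplication would overflow size_t */
    return malloc(count * size);
}

With this, foo = malloc_array(count, sizeof *foo); fails cleanly instead of under-allocating.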
Some others:
mixing of signed and unsigned values in comparisons.
use of functions with locale-specific behavior (e.g. radix character, case mapping, etc.) when well-defined uniform behavior is needed.
use of char when doing anything more than copying values or comparison for equality (otherwise you probably want unsigned char or perhaps in rare cases, signed char).
use of signed expressions with /POWER_OF_2 and %POWER_OF_2 (hint: (-3)%8==-3 but (-3)&7==5).
use of signed division/modulo in general with negative numbers, since C's version of it disagrees with the usual algebraic definition when a negative number is divided by a positive one, and rarely gives the desired result (see the sketch after this list).
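A small demonstration of those last two points (the division and modulo results follow C99/C11 truncation toward zero; the bitwise line additionally assumes two's complement, which every mainstream machine uses):

#include <stdio.h>

int main(void)
{
    /* C truncates toward zero, so -3 / 2 is -1 (not -2 as floor division
     * would give), and the remainder keeps the sign of the dividend. */
    printf("%d %d\n", (-3) / 2, (-3) % 2);   /* prints: -1 -1 */
    printf("%d %d\n", (-3) % 8, (-3) & 7);   /* prints: -3 5 */
    return 0;
}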

When to address integer overflow in C

Related question: How to detect integer overflow?
In C code, should integer overflow be addressed whenever integers are added? It seems like at least pointers and array indexes should be checked. When should integer overflow be checked for?
When numbers are added in C without the type being explicitly mentioned, or printed with printf, when will overflow occur?
Is there a way to automatically detect integer arithmetic overflow?
I've heard about setjmp()- or longjmp()-based exception handling in C, but I think there's no native way of doing this. What I usually do is just make sure the types used are long enough to contain all the additions/multiplications I'll need to make.
The whole point of using C, as opposed to managed languages such as C#, which will throw an OverflowException, is precisely the fact that no CPU power is wasted on safety checks. C will simply turn the counter around, and go from FFFFFFFF to 00000000, so you can check for that (if a>b and such), but other than that I can just recommend using longer types. 64 bits (long long) should address all your needs.
Overflow won't occur when you print a number with printf, or at least I haven't heard of such a possibility. For additions, I'd just use adequate types and tell the compiler how to interpret the values so that you can avoid unnecessary casts (for example, the literal "123" will be interpreted as 32-bit, but "123LL" will be 64-bit - same as with ".1f" vs. ".1").
For array indices - you should always make sure you don't read/write out of your array, as C in many cases will happily corrupt your data without causing an error.
As for when integer overflow should be checked for... Well, whenever it may occur and you don't want it to occur :).
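To make the 'C will simply turn the counter around, so you can check for that' idea above concrete, here is a minimal sketch for unsigned addition (valid only for unsigned types, where wraparound is well defined; it is not a safe test for signed int, and the example in main assumes a 32-bit unsigned int):

#include <stdbool.h>
#include <stdio.h>

/* After an unsigned addition, the result is smaller than either operand
 * exactly when the addition wrapped around. */
bool uadd_wrapped(unsigned int a, unsigned int b, unsigned int *sum)
{
    *sum = a + b;
    return *sum < a;
}

int main(void)
{
    unsigned int s;
    if (uadd_wrapped(0xFFFFFFFFu, 1u, &s))
        printf("wrapped to %u\n", s);    /* prints: wrapped to 0 */
    return 0;
}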
The general answer is rarely. If the result should be valid but overflowed instead, then you should have used a larger type. If the largest type isn't sufficient, then you should have used a big-integer library.
There is no automatic, standard way to detect this built into C. Some hardware supports it, but it isn't standard. This was covered in the thread you linked to.
The type of literals is always defined, it's just not always explicit. Here's a list of literal types. Generally, arithmetic with literals will overflow either if you manage to overflow whatever type the compiler uses for the intermediate operations, or when the result gets assigned to a type of lower precision that doesn't have enough space.
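A small illustration of the 'intermediate type' point (assuming a platform where int is 32 bits and long long is 64 bits, which is typical):

#include <stdio.h>

int main(void)
{
    /* Both literals have type int, so the multiplication is carried out in
     * int and overflows (undefined behavior) before the result is converted
     * to long long; gcc typically warns about this line. */
    long long bad  = 1000000 * 1000000;
    /* Making one operand long long makes the whole multiplication happen in
     * long long, which easily holds 10^12. */
    long long good = 1000000LL * 1000000;
    printf("%lld %lld\n", bad, good);
    return 0;
}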
When you say "automatically detect overflow", what exactly do you mean? Overflow detection as a debugging tool, i.e. something that aborts your program in a way similar to a failed assertion? Or some kind of full-time run-time mechanism that would let you detect and handle the situation gracefully?
If you are interested in it as a debugging tool, then you should consult your compiler documentation. GCC, for example, provides the -ftrapv option that "generates traps for signed overflow on addition, subtraction, multiplication operations" (see the code generation options).

Catching overflow of left shift of constant 1 using compiler warning?

We're writing code inside the Linux kernel so, try as I might, I wasn't able to get PC-Lint/Flexelint working on Linux kernel code. Just too many built-in symbols etc. But that's a side issue.
We have any number of compilers, starting with gcc, but others also. Their warning options have been getting stronger over time, to the point where they are pretty strong static-analysis tools too.
Here is what I want to catch. Yes, I know it violates some things that are easy to catch in code review, such as "no magic numbers", and "beware of bit shifting", but that's only if you happen to look at that section of code. Anyway, here it is:
unsigned long long foo;
unsigned long bar;
[... lots of other code ...]
foo = ~(foo + (1<<bar));
Further UPDATED problem description -- even with bar limited to 16, it is still a problem. Clarifying: the problem is the implicit int type of the constant which, unplanned, makes the complex expression violate the rule that all calculations be carried out in the same size and signedness.
Problem: '1' is not long long but, as a small-value constant, defaults to an int. Therefore, even if bar's actual value never exceeds, say, 16, the (1<<bar) expression will overflow and ruin the entire calculation.
Possibly correct solution: write 1ULL instead.
Is there a well-known compiler and compiler warning flag that will point out this (revised) problem?
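For illustration, here is a minimal sketch of the failure mode and the fix (it assumes a 32-bit int and a 64-bit unsigned long long, and uses a shift count large enough to exceed int's width, which is not exactly the situation in the question but shows why the type of the constant matters):

#include <stdio.h>

int main(void)
{
    unsigned long long foo = 0;
    unsigned long bar = 40;

    /* 1 has type int, so (1 << bar) with bar == 40 shifts past the width of
     * int: undefined behavior, and certainly not the intended bit 40. */
    /* foo = ~(foo + (1 << bar)); */

    /* 1ULL has type unsigned long long, so the shift and the addition are
     * both carried out in 64 bits, as intended. */
    foo = ~(foo + (1ULL << bar));
    printf("%llx\n", foo);   /* prints fffffeffffffffff */
    return 0;
}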
I am not sure what criteria you are thinking of to flag this construction as suspicious. There is clearly something wrong if the value of bar is as large as the size (in bits) of an int, but usually the compiler wouldn't know that.
From the point of view of a heuristic, bug-finding tool, having good patterns to separate likely bugs from normal constructions is key to avoiding too many false positives (which make users hate the tool and refuse to use it).
The open-source tool in my URL flags logical shifts by a number larger than the size of the type, but it is primarily a verification tool for critical embedded software, and expect a lot of work to adapt it if you intend to use it on the Linux kernel with its linked structures and other difficulties.
