Catching overflow of left shift of constant 1 using compiler warning? - c

We're writing code inside the Linux kernel so, try as I might, I wasn't able to get PC-Lint/Flexelint working on Linux kernel code. Just too many built-in symbols etc. But that's a side issue.
We have any number of compilers, starting with gcc, but others also. Their warning options have been getting stronger over time, to the point where they are pretty strong static analysis tools too.
Here is what I want to catch. Yes, I know it violates some things that are easy to catch in code review, such as "no magic numbers", and "beware of bit shifting", but that's only if you happen to look at that section of code. Anyway, here it is:
unsigned long long foo;
unsigned long bar;
[... lots of other code ...]
foo = ~(foo + (1<<bar));
Further UPDATED problem description -- even with bar limited to 16, still a problem. Clarifying: the problem is the implicit int type of the constant, which, unplanned, makes the complex expression violate the rule that all calculations be carried out in the same size and signedness.
Problem: '1' is not long long but, as a small-value constant, defaults to an int. Therefore, even if bar's actual value never exceeds, say, 16, the (1<<bar) expression is still evaluated at int width and can overflow and ruin the entire calculation.
Possibly correct solution: write 1ULL instead.
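For reference, this is presumably what the corrected line would look like (same variables as the snippet above):
foo = ~(foo + (1ULL << bar));   /* 1ULL makes the shift, and hence the whole expression, unsigned long long */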
Is there a well-known compiler and compiler warning flag that will point out this (revised) problem?

I am not sure what criteria you are thinking of to flag this construction as suspicious. There is clearly something wrong if the value of bar is as large as the size (in bits) of an int, but usually the compiler wouldn't know that.
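For what it's worth, here is a tiny sketch of the case a compiler can catch statically versus the one in the question (flag name as in recent gcc/clang; a variable count like bar is generally not diagnosed):
void demo(unsigned long bar) {
    unsigned long long a = 1 << 40;    /* constant count: gcc/clang warn, e.g.
                                          "left shift count >= width of type"
                                          (-Wshift-count-overflow) */
    unsigned long long b = 1 << bar;   /* variable count: usually no warning;
                                          the int-width shift only goes wrong
                                          at run time */
    (void)a; (void)b;
}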
From the point of view of a heuristic, bug-finding tool, having good patterns to separate likely bugs from normal constructions is key to avoiding too many false positives (which make users hate the tool and refuse to use it).
The Open Source tool in my URL flags logical shifts by a number larger than the size of the type, but it is primarily a verification tool for critical embedded software, and expect a lot of work to adapt it if you intend to use it on the Linux kernel, with its linked structures and other difficulties.

Related

Why do C variables maximum and minimum values touch?

I am working on a tutorial for binary numbers. Something I have wondered for a while is why all the integer maximum and minimum values touch. For example for an unsigned byte 255 + 1 = 0 and 0 - 1 = 255. I understand all the binary math that goes into it, but why was the decision made to have them work this way instead of a straight number line that gives an error when the extremes are breached?
Since your example is unsigned, I assume it's OK to limit the scope to unsigned types.
Allowing wrapping is useful. For example, it's what allows you (and the compiler) to always reorder (and constant-fold) a sequence of additions and subtractions. Even something such as x + 3 - 1 could not be optimized to x + 2 if the language required trapping, because that changes the conditions under which the expression would trap. Wrapping also mixes better with bit manipulation: under the interpretation of an unsigned number as a vector of bits, trapping makes very little sense. That applies especially to shifts, but addition, subtraction and even multiplication also make sense on bitvectors and combine usefully with the usual bitwise operations.
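A tiny sketch of the kind of folding this permits (illustrative only):
unsigned f(unsigned x) {
    /* Because unsigned arithmetic wraps rather than traps, a compiler is free
       to fold this to x + 2u; with trapping, x + 3 and x + 2 would trap under
       different conditions, so the rewrite would not be allowed. */
    return x + 3 - 1;
}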
The algebraic structure you get when allowing wrapping, Z/2^kZ, is fairly nice (perhaps not as nice as arithmetic modulo a prime, but that would interact badly with the bitvector interpretation and it doesn't match typical hardware) and well known, so nothing particularly unexpected or weird will happen; a wrapped result is not a "uselessly arbitrary" result.
And of course testing the carry flag (or whatever may be required) after just about every operation has a big direct overhead as well.
Trapping on "unsigned overflow" is both expensive and undesirable, at least if it is the default behaviour.
Why not "give an error when the extremes are breached"?
Error handling is one of the hardest things in software development. When an error happens, there are many things the software could be required to do:
Show an annoying message to the user? Like Hey user, you have just tried to add 1 to this variable, which is already too big. Stop that! - there often is no context to show the user, that would be of any help.
Throw an exception? (C has no real exception mechanism; signals or setjmp/longjmp are the closest equivalents) - that would show a stack trace, if you happened to execute your code in a debugger. Otherwise, it would just crash - not bad (it won't corrupt your files) but not good either (it can be exploited as a denial-of-service attack).
Write it to a log file? - sometimes it's the best thing to do - record the error and move on, so it can be debugged later.
The right thing to do depends on your code. So a generic programming language like C doesn't want to restrict you by providing any mandatory behavior.
Instead, C provides two guidelines:
For unsigned types like unsigned int or uint8_t or (usually) char - it provides silent wraparound, for best performance.
For signed types like int - it provides "undefined behavior", which makes it possible to "choose", in a very limited way, what will happen on overflow:
Trap (abort the program - the closest C gets to "throwing an exception") if using -ftrapv in gcc
Silent wraparound if using -fwrapv in gcc
By default (no fancy command-line options) - the compiler may assume it will never happen, which may help it produce optimized code
The idea here is that you (the programmer) should think about where checking for overflow is worth doing, and how to recover from overflow (if the language provided a standard error handling mechanism, it would deny you the latter part). This approach has maximum flexibility, (potentially) maximum performance, and is (usually) the hardest to get right - which fits the philosophy of C.
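As an illustration of that last point, a minimal sketch (names made up) of checking for unsigned wraparound yourself where it matters, rather than relying on any trapping behaviour:
#include <stdint.h>
#include <stdio.h>

/* Returns 0 and stores the sum, or returns -1 if the 8-bit addition
   would wrap past 255 - the caller decides how to recover. */
static int add_u8_checked(uint8_t a, uint8_t b, uint8_t *out) {
    if (a > UINT8_MAX - b)
        return -1;
    *out = (uint8_t)(a + b);
    return 0;
}

int main(void) {
    uint8_t r = 0;
    printf("wrapped: %d\n", (uint8_t)(255 + 1));          /* prints 0 */
    printf("checked: %d\n", add_u8_checked(255, 1, &r));  /* prints -1 */
    return 0;
}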

Is there any performance difference in using int versus int8_t

My main question is, Is there any difference between int and int8_t for execution time ?
In a framework I am working on, I often read code where some parameters are set as int8_t in functions because "that particular parameter cannot be outside the -126,125 range".
In many places, int8_t is used for communication protocols, or to cut a packet into many fields in a __attribute__((packed)) struct.
But at some point, it was mainly put there because someone thought it would be better to use a type that matches the size of the data more closely, probably thinking ahead of the compiler.
Given that the code is made to run on Linux, compiled with gcc using glibc, and that memory or portability is not an issue, I am wondering if it is actually a good idea, performance-wise.
My first impression comes from the rule "Trying to be smarter than the compiler is always a bad idea" (unless you know where and how you need to optimize).
However, I do not know if using int8_t is actually a cost for performance (more testing and computation to match the int8_t size, more operations needed to ensure the variable does not go out of bounds, etc.), or if it does improve performance in some way.
I am not good at reading simple asm, so I did not compile test code into asm to try to find out which one is better.
I tried to find a related question, but all discussion I found on int<size>_t versus int is about portability rather than performance.
Thanks for your input. Assembly samples explained or sources about this issue would be greatly appreciated.
int is generally equivalent to the register size of the CPU. The C standard says that any smaller types must be converted to int before using operators on them.
These conversions (sign extension) can be costly.
int8_t a=1, b=2, c=3;
...
a = b + c; // This will translate to: a = (int8_t)((int)b + (int)c);
If you need speed, int is a safe bet, or use int_fast8_t (even safer). If exact size is important, use int8_t (if available).
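A small sketch of where the promotions bite (functions are hypothetical; whether the extra truncation costs anything in practice depends on the target and is best checked in the generated assembly):
#include <stddef.h>
#include <stdint.h>

/* With an int8_t accumulator, the operands are promoted to int, added, and
   the result is narrowed back to 8 bits on every iteration. */
int8_t sum8(const int8_t *v, size_t n) {
    int8_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];          /* promote, add, truncate each time */
    return s;
}

/* With an int accumulator, no narrowing is needed inside the loop. */
int sum_wide(const int8_t *v, size_t n) {
    int s = 0;
    for (size_t i = 0; i < n; i++)
        s += v[i];          /* promote and add; stays register-width */
    return s;
}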
When you talk about code performance, there are several things you need to take into account which affect it:
CPU architecture - more to the point, which data types the CPU supports natively (does it support 8-bit operations? 16-bit? 32-bit? etc.)
compiler - working with a well-known compiler is not enough; you need to be familiar with it: the way you write your code influences the code it generates
data types and compiler intrinsics - these are always considered by the compiler when generating code; using the correct data type (even signed vs. unsigned matters) can have a dramatic performance impact.
"Trying to be smarter than the compiler is always a bad idea" - that is not actually true; remember, the compiler is written to optimize the general case and you are interested in your particular case; it's always a good idea to try to be smarter than the compiler.
Your question is really too broad for me to give a "to the point" answer (i.e. what is better performance-wise). The only way to know for sure is to check the generated assembly code; at least count the number of cycles the code would take to execute in both cases. But you need to understand the code to understand how to help the compiler.

Are there well-known "profiles" of the C standard?

I write C code that makes certain assumptions about the implementation, such as:
char is 8 bits.
signed integral types are two's complement.
>> on signed integers sign-extends.
integer division rounds negative quotients towards zero.
double is IEEE-754 doubles and can be type-punned to and from uint64_t with the expected result.
comparisons involving NaN always evaluate to false.
a null pointer is all zero bits.
all data pointers have the same representation, and can be converted to size_t and back again without information loss.
pointer arithmetic on char* is the same as ordinary arithmetic on size_t.
function pointers can be cast to void* and back again without information loss.
Now, all of these are things that the C standard doesn't guarantee, so strictly speaking my code is non-portable. However, they happen to be true on the architectures and ABIs I'm currently targeting, and after careful consideration I've decided that the risk they will fail to hold on some architecture that I'll need to target in the future is acceptably low compared to the pragmatic benefits I derive from making the assumptions now.
The question is: how do I best document this decision? Many of my assumptions are made by practically everyone (non-octet chars? or sign-magnitude integers? on a future, commercially successful, architecture?). Others are more arguable -- the most risky probably being the one about function pointers. But if I just list everything I assume beyond what the standard gives me, the reader's eyes are just going to glaze over, and he may not notice the ones that actually matter.
So, is there some well-known set of assumptions about being a "somewhat orthodox" architecture that I can incorporate by reference, and then only document explicitly where I go beyond even that? (Effectively such a "profile" would define a new language that is a superset of C, but it might not acknowledge that in so many words -- and it may not be a pragmatically useful way to think of it either).
Clarification: I'm looking for a shorthand way to document my choices, not for a way to test automatically whether a given compiler matches my expectations. The latter is obviously useful too, but does not solve everything. For example, if a business partner contacts us saying, "we're making a device based on Google's new G2015 chip; will your software run on it?" -- then it would be nice to be able to answer "we haven't worked with that arch yet, but it shouldn't be a problem if it has a C compiler that satisfies such-and-such".
Clarify even more since somebody has voted to close as "not constructive": I'm not looking for discussion here, just for pointers to actual, existing, formal documents that can simplify my documentation by being incorporated by reference.
I would introduce a STATIC_ASSERT macro and put all your assumptions in such asserts.
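A minimal sketch of such a macro (pre-C11 style; C11 has _Static_assert built in - the particular checks are only illustrative):
#include <limits.h>
#include <stdint.h>

/* Fails to compile (negative array size) when the condition is false. */
#define STATIC_ASSERT(cond, name) \
    typedef char static_assert_##name[(cond) ? 1 : -1]

STATIC_ASSERT(CHAR_BIT == 8, char_is_8_bits);
STATIC_ASSERT((-1 & 3) == 3, twos_complement);
STATIC_ASSERT(sizeof(double) == sizeof(uint64_t), double_fits_uint64);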
Unfortunately, not only is there a lack of standards for a dialect of C that combines the extensions which emerged as de facto standards during the 1990s (two's-complement, universally-ranked pointers, etc.), but compiler trends are moving in the opposite direction. Given the following requirements for a function:
/* Accept int parameters x, y, z:
 * Return 0 if x-y is computable as "int" and is less than z
 * Return 1 if x-y is computable as "int" and is not less than z
 * Return 0 or 1 if x-y is not computable */
The vast majority of compilers in the 1990s would have allowed:
int diffCompare(int x, int y, int z)
{ return (x-y) >= z; }
On some platforms, in cases where the difference between x and y was not computable as int, it would be faster to compute a "wrapped" two's-complement value of x-y and compare that, while on others it would be faster to perform the calculation using a type larger than int and compare that. By the late 1990s, however, nearly every C compiler would implement the above code using whichever of those approaches was more efficient on its hardware platform.
Since 2010, however, compiler writers seem to have taken the attitude that if computations overflow, compilers shouldn't perform the calculations in whatever fashion is normal for their platform and let what happens happen, nor should they recognizably trap (which would break some code, but could prevent certain kinds of errant program behavior), but instead they should treat overflows as an excuse to negate the laws of time and causality. Consequently, even if a programmer would have been perfectly happy with any behavior a 1990s compiler would have produced, the programmer must replace the code with something like:
{ return ((long)x-y) >= z; }
which would greatly reduce efficiency on many platforms, or
{ return x+(INT_MAX+1U)-y >= z+(INT_MAX+1U); }
which requires specifying a bunch of calculations the programmer doesn't actually want in the hopes that the optimizer will omit them (using signed comparison to make them unnecessary), and would reduce efficiency on a number of platforms (especially DSPs) where the form using (long) would have been more efficient.
It would be helpful if there were standard profiles which would allow programmers to avoid the need for nasty horrible kludges like the above using INT_MAX+1U, but if trends continue they will become more and more necessary.
Most compiler documentation includes a section that describes the specific behavior of implementation-dependent features. Can you point to that section of the gcc or msvc docs to describe your assumptions?
You can write a header file "document.h" where you collect all your assumptions.
Then, in every file that you know that non-standard assumptions are made, you can #include such a file.
Perhaps "document.h" would not have real sentences at all, but only commented text and some macros.
// [T] DOCUMENT.H
//
#ifndef DOCUMENT_H
#define DOCUMENT_H
// [S] 1. Basic assumptions.
//
// If this file is included in a compilation unit it means that
// the following assumptions are made:
// [1] A char has 8 bits.
// [#]
#define MY_CHARBITSIZE 8
// [2] IEEE 754 doubles are adopted for type: double.
// ........
// [S] 2. Detailed information
//
#endif
The tags in brackets: [T] [S] [#] [1] [2] stand for:
* [T]: Document Title
* [S]: Section
* [#]: Print the following (non-commented) lines as a code-block.
* [1], [2]: Numbered items of a list.
Now, the idea here is to use the file "document.h" in a different way:
To parse the file in order to convert the comments in "document.h" to some printable document, or some basic HTML.
Thus, the tags [T] [S] [#] etc. are intended to be interpreted by a parser that converts each comment into an HTML line of text (for example), generating <h1></h1>, <b></b> (or whatever you want) when a tag appears.
If you keep the parser as a simple and small program, this can give you a shorthand for handling this kind of documentation.

finding errors in a given c code

I am interested to know what things I need to concentrate on when debugging C code without a debugger. What are the things to look for?
Generally I look for the following:
Check whether the correct value and type are being passed to a function.
Look for unallocated and uninitialized variables
Check the function syntax and that the function is used in the right way.
Check for return values
Check that locks are used in the right way.
Check for string termination
Returning a variable in stack memory from a function
Off by one errors
Normal syntax errors
Function declaration errors
Any structured approach is very much appreciated.
Most of these errors will be picked up by passing the appropriate warning flags to the compiler.
However, from the original list, points 1, 5, 6, 7, 8 are very much worth checking as a human; some compiler/flag combinations will also pick up on unhandled values, pointers to automatic memory, off-by-one errors in array indexing, etc.
You may want to take a look at such things as mudflap, valgrind, efence and others to catch runtime cases you're unaware of. You might also try splint, to augment your static analysis.
For the unautomated side of things, try statically following the flow of your program for particular cases, especially corner cases, and verify to yourself that it appears to do the right thing. Try writing unit tests/test scripts. Be sure to use some automated checking as discussed above.
If your emphasis is on testing without any test execution, splint might very well be the best place to start. The technique you want to research is called static code analysis.
I recommend trying one of the many static code analyzers. Those that I used personally and can recommend:
cppcheck - free and open-source, has a command-line program and a Windows GUI
Clang Static Analyzer - Apple's free and open-source tool, best supported on Mac, also built into recent Xcode versions
Visual Studio's static checker, only available in Premium and Ultimate (i.e. expensive) versions
Coverity - expensive
If you want more details, you can read an article I wrote on that subject.
A big one you left out is integer overflow. This includes both undefined behavior from overflow of signed expressions, and well-defined but possibly-dangerous behavior of unsigned overflow being reduced mod TYPE_MAX+1. In particular, things like foo=malloc(count*sizeof *foo); can be very dangerous if count came from a potentially untrusted source (like a data file), especially if sizeof *foo is large.
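A hedged sketch of the kind of guard that malloc call needs when count comes from an untrusted source (names made up):
#include <stdint.h>
#include <stdlib.h>

struct record { char payload[4096]; };

/* Refuse the allocation if count * sizeof(struct record) would wrap around
   SIZE_MAX, instead of silently allocating a too-small block. */
static struct record *alloc_records(size_t count) {
    if (count > SIZE_MAX / sizeof(struct record))
        return NULL;
    return malloc(count * sizeof(struct record));
}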
Some others:
mixing of signed and unsigned values in comparisons.
use of functions with locale-specific behavior (e.g. radix character, case mapping, etc.) when well-defined uniform behavior is needed.
use of char when doing anything more than copying values or comparison for equality (otherwise you probably want unsigned char or perhaps in rare cases, signed char).
use of signed expressions with /POWER_OF_2 and %POWER_OF_2 (hint: (-3)%8==-3 but (-3)&7==5).
use of signed division/modulo in general with negative numbers, since C's version of it disagrees with the usual algebraic definition when a negative number is divided by a positive one, and rarely gives the desired result.
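A small sketch of the workaround usually wanted for the last two points (assuming C99 truncating division and m > 0):
/* "Mathematical" modulo: the result is always in [0, m) for m > 0,
   unlike C's %, which keeps the sign of the dividend. */
static int mod_floor(int a, int m) {
    int r = a % m;
    return (r < 0) ? r + m : r;
}
/* mod_floor(-3, 8) == 5, whereas (-3) % 8 == -3 */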

#defined bitflags and enums - peaceful coexistence in "c"

I have just discovered the joy of bitflags. I have several questions related to "best-practices" regarding the use of bitflags in C. I learned everything from various examples I found on the web but still have questions.
In order to save space, I am using a single 32-bit integer field in a struct (A->flag) to represent several different sets of boolean properties. In all, 20 different bits are #defined. Some of these are truly presence/absence flags (STORAGE-INTERNAL vs. STORAGE-EXTERNAL). Others have more than two values (e.g. a mutually exclusive set of formats: FORMAT-A, FORMAT-B, FORMAT-C). I have defined macros for setting specific bits (and simultaneously turning off mutually exclusive bits). I have also defined macros for testing whether specific combinations of bits are set in the flag.
However, what is lost in the above approach is the specific grouping of flags that is best captured by enums. For writing functions, I would like to use enums (e.g., STORAGE-TYPE and FORMAT-TYPE), so that function definitions look nice. I expect to use enums only for passing parameters and #defined macros for setting and testing flags.
(a) How do I define flag (A->flag) as a 32 bit integer in a portable fashion (across 32 bit / 64 bit platforms)?
(b) Should I worry about potential size differences in how A->flag vs. #defined constants vs. enums are stored?
(c) Am I making things unnecessarily complicated, meaning should I just stick to using #defined constants for passing parameters as ordinary ints? What else should I worry about in all this?
I apologize for the poorly articulated question. It reflects my ignorance about potential issues.
There is a C99 header that was intended to solve that exact problem (a), but for some reason Microsoft doesn't implement it. Fortunately, you can get <stdint.h> for Microsoft Windows here. Every other platform will already have it. The 32-bit int types are uint32_t and int32_t. These also come in 8-, 16-, and 64-bit flavors.
So, that takes care of (a).
(b) and (c) are kind of the same question. We do make assumptions whenever we develop something. You assume that C will be available. You assume that <stdint.h> can be found somewhere. You could always assume that int was at least 16 bits, and nowadays a >= 32-bit assumption is fairly reasonable.
In general, you should try to write conforming programs that don't depend on layout, but they will make assumptions about word length. You should worry about performance at the algorithm level, that is, am I writing something that is quadratic, polynomial, exponential?
You should not worry about performance at the operation level until (a) you notice a performance lag, and (b) you have profiled your program. You need to get your job done without bogging down worrying about individual operations. :-)
Oh, I should add that you particularly don't need to worry about low-level operation performance when you are writing the program in C in the first place. C is the close-to-the-metal, go-as-fast-as-possible language. We routinely write stuff in php, python, ruby, or lisp because we want a powerful language and the CPUs are so fast these days that we can get away with an entire interpreter, never mind a not-perfect choice of bit-twiddle-word-length ops. :-)
You can use bit-fields and let the compiler do the bit twiddling. For example:
struct PropertySet {
    unsigned internal_storage : 1;   /* single presence/absence flag */
    unsigned format : 4;             /* small multi-valued field */
};

int main(void) {
    struct PropertySet x = {0};
    struct PropertySet y[10] = {{0}}; /* array of structures containing bit-fields */
    if (x.internal_storage) x.format |= 2;
    if (y[2].internal_storage) y[2].format |= 2;
    return 0;
}
Edited to add array of structures
As others have said, your problem (a) is resolvable by using <stdint.h> and either uint32_t or uint_least32_t (if you want to worry about Burroughs mainframes which have 36-bit words). Note that MSVC does not support C99, but #DigitalRoss shows where you can obtain a suitable header to use with MSVC.
Your problem (b) is not an issue; C will type convert safely for you if it is necessary, but it probably isn't even necessary.
The area of most concern is (c) and in particular the format sub-field. There, 3 values are valid. You can handle this by allocating 3 bits and requiring that the 3-bit field is one of the values 1, 2, or 4 (any other value is invalid because of too many or too few bits set). Or you could allocate a 2-bit number, and specify that either 0 or 3 (or, if you really want to, one of 1 or 2) is invalid. The first approach uses one more bit (not currently a problem since you're only using 20 of 32 bits) but is a pure bitflag approach.
When writing function calls, there is no particular problem writing:
some_function(FORMAT_A | STORAGE_INTERNAL, ...);
This will work whether FORMAT_A is a #define or an enum (as long as you specify the enum value correctly). The called code should check whether the caller had a lapse in concentration and wrote:
some_function(FORMAT_A | FORMAT_B, ...);
But that is an internal check for the module to worry about, not a check for users of the module to worry about.
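A minimal sketch of such an internal check, assuming the one-bit-per-format encoding and a mask that covers the format bits (names hypothetical):
/* Returns non-zero if exactly one format bit is set in flag. */
static int format_is_valid(unsigned flag, unsigned format_mask) {
    unsigned f = flag & format_mask;
    return f != 0 && (f & (f - 1)) == 0;   /* power-of-two test */
}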
If people are going to be switching bits in the flags member around a lot, the macros for setting and unsetting the format field might be beneficial. Some might argue that the pure boolean fields barely need them, though (and I'd sympathize). It might be best to treat the flags member as opaque and provide 'functions' (or macros) to get or set all the fields. The less people can get wrong, the less will go wrong.
Consider whether using bit-fields works for you. My experience is that they lead to big code and not necessarily very efficient code; YMMV.
Hmmm...nothing very definitive here, so far.
I would use enums for everything because those are guaranteed to be visible in a debugger where #define values are not.
I would probably not provide macros to get or set bits, but I'm a cruel person at times.
I would provide guidance on how to set the format part of the flags field, and might provide a macro to do that.
Like this, perhaps:
enum { ..., FORMAT_A = 0x0010, FORMAT_B = 0x0020, FORMAT_C = 0x0040, ... };
enum { FORMAT_MASK = FORMAT_A | FORMAT_B | FORMAT_C };
#define SET_FORMAT(flag, newval) (((flag) & ~FORMAT_MASK) | (newval))
#define GET_FORMAT(flag) ((flag) & FORMAT_MASK)
SET_FORMAT is safe if used accurately but horrid if abused. One advantage of the macros is that you could replace them with a function that validated things thoroughly if necessary; this works well if people use the macros consistently.
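A possible usage of those macros (A->flag is the 32-bit flags word from the question; note that SET_FORMAT yields a value, so the caller has to assign it back):
A->flag = SET_FORMAT(A->flag, FORMAT_B);
if (GET_FORMAT(A->flag) == FORMAT_B) {
    /* ... handle format B ... */
}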
For question a, if you are using C99 (you probably are using it), you can use the uint32_t predefined type (or, if it is not predefined, it can be found in the stdint.h header file).
Regarding (c): if your enumerations are defined correctly you should be able to pass them as arguments without a problem. A few things to consider:
enumeration storage is often compiler-specific, so depending on what kind of development you are doing (you don't mention if it's Windows vs. Linux vs. embedded vs. embedded Linux :) ) you may want to visit the compiler options for enum storage to make sure there are no issues there. I generally agree with the above consensus that the compiler should cast your enumerations appropriately - but it's something to be aware of.
in the case that you are doing embedded work, many static quality-checking programs such as PC Lint will "bark" if you start getting too clever with enums, #defines, and bitfields. If you are doing development that will need to pass through any quality gates, this might be something to keep in mind. In fact, some automotive standards (such as MISRA-C) get downright irritable if you try to get tricky with bitfields.
"I have just discovered the joy of bitflags." I agree with you - I find them very useful.
I added comments to each answer above. I think I have some clarity. It seems enums are cleaner, as they show up in the debugger and keep fields separate. Macros can be used for setting and getting values.
I have also read that enums are stored as small integers - which, as I understand it, is not a problem with the boolean tests, as these would be performed starting at the rightmost bits. But can enums be used to store large integers (1 << 21)??
thanks again to you all. I have already learned more than I did two days ago!!
~Russ
