When using integer values in my own code, I always try to consider the signedness, asking myself if the integer should be signed or unsigned.
When I'm sure the value will never need to be negative, I then use an unsigned integer.
And I have to say this happens most of the time.
When reading other peoples' code, I rarely see unsigned integers, even if the represented value can't be negative.
So I asked myself: «is there a good reason for this, or do people just use signed integers because they don't care»?
I've searched on the subject, here and in other places, and I have to say I can't find a good reason not to use unsigned integers when it applies.
I came across those questions: «Default int type: Signed or Unsigned?», and «Should you always use 'int' for numbers in C, even if they are non-negative?» which both present the following example:
for( unsigned int i = foo.Length() - 1; i >= 0; --i ) {}
To me, this is just bad design. Of course, it may result in an infinite loop, with unsigned integers.
But is it so hard to check if foo.Length() is 0, before the loop?
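For what it's worth, here is a minimal sketch in plain C of a down-counting loop over an unsigned index that handles the empty case explicitly (values[] and foo_length() are made-up stand-ins for foo and foo.Length()):
#include <stddef.h>
#include <stdio.h>
/* hypothetical stand-ins for the container and its Length() in the example */
static const int values[] = { 1, 2, 3, 4 };
static size_t foo_length(void) { return sizeof values / sizeof values[0]; }
int main(void)
{
    size_t len = foo_length();
    if (len == 0)                 /* the zero-length check the question asks about */
        return 0;
    for (size_t i = len - 1; ; --i) {
        printf("%d\n", values[i]);
        if (i == 0)               /* stop after index 0; i never wraps below 0 */
            break;
    }
    return 0;
}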
So I personally don't think this is a good reason for using signed integers all the way.
Some people may also say that signed integers may be useful, even for non-negative values, to provide an error flag, usually -1.
Ok, that's good to have a specific value that means «error».
But then, what's wrong with something like UINT_MAX, for that specific value?
I'm actually asking this question because it may lead to some huge problems, usually when using third-party libraries.
In such a case, you often have to deal with signed and unsigned values.
Most of the time, people just don't care about the signedness and simply assign, for instance, an unsigned int to a signed int without checking the range.
I have to say I'm a bit paranoid with the compiler warning flags, so with my setup such an implicit conversion will result in a compiler error.
For that kind of stuff, I usually use a function or macro to check the range, and then assign using an explicit cast, raising an error if needed.
This just seems logical to me.
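For illustration, here is a minimal sketch of that kind of checked conversion in C; the function name is made up, and aborting on out-of-range values is just one possible policy:
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
/* hypothetical helper: convert an unsigned int to int, aborting if it doesn't fit */
int checked_int_from_uint(unsigned int value)
{
    if (value > (unsigned int)INT_MAX) {
        fprintf(stderr, "value %u is out of range for int\n", value);
        abort();
    }
    return (int)value; /* explicit cast, range already verified */
}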
As a last example, as I'm also an Objective-C developer (note that this question is not related to Objective-C only):
- ( NSInteger )tableView: ( UITableView * )tableView numberOfRowsInSection: ( NSInteger )section;
For those not fluent with Objective-C, NSInteger is a signed integer.
This method actually retrieves the number of rows in a table view, for a specific section.
The result will never be a negative value (nor will the section number, by the way).
So why use a signed integer for this?
I really don't understand.
This is just an example, but I just always see that kind of stuff, with C, C++ or Objective-C.
So again, I'm just wondering if people just don't care about that kind of problem, or if there is, after all, a good and valid reason not to use unsigned integers in such cases.
Looking forward to hearing your answers :)
A signed return value might yield more information (think error numbers: 0 is sometimes a valid answer, -1 indicates an error, see man read), which might be relevant especially for developers of libraries.
If you are worrying about the one extra bit you gain when using unsigned instead of signed, then you are probably using the wrong type anyway (this is also a kind of "premature optimization" argument).
Languages like Python, Ruby, JavaScript etc. are doing just fine without signed vs. unsigned. That might be an indicator...
When using integer values in my own code, I always try to consider the signedness, asking myself if the integer should be signed or unsigned.
When I'm sure the value will never need to be negative, I then use an unsigned integer.
And I have to say this happens most of the time.
Carefully considering which type is most suitable each time you declare a variable is very good practice! It means you are careful and professional. You should not only consider signedness, but also the potential maximum value that you expect this type to have.
The reason why you shouldn't use signed types when they aren't needed has nothing to do with performance, but with type safety. There are lots of potential, subtle bugs that can be caused by signed types:
The various forms of implicit promotions that exist in C can cause your type to change signedness in unexpected and possibly dangerous ways. The integer promotion rule that is part of the usual arithmetic conversions, the lvalue conversion upon assignment, the default argument promotions used by for example VA lists, and so on.
When using any form of bitwise operators or similar hardware-related programming, signed types are dangerous and can easily cause various forms of undefined behavior.
By declaring your integers unsigned, you automatically skip past a whole lot of the above dangers. Similarly, by declaring them as large as unsigned int or larger, you get rid of lots of dangers caused by the integer promotions.
Both size and signedness are important when it comes to writing rugged, portable and safe code. This is the reason why you should always use the types from stdint.h and not the native, so-called "primitive data types" of C.
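As a hedged illustration of the promotion danger described above (assuming a typical platform where int is 32 bits): even an expression built entirely from unsigned types can end up overflowing as signed int.
#include <stdint.h>
uint32_t square_u16(uint16_t a, uint16_t b)
{
    /* With 32-bit int, a and b are promoted to signed int, so 65535 * 65535
       would overflow int: undefined behavior despite the unsigned source types.
           return a * b;        <-- the dangerous version                      */

    /* Forcing the arithmetic into a type at least as wide as unsigned int
       avoids the promotion problem.                                           */
    return (uint32_t)a * (uint32_t)b;
}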
So I asked myself: «is there a good reason for this, or do people just use signed integers because they don't care»?
I don't really think it is because they don't care, nor because they are lazy, even though declaring everything int is sometimes referred to as "sloppy typing" - which means sloppily picked type more than it means too lazy to type.
I rather believe it is because they lack deeper knowledge of the various things I mentioned above. There's a frightening amount of seasoned C programmers who don't know how implicit type promotions work in C, nor how signed types can cause poorly-defined behavior when used together with certain operators.
This is actually a very frequent source of subtle bugs. Many programmers find themselves staring at a compiler warning or a peculiar bug, which they can make go away by adding a cast. But they don't understand why, they simply add the cast and move on.
for( unsigned int i = foo.Length() - 1; i >= 0; --i ) {}
To me, this is just bad design
Indeed it is.
Once upon a time, down-counting loops would yield more efficient code, because the compiler could pick a "branch if zero" instruction instead of a "branch if larger/smaller/equal" instruction - the former is faster. But this was at a time when compilers were really dumb, and I don't believe such micro-optimizations are relevant any longer.
So there is rarely ever a reason to have a down-counting loop. Whoever made the argument probably just couldn't think outside the box. The example could have been rewritten as:
for(unsigned int i=0; i<foo.Length(); i++)
{
unsigned int index = foo.Length() - i - 1;
thing[index] = something;
}
This code should not have any impact on performance, but the loop itself became a whole lot easier to read, while at the same time fixing the bug that your example had.
As far as performance is concerned nowadays, one should probably spend the time pondering which form of data access is most efficient in terms of data cache use, rather than anything else.
Some people may also say that signed integers may be useful, even for non-negative values, to provide an error flag, usually -1.
That's a poor argument. Good API design uses a dedicated error type for error reporting, such as an enum.
Instead of having some hobbyist-level API like
int do_stuff (int a, int b); // returns -1 if a or b were invalid, otherwise the result
you should have something like:
err_t do_stuff (int32_t a, int32_t b, int32_t* result);
// returns ERR_A if a is invalid, ERR_B if b is invalid, ERR_XXX if... and so on
// the result is stored in [result], which is allocated by the caller
// upon errors the contents of [result] remain untouched
The API would then consistently reserve the return of every function for this error type.
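A hedged sketch of what such a dedicated error type and a call site might look like (err_t, ERR_OK and the rest are made-up names, not from any particular library):
#include <stdint.h>
#include <stdio.h>
typedef enum {
    ERR_OK = 0,
    ERR_A,      /* first argument invalid */
    ERR_B       /* second argument invalid */
} err_t;
/* toy implementation, just to make the sketch self-contained */
static err_t do_stuff(int32_t a, int32_t b, int32_t *result)
{
    if (a < 0) return ERR_A;
    if (b < 0) return ERR_B;
    *result = a + b;
    return ERR_OK;
}
int main(void)
{
    int32_t result;
    err_t err = do_stuff(1, 2, &result);
    if (err != ERR_OK)
        fprintf(stderr, "do_stuff failed with error %d\n", (int)err); /* result untouched */
    else
        printf("result = %d\n", (int)result);
    return 0;
}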
(And yes, many of the standard library functions abuse return types for error handling. This is because it contains lots of ancient functions from a time before good programming practice was invented, and they have been preserved the way they are for backwards-compatibility reasons. So just because you find a poorly-written function in the standard library, you shouldn't run off to write an equally poor function yourself.)
Overall, it sounds like you know what you are doing and giving signedness some thought. That probably means that knowledge-wise, you are actually already ahead of the people who wrote those posts and guides you are referring to.
The Google style guide, for example, is questionable. Similar things could be said about lots of other such coding standards that rely on "proof by authority". Just because it says Google, NASA or Linux kernel, people blindly swallow them no matter the quality of the actual contents. There are good things in those standards, but they also contain subjective opinions, speculations or blatant errors.
Instead, I would recommend referring to real professional coding standards, such as MISRA-C. It enforces lots of thought and care for things like signedness, type promotion and type size, where less detailed/less serious documents just skip past them.
There is also CERT C, which isn't as detailed and careful as MISRA, but it is at least a sound, professional document (and more focused towards desktop/hosted development).
There is one heavy-weight argument against widely using unsigned integers:
Premature optimization is the root of all evil.
We have all, on at least one occasion, been bitten by unsigned integers. Sometimes, as in your loop, sometimes in other contexts. Unsigned integers add a hazard, even though a small one, to your program, and you are introducing this hazard just to change the meaning of one bit. One little, tiny, insignificant-but-for-its-sign-meaning bit.
On the other hand, the integers we work with in bread-and-butter applications are often far below the limits of the integer types, more in the order of 10^1 than 10^7. Thus, the extra range of unsigned integers is in the vast majority of cases not needed. And when it is needed, it is quite likely that this extra bit won't cut it (when 31 bits are too few, 32 are rarely enough) and you'll need a wider or an arbitrary-width integer anyway.
The pragmatic approach in these cases is to just use the signed integer and spare yourself the occasional underflow bug. Your time as a programmer can be put to much better use.
From the C FAQ:
The first question in the C FAQ is which integer type should we decide to use?
If you might need large values (above 32,767 or below -32,767), use long. Otherwise, if space is very important (i.e. if there are large arrays or many structures), use short. Otherwise, use int. If well-defined overflow characteristics are important and negative values are not, or if you want to steer clear of sign-extension problems when manipulating bits or bytes, use one of the corresponding unsigned types.
Another question concerns types conversions:
If an operation involves both signed and unsigned integers, the situation is a bit more complicated. If the unsigned operand is smaller (perhaps we're operating on unsigned int and long int), such that the larger, signed type could represent all values of the smaller, unsigned type, then the unsigned value is converted to the larger, signed type, and the result has the larger, signed type. Otherwise (that is, if the signed type can not represent all values of the unsigned type), both values are converted to a common unsigned type, and the result has that unsigned type.
You can find it here. So basically, using unsigned integers can complicate the situation, mostly because of the arithmetic conversions: you'll have to either make all your integers unsigned or risk confusing the compiler and yourself. As long as you know what you are doing this is not really a risk per se, but it can introduce simple bugs.
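A small sketch of the kind of simple bug that rule can produce (the literal values are arbitrary):
#include <stdio.h>
int main(void)
{
    /* -1 is converted to unsigned int (a huge value), so the comparison is
       false even though -1 < 1 mathematically. */
    if (-1 < 1U)
        printf("what you might expect\n");
    else
        printf("-1 < 1U is false: -1 became %u\n", (unsigned int)-1);
    return 0;
}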
And when is it good to use unsigned integers? One situation is when using bitwise operations:
The << operator shifts its first operand left by a number of bits given by its second operand, filling in new 0 bits at the right. Similarly, the >> operator shifts its first operand right. If the first operand is unsigned, >> fills in 0 bits from the left, but if the first operand is signed, >> might fill in 1 bits if the high-order bit was already 1. (Uncertainty like this is one reason why it's usually a good idea to use all unsigned operands when working with the bitwise operators.)
taken from here
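A hedged sketch of that difference (assuming 32-bit int; the signed result is implementation-defined):
#include <stdio.h>
int main(void)
{
    unsigned int u = 0x80000000u; /* assuming 32-bit unsigned int */
    int s = -8;
    printf("%x\n", u >> 4); /* always fills with 0 bits: 0x08000000 */
    printf("%d\n", s >> 1); /* implementation-defined: often -4 (arithmetic shift),
                               but not guaranteed by the C standard */
    return 0;
}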
And I've seen this somewhere:
If it were best to use unsigned integers for values that are never negative, we would have started by using unsigned int in the main function: int main(int argc, char* argv[]). One thing is sure: argc is never negative.
EDIT:
As mentioned in the comments, the signature of main is due to historical reasons and apparently it predates the existence of the unsigned keyword.
Unsigned integers are an artifact from the past. This is from the time when processors could do unsigned arithmetic a little bit faster.
This is a case of premature optimization which is considered evil.
Actually, in 2005 when AMD introduced x86_64 (or AMD64, as it was then called), the 64-bit architecture for x86, they brought the ghosts of the past back: if a signed integer is used as an index and the compiler cannot prove that it is never negative, it has to insert a 32-to-64-bit sign extension instruction - because the default 32-to-64-bit extension is unsigned (the upper half of a 64-bit register gets cleared if you move a 32-bit value into it).
But I would recommend against using unsigned in any arithmetic at all, be it pointer arithmetic or just simple numbers.
for( unsigned int i = foo.Length() - 1; i >= 0; --i ) {}
Any recent compiler will warn about such a construct with "condition is always true" or similar. By using a signed variable you avoid such pitfalls altogether. Use ptrdiff_t instead.
A problem might be the C++ library: it uses the unsigned type size_t for sizes, which is required because of some rare corner cases with very large sizes (between 2^31 and 2^32) on 32-bit systems with certain boot switches (/3GB on Windows).
There are many more; comparisons between signed and unsigned come to mind, where the signed value automagically gets promoted to an unsigned one and thus becomes a huge positive number, when it was a small negative one before.
One exception for using unsigned exists: For bit fields, flags, masks it is quite common. Usually it doesn't make sense at all to interpret the value of these variables as a magnitude, and the reader may deduce from the type that this variable is to be interpreted in bits.
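For instance, a typical flags/mask pattern with unsigned types (the flag names and the status variable are made up):
#include <stdint.h>
#define FLAG_READY (1u << 0)
#define FLAG_ERROR (1u << 1)
#define FLAG_BUSY  (1u << 2)
static uint32_t status_flags; /* hypothetical status register shadow */
void set_busy(void)   { status_flags |= FLAG_BUSY; }
void clear_busy(void) { status_flags &= ~FLAG_BUSY; }
int  is_ready(void)   { return (status_flags & FLAG_READY) != 0; }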
The result will never be a negative value (as the section number, by the way). So why use a signed integer for this?
Because you might want to compare the return value to a signed value which is actually negative. The comparison should return true in that case, but the C standard specifies that the signed value gets promoted to unsigned in that case, and you will get false instead. I don't know about Objective-C though.
Related
I have a little VM for a programming language implemented in C. It supports being compiled under both 32-bit and 64-bit architectures as well as both C and C++.
I'm trying to make it compile cleanly with as many warnings enabled as possible. When I turn on CLANG_WARN_IMPLICIT_SIGN_CONVERSION, I get a cascade of new warnings.
I'd like to have a good strategy for when to use int versus either explicitly unsigned types, and/or explicitly sized ones. So far, I'm having trouble deciding what that strategy should be.
It's certainly true that mixing them—using mostly int for things like local variables and parameters and using narrower types for fields in structs—causes lots of implicit conversion problems.
I do like using more specifically sized types for struct fields because I like the idea of explicitly controlling memory usage for objects in the heap. Also, for hash tables, I rely on unsigned overflow when hashing, so it's nice if the hash table's size is stored as uint32_t.
But, if I try to use more specific types everywhere, I find myself in a maze of twisty casts everywhere.
What do other C projects do?
Just using int everywhere may seem tempting, since it minimizes the need for casting, but there are several potential pitfalls you should be aware of:
An int might be shorter than you expect. Even though, on most desktop platforms, an int is typically 32 bits, the C standard only guarantees a minimum length of 16 bits. Could your code ever need numbers larger than 2^15 − 1 = 32,767, even for temporary values? If so, don't use an int. (You may want to use a long instead; a long is guaranteed to be at least 32 bits.)
Even a long might not always be long enough. In particular, there is no guarantee that the length of an array (or of a string, which is a char array) fits in a long. Use size_t (or ptrdiff_t, if you need a signed difference) for those.
In particular, a size_t is defined to be large enough to hold any valid array index, whereas an int or even a long might not be. Thus, for example, when iterating over an array, your loop counter (and its initial / final values) should generally be a size_t, at least unless you know for sure that the array is short enough for a smaller type to work. (But be careful when iterating backwards: size_t is unsigned, so for(size_t i = n-1; i >= 0; i--) is an infinite loop! Using i != SIZE_MAX or i != (size_t) -1 should work, though; or use a do/while loop, but beware of the case n == 0!)
An int is signed. In particular, this means that int overflow is undefined behavior. If there's ever any risk that your values might legitimately overflow, don't use an int; use an unsigned int (or an unsigned long, or uintNN_t) instead.
Sometimes, you just need a fixed bit length. If you're interfacing with an ABI, or reading / writing a file format, that requires integers of a specific length, then that's the length you need to use. (Of course, in such situations, you may also need to worry about things like endianness, and so may sometimes have to resort to manually packing data byte-by-byte anyway.)
All that said, there are also reasons to avoid using the fixed-length types all the time: not only is int32_t awkward to type all the time, but forcing the compiler to always use 32-bit integers is not always optimal, particularly on platforms where the native int size might be, say, 64 bits. You could use, say, C99 int_fast32_t, but that's even more awkward to type.
Thus, here are my personal suggestions for maximum safety and portability:
Define your own integer types for casual use in a common header file, something like this:
#include <limits.h>
typedef int i16;
typedef unsigned int u16;
#if UINT_MAX >= 4294967295U
typedef int i32;
typedef unsigned int u32;
#else
typedef long i32;
typedef unsigned long u32;
#endif
Use these types for anything where the exact size of the type doesn't matter, as long as they're big enough. The type names I've suggested are both short and self-documenting, so they should be easy to use in casts where needed, and minimize the risk of errors due to using a too-narrow type.
Conveniently, the u32 and u16 types defined as above are guaranteed to be at least as wide as unsigned int, and thus can be used safely without having to worry about them being promoted to int and causing undefined overflow behavior.
Use size_t for all array sizes and indexing, but be careful when casting between it and any other integer types. Optionally, if you don't like to type so many underscores, typedef a more convenient alias for it too.
For calculations that assume overflow at a specific number of bits, either use uintNN_t, or just use u16 / u32 as defined above and explicit bitmasking with &. If you choose to use uintNN_t, make sure to protect yourself against unexpected promotion to int; one way to do that is with a macro like:
#define u(x) (0U + (x))
which should let you safely write e.g.:
uint32_t a = foo(), b = bar();
uint32_t c = u(a) * u(b); /* this is always unsigned multiply */
For external ABIs that require a specific integer length, again define a specific type, e.g.:
typedef int32_t fooint32; /* foo ABI needs 32-bit ints */
Again, this type name is self-documenting, with regard to both its size and its purpose.
If the ABI might actually require, say, 16- or 64-bit ints instead, depending on the platform and/or compile-time options, you can change the type definition to match (and rename the type to just fooint) — but then you really do need to be careful whenever you cast anything to or from that type, because it might overflow unexpectedly.
If your code has its own structures or file formats that require specific bitlengths, consider defining custom types for those too, exactly as if it was an external ABI. Or you could just use uintNN_t instead, but you'll lose a little bit of self-documentation that way.
For all these types, don't forget to also define the corresponding _MIN and _MAX constants for easy bounds checking. This might sound like a lot of work, but it's really just a couple of lines in a single header file.
Finally, remember to be careful with integer math, especially overflows.
For example, keep in mind that the difference of two n-bit signed integers may not fit in an n-bit int. (It will fit into an n-bit unsigned int, if you know it's non-negative; but remember that you need to cast the inputs to an unsigned type before taking their difference to avoid undefined behavior!)
Similarly, to find the average of two integers (e.g. for a binary search), don't use avg = (lo + hi) / 2, but rather e.g. avg = lo + (hi + 0U - lo) / 2; the former will break if the sum overflows.
You seem to know what you are doing, judging from the linked source code, which I took a glance at.
You said it yourself - using "specific" types makes you have more casts. That's not an optimal route to take anyway. Use int as much as you can, for things that do not mandate a more specialized type.
The beauty of int is that it is abstracted over the types you speak of. It is optimal in all cases where you need not expose the construct to a system unaware of int. It is your own tool for abstracting the platform for your program(s). It may also yield you speed, size and alignment advantage, depending.
In all other cases, e.g. where you want to deliberately stay close to machine specifications, int can and sometimes should be abandoned. Typical cases include network protocols where the data goes on the wire, and interoperability facilities - bridges of sorts between C and other languages, kernel assembly routines accessing C structures. But don't forget that sometimes you would want to in fact use int even in these cases, as it follows the platform's own "native" or preferred word size, and you might want to rely on that very property.
With platform types like uint32_t, a kernel might want to use these (although it may not have to) in its data structures if these are accessed from both C and assembler, as the latter doesn't typically know what int is supposed to be.
To sum up, use int as much as possible and resort to moving from more abstract types to "machine" types (bytes/octets, words, etc.) in any situation that requires it.
As to size_t and other "usage-suggestive" types - as long as syntax follows the semantics inherent to the type - say, using size_t for, well, size values of all kinds - I would not contest it. But I would not liberally apply it to anything just because it is supposedly guaranteed to be the largest type (regardless of whether that is actually true). That's a hidden rock you don't want to be stepping on later. Code has to be self-explanatory to the degree possible, I would say - having a size_t where none is naturally expected would raise eyebrows, for a good reason. Use size_t for sizes. Use offset_t for offsets. Use [u]intN_t for octets, words, and such things. And so on.
This is about applying semantics inherent in a particular C type, to your source code, and about the implications on the running program.
Also, as others have illustrated, don't shy away from typedef, as it gives you the power to efficiently define your own types, an abstraction facility I personally value. A good program source code may not even expose a single int, nevertheless relying on int aliased behind a multitude of purpose-defined types. I am not going to cover typedef here, the other answers hopefully will.
Keep large numbers that are used to access members of arrays, or to control buffers, as size_t.
For an example of a project that makes use of size_t, refer to GNU's dd.c, line 155.
Here are a few things I do. Not sure they're for everyone but they work for me.
Never use int or unsigned int directly. There always seems to be a more appropriately named type for the job.
If a variable needs to be a specific width (e.g. for a hardware register or to match a protocol) use a width-specific type (e.g. uint32_t).
For array iterators, where I want to access array elements 0 thru n, this should also be unsigned (no reason to access any index less than 0) and I use one of the fast types (e.g. uint_fast16_t), selecting the type based on the minimum size required to access all array elements. For example, if I have a for loop that will iterate through 24 elements max, I'll use uint_fast8_t and let the compiler (or stdint.h, depending how pedantic we want to get) decide which is the fastest type for that operation (see the sketch after this list).
Always use unsigned variables unless there is a specific reason for them to be signed.
If your unsigned variables and signed variables need to play together, use explicit casts and be aware of the consequences. (Luckily this will be minimized if you avoid using signed variables except where absolutely necessary.)
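As a sketch of the array-iterator point above (the array and its size are made up; uint_fast8_t is simply the narrowest fast type that can index 24 elements):
#include <stdint.h>
#define NUM_CHANNELS 24u /* hypothetical: 24 elements max */
static uint16_t channels[NUM_CHANNELS];
void clear_channels(void)
{
    /* uint_fast8_t is wide enough to index 24 elements; stdint.h maps it to
       whatever width is fastest for the target */
    for (uint_fast8_t i = 0; i < NUM_CHANNELS; i++)
        channels[i] = 0;
}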
If you disagree with any of those or have recommended alternatives please let me know in the comments! That's the life of a software developer... we keep learning or we become irrelevant.
Always.
Unless you have specific reasons for using a more specific type, such as being on a 16-bit platform and needing integers greater than 32767, or needing to ensure proper byte order and signedness for data exchange over a network or in a file (and unless you're resource constrained, consider transferring data in "plain text," meaning ASCII or UTF8 if you prefer).
My experience has shown that "just use 'int'" is a good maxim to live by and makes it possible to turn out working, easily maintained, correct code quickly every time. But your specific situation may differ, so take this advice with a bit of well-deserved scrutiny.
Most of the time, using int is not ideal. The main reason is that int is signed, and signed arithmetic can cause UB; signed integers can also be negative, something you don't need for most integers. Prefer unsigned integers. Secondly, data types reflect meaning and are a, very limited, way to document the expected range and values a variable may have. If you use int, you imply that you expect this variable to sometimes hold negative values, and that these values probably do not always fit into 8 bits but always fit into INT_MAX, which can be as low as 32767. Do not assume an int is 32 bits.
Always, think about the possible values of a variable and choose the type accordingly. I use the following rules:
Use unsigned integers except when you need to be able to handle negative numbers.
If you want to index an array, from the start, use size_t except when there are good reasons not to. Almost never use int for it; an int can be too small, and there is a high chance of creating a UB bug that isn't found during testing because you never tested with arrays large enough.
Same for array sizes and sizes of other objects: prefer size_t.
If you need to index an array with negative indices, which you may need for image processing, prefer ptrdiff_t. Be aware that ptrdiff_t can be too small, but that is rare.
If you have arrays that never exceed a certain size, you may use uint_fastN_t, uintN_t, or uint_leastN_t types. This can make a lot of sense, especially on an 8-bit microcontroller.
Sometimes, unsigned int can be used instead of uint_fast16_t, similarly int for int_fast16_t.
To handle the value of a single byte (or character, though this is not a real character because UTF-8 and Unicode sometimes use more than one code point per character), use int. An int can store -1 if you need an indicator for "error" or "not set", and a character literal is of type int. (This is true for C; for C++ you may use a different strategy.) There is the extremely rare possibility that a machine uses sizeof(int)==1 && CHAR_MIN==0, where a byte can not be handled with an int, but I never saw such a machine.
It can make sense to define your own types for different purposes.
Use explicit cast where casts are needed. This way the code is well defined and has the least amount of unexpected behaviour.
After a certain size, a project needs a list/enum of the native integer data types. You can use macros with the _Generic expression from C11 that only need to handle bool, signed char, short, int, long, long long and their unsigned counterparts to get the underlying native type from a typedefed one. This way your parsers and similar parts only need to handle 11 integer types instead of 56 standard integer types (if I counted correctly) plus a bunch of other non-standard types.
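A hedged sketch of such a _Generic-based macro (my_id_t and the macro name are made up; the point is that only the native integer types need to appear in the association list):
#include <stdio.h>
typedef unsigned int my_id_t; /* hypothetical project typedef */
#define NATIVE_TYPE_NAME(x) _Generic((x), \
    _Bool: "bool", \
    signed char: "signed char", unsigned char: "unsigned char", \
    short: "short", unsigned short: "unsigned short", \
    int: "int", unsigned int: "unsigned int", \
    long: "long", unsigned long: "unsigned long", \
    long long: "long long", unsigned long long: "unsigned long long")
int main(void)
{
    my_id_t id = 42;
    printf("my_id_t is really %s\n", NATIVE_TYPE_NAME(id)); /* prints "unsigned int" */
    return 0;
}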
I'm trying to find all instances in a C program that I've written in which I can improve performance. In a few places the program operates very slowly on account of large arrays allocated to the heap, where some of these arrays are integer arrays. I've read, here on stackoverflow and through other sources, that we should always make use of "unsigned" if there is no instance of a negative integer.
While I don't have many instances of division by factors of two, where performance could see a significant boost by making this change, is there a difference in the handling of memory for a large array of int vs. unsigned int? Similarly, does initializing a large array of ints with calloc operate differently when initializing the same array with unsigned int?
Thanks!
In ISO C (C99), int is signed and has a minimum range of at least -32767 through 32767 inclusive;
unsigned int has a minimum range of 0 through 65535 inclusive.
They are both encoded on the same number of bits, and thus allocating X ints versus X unsigned ints does not make any difference.
As Mat pointed out, calloc doesn't even care what types you're giving it.
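A minimal sketch of the point: both allocations request exactly the same number of bytes, and calloc just zero-fills them.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    size_t n = 1000000;
    int *a = calloc(n, sizeof *a);          /* same size request...   */
    unsigned int *b = calloc(n, sizeof *b); /* ...as this one         */
    printf("%zu == %zu\n", n * sizeof *a, n * sizeof *b);
    free(a);
    free(b);
    return 0;
}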
I think you're misinterpreting the advice you think you got. Whether to use signed or unsigned integer types can go either way with respect to performance, but unless you really need to optimize a particular bottleneck that you have measured to be a bottleneck, you should always be choosing the types that convey the intended semantic, not a type that "somebody told you is faster".
As for the specific question you asked, there is no way to search for data of a particular type "on the heap". Objects in C do not carry their types with them as part of their representation; while they formally have types, the type is represented in the code that accesses them, not in the objects themselves. So if you want to search for use of signed or unsigned integer objects, you should search your source code, not the heap. But again, this is not the way to solve a performance problem.
Firstly, your "always" in "should always make use of "unsigned" if there is no instance of a negative integer" is overly strong. It is largely a matter of personal preference and/or coding standard. There are arguments in favor of either approach. I personally prefer to follow that rule, i.e. all naturally unsigned quantities are represented by unsigned types in my code. But using a signed type in such cases does not necessarily represent a design error.
Secondly, all this has nothing to do with dynamic memory allocation or large arrays. How such arrays behave in memory is completely independent from what kind of data you store in them. Proper memory management is important for achieving good performance, but this is a completely independent matter, not in any way related to the question using signed or unsigned integer types.
Thirdly, even though unsigned types formally perform better in integer arithmetic, and even if your code performs massive amount of work with those integers, switching from signed to unsigned is unlikely to produce any notable improvement in performance.
Heap allocations in C are done with malloc, which knows nothing about your intent.
Very simple question:
I have a program doing lots and lots of mathematical computations over ints and long longs. To fit in an extra bit, I made the long longs unsigned, since I only dealt with positive numbers, and could now get a few more values.
Oddly enough, this gave me a 15% performance boost, which I confirmed came simply from making all the long longs unsigned.
Is this possible? Are mathematical operations really faster with unsigned numbers? I remember reading that there would be no difference, and the compiler automatically picks out the fastest way to go whether signed or unsigned. Is this 15% boost really from making the vars unsigned, or could it be something else affected in my code?
And, if it really is from making the vars unsigned, should I aim to make everything (even ints) unsigned, as I never need negative numbers, and every second is important if I can save it.
In some operations, signed integers are faster, in others, unsigned are faster:
In C, signed integer operations can be assumed not to wrap. The compiler will take advantage of this in loop optimization, for example. Comparisons can be optimized away similarly. (This can also lead to subtle bugs if you don't expect this).
On the other hand, unsigned integers do not have this assumption. However, not having to deal with a sign is a big advantage for some operations, for example: division. Unsigned division by a constant power of two is a simple shift, but (depending on your rounding rules) there's a conditional off-by-1 for negative numbers.
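A tiny sketch of that division case (the function names are made up; a compiler typically turns the unsigned version into a single shift, while the signed one needs a fix-up for negative values):
/* signed: extra code is needed so that e.g. -7 / 4 == -1 (rounding toward zero) */
int sdiv4(int x) { return x / 4; }
/* unsigned: a plain right shift is enough */
unsigned int udiv4(unsigned int x) { return x / 4; }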
Personally, I make a habit of only using unsigned integers unless I really, really do have a value which needs to be signed. It's not so much for performance as correctness.
You may see the effect magnified with long long, which (I'm guessing) is 64 bits in your case. The CPU usually doesn't have single instructions to deal with these types (in 32-bit mode), so the slight added complexity for signed operations will be more noticeable.
On a 32-bit processor, 64-bit integer operations are emulated; using unsigned instead of signed means the emulation library doesn't have to do extra work to propagate carry bits etc.
There are three cases where a compiler cares whether a variable is signed or unsigned:
When the variable is converted to a longer type
When the comparison operators (greater-than, etc.) are applied
When overflows might occur
On some machines, conversion of signed variables to longer types requires extra code; on other machines, a conversion may be performed as part of a 'load' or 'move' instruction.
Some machines (mainly small embedded microcontrollers) require more instructions to perform a signed-versus-signed comparison than unsigned-versus-unsigned, but most machines have a full array of both signed and unsigned compare instructions available.
When overflows occur with unsigned types, the compiler may have to add code to ensure that the defined behavior actually occurs. No such code is required for signed types, because anything that might happen in the absence of such code would be permitted by the standard.
The compiler doesn't pick if it's going to be unsigned or signed. But, yes, in theory, unsigned with unsigned is faster than signed with signed. If you really want to slow things down, you'll go with signed with unsigned. And even worse: floats with integers.
It depends on the processor, of course.
If I make a calculation with a variable where an intermediate part of the calculation goes higher than the bounds of that variable type, is there any hazard that some platforms may not like?
This is an example of what I'm asking:
int a, b;
a=30000;
b=(a*32000)/32767;
I have compiled this, and it does give the correct answer of 29297 (well, within truncating error, anyway). But the part that worries me is that 30,000*32,000 = 960,000,000, which is a 30-bit number, and thus cannot be stored in a 16-bit int. The end result is well within the bounds of an int, but I was expecting that whatever working part of memory would have the same size allocated as the largest source variables did, so an overflow error would occur.
This is just a small example to show my problem, I am trying to avoid using floating points by making the fraction be a fraction of the max amount able to be stored in that variable (in this case, a signed integer, so 32767 on the positive side), because the embedded system I'm using I believe does not have an FPU.
So how do most processors handle calculations out of the bounds of the source and destination variables?
On a 16-bit compiler/CPU, you can (almost) plan on that code giving incorrect results. This is a bit sad, since nearly every CPU (that has a multiply instruction at all) will produce and store the intermediate result, but no C compiler (of which I'm aware) will normally use it (and if you made a and b unsigned, it wouldn't be allowed to use it).
You have a few choices to deal with this. One is to write a small muldiv function in assembly language that does the multiplication (preserving the high word), then the division on that, and finally returns the value to C when it's been reduced back into range.
Another option is to do the math on unsigned integers, which at least allow you to figure out when a problem occurred. Unfortunately, none of the choices is what I'd call particularly appealing though...
As far as I know, most if not all processors will hold results for a word * word multiplication in a double word -- meaning, an 8 bit * 8 bit is stored in a 16-bit register(s) on an 8-bit processor, a 32-bit * 32 bit operation is stored in a 64-bit register(s) on a 32-bit machine. (At least, that's how it's been on all the embedded microcontrollers I've used)
If that weren't the case, the processor would be severely crippled in the sense of only allowing half-word * half-word multiplication.
AFAIK this kind of thing is formally "undefined". You have to do the algebra necessary to prevent overflow. That's always your first choice. Numeric stability is no accident, it requires some care in deciding when and how to do division and multiplication.
Or, you have to guarantee that you'll use an intermediate result buffer that's big enough.
Using a large intermediate buffer is what some C compilers do anyway. The language, however, doesn't make any guarantees.
So, to be sure that it works, most folks do something like this.
short a= 30000;
int temp= a;
int temp2= (temp*32000)/32767;
// here you can check for errors; if temp2 > 32767, you have overflow.
short b= (short)temp2;
Signed integer overflow is undefined behavior.
Almost any implementation you could possibly meet will wrap around on integer overflow, because (a) everyone uses 2's complement, in which arithmetic operations are bitwise identical for signed and unsigned types of the same size, and (b) wraparound is the defined behavior of unsigned types in C.
So, on an implementation with a 16 bit int, I would expect the result 0 for your calculation (and that is the result that it must have if you'd used an unsigned 16 bit int). But I'd code against the possibility it might throw a hardware exception, explode, etc.
Note that if you do the calculation with two 16 bit short variables on a machine with a 32 bit int, then you will generally get the "right" answer 29297, because the intermediate value (a*32000) is an int, and only gets truncated back to short at the end. I say "generally" because converting an out-of-bounds integer value to a signed integer type either gives an implementation-defined result or else raises an implementation-defined signal. But again, any implementation you'll encounter in polite company just takes a modulus.
Are you sure your compiler has 16 bit integers? On most systems nowadays, ints are 32 bits. Another possible reason you aren't getting an error is that some compilers will recognize that it can compute something like this at compile time and will do so.
If you are really concerned that you will end up with overflow, you can sometimes reorder or factor the formula differently so that no intermediate terms will overflow. In your example that would be hard to do since all of your terms are near the limit of a 16 bit value. Do you need the number to be exactly right, or can you approximate? If you can, you can do something like this:
int a, b;
a=30000;
//b=(a*32000)/32767 ~= a * (32000/32768) = a *(125/128)
b = (a / 128) * 125; // if a=30000, b = 29250 - about 0.16% error
Another option would be to use larger sized types for intermediate terms. If your compiler had 16 bit ints and 32 bit longs, you could do something like this:
int a, b;
a=30000;
b=((long)a*32000L)/32767L;
Really, there's no set answer for how to handle overflow. You need to evaluate each case on its own and decide what the best solution is.
Your compiler and target processor both have to do with the sizes of the various data types.
Compilers will usually promote variables to the largest easy-to-work-with size during calculations and then convert the results to whatever size is needed for an assignment at the end.
There are also C rules that govern promoting to sizes which are more difficult to work with for some calculations. If you are compiling for an AVR, which has 8-bit registers but defines an int to be 16 bits, many calculations end up using more registers than you might think they need, because of this promotion and the fact that constant numbers in your code have to be treated as int or unsigned int unless the compiler can prove to itself that this won't affect the outcome of the calculations.
Try rewriting your code with various different sizes of integers (short, int, long, long long) and see how that goes. You may also want to write a simple program that prints out the sizeof( ) of the standard predefined types.
If you need to worry about the sizes of your integer variables and/or the intermediate results of your calculations, then you should include stdint.h and use things like uint32_t and int64_t for your declarations and type casting.
The Microsoft C compiler warns when you try to compare two variables, and one is signed, and the other is unsigned. For example:
int a;
unsigned b;
if ( a < b ) { // warning C4018: '<' : signed/unsigned mismatch
}
Has this warning, in the history of the world, ever caught a real bug? Why's it there, anyway?
Never ignore compiler warnings.
Oh, it has. But the other way around. Ignoring that warning caused me a huge headache one day. I was writing a function that plotted a graph and mixed signed and unsigned variables. In one place, I compared a negative number to an unsigned one:
int32_t t; ...
uint32_t ut; ...
if(t < ut) {
...
}
Guess what happened? The signed number got promoted to the unsigned type, and thus was greater at the end, even though it was below 0 originally. It took me a couple of hours until I found the bug.
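One hedged way to make such a comparison behave as intended is to handle the negative case explicitly, for example:
#include <stdint.h>
/* returns nonzero if t < ut, treating negative t values correctly */
int less_than(int32_t t, uint32_t ut)
{
    return t < 0 || (uint32_t)t < ut;
}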
If you have to ask the question, you do not know enough about whether it is safe to disable it, so the answer is No.
I wouldn't disable it - I don't assume I always know better than the compiler (not least because I often don't), and more particularly because I sometimes make mistakes by oversight when a compiler does not.
You should change a and b to both use signed types, or both use unsigned types. But that may not be practical (e.g. it maybe outside your control).
The warning is there to catch comparisons between a signed integer with a negative value and an unsigned integer -- if the magnitudes of both numbers are small, the former will (incorrectly) be deemed larger than the latter.
Binary operators often convert both operands to the same type before doing the comparison; since one is unsigned, the int will also be converted to unsigned. Normally this won't cause too much trouble, but if your int is a negative number this will cause errors in comparisons.
e.g. -1 becomes 4294967295 when converted from signed to unsigned; now compare that with 100 (unsigned).
The warnings are there for a purpose... They cause you to think hard about your code!
Personally, I would always explicitly cast the signed --> unsigned and unsigned --> signed if possible. By doing this you are ensuring that you take ownership of the transaction and that you know what's going to happen. I realise that it might not always be possible, depending on the project, but always aim for 0 compiler warnings... it can only help!
I've been writing code longer than I'd care to admit. From personal experience, ignoring seemingly pedantic compiler warnings can sometimes yield very unpleasant results.
If they annoy you and you accept/understand the situation then set a cast and move on.
Eventually these things move from an overlooked nuance into a conscious decision when designing new code. The result leaves less room for Mickey Mouse corner cases to ruin your or your customer's day, and overall better-quality software.
I have even configured the compiler to make that warning a compile error, for the reasons all the other guys already mentioned.
If I ever encounter a signed/unsigned mismatch I ask myself why I chose different "signedness". It usually is a design error.
@gimel: The explanation about shooting off the entire leg, found behind your link, is really good for this problem.
-"Someone who avoids the simple problems may simply be heading for a not-so-simple one."
This is actually always true when you convert between different types and you don't check for those values that could hurt you.
/Johan
Update: The correct way to convert from uint to int is to check the values against limits.h, or something like that.
(But I seldom do that myself, even though I know I should... :-)
I think it's best to convert your unsigned number into a signed number (before the comparison).
Rather than the other way around.
Just one of the many ways in which C allows you to shoot yourself in the foot - you'd better know what you are doing. The C quote is attributed to Bjarne Stroustrup, creator of C++.