I have been advised to use the following options with GCC, as they help to avoid a lot of common errors. They turn on a bunch of warnings, and -Werror turns those warnings into errors.
gcc -pedantic -W -Wall -Wextra -Wshadow -Wstrict-overflow=5 -Wwrite-strings -std=c99 -Werror
Given the following test code:
#include <stdio.h>

int main(void)
{
    int arr[8] = {0, 10, 20, 30, 40, 50, 60, 70};
    int x;

    printf("sizeof(arr): %d\n", sizeof(arr));
    printf("sizeof(int): %d\n", sizeof(int));

    for (x = 0; x < sizeof(arr) / sizeof(int); x++)
    {
        printf("%d\n", arr[x]);
    }

    return 0;
}
I get this:
test.c:11: error: comparison between signed and unsigned
I know that one way I can fix this is turning the warnings off, but if turning them off were the answer, I wouldn't have been advised to use these settings in the first place.
Another way is to cast the stuff, but I have been told that casting is deprecated.
Also, I could make x into an unsigned int:
unsigned x;
But it doesn't solve the general problem when I have to compare signed values with unsigned values using these compiler options. Is there a cleaner way instead of casting?
Replace

int x;
/* ... */
for (x = 0; x < sizeof(arr) / sizeof(int); x++)

by

for (size_t x = 0; x < sizeof(arr) / sizeof(int); x++)
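For completeness, a sketch of the full test program with the counter as size_t (and %zu for printing the sizeof results, since those are size_t values and %d may also draw a -Wformat complaint on some platforms):

#include <stdio.h>

int main(void)
{
    int arr[8] = {0, 10, 20, 30, 40, 50, 60, 70};

    printf("sizeof(arr): %zu\n", sizeof(arr));
    printf("sizeof(int): %zu\n", sizeof(int));

    for (size_t x = 0; x < sizeof(arr) / sizeof(int); x++)
    {
        printf("%d\n", arr[x]);
    }

    return 0;
}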
But it doesn't solve the general problem when I have to compare signed values with unsigned values using these compiler options. Is there a cleaner way instead of casting?
In such cases, try to figure out if the signed number can ever have a value which will cause overflow. If not, you can ignore the warning. Otherwise a cast to the unsigned type (if this is the same size or bigger than the signed component) is good enough.
This really depends on the data type. It is possible to avoid the issue by converting the values to a type that contains a superset of all the values available in both the signed and the unsigned type. For instance, you could use a 64-bit signed value to compare an unsigned 32-bit and a signed 32-bit value.
However, this only covers a corner case, and you need to verify the sizes of your types first. Your best solution is to use the same type for both operands.
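For instance, a minimal sketch of that approach using the fixed-width types from <stdint.h> (assuming they are available on your platform):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t  s = -1;
    uint32_t u = 1;

    /* Widen both operands to a signed 64-bit type that can represent
       every value of both int32_t and uint32_t, then compare safely. */
    if ((int64_t)s < (int64_t)u)
        printf("s is smaller, as expected\n");

    return 0;
}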
If you must cast, do consider that you could be causing an overflow and consider if that is important to your application or not.
The crux of the matter is that comparing signed and unsigned values admits some weird cases. Consider, for instance, what happens if the unsigned array length is larger than the maximum that can be represented by a signed int. The signed counter overflows (remaining "less than" the array size), and you start addressing memory you didn't mean to...
The compiler generates a warning to make sure that you're thinking about them. Using -Werror promotes that warning to an error and stops the compilation.
Either be rigorous about choosing the signedness of your types, or cast the trouble away when you're sure it doesn't apply, or get rid of -Werror and make it a policy to address all warnings with a fix or an explanation...
One workaround would be to selectively disable that one warning in this special case.
GCC has pragma diagnostic ignored "-Wsomething"
// Disable a warning for a block of code:
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wmissing-prototypes"
// ... Some code where the specified warning should be suppressed ...
#pragma GCC diagnostic pop
Recent versions of GCC (I am not actually sure since when, but 4.8.x should support it) print the corresponding -Wsomething option as part of the diagnostic. This is important, since most warning options are not set explicitly but en bloc with options like -Wall.
An error message would look like this:
readers.c: In function ‘buffered_fullread’:
readers.c:864:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if(got < sizeof(readbuf)) /* That naturally catches got == 0, too. */
The [-Werror=sign-compare] part tells you that you can use "-Wsign-compare" as the "-Wsomething" to suppress the warning.
And of course, you should only do that where it is appropriate (it does not exactly help readability), e.g. when the behaviour the compiler warns about is exactly what you want (or when you cannot introduce bigger changes in the code base).
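Applied to the loop from the question, a sketch would look like this (assuming a GCC new enough to support the diagnostic pragmas):

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wsign-compare"
    for (x = 0; x < sizeof(arr) / sizeof(int); x++)
    {
        printf("%d\n", arr[x]);
    }
#pragma GCC diagnostic pop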
test.c:11: error: comparison between signed and unsigned
You could declare x as an unsigned int, since size_t is unsigned.
EDIT:
If you don't want to cast, and don't want to declare it as unsigned, I don't think there's much to do.
Maybe bitwise operations are a way of solving it, removing the sign bit. I have to say, IMO it's very questionable though.
We suppress this warning in our Visual Studio compiles, since it happens a lot and almost never means anything significant. Of course, not all coding standards allow that.
You can make types agree (declaring variables to be size_t or unsigned int instead of int, for example), or you can cast, or you can change your compilation line. That's about it.
Regardless of the casting deprecation dilemma, I'd still suggest separating out the logic from the for loop.
int range = (int)(sizeof(arr) / sizeof(int));
int x;

for (x = 0; x < range; x++)
{
    printf("%d\n", arr[x]);
}
Although this uses a cast, which you said was deprecated, it makes clear where the cast is taking place. In general, I advise against cramming a lot of logic into your for loop declarations. Especially in your case, where you're doing size_t division, which (because it is integer division) could truncate the result. The for loop declaration should be clean and should generate no errors. The cast occurs in a different location, which means that if you want to change the way you create the range, you don't have to make the for declaration even longer.
Related
I have always, for as long as I can remember and ubiquitously, done this:
for (unsigned int i = 0U; i < 10U; ++i)
{
// ...
}
In other words, I use the U specifier on unsigned integers. Now, having just looked at this for far too long, I'm wondering why I do this. Apart from signifying intent, I can't think of a reason why it's useful in trivial code like this.
Is there a valid programming reason why I should continue with this convention, or is it redundant?
First, I'll state what is probably obvious to you, but your question leaves room for it, so I'm making sure we're all on the same page.
There are obvious differences between unsigned ints and regular ints: the difference in their range (-2,147,483,648 to 2,147,483,647 for an int32 and 0 to 4,294,967,295 for a uint32), and the difference in which bits are shifted in at the most significant end when you use the right-shift >> operator.
The suffix is important when you need to tell the compiler to treat the constant value as a uint instead of a regular int. This may be important if the constant is outside the range of a regular int but within the range of a uint. The compiler might throw a warning or error in that case if you don't use the U suffix.
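A small illustration (the exact sizes are platform-dependent; a typical platform with 32-bit int is assumed here):

#include <stdio.h>

int main(void)
{
    /* 3000000000 does not fit in a 32-bit int, so without a suffix the
       constant silently gets a wider signed type (long or long long);
       with the U suffix it is an unsigned int from the start. */
    printf("%zu\n", sizeof(3000000000));   /* typically 8 */
    printf("%zu\n", sizeof(3000000000U));  /* typically 4 */
    return 0;
}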
Other than that, Daniel Daranas mentioned in comments the only thing that happens: if you don't use the U suffix, you'll be implicitly converting the constant from a regular int to a uint. That's a tiny bit extra effort for the compiler, but there's no run-time difference.
Should you care? Here's my answer, (in bold, for those who only want a quick answer): There's really no good reason to declare a constant as 10U or 0U. Most of the time, you're within the common range of uint and int, so the value of that constant looks exactly the same whether it's a uint or an int. The compiler will immediately take your const int expression and convert it to a const uint.
That said, here's the only argument I can give you for the other side: semantics. It's nice to make code semantically coherent. And in that case, if your variable is a uint, it doesn't make sense to set that value to a constant int. If you have a uint variable, it's clearly for a reason, and it should only work with uint values.
That's a pretty weak argument, though, particularly because as a reader, we accept that uint constants usually look like int constants. I like consistency, but there's nothing gained by using the 'U'.
I see this often when using defines to avoid signed/unsigned mismatch warnings. I build a code base for several processors using different tool chains and some of them are very strict.
For instance, removing the ‘u’ in the MAX_PRINT_WIDTH define below:
#define MAX_PRINT_WIDTH (384u)
#define IMAGE_HEIGHT (480u) // 240 * 2
#define IMAGE_WIDTH (320u) // 160 * 2 double density
Gave the following warning:
"..\Application\Devices\MartelPrinter\mtl_print_screen.c", line 106: cc1123: {D} warning:
comparison of unsigned type with signed type
for ( x = 1; (x < IMAGE_WIDTH) && (index <= MAX_PRINT_WIDTH); x++ )
You will probably also see ‘f’ for float vs. double.
I extracted this sentence from a comment, because it's a widely believed incorrect statement, and also because it gives some insight into why explicitly marking unsigned constants as such is a good habit.
...it seems like it would only be useful to keep it when I think overflow might be an issue? But then again, haven't I gone some ways to mitigating for that by specifying unsigned in the first place...
Now, let's consider some code:
int something = get_the_value();
// Compute how many 8s are necessary to reach something
unsigned count = (something + 7) / 8;
So, does the unsigned mitigate potential overflow? Not at all.
Let's suppose something turns out to be INT_MAX (or close to that value). Assuming a 32-bit machine, we might expect count to be 2^28, or 268,435,456. But it's not.
Telling the compiler that the result of the computation should be unsigned has no effect whatsoever on the typing of the computation. Since something is an int, and 7 is an int, something + 7 will be computed as an int, and will overflow. Then the overflowed value will be divided by 8 (also using signed arithmetic), and whatever that works out to be will be converted to an unsigned and assigned to count.
With GCC, arithmetic is actually performed in 2's complement, so the overflowed sum will be a very large negative number; after the division it will be a not-so-large negative number, and that ends up being a largish unsigned number, much larger than the one we were expecting.
Suppose we had specified 7U instead (and maybe 8U as well, to be consistent). Now it works. It works because something + 7U is computed with unsigned arithmetic, which doesn't overflow (or even wrap around).
Of course, this bug (and thousands like it) might go unnoticed for quite a lot of time, blowing up (perhaps literally) at the worst possible moment...
(Obviously, making something unsigned would have mitigated the problem. Here, that's pretty obvious. But the definition might be quite a long way from the use.)
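To make the difference concrete, a minimal sketch (assuming a 32-bit int; the buggy line is left commented out because the signed overflow it triggers is undefined behaviour):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int something = INT_MAX;

    /* Buggy: something + 7 is done in signed int arithmetic and overflows,
       which is undefined behaviour. */
    /* unsigned count = (something + 7) / 8; */

    /* Fixed: 7U pushes the addition into unsigned arithmetic, which is
       well defined, so count comes out as the expected 268435456 (2^28). */
    unsigned count = (something + 7U) / 8U;

    printf("%u\n", count);
    return 0;
}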
One reason you should do this for trivial code[1] is that the suffix forces a type on the literal, and the type may be very important to produce the correct result.
Consider this bit of (somewhat silly) code:
#define magic_number(x) _Generic((x), \
        unsigned int : magic_number_unsigned, \
        int          : magic_number_signed \
    )(x)

unsigned magic_number_unsigned(unsigned u) {
    // ... something meaningful with an unsigned argument
    return u;
}

unsigned magic_number_signed(int i) {
    // ... something meaningful with a signed argument
    return (unsigned)i;
}

int main(void) {
    unsigned magic = magic_number(10u);
    (void)magic;   /* silence the unused-variable warning */
}
It's not hard to imagine those functions actually doing something meaningful based on the type of their argument. Had I omitted the suffix, the generic selection would have picked the signed variant and produced a wrong result for a very trivial call.
[1] But perhaps not the particular code in your post.
In this case, it's completely useless.
In other cases, a suffix might be useful. For instance:
#include <stdio.h>

int main(void)
{
    printf("%zu\n", sizeof(123));
    printf("%zu\n", sizeof(123LL));
    return 0;
}
On my system, it will print 4 then 8.
But back to your code, yes it makes your code more explicit, nothing more.
Given some code like :
unsigned short val;
//<some unimportant code that sets val>
if(val>65535) val=65535;
How can we disable the "comparison is always false due to limited range of data type" warning from gcc?
This is using GCC 4.1.2, which doesn't have the #pragma GCC diagnostic construct.
I can't find a -W option to turn it off either.
I have tried doing a cast of val:
if (((long)val) > 65535) val = 65535;
But it seems GCC is clever enough to still give the warning.
The compiler flags are -Wall, and that is it. No -Wextra.
I don't really want to remove the check - short might be 16 bits on this target, but that doesn't mean it has to be. I am happy to write the check a different way though.
I want to turn -Werror on, so this warning has to go.
EDIT 1
Unimportant code not so unimportant:
unsigned short val;
float dbValue;  /* actually a function parameter */

val = ((unsigned short) dbValue) & 0xffff;
if (val > 65535) val = 65535;
So if the size of short changes, we will get overflow; in any case, the range check becomes pointless and can be deleted, or, more to the point, applied to the float value instead.
EDIT 2
Whilst the answers so far have helped improve the code, it would still be useful to know if there is any means to disable this warning in gcc 4.1.2 - which is what the question was.
It seems it can be done in recent releases using -Wno-type-limits
In theory there is no answer to your question: if what you intend to write is a no-op (on the particular architecture being targeted) for all possible values of val with which the code can be reached, then the compiler can warn that it is a no-op for all…
In practice, comparing (val + 0) instead of val may be enough to let the compiler produce the same code as it did with your original version and at the same time shut up about it.
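A sketch of that workaround (whether it actually silences GCC 4.1.2 would have to be verified):

/* val + 0 has type int after promotion, so the comparison is no longer
   against a plain limited-range variable; some GCC versions stop warning
   while still generating the same code. */
if (val + 0 > 65535) val = 65535;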
I would recommend you write
#include <limits.h>
…
#if USHRT_MAX > 65535
if(val>65535) val=65535;
#endif
Even used without a comment, I feel it makes the intention clearer than any convoluted trick.
You could make use of conditional compilation:
#include <limits.h>
unsigned short val;
#if USHRT_MAX > 65535
if(val>65535) val=65535;
#endif
Using the min macro :
#define min(x, y) (((x) < (y)) ? (x) : (y))
val=min(val,65535);
gets rid of the warning. So we can also use the same implementation directly
val = (val) < (65535) ? val : 65535;
The trick here is that while a 16-bit integer cannot be greater than 65535, it can be equal to 65535, and the check in this version is "is it lower than 65535?".
val was declared as an unsigned short, which usually (see note) means 16-bit unsigned (0 - 65535), so your test can never be true, because val can never be more than 2^16-1 (65535) with this compiler, and with this target architecture.
Casting it as a long won't change anything. You may want to declare val as unsigned long though.
EDIT
note: as rightly implied in the comments section, the exact width of the short and int types depends on both the compiler and the target architecture. Wikipedia has a section on basic C data types that covers the question in greater detail.
I'm trying to learn C and got stuck on data type sizes at the moment.
Have a look at this code snippet:
#include <stdio.h>
#include <limits.h>

int main() {
    char a = 255;
    char b = -128;

    a = -128;
    b = 255;

    printf("size: %lu\n", sizeof(char));
    printf("min: %d\n", CHAR_MIN);
    printf("max: %d\n", CHAR_MAX);
}
The printf-output is:
size: 1
min: -128
max: 127
How is that possible? The size of char is 1 Byte and the default char seems to be signed (-128...127). So how can I assign a value > 127 without getting an overflow warning (which I get when I try to assign -128 or 256)? Is gcc automatically converting to unsigned char? And then, when I assign a negative value, does it convert back? Why does it do so? I mean, all this implicitness wouldn't make it easier to understand.
EDIT:
Okay, it's not converting anything:
char a = 255;
char b = 128;
printf("%d\n", a); /* -1 */
printf("%d\n", b); /* -128 */
So it starts counting from the bottom up. But why doesn't the compiler give me a warning? And why does it warn when I try to assign 256?
See 6.3.1.3/3 in the C99 Standard
... the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
So, if you don't get a signal (if your program doesn't stop) read the documentation for your compiler to understand what it does.
gcc documents the behaviour ( in http://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html#Integers-implementation ) as
The result of, or the signal raised by, converting an integer to a signed integer type when the value cannot be represented in an object of that type (C90 6.2.1.2, C99 6.3.1.3).
For conversion to a type of width N, the value is reduced modulo 2^N to be within range of the type; no signal is raised.
how can I assign a value > 127
The result of converting an out-of-range integer value to a signed integer type is either an implementation-defined result or an implementation-defined signal (6.3.1.3/3). So your code is legal C, it just doesn't have the same behavior on all implementations.
without getting an overflow warning
It's entirely up to GCC to decide whether to warn or not about valid code. I'm not quite sure what its rules are, but I get a warning for initializing a signed char with 256, but not with 255. I guess that's because a warning for code like char a = 0xFF would normally not be wanted by the programmer, even when char is signed. There is a portability issue, in that the same code on another compiler might raise a signal or result in the value 0 or 23.
-pedantic enables a warning for this (thanks, pmg), which makes sense since -pedantic is intended to help write portable code. Or arguably doesn't make sense, since as R.. points out it's beyond the scope of merely putting the compiler into standard-conformance mode. However, the man page for gcc says that -pedantic enables diagnostics required by the standard. This one isn't, but the man page also says:
Some users try to use -pedantic to check programs for strict ISO C
conformance. They soon find that it does not do quite what they want:
it finds some non-ISO practices, but not all---only those for which
ISO C requires a diagnostic, and some others for which diagnostics
have been added.
This leaves me wondering what a "non-ISO practice" is, and suspecting that char a = 255 is one of the ones for which a diagnostic has been specifically added. Certainly "non-ISO" means more than just things for which the standard demands a diagnostic, but gcc obviously is not going so far as to diagnose all non-strictly-conforming code of this kind.
I also get a warning for initializing an int with ((long long)UINT_MAX) + 1, but not with UINT_MAX. Looks as if by default gcc consistently gives you the first power of 2 for free, but after that it thinks you've made a mistake.
Use -Wconversion to get a warning about all of those initializations, including char a = 255. Beware that will give you a boatload of other warnings that you may or may not want.
all this implicitness wouldn't make it easier to understand
You'll have to take that up with Dennis Ritchie. C is weakly-typed as far as arithmetic types are concerned. They all implicitly convert to each other, with various levels of bad behavior when the value is out of range depending on the types involved. Again, -Wconversion warns about the dangerous ones.
There are other design decisions in C that mean the weakness is quite important to avoid unwieldy code. For example, the fact that arithmetic is always done in at least an int means that char a = 1, b = 2; a = a + b involves an implicit conversion from int to char when the result of the addition is assigned to a. If you use -Wconversion, or if C didn't have the implicit conversion at all, you'd have to write a = (char)(a+b), which wouldn't be too popular. For that matter, char a = 1 and even char a = 'a' are both implicit conversions from int to char, since C has no literals of type char. So if it wasn't for all those implicit conversions either various other parts of the language would have to be different, or else you'd have to absolutely litter your code with casts. Some programmers want strong typing, which is fair enough, but you don't get it in C.
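A small sketch of that last point (the cast is exactly what -Wconversion pushes you toward; without that flag the narrowing back to char just happens silently):

#include <stdio.h>

int main(void)
{
    char a = 1, b = 2;     /* both initializers are int constants */

    /* a + b is computed as int, then narrowed back to char on assignment;
       with -Wconversion GCC warns here unless the cast is written out. */
    a = (char)(a + b);

    printf("%d\n", a);     /* prints 3 */
    return 0;
}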
Simple solution:
A signed char can hold values from -128 to 127. When you assign 129, the value wraps around: 127 (the maximum, which is still valid) plus the extra 2 steps lands on -127. (Assign char a = 129 and print it; the value comes out as -127.)
Think of the possible char values as a circle:
...126, 127, -128, -127, -126, ... -1, 0, 1, 2, ...
Whatever you assign, the final stored value comes from this wraparound.
In my project I have turned on treating warnings as errors, and I am compiling with the -pedantic and -ansi flags. I am using the GCC compiler. In this project I have to use third-party source code which has a lot of warnings. Since I treat warnings as errors, I am having a tough time fixing their code.
Most of the warnings are about invalid conversion from int to size_t or vice versa. In some cases, I won't be able to make both variables the same type; I mean, I won't be able to change something to size_t. In such cases I am doing an explicit cast. Something like,
size_t a = (size_t) atoi(val);
I am wondering is this the correct approach? Is there any problem in doing cast like this?
If these warnings are minor, can I suppress them only in their files? How do I do the same in MSVC?
Casting is the only approach if you want to shut up the compiler per instance in a portable way. It is fine as long as you know what you're doing, e.g. that you can ensure the result of atoi will never be negative.
In GCC, you can turn off all sign conversion warnings with the -Wno-sign-conversion flag. There is also -Wno-sign-compare (for stuff like 2u > 1) but it won't be relevant unless you use -Wextra.
You could also use the diagnostic pragmas like
#pragma GCC diagnostic ignored "-Wsign-conversion"
In MSVC, there are several warnings relevant to signed/unsigned mismatch, e.g.:
Level 4: C4389, C4245, C4365
Level 3: C4018 (2u > 1)
Level 2: C4267 (size_t → int)
To disable a warning in MSVC, you could add a #pragma warning e.g.
#pragma warning (disable : 4267)
or add a /wd4267 flag in the compiler options.
Perhaps you should use strtoul instead of atoi.
size_t a = strtoul(val, NULL, 0);
(This produces no warning only if size_t is at least as large as unsigned long. On most platforms that is true, but it is not guaranteed.)
The advantage is you could perform error checking with this function, e.g.
#include <stdlib.h>
#include <stdio.h>

int main () {
    char val[256];
    fgets(val, 256, stdin);

    char* endptr;
    size_t a = strtoul(val, &endptr, 0);
    if (val == endptr) {
        printf("Not a number\n");
    } else {
        printf("The value is %zu\n", a);
    }
    return 0;
}
Have a look at the OpenOffice wiki on the best practices for error-free code: http://wiki.services.openoffice.org/wiki/Writing_warning-free_code
They suggest static casts for these conversions, and then supply a pragma to disable warnings for a particular section of code.
I personally consider this kind of warning idiotic and would turn it off, but the fact that you're asking about it suggests that you might be sufficiently unfamiliar with conversions between integer types and the differences in signed and unsigned behavior that the warning could be useful to you.
Back on the other hand again, I really despise [explicit] casts. The suggestion to use strtoul instead of atoi is probably a very good one. I see you commented that atoi was only an example, but the same principle applies in general: use functions that return the type you want rather than forcing a different type into the type you want. If the function is one you wrote yourself rather than a library function, this may just mean fixing your functions to return size_t for sizes rather than int.
I have the following doubt regarding the "int" flavors (unsigned int, long int, long long int).
When we do operations (*, /, +, -) between int and one of its flavors (let's say long int) on a 32-bit or a 64-bit system, does an implicit typecast happen for "int"?
For example:
int x;
long long int y = 2000;
x = y; /* the wider value is assigned to the narrower one, so data truncation may happen */
I expected the compiler to give a warning for this, but I am not getting any such warning.
Is this due to an implicit typecast happening for "x" here?
I am using gcc with the -Wall option. Will the behavior change between 32-bit and 64-bit?
-Wall does not activate all possible warnings. -Wextra enables other warnings. Anyway, what you do is a perfectly "legal" operation, and since the compiler can't always know at compile time the value of the datum that could be "truncated", it is OK that it does not warn: the programmer should already be aware that a "large" integer may not fit into a "small" integer, so it is usually left to the programmer. If you think your program was written without this in mind, add -Wconversion.
Conversion without an explicit cast operator is perfectly legal in C, but the result of an out-of-range conversion to a signed type is implementation-defined rather than portable. In your case, int x; is signed, so if you try to store a value in it that's outside the range of int, you get an implementation-defined result (or an implementation-defined signal is raised). On the other hand, if x were declared as unsigned x;, the behavior is fully specified: the conversion is via reduction modulo UINT_MAX+1.
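A minimal illustration of the well-defined unsigned case:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Storing an out-of-range value in an unsigned type is well defined:
       the value is reduced modulo UINT_MAX + 1, so -1 becomes UINT_MAX. */
    unsigned x = -1;
    printf("%u\n", x);
    printf("%u\n", UINT_MAX);
    return 0;
}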
As for arithmetic, when you perform arithmetic between integers of different types, the 'smaller' type is converted to the 'larger' type prior to the arithmetic. The compiler is free to optimize out this conversion, of course, if it does not affect the results, which leads to idioms like casting a 32-bit integer to 64 bits before multiplying to get a full 64-bit result. The conversion rules get a bit confusing and can have unexpected results when signed and unsigned values are mixed. You should look them up if you care to know, since they are hard to explain informally.
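A sketch of that idiom, using the fixed-width types from <stdint.h> (the exact wrapped value depends on the operands):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 100000, b = 100000;

    /* Done in 32 bits, the product wraps around. */
    uint64_t wrong = a * b;

    /* Casting one operand first makes the whole multiplication 64-bit. */
    uint64_t right = (uint64_t)a * b;

    printf("%llu\n", (unsigned long long)wrong);  /* 1410065408 */
    printf("%llu\n", (unsigned long long)right);  /* 10000000000 */
    return 0;
}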
If you are worried, you can include <stdint.h> and use types with defined lengths, such as uint16_t for a 16-bit unsigned integer.
Your code is perfectly valid (as already said by others). If you want to program in a portable way, in most cases you should not use the bare C types int, long or unsigned int, but types that say a bit more about what you are planning to do with them.
E.g. for array indices, always use size_t. Regardless of whether you are on a 32- or 64-bit system, this will be the right type. Or, if you want the integer of maximal width on whatever platform you happen to land on, use intmax_t or uintmax_t.
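A quick sketch of both suggestions (C99 assumed for the loop-scoped index):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int arr[8] = {0};

    /* size_t is the natural type for array indices and sizes, on both
       32-bit and 64-bit systems. */
    for (size_t i = 0; i < sizeof arr / sizeof arr[0]; i++)
        arr[i] = (int)i;

    /* uintmax_t is the widest unsigned integer type the platform offers. */
    uintmax_t widest = UINTMAX_MAX;

    printf("%d %ju\n", arr[7], widest);
    return 0;
}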
See http://gcc.gnu.org/ml/gcc-help/2003-06/msg00086.html -- the code is perfectly valid C/C++.
You might want to look at static analysis tools (sparse, llvm, etc.) to check for this type of truncation.