Simple: if I compare a signed and an unsigned variable in GCC while compiling with -Wall, I get a warning.
Using this code:
#include <stdio.h>

int main(int argc, char* argv[])
{
    /* const */ unsigned int i = 0;
    if (i != argc)
        return 1;
    return 0;
}
I get this warning:
<source>: In function 'int main(int, char**)':
<source>:6:8: warning: comparison of integer expressions of different signedness: 'unsigned int' and 'int' [-Wsign-compare]
6 | if (i != argc)
| ~~^~~~~~~
Compiler returned: 0
However, if I uncomment the const, the compiler is happy. I can reproduce this with almost every GCC version (see https://godbolt.org/z/b6eoc1). Is this a bug in GCC?
I think what you are missing is compiler optimization. Without const, that variable is exactly that, a variable, meaning its value can change. Since it is an unsigned int, it can hold values larger than any int can represent.
const unsigned int i = 2147483648;
You will get your warning back if you assign a value greater than the largest value of int to that unsigned int.
However, if it is const, the compiler knows its value and knows that it will not change, so there is no problem with the comparison.
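A complete sketch of that case (my own expansion of the fragment above, assuming a typical target where int is 32 bits):
#include <stdio.h>

int main(int argc, char *argv[])
{
    /* 2147483648 does not fit in a 32-bit int, so the comparison really can
       produce an incorrect result for negative argc, and GCC warns even
       though i is const. */
    const unsigned int i = 2147483648u;
    (void)argv;
    if (i != argc)
        return 1;
    return 0;
}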
If you take a look at the assembly, you will see that without const, it actually takes the value of the variable to compare:
movl $0, -4(%rbp)
movl -20(%rbp), %eax
cmpl %eax, -4(%rbp)
Now, if it is const, it will not bother with the variable at all, it just takes the value:
movl $0, -4(%rbp)
cmpl $0, -20(%rbp)
I'd say it's a compiler bug in the -Wsign-compare option.
Test by compiling your example with -Wall -Wextra -O3: with -O3 added, the warning suddenly goes away in the const case, even though the generated machine code is identical with or without const. This doesn't make any sense.
Naturally, neither const nor the generated machine code has any effect on the signedness of the C operands, so the warning shouldn't appear or disappear depending on type qualifiers or optimizer settings.
Simply use -Wall -Wextra and you will get your warning back.
I would advise using the -Wall -Wextra -pedantic compiler options.
https://godbolt.org/z/TvqeKn
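As a quick check (my own invocation; sign.c is a hypothetical file name for the question's snippet), compiling it as C with
gcc -std=c11 -Wall -Wextra -pedantic -c sign.c
produces the -Wsign-compare warning quoted above for both the const and the non-const versions.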
EDIT
As a clarification in response to the OP's rather unfriendly comment: -Wextra enables additional warnings, including the one the OP wants:
warning: comparison of integer expressions of different signedness:
'unsigned int' and 'int' [-Wsign-compare]
9 | if (i != argc)
The question is tagged C but links to a C++ Godbolt example. Here is a table showing when the warning is issued:¹

                     C non-const   C const   C++ non-const   C++ const
  Default warnings   No            No        No              No
  -Wall              No            No        Yes             No
  -Wall -Wextra      Yes           Yes       Yes             No
So, in C, GCC issues the warning with -Wextra regardless of const qualification.
In C++, GCC issues the warning with -Wall but treats const-qualified objects as known values for which the warning may be suppressed.
The GCC documentation says, for -Wsign-compare:
Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. In C++, this warning is also enabled by -Wall. In C, it is also enabled by -Wextra.
Note that it does not say it warns when there is a comparison between signed and unsigned values but rather when such a comparison could produce an incorrect result. Therefore, not providing a warning when the definition of the object is such that the comparison cannot produce an incorrect result is not a bug.
The word “could” leaves latitude for what the compiler “knows” about the object. Failing to determine that the C const case cannot produce an incorrect result could be described as a bug, although it may be better described as a shortcoming.
Footnote
¹ “Const” in the table is specifically use of an object that is const-qualified and whose value is immediately available to the compiler via a visible definition. I did not test cases where, for example, an identifier is declared for a const-qualified object but its definition is in another translation unit.
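As a practical aside (my own sketch, not part of the answer above): if you want a mixed-sign comparison that can never produce an incorrect result, regardless of const qualification or warning flags, handle the sign explicitly:
/* Hypothetical helper: returns nonzero if the int n and the unsigned int u
   represent the same mathematical value. A negative n can never match. */
int equals_unsigned(int n, unsigned int u)
{
    return n >= 0 && (unsigned int)n == u;
}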
I'm posting this because I couldn't find a suitable answer elsewhere, not because similar things haven't been asked before.
A project compiles just fine with the following:
#include <stdint.h>

extern int bar;   /* assumed to be declared elsewhere in the real project */

void foo(void)
{
    if (bar)
    {
        static const uint8_t ConstThing = 20;
        static uint8_t StaticThing = ConstThing;
        //...
    }
}
But a cloned project does not, throwing an "initializer element is not constant" error on the StaticThing initializer. It looks like we've not completely cloned the compiler settings / warning levels etc., but we can't find the difference right now.
Using arm-none-eabi-gcc (4.7.3) with -std=gnu99. Compiling for Kinetis.
If anyone knows which settings control cases when this is legal and illegal in the same compiler, I'm all ears. Thanks in advance.
Found the difference.
If optimisation is -O0 it doesn't compile.
If optimisation is -OS it does.
I'm guessing it produces 'what you were asking for, a better way' and fixes it.
Didn't see that coming. Thanks for your input everyone.
Converting some of my comments into an answer.
In standard C, ConstThing is a constant integer, but not an integer constant, and you can only initialize static variables with integer constants. The rules in C++ are different, as befits a different language.
C11 §6.7.9 Initialization ¶4 states:
All the expressions in an initializer for an object that has static or thread storage duration shall be constant expressions or string literals.
§6.4.4.1 Integer constants defines integer constants.
§6.6 Constant expressions defines constant expressions.
…I'm not sure I understand the difference between a 'constant integer' and an 'integer constant'.
Note that ConstThing is not one of the integer constants defined in §6.4.4.1 — so, whatever else it is, it is not an integer constant. Since it is a const-qualified int, it is a constant integer, but that is not the same as an integer constant. Sometimes, the language of the standard is surprising, but it is usually very precise.
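A short illustration of the distinction (my own sketch, not taken from the answer), as it applies to static initializers in C:
#include <stdint.h>

const uint8_t ConstThing = 20;   /* a const-qualified object: a "constant integer" */

uint8_t a = 20;                  /* OK: 20 is an integer constant (C11 6.4.4.1) */
/* uint8_t b = ConstThing; */    /* not OK in strict C: ConstThing is not a constant
                                    expression (C11 6.6), so uncommenting this line gives
                                    the "initializer element is not constant" case,
                                    whether or not a given GCC build diagnoses it */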
The code in the question was compiled by GCC 4.7.3, and apparently compiling with -O0 triggers the error and compiling with -Os (-OS is claimed in the question, but not supported in standard GCC — it requires the optional argument to -O to be a non-negative integer, or s, g or fast) does not. Getting different views on the validity of the code depending on the optimization level is not a comfortable experience — changing the optimization should not change the meaning of the code.
So, the result is compiler dependent — and not required by the C standard. As long as you know that you are limiting portability (in theory, even if not in practice), that's OK. The trouble comes if you don't realize that you're breaking the standard rules and portability matters; then you have problems of the "Don't Do It" variety. Personally, I wouldn't risk it — code should compile with or without optimization, and should not depend on a specific optimization flag. It's too fragile otherwise.
Having said that, if it's any consolation, GCC 10.2.0 and Apple clang version 11.0.0 (clang-1100.0.33.17) both accept the code with options
gcc -std=c11 -pedantic-errors -pedantic -Werror -Wall -Wextra -O3 -c const73.c
with any of -O0, -O1, -O2, -O3, -Os, -Og, -Ofast. That surprises me — I don't think it should be accepted in pedantic (strictly) standard-conforming mode (it would be different with -std=gnu11; then extensions are deemed valid). Even adding -Weverything to the clang compilations does not trigger an error. That really does surprise me. The options are intended to diagnose extensions over the standard, but are not completely successful. Note that GCC 4.7.3 is quite old; it was released 2013-04-11. Also, GCC 7.2.0 and v7.3.0 complain about the code under -O0, but not under -Os, -O1, -O2, or -O3 etc, while GCC 8.x.0, 9.x.0 and 10.x.0 do not.
extern int bar;
extern int baz;
extern void foo(void);
#include <stdio.h>
#include <stdint.h>
void foo(void)
{
if (bar)
{
static const uint8_t ConstThing = 20;
static uint8_t StaticThing = ConstThing;
baz = StaticThing++;
}
if (baz)
printf("Got a non-zero baz (%d)\n", baz);
}
However, I suspect that you get away with it because of the limited scope of ConstThing. (See also the comment by dxiv.)
If you use extern const uint8_t ConstThing; (at file scope, or inside the function) with the initializer value omitted, you get the warning that started the question.
extern int bar;
extern int baz;
extern void foo(void);
#include <stdio.h>
#include <stdint.h>
extern const uint8_t ConstThing; // = 20;
void foo(void)
{
if (bar)
{
static uint8_t StaticThing = ConstThing;
baz = StaticThing++;
}
if (baz)
printf("Got a non-zero baz (%d)\n", baz);
}
None of the compilers accepts this at any optimization level.
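If the code has to be accepted at every optimization level and by strictly conforming compilers, one portable rework (my suggestion, not from the answer above; CONST_THING is just an illustrative stand-in for ConstThing) is to make the initializer a genuine integer constant, for example via an enumeration constant:
#include <stdint.h>

extern int bar;
extern int baz;

enum { CONST_THING = 20 };   /* an enumeration constant is an integer constant expression */

void foo(void)
{
    if (bar)
    {
        static uint8_t StaticThing = CONST_THING;   /* valid static initializer in any C mode */
        baz = StaticThing++;
    }
}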
The following code compiles fine with, for example, the default settings in Xcode 11.3.1:
#include <stdio.h>

int main(int argc, const char * argv[]) {
    char* thing = "123";
    thing[2] = '4';
    printf("%s\n", thing);
    return 0;
}
However, at runtime the code traps with EXC_BAD_ACCESS on thing[2] = '4'. I assume this is because the memory for the bytes representing "123" is compiled into my program's binary somewhere that on a modern processor/OS gets marked as for code rather than data. (This answer confirms that — not to mention there's a leaq 0x4d(%rip), %rsi ; "123" line in the disassembly, passing the pointer to an address relative to the instruction pointer!)
Is it just a historical artifact that C allows this, from the era of self-modifying code? I notice that I can also assign void* x = main; without any complaint that I'm discarding modifiers.
This answer says:
According to the C99 rationale, there were people in the committee who wanted string literals to be modifiable, so the standard does not explicitly forbid it.
Is there any further discussion I could read on that? More practically, is there a way to tell clang and/or gcc to flag such assignments (even though they are not actually forbidden) with a warning, without compiling as C++?
The answer you have quoted is an opinion without citation and, frankly, nonsense. The real reason is nothing more than not breaking the vast quantity of existing legacy C code that it is desirable to keep compilable with a modern compiler.
However, many compilers will issue a warning if you set the necessary warning level or options. In GCC, for example:
-Wwrite-strings
When compiling C, give string constants the type const char[length] so that copying the address of one into a non-const char* pointer produces a warning. These warnings help you find at compile time code that can try to write into a string constant, but only if you have been very careful about using const in declarations and prototypes. Otherwise, it is just a nuisance. This is why we did not make -Wall request these warnings.
When compiling C++, warn about the deprecated conversion from string literals to char *. This warning is enabled by default for C++ programs.
Clang also has -Wwrite-strings, which is a synonym for -Wwritable-strings:
-Wwritable-strings
This diagnostic is enabled by default.
Also controls -Wdeprecated-writable-strings.
Diagnostic text:
warning: ISO C++11 does not allow conversion from string literal to A
The diagnostic text is different for C compilation - I'm just quoting the manual.
In GCC with -Wwrite-strings:
int main()
{
    char* x = "hello";
    return 0;
}
produces:
main.c:3:15: warning: initialization discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
CLANG produces:
source_file.c:3:15: warning: initializing 'char *' with an expression of type 'const char [6]' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
Unlike in C++, string literals in C have the type of a non-const character array.
However, according to the C standard, any attempt to modify a string literal results in undefined behavior.
Historically, the C language did not have the qualifier const; it first appeared in C++. So, for backward compatibility, string literals in C kept their non-const character array types.
You have the -Wwrite-strings option:
When compiling C, give string constants the type const char[length] so that copying the address of one into a non-const char * pointer produces a warning. These warnings help you find at compile time code that can try to write into a string constant, but only if you have been very careful about using const in declarations and prototypes. Otherwise, it is just a nuisance. This is why we did not make -Wall request these warnings.
https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/Warning-Options.html
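If the intent really is to modify the characters, the usual portable fix (a sketch of the idiom, not taken from either answer) is to copy the literal into a writable array instead of pointing at it:
#include <stdio.h>

int main(void)
{
    char thing[] = "123";   /* array initialized from the literal, so it is writable */
    thing[2] = '4';         /* well-defined, unlike writing through a pointer to the literal */
    printf("%s\n", thing);
    return 0;
}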
Code:
#include <stdio.h>

char *color_name[] = {
    "red",
    "blue",
    "green"
};

#define color_num (sizeof(color_name)/sizeof(char*))

int main(){
    printf("size %d \n", color_num);
    return 0;
}
It works fine with GCC 4.8.2 on CentOS 7.
But I get an error building the above program on my Mac, which says:
note: expanded from macro 'color_num'
Compiler on my Mac:
……include/c++/4.2.1
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix
I've been told that gcc is actually linked to Clang on the Mac when it is used to compile a program; am I right?
Question:
So why does Clang report that error? Is that concerning pre-processing?
And if I do this, it works fine:
int a = color_num;
printf("%d\n",a);
or:
printf("%d\n",sizeof(color_num)/sizeof(char*));
UPDATE
Crayon_277#Macintosh 20150525$ gcc -g -o ex11 ex1.c
ex1.c:16:21: warning: format specifies type 'int' but the argument has type 'unsigned long' [-Wformat]
printf("size %d\n",color_num);
~~ ^~~~~~~~~
%lu
ex1.c:14:19: note: expanded from macro 'color_num'
#define color_num (sizeof(color)/sizeof(char*))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
It seems there is no error from the compiler itself, just that format warning.
I think the error may come from the Vim plugin I use, scrooloose/syntastic.
It is probably complaining that the expression expanded from color_num is unsigned (perhaps unsigned long) while the format specifier in the printf expects a signed integer.
sizeof gives size_t, which is always an unsigned type, as noted in is size_t always unsigned?, but the number of bits depends on the implementation. Compiler warnings may — and often do — refer to the mismatch in terms of the equivalent type rather than size_t as such. The C standard after all, does not specify the nature of diagnostic messages.
When you changed that to an assignment, the check is less strict, since conversion on assignment is a different check from printf format matching.
The "note" lines are something that the compiler adds to a warning/error message to help you understand where the problem came from.
(As the comment notes, you should quote the entire warning message, to make the question understandable).
The sizeof gives the value with size_t type, the right format specifier for size_t is "%zu".
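A minimal sketch of the fix, assuming the array from the question:
#include <stdio.h>

char *color_name[] = { "red", "blue", "green" };
#define color_num (sizeof(color_name)/sizeof(char*))

int main(void)
{
    printf("size %zu\n", color_num);      /* %zu is the conversion specifier for size_t */
    printf("size %d\n", (int)color_num);  /* or cast explicitly if %d is required */
    return 0;
}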
I get an error in C ("Error: Unused Variable") for a variable when I type the following code:
int i = 10;
but when I break it up into two statements:
int i;
i = 10;
the error goes away.
I am using Xcode (version 4.1) on Mac OS X Lion. Is something wrong with Xcode?
No, nothing is wrong; the compiler just warns you that you declared a variable and are not using it.
It is just a warning, not an error.
While nothing is wrong, you should avoid declaring variables that you do not need, because they just occupy memory and add overhead when they are not needed in the first place.
The compiler isn't wrong, but it is missing an opportunity to print a meaningful error.
Apparently it warns if you declare a variable but never "use" it -- and assigning a value to it qualifies as using it. The two code snippets are equivalent; the first just happens to make it a bit easier for the compiler to detect the problem.
It could issue a warning for a variable whose value is never read. And I wouldn't be surprised if it did so at a higher optimization level. (The analysis necessary for optimization is also useful for discovering this kind of problem.)
It's simply not possible for a compiler to detect all possible problems of this kind; doing so would be equivalent to solving the Halting Problem. (I think.) Which is why language standards typically don't require warnings like this, and different compilers expend different levels of effort detecting such problems.
(Actually, a compiler probably could detect all unused variable problems, but at the expense of some false positives, i.e., issuing warnings for cases where there isn't really a problem.)
UPDATE, 11 years later:
Using gcc 11.3.0 with -Wall, I get warnings on both:
$ cat a.c
int main() {
int i = 10;
}
$ gcc -Wall -c a.c
a.c: In function ‘main’:
a.c:2:9: warning: unused variable ‘i’ [-Wunused-variable]
2 | int i = 10;
| ^
$ cat b.c
int main() {
int i;
i = 10;
}
$ gcc -Wall -c b.c
b.c: In function ‘main’:
b.c:2:9: warning: variable ‘i’ set but not used [-Wunused-but-set-variable]
2 | int i;
| ^
$
But clang 8.0.1 does not warn on the second program. (XCode probably uses clang.)
The language does not require a warning, but it would certainly make sense to issue one in this case. Tests on godbolt.org indicate that clang issues a warning for the second program starting with version 13.0.0.
You can cast the unused variable to void to suppress the warning:
(void) i;
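In context (my own minimal sketch), it looks like this and compiles quietly under -Wall:
int main() {
    int i = 10;
    (void) i;   /* the cast counts as a use, so -Wunused-variable stays quiet */
    return 0;
}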
Hopefully this is a very simple question. Following is the C program (test.c) I have:
#include <stdio.h>
//#include <stdlib.h>

int main (int argc, char *argv[]) {
    int intValue = atoi("1");
    double doubleValue = atof("2");
    fprintf(stdout, "The intValue is %d and the doubleValue is %g\n", intValue, doubleValue);
    return 0;
}
Note that I am using atoi() and atof() from stdlib.h, but I do not include that header file. I compile the program (gcc test.c) and get no compiler error!
I run the program (./a.out) and here is the output, which is wrong:
The intValue is 1 and the doubleValue is 0
Now I include stdlib.h (by removing the comments before the #include) and recompile it and run it again. This time I get the right output:
The intValue is 1 and the doubleValue is 2
How come the compiler did not complain about not including the stdlib.h and still let me use the atoi(), atof() functions?
My gcc info:
$ gcc --version
gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-27)
Any thoughts appreciated!
For historical reasons -- specifically, compatibility with very old C programs (pre-C89) -- using a function without having declared it first only provokes a warning from GCC, not an error. But the return type of such a function is assumed to be int, not double, which is why the program executes incorrectly.
If you use -Wall on the command line, you get a diagnostic:
$ gcc -Wall test.c
test.c: In function ‘main’:
test.c:5: warning: implicit declaration of function ‘atoi’
test.c:6: warning: implicit declaration of function ‘atof’
You should use -Wall basically always. Other very useful warning options for new code are -Wextra, -Wstrict-prototypes, -Wmissing-prototypes, -pedantic, and -Wwrite-strings, but compared to -Wall they have much higher false positive rates.
Tangentially: never use atoi or atof; they hide input errors. Use strtol and strtod instead.
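For illustration, a sketch of the question's program rewritten with strtol and strtod (my own version, with the error checks those functions make possible):
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *int_text = "1";
    const char *dbl_text = "2";
    char *end;

    errno = 0;
    long intValue = strtol(int_text, &end, 10);
    if (end == int_text || errno == ERANGE)
        fprintf(stderr, "bad integer input\n");

    errno = 0;
    double doubleValue = strtod(dbl_text, &end);
    if (end == dbl_text || errno == ERANGE)
        fprintf(stderr, "bad floating-point input\n");

    printf("The intValue is %ld and the doubleValue is %g\n", intValue, doubleValue);
    return 0;
}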
If you don't specify otherwise, I believe a C compiler will just guess that undeclared functions take the form extern int foo(). Which is why atoi works and atof doesn't. Which compiler flags were you using? I suggest using -Wall to turn on a bunch of gcc warnings, which should include referencing undeclared functions.
C allows you to call a function without having a declaration for that function.
The function will be assumed to return an int and arguments will be passed using default promotions. If those don't match what the function actually expects, you'll get undefined behavior.
Compilers will often warn for this case, but not always (and that will also depend on compiler configuration).
In C, when you use a function that has not been declared, it is implicitly assumed to have the declaration:
int FUNCTION_NAME();
Note that in C (before C23), () as the parameter list means the function takes an unspecified number and type of arguments, not that it takes none.
If you compile with the flag -Wall (I recommend always using this flag, since it enables many recommended warnings), you will get a warning (not an error) telling you that you are using an undeclared function.
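A short sketch of the difference (my own, using pre-C23 semantics, where an empty parameter list leaves the arguments unspecified):
int f();       /* declaration without a prototype: argument number and types unspecified,
                  so a call such as f(1, 2, 3) is not checked against this declaration */
int g(void);   /* prototype: g takes no arguments, so g(1) is a constraint violation */
int h(int);    /* prototype: h takes exactly one int argument */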
C, unfortunately, does not require functions to be prototyped (or even declared) before use -- but without a prototype, it automatically makes certain assumptions about the function. One of those is that it returns an int. In your case, atoi does return an int, so it works correctly. atof doesn't, so it doesn't work correctly. Lacking a prototype/declaration, you get undefined behavior -- typically it'll end up retrieving whatever value happens to be in the register where an int would normally be returned, and using that. It appears that in your particular case, that happens to be a zero, but it could just as easily be something else.
This is one of the reasons many people push "C++ as a better C" -- C++ does require that all functions be declared before use, and further that you specify the types of all (non-variadic) parameters as well (i.e. a C++ function declaration is like a C prototype, not like a C declaration).