In writing an emulator I am using a lot of large unsigned constants, and hence on compilation I receive a very large number of warnings of the form:
warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
making it difficult to debug the rest of the program. I understand that this can be suppressed with a pragma:
#pragma GCC diagnostic ignored "-W...."
But I am not sure which kind of warning this falls into and don't want to disable "-Wall" in case there are other more important warnings.
What is the smallest subset of warnings that can be disabled to remove this warning?
EDIT: As requested, one of the functions that trigger the message:
void NAND(unsigned int *to_return, unsigned int r1_value, unsigned int r2_value){
    unsigned int result = (r1_value & r2_value) ^ 4294967295;
    to_return[0] = result;
    to_return[2] = 0;
}
Specifically, the 4294967295 constant.
This was compiled under:
gcc -O3 "C source\Hard Disk Version\CPU.c" -o CEmuTest.exe
on Windows (minGW).
Related
A simple question: if I compare a signed and an unsigned variable in GCC while compiling with -Wall, I will get a warning.
Using this code:
#include <stdio.h>
int main(int argc, char* argv[])
{
    /* const */ unsigned int i = 0;
    if (i != argc)
        return 1;
    return 0;
}
I get this warning:
<source>: In function 'int main(int, char**)':
<source>:6:8: warning: comparison of integer expressions of different signedness: 'unsigned int' and 'int' [-Wsign-compare]
6 | if (i != argc)
| ~~^~~~~~~
Compiler returned: 0
However, if I uncomment this const, the compiler is happy. I can reproduce this on almost every GCC version (see https://godbolt.org/z/b6eoc1). Is this a bug in GCC?
I think that what you are missing is compiler optimization. Without const, that variable is a variable, meaning that it can change. Since it is an unsigned int, it can indeed hold values larger than any int.
const unsigned int i = 2147483648;
You will get your error back if you assign a value greater than the largest value of int to that unsigned int.
However if it is const, the compiler knows its value, it knows, that it will not change, and there will be no problem with the comparison.
If you take a look at the assembly, you will see that without const, it actually takes the value of the variable to compare:
movl $0, -4(%rbp)
movl -20(%rbp), %eax
cmpl %eax, -4(%rbp)
Now, if it is const, it will not bother with the variable at all, it just takes the value:
movl $0, -4(%rbp)
cmpl $0, -20(%rbp)
I'd say it's a compiler bug in the -Wsign-compare option.
Test by compiling your example with -Wall -Wextra -O3. With -O3 added, the warning suddenly goes away in the const case. Even though the generated machine code with or without const is identical. This doesn't make any sense.
Naturally, neither const nor the generated machine code has any effect on the signedness of the C operands, so the warning shouldn't come inconsistently depending on type qualifiers or optimizer settings.
Simply use -Wall -Wextra and you will get your warning back.
I would advise using -Wall -Wextra -pedantic compiler options
https://godbolt.org/z/TvqeKn
EDIT
As clarification in response to the OP's rather unfriendly comment: -Wextra enables a number of warnings, including the one the OP wants:
warning: comparison of integer expressions of different signedness:
'unsigned int' and 'int' [-Wsign-compare]
9 | if (i != argc)
The question is tagged C but links to a C++ Godbolt example. Here is a table showing when the warning is issued:1

| | C non-const | C const | C++ non-const | C++ const |
|---|---|---|---|---|
| Default warnings | No | No | No | No |
| -Wall | No | No | Yes | No |
| -Wall -Wextra | Yes | Yes | Yes | No |
So, in C, GCC provides the warning in -Wextra regardless of const qualification.
In C++, GCC provides the warning in -Wall but treats const qualified objects as known values for which the warning may be suppressed.
The GCC documentation says, for -Wsign-compare:
Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. In C++, this warning is also enabled by -Wall. In C, it is also enabled by -Wextra.
Note that it does not say it warns when there is a comparison between signed and unsigned values but rather when such a comparison could produce an incorrect result. Therefore, not providing a warning when the definition of the object is such that the comparison cannot produce an incorrect result is not a bug.
The word “could” leaves latitude for what the compiler “knows” about the object. Failing to determine that the C const case cannot produce an incorrect result could be described as a bug, although it may be better described as a shortcoming.
Footnote
1 “Const” in the table is specifically use of an object that is const-qualified and whose value is immediately available to the compiler via a visible definition. I did not test cases where, for example, an identifier is declared for a const-qualified object but its definition is in another translation unit.
I'm posting this because I couldn't find a suitable answer elsewhere, not because similar things haven't been asked before.
A project compiles just fine with the following:
#include <stdint.h>
void foo(void)
{
    if (bar)
    {
        static const uint8_t ConstThing = 20;
        static uint8_t StaticThing = ConstThing;
        //...
    }
}
But a cloned project does not, and throws the above error. It looks like we've not completely cloned the compiler settings / warning levels etc., but we can't find the difference right now.
Using arm-none-eabi-gcc (4.7.3) with -std=gnu99. Compiling for Kinetis.
If anyone knows which settings control cases when this is legal and illegal in the same compiler, I'm all ears. Thanks in advance.
Found the difference.
If optimisation is -O0 it doesn't compile.
If optimisation is -OS it does.
I'm guessing it produces 'what you were asking for, a better way' and fixes it.
Didn't see that coming. Thanks for your input everyone.
Converting some of my comments into an answer.
In standard C, ConstThing is a constant integer, but not an integer constant, and you can only initialize static variables with integer constants. The rules in C++ are different, as befits a different language.
C11 §6.7.9 Initialization ¶4 states:
All the expressions in an initializer for an object that has static or thread storage duration shall be constant expressions or string literals.
§6.4.4.1 Integer constants defines integer constants.
§6.6 Constant expressions defines constant expressions.
…I'm not sure I understand the difference between a 'constant integer' and an 'integer constant'.
Note that ConstThing is not one of the integer constants defined in §6.4.4.1 — so, whatever else it is, it is not an integer constant. Since it is a const-qualified int, it is a constant integer, but that is not the same as an integer constant. Sometimes, the language of the standard is surprising, but it is usually very precise.
The code in the question was compiled by GCC 4.7.3, and apparently compiling with -O0 triggers the error and compiling with -Os (-OS is claimed in the question, but not supported in standard GCC — it requires the optional argument to -O to be a non-negative integer, or s, g or fast) does not. Getting different views on the validity of the code depending on the optimization level is not a comfortable experience — changing the optimization should not change the meaning of the code.
So, the result is compiler dependent — and not required by the C standard. As long as you know that you are limiting portability (in theory, even if not in practice), then that's OK. It's if you don't realize that you're breaking the standard rules and if portability matters, then you have problems of the "Don't Do It" variety. Personally, I wouldn't risk it — code should compile with or without optimization, and should not depend on a specific optimization flag. It's too fragile otherwise.
Having said that, if it's any consolation, GCC 10.2.0 and Apple clang version 11.0.0 (clang-1100.0.33.17) both accept the code with options
gcc -std=c11 -pedantic-errors -pedantic -Werror -Wall -Wextra -O3 -c const73.c
with any of -O0, -O1, -O2, -O3, -Os, -Og, -Ofast. That surprises me — I don't think it should be accepted in pedantic (strictly) standard-conforming mode (it would be different with -std=gnu11; then extensions are deemed valid). Even adding -Weverything to the clang compilations does not trigger an error. That really does surprise me. The options are intended to diagnose extensions over the standard, but are not completely successful. Note that GCC 4.7.3 is quite old; it was released 2013-04-11. Also, GCC 7.2.0 and v7.3.0 complain about the code under -O0, but not under -Os, -O1, -O2, or -O3 etc, while GCC 8.x.0, 9.x.0 and 10.x.0 do not.
extern int bar;
extern int baz;
extern void foo(void);
#include <stdio.h>
#include <stdint.h>
void foo(void)
{
    if (bar)
    {
        static const uint8_t ConstThing = 20;
        static uint8_t StaticThing = ConstThing;
        baz = StaticThing++;
    }
    if (baz)
        printf("Got a non-zero baz (%d)\n", baz);
}
However, I suspect that you get away with it because of the limited scope of ConstThing. (See also the comment by dxiv.)
If you use extern const uint8_t ConstThing; (at file scope, or inside the function) with the initializer value omitted, you get the warning that started the question.
extern int bar;
extern int baz;
extern void foo(void);
#include <stdio.h>
#include <stdint.h>
extern const uint8_t ConstThing; // = 20;
void foo(void)
{
    if (bar)
    {
        static uint8_t StaticThing = ConstThing;
        baz = StaticThing++;
    }
    if (baz)
        printf("Got a non-zero baz (%d)\n", baz);
}
None of the compilers accepts this at any optimization level.
Code:
#include <stdio.h>

char *color_name[] = {
    "red",
    "blue",
    "green"
};

#define color_num (sizeof(color_name)/sizeof(char*))

int main(){
    printf("size %d \n", color_num);
    return 0;
}
It works fine with GCC 4.8.2 on Centos 7.
But I got an error running the above program on my Mac, which says:
note:expanded from macro 'color_num'
Compiler on my Mac:
……include/c++/4.2.1
Apple LLVM version 6.1.0 (clang-602.0.49) (based on LLVM 3.6.0svn)
Target: x86_64-apple-darwin14.3.0
Thread model: posix
I've been told that gcc actually invokes Clang on a Mac; am I right?
Question:
So why does Clang report that error? Is that concerning pre-processing?
And if I do this, it works fine:
int a = color_num;
printf("%d\n",a);
or:
printf("%d\n",sizeof(color_num)/sizeof(char*));
UPDATE =============
Crayon_277#Macintosh 20150525$ gcc -g -o ex11 ex1.c
ex1.c:16:21: warning: format specifies type 'int' but the argument has type 'unsigned long' [-Wformat]
printf("size %d\n",color_num);
~~ ^~~~~~~~~
%lu
ex1.c:14:19: note: expanded from macro 'color_num'
#define color_num (sizeof(color)/sizeof(char*))
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 warning generated.
It seems there is no error, just that format warning. I think the error may come from the Vim extension I use (scrooloose/syntastic); that is where I got the error.
It is probably complaining that the expression expanded from color_num is unsigned (perhaps unsigned long) while the format in the printf is for a signed integer.
sizeof gives size_t, which is always an unsigned type, as noted in is size_t always unsigned?, but the number of bits depends on the implementation. Compiler warnings may — and often do — refer to the mismatch in terms of the equivalent type rather than size_t as such. The C standard after all, does not specify the nature of diagnostic messages.
When you changed that to an assignment, it is less strict, since that is a different check.
The "note" lines are something that the compiler adds to a warning/error message to help you understand where the problem came from.
(As the comment notes, you should quote the entire warning message, to make the question understandable).
The sizeof gives the value with size_t type, the right format specifier for size_t is "%zu".
I would like to report a bug against Clang and GCC for accepting multiple incompatible prototypes for the same function.
Consider the examples below:
$ clang -v
Ubuntu clang version 3.4-1ubuntu3 (tags/RELEASE_34/final) (based on LLVM 3.4)
Target: x86_64-pc-linux-gnu
…
$ gcc -v
…
gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1)
$ cat t1.c
int f(void);
float f(void);
$ gcc -c t1.c
t1.c:3:7: error: conflicting types for ‘f’
float f(void);
^
t1.c:1:5: note: previous declaration of ‘f’ was here
int f(void);
^
$ clang -c t1.c
t1.c:3:7: error: conflicting types for 'f'
float f(void);
^
t1.c:1:5: note: previous declaration is here
int f(void);
^
1 error generated.
Both GCC and Clang conform to what I am going to call the “expected behavior”.
However, if f is set to return an enum or an unsigned int:
$ cat t2.c
typedef enum { m1 } t ;
t f();
unsigned int f();
$ gcc -c t2.c
$ clang -c t2.c
When the return types in the two separate declarations of f are a plain enum and unsigned int, neither GCC nor Clang emits a diagnostic. I would like to report this behavior as a bug. In the C11 standard, clause 6.2.7:2 makes both programs t1.c and t2.c above undefined behavior:
6.2.7:2 All declarations that refer to the same object or function shall have compatible type; otherwise, the behavior is undefined.
However, 6.2.7:2 is not inside a Constraints section, so the two compilers are allowed to do what they want with these undefined behaviors, including accepting them silently. Is there any other clause that would make a diagnostic mandatory in a program like t2.c, and would make it right to report the absence of diagnostic as a compiler bug? Or am I perhaps wrong in expecting that an enumerated type be incompatible with unsigned int?
I found the answer as I was writing the last sentence in the above question:
There is no undefined behavior in t2.c. Each enumerated type is compatible with one plain integer type, chosen by the compiler. In the example t2.c, GCC and Clang have both chosen unsigned int to be compatible with the enum typedef'd as t.
6.7.2.2:4 Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined,128 but shall be capable of representing the values of all the members of the enumeration […]
128) An implementation may delay the choice of which integer type until all enumeration constants have been seen.
The "expected behavior" for the first example is required by the constraints in C11 (n1570) 6.7 p4:
All declarations in the same scope that refer to the same object or function shall specify compatible types.
As your answer states, enumeration types may be compatible with unsigned int, which they usually are in the case of GCC:
Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values.
(I couldn't find the corresponding part in the Clang documentation, but I'd expect it to be the same.)
For the second example, the diagnostic is required if and only if the enumeration type is incompatible with unsigned int. If it isn't, the behavior (beyond the diagnostic) is undefined as per the standard quote in the question.
OT: In C++, the second example is invalid, as enumeration types are distinct types in their own right, incompatible with other integer types.
I get an error in C ("Unused Variable") for a variable when I type in the following code:
int i=10;
but when I do this(break it up into two statements)
int i;
i=10;
the error goes away. I am using Xcode (ver. 4.1) on Mac OS X Lion. Is something wrong with Xcode?
No, nothing is wrong; the compiler just warns you that you declared a variable and are not using it.
It is just a warning not an error.
While nothing is wrong, you should avoid declaring variables that you do not need, because they just occupy memory and add overhead when they are not needed in the first place.
The compiler isn't wrong, but it is missing an opportunity to print a meaningful error.
Apparently it warns if you declare a variable but never "use" it, and assigning a value to it qualifies as using it. The two code snippets are equivalent; the first just happens to make it a bit easier for the compiler to detect the problem.
It could issue a warning for a variable whose value is never read. And I wouldn't be surprised if it did so at a higher optimization level. (The analysis necessary for optimization is also useful for discovering this kind of problem.)
It's simply not possible for a compiler to detect all possible problems of this kind; doing so would be equivalent to solving the Halting Problem. (I think.) Which is why language standards typically don't require warnings like this, and different compilers expend different levels of effort detecting such problems.
(Actually, a compiler probably could detect all unused variable problems, but at the expense of some false positives, i.e., issuing warnings for cases where there isn't really a problem.)
UPDATE, 11 years later:
Using gcc 11.3.0 with -Wall, I get warnings on both:
$ cat a.c
int main() {
int i = 10;
}
$ gcc -Wall -c a.c
a.c: In function ‘main’:
a.c:2:9: warning: unused variable ‘i’ [-Wunused-variable]
2 | int i = 10;
| ^
$ cat b.c
int main() {
int i;
i = 10;
}
$ gcc -Wall -c b.c
b.c: In function ‘main’:
b.c:2:9: warning: variable ‘i’ set but not used [-Wunused-but-set-variable]
2 | int i;
| ^
$
But clang 8.0.1 does not warn on the second program. (XCode probably uses clang.)
The language does not require a warning, but it would certainly make sense to issue one in this case. Tests on godbolt.org indicate that clang issues a warning for the second program starting with version 13.0.0.
You can cast the unused variable to void to suppress the warning:
(void) i;