-Wstrict-aliasing=3 throws warning where -Wstrict-aliasing=1 does not - c

Using GCC 5.4.0.
The example is trivial. The code violates the strict aliasing rule on two occasions.
"dereferencing type-punned pointer will break strict-aliasing rules"
#include <stdint.h>
#include <inttypes.h>

int main() {
    uint8_t buffer[100];                          // line 1
    uint32_t test = *((uint32_t*)(&buffer[10]));  // line 2
    uint32_t* pTest2 = (uint32_t*)(&buffer[10]);  // line 3
    test = *pTest2;                               // line 4
}
-Wstrict-aliasing=3 warns about line 2 only
-Wstrict-aliasing=2 warns about line 2 and 3
-Wstrict-aliasing=1 throws no warning at all
From the GCC documentation:
Level 1: Most aggressive, quick, least accurate. Possibly useful when higher levels do not warn but -fstrict-aliasing still breaks the code, as it has very few false negatives. However, it has many false positives. Warns for all pointer conversions between possibly incompatible types, even if never dereferenced. Runs in the front end only.
Now I am unsure about how reliable -Wstrict-aliasing=1 actually is. Is this an issue with GCC 5.4.0?

"initialiser element is not constant" error in C, when using static const variable - Sometimes - Compiler settings?

I'm posting this because I couldn't find a suitable answer elsewhere, not because similar things haven't been asked before.
A project compiles just fine with the following:
#include <stdint.h>

void foo(void)
{
    if (bar)
    {
        static const uint8_t ConstThing = 20;
        static uint8_t StaticThing = ConstThing;
        //...
    }
}
But a cloned project does not, throwing the above error. Looks like we've not completely cloned compiler settings / warning levels etc, but can't find the difference right now.
Using arm-none-eabi-gcc (4.7.3) with -std=gnu99. Compiling for Kinetis.
If anyone knows which settings control cases when this is legal and illegal in the same compiler, I'm all ears. Thanks in advance.
Found the difference.
If optimisation is -O0 it doesn't compile.
If optimisation is -OS it does.
I'm guessing it produces 'what you were asking for, a better way' and fixes it.
Didn't see that coming. Thanks for your input everyone.
Converting some of my comments into an answer.
In standard C, ConstThing is a constant integer, but not an integer constant, and you can only initialize static variables with integer constants. The rules in C++ are different, as befits a different language.
C11 §6.7.9 Initialization ¶4 states:
All the expressions in an initializer for an object that has static or thread storage duration shall be constant expressions or string literals.
§6.4.4.1 Integer constants defines integer constants.
§6.6 Constant expressions defines constant expressions.
…I'm not sure I understand the difference between a 'constant integer' and an 'integer constant'.
Note that ConstThing is not one of the integer constants defined in §6.4.4.1 — so, whatever else it is, it is not an integer constant. Since it is a const-qualified int, it is a constant integer, but that is not the same as an integer constant. Sometimes, the language of the standard is surprising, but it is usually very precise.
The code in the question was compiled by GCC 4.7.3, and apparently compiling with -O0 triggers the error and compiling with -Os (-OS is claimed in the question, but not supported in standard GCC — it requires the optional argument to -O to be a non-negative integer, or s, g or fast) does not. Getting different views on the validity of the code depending on the optimization level is not a comfortable experience — changing the optimization should not change the meaning of the code.
So, the result is compiler dependent — and not required by the C standard. As long as you know that you are limiting portability (in theory, even if not in practice), then that's OK. It's if you don't realize that you're breaking the standard rules and if portability matters, then you have problems of the "Don't Do It" variety. Personally, I wouldn't risk it — code should compile with or without optimization, and should not depend on a specific optimization flag. It's too fragile otherwise.
Having said that, if it's any consolation, GCC 10.2.0 and Apple clang version 11.0.0 (clang-1100.0.33.17) both accept the code with options
gcc -std=c11 -pedantic-errors -pedantic -Werror -Wall -Wextra -O3 -c const73.c
with any of -O0, -O1, -O2, -O3, -Os, -Og, -Ofast. That surprises me — I don't think it should be accepted in pedantic (strictly) standard-conforming mode (it would be different with -std=gnu11; then extensions are deemed valid). Even adding -Weverything to the clang compilations does not trigger an error. That really does surprise me. The options are intended to diagnose extensions over the standard, but are not completely successful. Note that GCC 4.7.3 is quite old; it was released 2013-04-11. Also, GCC 7.2.0 and v7.3.0 complain about the code under -O0, but not under -Os, -O1, -O2, or -O3 etc, while GCC 8.x.0, 9.x.0 and 10.x.0 do not.
extern int bar;
extern int baz;
extern void foo(void);

#include <stdio.h>
#include <stdint.h>

void foo(void)
{
    if (bar)
    {
        static const uint8_t ConstThing = 20;
        static uint8_t StaticThing = ConstThing;
        baz = StaticThing++;
    }
    if (baz)
        printf("Got a non-zero baz (%d)\n", baz);
}
However, I suspect that you get away with it because of the limited scope of ConstThing. (See also the comment by dxiv.)
If you use extern const uint8_t ConstThing; (at file scope, or inside the function) with the initializer value omitted, you get the warning that started the question.
extern int bar;
extern int baz;
extern void foo(void);

#include <stdio.h>
#include <stdint.h>

extern const uint8_t ConstThing; // = 20;

void foo(void)
{
    if (bar)
    {
        static uint8_t StaticThing = ConstThing;
        baz = StaticThing++;
    }
    if (baz)
        printf("Got a non-zero baz (%d)\n", baz);
}
None of the compilers accepts this at any optimization level.

GCC hide specific warning type

In writing an emulator, I am using a lot of various large unsigned constants, and, hence, on compilation receive a very large number of warnings of the form:
warning: this decimal constant is unsigned only in ISO C90 [enabled by default]
making it difficult to debug the rest of the program. I understand that this can be suppressed with a pragma:
#pragma GCC diagnostic ignored "-W...."
But I am not sure which kind of warning this falls into and don't want to disable "-Wall" in case there are other more important warnings.
What is the smallest subset of warnings that can be disabled to remove this warning?
EDIT: As requested, one of the functions that trigger the message:
void NAND(unsigned int *to_return, unsigned int r1_value, unsigned int r2_value){
    unsigned int result = (r1_value & r2_value) ^ 4294967295;
    to_return[0] = result;
    to_return[1] = 0;
}
Specifically, the 4294967295 constant.
This was compiled under:
gcc -O3 "C source\Hard Disk Version\CPU.c" -o CEmuTest.exe
on Windows (minGW).

why no overflow warning when converting int to char

int i=9999;
char c=i;
gives no overflow warning, While
char c=9999;
gives,
warning C4305: 'initializing': truncation from 'int' to 'char'
why no overflow warning when converting int to char?
You'll get warning C4244 when compiling with /W4 (which you should always do).
warning C4244: 'initializing' : conversion from 'int' to 'char', possible loss of data
Whether any code construct produces a warning is up to the cleverness of the compiler and the choices made by its authors.
char c=9999;
9999 is a constant expression. The compiler can determine, just by analyzing the declaration with no additional context, that it's going to overflow. (Presumably plain char is signed; if it's unsigned, the conversion is well defined -- but a compiler could still choose to warn about it.)
int i=9999;
char c=i;
This has the same semantics, but for a compiler to warn about the initialization of c, it would have to know that i has the value 9999 (or at least a value outside the range of char) when it analyzes that declaration. Suppose you instead wrote:
int i = 9999;
i = 42;
char c = i;
Then clearly no warning would be necessary or appropriate.
As James McNellis's answer indicates, a sufficiently clever compiler can warn about either case if it performs additional analysis of what's going to happen during the execution of the program. For some compilers, it helps to enable optimization, since the analysis required to optimize code (without breaking it) can also reveal this kind of potential run-time error.
I'll note that this is an answer to the question you asked: why is there no warning. The answer you accepted is to the implied question: "I want a warning here; how can I enable it?". I'm not complaining, just observing.

Unused Variable Error in C .. Simple Question

I get an error in C ("Unused Variable") when I type in the following code:
int i = 10;
but when I break it up into two statements:
int i;
i = 10;
the error goes away.
I am using Xcode (ver. 4.1) on Mac OS X Lion.
Is something wrong with Xcode?
No, nothing is wrong; the compiler is just warning you that you declared a variable and are not using it.
It is just a warning, not an error.
While nothing is wrong, you should avoid declaring variables that you do not need: they occupy memory and add overhead without serving any purpose.
The compiler isn't wrong, but it is missing an opportunity to print a meaningful warning.
Apparently it warns if you declare a variable but never "use" it -- and assigning a value to it qualifies as using it. The two code snippets are equivalent; the first just happens to make it a bit easier for the compiler to detect the problem.
It could issue a warning for a variable whose value is never read. And I wouldn't be surprised if it did so at a higher optimization level. (The analysis necessary for optimization is also useful for discovering this kind of problem.)
It's simply not possible for a compiler to detect all possible problems of this kind; doing so would be equivalent to solving the Halting Problem. (I think.) Which is why language standards typically don't require warnings like this, and different compilers expend different levels of effort detecting such problems.
(Actually, a compiler probably could detect all unused variable problems, but at the expense of some false positives, i.e., issuing warnings for cases where there isn't really a problem.)
UPDATE, 11 years later:
Using gcc 11.3.0 with -Wall, I get warnings on both:
$ cat a.c
int main() {
int i = 10;
}
$ gcc -Wall -c a.c
a.c: In function ‘main’:
a.c:2:9: warning: unused variable ‘i’ [-Wunused-variable]
2 | int i = 10;
| ^
$ cat b.c
int main() {
int i;
i = 10;
}
$ gcc -Wall -c b.c
b.c: In function ‘main’:
b.c:2:9: warning: variable ‘i’ set but not used [-Wunused-but-set-variable]
2 | int i;
| ^
$
But clang 8.0.1 does not warn on the second program. (XCode probably uses clang.)
The language does not require a warning, but it would certainly make sense to issue one in this case. Tests on godbolt.org indicate that clang issues a warning for the second program starting with version 13.0.0.
You can cast the unused variable to void to suppress the warning:
(void) i;

Why are no strict-aliasing warnings generated for this code?

I have the following code:
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

struct A
{
    short b;
};

struct B
{
    double a;
};

void foo(struct B* src)
{
    struct B* b = src;
    struct A* a = (struct A*)src;
    b->a = sin(rand());
    if (a->b == rand())
    {
        printf("Where are you strict aliasing warnings?\n");
    }
}
I'm compiling the code with the following command line:
gcc -c -std=c99 -Wstrict-aliasing=2 -Wall -fstrict-aliasing -O3 foo.c
I'm using GCC 4.5.0. I expected the compiler to print out the warning:
warning: dereferencing type-punned pointer will break strict-aliasing rules
But it never is. I can get the warning to be printed out for other cases, but I'm wondering why, in this case, it isn't. Is this not an obvious example of breaking the strict aliasing rules?
GCC's docs for -Wstrict-aliasing=2 say (emphasis mine):
Level 2: Aggressive, quick, not too precise. May still have many false positives (not as many as level 1 though), and few false negatives (but possibly more than level 1). Unlike level 1, it only warns when an address is taken. Warns about incomplete types. Runs in the frontend only.
It seems like your code isn't too tricky, so I'm not sure why there'd be a false negative, but maybe it's because you don't use the & address-of operator to perform the aliasing (that might be what's meant by "only warns when an address is taken").
Update:
It is from not using the address-of operator. If I add the following code to the foo.c file:
int usefoo(void)
{
    struct B myB = {0};
    foo(&myB);
    return 0;
}
The warning is issued.
If usefoo() is in a separate compilation unit, no warning is issued.