Are there any examples of semantics non-preserving optimizations (except FP optimizations)?

Optimizations are generally assumed to preserve a program's semantics. However, floating-point (FP) optimizations may not, usually because a non-strict FP model has been selected (ICC, MSVC, GCC, Clang/LLVM, KEIL, and others offer such modes).
Out of curiosity, are there any examples of other semantics non-preserving optimizations?

There are, but you have to look hard to find them.
Try replacing a standard library function. If your replacement doesn't do what the standard library function does, you may find that your code doesn't do what you expect, because the compiler assumes standard library functions do what the documentation says they do.
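As a hedged sketch of that effect (not from the original answer): a compiler that recognizes strlen may constant-fold the call at compile time, so a replacement with an extra side effect may simply never be called. Build at -O2 to see it; the exact behavior depends on the compiler and options.

#include <stdio.h>
#include <string.h>

static int replacement_was_called;

/* Non-conforming replacement: it has a side effect the real strlen lacks. */
size_t strlen(const char *s)
{
    size_t n = 0;
    while (s[n])
        n++;
    replacement_was_called = 1;
    return n;
}

int main(void)
{
    size_t n = strlen("hello");   /* often folded to 5 at compile time */
    printf("%zu, replacement called: %d\n", n, replacement_was_called);
    return 0;
}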
Also, mmap() a region at address zero. The compiler may omit code that accesses it, because it assumes that code dereferencing a NULL pointer is undefined behavior and therefore unreachable. However, if that mmap() call succeeds, dereferencing address zero (NULL is zero on most platforms) has just become defined. gcc has a compiler option to tell it to stop making that assumption, and Clang eventually caved to pressure and added it as well, because it would otherwise miscompile the kernel. https://reviews.llvm.org/D47894#change-z5AkMbcq7h1h
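A minimal sketch of that scenario, assuming a Linux-like system where vm.mmap_min_addr permits mapping page zero; build with -fno-delete-null-pointer-checks (gcc) so the compiler does not treat the accesses as unreachable:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Ask the OS for an anonymous page mapped at address zero. */
    void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");   /* typically fails unless mmap_min_addr is 0 */
        return 1;
    }

    /* Defined by the platform once the mapping exists, even though the
     * language standard calls it undefined. */
    *(volatile int *)0 = 42;
    printf("read back: %d\n", *(volatile int *)0);
    return 0;
}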
Back in the 90s, when the aliasing rules were just starting to take shape, there were more examples, because the aliasing rules changed the definition of the language. But this is well settled now.

Related

GNU C compiler sabotages undefined behaviour

I have an embedded project that requires at some point that I write to address 0. So naturally I try:
*(int*)0 = 0 ;
But at optimisation level 2 or higher, the gcc compiler rubs its hands and says, in effect, "That is undefined behaviour! I can do what I like! Bwahaha!" and emits an invalid instruction to the code stream!
Here is my source file:
void f (void)
{
*(int*)0 = 0 ;
}
and here is the output listing:
.file "bug.c"
.text
.p2align 4,,15
.globl _f
.def _f; .scl 2; .type 32; .endef
_f:
LFB0:
.cfi_startproc
movl $0, 0
ud2 <-- Invalid instruction!
.cfi_endproc
LFE0:
.ident "GCC: (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 7.3.0"
My question is: Why would anybody do this? What possible benefit could accrue from sabotaging code like this? Surely the obvious course of action is to issue a warning and carry on compiling?
I know the compiler is allowed to do this, I just wonder about the motivation of the compiler writer. It cost me two days and four engineering samples to track this down, so I'm a little peeved.
Edited to add: I have worked around this by using assembly language. So I'm not looking for solutions. I'm just curious why anybody would think this compiler behaviour was a good idea.
(Disclaimer: I'm not an expert on GCC internals, and this is more of a "post hoc" attempt to explain its behavior. But maybe it will be helpful.)
the gcc compiler rubs its hands and says, in effect, "That is undefined behaviour! I can do what I like! Bwahaha!" and emits an invalid instruction to the code stream!
I won't deny that there are cases where GCC does more or less that, but here there's a little more going on, and there is some method to its madness.
As I understand it, GCC isn't treating the null dereference as totally undefined here; it is making some assumptions about what it does. Its handling of null dereferences is controlled by a flag called -fdelete-null-pointer-checks, which is probably enabled by default when you turn on optimizations. From the manual:
-fdelete-null-pointer-checks
Assume that programs cannot safely dereference null pointers, and that no code or data element resides at address zero. This option enables simple constant folding optimizations at all optimization levels. In addition, other optimization passes in GCC use this flag to control global dataflow analyses that eliminate useless checks for null pointers; these assume that a memory access to address zero always results in a trap, so that if a pointer is checked after it has already been dereferenced, it cannot be null.
Note however that in some environments this assumption is not true. Use -fno-delete-null-pointer-checks to disable this optimization for programs that depend on that behavior.
This option is enabled by default on most targets. On Nios II ELF, it defaults to off. On AVR, CR16, and MSP430, this option is completely disabled.
Passes that use the dataflow information are enabled independently at different optimization levels.
So, if you are intending to actually access address 0, or if for some other reason your code will go on executing after the dereference, then you want to disable this with -fno-delete-null-pointer-checks. That will achieve the "carry on compiling" part of what you want. It will not give you warnings, however, presumably under the assumption that such dereferences are intentional.
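For instance (a sketch, not taken from the answer), the embedded case in the question can be expressed so that neither optimization interferes: the volatile qualifier keeps the store itself from being optimized away, and the flag stops GCC from treating everything after it as unreachable.

/* bug.c -- build with: gcc -O2 -fno-delete-null-pointer-checks -c bug.c */
void f(void)
{
    *(volatile int *)0 = 0;   /* the store is kept and the code after it is not discarded */
}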
But under default options, why are you seeing the generated code that you do, with the undefined instruction, and why isn't there a warning? I would guess that GCC's logic is running as follows:
Because -fdelete-null-pointer-checks is in effect, the compiler assumes that execution will not continue past the null dereference, but instead will trap. How the trap will be handled, it doesn't know: maybe program termination, maybe a signal or exception handler, maybe a longjmp up the stack. The null dereference itself is emitted as requested, perhaps under the assumption that you are intentionally exercising your trap handler. But either way, whatever code comes after the null dereference is now unreachable.
So now it does what any reasonable optimizing compiler does with unreachable code: it doesn't emit it. In your case, that's nothing but a ret, but whatever it is, as far as GCC is concerned it would just be wasted bytes of memory, and should be omitted.
You might think you should get a warning here, but GCC has a longstanding design decision not to warn about unreachable code, on the grounds that such warnings tended to be inconsistent and the false positives would do more harm than good. See for instance https://gcc.gnu.org/legacy-ml/gcc-help/2011-05/msg00360.html.
However, as a safety feature, GCC emits an undefined instruction (ud2 on x86) in place of the omitted unreachable code. The idea, I believe, is that just in case execution somehow does continue past the null dereference, it is better for the program to die, than to go off into the weeds and try to execute whatever memory contents happen to come next. (And indeed this can happen even on systems that do unmap the zero page; for instance, if you do struct huge *p = NULL; p->x = 0;, GCC understands this as a null dereference, even though p->x may not be on the zero page at all, and could conceivably be located at an accessible address.)
There is a warning flag, -Wnull-dereference, that will trigger a warning on your blatant null dereference. However, it only works if -fdelete-null-pointer-checks is enabled.
When would GCC's behavior be useful? Here's an example, maybe contrived, but it might get the idea across. Imagine your program has some allocation function that might fail:
struct foo *p = get_foo();
// do other stuff for a while
if (!p) {
// 5000 lines of elaborate backup plan in case we can't get a foo
}
frob(p->bar);
Now imagine that you redesign get_foo() so that it can't fail. You forget to take out your "backup plan" code, but you go ahead and use the returned object right away:
struct foo *p = get_foo();
frob(p->bar);
// do other stuff for a while
if (!p) {
// 5000 lines of elaborate backup plan in case we can't get a foo
}
The compiler doesn't know, a priori, that get_foo() will always return a valid pointer. But it can see that you've dereferenced it, and thus can assume that execution will only continue past that point if the pointer was not null. Therefore, it can tell that the elaborate backup plan is unreachable and should be omitted, which will save you a lot of bloat in your binary.
Incidentally, a word about the situation with clang: although, as Eric Postpischil points out, you do get a warning, what you don't get is an actual load from address 0: clang omits it and just emits ud2. This is what "doing whatever it likes" would really look like, and if you were hoping to exercise your page-zero trap handler, you are out of luck.
In describing undefined behavior, the Standard refers to it as resulting "upon use of a nonportable or erroneous program construct or of erroneous data", and the authors of the Standard clarify their intentions in the published Rationale: "Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose. It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially undefined behavior." The question of when to extend the language in such fashion, treating various forms of UB as non-portable but correct, was left as a Quality of Implementation issue outside the Standard's jurisdiction.
The maintainers of clang and gcc take the view that the phrase "nonportable or erroneous" should be interpreted as synonymous with "erroneous", since the Standard does not forbid such an interpretation. If a compiler will only ever be used to process portable programs that are never fed erroneous data, such an interpretation will sometimes allow it to process some strictly conforming programs, fed exclusively valid data, more quickly than would otherwise be possible, at the expense of making it less suitable for other purposes. I personally would view the range of programs that a compiler can usefully process reasonably efficiently as a much better metric of quality than the efficiency with which it can process strictly conforming programs, but people who use compilers for different purposes may have different views about what makes a compiler more or less useful for those purposes.

What does uintptr_t have to do with strict aliasing?

I was doing some research on strict aliasing and how to handle it and found this commit on DPDK.
To fix strict aliasing (according to the comments), they cast the void* parameters src and dst to uintptr_t, and then use the cast versions.
In my understanding, this should have nothing to do with the strict aliasing rule, since the rule itself says nothing about casting to uintptr_t.
Would a cast to uintptr_t really help strict-aliasing? Or would this just fix some possible warnings from GCC?
Would a cast to uintptr_t really help strict-aliasing?
No, it would not.
Or would this just fix some possible warnings from GCC?
"Fix" in the sense of disguising the strict-aliasing violations well enough that the compiler does not diagnose them, yes, it might. And presumably it indeed did so for whoever made that change.
This is pernicious, because now, not only may the compiler do something unwanted with the code, but you cannot even prevent it from doing so by passing it the -fno-strict-aliasing option (or whatever similar option a different compiler might provide). Worse, it might work fine with the compiler used today, but break months or years later when you upgrade to a new version or when you switch to a different C implementation.
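A hedged illustration of the kind of pattern at issue (generic code, not the actual DPDK change): the round trip through uintptr_t does not change the type through which the storage is accessed, so the violation remains; it is merely harder for the compiler to see and warn about.

#include <stdint.h>
#include <stdio.h>

/* Still a strict-aliasing violation: a float object is read through an
 * lvalue of type uint32_t, with or without the uintptr_t detour. */
static uint32_t bits_of(float *f)
{
    uintptr_t addr = (uintptr_t)f;   /* typically silences the -Wstrict-aliasing diagnostic... */
    return *(uint32_t *)addr;        /* ...but the access type is unchanged */
}

int main(void)
{
    float x = 1.0f;
    printf("0x%08x\n", (unsigned)bits_of(&x));   /* 0x3f800000 on IEEE-754 targets */
    return 0;
}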
The "strict aliasing rules" specify situations where even implementations that are not intended to be suitable for low-level programming must allow for the possibility of aliasing between seemingly-unrelated objects. Compilers which are designed to be suitable for low-level programming are allowed to, and will, extend the language by behaving meaningfully--typically processing constructs "in a documented fashion characteristic of the environment" in more circumstances than mandated by the Standard, especially in the presence of constructs that would generally be useless otherwise.
Relatively few programs that aren't intending to access storage in low-level fashion will perform integer-to-pointer conversions. Thus, implementations that treat such conversions as an indication that they should avoid making any assumptions about the pointers formed thereby will be able to usefully process a wider range of programs than those which don't, without having to give up many opportunities for genuinely-useful optimizations. While it would be better to have the Standard specify a syntax for the purpose of erasing any evidence of pointer provenance, conversions through integer types presently work for almost all compilers other than clang.
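For code that only needs to reinterpret bytes, here is a sketch (not from either answer) of the approach that does not depend on any compiler-specific treatment of pointer-to-integer conversions: copying the object representation with memcpy is well defined, and modern compilers typically compile it to a single move.

#include <stdint.h>
#include <string.h>

/* Well-defined type punning: copy the bytes instead of accessing the
 * float through an incompatible lvalue. */
static uint32_t bits_of_portable(const float *f)
{
    uint32_t u;
    memcpy(&u, f, sizeof u);
    return u;
}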

Are there any categories to characterize warnings?

My working assumption about what compilers warn about in C code was that they warn about behavior that is implementation-defined, or about constructs they detect that cause undefined behavior but that they support anyway (if they detected a construct and didn't support it, they would issue an error rather than just a warning).
After a discussion about this, the final proof that I was wrong was this:
#include <whatever_this_needs.h>
int main()
{
int i = 50;
return 0;
}
The compiler, of course, warned that i was declared but never used.
I hadn't been thinking about this kind of warning any more, since I saw it more as a tool, a piece of information.
While I would strictly separate this kind of warning from one that warns me about non-portability or about dropping significance without an explicit cast, it is still something that can lead to confusion through compiler optimizations.
So I'm now interested: Are there any categorizations of warning types?
If no standards about this exist, what categories does GCC group its warnings into?
What I have noticed so far (empirically, again):
Warnings about:
implementation-defined / undefined behavior
unnecessary code (a target for optimization)
violations of optional standards (e.g. MISRA or POSIX)
But the second point in particular bothers me, since there are constructs (e.g. strict aliasing violations) where optimization can even result in unexpected runtime behavior, while in most cases it merely cuts away code that isn't used anyway.
So are my points correct? And what (additional) official categories are there that warnings can be sorted into, what are their characteristics, and what is their impact?
Warnings are beyond the scope of the C standard, so there are no requirements or specification for how they should behave. The C standard is only concerned about diagnostics, as in diagnostic messages from the compiler to the programmer. The standard doesn't split those up in errors and warnings.
However, all compilers out there use errors to indicate direct violations of the C standard: syntax errors and similar. They use warnings to point out things beyond what is required by the C standard.
In almost every case, a warning simply means "oh by the way, you have a bug here".
Regarding GCC (see this), it simply groups warnings into:
Things that are direct violations of the C standard but valid as non-standard GNU extensions (-pedantic)
"A handful of warnings" (-Wall), which despite the name enables many warnings, but not all of them...
"A few warnings more" (-Wextra)
Plus numerous individual warnings that belong to no category.
There's no obvious logic behind the system.
Note that GCC, being filled to the brim with non-standard extensions, has decided to give only warnings, rather than errors, for some C standard violations. So always compile with -pedantic-errors if you care about standard compliance.
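To make the grouping above concrete, here is a small sketch; the flag attributions in the comments are approximate and can shift between GCC versions, so treat them as illustrative rather than authoritative.

/* warn.c -- compile with: gcc -std=c11 -pedantic -Wall -Wextra -Wconversion -c warn.c */

struct s { int a; int b; };

int zero_len[0];                    /* GNU extension; -pedantic: ISO C forbids zero-size array */

int f(int unused_param)             /* -Wextra: -Wunused-parameter */
{
    int unused_local = 50;          /* -Wall: -Wunused-variable */
    struct s x = { 1 };             /* -Wextra: -Wmissing-field-initializers */
    long big = 1234567890L;
    char c = big;                   /* -Wconversion: enabled by neither -Wall nor -Wextra */
    (void)x;
    return c;
}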
Regarding implementation-defined behavior: C contains a lot of it, and it would get very tedious if you got a warning for every such case ("warning: two's complement int used"...). There is no relation between implementation-defined behavior and compiler warnings.
Regarding undefined behavior, the compiler is often unable to detect it, since undefined behavior is by definition runtime behavior beyond the scope of the standard. Therefore the responsibility for knowing about and avoiding UB lies with the programmer.

Does the C standard mandate that platforms must not define behaviors beyond those given in the standard?

The C standard makes clear that a compiler/library combination is allowed to do whatever it likes with the following code:
int doubleFree(char *p)
{
int temp = *p;
free(p);
free(p);
return temp;
}
In the event that a compiler does not require use of a particular bundled library, however, is there anything in the C standard which would forbid a library from defining a meaningful behavior? As a simple example, suppose code were written for a platform which had reference-counted pointers, such that following p = malloc(1234); __addref(p); __addref(p); the first two calls to free(p) would decrement the counter but not free the memory. Any code written for use with such a library would naturally work only with such a library (and the __addref() calls would likely fail on most others), but such a feature could be helpful in many cases, e.g. when it is necessary to pass a string repeatedly to a method which expects to be given a string produced with strdup and consequently calls free on it.
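As a hedged sketch of the kind of library extension the question imagines (the header layout and the rc_* names are hypothetical; __addref is the question's own invented call, stood in for here by rc_addref):

#include <stdlib.h>

/* Hypothetical reference-counted allocator: each block carries a count, and
 * releasing it only frees the memory once the count drops to zero, giving a
 * defined meaning to "freeing" the same pointer several times.
 * A real implementation would pad the header to max_align_t alignment. */
struct rc_header { size_t refs; };

void *rc_malloc(size_t n)
{
    struct rc_header *h = malloc(sizeof *h + n);
    if (!h)
        return NULL;
    h->refs = 1;
    return h + 1;                       /* user data starts after the header */
}

void rc_addref(void *p)                 /* plays the role of __addref() */
{
    if (p)
        ((struct rc_header *)p - 1)->refs++;
}

void rc_free(void *p)                   /* a free() with defined double-free behavior */
{
    if (!p)
        return;
    struct rc_header *h = (struct rc_header *)p - 1;
    if (--h->refs == 0)
        free(h);
}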
In the event that a library would define a useful behavior for some action like double-freeing a pointer, is there anything in the C standard which would authorize a compiler to unilaterally break it?
There are really two questions here: your formally stated one, and the broader one outlined in your comments on the questions raised by others.
Your formal question is answered by the definition of undefined behavior and by section 4 on conformance. The definition says (emphasis mine):
behavior, upon use of a nonportable or erroneous program construct or of erroneous data,
for which this International Standard imposes no requirements
With emphasis on nonportable and imposes no requirements. This really says it all: the compiler is free to optimize in unpleasant ways, or it can choose to document the behavior and make it well defined. Doing so of course means the program is no longer strictly conforming, which brings us to section 4:
A strictly conforming program shall use only those features of the language and library specified in this International Standard. It shall not produce output dependent on any unspecified, undefined, or implementation-defined behavior, and shall not exceed any minimum implementation limit.
but a conforming implementation is allowed extensions as long as they don't alter the behavior of any strictly conforming program:
A conforming implementation may have extensions (including additional library functions), provided they do not alter the behavior of any strictly conforming program.
As the C FAQ says:
There are very few realistic, useful, strictly conforming programs. On the other hand, a merely conforming program can make use of any compiler-specific extension it wants to.
Your informal question deals with compilers taking more aggressive optimization opportunities around undefined behavior, and with the fear that in the long run this will make real-world systems programming impossible. While I do understand how this relatively new aggressive stance seems very programmer-unfriendly to many, in the end a compiler won't last very long if people cannot build useful programs with it. A related blog post by John Regehr: Proposal for a Friendly Dialect of C.
One could argue the opposite: compilers have put a lot of effort into building extensions to support needs the standard does not cover. I think the article GCC hacks in the Linux kernel demonstrates this well. It goes into the many gcc extensions that the Linux kernel relies on, and clang has in general attempted to support as many gcc extensions as possible.
Whether compilers have removed useful handling of undefined behavior in ways that hamper effective systems programming is not clear to me. I think specific questions about alternatives for individual cases of undefined behavior that were exploited in systems programming and no longer work would be useful and interesting to the community.
Does the C standard mandate that platforms must not define behaviors beyond those given in the standard?
Quite simply, no, it does not. The standard says:
An implementation shall be accompanied by a document that defines all implementation-defined and locale-specific characteristics and all extensions.
There is no restriction anywhere in the standard that prohibits implementations from providing any other documentation they like. If you like, you can read N1570, the latest freely available draft of the ISO C standard, and confirm the lack of any such prohibition.
In the event that a library would define a useful behavior for some action like double-freeing a pointer, is there anything in the C standard which would authorize a compiler to unilaterally break it?
A C implementation includes both the compiler and the standard library. free() is part of the standard library. The standard does not define the behavior of passing the same pointer value to free() twice, but an implementation is free to define the behavior. Any such documentation is not required, and is outside the scope of the C standard.
If a C implementation documented, for example, that calling free() a second time on the same pointer value has no effect, but doing so actually caused the program to crash, that would violate the implementation's own documentation, but it would not violate the C standard. There is no specific requirement in the C standard that an implementation must conform to its own documentation, beyond the documentation that the standard itself requires. An implementation's conformance to its own documentation is enforced by the market and by common sense, not by the C standard.
In the event that a library would define a useful behavior for some action like double-freeing a pointer, is there anything in the C standard which would authorize a compiler to unilaterally break it?
The compiler and the standard library (i.e. the one in which free is defined) are both part of the implementation - it isn't really coherent to talk about one of them doing something "unilaterally".
If a compiler "does not require use of a particular bundled library", then (other than perhaps as a freestanding implementation) it alone is not an implementation, so the standard doesn't apply to it at all. The behavior of a combination of a library and a compiler are the responsibility of whoever chooses to combine them (which may be the author of either component, or someone else entirely) and label this combination as an implementation. It would, of course, be wise not to document extensions implemented by the library as features of this implementation without confirming that the compiler does not break them. For that matter, you would also need to make sure that the compiler doesn't break anything used internally by the library.
In answer to your main question: no, it does not. If the end result of combining a library and a compiler (and kernel, dynamic loader, etc.) is a conforming hosted environment, it is a conforming implementation, even if some extensions that the library's author would have liked to provide are not supported by the combination; but nothing requires those extensions to work, either. Conversely, if the result does not conform, for example because the compiler breaks the internals of the library and thereby causes some library function not to conform, then it is not a conforming implementation. Any program which calls free twice on the same pointer, or uses any reserved identifier starting with two underscores, causes undefined behavior and therefore is not a strictly conforming program.

__builtin_return_address returns null for index >0?

I want to get the return address of the caller function. I'm using the __builtin_return_address() function, but if I give an index value greater than 0 it returns NULL.
Please help me with this or tell me any other function to get the same.
See this answer to a related question.
__builtin_return_address is GCC- and processor-specific (it is also available in some versions of Clang, on some processors, and only in the absence of certain optimizations), and it is documented as follows:
On some machines it may be impossible to determine the return address of any function other than the current one
The compiler might optimize a function (e.g. when it is compiled with -fomit-frame-pointer, or for tail-calls, or by function inlining) without the relevant information.
So probably you are getting NULL because the information is not available!
In addition to compiler optimisation (which is IMO the most likely cause of the issue you're facing), the GCC documentation states quite plainly:
Calling this function with a nonzero argument can have unpredictable effects, including crashing the calling program. As a result, calls that are considered unsafe are diagnosed when the -Wframe-address option is in effect. Such calls should only be made in debugging situations.
As Basile said, since it's a compiler builtin (read: very processor specific and a bad idea to use) the behaviour is exceptionally loosely defined (as it is not required by any standards and does not have to make any guarantees).
Just use backtrace(3); it's widely available (a glibc extension also provided on the BSDs and macOS, though not actually part of POSIX) and doesn't rely on compiler builtins.
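A minimal sketch of that alternative, assuming glibc (or another libc that ships <execinfo.h>); link with -rdynamic if you want function names in the output:

#include <execinfo.h>   /* backtrace(), backtrace_symbols() */
#include <stdio.h>
#include <stdlib.h>

static void show_callers(void)
{
    void *frames[16];
    int n = backtrace(frames, 16);              /* fill in up to 16 return addresses */
    char **names = backtrace_symbols(frames, n);
    if (!names)
        return;
    for (int i = 0; i < n; i++)
        printf("  %s\n", names[i]);
    free(names);
}

int main(void)
{
    show_callers();
    return 0;
}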

Resources