Does the C99 standard define observable behavior as C++03 does?

In C++03 Standard 1.9/6 there's this definition of observable behavior
The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions.
and the Standard goes to some length to explain that the compiler must preserve observable behavior while doing optimizations.
However, there is no such definition, or anything similar, in the C99 draft I'm looking at. The only place observable behavior is mentioned is 6.7.3/7:
The intended use of the restrict qualifier (like the register storage class) is to promote
optimization, and deleting all instances of the qualifier from a conforming program does
not change its meaning (i.e., observable behavior)
Is there a definition of what exactly the compiler must preserve when optimizing a C99 program?

In my draft, §3.4 defines behavior as "external appearance or action". "Observable behavior" seems to be a pleonasm that occurs exactly once.
§5.1.2.3, Program execution, further defines the behavior of C programs:
The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant.
It then goes on to define side-effects as "changes in the state of the execution environment" caused by "[a]ccessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations". Side-effects are sequenced at sequence points.
This seems to be stricter than C++ in that "modifying an object", i.e. writing to memory, is (observable) behavior in C.
As for allowed optimization:
In the abstract machine, all expressions are evaluated as specified by the semantics. An
actual implementation need not evaluate part of an expression if it can deduce that its
value is not used and that no needed side effects are produced (including any caused by
calling a function or accessing a volatile object).
"Needed side-effects" are then listed in the following point:
At sequence points, volatile objects are stable in the sense that previous accesses are
complete and subsequent accesses have not yet occurred.
At program termination, all data written into files shall be identical to the result that
execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in
7.19.3.
The paragraph concludes with a list of examples; §7.19.3 describes files in the context of stdio.
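To make the difference concrete, here is a minimal sketch of my own (the object names are hypothetical): a conforming implementation may discard the dead computation entirely, but it must perform the volatile access and produce the stream output exactly as the abstract machine would.

#include <stdio.h>

volatile int status_reg;      /* assume: behaves like a device register */

void example(int a, int b)
{
    int dead = a * b;         /* value unused, no needed side effects: may be dropped */
    (void)dead;

    int s = status_reg;       /* read of a volatile object: must be performed */
    (void)s;

    fputs("done\n", stdout);  /* data written to a stream: must match the abstract semantics */
}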

Related

Can volatile variables be read multiple times between sequence points?

I'm writing my own C compiler to try to learn as many details as possible about C. I'm now trying to understand exactly how volatile objects work.
What is confusing is that every read access in the code must be executed strictly (C11, 6.7.3p7):
An object that has volatile-qualified type may be modified in ways unknown to the implementation or have other unknown side effects. Therefore any expression referring to such an object shall be evaluated strictly according to the rules of the abstract machine, as described in 5.1.2.3. Furthermore, at every sequence point the value last stored in the object shall agree with that prescribed by the abstract machine, except as modified by the unknown factors mentioned previously.134) What constitutes an access to an object that has volatile-qualified type is implementation-defined.
Example: in a = volatile_var - volatile_var;, the volatile variable must be read twice, and thus the compiler can't optimise it to a = 0;.
At the same time, the order of evaluation between sequence points is unspecified (C11, 6.5p3):
The grouping of operators and operands is indicated by the syntax. Except as specified later, side effects and value computations of subexpressions are unsequenced.
Example: in b = (c + d) - (e + f), the order in which the additions are evaluated is unspecified, as they are unsequenced.
But when unsequenced evaluations produce side effects (with volatile, for instance), the behaviour is undefined (C11, 6.5p2):
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. If there are multiple allowable orderings of the subexpressions of an expression, the behavior is undefined if such an unsequenced side effect occurs in any of the orderings.
Does this mean that expressions like x = volatile_var - (volatile_var + volatile_var) are undefined? Should my compiler emit a warning if this occurs?
I've tried to see what Clang and GCC do. Neither throws an error or a warning. The output asm shows that the variables are NOT read in the expected execution order, but left to right instead, as shown in the RISC-V asm below:
const int volatile thingy = 0;

int main()
{
    int new_thing = thingy - (thingy + thingy);
    return new_thing;
}
main:
        lui     a4,%hi(thingy)
        lw      a0,%lo(thingy)(a4)
        lw      a5,%lo(thingy)(a4)
        lw      a4,%lo(thingy)(a4)
        add     a5,a5,a4
        sub     a0,a0,a5
        ret
Edit: I am not asking "Why do compilers accept it?"; I am asking "Is it undefined behavior if we strictly follow the C11 standard?". The standard seems to state that it is undefined behaviour, but I need more precision to interpret it correctly.
Reading the standard (ISO 9899:2018) literally, it is undefined behavior.
C17 5.1.2.3/2 - definition of side effects:
Accessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations are all side effects
C17 6.5/2 - sequencing of operands:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. If there are multiple allowable orderings of the subexpressions of an expression, the behavior is undefined if such an unsequenced side effect occurs in any of the orderings.
Thus, reading the standard literally, volatile_var - volatile_var is definitely undefined behavior. It is UB twice over, actually, since both of the quoted sentences apply.
Please also note that this text changed quite a bit in C11. Previously C99 said, 6.5/2:
Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.
That is, the behaviour was previously unspecified in C99 (unspecified order of evaluation) but was made undefined by the changes in C11.
That being said, other than re-ordering the evaluation as it pleases, a compiler doesn't really have any reason to do wild and crazy things with this expression since there isn't much that can be optimized, given volatile.
As a quality of implementation, mainstream compilers seem to maintain the previous "merely unspecified" behavior from C99.
Per C11, this is undefined behavior.
Per 5.1.2.3 Program execution, paragraph 2 (bolding mine):
Accessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations are all side effects ...
And 6.5 Expressions, paragraph 2 (again, bolding mine):
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined.
Note that, as this is your compiler, you are free to define the behavior should you wish.
As other answers have pointed out, accessing a volatile-qualified variable is a side effect, and side effects are interesting, and having multiple side effects between sequence points is especially interesting, and having multiple side effects that affect the same object between sequence points is undefined.
As an example of how/why it's undefined, consider this (wrong) code for reading a two-byte big-endian value from an input stream ifs:
uint16_t val = (getc(ifs) << 8) | getc(ifs); /* WRONG */
This code assumes (in order to implement big-endianness) that the two getc calls happen in left-to-right order, but of course that's not guaranteed at all, which is why this code is wrong.
Now, one of the things the volatile qualifier is for is input registers. So if you've got a volatile variable
volatile uint8_t inputreg;
and if every time you read it you get the next byte coming in on some device — that is, if merely accessing the variable inputreg is like calling getc() on a stream — then you might write this code:
uint16_t val = (inputreg << 8) | inputreg; /* ALSO WRONG */
and it's just about exactly as wrong as the getc() code above.
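For completeness, a sketch of the conventional fix (my own code, reusing ifs and inputreg from above and ignoring EOF handling): splitting the reads into separate statements puts a sequence point between them, so the high-order byte really is read first.

int hi = getc(ifs);                          /* first read completes at the end of this statement */
int lo = getc(ifs);                          /* second read is sequenced after the first */
uint16_t val = (uint16_t)((hi << 8) | lo);

uint8_t first  = inputreg;                   /* same idea for the volatile input register */
uint8_t second = inputreg;
uint16_t val2  = (uint16_t)((first << 8) | second);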
The Standard has no terminology more specific than "Undefined Behavior" to describe actions which should be unambiguously defined on some implementations, or even the vast majority of them, but may behave unpredictably on others, based upon Implementation-Defined criteria. If anything, the authors of the Standard go out of their way to avoid saying anything about such behaviors.
The term is also used as a catch-all for situations where a potentially useful optimization might observably affect program behavior in some cases, to ensure that such optimizations will not affect program behavior in any defined situations.
The Standard specifies that the semantics of volatile-qualified accesses are "Implementation Defined", and there are platforms where certain kinds of optimizations involving volatile-qualified accesses might be observable if more than one such access occurs between sequence points. As a simple example, some platforms have read-modify-write operations whose semantics may be observably distinct from doing discrete read, modify, and write operations. If a programmer were to write:
void x(int volatile *dest, int volatile *src)
{
    *dest = *src | 1;
}
and the two pointers were equal, the behavior of such a function might depend upon whether a compiler recognized that the pointers were equal and replaced discrete read and write operations with a combined read-modify-write.
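A hedged illustration of when that matters, using the function x from above (the register address is made up purely for this sketch):

/* Hypothetical memory-mapped register address, for illustration only. */
#define IOREG (*(int volatile *)0x4000A000)

void demo(void)
{
    /* Both arguments alias the same register. With discrete accesses the
       hardware sees a read followed by a separate write; a merged
       read-modify-write instruction may be observably different on some
       devices. */
    x(&IOREG, &IOREG);
}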
To be sure, such distinctions would be unlikely to matter in most cases, and would be especially unlikely to matter in cases where an object is read twice. Nonetheless, the Standard makes no attempt to distinguish situations where such optimizations would actually affect program behavior, much less those where they would affect program behavior in any way that actually mattered, from those where it would be impossible to detect the effects of such optimization. The notion that the phrase "non-portable or erroneous" excludes constructs which would be non-portable but correct on the target platform would lead to an interesting irony that compiler optimizations such as read-modify-write merging would be completely useless on any "correct" programs.
No diagnostic is required for programs with Undefined Behaviour, except where specifically mentioned. So it's not wrong to accept this code.
In general, it's not possible to know whether the same volatile storage is being accessed multiple times between sequence points (consider a function taking two volatile int* parameters, without restrict, as the simplest example where analysis is impossible).
That said, when you are able to detect a problematic situation, users might find it helpful, so I encourage you to work on getting a diagnostic out.
IMO it is legal but very bad.
int new_thing = thingy - (thingy + thingy);
Multiple use of volatile variables in one expression is allowed and no warning is needed. But from the programmer's point of view, it is a very bad line of code.
Does this mean that expressions like x = volatile_var - (volatile_var + volatile_var) are undefined? Should my compiler throw an error if this occurs?
No, as the C standard does not say anything about how those reads have to be ordered; it is left to the implementations. All implementations known to me do it in whatever way is easiest for them, as in this example: https://godbolt.org/z/99498141d

Is exit status observable behavior?

C 2018 5.1.2.3 6 says:
The least requirements on a conforming implementation are:
Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behavior of the program.
On the face of it, this does not include the exit status of the program.
Regarding exit(status), 7.22.4.4 5 says:
Finally, control is returned to the host environment. If the value of status is zero or EXIT_SUCCESS, an implementation-defined form of the status successful termination is returned. If the value of status is EXIT_FAILURE, an implementation-defined form of the status unsuccessful termination is returned. Otherwise the status returned is implementation-defined.
The standard does not tell us this is part of the observable behavior. Of course, it makes no sense for this exit behavior to be a description purely of C’s abstract machine; returning a value to the environment has no meaning unless it is observable in the environment. So my question is not so much whether the exit status is observable as whether this is a defect in the C standard’s definition of observable behavior. Or is there text somewhere else in the standard that applies?
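For concreteness, here is a minimal example of the behavior being asked about (the shell observation afterwards is a POSIX-ism that the C standard itself does not promise):

#include <stdlib.h>

int main(void)
{
    /* An implementation-defined form of "unsuccessful termination" is
       returned to the host environment (7.22.4.4). */
    return EXIT_FAILURE;
}

On a typical POSIX shell, running the program and then echo $? prints a nonzero status (commonly 1), yet nothing in 5.1.2.3 6 explicitly lists that value as observable behavior, which is the gap the question is pointing at.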
I think it’s possible to piece this together to see the answer fall under the first bullet point of § 5.1.2.3.6:
Accesses to volatile objects are evaluated strictly according to the
rules of the abstract machine
Looking further, § 3.1 defines “access” as:
to read or modify the value of an object
and § 3.15 defines an “object” as:
region of data storage in the execution environment, the contents of
which can represent values
The standard, curiously, contains no definition of “volatile object”. It does contain a definition of “an object that has volatile-qualified type” in § 6.7.3.6:
An object that has volatile-qualified type may be modified in ways
unknown to the implementation or have other unknown side effects.
Therefore any expression referring to such an object shall be
evaluated strictly according to the rules of the abstract machine, as
described in 5.1.2.3.
It seems not unreasonable to infer that an object that has volatile-qualified type has that qualification precisely to inform the compiler that it is, in fact, a volatile object, so I don’t think it’s stretching things too far to use this wording as the basis of a definition for “volatile object” itself, and define a volatile object as an object which may be modified in ways unknown to the implementation or have other unknown side effects.
§ 5.1.2.3.2 defines “side effect” as follows:
Accessing a volatile object, modifying an object, modifying a file, or
calling a function that does any of those operations are all side
effects, which are changes in the state of the execution environment.
So I think we can piece this together as follows:
Returning the exit status to the host environment is clearly a change in the state of the execution environment, as the execution environment after having received, for example, EXIT_SUCCESS, is necessarily in a different state than it would be had it received, for example, EXIT_FAILURE. Returning the exit status is therefore a side effect per § 5.1.2.3.2.
exit() is a function that does this, so calling exit() is therefore
itself also a side effect per § 5.1.2.3.2.
The standard obviously gives us no details of the inner workings of
exit() or of what mechanism exit() will use to return that value to
the host environment, but it’s nonsensical to suppose that accessing
an object will not be involved, since objects are regions of data
storage in the execution environment whose contents can represent
values, and the exit status is a value.
Since we don’t know what, if anything, the host environment will do
in response to that change in state (not least because our program
will have exited before it happens), this access has an unknown side
effect, and the object accessed is therefore a volatile object.
Since calling exit() accesses a volatile object, it is observable
behavior per § 5.1.2.3.6.
This is consistent with the normal understanding of objects that have volatile-qualified types, namely that we can't optimize away accesses to such objects if we cannot determine that no needed side effects will be omitted, because the observable behavior (in the normal everyday sense) may be affected if we do. There is no visible object of volatile-qualified type in this case, of course, because the volatile object in question is accessed internally by exit(), and exit() obviously need not even be written in C. But there seems undoubtedly to be a volatile object, and § 5.1.2.3 refers specifically (three times) to volatile objects, and not to objects of volatile-qualified type (and other than a footnote to § 6.2.4.2, this is the only place in the standard where volatile objects are referred to).
Finally, this seems to be the only reading that renders § 5.1.2.3.6 intelligible, as intuitively we’d expect the “observable behavior” of C programs using only the facilities described by the standard to be that which loosely:
Changes memory in a way that’s visible outside of the program itself;
Changes the contents of files (which are by definition visible outside of the
program itself); and
Affects interactions with interactive devices
which seems to be essentially what § 5.1.2.3.6 is trying to get at.
Edit
There seems to be some little controversy in the comments apparently centered around the ideas that the exit status may be passed in registers, and that registers cannot be objects. This objection (no pun intended) is trivially refutable:
Objects can be declared with the register storage class specifier, and such objects can be designated by lvalues;
Memory-mapped registers, fairly ubiquitous in embedded programming, provide as clear a demonstration as any that registers can be objects, can have addresses, and can be designated by lvalues. Indeed, memory-mapped registers are one of the most common uses of volatile-qualified types;
mmap() shows that even file contents can sometimes have addresses and be objects.
In general, it's a mistake to believe that objects can only reside in, or that "addresses" can only refer to, locations in core memory, or banks of DRAM chips, or anything else that one might conventionally refer to as "memory" or "RAM". Any component of the execution environment that's capable of storing a value, including registers, could be an object, could potentially have an address, and could potentially be designated by an lvalue, and this is why the definition of "object" in the standard is cast in such deliberately wide terms.
Additionally, sections such as § 5.1.2.3.9 go to some lengths to draw a distinction between "values of the actual objects" and "[values] specified by the abstract semantics", indicating that actual objects are real things existing in the execution environment distinct from the abstract machine, and are things which the specification does indeed closely concern itself with, as the definition of "object" in § 3.15 makes clear. It seems untenable to maintain a position that the standard concerns itself with such actual objects up to and only up to the point at which a standard library function is invoked, at which point all such concerns evaporate and such matters suddenly become "outside C".
The exit status is supposed to be observable to the host environment that runs the implementation. Whether the host environment does anything with this is outside the scope of the standard.
I have read the system function documentation (7.22.4.8 The system function). It contains:
Returns
If the argument is a null pointer, the system function returns nonzero only if a
command processor is available. If the argument is not a null pointer, and the system
function does return, it returns an implementation-defined value.
It looks like the standard made provision for a system where a C program (or more generally a user-defined command) could not start another command, and/or where a command would not return anything to its caller. In that latter case, the exit value would not be observable (in the common sense).
In this interpretation, the observability of the exit value would just be implementation-defined. And that would be consistent with its not being explicitly cited as part of the observable behaviour of a program.
I remember an old system (Solar 16) from the 70s where standard commands were started with call and user commands with run, and where parameters could only be passed to a sub-command after a specific request from the program. No C compiler existed there, but if someone had managed to implement one, the return value would not have been observable.

What do the different classifications of undefined behavior mean?

I was reading through the C11 standard. As per the C11 standard undefined behavior is classified into four different types. The parenthesized numbers refer to the subclause of the C Standard (C11) that identifies the undefined behavior.
Example 1: The program attempts to modify a string literal (6.4.5). This undefined behavior is classified as: Undefined Behavior (information/confirmation needed)
Example 2 : An lvalue does not designate an object when evaluated (6.3.2.1). This undefined behavior is classified as: Critical Undefined Behavior
Example 3: An object has its stored value accessed other than by an lvalue of an allowable type (6.5). This undefined behavior is classified as: Bounded Undefined Behavior
Example 4: The string pointed to by the mode argument in a call to the fopen function does not exactly match one of the specified character sequences (7.21.5.3). This undefined behavior is classified as: Possible Conforming Language Extension
What is the meaning of these classifications? What do they convey to the programmer?
I only have access to a draft of the standard, but from what I’m reading, it seems like this classification of undefined behavior isn’t mandated by the standard and only matters from the perspective of compilers and environments that specifically indicate that they want to create C programs that can be more easily analyzed for different classes of errors. (These environments have to define a special symbol __STDC_ANALYZABLE__.)
It seems like the key idea here is an “out of bounds write,” which is defined as a write operation that modifies data that isn’t otherwise allocated as part of an object. For example, if you clobber the bytes of an existing variable accidentally, that’s not an out of bounds write, but if you jumped to a random region of memory and decorated it with your favorite bit pattern you’d be performing an out of bounds write.
A specific behavior is bounded undefined behavior if the result is undefined, but won’t ever do an out of bounds write. In other words, the behavior is undefined, but you won’t jump to a random address not associated with any objects or allocated space and put bytes there. A behavior is critical undefined behavior if you get undefined behavior that cannot promise that it won’t do an out-of-bounds write.
The standard then goes on to talk about what can lead to critical undefined behavior. By default, undefined behaviors are bounded undefined behaviors, but there are exceptions for UB that results from memory errors, like accessing deallocated memory or using an uninitialized pointer, which have critical undefined behavior. Remember, though, that these classifications only exist and have meaning in the context of implementations of C that choose to specifically separate out these sorts of behaviors. Unless your C environment guarantees it's analyzable, all undefined behaviors can potentially do absolutely anything!
My guess is that this is intended for environments like building drivers or kernel plugins where you’d like to be able to analyze a piece of code and say “well, if you're going to shoot someone in the foot, it had better be your foot that you’re shooting and not mine!” If you compile a C program with these constraints, the runtime environment can instrument the very few operations that are allowed to be critical undefined behavior and have those operations trap to the OS, and assume that all other undefined behaviors will at most destroy memory that’s specifically associated with the program itself.
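A hedged sketch of the distinction (my own example, not taken from the standard):

#include <limits.h>
#include <stdlib.h>

void bounded_example(int x)
{
    /* For most values of x this multiplication overflows; signed overflow
       is undefined behavior, but bounded: an analyzable implementation
       promises it cannot turn into a write outside any object. */
    int y = x * INT_MAX;
    (void)y;
}

void critical_example(void)
{
    /* Writing through a pointer to freed storage is critical undefined
       behavior: the write may land anywhere, so no such promise holds. */
    int *p = malloc(sizeof *p);
    free(p);
    *p = 42;
}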
All of these are cases where the behaviour is undefined, i.e. the standard "imposes no requirements". Traditionally, within undefined behaviour and considering one implementation (i.e. C compiler + C standard library), one could see two kinds of undefined behaviour:
constructs for which the behaviour would not be documented, or would be documented to cause a crash, or the behaviour would be erratic,
constructs that the standard left undefined but for which the implementation defines some useful behaviour.
Sometimes these can be controlled by compiler switches. E.g. example 1 usually causes bad behaviour - a trap, or a crash, or a modified shared value. Earlier versions of GCC allowed one to have modifiable string literals with -fwritable-strings; if that switch was given, the implementation defined the behaviour in that case.
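As a minimal sketch of example 1 (my own code):

int main(void)
{
    char *s = "hello";   /* points at a string literal, which is not modifiable */
    s[0] = 'H';          /* undefined behavior: commonly a trap when the literal is
                            placed in read-only storage, though it "worked" under
                            old GCC with -fwritable-strings */
    return 0;
}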
C11 added an optional orthogonal classification: bounded undefined behaviour and critical undefined behaviour. Bounded undefined behaviour is that which does not perform an out-of-bounds store, i.e. it cannot cause values being written in arbitrary locations in memory. Any undefined behaviour that is not bounded undefined behaviour is critical undefined behaviour.
Iff __STDC_ANALYZABLE__ is defined, the implementation conforms to Annex L, which has this definitive list of critical undefined behaviour:
An object is referred to outside of its lifetime (6.2.4).
A store is performed to an object that has two incompatible declarations (6.2.7),
A pointer is used to call a function whose type is not compatible with the referenced type (6.2.7, 6.3.2.3, 6.5.2.2).
An lvalue does not designate an object when evaluated (6.3.2.1).
The program attempts to modify a string literal (6.4.5).
The operand of the unary * operator has an invalid value (6.5.3.2).
Addition or subtraction of a pointer into, or just beyond, an array object and an integer type produces a result that points just
beyond the array object and is used as the operand of a unary *
operator that is evaluated (6.5.6).
An attempt is made to modify an object defined with a const-qualified type through use of an lvalue with
non-const-qualified type (6.7.3).
An argument to a function or macro defined in the standard library has an invalid value or a type not expected by a function
with variable number of arguments (7.1.4).
The longjmp function is called with a jmp_buf argument where the most recent invocation of the setjmp macro in the same invocation of
the program with the corresponding jmp_buf argument is nonexistent,
or the invocation was from another thread of execution, or the
function containing the invocation has terminated execution in the
interim, or the invocation was within the scope of an identifier with
variably modified type and execution has left that scope in the
interim (7.13.2.1).
The value of a pointer that refers to space deallocated by a call to the free or realloc function is used (7.22.3).
A string or wide string utility function accesses an array beyond the end of an object (7.24.1, 7.29.4).
For the bounded undefined behaviour, the standard imposes no requirements other than that an out-of-bounds write is not allowed to happen.
Example 1, modification of a string literal, is also classified as critical undefined behaviour. Example 4 is critical undefined behaviour too - the value is not one expected by the standard library.
For example 4, the wording hints that while the behaviour is undefined for a mode string not defined by the standard, implementations may define behaviour for other flags. For example, glibc supports many more mode flags, such as c, e, m and x, and allows setting the character encoding of the input with a ,ccs=charset modifier (putting the stream into wide mode right away).
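For instance, a sketch of those glibc extensions in use (not portable; as far as the standard is concerned the extra flags leave the behaviour undefined):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("data.txt", "re");          /* 'e': glibc close-on-exec extension */
    if (f)
        fclose(f);

    FILE *w = fopen("data.txt", "r,ccs=UTF-8"); /* glibc: open directly in wide-character mode */
    if (w)
        fclose(w);
    return 0;
}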
Some programs are intended solely for use with input that is known to be valid, or at least to come from trustworthy sources. Others are not. Certain kinds of optimizations which might be useful when processing only trusted data are stupid and dangerous when used with untrusted data. The authors of Annex L unfortunately wrote it excessively vaguely, but the clear intention is to allow compilers to promise that they won't do certain kinds of "optimizations" that are stupid and dangerous when processing data from untrustworthy sources.
Consider the function (assume "int" is 32 bits):
int32_t triplet_may_be_interesting(int32_t a, int32_t b, int32_t c)
{
    return a*b > c;
}
invoked from the context:
#define SCALE_FACTOR 123456
int my_array[20000];
int32_t foo(uint16_t x, uint16_t y)
{
    if (x < 20000)
        my_array[x]++;
    if (triplet_may_be_interesting(x, SCALE_FACTOR, y))
        return examine_triplet(x, SCALE_FACTOR, y);
    else
        return 0;
}
When C89 was written, the most common way a 32-bit compiler would process that code would have been to do a 32-bit multiply and then do a signed comparison with y. A few optimizations are possible, however, especially if a compiler in-lines the function invocation:
On platforms where unsigned compares are faster than signed compares, a compiler could infer that since none of a, b, or c can be negative, the arithmetical value of a*b is non-negative, and it may thus use an unsigned compare instead of a signed comparison. This optimization would be allowable even if __STDC_ANALYZABLE__ is non-zero.
A compiler could likewise infer that if x is non-zero, the arithmetical value of x*123456 will be greater than every possible value of y, and if x is zero, then x*123456 won't be greater than any. It could thus replace the second if condition with simply if (x). This optimization is also allowable even if __STDC_ANALYZABLE__ is non-zero.
A compiler whose authors either intend it for use only with trusted data, or else wrongly believe that cleverness and stupidity are antonyms, could infer that since any value of x larger than 17394 will result in an integer overflow, x may safely be presumed to be 17394 or less. It could thus perform my_array[x]++; unconditionally. A compiler may not define __STDC_ANALYZABLE__ with a non-zero value if it would perform this optimization. It is this latter kind of optimization which Annex L is designed to address. If an implementation can guarantee that the effect of overflow will be limited to yielding a possibly-meaningless value, it may be cheaper and easier for code to deal with the possibility of the value being meaningless than to prevent the overflow. If overflow could instead cause objects to behave as though their values were corrupted by future computations, however, there would be no way a program could deal with things like overflow after the fact, even in cases where the result of the computation would end up being irrelevant.
In this example, if the effect of integer overflow would be limited to yielding a possibly-meaningless value, and if calling examine_triplet() unnecessarily would waste time but would otherwise be harmless, a compiler may be able to usefully optimize triplet_may_be_interesting in ways that would not be possible if it were written to avoid integer overflow at all costs. Aggressive "optimization" will thus result in less efficient code than would be possible with a compiler that instead used its freedom to offer some loose behavioral guarantees.
Annex L would be much more useful if it allowed implementations to offer specific behavioral guarantees (e.g. overflow will yield a possibly-meaningless result, but have no other side-effects). No single set of guarantees would be optimal for all programs, but the amount of text Annex L spent on its impractical proposed trapping mechanism could have been better spent specifying macros to indicate what guarantees various implementations could offer.
According to cppreference :
Critical undefined behavior
Critical UB is undefined behavior that might perform a memory write or
a volatile memory read out of bounds of any object. A program that has
critical undefined behavior may be susceptible to security exploits.
Only the following undefined behaviors are critical:
access to an object outside of its lifetime (e.g. through a dangling pointer)
write to an object whose declarations are not compatible
function call through a function pointer whose type is not compatible with the type of the function it points to
lvalue expression is evaluated, but does not designate an object
attempted modification of a string literal
dereferencing an invalid (null, indeterminate, etc) or past-the-end pointer
modification of a const object through a non-const pointer
call to a standard library function or macro with an invalid argument
call to a variadic standard library function with unexpected argument type (e.g. call to printf with an argument of the type that
doesn't match its conversion specifier)
longjmp where there is no setjmp up the calling scope, across threads, or from within the scope of a VM type.
any use of the pointer that was deallocated by free or realloc
any string or wide string library function accesses an array out of bounds
Bounded undefined behavior
Bounded UB is undefined behavior that cannot perform an illegal memory
write, although it may trap and may produce or store indeterminate
values.
All undefined behavior not listed as critical is bounded, including
multithreaded data races
use of indeterminate values with automatic storage duration
strict aliasing violations
misaligned object access
signed integer overflow
unsequenced side-effects modify the same scalar or modify and read the same scalar
floating-to-integer or pointer-to-integer conversion overflow
bitwise shift by a negative or too large bit count
integer division by zero
use of a void expression
direct assignment or memcpy of inexactly-overlapped objects
restrict violations
etc. - all undefined behavior that's not in the critical list.
"I was reading through the C11 standard. As per the C11 standard undefined behavior is classified into four different types."
I wonder what you were actually reading. The 2011 ISO C standard does not mention these four different classifications of undefined behavior. In fact it's quite explicit in not making any distinctions among different kinds of undefined behavior.
Here's ISO C11 section 4 paragraph 2:
If a "shall" or "shall not" requirement that appears outside of a
constraint or runtime-constraint is violated, the behavior is
undefined. Undefined behavior is otherwise indicated in this
International Standard by the words "undefined behavior" or by the
omission of any explicit definition of behavior. There is no
difference in emphasis among these three; they all describe "behavior
that is undefined".
All the examples you cite are undefined behavior, which, as far as the Standard is concerned, means nothing more or less than:
behavior, upon use of a nonportable or erroneous program construct or
of erroneous data, for which this International Standard imposes no
requirements
If you have some other reference, that discusses different kinds of undefined behavior, please update your question to cite it. Your question would then be about what that document means by its classification system, not (just) about the ISO C standard.
Some of the wording in your question appears similar to some of the information in C11 Annex L, "Analyzability" (which is optional for conforming C11 implementations), but your first example refers to "Undefined Behavior (information/confirmation needed)", and the word "confirmation" appears nowhere in the ISO C standard.

Volatile qualifier on Global in Main Code But not in ISR

My code is written in C. I have an ISR (Interrupt Service Routine) that communicates with the main code using global variables. The ISR is in a different compilation unit from the main code.
Is there any reason I cannot use "volatile" for the main code but leave it off in the ISR?
My reasoning is as follows:
The volatile qualifier prevents the compiler from fully optimizing the ISR. From the ISR's point of view the variable is not volatile - i.e. it cannot be externally changed for the duration of the ISR, and its value does not need to be written out for the duration of the ISR. Additionally, if the ISR is in its own compilation unit, the compiler MUST have the ISR read the global from memory before its first use and MUST store changes back before returning. My reasoning for this is: different compilation units need not be compiled at the same time, so the compiler has no idea what is happening beyond the confines of the ISR (or must behave as though it doesn't), and so it must ensure that the global is read/written at the boundaries of the ISR.
Perhaps I am misunderstanding the significance of compilation units? One reference I found said that GCC has made this volatile mismatch a compile-time error; I am not sure how it could, since if they are in different compilation units, shouldn't they be independent? Can I not compile a library function separately and link it in later?
Nine ways to break your systems code using volatile
Perhaps an argument could be made from the concept of sequence points. I do not fully understand the concepts of sequence points or side effects, but the C99 spec states in 5.1.2.3 paragraph 2:
"... At certain specified points in the execution sequence called sequence points, all side effects of previous evaluations shall be complete and no side effects of subsequent evaluations shall have taken place."
Annex C, lists sequence points that include:
The call to a function, after the arguments have been evaluated.
Immediately before a library function returns.
Ref:WG14 Document: N1013, Date: 07-May-2003
Note: A previous question, Global Variable Access Relative to Function Calls and Returns, asked whether globals are stored/written before/after function calls and returns. But this is a different question, which asks whether a global variable may be differently qualified as "volatile" in different compilation units. I used much of the same reasoning to justify my preliminary conclusions, which prompted some readers to think it is the same question.
ISO/IEC 9899:2011 (the C11 standard) says:
6.7.3 Type qualifiers
¶6 If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non-const-qualified type, the behavior is undefined. If an attempt is made to refer to an object defined with a volatile-qualified type through use of an lvalue with non-volatile-qualified type, the behavior is undefined.133)
133) This applies to those objects that behave as if they were defined with qualified types, even if they are never actually defined as objects in the program (such as an object at a memory-mapped input/output address).
The second sentence of ¶6 says that you invoke undefined behaviour if you have either of the organizations shown here:
File main.c                              File isr.c
volatile int thingamyjig = 37;           extern int thingamyjig;                 // V1
extern int thingamyjig;                  volatile int thingamyjig = 37;          // V2
In each case of V1 or V2, you run foul of the undefined behaviour specified in that section of the standard — though V1 is what I think you're describing in the question.
The volatile qualifier must be applied consistently:
File main.c                              File isr.c
volatile int thingamyjig = 37;           extern volatile int thingamyjig;        // V3
extern volatile int thingamyjig;         volatile int thingamyjig = 37;          // V4
Both V3 and V4 preserve the volatile-qualifiers consistently.
Note that one valid manifestation of 'undefined behaviour' is 'it behaves sanely and as you would like it to'. Unfortunately, that is not the only, or necessarily the most plausible, possible manifestation of undefined behaviour. Don't risk it. Be self-consistent.
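A common way to keep the qualifiers consistent (a sketch of standard practice rather than anything from the quoted text; the header and function names are mine) is to declare the object exactly once in a shared header and include it from both translation units:

/* shared.h (hypothetical header) */
extern volatile int thingamyjig;

/* main.c */
#include "shared.h"
volatile int thingamyjig = 37;    /* the definition carries the same qualifier */

/* isr.c */
#include "shared.h"
void my_isr(void)                 /* hypothetical ISR entry point */
{
    thingamyjig++;                /* every access goes through the volatile-qualified declaration */
}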

What does section 5.1.2.3, paragraph 4 (in n1570.pdf) mean for null operations?

I have been advised many times that accesses to volatile objects can't be optimised away, however it seems to me as though this section, present in the C89, C99 and C11 standards advises otherwise:
... An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object).
If I understand correctly, this sentence is stating that an actual implementation can optimise away part of an expression, providing these two requirements are met:
"its value is not used", and
"that no needed side effects are produced (including any caused by calling a function or accessing a volatile object)"...
It seems to me that many people are confusing the meaning of "including" with the meaning of "excluding".
Is it possible for a compiler to distinguish between a side effect that's "needed", and a side effect that isn't? If timing is considered a needed side effect, then why are compilers allowed to optimise away null operations like do_nothing(); or int unused_variable = 0;?
If a compiler is able to deduce that a function does nothing (eg. void do_nothing() { }), then is it possible that the compiler might have justification to optimise calls to that function away?
If a compiler is able to deduce that a volatile object isn't mapped to anything crucial (i.e. perhaps it's mapped to /dev/null to form a null operation), then is it possible that the compiler might also have justification to optimise that non-crucial side-effect away?
If a compiler can perform optimisations to eliminate unnecessary code such as calls to do_nothing() in a process called "dead code elimination" (which is quite the common practice), then why can't the compiler also eliminate volatile writes to a null device?
As I understand, either the compiler can optimise away calls to functions or volatile accesses or the compiler can't optimise away either, because of 5.1.2.3p4.
I think the "including any" applies to the "needed side-effects" , whereas you seem to be reading it as applying to "part of an expression".
So the intent was to say:
... An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced.
Examples of needed side-effects include:
Needed side-effects caused by a function which this expression calls
Accesses to volatile variables
Now, the term needed side-effect is not defined by the Standard. Paragraph 5.1.2.3/4 is not attempting to define it either -- it's trying (and not succeeding very well) to provide examples.
I think the only sensible interpretation is to treat it as meaning observable behaviour which is defined by 5.1.2.3/6. So it would have been a lot simpler to write:
An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no observable behaviour would be caused.
Your questions in the edit are answered by 5.1.2.3/6, sometimes known as the as-if rule, which I'll quote here:
The least requirements on a conforming implementation are:
Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behaviour of the program.
Answering the specific questions in the edit:
Is it possible for a compiler to distinguish between a side effect that's "needed", and a side effect that isn't? If timing is considered a needed side effect, then why are compilers allowed to optimise away null operations like do_nothing(); or int unused_variable = 0;?
Timing isn't a side-effect. A "needed" side-effect presumably here means one that causes observable behaviour.
If a compiler is able to deduce that a function does nothing (eg. void do_nothing() { }), then is it possible that the compiler might have justification to optimise calls to that function away?
Yes, these can be optimized out because they do not cause observable behaviour.
If a compiler is able to deduce that a volatile object isn't mapped to anything crucial (i.e. perhaps it's mapped to /dev/null to form a null operation), then is it possible that the compiler might also have justification to optimise that non-crucial side-effect away?
No, because accesses to volatile objects are defined as observable behaviour.
If a compiler can perform optimisations to eliminate unnecessary code such as calls to do_nothing() in a process called "dead code elimination" (which is quite the common practice), then why can't the compiler also eliminate volatile writes to a null device?
Because volatile accesses are defined as observable behaviour and empty functions aren't.
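A hedged sketch of my own illustrating those answers: the dead store and the empty call may be removed under 5.1.2.3/4, while the volatile write is observable behaviour and must remain, even if the implementation somehow "knew" the object were mapped to a null device.

volatile int dev_reg;        /* imagine this mapped to a do-nothing device */

void do_nothing(void) { }

void f(void)
{
    int unused = 0;          /* dead store to a non-volatile object: may be eliminated */
    (void)unused;
    do_nothing();            /* no observable behaviour: the call may be eliminated */
    dev_reg = 0;             /* volatile access: observable behaviour, must be kept */
}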
I believe this:
(including any caused by calling a function or accessing a volatile
object)
is intended to be read as
(including:
any side-effects caused by calling a function; or
accessing a volatile variable)
This reading makes sense because accessing a volatile variable is a side-effect.

Resources