Is exit status observable behavior?

C 2018 5.1.2.3 6 says:
The least requirements on a conforming implementation are:
Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behavior of the program.
On the face of it, this does not include the exit status of the program.
Regarding exit(status), 7.22.4.4 5 says:
Finally, control is returned to the host environment. If the value of status is zero or EXIT_SUCCESS, an implementation-defined form of the status successful termination is returned. If the value of status is EXIT_FAILURE, an implementation-defined form of the status unsuccessful termination is returned. Otherwise the status returned is implementation-defined.
The standard does not tell us this is part of the observable behavior. Of course, it makes no sense for this exit behavior to be a description purely of C’s abstract machine; returning a value to the environment has no meaning unless it is observable in the environment. So my question is not so much whether the exit status is observable as whether this is a defect in the C standard’s definition of observable behavior. Or is there text somewhere else in the standard that applies?

I think it’s possible to piece this together to see the answer fall under the first bullet point of § 5.1.2.3.6:
Accesses to volatile objects are evaluated strictly according to the
rules of the abstract machine
Looking further, § 3.1 defines “access” as:
to read or modify the value of an object
and § 3.15 defines an “object” as:
region of data storage in the execution environment, the contents of
which can represent values
The standard, curiously, contains no definition of “volatile object”. It does contain a definition of “an object that has volatile-qualified type” in § 6.7.3.6:
An object that has volatile-qualified type may be modified in ways
unknown to the implementation or have other unknown side effects.
Therefore any expression referring to such an object shall be
evaluated strictly according to the rules of the abstract machine, as
described in 5.1.2.3.
It seems not unreasonable to infer that an object that has volatile-qualified type has that qualification precisely to inform the compiler that it is, in fact, a volatile object, so I don’t think it’s stretching things too far to use this wording as the basis of a definition for “volatile object” itself, and define a volatile object as an object which may be modified in ways unknown to the implementation or have other unknown side effects.
§ 5.1.2.3.2 defines “side effect” as follows:
Accessing a volatile object, modifying an object, modifying a file, or
calling a function that does any of those operations are all side
effects, which are changes in the state of the execution environment.
So I think we can piece this together as follows:
Returning the exit status to the host environment is clearly a change in the state of the execution environment, as the execution environment after having received, for example, EXIT_SUCCESS, is necessarily in a different state than it would be had it received, for example, EXIT_FAILURE. Returning the exit status is therefore a side effect per § 5.1.2.3.2.
exit() is a function that does this, so calling exit() is therefore itself also a side effect per § 5.1.2.3.2.
The standard obviously gives us no details of the inner workings of exit() or of what mechanism exit() will use to return that value to the host environment, but it’s nonsensical to suppose that accessing an object will not be involved, since objects are regions of data storage in the execution environment whose contents can represent values, and the exit status is a value.
Since we don’t know what, if anything, the host environment will do in response to that change in state (not least because our program will have exited before it happens), this access has an unknown side effect, and the object accessed is therefore a volatile object.
Since calling exit() accesses a volatile object, it is observable behavior per § 5.1.2.3.6.
This is consistent with the normal understanding of objects that have volatile-qualified types, namely that we can’t optimize away accesses to such objects unless we can determine that no needed side effects will be omitted, because the observable behavior (in the normal everyday sense) may be affected if we do. There is no visible object of volatile-qualified type in this case, of course, because the volatile object in question is accessed internally by exit(), and exit() obviously need not even be written in C. But there seems undoubtedly to be a volatile object, and § 5.1.2.3 refers specifically (three times) to volatile objects, and not to objects of volatile-qualified type (and other than a footnote to § 6.2.4.2, this is the only place in the standard where volatile objects are referred to).
Finally, this seems to be the only reading that renders § 5.1.2.3.6 intelligible, as intuitively we’d expect the “observable behavior” of C programs using only the facilities described by the standard to be that which loosely:
Changes memory in a way that’s visible outside of the program itself;
Changes the contents of files (which are by definition visible outside of the program itself); and
Affects interactions with interactive devices
which seems to be essentially what § 5.1.2.3.6 is trying to get at.
Edit
There seems to be some little controversy in the comments apparently centered around the ideas that the exit status may be passed in registers, and that registers cannot be objects. This objection (no pun intended) is trivially refutable:
Objects can be declared with the register storage class specifier, and such objects can be designated by lvalues;
Memory-mapped registers, fairly ubiquitous in embedded programming, provide as clear a demonstration as any that registers can be objects, can have addresses, and can be designated by lvalues. Indeed, memory-mapped registers are one of the most common uses of volatile-qualified types;
mmap() shows that even file contents can sometimes have addresses and be objects.
In general, it's a mistake to believe that objects can only reside in, or that "addresses" can only refer to, locations in core memory, or banks of DRAM chips, or anything else that one might conventionally refer to as "memory" or "RAM". Any component of the execution environment that's capable of storing a value, including registers, could be an object, could potentially have an address, and could potentially be designated by an lvalue, and this is why the definition of "object" in the standard is cast in such deliberately wide terms.
Additionally, sections such as § 5.1.2.3.9 go to some lengths to draw a distinction between "values of the actual objects" and "[values] specified by the abstract semantics", indicating that actual objects are real things existing in the execution environment distinct from the abstract machine, and are things which the specification does indeed closely concern itself with, as the definition of "object" in § 3.15 makes clear. It seems untenable to maintain a position that the standard concerns itself with such actual objects up to and only up to the point at which a standard library function is invoked, at which point all such concerns evaporate and such matters suddenly become "outside C".

The exit status is supposed to be observable to the host environment that runs the implementation. Whether the host environment does anything with this is outside the scope of the standard.
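As a minimal, non-normative sketch of what that looks like in practice (how the host presents the status, e.g. $? in a POSIX shell, is outside the C standard):

#include <stdlib.h>

int main(void)
{
    /* The host environment receives an implementation-defined form of
       "unsuccessful termination"; a POSIX shell, for example, exposes it as $?. */
    return EXIT_FAILURE;   /* equivalent here to calling exit(EXIT_FAILURE) */
}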

I have read the system function documentation (7.22.4.8 The system function). It contains:
Returns
If the argument is a null pointer, the system function returns nonzero only if a
command processor is available. If the argument is not a null pointer, and the system
function does return, it returns an implementation-defined value.
It looks like the standard made provision for a system where a C program (or more generally a user-defined command) could not start another command, and/or where a command would not return anything to its caller. In that latter case, the exit value would not be observable (in the common sense).
In this interpretation, the observability of the exit value would just be implementation-defined, which would be consistent with its not being explicitly cited in the observable behaviour of a program.
I can remember an old system (Solar 16) from the '70s where standard commands were started with call and user commands with run, and where parameters could only be passed on sub-commands after a specific request from the program. No C compiler existed there, but if someone had managed to implement one, the return value would not have been observable.

Related

Assign volatile to non-volatile semantics and the C standard

volatile int vfoo = 0;
void func()
{
    int bar;
    do
    {
        bar = vfoo; // L.7
    } while (bar != 1);
    return;
}
This code busy-waits for the variable to turn to 1. If vfoo is not set to 1 on the first pass, I will stay stuck inside the loop.
This code compiles without warning.
What does the standard say about this?
vfoo is declared as volatile. Therefore, reads of this variable should not be optimized away.
However, bar is not volatile qualified. Is the compiler allowed to optimize away the write to bar? I.e., the compiler would do a read access to vfoo, and is allowed to discard this value and not assign it to bar (at L.7).
If this is a special case where the standard has something to say, can you please include the clause and interpret the standard's lawyer talk?
What the standard has to say about this includes:
5.1.2.3 Program execution
¶2 Accessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations are all side effects, which are changes in the state of the execution environment. Evaluation of an expression in general includes both value computations and initiation of side effects. Value computation for an lvalue expression includes determining the identity of the designated object.
¶4 In the abstract machine, all expressions are evaluated as specified by the semantics. An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object).
¶6 The least requirements on a conforming implementation are:
Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
...
The takeaway from ¶2 in particular should be that accessing a volatile object is no different from something like calling printf - it can't be elided because it has a side effect. Imagine your program with bar = vfoo; replaced by bar = printf("hello\n");
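For illustration, here is the question's loop with that substitution made (purely hypothetical, just to make the analogy concrete):

#include <stdio.h>

void func(void)
{
    int bar;
    do
    {
        bar = printf("hello\n");   /* a call with a side effect: cannot be elided */
    } while (bar != 1);            /* just like the volatile read of vfoo */
}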
A volatile variable has to be read on every access. In your code snippet that read cannot be optimized out. The compiler knows that bar might be affected by the side effect, so the condition will be checked correctly.
https://godbolt.org/z/nFd9BB
However, bar is not volatile qualified.
Variable bar is used to hold a value. Do you care about the value stored in it, or do you care about that variable being represented exactly according to the ABI?
Volatile would guarantee you the latter. Your program depends on the former.
Is the compiler allowed to optimize the write to this bar?
Of course. Why would you possibly care whether the value read was really written to a memory location allocated to the variable on the stack?
All you specified was that the value read was tested as an exit condition:
bar = ...
}while(bar!=1);
I.e., the compiler would do a read access to vfoo, and is allowed to
discard this value and not assign it to bar (at L.7).
Of course not!
The compiler needs to hold the value obtained by the volatile read just long enough to compare it to 1, but no longer, since you never use bar again later.
It may be that a strange CPU has an EQ1 ("equal to 1") flag in its condition register that is set whenever a value equal to 1 is loaded. Then the compiler would not even store the read value temporarily, and would just test the EQ1 condition.
Under your hypothesis that compilers can discard variable values for all non-volatile variables, non-volatile objects would have almost no possible uses.
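To make that concrete, here is a sketch (not actual compiler output) of a transformation the standard does permit: every iteration still performs exactly one volatile read of vfoo, but bar itself never needs to exist as a stored object:

volatile int vfoo = 0;

void func(void)
{
    /* Same observable behavior as the original: one volatile read of vfoo per
       iteration; the value is held only long enough to compare against 1. */
    while (vfoo != 1)
        ;
}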

Does the C compiler know when a statement operates on a file and thus has "observable behaviour"?

The C99 standard 5.1.2.3 ¶2 says
Accessing a volatile object, modifying an object, modifying a file, or calling a function
that does any of those operations are all side effects, 12) which are changes in the state of
the execution environment. Evaluation of an expression in general includes both value
computations and initiation of side effects. Value computation for an lvalue expression
includes determining the identity of the designated object.
I guess that in a lot of cases the compiler can't inline and possibly eliminate the functions doing I/O since they live in a different translation unit. And the parameters to functions doing I/O are often pointers, further hindering the optimizer.
But link-time-optimization gives the compiler "more to chew on".
And even though the paragraph I quoted says that "modifying an object" (that's standard-speak for memory) is a side effect, stores to memory are not automatically treated as side effects when the optimizer kicks in. Here's an example from John Regehr's Nine Ways to Break your Systems Software using Volatile where the store to message is reordered relative to the store to the volatile ready variable.
volatile int ready;
int message[100];
void foo (int i) {
    message[i/10] = 42;
    ready = 1;
}
How does a C compiler determine whether a statement operates on a file? In a free-standing embedded environment I declare registers as volatile, thus preventing the compiler from optimizing calls away and from swapping the order of I/O accesses.
Is that the only way to tell the compiler that we're doing I/O? Or does the C standard dictate that these N calls in the standard library do I/O and thus must receive special treatment? But then, what if someone created their own system call wrapper for, say, read?
As C has no statement dedicated to IO, only function calls can modify files. So if the compiler sees no function call in a sequence of statements, it knows that this sequence has not modified any file.
If only functions from the standard library are called, and if the environment is hosted, the compiler could know what they do and use that to guess what will happen.
But what is really important is that the compiler only needs to respect side effects. When it does not know, it is perfectly allowed to assume that a function call could involve side effects and to act accordingly. That is not a violation of the standard even if no side effects are actually involved; it just possibly misses an optimization opportunity.
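In a freestanding environment, the usual way to mark such a statement is exactly what the question describes: a volatile-qualified access to a device register. A minimal sketch, with a made-up register address:

#include <stdint.h>

/* Hypothetical memory-mapped UART transmit register; the address is invented. */
#define UART_TX (*(volatile uint8_t *)0x4000C000u)

void uart_putc(char c)
{
    UART_TX = (uint8_t)c;   /* a volatile access: the compiler must perform it
                               and must not reorder it relative to other
                               volatile accesses */
}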

What does section 5.1.2.3, paragraph 4 (in n1570.pdf) mean for null operations?

I have been advised many times that accesses to volatile objects can't be optimised away; however, it seems to me as though this section, present in the C89, C99 and C11 standards, advises otherwise:
... An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced (including any caused by calling a function or accessing a volatile object).
If I understand correctly, this sentence is stating that an actual implementation can optimise away part of an expression, providing these two requirements are met:
"its value is not used", and
"that no needed side effects are produced (including any caused by calling a function or accessing a volatile object)"...
It seems to me that many people are confusing the meaning of "including" with the meaning of "excluding".
Is it possible for a compiler to distinguish between a side effect that's "needed", and a side effect that isn't? If timing is considered a needed side effect, then why are compilers allowed to optimise away null operations like do_nothing(); or int unused_variable = 0;?
If a compiler is able to deduce that a function does nothing (eg. void do_nothing() { }), then is it possible that the compiler might have justification to optimise calls to that function away?
If a compiler is able to deduce that a volatile object isn't mapped to anything crucial (i.e. perhaps it's mapped to /dev/null to form a null operation), then is it possible that the compiler might also have justification to optimise that non-crucial side-effect away?
If a compiler can perform optimisations to eliminate unnecessary code such as calls to do_nothing() in a process called "dead code elimination" (which is quite the common practice), then why can't the compiler also eliminate volatile writes to a null device?
As I understand it, either the compiler can optimise away both calls to functions and volatile accesses, or it can optimise away neither, because of 5.1.2.3p4.
I think the "including any" applies to the "needed side-effects" , whereas you seem to be reading it as applying to "part of an expression".
So the intent was to say:
... An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no needed side effects are produced .
Examples of needed side-effects include:
Needed side-effects caused by a function which this expression calls
Accesses to volatile variables
Now, the term needed side-effect is not defined by the Standard. Section /4 is not attempting to define it either -- it's trying (and not succeeding very well) to provide examples.
I think the only sensible interpretation is to treat it as meaning observable behaviour which is defined by 5.1.2.3/6. So it would have been a lot simpler to write:
An actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no observable behaviour would be caused.
Your questions in the edit are answered by 5.1.2.3/6, sometimes known as the as-if rule, which I'll quote here:
The least requirements on a conforming implementation are:
Accesses to volatile objects are evaluated strictly according to the rules of the abstract machine.
At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in 7.21.3. The intent of these requirements is that unbuffered or line-buffered output appear as soon as possible, to ensure that prompting messages actually appear prior to a program waiting for input.
This is the observable behaviour of the program.
Answering the specific questions in the edit:
Is it possible for a compiler to distinguish between a side effect that's "needed", and a side effect that isn't? If timing is considered a needed side effect, then why are compilers allowed to optimise away null operations like do_nothing(); or int unused_variable = 0;?
Timing isn't a side-effect. A "needed" side-effect presumably here means one that causes observable behaviour.
If a compiler is able to deduce that a function does nothing (eg. void do_nothing() { }), then is it possible that the compiler might have justification to optimise calls to that function away?
Yes, these can be optimized out because they do not cause observable behaviour.
If a compiler is able to deduce that a volatile object isn't mapped to anything crucial (i.e. perhaps it's mapped to /dev/null to form a null operation), then is it possible that the compiler might also have justification to optimise that non-crucial side-effect away?
No, because accesses to volatile objects are defined as observable behaviour.
If a compiler can perform optimisations to eliminate unnecessary code such as calls to do_nothing() in a process called "dead code elimination" (which is quite the common practice), then why can't the compiler also eliminate volatile writes to a null device?
Because volatile accesses are defined as observable behaviour and empty functions aren't.
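A small sketch contrasting the two cases (dummy is assumed to designate something genuinely volatile, such as a device register):

volatile int dummy;            /* assume this designates a real volatile object */

static void do_nothing(void) { }

void f(void)
{
    do_nothing();              /* no observable behaviour: may be optimised away */
    dummy = 0;                 /* access to a volatile object: observable
                                  behaviour, must be performed */
}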
I believe this:
(including any caused by calling a function or accessing a volatile
object)
is intended to be read as
(including:
any side-effects caused by calling a function; or
accessing a volatile variable)
This reading makes sense because accessing a volatile variable is a side-effect.

Does C99 standard define observable behavior as C++03 does?

In C++03 Standard 1.9/6 there's this definition of observable behavior
The observable behavior of the abstract machine is its sequence of reads and writes to volatile data and calls to library I/O functions.
and the Standard goes to some length to explain that the compiler must preserve observable behavior while doing optimizations.
However there's no such or similar definition in C99 draft I'm looking at. The only time observable behavior is mentioned is 6.7.3/7
The intended use of the restrict qualifier (like the register storage class) is to promote
optimization, and deleting all instances of the qualifier from a conforming program does
not change its meaning (i.e., observable behavior)
Is there a definition of what exactly the compiler must preserve when optimizing a C99 program?
In my draft, §3.4 defines behavior as "external appearance or action". "Observable behavior" seems to be a pleonasm that occurs exactly once.
§5.1.2.3, Program execution, further defines the behavior of C programs:
The semantic descriptions in this International Standard describe the behavior of an abstract machine in which issues of optimization are irrelevant.
It then goes on to define side-effects as "changes in the state of the execution environment" caused by "[a]ccessing a volatile object, modifying an object, modifying a file, or calling a function that does any of those operations". Side-effects are sequenced at sequence points.
This seems to be stricter than C++ in that "modifying an object", i.e. writing to memory, is (observable) behavior in C.
As for allowed optimization:
In the abstract machine, all expressions are evaluated as specified by the semantics. An
actual implementation need not evaluate part of an expression if it can deduce that its
value is not used and that no needed side effects are produced (including any caused by
calling a function or accessing a volatile object).
"Needed side-effects" are then listed in the following point:
At sequence points, volatile objects are stable in the sense that previous accesses are complete and subsequent accesses have not yet occurred.
At program termination, all data written into files shall be identical to the result that execution of the program according to the abstract semantics would have produced.
The input and output dynamics of interactive devices shall take place as specified in 7.19.3.
The paragraph concludes with a list of examples; §7.19.3 describes files in the context of stdio.
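A small, non-normative example of what those requirements amount to in practice:

#include <stdio.h>

int main(void)
{
    int sum = 0;
    for (int i = 0; i < 1000; i++)
        sum += i;            /* value never used, no needed side effects:
                                the whole loop may be removed */
    puts("done");            /* data written to the stream must match what the
                                abstract machine would have produced */
    return 0;
}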

Is &errno legal C?

Per 7.5,
[errno] expands to a modifiable lvalue175) that has type int, the value of which is set to a positive error number by several library functions. It is unspecified whether errno is a macro or an identifier declared with external linkage. If a macro definition is suppressed in order to access an actual object, or a program defines an identifier with the name errno, the behavior is undefined.
175) The macro errno need not be the identifier of an object. It might expand to a modifiable lvalue resulting from a function call (for example, *errno()).
It's not clear to me whether this is sufficient to require that &errno not be a constraint violation. The C language has lvalues (such as register-storage-class variables; however these can only be automatic so errno could not be defined as such) for which the & operator is a constraint violation.
If &errno is legal C, is it required to be constant?
So §6.5.3.2p1 specifies
The operand of the unary & operator shall be either a function designator, the result of a [] or unary * operator, or an lvalue that designates an object that is not a bit-field and is not declared with the register storage-class specifier.
Which I think can be taken to mean that &lvalue is fine for any lvalue that is not in those two categories. And as you mentioned, errno cannot be declared with the register storage-class specifier, and I think (although I am not chasing references to check right now) that you cannot have a bit-field whose type is plain int.
So I believe that the spec requires &(errno) to be legal C.
If &errno is legal C, is it required to be constant?
As I understand it, part of the point of allowing errno to be a macro (and the reason it is in e.g. glibc) is to allow it to be a reference to thread-local storage, in which case it will certainly not be constant across threads. And I don't see any reason to expect it must be constant. As long as the value of errno retains the semantics specified, I see no reason a perverse C library could not change &errno to refer to different memory addresses over the course of a program -- e.g. by freeing and reallocating the backing store every time you set errno.
You could imagine maintaining a ring buffer of the last N errno values set by the library, and having &errno always point to the latest. I don't think it would be particularly useful, but I can't see any way it violates the spec.
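A deliberately silly sketch of such a scheme (entirely hypothetical, not any real C library), just to show that nothing obvious breaks as long as every use of errno goes through the macro:

/* Imaginary library internals: errno expands to the newest slot of a ring
   buffer, so &errno can change from one library call to the next. */
enum { ERRNO_SLOTS = 8 };
static int __errno_ring[ERRNO_SLOTS];
static int __errno_head;        /* advanced by the library when it records an error */

int *__errno_current(void)
{
    return &__errno_ring[__errno_head];
}

#define errno (*__errno_current())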
I am surprised nobody has cited the C11 spec yet. Apologies for the long quote, but I believe it is relevant.
7.5 Errors
The header <errno.h> defines several macros...
...and
errno
which expands to a modifiable lvalue(201) that has type int and thread local
storage duration, the value of which is set to a positive error number by
several library functions. If a
macro definition is suppressed in order to access an actual object, or
a program defines an identifier with the name errno, the behavior is
undefined.
The value of errno in the initial thread is zero at
program startup (the initial value of errno in other threads is an
indeterminate value), but is never set to zero by any library
function.(202) The value of errno may be set to nonzero by a library
function call whether or not there is an error, provided the use of
errno is not documented in the description of the function in this
International Standard.
(201) The macro errno need not be the identifier of an object. It might expand to a
modifiable lvalue resulting from a function call (for example, *errno()).
(202) Thus, a program that uses errno for error checking should set it to zero before a
library function call, then inspect it before a subsequent library function call. Of
course, a library function can save the value of errno on entry and then set it to zero,
as long as the original value is restored if errno’s value is still zero just before the
return.
"Thread local" means register is out. Type int means bitfields are out (IMO). So &errno looks legal to me.
Persistent use of words like "it" and "the value" suggests the authors of the standard did not contemplate &errno being non-constant. I suppose one could imagine an implementation where &errno was not constant within a particular thread, but to be used the way the footnotes say (set to zero, then check after calling library function), it would have to be deliberately adversarial, and possibly require specialized compiler support just to be adversarial.
In short, if the spec does permit a non-constant &errno, I do not think it was deliberate.
[update]
R. asks an excellent question in the comments. After thinking about it, I believe I now know the correct answer to his question, and to the original question. Let me see if I can convince you, dear reader.
R. points out that GCC allows something like this at the top level:
register int errno asm ("r37"); // line R
This would declare errno as a global value held in register r37. Obviously, it would be a thread-local modifiable lvalue. So, could a conforming C implementation declare errno like this?
The answer is no. When you or I use the word "declaration", we usually have a colloquial and intuitive concept in mind. But the standard does not speak colloquially or intuitively; it speaks precisely, and it aims only to use terms that are well-defined. In the case of "declaration", the standard itself defines the term; and when it uses the term, it is using its own definition.
By reading the spec, you can learn precisely what a "declaration" is and precisely what it is not. Put another way, the standard describes the language "C". It does not describe "some language that is not C". As far as the standard is concerned, "C with extensions" is just "some language that is not C".
Thus, from the standard's point of view, line R is not a declaration at all. It does not even parse! It might as well read:
long long long __Foo_e!r!r!n!o()blurfl??/**
As far as the spec is concerned, this is just as much a "declaration" as line R; i.e., not at all.
So, when C11 spec says, in section 6.5.3.2:
The operand of the unary & operator shall be either a function
designator, the result of a [] or unary * operator, or an lvalue that
designates an object that is not a bit-field and is not declared with
the register storage-class specifier.
...it means something very precise that does not refer to anything like Line R.
Now, consider the declaration of the int object to which errno refers. (Note: I do not mean the declaration of the errno name, since of course there might be no such declaration if errno is, say, a macro. I mean the declaration of the underlying int object.)
The above language says you can take the address of an lvalue unless it designates a bit-field or it designates an object "declared" register. And the spec for the underlying errno object says it is a modifiable int lvalue with thread-local duration.
Now, it is true that the spec does not say that the underlying errno object must be declared at all. Maybe it just appears via some implementation-defined compiler magic. But again, when the spec says "declared with the register storage-class specifier", it is using its own terminology.
So either the underlying errno object is "declared" in the standard sense, in which case it cannot be both register and thread-local; or it is not declared at all, in which case it is not declared register. Either way, since it is an lvalue, you may take its address.
(Unless it is a bit-field, but I think we agree that a bit field is not an object of type int.)
The original implementation of errno was as a global int variable that various Standard C Library components used to indicate an error value if they ran into an error. However even in those days one had to be careful about reentrant code or with library function calls that could set errno to a different value as you were handling an error. Normally one would save the value in a temporary variable if the error code was needed for any length of time due to the possibility of some other function or piece of code setting the value of errno either explicitly or through a library function call.
So with this original implementation of a global int, using the address of operator and depending on the address to remain constant was pretty much built into the fabric of the library.
However with multi-threading, there was no longer a single global because having a single global was not thread safe. So the idea arose of having thread-local storage, perhaps using a function that returns a pointer to an allocated area. So you might see a construct something like the following entirely imaginary example:
#define errno (*myErrno())

typedef struct {
    // various memory areas for thread local stuff
    int myErrNo;
    // more memory areas for thread local stuff
} ThreadLocalData;

ThreadLocalData *getMyThreadData () {
    ThreadLocalData *pThreadData = 0; // placeholder for the real thing
    // locate the thread local data for the current thread through some means
    // then return a pointer to this thread's local data for the C run time
    return pThreadData;
}

int *myErrno () {
    return &(getMyThreadData()->myErrNo);
}
Then errno would be used as if it were a single global int variable rather than a thread-local one: setting it with errno = 0;, checking it with if (errno == 22) { /* handle the error */ }, and even something like int *pErrno = &errno;. This all works because in the end the thread-local data area is allocated, stays put, and does not move around, and the macro definition, which makes errno look like an extern int, hides the plumbing of its actual implementation.
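With the imaginary definition above, those idioms expand as shown in the comments:

void example(void)
{
    errno = 0;                /* (*myErrno()) = 0;                           */
    if (errno == 22) { }      /* if ((*myErrno()) == 22) { }                 */
    int *pErrno = &errno;     /* int *pErrno = &(*myErrno()); i.e. myErrno() */
    (void)pErrno;
}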
The one thing that we do not want is to have the address of errno suddenly shift between time slices of a thread with some kind of a dynamic allocate, clone, delete sequence while we are accessing the value. When your time slice is up, it is up and unless you have some kind of synchronization involved or some way to keep the CPU after your time slice expires, having the thread local area move about seems a very dicey proposition to me.
This in turn implies that you can depend on the address of operator giving you a constant value for a particular thread though the constant value will differ between threads. I can well see the library using the address of errno in order to reduce the overhead of doing some kind of thread local lookup every time a library function is called.
Having the address of errno as constant within a thread also provides backwards compatibility with older source code which used the errno.h include file as they should have done (see this man page from linux for errno which explicitly warns to not use extern int errno; as was common in the old days).
The way I read the standard, it allows for this kind of thread-local storage while keeping the semantics and syntax similar to the old extern int errno; when errno is used, and it also allows the old usage, for example for a cross compiler targeting an embedded device that does not support multi-threading. However, the syntax may only look similar because of the macro definition, so the old-style shortcut declaration should not be used, because that declaration is not what errno actually is.
We can find a counterexample: because a bit-field could have type int, errno can be a bit-field. In that case, &errno would be invalid. The standard does not explicitly say that you can write &errno, so the definition of undefined behavior applies here.
C11 (n1570), § 4. Conformance
Undefined behavior is otherwise indicated in this International
Standard by the words ‘‘undefined behavior’’ or by the omission of any
explicit definition of behavior.
This seems like a valid implementation where &errno would be a constraint violation:
struct __errno_struct {
    signed int __val:12;
} *__errno_location(void);

#define errno (__errno_location()->__val)
So I think the answer is probably no...
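For completeness, the violation with that implementation would look like this: the macro expansion applies & to a bit-field member, which the constraint in 6.5.3.2p1 forbids:

void g(void)
{
    int *p = &errno;   /* expands to &(__errno_location()->__val):
                          address of a bit-field, a constraint violation */
}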

Resources