The following code produces strange things on my system:
#include <stdio.h>

void f (int x) {
    int y = x + x;
    int v = !y;
    if (x == (1 << 31))
        printf ("y: %d, !y: %d\n", y, !y);
}

int main () {
    f (1 << 31);
    return 0;
}
Compiled with -O1, this prints y: 0, !y: 0.
Now, beyond the puzzling fact that removing either the int v line or the if line produces the expected result, I'm not comfortable with the undefined behavior of overflow translating into logical inconsistency.
Should this be considered a bug, or is the GCC team philosophy that one unexpected behavior can cascade into logical contradiction?
When invoking undefined behavior, anything can happen. There's a reason why it's called undefined behavior, after all.
Should this be considered a bug, or is the GCC team philosophy that one unexpected behavior can cascade into logical contradiction?
It's not a bug. I don't know much about the philosophy of the GCC team, but in general undefined behavior is "useful" to compiler developers for implementing certain optimizations: assuming something will never happen makes it easier to optimize code. The reason anything can happen after UB is exactly this: the compiler makes a lot of assumptions, and if any of them is broken, the emitted code cannot be trusted.
As I said in another answer of mine:
Undefined behavior means that anything can happen. There is no explanation as to why anything strange happens after invoking undefined behavior, nor does there need to be. The compiler could very well emit 16-bit Real Mode x86 assembly, produce a binary that deletes your entire home folder, emit the Apollo 11 Guidance Computer assembly code, or whatever else. It is not a bug. It's perfectly conforming to the standard.
The 2018 C standard defines, in clause 3.4.3, paragraph 1, “undefined behavior” to be:
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this document imposes no requirements
That is quite simple. There are no requirements from the standard. So, no, the standard does not require the behavior to be “consistent.” There is no requirement.
Furthermore, compilers, operating systems, and other things involved in building and running a program generally do not impose any requirement of “consistency” in the sense asked about in this question.
Addendum
Note that answers that say “anything can happen” are incorrect. The C standard only says that it imposes no requirements when there is behavior that it deems “undefined.” It does not nullify other requirements and has no authority to nullify them. Any specifications of compilers, operating systems, machine architectures, or consumer product laws; or laws of physics; laws of logic; or other constraints still apply. One situation where this matters is simply linking to software libraries not written in C: The C standard does not define what happens, but what does happen is still constrained by the other programming language(s) used and the specifications of the libraries, as well as the linker, operating system, and so on.
Marco Bonelli has given the reasons why such behaviour is allowed; I'd like to attempt an explanation of why it might be practical.
Optimising compilers, by definition, are expected to do various stuff in order to make programs run faster. They are allowed to delete unused code, unwrap loops, rearrange operations and so on.
Taking your code, can the compiler really be expected to perform the !y operation strictly before the call to printf()? I'd say that if you impose such rules, there will be no room left for any optimisations. So a compiler should be free to rewrite the code as
void f (int x) {
    int y = x + x;
    int notY = !(x + x);
    if (x == (1 << 31))
        printf ("y: %d, !y: %d\n", y, notY);
}
Now, it should be obvious that for any inputs which don't cause overflow the behaviour would be identical. However, in the case of overflow y and notY experience the effects of UB independently, and may both become 0 because why not.
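As an aside (not part of the original answer), here is a minimal sketch of how the doubling could be written without invoking signed-overflow UB at all; f_defined is a hypothetical name:

#include <stdio.h>

/* Doubling done in unsigned arithmetic wraps modulo 2^N with defined
   behavior; converting the out-of-range result back to int is
   implementation-defined (typically two's-complement wraparound),
   not undefined. */
void f_defined (int x) {
    int y = (int)((unsigned)x + (unsigned)x);
    printf ("y: %d, !y: %d\n", y, !y);
}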
For some reason, a myth has emerged that the reason the authors of the Standard used the phrase "Undefined Behavior" to describe actions which earlier descriptions of the language by its inventor characterized as "machine dependent" was to allow compilers to infer that various things wouldn't happen. While it is true that the Standard doesn't require that implementations process such actions meaningfully even on platforms where there would be a natural "machine-dependent" behavior, the Standard also doesn't require that any implementation be capable of processing any useful programs meaningfully; an implementation could be conforming without being able to meaningfully process anything other than a single contrived and useless program. That's not a twisting of the Standard's intention: "While a deficient implementation could probably contrive a program that meets this requirement, yet still succeed in being useless, the C89 Committee felt that such ingenuity would probably require more work than making something useful."
In discussing the decision to make short unsigned values promote to signed int, the authors of the Standard observed that most current implementations used quiet wraparound integer-overflow semantics, and having values promote to signed int would not adversely affect behavior if the value was used in overflow scenarios where the upper bits wouldn't matter.
From a practical perspective, guaranteeing clean wraparound semantics costs a little more than would allowing integer computations to behave as though performed on larger types at unspecified times. Even in the absence of "optimization", straightforward code generation for an expression like long1 = int1*int2+long2; would on many platforms benefit from being able to use the result of a 16x16->32 or 32x32->64 multiply instruction directly, rather than having to sign-extend the lower half of the result. Further, allowing a compiler to evaluate x+1 as a type larger than x at its convenience would allow it to replace x+1 > y with x >= y, which is generally a useful and safe optimization.
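To illustrate the expression discussed above, here is a hedged sketch of how portable code can request the widening explicitly, so the multiplication itself is done in 64 bits and cannot overflow int (the function and variable names are just for illustration):

#include <stdint.h>

int64_t widened(int32_t int1, int32_t int2, int64_t long2) {
    /* Casting one operand forces a 64-bit multiply; the unqualified
       int1 * int2 would be a 32-bit multiply that can overflow (UB)
       before the result is widened. */
    return (int64_t)int1 * int2 + long2;
}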
Compilers like gcc go further, however. Even though the authors of the Standard observed that in the evaluation of something like:
unsigned mul(unsigned short x, unsigned short y) { return x*y; }
the Standard's decision to promote x and y to signed int wouldn't adversely affect behavior compared with using unsigned ("Both schemes give the same answer in the vast majority of cases, and both give the same effective result in even more cases in implementations with two’s-complement arithmetic and quiet wraparound on signed overflow—that is, in most current implementations."), gcc will sometimes use the above function to infer within calling code that x cannot possibly exceed INT_MAX/y. I've seen no evidence that the authors of the Standard anticipated such behavior, much less intended to encourage it. While the authors of gcc claim any code that would invoke overflow in such cases is "broken", I don't think the authors of the Standard would agree, since in discussing conformance, they note: "The goal is to give the programmer a fighting chance to make powerful C programs that are also highly portable, without seeming to demean perfectly useful C programs that happen not to be portable, thus the adverb strictly."
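A common defensive rewrite (a sketch, not something proposed by the Standard's authors) performs the multiplication in unsigned arithmetic, so the promotion to signed int can never overflow and there is nothing for the compiler to infer from:

unsigned mul_safe(unsigned short x, unsigned short y) {
    /* Converting one operand to unsigned makes the whole
       multiplication unsigned, which wraps instead of overflowing. */
    return (unsigned)x * y;
}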
Because the authors of the Standard failed to forbid the authors of gcc from processing code nonsensically in case of integer overflow, even on quiet-wraparound platforms, the gcc authors insist that their compiler may jump the rails in such cases. No compiler writer trying to win paying customers would take such an attitude, but the authors of the Standard failed to realize that compiler writers might value cleverness over customer satisfaction.
The title field isn't long enough to capture a detailed question, so for the record, my actual question defines "unreasonable" in a specific way:
Is it legal[1] for a C implementation to have an arithmetic right shift operator that returns different results, over time, for identical argument values? That is, must >> be a true function?
Imagine you want to write portable code that uses right-shift >> on signed values in C. Unfortunately for you, the most efficient implementation of some critical part of your algorithm is fastest when signed right shifts fill the sign bit (i.e., when they are arithmetic right shifts). Now, since this behavior is implementation-defined, you are kind of screwed if you want to write portable code that takes advantage of it.
Just reading the compiler's documentation (which the standard requires it to provide, but which may or may not actually be easy to access, or even exist) is great if you know all the compilers you run on now and will ever run on, but since that is often impossible, you might look for a way to do this portably.
One way I thought of is simply to test the compiler's behavior at runtime: if it appears to implement arithmetic[2] right shift, use the optimized algorithm, but if not, use a fallback that doesn't rely on it. Of course, just checking that, say, (short)0xFF00 >> 4 == 0xFFF0 isn't enough, since it doesn't exclude the possibility that char or int values work differently, or even the weird case that the sign bit is filled for some shift amounts or values but not for others[3].
Given that, a comprehensive approach would be to exhaustively check all shift values and amounts. For all 8-bit char inputs that's only 2^8 LHS values and 2^3 RHS values, for a total of 2^11 combinations, while short (typically, but let's say int16_t if you want to be pedantic) totals only 2^20 combinations, which could be validated in a fraction of a second on modern hardware[4]. Doing all 2^37 32-bit combinations would take a few seconds on decent hardware, but that is still possibly reasonable. 64-bit is out for the foreseeable future, however.
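For concreteness, here is a sketch of what the exhaustive 16-bit probe might look like, assuming int is at least 32 bits wide (shifts_are_arithmetic16 is a hypothetical helper; the expected value for negative inputs is computed as ~(~v >> s), which only right-shifts a non-negative quantity and is therefore well defined):

#include <stdint.h>
#include <stdio.h>

/* Returns 1 if >> behaves as an arithmetic shift for every int16_t
   value and every shift amount 0..15 on this implementation. */
static int shifts_are_arithmetic16(void) {
    for (int32_t v = INT16_MIN; v <= INT16_MAX; ++v) {
        for (int s = 0; s < 16; ++s) {
            int32_t expected = (v < 0) ? ~(~v >> s) : (v >> s);
            if (((int16_t)v >> s) != expected)
                return 0;
        }
    }
    return 1;
}

int main(void) {
    printf("%s\n", shifts_are_arithmetic16() ? "arithmetic" : "not arithmetic");
    return 0;
}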
Let's say you did that and found that the >> behavior exactly matches the desired arithmetic shift behavior. Are you safe to rely on it according to the standard: does the standard constrain the implementation not to change its behavior at runtime? That is, does the behavior have to be expressible as a function of its inputs, or can (short)0xFF00 >> 4 be 0xFFF0 one moment and then 0x0FF0 (or any other value) the next?
Now this question is mostly theoretical, but violating this probably isn't as crazy as it might seem, given the presence of hybrid architectures such as big.LITTLE that dynamically move processes from one CPU to another with possibly small behavioral differences, or live VM migration between, say, chips manufactured by Intel and AMD with subtle (usually undocumented) differences in instruction behavior.
1 By "legal" I mean according to the C standard, not according to the laws of your country, of course.
2 I.e., it replicates the sign bit for newly shifted in bits.
3 That would be kind of insane, but not that insane: x86 has different behavior (overflow flag bit setting) for shifts of 1 rather than other shifts, and ARM masks the shift amount in a surprising way that a simple test probably wouldn't detect.
4 At least anything better than a microcontroller. The inner validation loop is a few simple instructions, so a 1 GHz CPU could validate all ~1 million short values in ~1 ms at one instruction-per-cycle.
Let's say you did that and found that the >> behavior exactly matches the desired arithmetic shift behavior. Are you safe to rely on it according to the standard: does the standard constrain the implementation not to change its behavior at runtime? That is, does the behavior have to be expressible as a function of its inputs, or can (short)0xFF00 >> 4 be 0xFFF0 one moment and then 0x0FF0 (or any other value) the next?
The standard does not place any requirements on the form or nature of the (required) definition of the behavior of right shifting a negative integer. In particular, it does not forbid the definition to be conditional on compile-time or run-time properties beyond the operands. Indeed, this is the case for implementation-defined behavior in general. The standard defines the term simply as
unspecified behavior where each implementation documents how the choice is made.
So, for example, an implementation might provide a macro, a global variable, or a function by which to select between arithmetic and logical shifts. Implementations might also define right-shifting a negative number to do less plausible, or even wildly implausible things.
Testing the behavior is a reasonable approach, but it gets you only a probabilistic answer. Nevertheless, in practice I think it's pretty safe to perform just a small number of tests -- maybe as few as one -- for each LHS data type of interest. It is extremely unlikely that you'll run into an implementation that implements anything other than standard arithmetic (always) or logical (always) right shift, or that you'll encounter one where the choice between arithmetic and logical shifting varies for different operands of the same (promoted) type.
The authors of the Standard made no effort to imagine and forbid every unreasonable way implementations might behave. In the days prior to the Standard, implementations almost always defined many behaviors based upon how the underlying platform worked, and the authors of the Standard almost certainly expected that most implementations would do likewise. Since the authors of the Standard didn't want to rule out the possibility that it might sometimes be useful for implementations to behave in other ways (e.g. to support code which targeted other platforms), however, they left many things pretty much wide open.
The Standard is somewhat vague as to the level of precision with which "Implementation Defined" behaviors need to be documented. I think the intention is that a person of reasonable intelligence reading the specification should be able to predict how a piece of code with Implementation-Defined behavior would behave, but I'm not sure that defining an operation as "yielding whatever happens to be in register 7" would be non-conforming.
From a practical perspective, what should matter is whether one is targeting quality implementations for platforms that have been developed within the last quarter century, or whether one is trying to force even deliberately-obtuse compilers to behave in controlled fashion. If the former, it may be worth using a static assert to ensure that -1>>1==-1, but quality compilers for modern hardware where that test passes are going to use arithmetic right shift consistently. While targeting code to obtuse compilers might possibly have some purpose, it's not generally possible to guard against all the ways that a pathological-but-conforming compiler could sabotage one's code, and efforts spent attempting to do so could often be more effectively spent on getting a quality compiler.
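A sketch of the static assertion alluded to above (C11 or later; the message text is arbitrary):

#include <assert.h>

/* Refuses to compile on implementations where right-shifting -1 does
   not replicate the sign bit. */
static_assert((-1 >> 1) == -1, "arithmetic right shift required");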
I have been messing around trying to learn C lately. Coming from Java, it surprised me that you can perform certain operations declared as "undefined".
This just seems extremely unsafe to me. I understand it is the programmer's responsibility not to perform undefined operations, but why is it even allowed to start with? Why does the compiler not catch, for instance, array indices out of bounds, or even dangling pointers? You just end up accessing blocks of memory you never should access, with no (apparent) good reason.
As a comparison, Java makes extra sure you don't do anything stupid, throwing Exceptions around like hot cakes.
Surely there must be a reason why this is allowed? What is it?
ANSWER: To my understanding, the main reason is performance. Also, Java does have undefined behaviours, although not labeled as such.
EDIT: restricted question to C
Undefined behavior is not allowed; it's just not caught by the compiler.
The tradeoff here is between speed and safety. Many kinds of undefined behavior could be prevented at the expense of a few additional CPU cycles.
For example, you could prevent the UB that happens when you read from memory that has been allocated but not initialized by having the compiled code write zeros into it first. This, however, costs you a whole additional write into memory, which is entirely unnecessary.
Similarly, one could prevent reading/writing past the end of an array by checking its bounds inside the [] operator. However, this would cost you a few additional CPU cycles on each array access.
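To make the cost concrete, here is a hedged sketch of what a bounds-checked accessor might look like in C (checked_get is a hypothetical helper; the comparison and possible abort on every access are exactly the extra cycles being described):

#include <stdio.h>
#include <stdlib.h>

/* Every access pays for a bounds comparison that plain a[i] does not. */
static int checked_get(const int *a, size_t len, size_t i) {
    if (i >= len) {
        fprintf(stderr, "index %zu out of bounds (length %zu)\n", i, len);
        abort();
    }
    return a[i];
}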
C++ designers decided that it is better to have speed and allow potential UB than to force everyone to pay for what they do not need. This approach, however, is incompatible with Java's "write once, run anywhere" requirement, so the designers of the Java language insisted on fully defined behavior in nearly all situations.
Originally, most forms of Undefined Behavior represented things which some implementations might trap, but other implementations might not. Because there was no way for the authors of the Standard to predict all the things a platform might do in case of a trap (including, literally, the possibility that a system would sound an alarm and lock up until an operator manually cleared the fault), the consequences of traps fell outside the jurisdiction of the C Standard, and thus almost every action for which some platform might conceivably cause a trap is--from the point of view of the Standard--considered "Undefined Behavior".
That should not be taken to imply that the authors of the Standard didn't believe implementations should try to behave sensibly for such things when practical. The authors of the C89 Standard noted, for example, that the majority of current systems of that era would define behavior for:
/* Assume USmall is half the size of "int" */
unsigned mult(USmall x, USmall y) { return x*y; }
which would in all cases, including those where the mathematical product of x and y was between INT_MAX+1 and UINT_MAX, be equivalent to (unsigned)x*y;. I see no reason to believe they wouldn't have expected that trend to continue.
Unfortunately, a new philosophy has become fashionable, based on the revisionist viewpoint that compiler writers only supported useful behaviors in cases not mandated by the Standard because they were too unsophisticated to do anything else. In gcc, for example, using optimization level 2 but no other non-default options, the above "mult" routine will sometimes generate bogus code in cases where the product would be between 0x80000000u and 0xFFFFFFFFu, even when running on platforms where such computations would historically have worked. This is supposedly being done in the name of "optimization"; it would be interesting to know how many of the "optimizations" such techniques end up performing are actually useful and could not have been achieved via safer means.
Historically, Undefined Behavior was a license for a C compiler to expose the behavior of the underlying platform; in cases where the underlying platform's behavior fit the programmer's needs, this allowed the programmer's requirements to be expressed in machine code more efficiently than if everything had to be done in ways defined by the Standard. Lately, however, it has been interpreted as license for compilers to implement behaviors which not only bear no relation to anything in the underlying platform nor to any plausible programmer expectations, but aren't even bound by laws of time and causality.
Java has a run time environment to take care of you. That's why an exception is thrown when going out of bounds - it's something that can't be figured out at compile time.
There is run time bounds checking in C++ when using the at() method for a vector. It's what distinguishes at() from the [] operator.
We know what undefined behavior is, and we (more or less) know the reasons (performance, cross-platform compatibility) behind most of it. Assuming a given platform, say 32-bit Windows, can we consider a given undefined behavior to be well known and consistent across that platform? I understand there is no general answer, so I'll restrict the question to two common UBs I see pretty often in production code (in use for years).
1) Reference. Given this union:
union {
    int value;
    unsigned char bytes[sizeof(int)];
} test;
Initialized like this:
test.value = 0x12345678;
Then accessed with:
for (int i = 0; i < sizeof(test.bytes); ++i)
    printf("%d\n", test.bytes[i]);
2) Reference. Given an array of unsigned short, casting a pointer to it to (for example) float* and accessing it through that pointer (reference; no padding between array members).
Is code relying on well-known UBs (like those) working just by chance (given that the compiler may change, and the compiler version certainly will), or, even though they're UB for cross-platform code, does it rely on platform-specific details (which won't change as long as we don't change platform)? Does the same reasoning also apply to unspecified behavior (when the compiler documentation doesn't say anything about it)?
EDIT According to this post, starting from C99 type punning is merely unspecified, not undefined.
Undefined behavior primarily means a very simple thing: the behavior of the code in question is not defined, so the C standard doesn't provide any clue as to what can happen. Don't read more into it than that.
If the C standard doesn't define something, your platform may well do so as an extension. So if you are in such a case, you can use it on that platform. But then make sure that the extension is documented and that it won't change in the next version of your compiler.
Your examples are flawed for several reasons. As discussed in the comments, unions are made for type punning, and in particular access to memory as any character type is always allowed. Your second example is really bad, because contrary to what you seem to imply, this is not an acceptable cast on any platform that I know of. short and float generally have different alignment properties, and using such a cast will almost certainly crash your program. And third, you are asking about C on Windows, which is known for not following the C standard.
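For the second example, a well-defined alternative (a sketch; the function name and the size assumption are mine) is to copy the bytes with memcpy rather than dereferencing a cast pointer, which also sidesteps the alignment problem:

#include <string.h>

/* Copies the object representation of two unsigned shorts into a
   float. The access itself is well defined; whether the resulting
   bit pattern is a meaningful float is still platform-specific. */
float punned_float(const unsigned short src[2]) {
    float f;
    /* assumes sizeof(float) == 2 * sizeof(unsigned short) */
    memcpy(&f, src, sizeof f);
    return f;
}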
First of all, any compiler implementation is free to define any behavior it likes in any situation which would, from the point of view of the standard, produce Undefined Behavior.
Secondly, code which is written for a particular compiler implementation is free to make use of any behaviors which are documented by that implementation; code which does so, however, may not be usable on other implementations.
One of the longstanding shortcomings of C is that while there are many situations where constructs which would produce Undefined Behavior on some implementations are handled usefully by others, only a tiny minority of such situations provide any means by which code can specify that a compiler which won't handle them a certain way should refuse compilation. Further, there are many cases in which the Standards Committee allows full-on UB even though on most implementations the "natural" consequences would be much more constrained. Consider, for example (assume int is 32 bits):
int weird(uint16_t x, int64_t y, int64_t z)
{
    int r = 0;
    if (y > 0) return 1;
    if (z < 0x80000000L) return 2;
    if (x > 50000) r |= 31;
    if (x*x > z) r |= 8;
    if (x*x < y) r |= 16;
    return r;
}
If the above code was run on a machine that simply ignores integer overflow, passing 50001,0,0x80000000L should result in the code returning 31; passing 50000,0,0x80000000L could result in it returning 0, 8, 16, or 24 depending upon how the code handles the comparison operations. The C standard, however, would allow the code to do anything whatsoever in any of those cases; because of that, some compilers might determine that none of the if statements beyond the first two could ever be true in any situation which hadn't invoked Undefined Behavior, and may thus assume that r is always zero. Note that one of the inferences would affect the behavior of a statement which precedes the Undefined Behavior.
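For contrast, here is a sketch (not from the answer above) of a rewrite in which the products are computed in 64-bit arithmetic, so no comparison can overflow and there is no UB from which such inferences could be drawn; weird_defined is a hypothetical name:

#include <stdint.h>

int weird_defined(uint16_t x, int64_t y, int64_t z)
{
    int r = 0;
    int64_t xx = (int64_t)x * x;   /* at most 65535*65535, no overflow */
    if (y > 0) return 1;
    if (z < 0x80000000L) return 2;
    if (x > 50000) r |= 31;
    if (xx > z) r |= 8;
    if (xx < y) r |= 16;
    return r;
}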
One thing I'd really like to see would be a concept of "Implementation Constrained" behavior, which would be something of a cross between Undefined Behavior and Implementation-Defined Behavior: compilers would be required to document all possible consequences of certain constructs which under the old rules would be Undefined Behavior, but--unlike Implementation-Defined behavior--an implementation would not be required to specify one specific thing that would happen; implementations would be allowed to specify that a certain construct may have arbitrary unconstrained consequences (full UB) but would be discouraged from doing so. In the case of something like integer overflow, a reasonable compromise would be to say that the result of an expression that overflows may be a "magic" value which, if explicitly typecast, will yield an arbitrary (and "ordinary") value of the indicated type, but which may otherwise appear to have arbitrarily changing values which may or may not be representable. Compilers would be allowed to assume that the result of an operation will not be a result of overflow, but would refrain from making inferences about the operands. To use a vague analogy, the behavior would be similar to how floating-point would be if explicitly typecasting a NaN could yield any arbitrary non-NaN result.
IMHO, C would greatly benefit from combining the above concept of "implementation-constrained" behaviors with some standard predefined macros which would allow code to test whether an implementation makes any particular promises about its behavior in various situations. Additionally, it would be helpful if there were a standard means by which a section of code could request a particular "dialect" [combination of int size, implementation-constrained behaviors, etc.]. It would be possible to write a compiler for any platform which could, upon request, have promotion rules behave as though int were exactly 32 bits. For example, given code like:
uint64_t l1, l2; uint32_t w1, w2; uint16_t h1, h2;
...
l1 += (h1 + h2);
l2 += (w2 - w1);
A 16-bit compiler might be fastest if it performed the math on h1 and h2 using 16 bits, and a 64-bit compiler might be fastest if it added to l2 the 64-bit result of subtracting w1 from w2, but if the code was written for a 32-bit system, being able to have compilers for the other two systems generate code which would behave as it had on the 32-bit system would be more helpful than having them generate code which performed some different computation, no matter how much faster the latter code would be.
Unfortunately, there is not at present any standard means by which code can ask for such semantics [a fact which will likely limit the efficiency of 64-bit code in many cases]; the best one can do is probably to expressly document the code's environmental requirements somewhere and hope that whoever is using the code sees them.
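Explicit casts can at least pin down the arithmetic width expression by expression, though only case by case rather than as the dialect-level request described above; here is a sketch of what the 32-bit-intended version might look like (the wrapper function is purely illustrative):

#include <stdint.h>

void accumulate(uint64_t *l1, uint64_t *l2,
                uint32_t w1, uint32_t w2,
                uint16_t h1, uint16_t h2)
{
    /* Force both subexpressions to be evaluated with 32-bit
       wraparound semantics regardless of the native int width. */
    *l1 += (uint32_t)((uint32_t)h1 + h2);
    *l2 += (uint32_t)(w2 - w1);
}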
The line
a = a++;
is undefined behaviour in C. The question I am asking is: why?
I mean, I get that it might be hard to provide a consistent order in which things should be done. But, certain compilers will always do it in one order or the other (at a given optimization level). So why exactly is this left up to the compiler to decide?
To be clear, I want to know if this was a design decision and if so, what prompted it? Or maybe there is a hardware limitation of some kind?
UPDATE: This question was the subject of my blog on June 18th, 2012. Thanks for the great question!
Why? I want to know if this was a design decision and if so, what prompted it?
You are essentially asking for the minutes of the meeting of the ANSI C design committee, and I don't have those handy. If your question can only be answered definitively by someone who was in the room that day, then you're going to have to find someone who was in that room.
However, I can answer a broader question:
What are some of the factors that lead a language design committee to leave the behaviour of a legal program (*) "undefined" or "implementation defined" (**)?
The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp's compiler compiles M(A(), B()) as "call A, call B, call M", and BarCorp's compiler compiles it as "call B, call A, call M", and neither is the "obviously correct" behaviour, then there is a strong incentive for the language design committee to say "you're both right" and make it implementation-defined behaviour. This is particularly the case if FooCorp and BarCorp both have representatives on the committee.
The next major factor is: does the feature naturally present many different possibilities for implementation? For example, in C# the compiler's analysis of a "query comprehension" expression is specified as "do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally". There is very little freedom for an implementation to do otherwise.
By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example "I know how to implement foreach loop semantics more efficiently over an array" and use the array's indexing feature rather than converting the array to a sequence as the specification suggests it should.
A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation.
A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:
Func<int, int> f1 = (int x)=>x + 1;
Func<int, int> f2 = (int x)=>x + 1;
bool b = object.ReferenceEquals(f1, f2);
Suppose we require b to be true. How are you going to determine when two functions are "the same"? Doing an "intensionality" analysis -- do the function bodies have the same content? -- is hard, and doing an "extensionality" analysis -- do the functions have the same results when given the same inputs? -- is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve!
In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.
A fifth factor is: does the feature impose a high burden on the runtime environment?
For example, in C# dereferencing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small -- not zero, but small -- cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small, but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost.
A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few "special" side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature "implementation defined" or "undefined".
Now let's return to your specific example.
The C# language does make that behaviour strictly defined(†); the side effect of the increment is observed to happen before the side effect of the assignment. So there cannot be any "well, it's just impossible" argument there, because it is possible to choose a behaviour and stick to it. Nor does this preclude major opportunities for optimizations. And there are not a multiplicity of possible complex implementation strategies.
My guess, therefore, and I emphasize that this is a guess, is that the C language committee made ordering of side effects into implementation defined behaviour because there were multiple compilers in the marketplace that did it differently, none was clearly "more correct", and the committee was unwilling to tell half of them that they were wrong.
(*) Or, sometimes, its compiler! But let's ignore that factor.
(**) "Undefined" behaviour means that the code can do anything, including erasing your hard disk. The compiler is not required to generate code that has any particular behaviour, and not required to tell you that it is generating code with undefined behaviour. "Implementation defined" behaviour means that the compiler author is given considerable freedom in choice of implementation strategy, but is required to pick a strategy, use it consistently, and document that choice.
(†) When observed from a single thread, of course.
It's undefined because there is no good reason for writing code like that, and by not requiring any specific behaviour for bogus code, compilers can more aggressively optimize well-written code. For example, *p = i++ may be optimized in a way that causes a crash if p happens to point to i, possibly because two cores write to the same memory location at the same time. The fact that this also happens to be undefined in the specific case that *p is explicitly written out as i, to get i = i++, logically follows.
It's ambiguous but not syntactically wrong. What should a be? Both = and ++ have the same "timing." So instead of defining an arbitrary order, it was left undefined, since either order would conflict with one of the two operators' definitions.
With a few exceptions, the order in which expressions are evaluated is unspecified; this was a deliberate design decision, and it allows implementations to rearrange the evaluation order from what's written if that will result in more efficient machine code. Similarly, the order in which the side effects of ++ and -- are applied is unspecified beyond the requirement that it happen before the next sequence point, again to give implementations the freedom to arrange operations in an optimal manner.
Unfortunately, this means that the result of an expression like a = a++ will vary based on the compiler, compiler settings, surrounding code, etc. The behavior is specifically called out as undefined in the language standard so that compiler implementors don't have to worry about detecting such cases and issuing a diagnostic against them. Cases like a = a++ are obvious, but what about something like
void foo(int *a, int *b)
{
    *a = (*b)++;
}
If that's the only function in the file (or if its caller is in a different file), there's no way to know at compile time whether a and b point to the same object; what do you do?
Note that it's entirely possible to mandate that all expressions be evaluated in a specific order, and that all side effects be applied at a specific point in evaluation; that's what Java and C# do, and in those languages expressions like a = a++ are always well-defined.
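As an aside on the aliasing point above: since C99, the restrict qualifier is the standard way for the programmer to promise the compiler that the two pointers do not refer to the same object (a sketch; whether a given compiler exploits the promise is up to it):

/* The restrict qualifiers assert that *a and *b do not overlap, so
   the compiler may order the load, increment and store freely; an
   a == b call would now be the caller's bug (undefined behaviour). */
void foo(int *restrict a, int *restrict b)
{
    *a = (*b)++;
}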
The postfix ++ operator returns the value prior to the incrementation. So, at the first step, a gets assigned its old value (that's what ++ returns). At the next point it is undefined whether the increment or the assignment will take place first, because both operations are applied to the same object (a), and the language says nothing about the order of evaluation of these operators.
Somebody may provide another reason, but from an optimization (or rather, assembly-generation) point of view, a needs to be loaded into a CPU register, and the postfix operator's value should be placed into another register, or the same one.
So the last assignment can depend on whether the optimizer uses one register or two.
Updating the same object twice without an intervening sequence point is undefined behaviour ...
because that makes compiler writers happier
because it allows implementations to define it anyway
because it doesn't force a specific constraint when it isn't needed
Suppose a is a pointer with value 0x0001FFFF. And suppose the architecture is segmented so that the compiler needs to apply the increment to the high and low parts separately, with a carry between them. The optimiser could conceivably reorder the writes so that the final value stored is 0x0002FFFF; that is, the low part before the increment and the high part after the increment.
This value is far from either value that you might have expected. It may point to memory not owned by the application, or it may (in general) be a trapping representation. In other words, the CPU may raise a hardware fault as soon as this value is loaded into a register, crashing the application. Even if it doesn't cause an immediate crash, it is a profoundly wrong value for the application to be using.
The same kind of thing can happen with other basic types, and the C language allows even ints to have trapping representations. C tries to allow efficient implementation on a wide range of hardware. Getting efficient code on a segmented machine such as the 8086 is hard. By making this undefined behaviour, a language implementer has a bit more freedom to optimise aggressively. I don't know if it has ever made a performance difference in practice, but evidently the language committee wanted to give every benefit to the optimiser.