Is while(1); undefined behavior in C?

In C++11 it is Undefined Behavior, but is it the case in C that while(1); is Undefined Behavior?

It is well-defined behavior. In C11 a new clause, 6.8.5 ad 6, has been added:
An iteration statement whose controlling expression is not a constant expression,156) that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.157)
157)This is intended to allow compiler transformations such as removal of empty loops even when termination cannot be proven.
Since the controlling expression of your loop is a constant, the compiler may not assume the loop terminates. This is intended for reactive programs that should run forever, like an operating system.
However, for the following loop the behavior is unclear:
a = 1; while(a);
In effect a compiler may or may not remove this loop, resulting in a program that may terminate or may not terminate. That is not really undefined, as it is not allowed to erase your hard disk, but it is a construction to avoid.
There is however another snag, consider the following code:
a = 1; while(a) while(1);
Now since the compiler may assume the outer loop terminates, the inner loop should also terminate; how else could the outer loop terminate? So if you have a really smart compiler, then a while(1); loop that should not terminate has to have such non-terminating loops around it all the way up to main. If you really want the infinite loop, you'd better read or write some volatile variable in it.
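For illustration, a minimal sketch of that volatile escape hatch (the flag name is my own):

volatile int keep_running = 1;

void event_loop(void)
{
    /* the controlling expression accesses a volatile object, so clause
       6.8.5 ad 6 does not apply: the compiler may not assume this loop
       terminates, and may not remove it */
    while (keep_running) {
        /* wait for interrupts, poll hardware, ... */
    }
}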
Why this clause is not practical
It is very unlikely our compiler company is ever going to make use of this clause, mainly because it is a very syntactical property. In the intermediate representation (IR), the difference between the constant and the variable in the above examples is easily lost through constant propagation.
The intention of the clause is to allow compiler writers to apply desirable transformations like the following. Consider a not so uncommon loop:
int f(unsigned int n, int *a)
{
    unsigned int i;
    int s;

    s = 0;
    for (i = 10U; i <= n; i++)
    {
        s += a[i];
    }
    return s;
}
For architectural reasons (for example hardware loops) we would like to transform this code to:
int f(unsigned int n, int *a)
{
    unsigned int i;
    int s;

    s = 0;
    for (i = 0; i < n-9; i++)
    {
        s += a[i+10];
    }
    return s;
}
Without clause 6.8.5 ad 6 this transformation is not possible, because if n equals UINT_MAX, the original loop never terminates. Nevertheless it is pretty clear to a human that this is not the intention of the writer of this code. Clause 6.8.5 ad 6 now allows this transformation. However, the way this is achieved is not very practical for a compiler writer, as the syntactical requirement of an infinite loop is hard to maintain on the IR.
Note that it is essential that n and i are unsigned: had they been signed, overflow would give undefined behavior and the transformation could be justified on that ground alone, without the clause. Efficient code, however, often benefits from using unsigned, quite apart from the bigger positive range.
An alternative approach
Our approach would be that the code writer has to express their intention, for example by inserting an assert(n < UINT_MAX) before the loop, or some Frama-C-like guarantee. This way the compiler can "prove" termination and doesn't have to rely on clause 6.8.5 ad 6.
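As a sketch, the loop from the earlier example with such an annotation might look like this (assuming assertions stay enabled, or are replaced by a tool-checked guarantee):

#include <assert.h>
#include <limits.h>

int f(unsigned int n, int *a)
{
    unsigned int i;
    int s;

    assert(n < UINT_MAX); /* rules out the one value for which i <= n never fails */
    s = 0;
    for (i = 10U; i <= n; i++)
    {
        s += a[i];
    }
    return s;
}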
P.S.: I'm looking at a draft of April 12, 2011; paxdiablo is clearly looking at a different version, maybe his is newer. In his quote the element of constant expression is not mentioned.

After checking the draft C99 standard, I would say "no", it's not undefined. I can't find any language in the draft that imposes a requirement that iterations end.
The full text of the paragraph describing the semantics of the iterating statements is:
An iteration statement causes a statement called the loop body
to be executed repeatedly until the controlling expression compares equal to 0.
I would expect any limitation such as the one specified for C++11 to appear there, if applicable. There is also a section named "Constraints", which also doesn't mention any such constraint.
Of course, the actual standard might say something else, although I doubt it.

The simplest answer involves a quote from §5.1.2.3p6, which states the minimal requirements of a conforming implementation:
The least requirements on a conforming implementation are:
— Accesses to volatile objects are evaluated strictly according to the
rules of the abstract machine.
— At program termination, all data written into files shall be
identical to the result that execution of the program according to the
abstract semantics would have produced.
— The input and output dynamics of interactive devices shall take
place as specified in 7.21.3. The intent of these requirements is that
unbuffered or line-buffered output appear as soon as possible, to
ensure that prompting messages actually appear prior to a program
waiting for input.
This is the observable behavior of the program.
If the machine code fails to produce the observable behaviour required by the abstract machine, because of optimisations performed, then the compiler isn't a C compiler. What is the observable behaviour of a program that contains only such an infinite loop, at the point of termination? The only way such a loop could end is by a signal causing it to end prematurely. In the case of SIGTERM, the program terminates, and this causes no observable behaviour. Hence, the only valid optimisation of that program is for the compiler to pre-empt the system closing the program and generate a program that ends immediately.
/* unoptimised version */
#include <stdio.h>

int main() {
    for (;;);
    puts("The loop has ended");
}

/* optimised version */
int main() { }
One possibility is that a signal is raised and longjmp is called to cause execution to jump to a different location. It seems like the only place that could be jumped to is somewhere reached during execution prior to the loop, so providing the compiler is intelligent enough to notice that a signal is raised causing the execution to jump to somewhere else, it could potentially optimise the loop (and the signal raising) away in favour of jumping immediately.
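A sketch of that signal-and-longjmp escape (names are mine; note that longjmp out of a signal handler is only conditionally safe, which is part of what a compiler would have to reason about):

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

static jmp_buf env;

static void handler(int sig)
{
    (void)sig;
    longjmp(env, 1); /* jump back to before the loop */
}

int main(void)
{
    signal(SIGINT, handler);
    if (setjmp(env) == 0) {
        for (;;); /* spins until a signal arrives */
    }
    puts("escaped the infinite loop");
}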
When multiple threads enter the equation, a valid implementation might be able to transfer ownership of the program from the main thread to a different thread, and end the main thread. The observable behaviour of the program must still be observable, regardless of optimisations.

The following statement appears in C11 6.8.5 Iteration statements /6:
An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile
objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.
Since while(1); uses a constant expression, the implementation is not allowed to assume it will terminate.
A compiler is free to remove such a loop entirely if the expression is non-constant and all the other conditions are met, even if it cannot prove conclusively that the loop would terminate.
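For example, a loop like the following meets every condition in the quoted paragraph (non-constant controlling expression, no I/O, no volatile access, no synchronization), so an implementation may assume it terminates, and under that assumption may fold the whole function down to return 0 (the example is my own):

unsigned spin(unsigned n)
{
    while (n != 0) {
        n = n * 1103515245u + 12345u; /* may or may not ever reach 0 */
    }
    return n; /* if the loop is assumed to terminate, n must be 0 here */
}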


Order of evaluation of array indices (versus the expression) in C

Looking at this code:
static int global_var = 0;

int update_three(int val)
{
    global_var = val;
    return 3;
}

int main()
{
    int arr[5];
    arr[global_var] = update_three(2);
}
Which array entry gets updated? 0 or 2?
Is there a part in the specification of C that indicates the precedence of operation in this particular case?
Order of Left and Right Operands
To perform the assignment in arr[global_var] = update_three(2), the C implementation must evaluate the operands and, as a side effect, update the stored value of the left operand. C 2018 6.5.16 (which is about assignments) paragraph 3 tells us there is no sequencing in the left and right operands:
The evaluations of the operands are unsequenced.
This means the C implementation is free to compute the lvalue arr[global_var] first (by “computing the lvalue,” we mean figuring out what this expression refers to), then to evaluate update_three(2), and finally to assign the value of the latter to the former; or to evaluate update_three(2) first, then compute the lvalue, then assign the former to the latter; or to evaluate the lvalue and update_three(2) in some intermixed fashion and then assign the right value to the left lvalue.
In all cases, the assignment of the value to the lvalue must come last, because 6.5.16 3 also says:
… The side effect of updating the stored value of the left operand is sequenced after the value computations of the left and right operands…
Sequencing Violation
Some might ponder about undefined behavior due to both using global_var and separately updating it in violation of 6.5 2, which says:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined…
It is quite familiar to many C practitioners that the behavior of expressions such as x + x++ is not defined by the C standard because they both use the value of x and separately modify it in the same expression without sequencing. However, in this case, we have a function call, which provides some sequencing. global_var is used in arr[global_var] and is updated in the function call update_three(2).
6.5.2.2 10 tells us there is a sequence point before the function is called:
There is a sequence point after the evaluations of the function designator and the actual arguments but before the actual call…
Inside the function, global_var = val; is a full expression, and so is the 3 in return 3;, per 6.8 4:
A full expression is an expression that is not part of another expression, nor part of a declarator or abstract declarator…
Then there is a sequence point between these two expressions, again per 6.8 4:
… There is a sequence point between the evaluation of a full expression and the evaluation of the next full expression to be evaluated.
Thus, the C implementation may evaluate arr[global_var] first and then do the function call, in which case there is a sequence point between them because there is one before the function call, or it may evaluate global_var = val; in the function call and then arr[global_var], in which case there is a sequence point between them because there is one after the full expression. So the behavior is unspecified—either of those two things may be evaluated first—but it is not undefined.
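In other words, a conforming implementation may behave as if the statement were rewritten in either of these two ways (a sketch; the temporaries are my own):

/* possibility 1: compute the lvalue first, then call */
int *dest = &arr[global_var]; /* global_var is still 0 here */
*dest = update_three(2);      /* arr[0] becomes 3 */

/* possibility 2: call first, then compute the lvalue */
int tmp = update_three(2);    /* global_var becomes 2 */
arr[global_var] = tmp;        /* arr[2] becomes 3 */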
The result here is unspecified.
While the order of operations in an expression, which dictates how subexpressions are grouped, is well defined, the order of evaluation is not specified. In this case it means that either global_var could be read first or the call to update_three could happen first, but there's no way to know which.
There is not undefined behavior here because a function call introduces a sequence point, as does every statement in the function including the one that modifies global_var.
To clarify, the C standard defines undefined behavior in section 3.4.3 as:
undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
and defines unspecified behavior in section 3.4.4 as:
unspecified behavior
use of an unspecified value, or other behavior where this
International Standard provides two or more possibilities and imposes
no further requirements on which is chosen in any instance
The standard states that the evaluation order of the assignment's operands is unspecified, which in this case means that either arr[0] gets set to 3 or arr[2] gets set to 3.
I tried it and got entry 0 updated.
However, according to this question: will right hand side of an expression always evaluated first
The order of evaluation is unspecified and unsequenced.
So I think code like this should be avoided.
As it makes little sense to emit code for an assignment before you have a value to assign, most C compilers will first emit code that calls the function and saves the result somewhere (register, stack, etc.), then emit code that writes this value to its final destination, and therefore they will read the global variable after it has been changed. Let us call this the "natural order": not defined by any standard, but by pure logic.
Yet in the process of optimization, compilers will try to eliminate the intermediate step of temporarily storing the value somewhere and try to write the function result as directly as possible to the final destination and in that case, they often will have to read the index first, e.g. to a register, to be able to directly move the function result to the array. This may cause the global variable to be read before it was changed.
So this is basically unspecified behavior with the very bad property that the result is quite likely to differ depending on whether optimization is performed and how aggressive that optimization is. It's your task as a developer to resolve the issue by coding either:
int idx = global_var;
arr[idx] = update_three(2);
or coding:
int temp = update_three(2);
arr[global_var] = temp;
As a good rule of thumb: unless global variables are const (or they are not, but you know that no code will ever change them as a side effect), you should never use them directly in code, since in a multi-threaded environment even this can be undefined:
int result = global_var + (2 * global_var);
// Is not guaranteed to be equal to `3 * global_var`!
Since the compiler may read it twice and another thread can change the value between the two reads. Yet, again, optimization might well cause the code to read it only once, so you may get different results that now also depend on the timing of another thread. Thus you will have a lot less headache if you store global variables to a temporary stack variable before use. Keep in mind that if the compiler thinks this is safe, it will most likely optimize even that away and use the global variable directly, so in the end it may make no difference in performance or memory use.
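A minimal sketch of that temporary-variable pattern applied to the expression above:

int snapshot = global_var;              /* exactly one read of the global */
int result = snapshot + (2 * snapshot); /* guaranteed == 3 * snapshot */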
(Just in case someone asks why anyone would write x + 2 * x instead of 3 * x: on some CPUs addition is ultra-fast, and so is multiplication by a power of two, as the compiler turns it into a bit shift (2 * x == x << 1), while multiplication by arbitrary numbers can be very slow. Thus, instead of multiplying by 3, you get much faster code by shifting x left by 1 and adding x to the result. Even that trick is performed by modern compilers when you multiply by 3 and turn on aggressive optimization, unless the target CPU multiplies as fast as it adds, in which case the trick would slow the calculation down.)
Global edit: sorry guys, I got all fired up and wrote a lot of nonsense. Just an old geezer ranting.
I wanted to believe C had been spared, but alas since C11 it has been brought on par with C++. Apparently, knowing what the compiler will do with side effects in expressions requires now to solve a little maths riddle involving a partial ordering of code sequences based on a "is located before the synchronization point of".
I happen to have designed and implemented a few critical real-time embedded systems back in the K&R days (including the controller of an electric car that could send people crashing into the nearest wall if the engine was not kept in check, a 10 tons industrial robot that could squash people to a pulp if not properly commanded, and a system layer that, though harmless, would have a few dozen processors suck their data bus dry with less than 1% system overhead).
I might be too senile or stupid to get the difference between undefined and unspecified, but I think I still have a pretty good idea of what concurrent execution and data access mean. In my arguably informed opinion, this obsession of the C++ and now C guys with their pet languages taking over synchronization issues is a costly pipe dream. Either you know what concurrent execution is, and you don't need any of these gizmos, or you don't, and you would do the world at large a favour not trying to mess with it.
All this truckload of eye-watering memory barrier abstractions is simply due to a temporary set of limitations of the multi-CPU cache systems, all of which can be safely encapsulated in common OS synchronization objects like, for instance, the mutexes and condition variables C++ offers.
The cost of this encapsulation is but a minute drop in performance compared with what the use of fine-grained CPU-specific instructions could achieve in some cases.
The volatile keyword (or a #pragma dont-mess-with-that-variable for all I, as a system programmer, care) would have been quite enough to tell the compiler to stop reordering memory accesses.
Optimal code can easily be produced with direct asm directives to sprinkle low level driver and OS code with ad hoc CPU specific instructions. Without an intimate knowledge of how the underlying hardware (cache system or bus interface) works, you're bound to write useless, inefficient or faulty code anyway.
A minute adjustment of the volatile keyword and Bob would have been everybody's uncle - everybody except the most hardboiled low-level programmers, that is.
Instead of that, the usual gang of C++ maths freaks had a field day designing yet another incomprehensible abstraction, yielding to their typical tendency to design solutions looking for non-existent problems and mistaking the definition of a programming language for the specs of a compiler.
Only this time the change required to deface a fundamental aspect of C too, since these "barriers" had to be generated even in low level C code to work properly. That, among other things, wrought havoc in the definition of expressions, with no explanation or justification whatsoever.
As a conclusion, the fact that a compiler could produce a consistent machine code from this absurd piece of C is only a distant consequence of the way C++ guys coped with potential inconsistencies of the cache systems of the late 2000s.
It made a terrible mess of one fundamental aspect of C (expression definition), so that the vast majority of C programmers - who don't give a damn about cache systems, and rightly so - is now forced to rely on gurus to explain the difference between a = b() + c() and a = b + c.
Trying to guess what will become of this unfortunate array is a net loss of time and efforts anyway. Regardless of what the compiler will make of it, this code is pathologically wrong. The only responsible thing to do with it is send it to the bin.
Conceptually, side effects can always be moved out of expressions, with the trivial effort of explicitly letting the modification occur before or after the evaluation, in a separate statement.
This kind of shitty code might have been justified in the 80's, when you could not expect a compiler to optimize anything. But now that compilers have long become more clever than most programmers, all that remains is a piece of shitty code.
I also fail to understand the importance of this undefined / unspecified debate. Either you can rely on the compiler to generate code with a consistent behaviour or you can't. Whether you call that undefined or unspecified seems like a moot point.
In my arguably informed opinion, C is already dangerous enough in its K&R state. A useful evolution would be to add common-sense safety measures: for instance, using the advanced code-analysis machinery the specs force the compiler to implement to at least generate warnings about bonkers code, instead of silently generating code potentially unreliable to the extreme.
But instead the guys decided, for instance, to define a fixed evaluation order in C++17. Now every software imbecile is actively incited to put side effects in his/her code on purpose, basking in the certainty that the new compilers will eagerly handle the obfuscation in a deterministic way.
K&R was one of the true marvels of the computing world. For twenty bucks you got a comprehensive specification of the language (I've seen single individuals write complete compilers just using this book), an excellent reference manual (the table of contents would usually point you within a couple of pages of the answer to your question), and a textbook that would teach you to use the language in a sensible way. Complete with rationales, examples and wise words of warning about the numerous ways you could abuse the language to do very, very stupid things.
Destroying that heritage for so little gain seems like a cruel waste to me. But again I might very well fail to see the point completely.
Maybe some kind soul could point me in the direction of an example of new C code that takes a significant advantage of these side effects?

Can I force gcc to detect ALL undefined behavior?

Is there a way to force gcc to detect all undefined behavior? I want it to detect both things that can be discovered at compile time and at runtime. I know that UB is useful both for making compilers simpler to create and for allowing the compiler to optimize the code. The latter is not relevant when you're debugging, and the need for lightweight compilers is not as big as it was in 1972. Furthermore, gcc is a very mature compiler at this point, and if this were possible, it would make debugging so much easier.
I know that -Wformat will yield a warning for a mismatched format string such as printf("%d", 3.14), and -Wuninitialized catches some uses of uninitialized variables. The parameter -Warray-bounds might catch attempts to access memory outside an array, although I needed to put some work into constructing code that actually yielded a warning. I also know that some runtime errors can be detected with -fstack-protector-all.
So my question is simply this. Is there a way to guarantee that all UB gets detected, either at compilation if possible, but at the very latest when it happens in runtime?
This is impossible. Detecting undefined behavior can literally require solving the halting problem; for example, quoting C11 6.8.5:
6 An iteration statement whose controlling expression is not a constant expression, that performs no input/output operations, does not access volatile objects, and performs no synchronization or atomic operations in its body, controlling expression, or (in the case of a for statement) its expression-3, may be assumed by the implementation to terminate.
C is not designed to make error detection easy.
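That said, a partial runtime detector does exist: gcc (and clang) can instrument many specific classes of UB with UndefinedBehaviorSanitizer via -fsanitize=undefined. It is nowhere near the guarantee asked for, but it catches cases like this:

/* compile with: gcc -fsanitize=undefined demo.c */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    int x = INT_MAX;
    x = x + 1; /* signed integer overflow: UB, flagged by UBSan at run time */
    printf("%d\n", x);
    return 0;
}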
That's in principle impossible. Consider that some UB can depend on runtime data in very complex ways.
If you ask your user to input a value at runtime and then use that value as a pointer (or to compute a pointer) which you dereference and write through, how do you detect the write will cause UB or not? You can check the process image and see if the write will cause a segfault right away, but if it doesn't how do you detect that the write wasn't in a place that will cause a butterfly effect that will ultimately lead to a segfault or execution of unintended code?
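A concrete sketch of that situation (the program is my own):

#include <stdio.h>

int main(void)
{
    char buf[8];
    int i;

    if (scanf("%d", &i) != 1)
        return 1;
    buf[i] = 'x'; /* UB whenever i is outside [0, 7]; no general
                     compile-time check can rule this out */
    printf("%c\n", buf[i]);
    return 0;
}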
It doesn't have to be about pointers either. You can parse faster if you assume all input is well formed (no error checking on malformed input), but if you then parse a malformed file, anything can happen just as with the pointer example.
Imagine this example (assume we have some arbitrary-precision BigInteger class and a function random_big_int that returns the positive integer n with probability 1/2^n)
void compute_collatz(BigInteger x) {
    while (x != 1) {
        if (x % 2) {
            x = 3*x + 1;
        } else {
            x = x / 2;
        }
    }
    std::cout << "Terminated successfully!" << std::endl;
}

int main() {
    BigInteger x = random_big_int();
    compute_collatz(x);
}
If the Collatz conjecture is false, this may enter a side-effect-free infinite loop (if a random integer is picked for which the conjecture is false), which is undefined behavior.
So, in order to tell whether this can invoke UB, the compiler would need to know whether the Collatz conjecture is true, which is an open problem in mathematics.

Fetch-and-add ordering

I'm working on replacing the allocation system for "stable pointers" in the ghc runtime system, and I'm running up against the limits of my understanding of concurrent programming.
Suppose a variable contains 0. Thread A uses __atomic_fetch_and_add to increment the variable and notifies thread B in some fashion. In response, thread B uses __atomic_fetch_and_add to decrement the same variable, bringing it back to 0. So it seems the variable should go from 0 to 1 and back. Is it guaranteed that another thread C will not see the additions performed in the opposite order to go from 0 to -1 and back?
I just re-read this question after some additional clarification was added, and realized I had assumed C11 while your question seems to be using the compiler built-ins. From that perspective: if all your memorder arguments are __ATOMIC_SEQ_CST, there is no case in which you can observe a value of -1, for the same reasons I detail below (in terms of C11).
TL;DR: It depends, but you'd have to really shoot yourself in the foot to not be guaranteed this behavior. Below follows an explanation of why this could happen, how this could happen, and why you're unlikely to run into it happening.
Atomic operations on a given object are guaranteed to occur in some total order, but that total order is not itself fixed by the standard. From the C11 draft, §5.1.2.4p7:
All modifications to a particular atomic object M occur in some particular total order, called the modification order of M.
By this definition, the modification order of M could be A's operation followed by B's, but B's followed by A's is also permitted. The latter would indeed have the effect of an external observer noticing the value transition between 0 and -1 (assuming a signed atomic type).
To deal with this, the standard defines synchronization operations (from paragraph 5 of the same section):
A synchronization operation on one or more memory locations is either an acquire operation, a release operation, both an acquire and release operation, or a consume operation.
Later on, there are some tedious-to-read definitions for how these operations compose to introduce dependencies that ultimately yield a "happens-before" ordering. I'll omit those; §5.1.2.4p14-22 describe observability of side-effects on some object and how dependencies influence that; §7.17.3 describes the API for controlling those dependencies.
Without discussing those sections, it is hopefully enough to say that they do allow for the observer to see the "opposite order" described. You could wind up in this situation when you use atomic_fetch_add_explicit with a memory_order_relaxed argument, and your load is implemented as atomic_load_explicit with the same relaxed memory ordering requirements. In this situation, no "happens-before" relationship is defined, and a system is permitted to allow thread C to observe the modifications in either order.
This is unlikely to be what you would actually do. For one, it's a lot more typing. Secondly, the API naming and use really suggests that you should know what you're doing if you want to use it. This is what I mean in saying you'd really have to shoot yourself in the foot: you're discouraged from doing this sort of thing by default.
If you implemented this purely with atomic_fetch_add, atomic_fetch_sub, and atomic_load (as you would likely do), you would be fine; the standard in §7.17.1p5 states:
The functions not ending in _explicit have the same semantics as the
corresponding _explicit function with memory_order_seq_cst for the
memory_order argument.
The standard guarantees that this ordering will carry data dependencies such that the write from thread A is seen to "happen-before" the write from thread B. An observer C, with its own consistent memory ordering requirements, is then therefore guaranteed to see operations interleave in the order described as intended.
That all said: if you can use C11, just use ++, --, and =, and you'll be fine. Per §6.5.16.2p3, += and -= operations on atomic types are defined to behave as if using a store with memory_order_seq_cst. Per §6.5.3p2, the ++ and -- operators are analogous to the equivalent x += 1 and x -= 1 expressions. Simple assignment (§6.5.16.2) specifies that the LHS or RHS can be an atomic type, but does not specify the memory order. Jens Gustedt says that operations on _Atomic-qualified objects are guaranteed to have sequential consistency; I can only divine this from footnote 113, and footnotes aren't normative. But I don't think that matters: if all writes are sequentially consistent, any read must observe a valid previous state from that total order, which never contains -1.
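A sketch of the plain, seq_cst-by-default calls recommended above (the thread roles in the comments are the ones from the question):

#include <stdatomic.h>

static atomic_int counter; /* starts at 0 */

/* thread A */
void increment(void) { atomic_fetch_add(&counter, 1); } /* 0 -> 1 */

/* thread B, run only after A's notification */
void decrement(void) { atomic_fetch_sub(&counter, 1); } /* 1 -> 0 */

/* thread C: a seq_cst load observes some value from the single total
   order of modifications, which contains only 0 and 1, never -1 */
int observe(void) { return atomic_load(&counter); }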

What's the reason for letting the semantics of a=a++ be undefined?

The line
a = a++;
is undefined behaviour in C. The question I am asking is: why?
I mean, I get that it might be hard to provide a consistent order in which things should be done. But certain compilers will always do it in one order or the other (at a given optimization level). So why exactly is this left up to the compiler to decide?
To be clear, I want to know if this was a design decision and if so, what prompted it? Or maybe there is a hardware limitation of some kind?
UPDATE: This question was the subject of my blog on June 18th, 2012. Thanks for the great question!
Why? I want to know if this was a design decision and if so, what prompted it?
You are essentially asking for the minutes of the meeting of the ANSI C design committee, and I don't have those handy. If your question can only be answered definitively by someone who was in the room that day, then you're going to have to find someone who was in that room.
However, I can answer a broader question:
What are some of the factors that lead a language design committee to leave the behaviour of a legal program (*) "undefined" or "implementation defined" (**)?
The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp's compiler compiles M(A(), B()) as "call A, call B, call M", and BarCorp's compiler compiles it as "call B, call A, call M", and neither is the "obviously correct" behaviour then there is strong incentive to the language design committee to say "you're both right", and make it implementation defined behaviour. Particularly this is the case if FooCorp and BarCorp both have representatives on the committee.
The next major factor is: does the feature naturally present many different possibilities for implementation? For example, in C# the compiler's analysis of a "query comprehension" expression is specified as "do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally". There is very little freedom for an implementation to do otherwise.
By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example "I know how to implement foreach loop semantics more efficiently over an array" and use the array's indexing feature rather than converting the array to a sequence as the specification suggests it should.
A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation.
A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:
Func<int, int> f1 = (int x)=>x + 1;
Func<int, int> f2 = (int x)=>x + 1;
bool b = object.ReferenceEquals(f1, f2);
Suppose we require b to be true. How are you going to determine when two functions are "the same"? Doing an "intensionality" analysis -- do the function bodies have the same content? -- is hard, and doing an "extensionality" analysis -- do the functions have the same results when given the same inputs? -- is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve!
In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.
A fifth factor is: does the feature impose a high burden on the runtime environment?
For example, in C# dereferencing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small -- not zero, but small -- cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small, but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost.
A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few "special" side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature "implementation defined" or "undefined".
Now let's return to your specific example.
The C# language does make that behaviour strictly defined(†); the side effect of the increment is observed to happen before the side effect of the assignment. So there cannot be any "well, it's just impossible" argument there, because it is possible to choose a behaviour and stick to it. Nor does this preclude major opportunities for optimizations. And there are not a multiplicity of possible complex implementation strategies.
My guess, therefore, and I emphasize that this is a guess, is that the C language committee made ordering of side effects into implementation defined behaviour because there were multiple compilers in the marketplace that did it differently, none was clearly "more correct", and the committee was unwilling to tell half of them that they were wrong.
(*) Or, sometimes, its compiler! But let's ignore that factor.
(**) "Undefined" behaviour means that the code can do anything, including erasing your hard disk. The compiler is not required to generate code that has any particular behaviour, and not required to tell you that it is generating code with undefined behaviour. "Implementation defined" behaviour means that the compiler author is given considerable freedom in choice of implementation strategy, but is required to pick a strategy, use it consistently, and document that choice.
(†) When observed from a single thread, of course.
It's undefined because there is no good reason for writing code like that, and by not requiring any specific behaviour for bogus code, compilers can more aggressively optimize well-written code. For example, *p = i++ may be optimized in a way that causes a crash if p happens to point to i, possibly because two cores write to the same memory location at the same time. The fact that this also happens to be undefined in the specific case that *p is explicitly written out as i, to get i = i++, logically follows.
It's ambiguous but not syntactically wrong. What should a be? Both = and ++ have the same "timing". So instead of defining an arbitrary order, it was left undefined, since either order would be in conflict with one of the two operators' definitions.
With a few exceptions, the order in which expressions are evaluated is unspecified; this was a deliberate design decision, and it allows implementations to rearrange the evaluation order from what's written if that will result in more efficient machine code. Similarly, the order in which the side effects of ++ and -- are applied is unspecified beyond the requirement that it happen before the next sequence point, again to give implementations the freedom to arrange operations in an optimal manner.
Unfortunately, this means that the result of an expression like a = a++ will vary based on the compiler, compiler settings, surrounding code, etc. The behavior is specifically called out as undefined in the language standard so that compiler implementors don't have to worry about detecting such cases and issuing a diagnostic against them. Cases like a = a++ are obvious, but what about something like
void foo(int *a, int *b)
{
    *a = (*b)++;
}
If that's the only function in the file (or if its caller is in a different file), there's no way to know at compile time whether a and b point to the same object; what do you do?
Note that it's entirely possible to mandate that all expressions be evaluated in a specific order, and that all side effects be applied at a specific point in evaluation; that's what Java and C# do, and in those languages expressions like a = a++ are always well-defined.
The postfix ++ operator returns the value prior to the incrementation. So, at the first step, the old value of a is produced as the value to be assigned (that's what ++ returns). After that point it is undefined whether the increment or the assignment will take place first, because both operations are applied to the same object (a), and the language says nothing about the order in which they happen.
Somebody may provide another reason, but from an optimization (better said, code-generation) point of view, a needs to be loaded into a CPU register, and the postfix operator's value should be placed into another register or the same one. So the final result can depend on whether the optimizer uses one register or two.
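One way to picture the ambiguity is to spell out two orderings an implementation might legally have chosen for a = a++, had the behaviour been defined (a pseudo-expansion; tmp is my own):

int a = 1, tmp;

/* ordering 1: assignment happens last -> a == 1 */
tmp = a;   /* the value of a++ is the old value of a */
a = a + 1; /* side effect of ++ */
a = tmp;   /* side effect of =  */

/* ordering 2: increment happens last -> a == 2 */
tmp = a;
a = tmp;
a = a + 1;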
Updating the same object twice without an intervening sequence point is undefined behaviour ...
because that makes compiler writers happier
because it allows implementations to define it anyway
because it doesn't force a specific constraint when it isn't needed
Suppose a is a pointer with value 0x0001FFFF. And suppose the architecture is segmented so that the compiler needs to apply the increment to the high and low parts separately, with a carry between them. The optimiser could conceivably reorder the writes so that the final value stored is 0x0002FFFF; that is, the low part before the increment and the high part after the increment.
This value is wildly different from either value you might have expected. It may point to memory not owned by the application, or it may (in general) be a trapping representation. In other words, the CPU may raise a hardware fault as soon as this value is loaded into a register, crashing the application. Even if it doesn't cause an immediate crash, it is a profoundly wrong value for the application to be using.
The same kind of thing can happen with other basic types, and the C language allows even ints to have trapping representations. C tries to allow efficient implementation on a wide range of hardware. Getting efficient code on a segmented machine such as the 8086 is hard. By making this undefined behaviour, a language implementer has a bit more freedom to optimise aggressively. I don't know if it has ever made a performance difference in practice, but evidently the language committee wanted to give every benefit to the optimiser.

curious about how "loop = loop" is evaluated in Haskell

I thought expressions like this would cause Haskell to evaluate forever. But the behaviors in both GHCi and the compiled program surprised me.
For example, in GHCi these expressions blocked until I hit Control+C, but consumed no CPU. They looked like they were sleeping.
let loop = loop
let loop = 1 + loop
I tried compiling these programs with GHC:
main = print loop
  where loop = 1 + loop

main = print loop
  where loop = if True then loop else 1
What was printed was:
Main: <<loop>>
So my question is: obviously these expressions are compiled to something different from loops or recursive calls in imperative languages. What are they compiled to? Is this a special rule to handle 0-arg functions that have themselves on the right-hand side, or is it a special case of something more general that I don't know about?
[EDIT]:
One more question: if this happens to be special handling by the compiler, what is the reason for doing it, given that it's impossible to check for all infinite loops? 'Familiar' languages don't care about cases like while (true); or int f() { return f(); }, right?
Many thanks.
GHC implements Haskell as a graph reduction machine. Imagine your program as a graph with each value as a node, and lines from it to each value that value depends on. Except we're lazy, so you really start with just one node -- and to evaluate that node, GHC has to "enter" it and open it up; if it's a function with arguments, it replaces the function call with the body of the function, and attempts to reduce it enough to get it into head normal form, etc.
The above being very handwavy and I'm sure eliding some necessary detail in the interest of brevity.
In any case, when GHC enters a value, it generally replaces it with a black hole while the node is being evaluated (or, depending on your terminology, while the closure is being reduced). This has a number of purposes. First, it plugs a potential space leak. If the node references a value which is used nowhere else, the black hole allows that value to be garbage-collected even while the node is being evaluated. Second, this prevents certain types of duplicate work: in a multi-threaded environment, two threads may attempt to enter the same value, and the black hole will cause the second thread to block rather than evaluate the value already being evaluated. Finally, this happens to allow for a limited form of loop detection, since if a thread attempts to re-enter its own black hole, we can throw an exception.
Here's a bit of a more metaphorical explanation. If I have a series of instructions that moves a turtle (in Logo) around the screen, there's no way to tell what shape it will produce, or whether that shape terminates, without running them. But if, while running them, I notice that the path of the turtle has crossed itself, I can indicate to the user "aha! the turtle has crossed its path!" So I know that the turtle has reached a spot it has been before -- if the path is a circuit through the nodes of a graph being evaluated, then that tells us we're in a loop. However, the turtle can also go in, for example, an expanding spiral. And it will never terminate, but it will also never cross its prior path.
So, because of the use of black holes, for multiple reasons, we have some notion of a marked "path" that evaluation has followed. And if the path crosses itself, we can tell and throw an exception. However, there are a million ways for things to diverge that don't involve the path crossing itself. And in those cases, we can't tell, and don't throw an exception.
For super-geeky technical detail about the current implementation of black holes, see Simon Marlow's talk from the recent Haskell Implementors Workshop, "Scheduling Lazy Evaluation on Multicore" at the bottom of http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2010.
In some, limited cases, the compiler can determine such a loop exists as part of its other control flow analyses, and at that point replaces the looping term with code that throws an appropriate exception. This cannot be done in all cases, of course, but only in some of the more obvious cases, where it falls out naturally from other work the compiler is doing.
As for why Haskell finds this more often than other languages:
These cases do not occur in languages which are strict, such as C. These loops happen specifically when a lazy variable's computation depends on its own value.
Languages such as C have very specific semantics for loops; i.e., what order to do what in. As such, they are forced to actually execute the loop. Haskell, however, defines a special value _|_ ("bottom"), which is used to represent erroneous values. Values which are strict on themselves - i.e., they depend on their own value to compute - are _|_. The result of pattern-matching on _|_ can be either an infinite loop or an exception; your compiler is choosing the latter here.
The Haskell compiler is very interested in performing strictness analysis - ie, proving that a certain expression depends on certain other expressions - in order to perform certain optimizations. This loop analysis falls out naturally as an edge case in the strictness analyzer which must be handled in one way or another.
