Looking at this code:
static int global_var = 0;

int update_three(int val)
{
    global_var = val;
    return 3;
}

int main()
{
    int arr[5];
    arr[global_var] = update_three(2);
}
Which array entry gets updated? 0 or 2?
Is there a part of the C specification that indicates the precedence of operations in this particular case?
Order of Left and Right Operands
To perform the assignment in arr[global_var] = update_three(2), the C implementation must evaluate the operands and, as a side effect, update the stored value of the left operand. C 2018 6.5.16 (which is about assignments) paragraph 3 tells us there is no sequencing in the left and right operands:
The evaluations of the operands are unsequenced.
This means the C implementation is free to compute the lvalue arr[global_var] first (by “computing the lvalue,” we mean figuring out what this expression refers to), then to evaluate update_three(2), and finally to assign the value of the latter to the former; or to evaluate update_three(2) first, then compute the lvalue, then assign the former to the latter; or to evaluate the lvalue and update_three(2) in some intermixed fashion and then assign the right value to the left lvalue.
In all cases, the assignment of the value to the lvalue must come last, because 6.5.16 3 also says:
… The side effect of updating the stored value of the left operand is sequenced after the value computations of the left and right operands…
Sequencing Violation
Some might wonder about undefined behavior due to both using global_var and separately updating it, in violation of 6.5 2, which says:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined…
It is quite familiar to many C practitioners that the behavior of expressions such as x + x++ is not defined by the C standard because they both use the value of x and separately modify it in the same expression without sequencing. However, in this case, we have a function call, which provides some sequencing. global_var is used in arr[global_var] and is updated in the function call update_three(2).
6.5.2.2 10 tells us there is a sequence point before the function is called:
There is a sequence point after the evaluations of the function designator and the actual arguments but before the actual call…
Inside the function, global_var = val; is a full expression, and so is the 3 in return 3;, per 6.8 4:
A full expression is an expression that is not part of another expression, nor part of a declarator or abstract declarator…
Then there is a sequence point between these two expressions, again per 6.8 4:
… There is a sequence point between the evaluation of a full expression and the evaluation of the next full expression to be evaluated.
Thus, the C implementation may evaluate arr[global_var] first and then do the function call, in which case there is a sequence point between them because there is one before the function call, or it may evaluate global_var = val; in the function call and then arr[global_var], in which case there is a sequence point between them because there is one after the full expression. So the behavior is unspecified—either of those two things may be evaluated first—but it is not undefined.
The result here is unspecified.
While the order of operations in an expression, which dictates how subexpressions are grouped, is well defined, the order of evaluation is not specified. In this case it means that either global_var could be read first or the call to update_three could happen first, but there's no way to know which.
There is no undefined behavior here, because a function call introduces a sequence point, as does every statement in the function, including the one that modifies global_var.
To clarify, the C standard defines undefined behavior in section 3.4.3 as:
undefined behavior
behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements
and defines unspecified behavior in section 3.4.4 as:
unspecified behavior
use of an unspecified value, or other behavior where this International Standard provides two or more possibilities and imposes no further requirements on which is chosen in any instance
The standard states that the evaluation order of the assignment's operands is unspecified, which in this case means that either arr[0] gets set to 3 or arr[2] gets set to 3.
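For what it's worth, here is a complete program you can compile to see which of the two permitted results your implementation picks; either output is conforming, and the outcome says nothing about other compilers, optimization levels, or even other expressions in the same program:

#include <stdio.h>

static int global_var = 0;

static int update_three(int val)
{
    global_var = val;
    return 3;
}

int main(void)
{
    int arr[5] = { 0 };
    arr[global_var] = update_three(2);
    /* Unspecified: prints either "arr[0]=3 arr[2]=0" or
     * "arr[0]=0 arr[2]=3", depending on evaluation order. */
    printf("arr[0]=%d arr[2]=%d\n", arr[0], arr[2]);
    return 0;
}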
I tried it, and I got entry 0 updated.
However, according to this question: will right hand side of an expression always evaluated first
The order of evaluation is unspecified and unsequenced.
So I think code like this should be avoided.
As it makes little sense to emit code for an assignment before you have a value to assign, most C compilers will first emit code that calls the function and saves the result somewhere (register, stack, etc.), then emit code that writes this value to its final destination, and therefore they will read the global variable after it has been changed. Let us call this the "natural order", defined not by any standard but by pure logic.
Yet in the process of optimization, compilers will try to eliminate the intermediate step of temporarily storing the value somewhere and write the function result as directly as possible to the final destination, and in that case they often have to read the index first, e.g. into a register, to be able to move the function result directly into the array. This may cause the global variable to be read before it was changed.
So this is unspecified behavior with the very bad property that the result is quite likely to differ depending on whether optimization is performed and how aggressive that optimization is. It's your task as a developer to resolve that issue by either coding:
int idx = global_var;
arr[idx] = update_three(2);
or coding:
int temp = update_three(2);
arr[global_var] = temp;
As a good rule of thumb: unless global variables are const (or they are not, but you know that no code will ever change them as a side effect), you should never use them directly in expressions, as in a multi-threaded environment even this can be undefined:
int result = global_var + (2 * global_var);
// Is not guaranteed to be equal to `3 * global_var`!
The compiler may read it twice, and another thread can change the value between the two reads. Yet, again, optimization might cause the code to read it only once, so you may again get different results that now also depend on the timing of another thread. Thus you will have a lot less headache if you store global variables to a temporary stack variable before use. Keep in mind that if the compiler thinks this is safe, it will most likely optimize even that away and use the global variable directly, so in the end it may make no difference in performance or memory use.
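A minimal sketch of that advice (names and the initial value are illustrative): copying to a local guarantees both uses see the same value, though the single remaining read would still need real synchronization, such as an atomic or a mutex, to be well-defined under concurrent modification:

#include <stdio.h>

static int global_var = 7;   /* illustrative value */

int main(void)
{
    int local_copy = global_var;                  /* the global is read exactly once */
    int result = local_copy + (2 * local_copy);   /* now always 3 * local_copy */
    printf("%d\n", result);                       /* prints 21 */
    return 0;
}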
(Just in case someone asks why anyone would do x + 2 * x instead of 3 * x: on some CPUs addition is ultra-fast, and so is multiplication by a power of two, as the compiler will turn it into a bit shift (2 * x == x << 1), yet multiplication by arbitrary numbers can be very slow. Thus, instead of multiplying by 3, you get much faster code by bit-shifting x by 1 and adding x to the result. Even that trick is performed by modern compilers if you multiply by 3 and turn on aggressive optimization, unless the target is a modern CPU where multiplication is as fast as addition, since then the trick would slow down the calculation.)
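A toy check of the identity behind that trick; this only verifies the arithmetic, it does not inspect what code the compiler actually emits:

#include <assert.h>
#include <stdio.h>

int main(void)
{
    int x;
    /* Left-shifting a negative value is itself undefined in C, so the
     * identity is only checked for non-negative x. */
    for (x = 0; x <= 1000; x++)
        assert(3 * x == (x << 1) + x);
    puts("3 * x == (x << 1) + x held for all tested x");
    return 0;
}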
Global edit: sorry guys, I got all fired up and wrote a lot of nonsense. Just an old geezer ranting.
I wanted to believe C had been spared, but alas, since C11 it has been brought on par with C++. Apparently, knowing what the compiler will do with side effects in expressions now requires solving a little maths riddle involving a partial ordering of code sequences based on "is located before the synchronization point of".
I happen to have designed and implemented a few critical real-time embedded systems back in the K&R days (including the controller of an electric car that could send people crashing into the nearest wall if the engine was not kept in check, a 10-ton industrial robot that could squash people to a pulp if not properly commanded, and a system layer that, though harmless, would have a few dozen processors suck their data bus dry with less than 1% system overhead).
I might be too senile or stupid to get the difference between undefined and unspecified, but I think I still have a pretty good idea of what concurrent execution and data access mean. In my arguably informed opinion, this obsession of the C++ and now C guys with their pet languages taking over synchronization issues is a costly pipe dream. Either you know what concurrent execution is, and you don't need any of these gizmos, or you don't, and you would do the world at large a favour by not trying to mess with it.
All this truckload of eye-watering memory barrier abstractions is simply due to a temporary set of limitations of the multi-CPU cache systems, all of which can be safely encapsulated in common OS synchronization objects like, for instance, the mutexes and condition variables C++ offers.
The cost of this encapsulation is but a minute drop in performance compared with what the use of fine-grained, CPU-specific instructions could achieve in some cases.
The volatile keyword (or a #pragma dont-mess-with-that-variable for all I, as a system programmer, care) would have been quite enough to tell the compiler to stop reordering memory accesses.
Optimal code can easily be produced with direct asm directives to sprinkle low level driver and OS code with ad hoc CPU specific instructions. Without an intimate knowledge of how the underlying hardware (cache system or bus interface) works, you're bound to write useless, inefficient or faulty code anyway.
A minute adjustment of the volatile keyword and Bob would have been everybody's uncle, save for the most hardboiled low-level programmers.
Instead of that, the usual gang of C++ maths freaks had a field day designing yet another incomprehensible abstraction, yielding to their typical tendency to design solutions looking for nonexistent problems and mistaking the definition of a programming language for the specs of a compiler.
Only this time the change required to deface a fundamental aspect of C too, since these "barriers" had to be generated even in low level C code to work properly. That, among other things, wrought havoc in the definition of expressions, with no explanation or justification whatsoever.
In conclusion, the fact that a compiler can produce consistent machine code from this absurd piece of C is only a distant consequence of the way the C++ guys coped with potential inconsistencies of the cache systems of the late 2000s.
It made a terrible mess of one fundamental aspect of C (expression definition), so that the vast majority of C programmers - who don't give a damn about cache systems, and rightly so - are now forced to rely on gurus to explain the difference between a = b() + c() and a = b + c.
Trying to guess what will become of this unfortunate array is a net loss of time and efforts anyway. Regardless of what the compiler will make of it, this code is pathologically wrong. The only responsible thing to do with it is send it to the bin.
Conceptually, side effects can always be moved out of expressions, with the trivial effort of explicitly letting the modification occur before or after the evaluation, in a separate statement.
This kind of shitty code might have been justified in the 80's, when you could not expect a compiler to optimize anything. But now that compilers have long become more clever than most programmers, all that remains is a piece of shitty code.
I also fail to understand the importance of this undefined / unspecified debate. Either you can rely on the compiler to generate code with a consistent behaviour or you can't. Whether you call that undefined or unspecified seems like a moot point.
In my arguably informed opinion, C is already dangerous enough in its K&R state. A useful evolution would be to add common-sense safety measures, for instance making use of this advanced code-analysis tool the specs force the compiler to implement to at least generate warnings about bonkers code, instead of silently generating code potentially unreliable to the extreme.
But instead the guys decided, for instance, to define a fixed evaluation order in C++17. Now every software imbecile is actively incited to put side effects in his/her code on purpose, basking in the certainty that the new compilers will eagerly handle the obfuscation in a deterministic way.
K&R was one of the true marvels of the computing world. For twenty bucks you got a comprehensive specification of the language (I've seen single individuals write complete compilers just using this book), an excellent reference manual (the table of contents would usually point you within a couple of pages of the answer to your question), and a textbook that would teach you to use the language in a sensible way. Complete with rationales, examples and wise words of warning about the numerous ways you could abuse the language to do very, very stupid things.
Destroying that heritage for so little gain seems like a cruel waste to me. But again I might very well fail to see the point completely.
Maybe some kind soul could point me in the direction of an example of new C code that takes a significant advantage of these side effects?
I've designed a parser in C that is able to generate an AST, but when I began to implement simplifications, things really got messed up. I've successfully implemented rules for the summations below:
x + 0 -> x
x + x -> 2 * x
etc.
But it took a huge amount of effort and code to do it. What I did was search the entire tree and try to find a pattern I could use (lots of recursion); then, if there was a cascade of PLUS nodes, I added them to a list, worked on that list (summing numbers, combining variables, etc.), created another tree from that list, and merged it into the existing one. It was this paper I used to implement it. In short, given the expression 2*x+1+1+x+0 I got 3*x+2. And it was just summation that got me into so much trouble; I can't even imagine the advanced stuff. So I realized I was doing something wrong.
I've read this thread, but I'm really confused about term rewriting systems (what one really is, and how to implement one in C).
Is there a more general and effective way to do simplification on an AST? Or, how does one write a term rewriting system in C?
Term rewriting is (in simple words) like the two examples you provided (e.g., how to convert x + 0 to x in an AST). It is about pattern matching on ASTs and, once there is a match, conversion to an equivalent expression. Such a pattern-plus-conversion pair is called a term rewriting rule.
Note that a single term rewriting rule is not the absolute or general solution to algebraic simplification. The general solution involves having many rewriting rules (you showed two of them) and applying them to a given AST repeatedly until none succeeds.
The general solution also involves coordinating the application of the rewriting rules, for example in order to avoid re-applying a rule that has previously failed. A minimal sketch of the rules-until-fixpoint idea follows.
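The sketch below is a minimal, self-contained C illustration of your two rules (plus the mirrored 0 + x case). The node layout, helper names, and the decision to leak replaced nodes are assumptions made for the example, not how any particular system does it:

#include <stdio.h>
#include <stdlib.h>

/* A toy AST with numbers, variables, and binary '+' and '*' nodes. */
typedef enum { NUM, VAR, ADD, MUL } Kind;

typedef struct Node {
    Kind kind;
    double num;          /* NUM */
    char name;           /* VAR */
    struct Node *l, *r;  /* ADD, MUL */
} Node;

static Node *mk(Kind k) { Node *n = calloc(1, sizeof *n); n->kind = k; return n; }
static Node *num(double v) { Node *n = mk(NUM); n->num = v; return n; }
static Node *var(char c)   { Node *n = mk(VAR); n->name = c; return n; }
static Node *bin(Kind k, Node *l, Node *r) { Node *n = mk(k); n->l = l; n->r = r; return n; }

static int same(const Node *a, const Node *b) {
    if (a->kind != b->kind) return 0;
    switch (a->kind) {
    case NUM: return a->num == b->num;
    case VAR: return a->name == b->name;
    default:  return same(a->l, b->l) && same(a->r, b->r);
    }
}

/* One bottom-up pass: rewrite children first (post-order), then try
 * each rule on the current node.  Replaced nodes are leaked for brevity. */
static Node *rewrite(Node *n, int *changed) {
    if (n->kind == NUM || n->kind == VAR) return n;
    n->l = rewrite(n->l, changed);
    n->r = rewrite(n->r, changed);
    if (n->kind == ADD) {
        if (n->r->kind == NUM && n->r->num == 0) { *changed = 1; return n->l; }  /* x + 0 -> x */
        if (n->l->kind == NUM && n->l->num == 0) { *changed = 1; return n->r; }  /* 0 + x -> x */
        if (same(n->l, n->r)) { *changed = 1; return bin(MUL, num(2), n->l); }   /* x + x -> 2*x */
    }
    return n;
}

/* Apply the rules repeatedly until none succeeds (a fixed point). */
static Node *simplify(Node *n) {
    int changed;
    do { changed = 0; n = rewrite(n, &changed); } while (changed);
    return n;
}

static void show(const Node *n) {
    switch (n->kind) {
    case NUM: printf("%g", n->num); break;
    case VAR: putchar(n->name); break;
    case ADD: putchar('('); show(n->l); printf(" + "); show(n->r); putchar(')'); break;
    case MUL: putchar('('); show(n->l); printf(" * "); show(n->r); putchar(')'); break;
    }
}

int main(void) {
    Node *e = bin(ADD, bin(ADD, var('x'), num(0)), var('x'));  /* (x + 0) + x */
    e = simplify(e);
    show(e);          /* prints (2 * x) */
    putchar('\n');
    return 0;
}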
There is no unique way to do it; there are several systems. How proprietary systems do it is not known, because they keep it secret, but there are open-source systems too; for example, Mathomatic is written in C.
I recommend checking out the open system Fōrmulæ. In it, the process of coordinating the rewriting rules (which is called "the reduction engine") is relatively simple. It is written in Java. The advantage of this system is that rewriting rules are not hardwired/hardcoded into the system or the reduction engine (they are hot-pluggable). Coding a rewriting rule involves the pattern matching and the conversion, but not when or how it will be called (it follows the Hollywood principle).
In the specific case of Fōrmulæ:
The reduction engine is based (in general terms) on the post-order tree traversal algorithm, so when a node is "visited", its sub-nodes have already been visited and (possibly) transformed, though it is possible to alter that flow (e.g., to prevent the unwanted evaluation of the variable in an assignment x <- 5). Note that it is not just a tree traversal; the AST is actually being changed in the process.
In order to efficiently manage the (possibly hundreds or thousands of) rewriting rules, every rule has a type of expression to which it is applicable, and when a single node is "visited", only the associated rules are checked for a match. For example, your two rules can only be applied to "addition" nodes of an AST.
Rewriting rules are not limited to algebraic simplification; they can be used in many other fields, such as programming (Fōrmulæ is also its own programming language; see examples of Fōrmulæ programs) or automatic/assisted theorem proving.
Trying to understand a non-compliant example of Rule 13.5.
MISRA-2012 Rule 13.5 states: "The right hand operand of a logical && or || operator shall not contain persistent side effects", with the rationale being "... the side effects may or may not occur which may be contrary to programmer expectations."
I understand and totally agree with this. However, their final example of non-compliant code is:
/* Non-compliant if fp points to a function with persistent side effects */
( fp != NULL ) && ( *fp ) ( 0 );
This construct seems perfectly safe in that the condition and the decision to call the function are directly tied, where the intent is to not dereference a NULL pointer. I understand an if statement would be clearer but would be interested if anyone has further insight.
MISRA attempts to define rules that can be interpreted without having to guess the programmer's intent. So yes, the construct you present is fine if it is intentional to avoid the function call in the event that the pointer is NULL, but a machine performing MISRA analysis of that code does not necessarily recognize that likelihood. The rule is primarily aimed at conditional statements where the two operands of && or || are not directly related. Rejecting the case you describe is collateral damage.
Of course, you can replace your case with
if (fp != NULL) {
    (*fp)(0);
}
Personally, I find the if statement clearer than the original expression statement. That's not such a clear call when an expression such as your original one appears in the condition of an if, while, or for statement, but all of those can be restructured to comply with MISRA, too; a sketch of one such restructuring follows.
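Here, announce and its return convention are invented for illustration; the point is only that the side-effecting call is hoisted out of the condition:

#include <stdio.h>

/* A hypothetical callback with a persistent side effect (it prints). */
static int announce(int code)
{
    printf("called with %d\n", code);
    return code;
}

int main(void)
{
    int (*fp)(int) = announce;
    int result = -1;   /* value to use when fp is NULL */

    /* Instead of: if ((fp != NULL) && ((*fp)(0) == 0)) { ... } */
    if (fp != NULL) {
        result = (*fp)(0);
    }
    if (result == 0) {
        printf("callback succeeded\n");
    }
    return 0;
}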
Rule 13.5 is a Required Rule, and seeks to prevent situations where a user might assume that the right-hand side is executed, but for the short-circuit evaluation.
In situations like the example cited, it is (probably) OK - in fact, this is quite a common idiom.
There are two options...
Restructure the code, as suggested by @John Bollinger
Deviate the Rule - this requires you to justify why the Rule may be safely violated, which in the case cited should be straightforward
See profile for affiliation
Given a grammar, how can one avoid the stack overflow problem when calculating FIRST and FOLLOW sets in C? The problem arose in my code when I had to recurse through a long production.
Example:
S->ABCD
A->aBc | epsilon
B->Bc
C->a | epsilon
D->B
That is just a grammar off the top of my head. The recursion is as follows:
S->A
C->A
A->B
B->D
D->aBc | epsilon
FIRST(S)=FIRST(A)=FIRST(B)=FIRST(D)={a,epsilon}.
Provide C (not C++) code that calculates and prints the FIRST and FOLLOW sets of the grammar above, keeping in mind that you might encounter a longer grammar that has multiple implicit FIRST/FOLLOW sets for a particular non-terminal.
For example:
FIRST(A)=FIRST(B)=FIRST(C)=FIRST(D)=FIRST(E)=FIRST(F)=FIRST(G)=FIRST(H)=FIRST(I)=FIRST(J)=FIRST(K)={k,l,epsilon}.
That is: to get FIRST(A) you have to calculate FIRST(B), and so on, until you get to FIRST(K), whose FIRST set contains the terminals 'k', 'l', and epsilon. The longer the chain, the more likely you are to encounter a stack overflow due to the deep recursion.
How can this be avoided in C while still getting the correct output?
Explain with C (not C++) code.
char *first(int i)
{
    int j, k = 0, x;
    char temp[500], *str;
    for (j = 0; grammar[i][j] != NULL; j++)
    {
        if (islower(grammar[i][j][0]) || grammar[i][j][0] == '#' || grammar[i][j][0] == ' ')
        {
            temp[k] = grammar[i][j][0];
            temp[k+1] = '\0';
        }
        else
        {
            if (grammar[i][j][0] == terminals[i])
            {
                temp[k] = ' ';
                temp[k+1] = '\0';
            }
            else
            {
                x = hashValue(grammar[i][j][0]);
                str = first(x);
                strncat(temp, str, strlen(str));
            }
        }
        k++;
    }
    return temp;
}
My code overflows the stack. How can I avoid that?
Your program is overflowing the stack not because the grammar is "too complex" but rather because it is left-recursive. Since your program does not check to see if it has already recursed through a non-terminal, once it tries to compute first('B'), it will enter an infinite recursion, which will eventually fill the call stack. (In the example grammar, not only is B left-recursive, it is also useless because it has no non-recursive production, which means that it can never derive a sentence consisting only of terminals.)
That's not the only problem, though. The program suffers from at least two other flaws:
It does not check if a given terminal has already been added to the FIRST set for a non-terminal before adding the terminal to the set. Consequently, there will be repeated terminals in the FIRST sets.
The program only checks the first symbol in the right-hand side. However, if a non-terminal can produce ε (in other words, the non-terminal is nullable), the following symbol needs to be used as well to compute the FIRST set.
For example,
A → B C d
B → b | ε
C → c | ε
Here, FIRST(A) is {b, c, d}. (And similarly, FOLLOW(B) is {c, d}.)
Recursion doesn't help much with the computation of FIRST and FOLLOW sets. The simplest algorithm to describe is this one, similar to the algorithm presented in the Dragon Book, which will suffice for any practical grammar:
For each non-terminal, compute whether it is nullable.
Using the above, initialize FIRST(N) for each non-terminal N to the set of leading symbols for each production for N. A symbol is a leading symbol for a production if it is either the first symbol in the right-hand side or if every symbol to its left is nullable. (These sets will contain both terminals and non-terminals; don't worry about that for now.)
Do the following until no FIRST set is changed during the loop:
For each non-terminal N, for each non-terminal M in FIRST(N), add every element in FIRST(M) to FIRST(N) (unless, of course, it is already present).
Remove all the non-terminals from all the FIRST sets.
The above assumes that you have an algorithm for computing nullability. You'll find that algorithm in the Dragon Book as well; it is somewhat similar. Also, you should eliminate useless productions; the algorithm to detect them is very similar to the nullability algorithm.
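As an illustration (not code from the Dragon Book), here is a minimal C sketch of the nullability and FIRST fixpoints for the small grammar A → B C d, B → b | ε, C → c | ε shown earlier in this answer. The encoding (upper case for non-terminals, lower case for terminals, an empty right-hand side for ε) and the bitmap set representation are assumptions made for the example; the loop folds the "remove the non-terminals" step into the propagation:

#include <stdio.h>

/* Upper case = non-terminal, lower case = terminal, "" = epsilon. */
struct rule { char lhs; const char *rhs; };

static const struct rule rules[] = {
    { 'A', "BCd" },
    { 'B', "b" }, { 'B', "" },
    { 'C', "c" }, { 'C', "" },
};
enum { NRULES = sizeof rules / sizeof rules[0] };

static int nullable[26];   /* nullable['X' - 'A'] */
static int first[26][26];  /* first['X' - 'A']['t' - 'a'] membership */

int main(void)
{
    int changed, i;

    /* Step 1: compute nullability to a fixed point. */
    do {
        changed = 0;
        for (i = 0; i < NRULES; i++) {
            int n = rules[i].lhs - 'A';
            const char *p = rules[i].rhs;
            while (*p >= 'A' && *p <= 'Z' && nullable[*p - 'A'])
                p++;                       /* skip nullable non-terminals */
            if (*p == '\0' && !nullable[n]) {
                nullable[n] = 1;
                changed = 1;
            }
        }
    } while (changed);

    /* Steps 2 and 3: seed FIRST sets from the leading symbols and
     * propagate until no set changes. */
    do {
        changed = 0;
        for (i = 0; i < NRULES; i++) {
            int n = rules[i].lhs - 'A';
            const char *p;
            for (p = rules[i].rhs; *p != '\0'; p++) {
                if (*p >= 'a' && *p <= 'z') {        /* leading terminal */
                    if (!first[n][*p - 'a']) {
                        first[n][*p - 'a'] = 1;
                        changed = 1;
                    }
                    break;
                } else {                             /* leading non-terminal */
                    int m = *p - 'A', t;
                    for (t = 0; t < 26; t++) {
                        if (first[m][t] && !first[n][t]) {
                            first[n][t] = 1;
                            changed = 1;
                        }
                    }
                    if (!nullable[m])
                        break;   /* later symbols are not leading symbols */
                }
            }
        }
    } while (changed);

    /* Prints FIRST(A) = { b c d }, FIRST(B) = { b }, FIRST(C) = { c }. */
    for (i = 0; i < 26; i++) {
        int t, any = 0;
        for (t = 0; t < 26; t++)
            any |= first[i][t];
        if (!any)
            continue;
        printf("FIRST(%c) = {", 'A' + i);
        for (t = 0; t < 26; t++)
            if (first[i][t])
                printf(" %c", 'a' + t);
        printf(" }%s\n", nullable[i] ? "   (nullable)" : "");
    }
    return 0;
}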
There is an algorithm which is usually faster, and actually not much more complicated. Once you've completed step 1 of the above algorithm, you have computed the relation leads-with(N, V), which is true if and only if some production for the non-terminal N starts with the terminal or non-terminal V, possibly skipping over nullable non-terminals. FIRST(N) is then the transitive closure of leads-with with its range restricted to terminals. That can be efficiently computed (without recursion) using the Floyd-Warshall algorithm, or using a variant of Tarjan's algorithm for computing the strongly connected components of a graph. (See, for example, Esko Nuutila's transitive closure page.) A sketch of the closure step follows.
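For concreteness, the closure step could look like this; the adjacency-matrix encoding of leads-with and the tiny relation are assumptions made for illustration:

#include <stdio.h>

enum { N = 4 };  /* number of symbols in the relation (illustrative) */

int main(void)
{
    /* leads-with as a boolean adjacency matrix: rel[i][j] nonzero
     * means "symbol i leads with symbol j".  The contents here are
     * made up purely to demonstrate the closure. */
    int rel[N][N] = {
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 0, 0, 0, 1 },
        { 0, 0, 0, 0 },
    };
    int i, j, k;

    /* Warshall's algorithm: afterwards rel holds the transitive
     * closure, so rel[0][3] becomes 1 via 0 -> 1 -> 2 -> 3. */
    for (k = 0; k < N; k++)
        for (i = 0; i < N; i++)
            if (rel[i][k])
                for (j = 0; j < N; j++)
                    if (rel[k][j])
                        rel[i][j] = 1;

    for (i = 0; i < N; i++) {
        for (j = 0; j < N; j++)
            printf("%d", rel[i][j]);
        printf("\n");
    }
    return 0;
}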
The line
a = a++;
is undefined behaviour in C. The question I am asking is: why?
I mean, I get that it might be hard to provide a consistent order in which things should be done. But, certain compilers will always do it in one order or the other (at a given optimization level). So why exactly is this left up to the compiler to decide?
To be clear, I want to know if this was a design decision and if so, what prompted it? Or maybe there is a hardware limitation of some kind?
UPDATE: This question was the subject of my blog on June 18th, 2012. Thanks for the great question!
Why? I want to know if this was a design decision and if so, what prompted it?
You are essentially asking for the minutes of the meeting of the ANSI C design committee, and I don't have those handy. If your question can only be answered definitively by someone who was in the room that day, then you're going to have to find someone who was in that room.
However, I can answer a broader question:
What are some of the factors that lead a language design committee to leave the behaviour of a legal program (*) "undefined" or "implementation defined" (**)?
The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp's compiler compiles M(A(), B()) as "call A, call B, call M", and BarCorp's compiler compiles it as "call B, call A, call M", and neither is the "obviously correct" behaviour, then there is a strong incentive for the language design committee to say "you're both right" and make it implementation-defined behaviour. This is particularly the case if FooCorp and BarCorp both have representatives on the committee.
The next major factor is: does the feature naturally present many different possibilities for implementation? For example, in C# the compiler's analysis of a "query comprehension" expression is specified as "do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally". There is very little freedom for an implementation to do otherwise.
By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example "I know how to implement foreach loop semantics more efficiently over an array" and use the array's indexing feature rather than converting the array to a sequence as the specification suggests it should.
A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation.
A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:
Func<int, int> f1 = (int x)=>x + 1;
Func<int, int> f2 = (int x)=>x + 1;
bool b = object.ReferenceEquals(f1, f2);
Suppose we require b to be true. How are you going to determine when two functions are "the same"? Doing an "intensionality" analysis -- do the function bodies have the same content? -- is hard, and doing an "extensionality" analysis -- do the functions have the same results when given the same inputs? -- is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve!
In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.
A fifth factor is: does the feature impose a high burden on the runtime environment?
For example, in C# dereferencing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small -- not zero, but small -- cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small, but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost.
A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few "special" side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature "implementation defined" or "undefined".
Now let's return to your specific example.
The C# language does make that behaviour strictly defined(†); the side effect of the increment is observed to happen before the side effect of the assignment. So there cannot be any "well, it's just impossible" argument there, because it is possible to choose a behaviour and stick to it. Nor does this preclude major opportunities for optimizations. And there are not a multiplicity of possible complex implementation strategies.
My guess, therefore, and I emphasize that this is a guess, is that the C language committee made ordering of side effects into implementation defined behaviour because there were multiple compilers in the marketplace that did it differently, none was clearly "more correct", and the committee was unwilling to tell half of them that they were wrong.
(*) Or, sometimes, its compiler! But let's ignore that factor.
(**) "Undefined" behaviour means that the code can do anything, including erasing your hard disk. The compiler is not required to generate code that has any particular behaviour, and not required to tell you that it is generating code with undefined behaviour. "Implementation defined" behaviour means that the compiler author is given considerable freedom in choice of implementation strategy, but is required to pick a strategy, use it consistently, and document that choice.
(†) When observed from a single thread, of course.
It's undefined because there is no good reason for writing code like that, and by not requiring any specific behaviour for bogus code, compilers can more aggressively optimize well-written code. For example, *p = i++ may be optimized in a way that causes a crash if p happens to point to i, possibly because two cores write to the same memory location at the same time. The fact that this also happens to be undefined in the specific case that *p is explicitly written out as i, to get i = i++, logically follows.
It's ambiguous but not syntactically wrong. What should a be? Both = and ++ have the same "timing". So instead of defining an arbitrary order, it was left undefined, since either order would be in conflict with one of the two operators' definitions.
With a few exceptions, the order in which expressions are evaluated is unspecified; this was a deliberate design decision, and it allows implementations to rearrange the evaluation order from what's written if that will result in more efficient machine code. Similarly, the order in which the side effects of ++ and -- are applied is unspecified beyond the requirement that it happen before the next sequence point, again to give implementations the freedom to arrange operations in an optimal manner.
Unfortunately, this means that the result of an expression like a = a++ will vary based on the compiler, compiler settings, surrounding code, etc. The behavior is specifically called out as undefined in the language standard so that compiler implementors don't have to worry about detecting such cases and issuing a diagnostic against them. Cases like a = a++ are obvious, but what about something like
void foo(int *a, int *b)
{
    *a = (*b)++;
}
If that's the only function in the file (or if its caller is in a different file), there's no way to know at compile time whether a and b point to the same object; what do you do?
Note that it's entirely possible to mandate that all expressions be evaluated in a specific order, and that all side effects be applied at a specific point in evaluation; that's what Java and C# do, and in those languages expressions like a = a++ are always well-defined.
The postfix ++ operator returns the value prior to the incrementation. So, at the first step, a gets assigned its old value (that's what ++ returns). At the next point it is undefined whether the increment or the assignment will take place first, because both operations are applied to the same object (a), and the language says nothing about the order of evaluation of these operators.
Somebody may provide another reason, but from an optimization (better said, assembly representation) point of view, a needs to be loaded into a CPU register, and the postfix operator's value should be placed into another register, or the same one.
So the last assignment can depend on either the optimizer using one register or two.
Updating the same object twice without an intervening sequence point is undefined behaviour ...
because that makes compiler writers happier
because it allows implementations to define it anyway
because it doesn't force a specific constraint when it isn't needed
Suppose a is a pointer with value 0x0001FFFF. And suppose the architecture is segmented so that the compiler needs to apply the increment to the high and low parts separately, with a carry between them. The optimiser could conceivably reorder the writes so that the final value stored is 0x0002FFFF; that is, the low part before the increment and the high part after the increment.
This value is neither of the values you might have expected. It may point to memory not owned by the application, or it may (in general) be a trapping representation. In other words, the CPU may raise a hardware fault as soon as this value is loaded into a register, crashing the application. Even if it doesn't cause an immediate crash, it is a profoundly wrong value for the application to be using.
The same kind of thing can happen with other basic types, and the C language allows even ints to have trapping representations. C tries to allow efficient implementation on a wide range of hardware. Getting efficient code on a segmented machine such as the 8086 is hard. By making this undefined behaviour, a language implementer has a bit more freedom to optimise aggressively. I don't know if it has ever made a performance difference in practice, but evidently the language committee wanted to give every benefit to the optimiser.