After researching, I read that the increment operator requires the operand to be a modifiable data object: https://en.wikipedia.org/wiki/Increment_and_decrement_operators.
From this I guess that it gives a compilation error because (a+b) is a temporary value and so is not modifiable.
Is this understanding correct? This was my first time trying to research a problem so if there was something I should have looked for please advise.
It's just a rule, that's all. It's possibly there because (1) it makes it easier to write C compilers, and (2) nobody has convinced the C standards committee to relax it.
Informally speaking you can only write ++foo if foo can appear on the left hand side of an assignment expression like foo = bar. Since you can't write a + b = bar, you can't write ++(a + b) either.
There's no real reason why a + b couldn't yield a temporary on which ++ could operate, with the result of that increment being the value of the expression ++(a + b).
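A minimal sketch of the rule in action (the error message in the comment is gcc-style wording; other compilers phrase it differently):
int a = 1, b = 2;
/* ++(a + b);       error: lvalue required as increment operand */
int sum = a + b;    /* name the temporary instead... */
++sum;              /* ...and the increment is fine: sum is now 4 */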
The C11 standard states in section 6.5.3.1
The operand of the prefix increment or decrement operator shall have
atomic, qualified, or unqualified real or pointer type, and shall be a
modifiable lvalue
And "modifiable lvalue" is described in section 6.3.2.1 subsection 1
An lvalue is an expression (with an object type other than void) that
potentially designates an object; if an lvalue does not designate an
object when it is evaluated, the behavior is undefined. When an
object is said to have a particular type, the type is
specified by the lvalue used to designate the object. A modifiable
lvalue is an lvalue that does not have array type, does not have
an incomplete type, does not have a const-qualified type, and
if it is a structure or union, does not have any member
(including, recursively, any member or element of all contained
aggregates or unions) with a const-qualified type.
So (a+b) is not a modifiable lvalue and is therefore not eligible for the prefix increment operator.
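To illustrate the other exclusions in that definition, here are lvalues that are nevertheless not modifiable (a sketch; the exact diagnostics depend on your compiler):
const int c = 5;
/* ++c;    error: c is an lvalue, but const-qualified */
int arr[3];
/* ++arr;  error: arr is an lvalue, but has array type */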
You are correct. The ++ operator tries to assign the new value back to the original variable: ++a takes the value of a, adds 1 to it, and then assigns it back to a. Since, as you said, (a+b) is a temporary value and not a variable with an assigned memory address, the assignment can't be performed.
I think you mostly answered your own question.
I might make a small change to your phrasing and replace "temporary variable" with "rvalue" as C.Gibbons mentioned.
The terms variable, argument, temporary variable and so on will become more clear as you learn about C's memory model (this looks like a nice overview: https://www.geeksforgeeks.org/memory-layout-of-c-program/ ).
The term "rvalue" may seem opaque when you're just starting out, so I hope the following helps with developing an intuition about it.
Lvalue/rvalue are talking about the different sides of an equals sign (assignment operator):
lvalue = left hand side (lowercase L, not a "one")
rvalue = right hand side
Learning a little about how C uses memory (and registers) will be helpful for seeing why the distinction is important. In broad brush strokes, the compiler creates a list of machine language instructions that compute the result of an expression (the rvalue) and then puts that result somewhere (the lvalue). Imagine a compiler dealing with the following code fragment:
x = y * 3
In assembly pseudocode it might look something like this toy example:
load register A with the value at memory address y
load register B with a value of 3
multiply register A and B, saving the result in A
write register A to memory address x
The ++ operator (and its -- counterpart) need a "somewhere" to modify, essentially anything that can work as an lvalue.
Understanding the C memory model will be helpful because you'll get a better idea in your head about how arguments get passed to functions and (eventually) how to work with dynamic memory allocation, like the malloc() function. For similar reasons you might study some simple assembly programming at some point to get a better idea of what the compiler is doing. Also if you're using gcc, the -S option "Stop after the stage of compilation proper; do not assemble." can be interesting (though I'd recommend trying it on a small code fragment).
Just as an aside:
The ++ operator has been around since 1969 (though it started in C's predecessor, B):
"(Ken Thompson's) observation (was) that the translation of ++x was smaller than that of x=x+1."
Following that wikipedia reference will take you to an interesting writeup by Dennis Ritchie (the "R" in "K&R C") on the history of the C language, linked here for convenience: http://www.bell-labs.com/usr/dmr/www/chist.html where you can search for "++".
The reason is that the standard requires the operand to be an lvalue. The expression (a+b) is not an lvalue, so applying the increment operator isn't allowed.
Now, one might say "OK, that's indeed the reason, but there is actually no *real* reason other than that", but unfortunately the particular wording of how the operator works does in fact require that to be the case.
The expression ++E is equivalent to (E+=1).
Obviously, you cannot write E += 1 if E isn't an lvalue. Which is a shame, because one could just as well have said "increments E by one" and been done with it. In that case, applying the operator to a non-lvalue would (in principle) be perfectly possible, at the expense of making the compiler slightly more complex.
Now, the definition could trivially be reworded (I think it isn't even originally from C but an heirloom of B), but doing so would fundamentally change the language into something no longer compatible with its former versions. Since the possible benefit is rather small but the possible implications are huge, that never happened and probably never will.
If you consider C++ in addition to C (the question is tagged C, but there was discussion about operator overloads), the story becomes even more complicated. In C, it's hard to imagine that this could be the case, but in C++ the result of (a+b) could very well be something that you cannot increment at all, or whose increment has very considerable side effects (not just adding 1). The compiler must be able to cope with that, and diagnose problematic cases as they occur. On an lvalue, that's still fairly trivial to check. Not so for whatever haphazard expression inside a parenthesis you throw at the poor thing.
This isn't a real reason why it couldn't be done, but it does serve as an explanation of why the people who implement this are not precisely ecstatic about adding a feature which promises very little benefit to very few people.
(a+b) evaluates to an rvalue, which cannot be incremented.
++ tries to assign the new value back to the original variable, and since (a+b) is a temporary value it cannot perform the operation. These are basically rules of the C language, there to keep programming simple. That's it.
When the expression ++(a+b) is processed, consider for example:
int a, b;
a = 10;
b = 20;
/* NOTE:
// step 1: the expression would need to be evaluated first, to give ++ an operand
++ ( exp );
// in your case
++ ( 10 + 20 );
// step 2: that result would then be incremented by one
++ ( 30 );
// here you would be applying ++ to a constant value, which is an invalid use of the ++ operator
*/
++(a+b);
Looking at this code:
static int global_var = 0;
int update_three(int val)
{
global_var = val;
return 3;
}
int main()
{
int arr[5];
arr[global_var] = update_three(2);
}
Which array entry gets updated? 0 or 2?
Is there a part in the specification of C that indicates the precedence of operation in this particular case?
Order of Left and Right Operands
To perform the assignment in arr[global_var] = update_three(2), the C implementation must evaluate the operands and, as a side effect, update the stored value of the left operand. C 2018 6.5.16 (which is about assignments) paragraph 3 tells us there is no sequencing in the left and right operands:
The evaluations of the operands are unsequenced.
This means the C implementation is free to compute the lvalue arr[global_var] first (by “computing the lvalue,” we mean figuring out what this expression refers to), then to evaluate update_three(2), and finally to assign the value of the latter to the former; or to evaluate update_three(2) first, then compute the lvalue, then assign the former to the latter; or to evaluate the lvalue and update_three(2) in some intermixed fashion and then assign the right value to the left lvalue.
In all cases, the assignment of the value to the lvalue must come last, because 6.5.16 3 also says:
… The side effect of updating the stored value of the left operand is sequenced after the value computations of the left and right operands…
Sequencing Violation
Some might ponder about undefined behavior due to both using global_var and separately updating it in violation of 6.5 2, which says:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined…
It is quite familiar to many C practitioners that the behavior of expressions such as x + x++ is not defined by the C standard because they both use the value of x and separately modify it in the same expression without sequencing. However, in this case, we have a function call, which provides some sequencing. global_var is used in arr[global_var] and is updated in the function call update_three(2).
6.5.2.2 10 tells us there is a sequence point before the function is called:
There is a sequence point after the evaluations of the function designator and the actual arguments but before the actual call…
Inside the function, global_var = val; is a full expression, and so is the 3 in return 3;, per 6.8 4:
A full expression is an expression that is not part of another expression, nor part of a declarator or abstract declarator…
Then there is a sequence point between these two expressions, again per 6.8 4:
… There is a sequence point between the evaluation of a full expression and the evaluation of the next full expression to be evaluated.
Thus, the C implementation may evaluate arr[global_var] first and then do the function call, in which case there is a sequence point between them because there is one before the function call, or it may evaluate global_var = val; in the function call and then arr[global_var], in which case there is a sequence point between them because there is one after the full expression. So the behavior is unspecified—either of those two things may be evaluated first—but it is not undefined.
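A sketch for observing which choice your implementation makes; both outputs are conforming, precisely because the choice is unspecified:
#include <stdio.h>

static int global_var = 0;

int update_three(int val)
{
    global_var = val;
    return 3;
}

int main(void)
{
    int arr[5] = {0};
    arr[global_var] = update_three(2);
    printf("arr[0]=%d arr[2]=%d\n", arr[0], arr[2]); /* exactly one of them is 3 */
    return 0;
}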
The result here is unspecified.
While the order of operations in an expression, which dictates how subexpressions are grouped, is well defined, the order of evaluation is not specified. In this case it means that either global_var could be read first or the call to update_three could happen first, but there's no way to know which.
There is not undefined behavior here because a function call introduces a sequence point, as does every statement in the function including the one that modifies global_var.
To clarify, the C standard defines undefined behavior in section 3.4.3 as:
undefined behavior
behavior, upon use of a nonportable or erroneous program construct or
of erroneous data, for which this International Standard imposes no
requirements
and defines unspecified behavior in section 3.4.4 as:
unspecified behavior
use of an unspecified value, or other behavior where this
International Standard provides two or more possibilities and imposes
no further requirements on which is chosen in any instance
The standard states that the evaluation order of the assignment's operands is unspecified, which in this case means that either arr[0] gets set to 3 or arr[2] gets set to 3.
I tried it and got entry 0 updated.
However according to this question: will right hand side of an expression always evaluated first
The order of evaluation is unspecified and unsequenced.
So I think code like this should be avoided.
As it makes little sense to emit code for an assignment before you have a value to assign, most C compilers will first emit code that calls the function and saves the result somewhere (register, stack, etc.), then emit code that writes this value to its final destination, and therefore they will read the global variable after it has been changed. Let us call this the "natural order", not defined by any standard but by pure logic.
Yet in the process of optimization, compilers will try to eliminate the intermediate step of temporarily storing the value somewhere and write the function result as directly as possible to its final destination; in that case they often have to read the index first, e.g. into a register, to be able to move the function result directly into the array. This may cause the global variable to be read before it has been changed.
So this is basically unspecified behavior, with the very bad property that the result is quite likely to differ depending on whether optimization is performed and how aggressive that optimization is. It's your task as a developer to resolve the issue by coding either:
int idx = global_var;
arr[idx] = update_three(2);
or coding:
int temp = update_three(2);
arr[global_var] = temp;
As a good rule of thumb: unless global variables are const (or they are not, but you know that no code will ever change them as a side effect), you should never use them directly in expressions, as in a multi-threaded environment even this can be undefined:
int result = global_var + (2 * global_var);
// Is not guaranteed to be equal to `3 * global_var`!
This is because the compiler may read it twice, and another thread can change the value between the two reads. Then again, optimization would typically cause the code to read it only once, so you may get different results that now also depend on the timing of another thread. In short, you will have a lot less headache if you store global variables in a temporary stack variable before use. Keep in mind that if the compiler thinks this is safe, it will most likely optimize even that away and use the global variable directly, so in the end it may make no difference in performance or memory use.
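A sketch of the local-copy pattern just described (assuming the global_var from earlier):
int local = global_var;            /* exactly one read of the global */
int result = local + (2 * local);  /* now guaranteed to equal 3 * local */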
(Just in case someone asks why anyone would write x + 2 * x instead of 3 * x: on some CPUs addition is ultra-fast, and so is multiplication by a power of two, since the compiler turns it into a bit shift (2 * x == x << 1), yet multiplication by an arbitrary number can be very slow. Thus, instead of multiplying by 3, you get much faster code by bit-shifting x by 1 and adding x to the result. Modern compilers perform even that trick for you if you multiply by 3 and turn on aggressive optimization, unless the target is a modern CPU where multiplication is as fast as addition, in which case the trick would slow the calculation down.)
Global edit: sorry guys, I got all fired up and wrote a lot of nonsense. Just an old geezer ranting.
I wanted to believe C had been spared, but alas, since C11 it has been brought on par with C++. Apparently, knowing what the compiler will do with side effects in expressions now requires solving a little maths riddle involving a partial ordering of code sequences based on "is located before the synchronization point of".
I happen to have designed and implemented a few critical real-time embedded systems back in the K&R days (including the controller of an electric car that could send people crashing into the nearest wall if the engine was not kept in check, a 10 tons industrial robot that could squash people to a pulp if not properly commanded, and a system layer that, though harmless, would have a few dozen processors suck their data bus dry with less than 1% system overhead).
I might be too senile or stupid to get the difference between undefined and unspecified, but I think I still have a pretty good idea of what concurrent execution and data access mean. In my arguably informed opinion, this obsession of the C++ and now C guys with their pet languages taking over synchronization issues is a costly pipe dream. Either you know what concurrent execution is, and you don't need any of these gizmos, or you don't, and you would do the world at large a favour not trying to mess with it.
All this truckload of eye-watering memory barrier abstractions is simply due to a temporary set of limitations of the multi-CPU cache systems, all of which can be safely encapsulated in common OS synchronization objects like, for instance, the mutexes and condition variables C++ offers.
The cost of this encapsulation is but a minute drop in performance compared with what the use of fine-grained, CPU-specific instructions could achieve in some cases.
The volatile keyword (or a #pragma dont-mess-with-that-variable for all I, as a system programmer, care) would have been quite enough to tell the compiler to stop reordering memory accesses.
Optimal code can easily be produced with direct asm directives to sprinkle low level driver and OS code with ad hoc CPU specific instructions. Without an intimate knowledge of how the underlying hardware (cache system or bus interface) works, you're bound to write useless, inefficient or faulty code anyway.
A minute adjustment of the volatile keyword and Bob would have been everybody but the most hardboiled low-level programmers' uncle.
Instead of that, the usual gang of C++ maths freaks had a field day designing yet another incomprehensible abstraction, yielding to their typical tendency to design solutions looking for nonexistent problems and mistaking the definition of a programming language for the specs of a compiler.
Only this time the change required to deface a fundamental aspect of C too, since these "barriers" had to be generated even in low level C code to work properly. That, among other things, wrought havoc in the definition of expressions, with no explanation or justification whatsoever.
As a conclusion, the fact that a compiler could produce a consistent machine code from this absurd piece of C is only a distant consequence of the way C++ guys coped with potential inconsistencies of the cache systems of the late 2000s.
It made a terrible mess of one fundamental aspect of C (expression definition), so that the vast majority of C programmers - who don't give a damn about cache systems, and rightly so - are now forced to rely on gurus to explain the difference between a = b() + c() and a = b + c.
Trying to guess what will become of this unfortunate array is a net waste of time and effort anyway. Regardless of what the compiler makes of it, this code is pathologically wrong. The only responsible thing to do with it is send it to the bin.
Conceptually, side effects can always be moved out of expressions, with the trivial effort of explicitly letting the modification occur before or after the evaluation, in a separate statement.
This kind of shitty code might have been justified in the 80's, when you could not expect a compiler to optimize anything. But now that compilers have long become more clever than most programmers, all that remains is a piece of shitty code.
I also fail to understand the importance of this undefined / unspecified debate. Either you can rely on the compiler to generate code with a consistent behaviour or you can't. Whether you call that undefined or unspecified seems like a moot point.
In my arguably informed opinion, C is already dangerous enough in its K&R state. A useful evolution would be to add common-sense safety measures. For instance, using the advanced code-analysis machinery the specs now force compilers to implement to at least generate warnings about bonkers code, instead of silently generating code potentially unreliable in the extreme.
But instead the guys decided, for instance, to define a fixed evaluation order in C++17. Now every software imbecile is actively incited to put side effects in his/her code on purpose, basking in the certainty that the new compilers will eagerly handle the obfuscation in a deterministic way.
K&R was one of the true marvels of the computing world. For twenty bucks you got a comprehensive specification of the language (I've seen single individuals write complete compilers just using this book), an excellent reference manual (the table of contents would usually point you within a couple of pages of the answer to your question), and a textbook that would teach you to use the language in a sensible way. Complete with rationales, examples and wise words of warning about the numerous ways you could abuse the language to do very, very stupid things.
Destroying that heritage for so little gain seems like a cruel waste to me. But again I might very well fail to see the point completely.
Maybe some kind soul could point me in the direction of an example of new C code that takes a significant advantage of these side effects?
I know this works to get the address of the i-th element (the pointer arithmetic is done automatically):
struct Obj *c = &p[i];
However, does this have a performance hit compared to doing the pointer arithmetic manually?
struct Obj *c = p+i;
Are there any reasons why the first should be avoided?
However, does this have a performance hit compared to doing the pointer arithmetic manually?
Expressions using the indexing operator are defined to be equivalent to corresponding operations involving pointer arithmetic and dereferencing, so
&p[i]
is exactly equivalent to
&(*(p + i))
The standard furthermore specifies that when the operand of the & operator is the result of a * operator, neither is evaluated, and the result is as if both were omitted, so that is in turn equivalent to
p + i
which has defined behavior as long as it does not purport to produce a pointer to before the beginning of the array from which pointer p was derived, nor more than one position past its end.
Although it is conceivable that a compiler would produce different or worse code in one case than in the other, there is no reason to expect that.
Are there any reasons why the first should be avoided?
Not much. The latter has a lighter cognitive load and is easier to read, inasmuch as (syntactically) it involves only one operation rather than two. I tend to prefer it myself for that reason. On the other hand, understanding it may be a little less intuitive.
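A small sketch verifying the equivalence (it assumes some definition for struct Obj, which the question doesn't give):
#include <assert.h>

struct Obj { int x; };

int main(void)
{
    struct Obj p[4];
    int i = 2;
    assert(&p[i] == p + i);  /* same address by definition: &*(p + i) is p + i */
    return 0;
}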
That second version is perplexing and stands out as an anomaly, so unless you have an extremely compelling case for using it, don't. If you do, then you should document why so someone doesn't go and replace it with the former.
The second form is also really brittle in that it might break if you change something in a refactoring and introduce a new struct, like:
struct Mobj *c = p + i * sizeof(struct Obj);
where you forgot to update the sizeof part, and now your code is super broken.
That of course overlooks the fact that pointer math automatically advances by the size of the structure anyway, that is:
p[i] == *(p + i)
But in the buggy refactor above, what you're effectively doing is this:
p[i * sizeof(struct Obj)]
which is not what you intend.
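In other words, the scaling by the element size is implicit; a quick sketch (again assuming some definition for struct Obj):
struct Obj { int x; };

int main(void)
{
    struct Obj arr[4];
    struct Obj *p = arr;
    int i = 2;
    struct Obj *c1 = p + i;  /* advances i elements; the sizeof scaling is automatic */
    struct Obj *c2 = (struct Obj *)((char *)p + i * sizeof(struct Obj));  /* the same address, computed by hand */
    return !(c1 == c2);      /* c1 == c2 always holds, so this returns 0 */
}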
Less is more. Coding is hard enough as it is, don't overcomplicate things.
Yea, it is okay. In fact, when you use p it is actually the same as &p[0] from the compiler's point of view. Always let the compiler do the pointer math for you; use the first form.
The C standard states that &array[index] is exactly equivalent to array + index. This means that both do pointer arithmetic, so they should be no different.
Even if they were different, choosing one over the other would be a micro-optimization that's not worth making unless you have measured a significant difference between them.
In C (and some other C-like languages) we have two unary operators for working with pointers: the dereference operator (*) and the address-of operator (&). They are left (prefix) unary operators, which introduces an uncertainty in the order of operations, for example:
*ptr->field
or
*arr[id]
The order of operations is strictly defined by the standard, but from a human perspective it is confusing. If the * operator were a right (postfix) unary operator, the order would be obvious and wouldn't require extra parentheses:
ptr*->field vs ptr->field*
and
arr*[id] vs arr[id]*
So is there a good reason why the operators are left unary instead of right? One thing that comes to mind would be the declaration of types: left operators stay near the type name (char *a vs char a*), but there are type declarations which already break this rule, so why bother (char a[num], char (*a)(char), etc.)?
Obviously, there are some problems with this approach too, like the
val*=2
which would be either the *= shorthand for val = val * 2, or a dereference-and-assign val* = 2.
However, this can easily be solved by requiring whitespace between the * and = tokens in the case of dereferencing. Once again, nothing groundbreaking, since there is a precedent for such a rule (- -a vs --a).
So why are they left instead of right operators?
Edit:
I want to point out that I asked this question because many of the weirder aspects of C have interesting explanations for why they are the way they are, like the existence of the -> operator, the type declaration syntax, or indexing starting from 0, and so on. The reasons may no longer be valid, but they are still interesting in my opinion.
There indeed is an authoritative source: "The Development of the C Language" by the creator of the language, Dennis M. Ritchie:
An accident of syntax contributed to the perceived complexity of the language. The indirection operator, spelled * in C, is syntactically a unary prefix operator, just as in BCPL and B. This works well in simple expressions, but in more complex cases, parentheses are required to direct the parsing. For example, to distinguish indirection through the value returned by a function from calling a function designated by a pointer, one writes *fp() and (*pf)() respectively. The style used in expressions carries through to declarations, so the names might be declared
int *fp();
int (*pf)();
In more ornate but still realistic cases, things become worse:
int *(*pfp)();
is a pointer to a function returning a pointer to an integer. There are two effects occurring. Most important, C has a relatively rich set of ways of describing types (compared, say, with Pascal). Declarations in languages as expressive as C—Algol 68, for example—describe objects equally hard to understand, simply because the objects themselves are complex. A second effect owes to details of the syntax. Declarations in C must be read in an `inside-out' style that many find difficult to grasp [Anderson 80]. Sethi [Sethi 81] observed that many of the nested declarations and expressions would become simpler if the indirection operator had been taken as a postfix operator instead of prefix, but by then it was too late to change.
Thus the reason why * is on the left in C is because it was on the left in B.
B was partially based on BCPL, where the dereferencing operator was !.
This was on the left; the binary ! was an array indexing operator:
a!b
is equivalent to !(a+b).
!a
is the content of the cell whose address is given by a; it can appear on the left of an assignment.
Yet the 50-year-old BCPL manual doesn't even mention the ! operator - instead, the operators were words: unary lv and rv. Since these were understood as if they were functions, it was natural that they preceded the operand; later, the longish rv a could be replaced with the syntactic sugar !a.
Many of the current C operator practices can be traced via this route. B likewise had a[b] equivalent to *(a + b), hence to *(b + a) and to b[a], just as in BCPL one could use a!b <=> b!a.
Notice that in B variables were untyped, so certainly similarity with declarations could not have been the reason to use * on the left there.
So the reason for unary * being on the left in C is as boring as this: there wasn't any problem in simpler programs with the unary * being on the left, in the position everyone was accustomed to for the dereferencing operator in other languages, and nobody really thought some other way would have been better until it was too late to change.
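The a[b] <=> b[a] commutativity inherited along that route still holds in C today:
#include <stdio.h>

int main(void)
{
    int a[3] = {10, 20, 30};
    printf("%d %d\n", a[1], 1[a]);  /* both print 20: a[1] == *(a + 1) == *(1 + a) == 1[a] */
    return 0;
}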
Can somebody explain what is happening with the precedence in this code? I've been trying to figure it out by myself but I couldn't manage it alone.
#include <stdio.h>
int main(void) {
int v[]={20,35,76,80};
int *a;
a=&v[1];
--(*++a);
printf("%d,%d,%d,%d\n",v[0],v[1],v[2],v[3]);
(*++a);
printf("%d\n", *a);
*a--=*a+1; // WHAT IS HAPPENING HERE?
printf("%d\n", *a);
printf("%d,%d,%d,%d\n",v[0],v[1],v[2],v[3]);
}
//OUTPUT
20,35,75,80
80
75
20,35,75,76
*a--=*a+1; // WHAT IS HAPPENING HERE?
What's happening is that the behavior is undefined.
6.5 Expressions
...
2 If a side effect on a scalar object is unsequenced relative to either a different side effect
on the same scalar object or a value computation using the value of the same scalar
object, the behavior is undefined. If there are multiple allowable orderings of the
subexpressions of an expression, the behavior is undefined if such an unsequenced side
effect occurs in any of the orderings.84)
3 The grouping of operators and operands is indicated by the syntax.85) Except as specified
later, side effects and value computations of subexpressions are unsequenced.86)
C 2011 Online Draft (N1570)
The expressions *a-- and *a are unsequenced relative to each other. Except in a few cases, C does not guarantee that expressions are evaluated left to right; therefore, it's not guaranteed that *a-- is evaluated (and the side effect applied) before *a.
*a-- has a side effect - it updates a to point to the previous element in the sequence. *a + 1 is a value computation - it adds 1 to the value of what a currently points to.
Depending on the order that *a-- and *a are evaluated and when the side effect of the -- operator is actually applied, you could be assigning the result of v[1] + 1 to v[0], or v[1] + 1 to v[1], or v[0] + 1 to v[0], or v[0] + 1 to v[1], or something else entirely.
Since the behavior is undefined, the compiler is not required to do anything in particular - it may issue a diagnostic and halt translation, it may issue a diagnostic and finish translation, or it may finish translation without a diagnostic. At runtime, the code may crash, you may get an unexpected result, or the code may work as intended.
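If the intent was, say, "store the old *a plus 1 through a, then step a back one element" (an assumption on my part; the code itself doesn't say), a well-defined rewrite would be:
int tmp = *a + 1;  /* value computation, clearly first */
*a = tmp;          /* then the store */
a--;               /* then the pointer update, sequenced by the statement boundary */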
I'm not going to explain the whole program; I'm going to focus on the "WHAT IS HAPPENING HERE" line. I think we can agree that before this line, the v[] array looks like this, with a pointing at v's last element:
+----+----+----+----+
v: | 20 | 35 | 75 | 80 |
+----+----+----+----+
0 1 2 3
^
+-|-+
a: | * |
+---+
Now, we have
*a-- = *a+1;
It looks like this is going to assign something to where a points, and decrement a. So it looks like it will assign something to v[3], but leave a pointing at v[2].
And the value that gets assigned will evidently be the value that a points to, plus 1.
But the key question is: when we take *a+1 on the right-hand side, will it use the old or the new value of a, from before or after the decrement on the left-hand side? It turns out this is a really, really hard question to answer.
If we take the value after the decrement, it'll be v[2], which is 75, plus 1, or 76, that gets assigned to v[3]. It looks like that's how your compiler interpreted it. And this makes a certain amount of sense, because when we read from left to right, it's easy to imagine that by the time we get around to computing *a+1, the a-- has already happened.
Or, if we took the value before the decrement, it would be v[3], which is 80, plus 1, or 81, that gets assigned to v[3]. And that's how it was interpreted by three different compilers I tried it on. And this makes a certain amount of sense, too, because of course assignments actually proceed from right to left, so it's easy to imagine that *a+1 happens before the a-- on the left-hand side.
So which compiler is correct, yours or mine, and which is wrong? This is where the answer gets a little strange and/or surprising. The answer is that neither compiler is wrong. This is because it turns out that it's not just really hard to decide what should happen here, it is (by definition) impossible to figure out. The C standard does not define how this expression should behave. In fact, it goes one step further than not defining how this expression should behave: the C standard explicitly says that this expression is undefined. So your compiler is right to put 76 in v[3], and my compilers are right to put 81. And since "undefined behavior" means that anything can happen, it wouldn't be wrong for a compiler to arrange to put some other number into v[3], or to end up assigning to something other than v[3].
So the other part of the answer is that you must not write code like this. You must not depend on undefined behavior. It will do different things under different compilers. It may do something completely unpredictable. It is impossible to understand, maintain, or explain.
It's pretty easy to detect when an expression is undefined due to order-of-evaluation ambiguity. There are two cases: (1) the same variable is modified twice, as in x++ + x++; (2) the same variable is modified in one place and used in another, as in *a-- = *a+1.
It's worth noting that one of the three compilers I used said "eo.c:15: warning: unsequenced modification and access to 'a'", and another said "eo.c:15:5: warning: operation on ‘a’ may be undefined". If your compiler has an option to enable warnings like these, use it! (Under gcc it's -Wsequence-point or -Wall. Under clang, it's -Wunsequenced or -Wall.)
See John Bode's answer for the detailed language from the C Standard that makes this expression undefined. See also the canonical StackOverflow question on this topic, Why are these constructs (using ++) undefined behavior?
Not exactly sure which expression you have problems with. Increment and decrement operators have the highest precedence; dereference comes after; addition and subtraction come after that.
But with regard to assignment, C does not specify the order of evaluation (right to left or left to right).
will right hand side of an expression always evaluated first
C does not specify which of the right hand side or left hand side of the = operator is evaluated first.
*a--=*a+1;
So it could be that your pointer a is decremented first, or only after it's dereferenced on the right-hand side.
In other words, depending on the compiler this expression could be equivalent to either:
a--;
*a = *a+1;
or
*(a-1)=*a+1;
a--;
I personally never rely too much on operator precedence in my code. It makes it more legible to either add parentheses or split things across separate lines.
Unless you're building a compiler yourself and need to decide what assembly code to generate.
This question already has answers here: Undefined behavior and sequence points
For my compiler class, we are gradually creating a pseudo-PASCAL compiler. It does, however, follow the same precedence as C. That being said, in the section where we create prefix and postfix operators, I get 0 for
int a = 1;
int b = 2;
++a - b++ - --b + a--
when C returns a 1. What I don't understand is how you can even get a 1. By doing straight prefix first, the answer should be 2. And by doing postfix first, the answer should be -2. By doing everything left to right, I get zero.
My question is, what should the precedence of my operators be to return a 1?
Operator precedence tells you, for example, whether ++a - b means (++a) - b or ++(a - b). Clearly it should be the former, since the latter isn't even valid. In your implementation it's clearly the former (or you wouldn't be getting a result at all), so you implemented operator precedence correctly.
Operator precedence has nothing to do with the order in which subexpressions are evaluated. In fact, the order in which the operands of + and - are evaluated is unspecified in C, and any code that modifies the same variable twice without a sequence point in between invokes undefined behavior. So whichever order you choose is fine, and 0 is as valid a result as any other value.
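One explicitly sequenced, left-to-right rewrite of the expression (each statement ends at a sequence point, so the behavior is fully defined, and it reproduces your left-to-right result of zero):
int a = 1, b = 2;
int t1 = ++a;  /* a == 2, t1 == 2 */
int t2 = b++;  /* t2 == 2, b == 3 */
int t3 = --b;  /* b == 2, t3 == 2 */
int t4 = a--;  /* t4 == 2, a == 1 */
int r = t1 - t2 - t3 + t4;  /* 2 - 2 - 2 + 2 == 0 */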
It is illegal to change a variable several times in a row like that (roughly, between assignments; the standard talks about sequence points). Technically, this is what the C standard calls undefined behaviour. The compiler has no obligation to detect that you are writing nonsense, and can assume you never will. Anything whatsoever can happen when you run the program (or even while compiling). Also check "nasal demons" in the Jargon File.
The ++ increment and -- decrement operators can be placed before or after a value, with different effects. If placed before the operand (prefix), its value is changed immediately; if placed after the operand (postfix), its value is noted first, then the value is changed.
McGrath, Mike (2006). C Programming in Easy Steps, 2nd Edition. United Kingdom: Computer Step.