This question already has answers here:
Why are these constructs using pre and post-increment undefined behavior?
(14 answers)
Closed 2 years ago.
Why is n++ == --n always equal to 1? The following code gives the output 1.
#include <stdio.h>
int main(){
    int n=10;
    printf("%d\n",n++==--n);
}
The output is always 1, no matter what n is.
If compiled with gcc -Wall, the following warning is obtained:
a.c: In function ‘main’:
a.c:4:20: warning: operation on ‘n’ may be undefined [-Wsequence-point]
printf("%d\n",n++==--n);
~^~
There is a good explanation of this in the gcc manpage, in the section which begins:
-Wsequence-point
Warn about code that may have undefined semantics because of violations of sequence
point rules in the C and C++ standards.
[...etc...]
which is well worth a read, because it discusses the issues in more detail. The basic point is that the result depends on the ordering of operations whose ordering is not fully constrained by the standards, and is therefore undefined.
I probably can't reproduce it in full here without also adding a lot of licence information in order to comply with the GFDL. There is a copy at https://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Warning-Options.html
(Take-home message: always compile your code with compiler warnings switched on, but especially if you are seeing possibly unexpected behaviour.)
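For contrast, here is a minimal sketch of one well-defined reading of that expression: splitting it into separate statements puts a sequence point between the two modifications of n, so every conforming compiler must print the same thing.
#include <stdio.h>

int main(void) {
    int n = 10;
    int left = n++;   /* left = 10, then n becomes 11 */
    int right = --n;  /* n goes back to 10, right = 10 */
    printf("%d\n", left == right);  /* always prints 1: 10 == 10 */
    return 0;
}
Of course, this fixes one particular evaluation order (left operand first); the point is that the order is now stated in the code rather than left to the compiler.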
Related
This question already has answers here:
Why are these constructs using pre and post-increment undefined behavior?
(14 answers)
Closed 3 years ago.
CODE
#include <stdio.h>
int main()
{
    int a = 1;
    printf("%d%d%d%d\n", a, ++a, ++a, a);
    a = 1;
    printf("%d%d%d%d\n", a, a++, ++a, a);
    a = 1;
    printf("%d%d%d\n", a, ++a, a++);
    return 0;
}
Output
3333
3233
331
This is "undefined behavior". You cannot rely on the order of evaluation of function arguments in C. When we say "undefined behavior" we mean that anything might happen: it might work on one compiler, it might not work on another compiler; it might work on one compiler with optimizations disabled, and not work on the same compiler with optimizations enabled; it might not work at all; it might work flawlessly; it might dump core; it might send sperm whales and bowls of petunias falling from the sky.
(See https://www.quora.com/What-is-the-passage-on-the-whale-and-the-bowl-of-petunias-about)
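If a particular ordering is actually wanted, a minimal sketch is to evaluate each argument into its own variable first; the sequence points between the statements then pin the result down on every compiler:
#include <stdio.h>

int main(void) {
    int a = 1;
    /* Evaluate each value in an explicitly chosen order. */
    int v1 = a;    /* 1 */
    int v2 = ++a;  /* 2 */
    int v3 = ++a;  /* 3 */
    int v4 = a;    /* 3 */
    printf("%d%d%d%d\n", v1, v2, v3, v4);  /* always prints 1233 */
    return 0;
}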
When investigating a problem, I stumbled upon code that boils down to the following example:
const unsigned long long int MAX = 9223372036854775807ULL; // 2^63 - 1

void double_it(double *d) {
    for (unsigned long long int i = 0; i < MAX; i++) {
        d[i] = 2 * d[i];
    }
}
Due to some mistakes, the for loop runs much further than there is memory and the program crashes. But this is not the interesting part.
When compiled with gcc (gcc -O2 -Wall -std=c99 -c), this code leads to the following warning:
warning: iteration 2305843009213693951ull invokes undefined behavior [-Waggressive-loop-optimizations]
whose cause I don't understand.
There are some similar questions on stackoverflow, e.g.:
g++ "warning: iteration ... invokes undefined behavior" for Seemingly Unrelated Variable
Why does this loop produce "warning: iteration 3u invokes undefined behavior" and output more than 4 lines?
But those problems involved integer overflows; here the counter i is seemingly nowhere near an overflow.
Compiling the same code without -O2 does not lead to such a warning, so I guess -Waggressive-loop-optimizations is an important part.
So actually, I have two questions:
What is the problem with this code (compiled for Linux x86-64)?
Why is there no warning without -O2? If this code is faulty, I would expect it to be faulty whether or not it is optimized.
The behavior is the same for g++ (see it online at coliru).
But those problems involved integer overflows; here the counter i is seemingly nowhere near an overflow.
Why would you think that?
d[i] is the same as *(d + i), and d + i clearly overflows, since the size of a double is more than 2 (I'm not sure that's spelled out anywhere in the standard, but it's a pretty safe assumption that your architecture is like that). Concretely, 2305843009213693951 is 2^61 − 1: with sizeof(double) being 8 on x86-64, an array of doubles could contain at most 2^64 / 8 = 2^61 elements even if it spanned the entire 64-bit address space, and since no real object can do that, index 2^61 − 1 is already guaranteed to be out of bounds. To be perfectly correct, sizeof is not entirely related to this, but this is what the code gets turned into internally in the compiler.
In C11 §6.5.6 we can read:
If both the pointer operand and the result point to elements of the same array object, or one past the last element of the array object, the evaluation shall not produce an overflow; otherwise, the behavior is undefined
We can reverse the logic of that sentence. If the addition obviously overflows, then it must have been undefined behavior.
The reason why you don't get a warning is that the compiler is under no obligation to give you a warning on all undefined behaviors. It's a courtesy from the compiler. With optimizations the compiler spends more time reasoning about what your code does, so it gets the chance to spot more bad behaviors. Without optimizations it doesn't waste time doing that.
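For completeness, a minimal sketch of a fix (the length parameter n is my addition, not from the original code): pass the real element count so the loop bound matches the object and the pointer arithmetic never leaves the array.
#include <stddef.h>

/* Double every element; n is the actual number of elements in d. */
void double_it(double *d, size_t n) {
    for (size_t i = 0; i < n; i++) {
        d[i] = 2 * d[i];
    }
}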
This question already has answers here:
Pointer arithmetic for void pointer in C
(10 answers)
Closed 5 years ago.
There are at least three different posts about how void pointer arithmetic is prohibited in C; that gcc 4.8.2 allows it, assuming that a void is of byte size; and how one can turn on extra pedantic warnings to trigger an error. Here is an example:
#include <stdio.h>
/* compile: gcc -Wall -o try try.c */
int main() {
    char *str = "string";
    void *vp = (void *) str;
    ++vp; /* arithmetic on a void pointer. huh? */
    printf("%s\n", (char *) vp);
    return 0;
}
My question is about what a C compiler is supposed to do in the case of invalid code. Is it not considered a bug when a compiler does not issue a compile error on invalid code?
And this seems like bizarre behavior for a compiler anyway: even if gcc does not issue a compile error, at the very least it could issue a "deprecated" warning with the default compiler flags. And even with -Wall, it still does not give a warning. Huh? It surprised me because gcc seems very mature otherwise, and C is not exactly a novel or complex language.
The C standard makes an attempt to perform pointer arithmetic on void* a constraint violation, which means that any conforming C compiler must issue at least one diagnostic message for any program containing such an attempt. The diagnostic may be a non-fatal warning; in that case, the compiler may then go on to generate code whose behavior is defined by the implementation.
gcc, by default, does not warn about pointer arithmetic on void*. This means that gcc, by default, is not a conforming C compiler.
One could argue that this is a bug, but in its default mode gcc is not a compiler for standard C but for GNU C. (A Fortran compiler's failure to be a conforming C compiler is also not a bug.)
Carefully chosen command-line options can force gcc to at least attempt to be conforming. For example:
gcc -std=cXX -pedantic
where XX is one of 90, 99, or 11, will cause gcc to warn about pointer arithmetic on void*. Replacing -pedantic with -pedantic-errors causes it to treat such arithmetic as a fatal error.
Sure, code that is invalid in standard C can be legal for a specific compiler; that's called a compiler extension.
It's true in this case; from https://gcc.gnu.org/onlinedocs/gcc/Pointer-Arith.html:
In GNU C, addition and subtraction operations are supported on pointers to void and on pointers to functions. This is done by treating the size of a void or of a function as 1.
If you need your code to be portable, it's always a good idea to stick with standard C, but if your code runs only on a specific platform, there's no harm in using certain compiler extensions.
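For reference, a minimal sketch of the portable version of the example above: cast to char * before the arithmetic, since sizeof(char) is 1 by definition and arithmetic on char * is fully defined.
#include <stdio.h>

int main(void) {
    char *str = "string";
    void *vp = str;
    vp = (char *) vp + 1;         /* step one byte, portably */
    printf("%s\n", (char *) vp);  /* prints "tring" */
    return 0;
}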
C11 standard n1570 §6.5.6/2:
For addition, either both operands shall have arithmetic type, or one operand shall be a pointer to a complete object type and the other shall have integer type. (Incrementing is equivalent to adding 1.)
The language for C++ is similar.
It's definitely not standards-conforming behaviour. I think the GCC team already know that.
The answer is that a compliant compiler should issue a diagnostic, and then generate whatever code it likes (or not).
This question already has answers here:
Why are these constructs using pre and post-increment undefined behavior?
(14 answers)
Closed 9 years ago.
I want to know on what criteria the gcc compiler decides how to evaluate the values of the variables. Here is a sample:
int a = 2;
printf("%d %d\n", a++, ++a);
It gives the output:
3 4
Why does gcc give the latest value of a in the pre-increment but not in the post-increment? On what basis does it make this decision?
It's undefined behavior. There is no specified order in which arguments are evaluated.
The code has two problems.
You change the value of a twice in the same expression, with no so-called "sequence point" between them. This is undefined behavior and anything can happen. See the FAQ for more information.
You have side effects in the parameters passed to a function, the side effect being a ++ increment. The order of evaluation of function parameters is unspecified behavior, meaning that the compiler has implemented it in some way, but we can't know how. It may be different from function to function, and certainly different from compiler to compiler.
One should never write code that relies on undefined or unspecified behavior. Even more info in the FAQ.
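A minimal sketch of that second point, with the side effects moved into helper functions (f and g are names I made up for illustration): the compiler may call them in either order, and both orders are allowed.
#include <stdio.h>

static int f(void) { puts("f called"); return 1; }
static int g(void) { puts("g called"); return 2; }

int main(void) {
    /* Unspecified: "f called" and "g called" may appear in either order. */
    printf("%d %d\n", f(), g());
    return 0;
}
Note that this version is merely unspecified, not undefined: each result is well-defined, we just don't know which order the calls happen in. The a++ version in the question is worse, because modifying a twice without an intervening sequence point is full undefined behavior.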
This question already has answers here:
Why are these constructs using pre and post-increment undefined behavior?
(14 answers)
I am confused between True and False .. does a True value stands for Non-Zero and False value stands for Zero? [duplicate]
(4 answers)
Closed 9 years ago.
Recently I faced an issue understanding the behaviour of the printf() function.
This is what I was working with:
#include <stdio.h>
int main() {
    int a = 5;
    printf("%d %d %d", a++, a++, ++a);
    return 0;
}
When I ran this code snippet with gcc (Linux) I got the output 7 6 8.
But when running it with Turbo C (Windows) I got the output 7 6 6.
What I understood is that in Turbo C the parameters are evaluated in right-to-left order.
Can anyone explain how it works on Linux using gcc?
Your code contains several modifications of the same variable without any sequence points between the modifications. Thus, the code is incorrect, and results are unpredictable.
Also, the order of evaluation of function arguments is unspecified by the standard.
Different compilers may give different results in this situation. The question is not only about printf but also about parameter evaluation sequence.
What is implementation-defined behaviour?
A language standard defines the semantics of the language's constructs. When the standard doesn't specify what to do in some case, compiler designers may choose the path they think is correct, and such constructs become implementation-defined. When behaviour is not defined by the standard at all, it's called undefined behaviour.
Unfortunately, many instructors set such questions blindly in exams, merely running the code through one compiler to determine the "correct" answer.
Example:
What is the output of the following statement? (The options given don't include "undefined behaviour".)
#include <stdio.h>
int main() {
    int a = 5;
    printf("%d %d %d", a++, a++, ++a);
    return 0;
}