This int c = (a==b) is exactly what I'd like to write in my C program, compiled with GCC. I can do it, obviously (it works just fine), but I don't know whether it may cause undefined behavior. My program will not be compiled with any other compiler or for any other architecture. Is this legal ANSI C? Thanks.
It's completely legal. If a is equal to b, then c will be 1; otherwise, it will be 0.
int c = (a == b);
This is perfectly legal. Initialization is covered by the C standard (C99 §6.7.8); the right-hand side can be any assignment-expression, including a == b (assuming, of course, that a and b are defined and have comparable types).
It is perfectly valid if c is declared at block scope.
When declared at file scope it is not valid because the initializer has to be a constant expression.
a == b is an expression and in that sense is no different from another expression like a + b or a & b.
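A minimal sketch of the scope distinction, assuming hypothetical variables a, b, c and d; the file-scope case needs a constant expression, the block-scope case does not:

int a = 1, b = 2;

/* int c = (a == b); */   /* invalid at file scope: (a == b) is not a constant expression */
int d = (1 == 2);         /* valid at file scope: the initializer is a constant expression */

int main(void)
{
    int c = (a == b);     /* valid at block scope: c is initialized to 0 here */
    return c;
}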
Well, it depends on what the types of a and b are. If they are types that support equality check, then yes, it's perfectly legal.
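For example (using a hypothetical struct point type just for illustration), == is not defined for whole struct operands, so the same pattern would not compile there; you compare the members instead:

struct point { int x, y; };

int main(void)
{
    struct point a = {1, 2}, b = {1, 2};
    /* int c = (a == b); */                 /* error: == does not accept struct operands */
    int c = (a.x == b.x && a.y == b.y);     /* compare the members instead; c is 1 here */
    return !c;                              /* returns 0 when the points are equal */
}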
Suppose we declare a variable
int i = 10;
And we have these two statements:
printf("%d",i);
printf("%d",*(&i));
These two statements print the same value, i.e. 10
From my understanding of pointers, not only is their output the same, but the above two statements mean exactly the same thing. They are just two different ways of writing the same statement.
However, I came across some interesting code:
#include <stdio.h>

int main(){
    const int i = 10;
    int* pt1 = &i;
    *pt1 = 20;
    printf("%d\n", i);
    printf("%d\n", *(&i));
    return 0;
}
To my surprise, the result is:
10
20
This suggests that i and *(&i) don't mean the same if i is declared with the const qualifier. Can anyone explain?
The behavior of *pt1 = 20; is not defined by the C standard, because pt1 has been improperly set to point to the const int i. C 2018 6.7.3 says:
If an attempt is made to modify an object defined with a const-qualified type through use of an lvalue with non-const-qualified type, the behavior is undefined…
Because of this, the behavior of the entire program is not defined by the C standard.
In C code with defined behavior, *(&i) is defined to produce the value of i. However, in code with behavior not defined by the C standard, the normal rules that would apply to *(&i) are canceled. It could produce the value that the const i was initialized to, it could produce the value the program attempted to change i to, it could produce some other value, it could cause the program to crash, or it could cause other behavior.
Think of what it means to declare something as const - you're telling the compiler you don't want the value of i to change, and that it should flag any code that has an expression like i = 20.
However, you're going behind the compiler's back and trying to change the value of i indirectly through pt1. The behavior of this action is undefined - the language standard places no requirements on the compiler or the runtime environment to handle this situation in any particular way. The code is erroneous and you should not expect a meaningful result.
One possible explanation (of many) for this result is that the i in the first printf statement was replaced with the constant 10. After all, you promised that the value of i would never change, so the compiler is allowed to make that optimization.
But since you take the address of i, storage must be allocated for it somewhere, and in this case it apparently allocated storage in a writable segment (const does not mean "put this in read-only memory"), so the update through pt1 was successful. The compiler actually evaluates the expression *(&i) to retrieve the current value of i, rather than replacing it with a constant. On a different platform, or in a different program, or in a different build of the same program, you may get a different result.
The moral of this story is "don't do that". Don't attempt to modify a const object through a non-const expression.
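By contrast, here is a minimal sketch of the const-correct version, reusing the question's names i and pt1: with a pointer to const, the compiler is required to diagnose the attempted modification instead of letting you wander into undefined behavior.

#include <stdio.h>

int main(void)
{
    const int i = 10;
    const int *pt1 = &i;     /* pointer to const matches the type of i */

    /* *pt1 = 20; */         /* constraint violation: a conforming compiler must diagnose this */

    printf("%d\n", i);
    printf("%d\n", *(&i));   /* both lines print 10 */
    return 0;
}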
I'm new to C programming and I'm struggling really hard to understand why this works
#include <stdio.h>

int main() {
    int l = -5;
    char arr[l];
    printf("content: %s; sizeof: %d\n", arr, sizeof(arr));
}
with output:
content: ; sizeof: -5
and this doesn't:
#include <stdio.h>

int main() {
    char arr[-5];
    printf("content: %s; sizeof: %d\n", arr, sizeof(arr));
}
with output:
name.c:6:10: error: size of array ‘arr’ is negative
6 | char arr[-5];
| ^~~
I was expecting an error from the first example as well, but I really don't know what's happening here.
Neither version of the program conforms to the C language specification (even after the format specifiers are corrected to properly match the size_t argument). But the two cases are semantically different, and they violate different provisions of the language specification.
Taking this one first:
char arr[-5];
The expression -5 is an integer constant expression, so this is a declaration of an ordinary array (not a variable-length array). It is subject to paragraph 6.7.6.2/1 of the C17 language spec, which says, in part:
In addition to optional type qualifiers and the keyword static, the [ and ] may delimit an expression or *. If they delimit an expression (which specifies the size of an array), the expression shall have an integer type. If the expression is a constant expression, it shall have a value greater than zero.
(Emphasis added.)
That is part of a language constraint, which means that the compiler is obligated to emit a diagnostic message when it observes a violation. In principle, implementations are not required to reject code containing constraint violations, but if they accept such code then the language does not define the results.
On the other hand, consider
int l = -5;
char arr[l];
Because l is not a constant expression (and would not be even if l were declared const), the provision discussed above does not apply, and, separately, arr is a variable-length array. This is subject to paragraph 6.7.6.2/5 of the spec, the relevant part requiring of the size expression that:
each time it is evaluated it shall have a value greater than zero
The program violates that provision, but it is a semantic rule, not a language constraint, so the compiler is not obligated to diagnose it, much less to reject the code. In the general case, the compiler cannot recognize or diagnose violations of this particular rule, though in principle, it could do so in this particular case. If it accepts the code then the runtime behavior is undefined.
Why this program emits -5 when you compile and run it with your particular C implementation on your particular hardware is not established by C. It might be specified by your implementation, or it might not. Small variations in the program or different versions of your C implementation might produce different results.
Overall, this is yet another example of C refusing to hold your hand. New coders, and those used to interpreted languages and virtual machines, often seem to expect that some component of the system will inform them when they have written bad code. Sometimes it does, but other times it just does something with that bad code that may or may not resemble what the programmer had in mind. Effective programming in C requires attention to detail.
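Since the compiler will not enforce the "greater than zero" rule for a variable-length array, the program has to check it at run time. A minimal sketch of such a check; the upper bound of 100 is an arbitrary choice for illustration only:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int l = -5;                        /* imagine this value arrives at run time */

    if (l <= 0 || l > 100) {           /* reject sizes the VLA rule does not allow */
        fprintf(stderr, "invalid array length: %d\n", l);
        return EXIT_FAILURE;
    }

    char arr[l];                       /* now well-defined: l is known to be positive */
    printf("sizeof: %zu\n", sizeof arr);
    return 0;
}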
I know the following is an example of pass by reference in C++, input is passed as a reference:
void add(int &input){
    ++input;
}
I also know pass by reference is not available in C. My question is: does the above syntax mean something else in C (i.e. pass by value or something), or is it meaningless?
Trying to compile it in C gives this error:
error: parameter name omitted
does the above syntax mean something else in C?
No, it does not. It's not valid C at all.
The & operator means two things in C. The binary one is bitwise "and", and the unary is "address of". You cannot use it in declarations.
C++ chose this syntax for reference variables for two reasons. The first is that since it is not valid C, it will not collide with existing C code. When C++ came along, its designers focused pretty hard on making C++ backwards compatible with C. In later versions of C++, backwards compatibility with C is not a very high priority. To a large degree, this is because C++ was a fork of a fairly early version of C, and since then both languages have evolved somewhat independently. For instance, C99 added variable-length arrays (made optional in C11), which were never added to C++. Another example is designated initializers.
The other reason is that the meaning of the operator is pretty similar. You can interpret it as "instead of forcing the caller to send the address, I will take the address of whatever they send". They simply moved the & to the function prototype instead of the function call.
And yes, there are a few other differences between pointers and references too:
A reference must be initialized. (Assigned upon declaration)
A reference cannot be reassigned to "point" to another object.
A reference must always "point" at an object. It cannot be NULL.
There is one danger with references. In C, you can be certain that a function will never change the variables you pass as arguments unless you pass their address. This C code:
#include <stdio.h>

void foo(int x);   /* defined somewhere else; its definition doesn't matter here */

int main(void)
{
    int a = 42;
    foo(a);
    printf("%d\n", a);
}
will ALWAYS print "42", no matter how the function foo is defined, provided that the code compiles and there's no unrelated undefined behavior. In C++, you don't have that guarantee.
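To illustrate, here is one arbitrary definition of foo (the parameter name x is just an illustrative choice); whatever it does to its parameter, it only ever touches a copy, so main still prints 42:

#include <stdio.h>

void foo(int x)
{
    x = 1000;                /* modifies foo's local copy only */
}

int main(void)
{
    int a = 42;
    foo(a);
    printf("%d\n", a);       /* prints 42: a was passed by value */
}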
No, it is simply invalid syntax in C.
That is actually one of the reasons that C++ picked this syntax for the feature: it wouldn't change the meaning of any existing C code.
While C does not have pass by reference (and the code will produce a compile error), you can get something close to it by following these rules:
In the prototype, replace & with * const (reference cannot be reassigned).
In the body, replace reference to varname with (*varname)
When calling the function, replace arg with &(arg).
void add(int *const in)
{
    ++(*in);        // increment
    (*in) = 5;      // assign
    int x = *in;    // copy the value
}
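A possible call site following rule 3, assuming it is linked together with the add() definition above (the variable name x is illustrative):

#include <stdio.h>

void add(int *const in);     /* the function defined above */

int main(void)
{
    int x = 1;
    add(&x);                 /* pass &x where C++ would pass x by reference */
    printf("%d\n", x);       /* prints 5: add() last assigned 5 through the pointer */
    return 0;
}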
does the above syntax mean something else in C (i.e. pass by value or something), or is it meaningless?
It is meaningless. The program is syntactically ill-formed.
Why is there a difference between the following two code segments?
struct g {
    int m[100];
};

struct a {
    struct g ttt[40];
    struct g hhh[40];
} man;

extern int bar(int z);

// this code generates a call to memcpy
void foo1(int idx){
    bar(((idx == 5) ? man.hhh[idx+7] : man.ttt[idx+7]).m[idx+3]);
}

// this code doesn't generate a call to memcpy
void foo2(int idx){
    bar(((idx == 5) ? man.hhh[idx+7].m[idx+3] : man.ttt[idx+7].m[idx+3]));
}
In both code segments I want to pass the same field (depending on the conditional expression) to the bar function. However, the first one generates a call to memcpy (this can be seen clearly when compiling with clang for the PowerPC architecture). I wrote a little main and ran the two functions, and they gave me the same output (compiled with gcc 4.4.7).
This answer applies to C only - the question is dual-tagged but I am assuming OP is using C for reasons that will become clear later.
Here's the first expression again:
((idx == 5) ? man.hhh[idx+7] : man.ttt[idx+7]).m[idx+3]
The type of the conditional expression is struct g. However, the result of the conditional operator in C is not an lvalue. What is it then?
In C11 6.2.4p8 it's explicitly defined as a value of temporary lifetime.
In C90 the m[idx+3] is ill-formed: m is not an lvalue because the . operator only yields an lvalue if the left operand was an lvalue; and the array-pointer decay only applies to lvalues.
In C99 array-pointer decay happens to all values, but it's not explicitly stated where decayed m points.
Personally I think it's clear enough that in C99, something akin to the C11 behaviour was intended, so I would regard the code as well-defined in C99. Further discussion here. This is probably a moot point, as on all the compilers I tried, they gave the same result for -std=c99 as they did for -std=c11.
Moving forward then: In C11 (and probably C99), Snippet 1 should give the right result. Your compiler does that, but it seems that it optimizes the code poorly. It naively copies the whole value resulting from the conditional operator before indexing into it.
Testing with godbolt, I found that all versions of "x86 clang" and "PowerPC gcc 4.8" used memcpy; but "x86 gcc" was able to optimize the code.
In C++, the result of the conditional operator is an lvalue if the second and third operands were lvalues of the same type, so this problem shouldn't arise in that language.
To avoid this problem, use an alternative where the result of the conditional operator is not a struct or union value. For example you could just use Snippet 2; or either of:
bar( ((idx == 5) ? &man.hhh[idx+7] : &man.ttt[idx+7])->m[idx+3] );
bar( ((idx == 5) ? man.hhh : man.ttt)[idx+7].m[idx+3] );
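For completeness, a minimal sketch of the pointer-based form written out as a full function (the name foo1_ptr is made up, and the sketch assumes the declarations of struct g, struct a, man and bar from the question); the conditional now yields a pointer, so no temporary struct g object is created:

// select a pointer to the chosen struct g, then index into it in place
void foo1_ptr(int idx){
    struct g *p = (idx == 5) ? &man.hhh[idx+7] : &man.ttt[idx+7];
    bar(p->m[idx+3]);
}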
#include <stdio.h>

int main() {
    int c = c;
    printf("c is %i\n", c);
    return 0;
}
I'm defining an integer variable called c, and I'm assigning its value to itself. But how can this even compile? c hasn't been initialized, so how can its value be assigned to itself? When I run the program, I get c is 0.
I am assuming that the compiler is generating assembly code that allocates space for the c variable (when the compiler encounters the int c declaration). Then it takes whatever junk value is in that uninitialized space and assigns it back to c. Is this what's happening?
I remember quoting this in a previous answer but I can't find it at the moment.
C++03 §3.3.1/1:
The point of declaration for a name is immediately after its complete declarator (clause 8) and before its initializer (if any), ...
Therefore the variable c is usable even before the initializer part.
Edit: Sorry, you asked about C specifically; though I'm sure there is an equivalent line in there. James McNellis found it:
C99 §6.2.1/7: Any identifier that is not a structure, union, or enumeration tag "has scope that begins just after the completion of its declarator." The declarator is followed by the initializer.
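A case where this scoping rule is genuinely useful rather than a trap (a minimal sketch, with p as an illustrative name): a pointer may mention itself in its own initializer, because the name is already in scope there and taking its address does not read its indeterminate value.

#include <stdio.h>

int main(void)
{
    void *p = &p;            /* well-defined: p is in scope, and &p does not read p's value */
    printf("%p\n", (void *)&p);
    printf("%p\n", p);       /* prints the same address: p points at itself */
    return 0;
}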
Your guess is exactly right. int c reserves space on the stack for the variable, which is then read from and written back for the c = c part (although the compiler may optimize that out). Your compiler happens to leave that space holding 0, but this isn't guaranteed to always be the case.
It's undefined behavior to use an uninitialized value (C99 §J.2: "The value of an object with automatic storage duration is used while it is indeterminate"). So anything can happen, from nasal demons to c = 0 to playing Nethack.
c is in scope before its initializer is evaluated!
Although this is one line of code, the declaration of c completes first, and only then is the initializer c evaluated, so c ends up initialized with its own (indeterminate) value. You are just lucky that your compiler happens to hand you zero.
The C specification doesn't guarantee that automatic variables will be initialized to 0, 0.0, "", or ''.
If that happens, it is an artifact of a particular compiler, and you must never trust that it will.
I always set my IDE/compiler to warn about that.
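If you want a deterministic result, initialize c explicitly. In my experience GCC and Clang can also warn about the self-initialization (for GCC, something like -Wall together with -Winit-self, though treat the exact flags as an assumption for your compiler version). A minimal sketch:

#include <stdio.h>

int main(void)
{
    int c = 0;               /* explicit initialization: the value is guaranteed */
    printf("c is %i\n", c);
    return 0;
}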