sizeof of the conditional operator's value ?: - c

#include <stdio.h>
int main()
{
    int x = 1;
    short int i = 2;
    float f = 3;
    if (sizeof((x == 2) ? f : i) == sizeof(float))
        printf("float\n");
    else if (sizeof((x == 2) ? f : i) == sizeof(short int))
        printf("short int\n");
}
Here the expression ((x == 2) ? f : i) evaluates to i, which is of type short int. The size of short int is 2, whereas sizeof(float) is 4 bytes. The output should be "short int", but I am getting the output "float".

Here the expression ((x == 2) ? f : i) evaluates to i which is of type short int
This is not how the usual arithmetic conversions work in C. The second and third operands of ? : are first converted to a common type, and that type is the type of the result of the whole expression. Moreover, because of the integer promotions, that type will never be smaller than int.
This is all described in clause 6.3.1 Arithmetic operands of the C11 standard, which is slightly too long to cite here.

sizeof is a compile-time operator, so it does not evaluate x == 2. It operates only on the type of the ternary expression, which in this case is float: the second and third operands are brought to a common type, and the short int operand (after promotion to int) gets converted to float.
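For illustration, here is a minimal sketch (assuming a C11 compiler, since it uses _Generic) showing that the type of the conditional expression is fixed at compile time, regardless of the value of x:

#include <stdio.h>
int main(void)
{
    int x = 1;
    short int i = 2;
    float f = 3;
    /* sizeof never evaluates x == 2; it only inspects the type of the
       whole conditional expression, which is float here. */
    printf("%zu\n", sizeof((x == 2) ? f : i));   /* prints sizeof(float), typically 4 */
    /* _Generic selects on that same compile-time type. */
    puts(_Generic(((x == 2) ? f : i),
                  float: "float",
                  int: "int",
                  default: "something else"));   /* prints "float" */
    return 0;
}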

Related

Casting to _Bool

Traditionally, Boolean values in C were represented with int or char. The new _Bool type makes intent clearer, but it also has another interesting feature: casting a floating-point number to it does not appear to truncate toward zero, but instead compares the value against exact zero:
#include <stdio.h>
int main(int argc, char **argv) {
    double a = 0.1;
    int i = (int)a;
    printf("%d\n", i);
    _Bool b = (_Bool)a;
    printf("%d\n", b);
    return 0;
}
prints
0
1
So this is a semantic difference. And one I'm happy with; it duplicates the effect of using a floating-point number as a conditional.
Is this something that can be depended on across-the-board? Does the new C standard define the result of casting X to _Bool as identical to X ? 1 : 0 for all X for which that is a valid operation?
In the C Standard (6.3.1.2 Boolean type) there is written
1 When any scalar value is converted to _Bool, the result is 0 if the value compares equal to 0; otherwise, the result is 1.
That is, during conversion to _Bool the compiler does not try to represent the value of the operand as an integer (by truncating toward zero or otherwise). It only checks whether the value compares equal or unequal to zero.
Actually this declaration
_Bool b = (_Bool)a;
is equivalent to
_Bool b = a;
It is entirely consistent: a non-zero value converted to _Bool is true. Since _Bool is a true Boolean type and not "faked", it can behave correctly. So:
_Bool b = (_Bool)a;
is equivalent to:
_Bool b = (a != 0);
not:
_Bool b = ((int)a != 0);
In the end, interpreting a float as a Boolean is ill-advised, as is comparing it for equality to zero. If you want the semantics you expect, you must code them explicitly:
_Bool b = (_Bool)((int)a);
Semantically that is equivalent to:
_Bool b = (a <= -1.0 || a >= 1.0);
It is clearer and safer to use a Boolean expression than to force a value to Boolean with a cast.
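To see the difference concretely, here is a small sketch (the values are chosen only for illustration) contrasting the truncating conversion to int with the compare-against-zero conversion to _Bool:

#include <stdio.h>
int main(void)
{
    double tests[] = { 0.0, 0.1, -0.5, 2.5 };
    for (size_t k = 0; k < sizeof tests / sizeof tests[0]; ++k) {
        double a = tests[k];
        /* (int)a truncates toward zero; (_Bool)a only checks whether a compares equal to 0 */
        printf("a = %4.1f  (int)a = %d  (_Bool)a = %d\n", a, (int)a, (_Bool)a);
    }
    return 0;
}

It prints 1 for every non-zero value, including 0.1 and -0.5, where the conversion to int gives 0.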

Why this conditional expression has the size of a float?

I expect the output to be "short int" but the output is "float".
#include <stdio.h>
int main(void)
{
    int x = 1;
    short int i = 2;
    float f = 3;
    if (sizeof((x == 2) ? f : i) == sizeof(float))
        printf("float\n");
    else if (sizeof((x == 2) ? f : i) == sizeof(short int))
        printf("short int\n");
}
You expect (x == 2) ? f : i to have a type based on the value of x. But that is not how the C type system operates. The conditional operator is an expression, and all* expressions in C have a fixed type at compile time. It is this type that sizeof operates on. The value of the expression will depend on the value of x, but the type depends on f and i alone.
In this case, the type is determined by the usual arithmetic conversions, which nominate float as the type of the result, just as if you had written f + i, where the result would unsurprisingly be a float too.
(*) VLAs are an exception to this rule, but your question is not about one, so it is irrelevant here.
You are asking the compiler to compute the size of (x == 2) ? f : i, and that expression has type float.
Remember that sizeof is a compile-time operator, and that the ?: ternary conditional operator has as its type a common type to which both the "then" and the "else" operands can be converted.
For details, refer to a C reference and to the C11 standard draft N1570.
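As a small illustration of the f + i analogy mentioned above (same declarations as in the question), the conditional expression and f + i end up with the same type, so their sizes match sizeof(float) whatever x happens to be:

#include <stdio.h>
int main(void)
{
    int x = 1;
    short int i = 2;
    float f = 3;
    /* All three sizes are equal: the usual arithmetic conversions give the
       conditional expression and f + i the type float. */
    printf("%zu %zu %zu\n",
           sizeof((x == 2) ? f : i),
           sizeof(f + i),
           sizeof(float));
    return 0;
}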

If statement executes even when condition is false

if ((*ptr != ',') || strlen(ptr + 1) < sizeof(struct A) * num1)
{
    printf("\n Condition satisfied.");
}
This is the code in question. I have a string of the format str = "-1,ABCDEFGH", and a struct A of size 15 bytes.
I'm performing this operation beforehand:
number = strtoul(str, &ptr, 10);
After this operation, ptr points to the ',' and number = -1
Looking at the if condition, the first operand evaluates to false (because *ptr == ','), but the second operand evaluates to TRUE even though it should be false: strlen(ptr+1) is positive, and (sizeof(struct A) * num1) should be negative, simply because num1 is a negative value.
Why does this condition evaluate to true and enter the if block? I'm getting the output 'Condition satisfied', when I shouldn't be. Thanks in advance.
(sizeof(struct A) * number) is negative, simply because num1 is a negative value
Not quite. sizeof(struct A) has type size_t (unsigned type).
Assuming that num1 is of type int, and that the precision of the signed type corresponding to size_t is the same as or greater than the precision of int, then sizeof(struct A) * num1 is an unsigned value (and hence non-negative), even if num1 is negative.
See Arithmetic operators:
Otherwise, if the unsigned operand's conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand's type.
Please note, as #user3386109 commented, that strlen returns an unsigned type too, so there could be the same problem with < as with *.
sizeof(struct A) is unsigned, and when it is multiplied by a number the result is unsigned as well, so the comparison with strlen(ptr+1) is done in unsigned arithmetic.
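A minimal sketch of the multiplication in isolation (struct A here is just a hypothetical 15-byte stand-in for the struct in the question):

#include <stdio.h>
struct A { char payload[15]; };
int main(void)
{
    int num1 = -1;
    /* sizeof(struct A) has type size_t (unsigned), so num1 is converted to
       that unsigned type before the multiplication.  The product is a huge
       positive value, not -15. */
    size_t product = sizeof(struct A) * num1;
    printf("%zu\n", product);            /* e.g. 18446744073709551601 with a 64-bit size_t */
    printf("%d\n", (int)(5 < product));  /* 1: any small strlen() result compares less */
    return 0;
}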

"comparison between signed and unsigned integer expressions" with only unsigned integers

This warning should not appear for this code should it?
#include <stdio.h>
int main(void) {
    unsigned char x = 5;
    unsigned char y = 4;
    unsigned int z = 3;
    puts((z >= x - y) ? "A" : "B");
    return 0;
}
z is a different size, but it has the same signedness. Is there something about integer conversions that I'm not aware of? Here's the gcc output:
$ gcc -o test test.c -Wsign-compare
test.c: In function ‘main’:
test.c:10:10: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
puts((z >= x - y) ? "A" : "B");
^
$ gcc --version
gcc (Debian 4.9.1-15) 4.9.1
If z is an unsigned char, I do not get the warning.
The issue is that the additive operators perform the usual arithmetic conversions on their arithmetic operands. In this case that results in the integer promotions being performed on the operands, which converts unsigned char to int, since signed int can represent all the values of unsigned char.
A related thread Why must a short be converted to an int before arithmetic operations in C and C++? explains the rationale for promotions.
C has this concept called "Integer Promotion".
Basically it means that all maths is done in signed int unless you really insist otherwise, or it doesn't fit.
If I put in the implicit conversions, your example actually reads like this:
puts((z >= (int)x - (int)y) ? "A" : "B");
So, now you see the signed/unsigned mismatch.
Unfortunately, you can't safely correct this problem using casts alone. There are a few options:
puts((z >= (unsigned int)(x - y)) ? "A" : "B");
or
puts((z >= (unsigned int)x - (unsigned int)y) ? "A" : "B");
or
puts(((int)z >= x - y) ? "A" : "B");
But they all suffer from the same problem: what if y is larger than x, and what if z is larger than INT_MAX (not that it will be in this example)?
A properly correct solution might look like this:
puts((y > x || z >= (unsigned)(x - y)) ? "A" : "B");
In the end, unless you really need the extra bit, it is usually best to avoid unsigned integers.
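A short sketch of why the promotion matters in practice (the values are swapped here so the subtraction actually goes negative):

#include <stdio.h>
int main(void)
{
    unsigned char x = 4;
    unsigned char y = 5;
    unsigned int z = 3;
    /* x and y are promoted to int, so x - y is the int value -1. */
    printf("x - y = %d\n", x - y);                  /* -1 */
    /* In z >= x - y, that -1 is converted to unsigned int (UINT_MAX),
       which is exactly the surprise -Wsign-compare warns about. */
    printf("%s\n", (z >= x - y) ? "A" : "B");       /* B */
    /* Doing the comparison in signed arithmetic gives the "expected" answer. */
    printf("%s\n", ((int)z >= x - y) ? "A" : "B");  /* A */
    return 0;
}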

A Macro using sizeof(arrays) is not giving the expected output

#include <stdio.h>
int arr[] = {1, 2, 3, 4, 5};
#define TOT (sizeof(arr)/sizeof(arr[0]))
int main()
{
    int d = -1, x = 0;
    if (d <= TOT) {
        x = arr[4];
        printf("%d", TOT);
    }
    printf("%d", TOT);
}
TOT has the value 5, but the if condition is failing. Why is that?
Because there are "the usual arithmetic conversions" at work for the if.
The sizeof operator yields a value of an unsigned type (size_t), so d is converted to unsigned, making it far greater than the number of elements in arr.
Try
#define TOT (int)(sizeof(arr)/sizeof(arr[0]))
or
if (d <= (int)TOT) {
That's because sizeof yields an unsigned number, while d is signed. d is implicitly converted to an unsigned number, and it then becomes much larger than TOT.
You should get a warning from the compiler about a signed/unsigned comparison.
Your expression for TOT is an unsigned value because the sizeof() operator always returns unsigned (positive) values.
When you compare the signed variable d with it, d gets automatically converted to a very large unsigned value, and hence becomes larger than TOT.
The result of sizeof has an unsigned integer type; that is why the if is failing: d, which is signed, is converted to unsigned and becomes greater than TOT.
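A minimal sketch of the conversion that makes the if fail, together with the cast-based fix:

#include <stdio.h>
int arr[] = {1, 2, 3, 4, 5};
#define TOT (sizeof(arr) / sizeof(arr[0]))
int main(void)
{
    int d = -1;
    /* d is converted to the unsigned type of TOT (size_t), so it becomes a
       huge value such as 18446744073709551615 rather than -1. */
    printf("%d\n", d <= TOT);        /* 0 */
    printf("%d\n", d <= (int)TOT);   /* 1 */
    return 0;
}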

Resources