What is the reason behind the "False" output of this code?

This C code prints "False": the else block executes.
The value of sizeof(int) is 4, but the value of sizeof(int) > -1 is 0.
I don't understand what is happening.
#include <stdio.h>

int main(void)
{
    if (sizeof(int) > -1)
    {
        printf("True");
    }
    else
    {
        printf("False");
    }
    printf("\n%zu", sizeof(int));      // output: 4
    printf("\n%d", sizeof(int) > -1);  // output: 0
}

Your sizeof(int) > -1 test compares two unsigned integers. This is because the sizeof operator yields a size_t value, which is an unsigned type, so the -1 is converted to its 'equivalent' representation as an unsigned value, which is actually the largest possible value of that unsigned type.
To fix this, you need to explicitly cast the sizeof value to a (signed) int:
if ((int)sizeof(int) > -1) {
    printf("True");
}
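Here is a minimal sketch that makes the conversion visible; it assumes a C99 library for the %zu specifier:

#include <stdio.h>

int main(void)
{
    // -1 converted to size_t wraps around to the largest size_t value
    printf("sizeof(int) = %zu\n", sizeof(int));
    printf("(size_t)-1  = %zu\n", (size_t)-1);

    // casting the sizeof result to int keeps the comparison signed
    if ((int)sizeof(int) > -1)
        printf("True\n");
    return 0;
}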

The sizeof operator gives a size_t result.
And size_t is an unsigned type while -1 is not.
That leads to a problem when -1 is converted to the same type as size_t: -1 turns into a very large number, much larger than sizeof(int).
Since sizeof returns an unsigned value (which by definition can't be negative), a comparison like yours makes no sense. Besides, standard C doesn't allow zero-sized objects or types, so even sizeof(any_type_or_expression) > 0 will always be true.

You have to be careful when mixing signed and unsigned values in expressions (and sizeof yields a size_t, which is unsigned).
In a fair number of cases (including this one) the compiler converts both values to the same type before carrying out operations on them, and when you mix signed and unsigned values, that common type will usually be the unsigned type involved. So what happens in this case is that the -1 gets converted to an unsigned value, and -1 always converts to the largest value the unsigned type can hold.
From there, the rest is probably fairly clear, I'd guess.
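The same trap bites any mixed signed/unsigned comparison, not just ones involving sizeof. A small sketch; most compilers (e.g. GCC and Clang with -Wextra or -Wsign-compare) will typically warn about the comparison:

#include <stdio.h>

int main(void)
{
    unsigned int u = 1;
    int i = -1;

    // i is converted to unsigned int, becoming UINT_MAX,
    // so the test effectively reads UINT_MAX > 1
    if (i > u)
        printf("-1 > 1u after conversion\n");
    return 0;
}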


Why does this comparison to sizeof fail?

#include <stdio.h>

int main()
{
    if (sizeof(double) > -1)
        printf("M");
    else
        printf("m");
    return 0;
}
The size of a double is greater than -1, so why is the output m?
This is because sizeof(double) is of type size_t, which is an implementation-defined unsigned integer type of at least 16 bits.
The -1 gets converted to an unsigned type for the comparison (0xffff...), which will always be bigger than sizeof(double).
To compare to a negative number, you can cast it like this: (int)sizeof(double) > -1. Depending on what your objective is, it may also be better to compare to 0 instead.
Your code has implementation defined behavior: sizeof(double) has type size_t, which is an unsigned type with at least 16 bits.
If this type is smaller than int, for example if size_t has 16 bits and int has 32 bits, the comparison would be true, because size_t would be promoted to int and the comparison would use signed int arithmetic.
Yet on most current platforms, size_t is either the same as unsigned int or larger. The rules for evaluating the expression specify that the type with the lesser rank is converted to the type with the higher rank, which means that int is converted to the type of size_t if that is larger, and the comparison is performed using unsigned arithmetic. Converting -1 to an unsigned type produces the largest value of that type, which is guaranteed to be larger than the size of a double, so the comparison is false on these architectures, which is alas very confusing.
I have never seen an architecture with a size_t type smaller than int, but the C Standard allows it, so your code can behave both ways, depending on the characteristics of the target system.
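You can imitate the narrow-unsigned-type case with unsigned short. This is a sketch that assumes short is 16 bits and int is 32 bits, as on typical desktop platforms; integer promotion then turns the unsigned short into a plain int and the comparison stays signed:

#include <stdio.h>

int main(void)
{
    unsigned short n = 4;   // stands in for a narrow size_t

    // n is promoted to (signed) int, so this really compares 4 > -1
    if (n > -1)
        printf("True\n");   // printed under the assumptions above
    else
        printf("False\n");
    return 0;
}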

Comparison of a signed/unsigned value with a negative value

Program 1:
#include <stdio.h>

int main()
{
    if (sizeof(int) > -1)
        printf("Yes");
    else
        printf("No");
    return 0;
}
Output: No
Program 2:
#include <stdio.h>

int main()
{
    if (2 > -1)
        printf("Yes");
    else
        printf("No");
    return 0;
}
Output: Yes
Questions:
What is the difference between program 1 and program 2?
Why is sizeof(int) treated as unsigned?
Why is the 2 in program 2 treated as signed?
This is the common issue of the usual arithmetic conversions between signed and unsigned integers. The sizeof operator returns a value of type size_t, which is an implementation-defined unsigned integer type declared in <stddef.h>.
The integer constant -1 is of type int. When size_t is implemented as "at least" unsigned int (which is very likely in your case), both operands of the binary operator > are converted to the unsigned type. An unsigned value cannot be negative, hence -1 is converted into a large number.
The type of the value returned by the sizeof operator is size_t, which is specified to be an unsigned type (often equivalent to unsigned long).
Simple plain integer literals, like 2 or -1 are always of type int, and int is signed.
If you want an unsigned integer literal, you have to add the U suffix, like 2U.
This is because the sizeof operator returns a value of type size_t, which is an unsigned type, often implemented as unsigned int.
The number 2 by itself is an int, not unsigned int.
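A side-by-side sketch of the two behaviours, using the U suffix to force the unsigned comparison:

#include <stdio.h>

int main(void)
{
    if (2 > -1)
        printf("2 > -1 is true (both operands are int)\n");

    if (2U > -1)
        printf("never reached\n");
    else
        printf("2U > -1 is false (-1 becomes UINT_MAX)\n");
    return 0;
}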

Why is this C code's output "False"? The expected output is "True"

We know that sizeof(int) = 4 and 4 > -1 is true, so the expected output of the following piece of code is "True".
However, it prints "False". What is wrong?
#include <stdio.h>

int main()
{
    if (sizeof(int) > -1)
        printf("True");
    else
        printf("False");
}
sizeof yields an unsigned value. -1 converted to that unsigned type ends up being a fairly large number.
sizeof returns the type size_t, which is an unsigned type. -1 is of signed type, and it is implicitly converted to the unsigned type, by adding SIZE_MAX + 1, before the comparison.
if(sizeof(int) > -1)
The reason is that sizeof yields an unsigned value, so the -1 is converted to unsigned before the comparison.
The Standard says:
if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
Note that if the second operand has a greater rank, the result is different. My compiler gives true for long long:
if (sizeof(int) > -1LL)
The sizeof operator returns a value of some unsigned integer type that has the typedef name size_t; for example, it can be unsigned long. In practice the rank of size_t is not less than the rank of int.
According to the rules of the usual arithmetic conversions (the C Standard, 6.3.1.8 Usual arithmetic conversions):
Otherwise, if the operand that has unsigned integer type has rank
greater or equal to the rank of the type of the other operand, then
the operand with signed integer type is converted to the type of the
operand with unsigned integer type.
So in this expression of the if statement
if (sizeof(int) > -1)
the integer constant -1, which has type int, is converted to type size_t and gets the value SIZE_MAX.
SIZE_MAX is greater than 4 (or whatever else the sizeof operator returns for sizeof(int)).
Thus the above statement may be rewritten as
if (sizeof(int) > SIZE_MAX)
and it yields false.
Take into account that you could prevent the conversion of the integer constant if its rank were greater than the rank of size_t.
For example, try the following if statement
if (sizeof(int) > -1ll)
In this case, if size_t is not defined as unsigned long long, the result of evaluating the expression will be true, as you expected.
Here is a demonstrative program
#include <stdio.h>

int main(void)
{
    if (sizeof(int) > -1)
    {
        puts("True");
    }
    else
    {
        puts("False");
    }

    if (sizeof(int) > -1ll)
    {
        puts("True");
    }
    else
    {
        puts("False");
    }

    return 0;
}
Its output is
False
True
Comparing an unsigned integer to a signed integer converts the signed value to unsigned, producing a huge value (not garbage, but SIZE_MAX here), which happens to be larger than the size of an int.
Now if you were to write if ((int)sizeof(int) > -1), that would convert the size of int to a signed integer and produce the expected result when compared to -1.

C cast and char signedness

So lately, I read about an issue regarding the three distinct char types in C: char / unsigned char / signed char. The problem is not something I have experienced up till now; my program works correctly on all tested computers and only targets little-endian (basically all modern desktops and servers using Windows/Linux, right?).

I frequently reuse a char array I defined for holding a "string" (not a real string, of course) as temporary variables; e.g., instead of adding another char to the stack I just reuse one of its members, like array[0]. However, I based this tactic on the assumption that a char would always be signed, until I read today that it actually depends on the implementation. What will happen if I now have a char and assign a negative value to it?
char unknownsignedness = -1;
If I wrote
unsigned char A = -1;
I think that the C-style cast will simply reinterpret the bits, so the value that A represents as an unsigned type becomes different. Am I right that these C-style casts are simply reinterpretations of bits? I am referring to signed <-> unsigned conversions.
So if an implementation has char as unsigned, would my program stop working as intended? Take the last variable, if I now do
if (A == -1)
I am now comparing an unsigned char to a signed value, so will this simply compare the bits without caring about signedness, or will it return false because obviously A cannot be -1? I am confused about what happens in this case. This is also my greatest concern, as I use chars like this frequently.
The following code prints No:
#include <stdio.h>

int main()
{
    unsigned char a;
    a = -1;
    if (a == -1)
        printf("Yes\n");
    else
        printf("No\n");
    return 0;
}
The code a = -1 assigns the value UCHAR_MAX to a; on most machines, that will be 255. The test a == -1 compares an unsigned char to an int, so the usual promotion rules apply; hence, it is interpreted as
(int)a == -1
Since a is 255, (int)a is still 255, and the test yields false.
unsigned char a = -1;
ISO/IEC 9899:1999 says in 6.3.1.3/2:
if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type
We add (UCHAR_MAX+1) to -1 once, and the result is UCHAR_MAX, which is obviously in range for unsigned char.
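So the assignment itself is well defined. A small sketch showing the reduction modulo UCHAR_MAX + 1 (the value 255 assumes the usual 8-bit char):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char a = -1;         // -1 + (UCHAR_MAX + 1)
    printf("a = %d\n", a);        // 255 with 8-bit char
    printf("UCHAR_MAX = %d\n", UCHAR_MAX);
    return 0;
}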
if (a == -1)
There's a long passage in 6.3.1.8/1:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
The rank of unsigned char is less than that of int.
If int can represent all the values that unsigned char can (which is usually the case), then both operands are converted to int, and the comparison returns false.
If int cannot represent all values in unsigned char, which can happen on rare machines with sizeof(int)==sizeof(char), then both are converted to unsigned int, -1 gets converted to UINT_MAX which happens to be the same as UCHAR_MAX, and the comparison returns true.
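If you want the comparison to hold regardless of these conversions, push the constant through the same conversion as the variable; a sketch:

#include <stdio.h>

int main(void)
{
    unsigned char a = -1;

    // (unsigned char)-1 equals UCHAR_MAX, so both sides promote to the same int value
    if (a == (unsigned char)-1)
        printf("Yes\n");    // always printed
    return 0;
}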
unsigned char A = -1;
results in 255. There is no reinterpretation upon assignment or initialization. A -1 is just a bunch of 1 bits in two's complement notation and 8 of them are copied verbatim.
Comparisons are a bit different, as the literal -1 is of int type.
if (A == -1)
will do a promotion (implicit conversion) of A to int before the comparison, so you end up comparing 255 with -1. Not equal.
And yes, you have to be careful with plain char.
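If the code depends on plain char being signed, you can at least detect the platform's choice through <limits.h>; a sketch:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    // CHAR_MIN is negative when plain char is signed, 0 when it is unsigned
    if (CHAR_MIN < 0)
        printf("plain char is signed here\n");
    else
        printf("plain char is unsigned here\n");
    return 0;
}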
I think this question is best answered by a quick example (warning: C++, but see explanation for my reasoning):
#include <cstdio>

int main()
{
    char c = -1;
    unsigned char u = -1;
    signed char s = -1;

    if (c == u)
        printf("c == u\n");
    if (s == u)
        printf("s == u\n");
    if (s == c)
        printf("s == c\n");
    if (static_cast<unsigned char>(s) == u)
        printf("(unsigned char)s == u\n");
    if (c == static_cast<char>(u))
        printf("c == (char)u\n");
    return 0;
}
The output:
s == c
(unsigned char)s == u
c == (char)u
C treats the values differently when used as-is, but you are correct that casting will just reinterpret the bits. I used a C++ static_cast here to show that the compiler is okay with doing this cast. In C, you would just cast by prefixing the type in parentheses. There is no compiler check to ensure that the cast is safe in C.

Comparison operation on unsigned and signed integers

See this code snippet
#include <stdio.h>

int main()
{
    unsigned int a = 1000;
    int b = -1;
    if (a > b)
        printf("A is BIG! %d\n", a - b);
    else
        printf("a is SMALL! %d\n", a - b);
    return 0;
}
This gives the output: a is SMALL! 1001
I don't understand what's happening here. How does the > operator work here? Why is a smaller than b? And if it is indeed smaller, why do I get a positive number (1001) as the difference?
Binary operations between different integral types are performed within a "common" type defined by the so-called usual arithmetic conversions (see the language specification, 6.3.1.8). In your case the "common" type is unsigned int. This means that the int operand (your b) will get converted to unsigned int before the comparison, as well as for the purpose of performing the subtraction.
When -1 is converted to unsigned int the result is the maximal possible unsigned int value (same as UINT_MAX). Needless to say, it is going to be greater than your unsigned 1000 value, meaning that a > b is indeed false and a is indeed small compared to (unsigned) b. The if in your code should resolve to else branch, which is what you observed in your experiment.
The same conversion rules apply to subtraction. Your a-b is really interpreted as a - (unsigned) b and the result has type unsigned int. Such value cannot be printed with %d format specifier, since %d only works with signed values. Your attempt to print it with %d results in undefined behavior, so the value that you see printed (even though it has a logical deterministic explanation in practice) is completely meaningless from the point of view of C language.
Edit: Actually, I could be wrong about the undefined behavior part. According to C language specification, the common part of the range of the corresponding signed and unsigned integer type shall have identical representation (implying, according to the footnote 31, "interchangeability as arguments to functions"). So, the result of a - b expression is unsigned 1001 as described above, and unless I'm missing something, it is legal to print this specific unsigned value with %d specifier, since it falls within the positive range of int. Printing (unsigned) INT_MAX + 1 with %d would be undefined, but 1001u is fine.
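To print the result without relying on that subtlety, match the format specifier to the actual type, or do the arithmetic in a wider signed type first. A sketch, assuming long long can represent every unsigned int value (true on typical platforms with 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    // a - b is computed in unsigned arithmetic and wraps to 1001
    printf("a - b = %u\n", a - b);

    // or widen first so the subtraction is done in signed arithmetic
    printf("(long long)a - b = %lld\n", (long long)a - b);
    return 0;
}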
On a typical implementation where int is 32-bit, -1 when converted to an unsigned int is 4,294,967,295 which is indeed ≥ 1000.
Even if you do the subtraction in the unsigned world, 1000 - 4,294,967,295 = -4,294,966,295, which wraps around modulo 2^32 to 1,001, which is what you get.
That's why gcc will spit a warning when you compare unsigned with signed. (If you don't see a warning, pass the -Wsign-compare flag.)
You are doing an unsigned comparison, i.e. comparing 1000 to 2^32 - 1.
The output is signed because of %d in printf.
N.B. sometimes the behavior when you mix signed and unsigned operands is compiler-specific. I think it's best to avoid mixing them and to use casts when in doubt.
#include <stdio.h>

int main()
{
    int a = 1000;
    signed int b = -1, c = -2;

    printf("%d\n", (unsigned int)b);
    printf("%d\n", (unsigned int)c);
    printf("%d\n", (unsigned int)a);

    if (1000 > -1)
        printf("\ntrue");
    else
        printf("\nfalse");

    return 0;
}
This has nothing to do with operator precedence; it is the usual arithmetic conversions at work.
In
if (1000 > -1)
both operands have type int, so the comparison is done in signed arithmetic and is true.
But as soon as one operand is unsigned (as with the casts above, or with a sizeof result), the -1 is converted to the unsigned type and changes into a very big number.
An easy way to compare, which may be useful when you cannot get rid of the unsigned declaration (for example, [NSArray count]), is to force the unsigned int to an int.
Please correct me if I am wrong.
if (((int)a) > b) {
....
}
The hardware is designed to compare signed with signed and unsigned with unsigned.
If you want the arithmetic result, convert the unsigned value to a larger signed type first. Otherwise the compiler will assume that the comparison is really between unsigned values.
And -1 is represented as 1111...1111, so when interpreted as unsigned it is a very big quantity, the biggest possible.
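A sketch of the convert-to-a-wider-signed-type approach, assuming long long is wide enough to hold every unsigned int value (true on typical platforms with 32-bit int and 64-bit long long):

#include <stdio.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    // widen both sides to a signed type that can represent them
    if ((long long)a > (long long)b)
        printf("A is BIG!\n");   // now taken, as intended
    else
        printf("a is SMALL!\n");
    return 0;
}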
While comparing a > b, where a is of unsigned int type and b is of int type, b is converted to unsigned int, so the signed value -1 becomes the maximum unsigned value (range: 0 to 2^32 - 1).
Thus a > b, i.e. 1000 > 4294967295, is false. Hence the else branch printf("a is SMALL! %d\n", a-b); is executed.
