This question already has answers here:
Comparison operation on unsigned and signed integers
(7 answers)
In a C expression where unsigned int and signed int are present, which type will be promoted to what type?
(2 answers)
Closed 7 years ago.
This is the code,
#include <stdio.h>

int main()
{
    unsigned int i = 0xFFFFFFFF;
    if (i == -1)
        printf("signed variable\n");
    else
        printf("unsigned variable\n");
    return 0;
}
This is the output,
signed variable
Why is i's value -1 even though it is declared as unsigned?
Is it something related to implicit type conversions?
This is the build environment,
Ubuntu 14.04, GCC 4.8.2
The == operator causes its operands to be converted to a common type according to C's usual arithmetic conversions. Converting -1 to unsigned int yields UINT_MAX, which on your platform is 0xFFFFFFFF.
i's value is 0xFFFFFFFF, which is exactly the same as -1, at least when the latter is converted to an unsigned integer. And this is exactly what happens with the comparison operators:
If both of the operands have arithmetic type, the usual arithmetic conversions are performed. [...]
[N1570 §6.5.9/4]
In two's complement, -1 is "all bits set", which is exactly the bit pattern 0xFFFFFFFF for a 4-byte unsigned int.
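Here is a minimal sketch that makes the conversion visible (it assumes a 32-bit unsigned int, as in the GCC/Ubuntu environment above):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int i = 0xFFFFFFFF;

    /* The cast mirrors what the usual arithmetic conversions do to -1
       in the comparison i == -1. */
    unsigned int converted = (unsigned int)-1;

    printf("i         = %u\n", i);          /* 4294967295 with a 32-bit unsigned int */
    printf("converted = %u\n", converted);  /* same value */
    printf("UINT_MAX  = %u\n", UINT_MAX);   /* same value */
    return 0;
}
All three lines print the same number, which is why the comparison is true.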
Related
This question already has answers here:
Comparison operation on unsigned and signed integers
(7 answers)
Closed 1 year ago.
#include <stdio.h>

int main() {
    unsigned int i = 23;
    signed char c = -23;
    if (i < c)
        puts("TRUE");
    return 0;
}
Why is the output of this program TRUE, even though I have used a signed char, which can store values from -128 to 127?
Your i < c comparison has operands of two different types, so the operand with the lower rank is converted to the type of the other. In this case, -23 cannot be represented as an unsigned int, so, following the "usual arithmetic conversions," it ends up with a value considerably greater than +23 (actually, UINT_MAX + 1 - 23), according to Paragraph #2 in the following excerpt from this (Draft) C11 Standard:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
You can see this conversion in action by assigning c to a separate unsigned int and printing its value, as in the following code. Assuming a 32-bit unsigned int, you will likely see a value of 4294967273 (which is, indeed, greater than 23).
#include <stdio.h>

int main()
{
    unsigned int i = 23;
    signed char c = -23;
    unsigned int uc = c;    // Add a diagnostic line ...
    printf("%u\n", uc);     // ... and show the value being used in the comparison
    if (i < c) {
        printf("TRUE");
    }
    return 0;
}
Note: the relative size of the two variables (i and c) here is something of a red herring; even if you declare c as signed int c = -23;, you will still get the same result, because of the Usual Arithmetic Conversions (link courtesy of Jonathan Leffler's comment): a signed int cannot represent all possible values of an unsigned int, so the very last of the 'hollow' bullets in that rule comes into play.
To make the comparison, the smaller variable (char, 8 bits) is converted to the type of the larger one (unsigned int, 32 bits). In doing so, -23 becomes 4294967273, which is much greater than 23.
If you compare c with a signed int instead, no conversion to unsigned takes place and the test behaves as expected.
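As a minimal sketch of that last point (the cast is my own illustration, not part of the question): converting i to int before the comparison keeps both operands signed, so -23 really is less than 23:
#include <stdio.h>

int main(void)
{
    unsigned int i = 23;
    signed char c = -23;

    /* Both operands are signed here: c is promoted to int, i is cast to int,
       so the comparison is 23 < -23, which is false. */
    if ((int)i < c)
        puts("TRUE");
    else
        puts("FALSE");   /* this branch is taken */
    return 0;
}
The cast is safe here because 23 fits in an int; in general, casting away unsignedness needs the same care as the original problem.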
This question already has answers here:
My computer thinks that signed int is smaller then -1? [duplicate]
(3 answers)
sizeof() operator in if-statement
(5 answers)
Closed 5 years ago.
Why does it print 2 when the value of SIZE is greater than -1?
Link to the code: http://ideone.com/VCdrKy
#include <stdio.h>

int array[] = {1,2,3,4,5,6,7,8};
#define SIZE (sizeof(array)/sizeof(int))

int main(void) {
    if (-1 <= SIZE) printf("1");
    else printf("2");
    return 0;
}
The two operands have different types.
The operands are converted to a 'common' type, and the common type of the signed -1 and the unsigned SIZE is unsigned.
So -1 is converted to 0xFFFFFFFF (the exact value depends on the architecture), which is greater than SIZE.
From the C standard on integer conversions:
If the operand that has unsigned integer type has rank greater than
or equal to the rank of the type of the other operand, the operand
with signed integer type is converted to the type of the operand with
unsigned integer type.
Here -1 is of type int, and SIZE is of type size_t. On your C compiler, size_t is unsigned and has rank greater than or equal to that of int, so the -1 is converted to size_t, which gives a large positive number (SIZE_MAX).
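A minimal sketch of one common workaround (the cast is my illustration, not part of the question): converting SIZE to a signed type before the comparison keeps -1 as -1, so the test behaves as intended:
#include <stdio.h>

int array[] = {1,2,3,4,5,6,7,8};
#define SIZE (sizeof(array)/sizeof(int))

int main(void) {
    /* Casting SIZE to a signed type keeps the comparison signed: -1 <= 8 is true. */
    if (-1 <= (long long)SIZE) printf("1");
    else printf("2");
    return 0;
}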
This question already has answers here:
unsigned int and signed char comparison
(4 answers)
Closed 6 years ago.
I came across this question.
#include <stdio.h>

int main(void) {
    // your code goes here
    unsigned int i = 23;
    signed char c = -23;
    if (i > c)
        printf("yes");
    else
        printf("no");
    return 0;
}
I am unable to understand why the output of this code is no.
Can someone help me understand how the comparison operator works when an int and a char are compared in C?
You are comparing an unsigned int to a signed char. The semantics of this kind of comparison are counter-intuitive: most binary operations involving signed and unsigned operands are performed on unsigned operands, after converting the signed value to unsigned (if both operands have the same size after promotion). Here are the steps:
The signed char value is promoted to an int with the same value -23.
The comparison is then performed on an int and an unsigned int; the common type is unsigned int, as defined in the C Standard.
The int is converted to an unsigned int with value UINT_MAX + 1 - 23 (that is, UINT_MAX - 22), a very large number.
The comparison is performed on the unsigned int values: 23 is the smaller value, so the comparison evaluates to false.
The else branch is evaluated, no is printed.
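A small sketch that makes these steps visible (the intermediate variables promoted and converted are my own additions for illustration):
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int i = 23;
    signed char c = -23;

    int promoted = c;                  /* signed char promoted to int, still -23 */
    unsigned int converted = promoted; /* int converted to unsigned int: UINT_MAX - 22 */

    printf("promoted  = %d\n", promoted);
    printf("converted = %u\n", converted);
    printf("UINT_MAX  = %u\n", UINT_MAX);

    if (i > c)   /* effectively 23 > converted, which is false */
        printf("yes\n");
    else
        printf("no\n");
    return 0;
}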
To make matters worse, if c had been defined as a long, the result would have depended on whether long and int have the same size or not. On Windows, it would print no, whereas on 64-bit Linux, it would print yes.
Never mix signed and unsigned values in comparisons. Enable compiler warnings to prevent this kind of mistake (-Wall or -Weverything). You might as well make all these warnings fatal with -Werror to avoid this kind of ill-fated code entirely.
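As a rough illustration of that advice (the file name compare.c is hypothetical, and the exact wording of the diagnostic varies between compiler versions): with GCC, -Wsign-compare is enabled by -Wextra for C, so compiling with
gcc -Wall -Wextra compare.c
reports a warning along the lines of "comparison of integer expressions of different signedness" on the i > c line; adding -Werror turns it into a hard error.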
For a complete reference, read the following sections of the C Standard (C11) under 6.3 Conversions:
integer promotions are explained in 6.3.1.1 Boolean, characters, and integers.
operand conversions are detailed in 6.3.1.8 Usual arithmetic conversions.
You can download the latest draft of the C11 Standard from the working group website: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf
This question already has answers here:
Why should be there the involvement of type promotion in this code?
(1 answer)
Comparison operation on unsigned and signed integers
(7 answers)
Closed 7 years ago.
Can you justify the output of the code below?
#include <stdio.h>

int main()
{
    if (sizeof(int) > -1)
    {
        printf("\nTrue\n");
    }
    else
    {
        printf("\nFALSE\n");
    }
}
The output is FALSE. Can you suggest the reason?
sizeof(int) has type size_t, which is an unsigned integer type.
-1 has type int, which is a signed integer type.
When comparing a signed integer with an unsigned integer, first the signed integer is converted to unsigned, then the comparison is performed with two unsigned integers.
sizeof(int) > (size_t)-1 is false, because (size_t)-1 is a very large number (equal to SIZE_MAX, the largest value that fits in a size_t).
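A small sketch that shows the values involved (nothing here beyond standard C; %zu is the printf conversion for size_t):
#include <stdio.h>

int main(void)
{
    /* -1 converted to the unsigned type of sizeof wraps around to SIZE_MAX. */
    printf("sizeof(int) = %zu\n", sizeof(int));
    printf("(size_t)-1  = %zu\n", (size_t)-1);

    if (sizeof(int) > -1)   /* compares sizeof(int) against SIZE_MAX */
        printf("True\n");
    else
        printf("FALSE\n");  /* this branch is taken */
    return 0;
}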
The sizeof operator yields a value of an unsigned type (namely size_t).
In the expression sizeof(int) > -1, the usual arithmetic conversions apply and -1 is converted to the unsigned type of the sizeof result, which produces a huge unsigned value, far greater than sizeof(int).
It's because the sizeof operator returns an unsigned integer type. When compared with a signed integer type, the signed integer is converted to unsigned. So in effect, you were comparing sizeof(int) against the largest possible unsigned integer.
You can force the result to a signed type by casting:
#include <stdio.h>

int main()
{
    if ((int)sizeof(int) > -1)
    {
        printf("\nTrue\n");
    }
    else
    {
        printf("\nFALSE\n");
    }
}
This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
A riddle (in C)
1.
#include <stdio.h>

int main(void)
{
    if (-1 < (unsigned char)1)
        printf("-1 is less than (unsigned char)1:ANSI semantics");
    else
        printf("-1 NOT less than (unsigned char)1:K&R semantics");
    return 0;
}
2.
int array[] = {23,41,12,24,52,11};
#define TOTAL_ELEMENTS (sizeof(array)/sizeof(array[0]))

int main(void)
{
    int d = -1, x;
    if (d <= TOTAL_ELEMENTS - 2)
        x = array[d+1];
    return 0;
}
The first program converts the unsigned char 1 to a signed int in ANSI C, while the second converts d to an unsigned type, which makes the condition expression false in ANSI C.
Why did they behave differently?
For the first one the right-hand side is an unsigned char, and all unsigned char values fit into a signed int, so it is converted to signed int.
For the second one the right-hand side has an unsigned type (size_t), so the left-hand side is converted from signed int to that unsigned type.
See also this CERT document on integer conversions.
starblue explained the first part of your question. I'll take the second part. Because TOTAL_ELEMENTS is a size_t, which is unsigned, the int is converted to that unsigned type. On your platform, int cannot represent all values of size_t, so the int is converted to size_t rather than the size_t to int.
Conversion of negative numbers to unsigned is perfectly defined: the value wraps around. If you convert -1 to an unsigned int, it ends up at UINT_MAX. That is true whether or not you use two's complement to represent negative numbers.
The Rationale for C document has more information about that value-preserving conversion.
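A hedged sketch of the usual fix for the second riddle (the cast is my own illustration, not from the original answers): convert TOTAL_ELEMENTS to a signed type so that d is not converted to size_t:
#include <stdio.h>

int array[] = {23,41,12,24,52,11};
#define TOTAL_ELEMENTS (sizeof(array)/sizeof(array[0]))

int main(void)
{
    int d = -1;
    /* The cast keeps the comparison signed: -1 <= 4 is true. */
    if (d <= (int)TOTAL_ELEMENTS - 2)
        printf("array[%d] = %d\n", d + 1, array[d + 1]);
    return 0;
}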
Here's the way I remember how automatic conversions are applied:
if the operand sizes are different, the conversion is applied to the smaller operand to make it the same type as the larger operand (with sign extension if the smaller operand is signed)
if the operands are the same size, but one is signed and the other unsigned, then the signed operand is converted to unsigned
While the above may not be true for all implementations, I believe it is correct for all two's-complement implementations; both cases are sketched below.
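Here is a minimal sketch of both rules (my own illustration; it assumes a typical platform where short is narrower than long and int has the same width as unsigned int):
#include <stdio.h>

int main(void)
{
    short s = -1;
    long  l = 0;
    /* Different sizes: s is promoted and then converted to long with
       sign extension, so the comparison is -1 < 0, which is true. */
    if (s < l)
        printf("different sizes: still a signed comparison, -1 < 0\n");

    int      i = -1;
    unsigned u = 0;
    /* Same size, mixed signedness: i is converted to unsigned,
       so the comparison becomes UINT_MAX < 0, which is false. */
    if (i < u)
        printf("same size: never printed\n");
    else
        printf("same size: unsigned comparison, UINT_MAX < 0 is false\n");
    return 0;
}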