I'm trying to understand why the following code doesn't issue a warning at the indicated place.
//from limits.h
#define UINT_MAX 0xffffffff /* maximum unsigned int value */
#define INT_MAX 2147483647 /* maximum (signed) int value */
/* = 0x7fffffff */
int a = INT_MAX;
//_int64 a = INT_MAX; // makes all warnings go away
unsigned int b = UINT_MAX;
bool c = false;
if(a < b) // warning C4018: '<' : signed/unsigned mismatch
    c = true;
if(a > b) // warning C4018: '>' : signed/unsigned mismatch
    c = true;
if(a <= b) // warning C4018: '<=' : signed/unsigned mismatch
    c = true;
if(a >= b) // warning C4018: '>=' : signed/unsigned mismatch
    c = true;
if(a == b) // no warning <--- warning expected here
    c = true;
if(((unsigned int)a) == b) // no warning (as expected)
    c = true;
if(a == ((int)b)) // no warning (as expected)
    c = true;
I thought it was to do with background promotion, but the last two seem to say otherwise.
To my mind, the first == comparison is just as much a signed/unsigned mismatch as the others?
When comparing signed with unsigned, the compiler converts the signed value to unsigned. For equality this doesn't matter: -1 == (unsigned) -1. For other comparisons it does matter, e.g. the following is true: -1 > 2U.
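For instance, both of these comparisons print 1 (a minimal illustration; the exact values are arbitrary):

#include <iostream>

int main() {
    // In both expressions the int operand is converted to unsigned int.
    std::cout << (-1 == static_cast<unsigned int>(-1)) << '\n'; // 1: equality survives the conversion
    std::cout << (-1 > 2U) << '\n';                             // 1: -1 becomes UINT_MAX, so the ordering flips
}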
EDIT: References:
5/9: (Expressions)
Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:
If either operand is of type long double, the other shall be converted to long double.
Otherwise, if either operand is double, the other shall be converted to double.
Otherwise, if either operand is float, the other shall be converted to float.
Otherwise, the integral promotions (4.5) shall be performed on both operands.
Then, if either operand is unsigned long the other shall be converted to unsigned long.
Otherwise, if one operand is a long int and the other unsigned int, then if a long int can represent all the values of an unsigned int, the unsigned int shall be converted to a long int; otherwise both operands shall be converted to unsigned long int.
Otherwise, if either operand is long, the other shall be converted to long.
Otherwise, if either operand is unsigned, the other shall be converted to unsigned.
4.7/2: (Integral conversions)
If the destination type is unsigned, the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n where n is the number of bits used to represent the unsigned type). [Note: In a two's complement representation, this conversion is conceptual and there is no change in the bit pattern (if there is no truncation).]
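Applying those rules to the question's operands (int vs. unsigned int): no floating-point types are involved, both operands already have at least the rank of int, so the final rule applies and the signed operand is converted to unsigned int. In other words, every comparison above is effectively evaluated like this (a sketch of what the compiler does, not new behaviour):

#include <climits>

int main() {
    int a = INT_MAX;            // 0x7fffffff
    unsigned int b = UINT_MAX;  // 0xffffffff

    // What the usual arithmetic conversions turn `a == b` into:
    bool eq = (static_cast<unsigned int>(a) == b);  // false; converting a does not change its value here
    // ...and what they turn `a < b` into:
    bool lt = (static_cast<unsigned int>(a) < b);   // true, but a negative `a` would also compare as "large"
    (void)eq; (void)lt;
    return 0;
}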
EDIT2: MSVC warning levels
What is warned about on the different warning levels of MSVC is, of course, a matter of choices made by the developers. As I see it, their choices in relation to signed/unsigned equality vs. greater/less comparisons make sense; this is entirely subjective of course:
-1 == -1 means the same as -1 == (unsigned) -1 - I find that an intuitive result.
-1 < 2 does not mean the same as -1 < (unsigned) 2 - This is less intuitive at first glance, and IMO deserves an "earlier" warning.
Why signed/unsigned warnings are important and why programmers must pay heed to them is demonstrated by the following example.
Guess the output of this code?
#include <iostream>
int main() {
    int i = -1;
    unsigned int j = 1;
    if ( i < j )
        std::cout << " i is less than j";
    else
        std::cout << " i is greater than j";
    return 0;
}
Output:
i is greater than j
Surprised? Online Demo : http://www.ideone.com/5iCxY
Bottom line: in a comparison, if one operand is unsigned, then the other operand is implicitly converted to unsigned if its type is signed!
The == operator just does a bitwise comparison (for example by XORing the two operands and checking whether the result is 0).
The smaller/bigger-than comparisons rely much more on the sign of the number.
A 4-bit example:
1111 = 15? or -1?
so if you have 1111 < 0001 ... it's ambiguous...
but if you have 1111 == 1111 ... it's the same thing although you didn't mean it to be.
In a system that represents values using two's complement (most modern processors) they are equal even in their binary form. This may be why the compiler doesn't complain about a == b.
And to me it's strange that the compiler doesn't warn you about a == ((int)b). I think it should give you an integer truncation warning or something.
Starting from C++20 we have special functions for correctly comparing signed and unsigned values:
https://en.cppreference.com/w/cpp/utility/intcmp
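A minimal sketch of what those utilities buy you (requires C++20 and the <utility> header):

#include <utility>
#include <iostream>

int main() {
    int a = -1;
    unsigned int b = 1u;

    std::cout << (a < b) << '\n';              // 0: a is converted to unsigned and becomes UINT_MAX
    std::cout << std::cmp_less(a, b) << '\n';  // 1: compares the mathematical values -1 and 1
    std::cout << std::cmp_equal(a, b) << '\n'; // 0: -1 and 1 really are different
}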
The line of code in question does not generate a C4018 warning because Microsoft have used a different warning number (i.e. C4389) to handle that case, and C4389 is not enabled by default (i.e. at level 3).
From the Microsoft docs for C4389:
// C4389.cpp
// compile with: /W4
#pragma warning(default: 4389)
int main()
{
    int a = 9;
    unsigned int b = 10;
    if (a == b) // C4389
        return 0;
    else
        return 0;
}
The other answers have explained quite well why Microsoft might have decided to make a special case out of the equality operator, but I find those answers are not super helpful without mentioning C4389 or how to enable it in Visual Studio.
I should also mention that if you are going to enable C4389, you might also consider enabling C4388. Unfortunately there is no official documentation for C4388 but it seems to pop up in expressions like the following:
int a = 9;
unsigned int b = 10;
bool equal = (a == b); // C4388
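If you want these two diagnostics to show up at the default /W3 level rather than only under /W4, one approach (a sketch; adapt it to your own build) is to raise their level with a pragma in a shared header, or with the equivalent /w34389 and /w34388 compiler switches:

// Promote the signed/unsigned '==' warnings to level 3 so they fire under /W3.
// C4388 is undocumented, but the pragma mechanism accepts any warning number.
#pragma warning(3: 4389)
#pragma warning(3: 4388)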
While all of the assertions below hold true on my system, I am obviously relying on several undefined and/or implementation-specific behaviors, some of which are apparently not actual overflow.
See this comment for reference (it is the reason why I am asking this question):
num = num + 1 does not cause an overflow. num is automatically promoted to int, and then the addition is performed in int, which yields 128 without overflow. Then the assignment performs a conversion to char.
This is not an overflow but, per C 2018 6.3.1.3, produces an implementation-defined result or signal. This differs from overflow because the C standard does not specify the behavior upon overflow at all, but, in this code, it specifies that the implementation must define the behavior. - Eric Postpischil
I have put in comments what I believe to be the actual behavior.
Because I have relied on misconceptions, I prefer not to assume anything.
#include <limits.h>
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
int main(void)
{
    signed char sc = CHAR_MAX;
    unsigned char uc = UCHAR_MAX;
    signed short ss = SHRT_MAX;
    unsigned short us = USHRT_MAX;
    signed int si = INT_MAX;
    unsigned int ui = UINT_MAX;
    signed long sl = LONG_MAX;
    unsigned long ul = ULONG_MAX;
    size_t zu = SIZE_MAX;

    ++sc;
    ++uc;
    ++ss;
    ++us;
    ++si;
    ++ui;
    ++sl;
    ++ul;
    ++zu;

    assert(sc == CHAR_MIN); //integer promotion, implementation specific ?
    assert(uc == 0);        //integer promotion, implementation specific ?
    assert(ss == SHRT_MIN); //integer promotion, implementation specific ?
    assert(us == 0);        //integer promotion, implementation specific ?
    assert(si == INT_MIN);  //overflow & undefined
    assert(ui == 0);        //wrap around: Guaranteed
    assert(sl == LONG_MIN); //overflow & undefined ?
    assert(ul == 0);        //wrap around: Guaranteed ?
    assert(zu == 0);        //wrap around: Guaranteed ?
    return (0);
}
All citations below are from C 2018, official version.
Signed Integers Narrower Than int, Binary +
Let us discuss this case first since it is the one that prompted this question. Consider this code, which does not appear in the question:
signed char sc = SCHAR_MAX;
sc = sc + 1;
assert(sc == SCHAR_MIN);
6.5.6 discusses the binary + operator. Paragraph 4 says the usual arithmetic conversions are performed on them. This results in the sc in sc + 1 being converted to int (see footnote 1), and 1 is already int. So sc + 1 yields one more than SCHAR_MAX (commonly 127 + 1 = 128), and there is no overflow or representation problem in the addition.
Then we must perform the assignment, which is discussed in 6.5.16.1. Paragraph 2 says “… the value of the right operand is converted to the type of the assignment expression and replaces the value stored in the object designated by the left operand.” So we must convert this value greater than SCHAR_MAX to signed char, and it clearly cannot be represented in signed char.
6.3.1.3 tells us about the conversions of integers. Regarding this situation, it says “… Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.”
Thus, we have an implementation-defined result or signal. This differs from overflow, which is what happens when, during evaluation of an expression, the result is not representable. 6.5 5 says “If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.” For example, if we evaluate INT_MAX + 1, then both INT_MAX and 1 have type int, so the operation is performed with type int, but the mathematical result is not representable in int, so this is an exceptional condition, and the behavior is not defined by the C standard. In contrast, during the conversion, the behavior is partially defined by the standard: The standard requires the implementation to define the behavior, and it must either produce a result it defines or define a signal.
In many implementations, the assertion will evaluate to true. See the “Signed Integers Not Narrower Than int” section below for further discussion.
Signed Integers Narrower Than int, Prefix ++
Next, consider this case, extracted from the question, except that I changed CHAR_MAX and CHAR_MIN to SCHAR_MAX and SCHAR_MIN to match the signed char type:
signed char sc = SCHAR_MAX;
++sc;
assert(sc == SCHAR_MIN);
We have unary ++ instead of binary +. 6.5.3.1 2 says “The value of the operand of the prefix ++ is incremented…” This clause does not explicitly say the usual arithmetic conversions or integer promotions are performed, but it does say, also in paragraph 2, “See the discussions of additive operators and compound assignment for information on constraints, types, side effects, and conversions and the effects of operations on pointers.” That tells us it behaves like sc = sc + 1;, and the above section about binary + applies to prefix ++, so the behavior is the same.
Unsigned Integers Narrower Than int, Binary +
Consider this code modified to use binary + instead of prefix ++:
unsigned char uc = UCHAR_MAX;
uc = uc + 1;
assert(uc == 0);
As with signed char, the arithmetic is performed with int and then converted to the assignment destination type. This conversion is specified by 6.3.1.3: “Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.” Thus, from the mathematical result (UCHAR_MAX + 1), one more than the maximum (also UCHAR_MAX + 1) is subtracted until the value is in range. A single subtraction yields 0, which is in range, so the result is 0, and the assertion is true.
Unsigned Integers Narrower Than int, Prefix ++
Consider this code extracted from the question:
unsigned char uc = UCHAR_MAX;
++uc;
assert(uc == 0);
As with the earlier prefix ++ case, the arithmetic is the same as uc = uc + 1, discussed above.
Signed Integers Not Narrower Than int
In this code:
signed int si = INT_MAX;
++si;
assert(si == INT_MIN);
or this code:
signed int si = INT_MAX;
si = si + 1;
assert(si == INT_MIN);
the arithmetic is performed using int. In either case, the computation overflows, and the behavior is not defined by the C standard.
If we ponder what implementations will do, several possibilities are:
In a two’s complement implementation, the bit pattern resulting from adding 1 to INT_MAX overflows to the bit pattern for INT_MIN, and this is the value the implementation effectively uses.
In a one's complement implementation, the bit pattern resulting from adding 1 to INT_MAX overflows to the bit pattern for INT_MIN, although it is a different value than we are familiar with for INT_MIN (−2^31 + 1 instead of −2^31).
In a sign-and-magnitude implementation, the bit pattern resulting from adding 1 to INT_MAX overflows to the bit pattern for −0.
The hardware detects overflow, and a signal occurs.
The compiler detects the overflow and transforms the code in unexpected ways during optimization.
Unsigned Integers Not Narrower than int
These cases are unremarkable; the behavior is the same as for the narrower-than-int cases discussed above: The arithmetic wraps.
Footnote
1 Per discussion elsewhere in Stack Overflow, it may be theoretically possible for the char (and signed char) type to be as wide as an int. This strains the C standard regarding EOF and possibly other issues and was certainly not anticipated by the C committee. This answer disregards such esoteric C implementations and considers only implementations in which char is narrower than int.
assert(sc == CHAR_MIN); //integer promotion, implementation specific ?
Depends on implementation-defined conversion of CHAR_MAX+1 to char if char is signed; otherwise it's false because CHAR_MIN != SCHAR_MIN. And if CHAR_MAX==INT_MAX (possible, but not viable for meeting other requirements of a hosted implementation; see Can sizeof(int) ever be 1 on a hosted implementation?) then the original sc++ was UB.
assert(uc == 0); //integer promotion, implementation specific ?
Always true.
assert(ss == SHRT_MIN); //integer promotion, implementation specific ?
Same logic as sc case. Depends on implementation-defined conversion of SHRT_MAX+1 to short, or UB if SHRT_MAX==INT_MAX.
assert(us == 0); //integer promotion, implementation specific ?
Always true.
assert(si == INT_MIN); //overflow & undefined
UB.
assert(ui == 0); //wrap around: Guaranteed
Always true.
assert(sl == LONG_MIN); //overflow & undefined ?
UB.
assert(ul == 0); //wrap around: Guaranteed ?
Always true.
assert(zu == 0); //wrap around : Guaranteed ?
Always true.
“Overflow” assertions on various C data types:
True according to the C Standard
assert(uc == 0);
assert(us == 0);
assert(ui == 0);
assert(ul == 0);
assert(zu == 0);
I think you wanted to test signed char sc = SCHAR_MAX; ... assert(sc == SCHAR_MIN);
When the signed type has a narrower range than int:
"result is implementation-defined or an implementation-defined signal is raised" as part of the ++ re-assignment.
When the signed type is as wide or wider range than int:
UB due to signed integer overflow during a ++.
I found that these two implementations are NOT equivalent:

1. num = sign ? (int)va_arg(args, int) : (unsigned int)va_arg(args, unsigned int);

2. if (sign)
       num = (int)va_arg(args, int);
   else
       num = (unsigned int)va_arg(args, unsigned int);
With the 1st implementation, it always chooses the false branch, no matter what value sign has.
The 2nd one works as expected.
What happens here? I'm using GCC/ARM GCC, 64-bit.
I would guess that the problem you are running into is the subtle, implicit promotion that takes place in the ?: operator. The 2nd and 3rd operands are balanced against each other through the usual arithmetic conversions. This is mandated by C11 6.5.15:
If both the second and third operands have arithmetic type, the result
type that would be determined by the usual arithmetic conversions,
were they applied to those two operands, is the type of the result.
Meaning if one is signed and the other is unsigned, the signed operand gets converted to unsigned. This happens regardless of which one of the 2nd or 3rd operands that gets evaluated and used as result.
This can cause curious bugs if you aren't aware of this oddity:
#include <stdio.h>
int main (void)
{
    int x;

    if( (-1 ? (printf("Expression evaluates to -1\n"),-1) : 0xFFFFFFFF) < 0)
    {
        printf("-1 is < 0");
    }
    else
    {
        printf("-1 is >= 0");
    }
}
Output:
Expression evaluates to -1
-1 is >= 0
This is the reason why if/else is to be preferred over ?:.
I have seen the following code in the book Computer Systems: A Programmer's Perspective, 2/E. This works well and creates the desired output. The output can be explained by the difference between signed and unsigned representations.
#include <stdio.h>

int main() {
    if (-1 < 0u) {
        printf("-1 < 0u\n");
    }
    else {
        printf("-1 >= 0u\n");
    }
    return 0;
}
The code above yields -1 >= 0u. However, the following code, which should be equivalent to the code above, does not! In other words,
#include <stdio.h>

int main() {
    unsigned short u = 0u;
    short x = -1;
    if (x < u)
        printf("-1 < 0u\n");
    else
        printf("-1 >= 0u\n");
    return 0;
}
yields -1 < 0u. Why did this happen? I cannot explain it.
Note that I have seen similar questions like this, but they do not help.
PS. As #Abhineet said, the dilemma can be solved by changing short to int. However, how can one explain this phenomenon? In other words, -1 in 4 bytes is 0xff ff ff ff and in 2 bytes is 0xff ff. Treating these as two's complement patterns interpreted as unsigned, they have the corresponding values 4294967295 and 65535. Neither is less than 0, so I think that in both cases the output should be -1 >= 0u, i.e. x >= u.
A sample output for it on a little-endian Intel system:

For short:
-1 < 0u
u = 00 00
x = ff ff

For int:
-1 >= 0u
u = 00 00 00 00
x = ff ff ff ff
The code above yields -1 >= 0u
All integer literals (numeric constants) have a type and therefore also a signedness. By default, they are of type int, which is signed. When you append the u suffix, you turn the literal into an unsigned int.
For any C expression where you have one operand which is signed and one which is unsigned, the rule of balancing (formally: the usual arithmetic conversions) implicitly converts the signed type to unsigned.
Conversion from signed to unsigned is well-defined (6.3.1.3):
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
For example, for 32 bit integers on a standard two's complement system, the max value of an unsigned integer is 2^32 - 1 (4294967295, UINT_MAX in limits.h). One more than the maximum value is 2^32. And -1 + 2^32 = 4294967295, so the literal -1 is converted to an unsigned int with the value 4294967295. Which is larger than 0.
When you switch types to short however, you end up with a small integer type. This is the difference between the two examples. Whenever a small integer type is part of an expression, the integer promotion rule implicitly converts it to a larger int (6.3.1.1):
If an int can represent all values of the original type (as restricted
by the width, for a bit-field), the value is converted to an int;
otherwise, it is converted to an unsigned int. These are called the
integer promotions. All other types are unchanged by the integer
promotions.
If short is smaller than int on the given platform (as is the case on 32 and 64 bit systems), any short or unsigned short will therefore always get converted to int, because they can fit inside one.
So for the expression if (x < u), you actually end up with if((int)x < (int)u) which behaves as expected (-1 is lesser than 0).
You're running into C's integer promotion rules.
Operators on types smaller than int automatically promote their operands to int or unsigned int. See comments for more detailed explanations. There is a further step for binary (two-operand) operators if the types still don't match after that (e.g. unsigned int vs. int). I won't try to summarize the rules in more detail than that. See Lundin's answer.
This blog post covers this in more detail, with a similar example to yours: signed and unsigned char. It quotes the C99 spec:
If an int can represent all values of the original type, the value is
converted to an int; otherwise, it is converted to an unsigned int.
These are called the integer promotions. All other types are unchanged
by the integer promotions.
You can play around with this more easily on something like godbolt, with a function that returns one or zero. Just look at the compiler output to see what ends up happening.
#define mytype short
int main() {
unsigned mytype u = 0u;
mytype x = -1;
return (x < u);
}
Contrary to what you seem to assume, this is not a property of the particular widths of the types, here 2 bytes versus 4 bytes, but a question of the rules that are to be applied. The integer promotion rules state that short and unsigned short are converted to int on all platforms where the corresponding range of values fits into int. Since this is the case here, both values are preserved and obtain the type int. -1 is perfectly representable in int, as is 0. So the test results in -1 being smaller than 0.
In the case of testing -1 against 0u, the common conversion chooses the unsigned type as the common type to which both are converted. -1 converted to unsigned is the value UINT_MAX, which is larger than 0u.
This is a good example of why you should never use "narrow" types to do arithmetic or comparisons. Only use them if you have a severe size constraint. This will rarely be the case for simple variables, but mostly for large arrays where you can really gain from storing in a narrow type.
0u is not unsigned short, it's unsigned int.
Edit: The explanation of the behavior.
How is the comparison performed?
As answered by Jens Gustedt,
This is called "usual arithmetic conversions" by the standard and
applies whenever two different integer types occur as operands of the
same operator.
In essence what it does:
- if the types have different width (more precisely, what the standard calls conversion rank), then it converts to the wider type;
- if both types are of the same width, besides really weird architectures, the unsigned one of them wins;
- signed-to-unsigned conversion of the value -1, with whatever type, always results in the highest representable value of the unsigned type.
A more detailed blog post written by him can be found here.
Please look at my test code:
#include <stdlib.h>
#include <stdio.h>
#define PRINT_COMPARE_RESULT(a, b) \
    if (a > b) { \
        printf( #a " > " #b "\n"); \
    } \
    else if (a < b) { \
        printf( #a " < " #b "\n"); \
    } \
    else { \
        printf( #a " = " #b "\n" ); \
    }

int main()
{
    signed int a = -1;
    unsigned int b = 2;
    signed short c = -1;
    unsigned short d = 2;

    PRINT_COMPARE_RESULT(a,b);
    PRINT_COMPARE_RESULT(c,d);
    return 0;
}
The result is the following:
a > b
c < d
My platform is Linux, and my gcc version is 4.4.2.
I am surprised by the second line of output.
The first line of output is caused by integer promotion. But why is the result of the second line different?
The following rules are from C99 standard:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
I think both of the two comparisons should belong to the same case, the second case of integer promotion.
When you use an arithmetic operator, the operands go through two conversions.
Integer promotions: If int can represent all values of the type, then the operand is promoted to int. This applies to both short and unsigned short on most platforms. The conversion performed at this stage is done on each operand individually, without regard for the other operand. (There are more rules, but this is the one that applies.)
Usual arithmetic conversions: If you compare an unsigned int against a signed int, since neither includes the entire range of the other, and both have the same rank, then both are converted to the unsigned type. This conversion is done after examining the type of both operands.
Obviously, the "usual arithmetic conversions" don't always apply, if there are not two operands. This is why there are two sets of rules. One gotcha, for example, is that shift operators << and >> don't do usual arithmetic conversions, since the type of the result should only depend on the left operand (so if you see someone type x << 5U, then the U stands for "unnecessary").
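A quick compile-time sketch of that last point (written in C++, which follows the same rule; needs C++17 for is_same_v):

#include <type_traits>

int main() {
    // No usual arithmetic conversions for shifts: the result type is that of
    // the promoted left operand, so the 5u on the right changes nothing.
    static_assert(std::is_same_v<decltype(1 << 5u), int>,
                  "the result of 1 << 5u is int, not unsigned");
    return 0;
}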
Breakdown: Let's assume a typical system with 32-bit int and 16-bit short.
int a = -1;          // "signed" is implied
unsigned b = 2;      // "int" is implied
if (a < b)
    puts("a < b");   // not printed
else
    puts("a >= b");  // printed
First the two operands are promoted. Since both are int or unsigned int, no promotions are done.
Next, the two operands are converted to the same type. Since int can't represent all possible values of unsigned, and unsigned can't represent all possible values of int, there is no obvious choice. In this case, both are converted to unsigned.
When converting from signed to unsigned, 2^32 is repeatedly added to the signed value until it is in the range of the unsigned value. This is actually a no-op as far as the processor is concerned.
So the comparison becomes if (4294967295u < 2u), which is false.
Now let's try it with short:
short c = -1;            // "signed" is implied
unsigned short d = 2;
if (c < d)
    puts("c < d");       // printed
else
    puts("c >= d");      // not printed
First, the two operands are promoted. Since both can be represented faithfully by int, both are promoted to int.
Next, they are converted to the same type. But they already are the same type, int, so nothing is done.
So the comparison becomes if (-1 < 2), which is true.
Writing good code: There's an easy way to catch these "gotchas" in your code. Just always compile with warnings turned on, and fix the warnings. I tend to write code like this:
int x = ...;
unsigned y = ...;
if (x < 0 || (unsigned) x < y)
...;
You have to watch out that any code you do write doesn't run into the other signed vs. unsigned gotcha: signed overflow. For example, the following code:
int x = ..., y = ...;
if (x + 100 < y + 100)
...;
unsigned a = ..., b = ...;
if (a + 100 < b + 100)
...;
Some popular compilers will optimize (x + 100 < y + 100) to (x < y), but that is a story for another day. Just don't overflow your signed numbers.
Footnote: Note that while signed is implied for int, short, long, and long long, it is NOT implied for char. Instead, it depends on the platform.
Taken from the C++ standard:
4.5 Integral promotions [conv.prom] 1 An rvalue of type char, signed char, unsigned char, short int, or unsigned short int can be
converted to an rvalue of type int if int can represent all the values of the
source type; otherwise, the source rvalue can be converted to an
rvalue of type unsigned int.
In practice it means that all operations (on the types in the list) are actually evaluated in the type int if it can cover the whole value set you are dealing with; otherwise they are carried out in unsigned int.
In the first case the values are compared as unsigned int, because one of them was unsigned int, and this is why -1 is "greater" than 2. In the second case the values are compared as signed integers, as int covers the whole domain of both short and unsigned short, and so -1 is smaller than 2.
(Background story: actually, all this complex definition covering all the cases in this way means that compilers can ignore the actual type behind the value (!) :) and just care about the data size.)
The conversion process for C++ is described as the usual arithmetic conversions. However, I think the most relevant rule is at the sub-referenced section conv.prom: Integral promotions 4.6.1:
A prvalue of an integer type other than bool, char16_t, char32_t, or
wchar_t whose integer conversion rank ([conv.rank]) is less than the
rank of int can be converted to a prvalue of type int if int can
represent all the values of the source type; otherwise, the source
prvalue can be converted to a prvalue of type unsigned int.
The funny thing there is the use of the word "can", which I think suggests that this promotion is performed at the discretion of the compiler.
I also found this C-spec snippet that hints at the omission of promotion:
11 EXAMPLE 2 In executing the fragment
char c1, c2;
/* ... */
c1 = c1 + c2;
the "integer promotions" require that the abstract machine promote the value of each variable to int size and then add the two ints and truncate the sum. Provided the addition of two chars can be done without overflow, or with overflow wrapping silently to produce the correct result, the actual execution need only produce the same result, possibly omitting the promotions.
There is also the definition of "rank" to be considered. The list of rules is pretty long, but as it applies to this question "rank" is straightforward:
The rank of any unsigned integer type shall equal the rank of the
corresponding signed integer type.
See this code snippet
int main()
{
    unsigned int a = 1000;
    int b = -1;
    if (a>b) printf("A is BIG! %d\n", a-b);
    else printf("a is SMALL! %d\n", a-b);
    return 0;
}
This gives the output: a is SMALL! 1001
I don't understand what's happening here. How does the > operator work here? Why is "a" smaller than "b"? If it is indeed smaller, why do I get a positive number (1001) as the difference?
Binary operations between different integral types are performed within a "common" type defined by so called usual arithmetic conversions (see the language specification, 6.3.1.8). In your case the "common" type is unsigned int. This means that int operand (your b) will get converted to unsigned int before the comparison, as well as for the purpose of performing subtraction.
When -1 is converted to unsigned int the result is the maximal possible unsigned int value (same as UINT_MAX). Needless to say, it is going to be greater than your unsigned 1000 value, meaning that a > b is indeed false and a is indeed small compared to (unsigned) b. The if in your code should resolve to else branch, which is what you observed in your experiment.
The same conversion rules apply to subtraction. Your a-b is really interpreted as a - (unsigned) b and the result has type unsigned int. Such value cannot be printed with %d format specifier, since %d only works with signed values. Your attempt to print it with %d results in undefined behavior, so the value that you see printed (even though it has a logical deterministic explanation in practice) is completely meaningless from the point of view of C language.
Edit: Actually, I could be wrong about the undefined behavior part. According to C language specification, the common part of the range of the corresponding signed and unsigned integer type shall have identical representation (implying, according to the footnote 31, "interchangeability as arguments to functions"). So, the result of a - b expression is unsigned 1001 as described above, and unless I'm missing something, it is legal to print this specific unsigned value with %d specifier, since it falls within the positive range of int. Printing (unsigned) INT_MAX + 1 with %d would be undefined, but 1001u is fine.
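A short sketch of that arithmetic (compiled as C++ here, but the behaviour is the same in C): printing the difference with the matching %u specifier makes the unsigned wrap-around explicit:

#include <cstdio>

int main() {
    unsigned int a = 1000;
    int b = -1;
    // b is converted to unsigned int (UINT_MAX), and the subtraction wraps:
    // 1000 - 4294967295 is congruent to 1001 (mod 2^32)
    std::printf("%u\n", a - b);   // prints 1001; %u matches the unsigned result type
    return 0;
}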
On a typical implementation where int is 32-bit, -1 when converted to an unsigned int is 4,294,967,295 which is indeed ≥ 1000.
Even if you treat the subtraction in an unsigned world, 1000 - 4,294,967,295 = -4,294,966,295, which wraps around modulo 2^32 to 1,001, and that is what you get.
That's why gcc will spit a warning when you compare unsigned with signed. (If you don't see a warning, pass the -Wsign-compare flag.)
You are doing unsigned comparison, i.e. comparing 1000 to 2^32 - 1.
The output is signed because of %d in printf.
N.B. sometimes the behavior when you mix signed and unsigned operands is compiler-specific. I think it's best to avoid them and do casts when in doubt.
#include <stdio.h>

int main()
{
    int a = 1000;
    signed int b = -1, c = -2;

    printf("%d", (unsigned int)b);
    printf("%d\n", (unsigned int)c);
    printf("%d\n", (unsigned int)a);

    if (1000 > -1) {
        printf("\ntrue");
    }
    else
        printf("\nfalse");
    return 0;
}
For this you need to understand how relational operators handle mixed signed/unsigned operands.
So when it comes to
if (1000 > -1)
the -1 is first converted to an unsigned integer, because the other operand is unsigned and the unsigned range is greater than the signed one.
Converted to unsigned, -1 becomes a very big number.
Here is an easy way to compare, maybe useful when you cannot get rid of the unsigned declaration (for example, [NSArray count]): just force the "unsigned int" to an "int".
Please correct me if I am wrong.
if (((int)a)>b) {
....
}
The hardware is designed to compare signed to signed and unsigned to unsigned.
If you want the arithmetic result, convert the unsigned value to a larger signed type first (sketched below). Otherwise the compiler will assume that the comparison is really between unsigned values.
And -1 is represented as 1111..1111, so it is a very big quantity... the biggest, when interpreted as unsigned.
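A sketch of that suggestion (assuming long long is wider than unsigned int, as on mainstream platforms):

#include <cstdio>

int main() {
    unsigned int a = 1000;
    int b = -1;

    // Widening a to long long keeps the comparison signed: both operands
    // convert to long long, which can represent every value of both types.
    if (static_cast<long long>(a) > b)
        std::printf("A is BIG!\n");   // this branch is taken
    else
        std::printf("a is SMALL!\n");
    return 0;
}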
While comparing a > b, where a is of unsigned int type and b is of int type, b is converted to unsigned int, so the signed int value -1 is converted into the MAX value of unsigned int (range: 0 to 2^32 - 1).
Thus a > b, i.e. (1000 > 4294967295), becomes false. Hence the else branch, printf("a is SMALL! %d\n", a-b);, is executed.