Why is the result "2" for unsigned int (1) - unsigned int (0xFFFFFFFF)? - c

Please look at the following code:
#include <stdlib.h>
#include <stdio.h>

int main()
{
    unsigned int a = 1;
    unsigned int b = -1;
    printf("0x%X\n", (a - b));
    return 0;
}
The result is 0x2.
I think integer promotion should not happen here, because both "a" and "b" are of type unsigned int, but the result beats me; I don't know the reason.
By the way, I know the arithmetic result should be 2 because 1 - (-1) = 2. But the type of b is unsigned int. When (-1) is assigned to b, the value of b actually becomes 0xFFFFFFFF, the maximum value of unsigned int. When a big value is subtracted from a small unsigned value, the result is not what I expect.
From the answer below, I think overflow (wrap-around) is a good explanation.
Now I have written other test code, and it shows that the overflow answer is right.
#include <stdlib.h>
#include <stdio.h>

int main()
{
    unsigned int c = 1;
    unsigned int d = -1;
    printf("0x%llx\n", (unsigned long long)c - (unsigned long long)d);
    return 0;
}
The result is "0xffffffff00000002". It is what I expect.

unsigned int a = 1;
This initializes a to 1. (Actually, since 1 is of type int, there's an implicit int-to-unsigned conversion, but it's a trivial conversion that doesn't change the value or representation.)
unsigned int b = -1;
This is more interesting. -1 is of type int, and the initialization implicitly converts it from int to unsigned int. Since the mathematical value -1 cannot be represented as an unsigned int, the conversion is non-trivial. The rule in this case is (quoting section 6.3.1.3 of the C standard):
the value is converted by repeatedly adding or subtracting one more
than the maximum value that can be represented in the new type until
the value is in the range of the new type.
Of course it doesn't actually have to do it that way, as long as the result is the same. Adding UINT_MAX+1 ("one more than the maximum value that can be represented in the new type") to -1 yields UINT_MAX. (That addition is defined mathematically; it's not itself subject to any type conversions.)
In fact, assigning -1 to an object of any unsigned type is a good way to get the maximum value of that type without having to refer to the *_MAX macros defined in <limits.h>.
So, assuming a typical 32-bit system, we have a == 1 and b == 0xFFFFFFFF.
printf("0x%X\n", (a-b));
The mathematical result of the subtraction is -0xFFFFFFFE, but that's obviously outside the range of unsigned int. The rules for unsigned arithmetic are similar to the rules for unsigned conversion: adding UINT_MAX + 1 (0x100000000) to -0xFFFFFFFE brings it into range, so the result is 2.
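A minimal sketch (my own, not from the original answer) showing the wrap-around in both directions:

#include <stdio.h>

int main(void)
{
    unsigned int a = 1;
    unsigned int b = -1;             /* b becomes UINT_MAX */

    /* Mathematically 1 - UINT_MAX is negative; unsigned arithmetic
       brings it back into range by adding UINT_MAX + 1, giving 2. */
    printf("%u\n", a - b);           /* prints 2 */
    printf("%u\n", (a - b) + b);     /* wraps back around to 1 */
    return 0;
}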

Who says you're suffering integer promotion? Let's pretend that your integers are two's complement (likely, though not mandated by the standard) and they're only four bits wide (not possible according to the standard, I'm just using this to simplify things).
   int   unsigned-int   bit-pattern
   ---   ------------   -----------
     1              1          0001
    -1             15          1111
                              ------
(subtract with borrow)       1 0010
(only have four bits)          0010 -> 2
You can see here that you can get the result you see without any promotion to signed or wider types.
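If it helps, here is a rough sketch (mine, not the answerer's) that emulates the four-bit example by masking a real unsigned int down to its low four bits:

#include <stdio.h>

int main(void)
{
    unsigned int mask = 0xF;                      /* keep only four bits */
    unsigned int one  = 1;                        /* 0001                */
    unsigned int m1   = (unsigned int)-1 & mask;  /* 1111, i.e. 15       */

    printf("%u\n", (one - m1) & mask);            /* prints 2 (0010)     */
    return 0;
}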

There should be a compiler warning that you probably ignored or turned off, but it's still possible to store -1 in an unsigned integer. Internally, -1 is stored on a 32-bit machine as 0xFFFFFFFF. So if you subtract 0xFFFFFFFF from 1, you mathematically get -0xFFFFFFFE, which wraps around to 2. (There are no negative numbers in unsigned arithmetic; a negative value -x is represented as the maximum value plus one, minus x.)
Bottom line - signed or unsigned doesn't matter at all here; it only comes into play when you compare values.
And mathematically speaking, 1 - (-1) = 1 + 1 = 2.

If you subtract a negative number, it is the equivalent of adding a positive number.
a = 1
b = -1
(a-b) = ?
((1)-(-1)) = ?
(1-(-1)) = ?
(1+1) = ?
2 = ?
At first you might think that this isn't allowed, since you specified an unsigned int; however, you are also converting a signed int (the -1 constant) to an unsigned int. So, you are effectively storing the same bit pattern into the unsigned int (0xFFFFFFFF on a 32-bit system).
Then, in the expression, subtracting that 0xFFFFFFFF value from 1 wraps around to 2, which is the same answer ordinary signed arithmetic would give. In effect, you are circumventing the unsigned declaration at every step.

Related

Is it guaranteed that assigning -1 to an unsigned type yields the maximum value?

I found a few questions on this particular topic, but they were about C++.
How portable is casting -1 to an unsigned type?
converting -1 to unsigned types
Is it safe to assign -1 to an unsigned int to get the max value?
And when reading the answers, it seemed likely or at least not unlikely that this is one of those things where C and C++ differs.
The question is simple:
If I declare a variable with unsigned char/short/int/long var or use any other unsigned types like fixed width, minimum width etc, is it then guaranteed that var = -1 will set var to the maximum value it can hold? Is this program guaranteed to print "Yes"?
#include <stdio.h>
#include <limits.h>
int main(void) {
    unsigned long var = -1;
    printf("%s\n", var == ULONG_MAX ? "Yes" : "No");
}
Is it guaranteed that assigning -1 to an unsigned type yields the maximum value?
Yes.
Is this program guaranteed to print "Yes"?
Yes.
It's a conversion, from int -1 to unsigned long. -1 can't be represented as unsigned long. From C11 6.3.1.3p2:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type
so we add ULONG_MAX + 1 ("one more than the maximum value that can be represented in the new type") to -1 and we get -1 + (ULONG_MAX + 1) = ULONG_MAX, which is in range of unsigned long.
The conversion from -1 (signed int) to a large unsigned type is well-defined, as per C17 6.3.1.3.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.60)
This comes with a helpful footnote 60):
The rules describe arithmetic on the mathematical value, not the value of a given type of expression.
So given unsigned long var = -1;, one more than the maximum value of unsigned long is added to the value -1,
meaning -1 + (ULONG_MAX + 1) = ULONG_MAX.
This is well-defined and portable behavior, unlike conversions from unsigned to signed, which can invoke implementation-defined behavior.
Also, this is regardless of signedness format, since the mathematical value should be used, not the raw binary one. Had you done something like unsigned long var = (signed long)0xFFFFFFFFF; then that would be another story in case of exotic/fictional systems with 1's complement or signed magnitude.
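As a quick illustration of that guarantee (a sketch of my own, not from the answers above), -1 assigned to each unsigned type compares equal to the corresponding *_MAX macro from <limits.h>:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char      uc  = -1;
    unsigned short     us  = -1;
    unsigned int       ui  = -1;
    unsigned long      ul  = -1;
    unsigned long long ull = -1;

    /* Each comparison is guaranteed to hold by 6.3.1.3p2. */
    printf("%d\n", uc  == UCHAR_MAX);    /* 1 */
    printf("%d\n", us  == USHRT_MAX);    /* 1 */
    printf("%d\n", ui  == UINT_MAX);     /* 1 */
    printf("%d\n", ul  == ULONG_MAX);    /* 1 */
    printf("%d\n", ull == ULLONG_MAX);   /* 1 */
    return 0;
}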

C Convert a Short Int to Unsigned Short

A short int in C contains 16 bits, and the first bit represents whether the value is negative or positive. I have a C program that is as follows:
#include <stdio.h>

int main() {
    short int v;
    unsigned short int uv;
    v = -20000;
    uv = v;
    printf("\nuv = %hu\n", uv);
    return 0;
}
Since the value of v is negative I know the first bit of the variable is a 1. So I expect the output of the program to equal uv = 52,768 b/c 20,000 + (2^15) = 52,768.
Instead I am getting uv = 45536 as the output. What part of my logic is incorrect?
The behavior you're seeing can be explained by the conversion rules of C:
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
(This quote is from C99.)
-20000 can't be represented by an unsigned short because it's negative. The target type is unsigned, so the value is converted by repeatedly adding 65536 (which is USHRT_MAX + 1) until it is in range: -20000 + 65536 is exactly 45536.
Note that this behavior is mandated by the C standard and has nothing to do with how negative numbers are actually represented in memory (in particular, it works the same way even on machines using sign/magnitude or ones' complement).
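A small check of that arithmetic (my own sketch, assuming the usual 16-bit unsigned short):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    short v = -20000;
    unsigned short uv = v;                    /* converted per 6.3.1.3p2 */

    /* The standard's rule: add USHRT_MAX + 1 until the value is in range. */
    long expected = -20000L + ((long)USHRT_MAX + 1);

    printf("uv       = %hu\n", uv);           /* 45536 with a 16-bit short */
    printf("expected = %ld\n", expected);     /* 45536                     */
    return 0;
}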

What is the difference between literals and variables in C (signed vs unsigned short ints)?

I have seen the following code in the book Computer Systems: A Programmer's Perspective, 2/E. This works well and creates the desired output. The output can be explained by the difference of signed and unsigned representations.
#include <stdio.h>

int main() {
    if (-1 < 0u) {
        printf("-1 < 0u\n");
    }
    else {
        printf("-1 >= 0u\n");
    }
    return 0;
}
The code above yields -1 >= 0u; however, the following code, which ought to behave the same, does not! In other words,
#include <stdio.h>

int main() {
    unsigned short u = 0u;
    short x = -1;
    if (x < u)
        printf("-1 < 0u\n");
    else
        printf("-1 >= 0u\n");
    return 0;
}
yields -1 < 0u. Why did this happen? I cannot explain it.
Note that I have seen similar questions like this, but they do not help.
PS. As @Abhineet said, the dilemma can be solved by changing short to int. However, how can one explain this phenomenon? In other words, -1 in 4 bytes is 0xff ff ff ff and in 2 bytes is 0xff ff. Interpreting these two's-complement patterns as unsigned gives the values 4294967295 and 65535 respectively. Neither is less than 0, so I think in both cases the output should be -1 >= 0u, i.e. x >= u.
A sample output for it on a little endian Intel system:
For short:
-1 < 0u
u =
00 00
x =
ff ff
For int:
-1 >= 0u
u =
00 00 00 00
x =
ff ff ff ff
The code above yields -1 >= 0u
All integer literals (numeric constants) have a type and therefore also a signedness. By default, they are of type int, which is signed. When you append the u suffix, you turn the literal into an unsigned int.
For any C expression where you have one operand which is signed and one which is unsigned, the rule of balancing (formally: the usual arithmetic conversions) implicitly converts the signed type to unsigned.
Conversion from signed to unsigned is well-defined (6.3.1.3):
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
For example, for 32 bit integers on a standard two's complement system, the max value of an unsigned integer is 2^32 - 1 (4294967295, UINT_MAX in limits.h). One more than the maximum value is 2^32. And -1 + 2^32 = 4294967295, so the literal -1 is converted to an unsigned int with the value 4294967295. Which is larger than 0.
When you switch types to short however, you end up with a small integer type. This is the difference between the two examples. Whenever a small integer type is part of an expression, the integer promotion rule implicitly converts it to a larger int (6.3.1.1):
If an int can represent all values of the original type (as restricted
by the width, for a bit-field), the value is converted to an int;
otherwise, it is converted to an unsigned int. These are called the
integer promotions. All other types are unchanged by the integer
promotions.
If short is smaller than int on the given platform (as is the case on 32 and 64 bit systems), any short or unsigned short will therefore always get converted to int, because they can fit inside one.
So for the expression if (x < u), you actually end up with if ((int)x < (int)u), which behaves as expected (-1 is less than 0).
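To make the promotion visible (a sketch of my own, not part of the answer above, and assuming as above that int can represent all unsigned short values), the short comparison can be written with the implicit conversions spelled out as casts; the unsigned int case is included for contrast:

#include <stdio.h>

int main(void)
{
    unsigned short u = 0u;
    short x = -1;

    /* Integer promotion: both operands become int before the comparison. */
    printf("%d\n", x < u);            /* 1: -1 < 0 after promotion        */
    printf("%d\n", (int)x < (int)u);  /* 1: the same comparison, explicit */

    /* Contrast with unsigned int operands: -1 converts to UINT_MAX.      */
    printf("%d\n", -1 < 0u);          /* 0                                */
    return 0;
}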
You're running into C's integer promotion rules.
Operators on types smaller than int automatically promote their operands to int or unsigned int. See comments for more detailed explanations. There is a further step for binary (two-operand) operators if the types still don't match after that (e.g. unsigned int vs. int). I won't try to summarize the rules in more detail than that. See Lundin's answer.
This blog post covers this in more detail, with a similar example to yours: signed and unsigned char. It quotes the C99 spec:
If an int can represent all values of the original type, the value is
converted to an int; otherwise, it is converted to an unsigned int.
These are called the integer promotions. All other types are unchanged
by the integer promotions.
You can play around with this more easily on something like godbolt, with a function that returns one or zero. Just look at the compiler output to see what ends up happening.
#define mytype short

int main() {
    unsigned mytype u = 0u;
    mytype x = -1;
    return (x < u);
}
Contrary to what you seem to assume, this is not a property of the particular width of the types, here 2 bytes versus 4 bytes, but a question of the rules that are to be applied. The integer promotion rules state that short and unsigned short are converted to int on all platforms where the corresponding range of values fits into int. Since this is the case here, both values are preserved and obtain the type int. -1 is perfectly representable in int, as is 0. So the test results in -1 being smaller than 0.
In the case of testing -1 against 0u, the usual arithmetic conversions choose the unsigned type as the common type to which both are converted. -1 converted to unsigned is the value UINT_MAX, which is larger than 0u.
This is a good example of why you should never use "narrow" types to do arithmetic or comparison. Only use them if you have a severe size constraint. This will rarely be the case for simple variables, but mostly for large arrays where you can really gain from storing in a narrow type.
0u is not unsigned short, it's unsigned int.
Edit: the explanation of the behavior.
How is the comparison performed?
As answered by Jens Gustedt,
This is called "usual arithmetic conversions" by the standard and
applies whenever two different integer types occur as operands of the
same operator.
In essence what it does is:
- if the types have different width (more precisely, what the standard calls conversion rank), it converts to the wider type;
- if both types are of the same width then, besides really weird architectures, the unsigned one wins.
Signed to unsigned conversion of the value -1, with whatever type, always results in the highest representable value of the unsigned type.
A more detailed blog post written by him can be found here.

How does C store negative numbers in signed vs unsigned integers?

Here is the example:
#include <stdio.h>

int main()
{
    int x = 35;
    int y = -35;
    unsigned int z = 35;
    unsigned int p = -35;
    signed int q = -35;
    printf("Int(35d)=%d\n\
Int(-35d)=%d\n\
UInt(35u)=%u\n\
UInt(-35u)=%u\n\
UInt(-35d)=%d\n\
SInt(-35u)=%u\n", x, y, z, p, p, q);
    return 0;
}
Output:
Int(35d)=35
Int(-35d)=-35
UInt(35u)=35
UInt(-35u)=4294967261
UInt(-35d)=-35
SInt(-35u)=4294967261
Does it really matter if I declare the variable as signed or unsigned int? It seems C only cares about how I read the value from memory. Please help me understand this; I hope you can prove me wrong.
Representation of signed integers is up to the underlying platform, not the C language itself. The language definition is mostly agnostic with regard to signed integer representations. Two's complement is probably the most common, but there are other representations such as one's complement and signed magnitude.
In a two's complement system, you negate a value by inverting the bits and adding 1. To get from 5 to -5, you'd do:
5 == 0101 => 1010 + 1 == 1011 == -5
To go from -5 back to 5, you follow the same procedure:
-5 == 1011 => 0100 + 1 == 0101 == 5
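A quick sketch of that invert-and-add-one rule (my own example, using an 8-bit type for brevity; reading 0xFB back as -5 assumes two's complement):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t five = 5;                      /* 0000 0101 */

    /* Two's complement negation: invert the bits, then add 1. */
    uint8_t neg  = (uint8_t)(~five + 1);   /* 1111 1011 = 0xFB         */
    uint8_t back = (uint8_t)(~neg + 1);    /* invert-and-add-1 again   */

    printf("0x%02X\n", (unsigned int)neg); /* FB                       */
    printf("%d\n", (int8_t)neg);           /* -5 (on two's complement) */
    printf("%d\n", back);                  /* 5                        */
    return 0;
}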
Does it really matter if I declare the value as signed or unsigned int?
Yes, for the following reasons:
1. It affects the values you can represent: unsigned integers can represent values from 0 to 2^N - 1, whereas signed integers can represent values between -2^(N-1) and 2^(N-1) - 1 (two's complement).
2. Overflow is well-defined for unsigned integers; UINT_MAX + 1 will "wrap" back to 0. Overflow is not well-defined for signed integers, and INT_MAX + 1 may "wrap" to INT_MIN, or it may not (see the sketch after this list).
3. Because of 1 and 2, it affects arithmetic results, especially if you mix signed and unsigned variables in the same expression (in which case the result may not be well defined if there's an overflow).
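A minimal sketch of point 2 (my own example): the unsigned wrap is guaranteed by the standard, while the signed counterpart is undefined behavior and is therefore only shown as a comment:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int u = UINT_MAX;

    /* Well-defined: unsigned arithmetic wraps modulo UINT_MAX + 1. */
    printf("%u\n", u + 1u);        /* 0 */

    /* Undefined behavior: do NOT rely on signed overflow wrapping.
       int i = INT_MAX;
       i = i + 1;                   // undefined behavior            */
    return 0;
}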
An unsigned int and a signed int take up the same number of bytes in memory. They can store the same byte values. However the data will be treated differently depending on if it's signed or unsigned.
See http://en.wikipedia.org/wiki/Two%27s_complement for an explanation of the most common way to represent integer values.
Since you can typecast in C you can effectively force the compiler to treat an unsigned int as signed int and vice versa, but beware that it doesn't mean it will do what you think or that the representation will be correct. (Overflowing a signed integer invokes undefined behaviour in C).
(As pointed out in comments, there are other ways to represent integers than two's complement, however two's complement is the most common way on desktop machines.)
Does it really matter if I declare the value as signed or unsigned int?
Yes.
For example, have a look at
#include <stdio.h>

int main()
{
    int a = -4;
    int b = -3;
    unsigned int c = -4;
    unsigned int d = -3;
    printf("%f\n%f\n%f\n%f\n", 1.0 * a / b, 1.0 * c / d, 1.0 * a / d, 1. * c / b);
}
and its output
1.333333
1.000000
-0.000000
-1431655764.000000
which clearly shows that it makes a huge difference if I have the same byte representation interpreted as signed or unsigned.
#include <stdio.h>

int main() {
    int x = 35, y = -35;
    unsigned int z = 35, p = -35;
    signed int q = -35;
    printf("x=%d\tx=%u\ty=%d\ty=%u\tz=%d\tz=%u\tp=%d\tp=%u\tq=%d\tq=%u\t", x, x, y, y, z, z, p, p, q, q);
}
the result is:
x=35 x=35 y=-35 y=4294967261 z=35 z=35 p=-35 p=4294967261 q=-35 q=4294967261
The stored int value is no different: it is kept in two's-complement form in memory.
In hex, 35 is 0x00000023 and -35 is 0xFFFFFFDD; the bits are the same whether you declare the variable signed or unsigned. Only the output style differs. %d and %u give the same result for positive values, but for negative values the top bit acts as the sign: printed with %u, 0xFFFFFFDD equals 4294967261, while with %d the same 0xFFFFFFDD is read as -0x00000023, i.e. -35.
The most fundamental thing that variable's type defines is the way it is stored (that is - read from and written to) in memory and how are the bits interpreted, so your statement can be considered "valid".
You can also look at the problem using conversions. When you store signed and negative value in unsigned variable it gets converted to unsigned. It so happens that this conversion is reversible, so signed -35 converts to unsigned 4294967261, which - when you request it - can be converted to signed -35. That's how 2's complement encoding (see link in other answer) works.
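A short sketch of that round trip (my own example; note that converting an out-of-range unsigned value back to int is implementation-defined, though common two's-complement platforms restore the original value):

#include <stdio.h>

int main(void)
{
    unsigned int u = -35;     /* well-defined: becomes UINT_MAX + 1 - 35 */
    int back = (int)u;        /* implementation-defined; typically -35   */

    printf("%u\n", u);        /* 4294967261 with a 32-bit unsigned int */
    printf("%d\n", back);     /* -35 on common platforms               */
    return 0;
}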

Comparison operation on unsigned and signed integers

See this code snippet
#include <stdio.h>

int main()
{
    unsigned int a = 1000;
    int b = -1;
    if (a > b) printf("A is BIG! %d\n", a - b);
    else printf("a is SMALL! %d\n", a - b);
    return 0;
}
This gives the output: a is SMALL! 1001
I don't understand what's happening here. How does the > operator work here? Why is "a" smaller than "b"? If it is indeed smaller, why do I get a positive number (1001) as the difference?
Binary operations between different integral types are performed within a "common" type defined by so called usual arithmetic conversions (see the language specification, 6.3.1.8). In your case the "common" type is unsigned int. This means that int operand (your b) will get converted to unsigned int before the comparison, as well as for the purpose of performing subtraction.
When -1 is converted to unsigned int the result is the maximal possible unsigned int value (same as UINT_MAX). Needless to say, it is going to be greater than your unsigned 1000 value, meaning that a > b is indeed false and a is indeed small compared to (unsigned) b. The if in your code should resolve to else branch, which is what you observed in your experiment.
The same conversion rules apply to subtraction. Your a-b is really interpreted as a - (unsigned) b and the result has type unsigned int. Such value cannot be printed with %d format specifier, since %d only works with signed values. Your attempt to print it with %d results in undefined behavior, so the value that you see printed (even though it has a logical deterministic explanation in practice) is completely meaningless from the point of view of C language.
Edit: Actually, I could be wrong about the undefined behavior part. According to C language specification, the common part of the range of the corresponding signed and unsigned integer type shall have identical representation (implying, according to the footnote 31, "interchangeability as arguments to functions"). So, the result of a - b expression is unsigned 1001 as described above, and unless I'm missing something, it is legal to print this specific unsigned value with %d specifier, since it falls within the positive range of int. Printing (unsigned) INT_MAX + 1 with %d would be undefined, but 1001u is fine.
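To see the conversions explicitly, here is a sketch of my own spelling out what the usual arithmetic conversions produce in this case:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    /* b is converted to unsigned int for both the comparison and the
       subtraction, so a > b is really 1000u > UINT_MAX. */
    printf("%d\n", a > (unsigned int)b);     /* 0 (false)                */
    printf("%u\n", a - (unsigned int)b);     /* 1001, printed as         */
                                             /* unsigned this time       */
    return 0;
}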
On a typical implementation where int is 32-bit, -1 when converted to an unsigned int is 4,294,967,295 which is indeed ≥ 1000.
Even if you treat the subtraction in an unsigned world, 1000 - 4,294,967,295 = -4,294,966,295, which wraps modulo 2^32 to 1,001, which is what you get.
That's why gcc will spit a warning when you compare unsigned with signed. (If you don't see a warning, pass the -Wsign-compare flag.)
You are doing unsigned comparison, i.e. comparing 1000 to 2^32 - 1.
The output is signed because of %d in printf.
N.B. sometimes the behavior when you mix signed and unsigned operands is compiler-specific. I think it's best to avoid them and do casts when in doubt.
#include <stdio.h>

int main()
{
    int a = 1000;
    signed int b = -1, c = -2;
    printf("%d", (unsigned int)b);
    printf("%d\n", (unsigned int)c);
    printf("%d\n", (unsigned int)a);
    if (1000 > -1) {
        printf("\ntrue");
    }
    else
        printf("\nfalse");
    return 0;
}
For this you need to understand how the operands of a relational operator are converted.
When it comes to a comparison like a > b in the question, where one operand is unsigned,
the -1 is first changed to an unsigned integer, because the unsigned type wins the usual arithmetic conversions and its range covers larger positive values than the signed type.
-1 changed into an unsigned number becomes a very big number.
An easy way to compare, maybe useful when you cannot get rid of the unsigned declaration (for example, [NSArray count]): just force the "unsigned int" to an "int".
Please correct me if I am wrong.
if (((int)a)>b) {
....
}
The hardware is designed to compare signed to signed and unsigned to unsigned.
If you want the arithmetic result, convert the unsigned value to a larger signed type first. Otherwise the compiler will assume that the comparison is really between unsigned values.
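A sketch of that suggestion (mine, assuming long long is wider than unsigned int, as it is on common platforms):

#include <stdio.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    /* Widen to a signed type that can hold all unsigned int values,
       then compare and subtract with ordinary signed arithmetic.    */
    long long wa = a;
    long long wb = b;

    printf("%d\n", wa > wb);        /* 1: 1000 > -1           */
    printf("%lld\n", wa - wb);      /* 1001, the usual result */
    return 0;
}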
And -1 is represented as 1111...1111, so it is a very big quantity when interpreted as unsigned - the biggest, in fact.
While comparing a > b, where a is of type unsigned int and b is of type int, b is converted to unsigned int, so the signed int value -1 is converted into the maximum value of unsigned int (range: 0 to 2^32 - 1).
Thus, a > b, i.e. 1000 > 4294967295, becomes false. Hence the else branch printf("a is SMALL! %d\n", a-b); is executed.
