unsigned int data types in C

Please help me understand the difference between these two pieces of C code:
Code 1:
#include <stdio.h>
#include <stdint.h>

uint8_t fb(int a)
{
    return -3;
}

int main()
{
    int a = fb(-3);
    printf("%d", a);
    return 0;
}
Code 2:
#include <stdio.h>

unsigned int fb(int a)
{
    return -3;
}

int main()
{
    int a = fb(-3);
    printf("%d", a);
    return 0;
}
The problem is that the first code returns 253 as expected, but the second code returns -3, which is unexpected since the return type is unsigned. Please help me understand how this is possible.
I am using the MinGW GCC compiler.

In the first program,
When the function returns, the int -3 (FFFFFFFD) is converted to uint8_t, giving 253 (FD). The higher-order bytes are dropped so it fits.
When the assignment is performed, the uint8_t 253 (FD) is converted to int. The value fits, so it is preserved (zero-extended to 000000FD, still 253).
In the second program,
When the function returns, the int -3 (FFFFFFFD) is converted to unsigned int 4294967293 (FFFFFFFD). No bytes need to be dropped.
When the assignment is performed, the unsigned int 4294967293 (FFFFFFFD) is converted to int, giving -3 (FFFFFFFD).
Notes:
Assumes 32-bit ints, but it's a similar story with 64-bit ints.
This is an explanation of what happens for you; it isn't necessarily what the C spec calls for.
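A minimal sketch (assuming 32-bit int, as in the notes above) that makes each conversion step explicit:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int n = -3;                           /* 0xFFFFFFFD as a 32-bit two's-complement int */

    uint8_t narrow = (uint8_t)n;          /* higher-order bytes dropped: 0xFD = 253 */
    int back1 = narrow;                   /* 253 fits in int, so the value is preserved */

    unsigned int wide = (unsigned int)n;  /* 0xFFFFFFFD = 4294967293, nothing dropped */
    int back2 = (int)wide;                /* out of range for int: implementation-defined,
                                             typically wraps back to -3 */

    printf("%u %d\n", (unsigned)narrow, back1);  /* 253 253 */
    printf("%u %d\n", wide, back2);              /* 4294967293 -3 (typically) */
    return 0;
}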

The conversion from signed to unsigned will follow the rules laid out in section 6.3.1.3 Signed and unsigned integers of the draft C99 standard which says:
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.49)
So this means that -3 is converted by adding one more than the maximum value of the new type to it. In the case of uint8_t this gives the relatively small value 253, which fits into a signed int and is therefore converted to signed int without issue.
In your second case, with the return type of unsigned int, we end up with a rather large value after the conversion, namely UINT_MAX + 1 - 3. That value does not fit into a signed int, so converting it back to int for the assignment falls under paragraph 3 of the same section, which says:
Otherwise, the new type is signed and the value cannot be represented in it;
either the result is implementation-defined or an implementation-defined
signal is raised.
On gcc (including MinGW) the result is defined to wrap modulo 2^N, which is why you see -3.
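Here is a small sketch (the helper name is mine, purely illustrative) that applies that "add or subtract one more than the maximum value until it is in range" rule by hand for both return types:
#include <stdio.h>

/* Illustrative helper: reduce value into [0, max] by repeatedly adding or
   subtracting max + 1, as 6.3.1.3 describes. Assumes the values involved
   fit in long long, which holds for the cases shown here. */
static unsigned long long to_unsigned(long long value, unsigned long long max)
{
    long long m = (long long)max + 1;   /* one more than the maximum */
    while (value < 0)
        value += m;
    while (value > (long long)max)
        value -= m;
    return (unsigned long long)value;
}

int main(void)
{
    printf("%llu\n", to_unsigned(-3, 255));         /* 253: the uint8_t case */
    printf("%llu\n", to_unsigned(-3, 4294967295u)); /* 4294967293: the unsigned int case */
    return 0;
}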

unsigned int should be printed with %u instead of %d.
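For example (a standalone snippet, not from the original answer):
#include <stdio.h>

int main(void)
{
    unsigned int u = -3;   /* converted to UINT_MAX + 1 - 3 = 4294967293 */
    printf("%u\n", u);     /* prints 4294967293 */
    /* printf("%d", u) would mismatch the type; on typical implementations it
       happens to print -3, but %u is the correct specifier for unsigned int. */
    return 0;
}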

The rule for conversion from an integral type to an unsigned integral type is simple:
Add/subtract (maximum value of target type + 1) as often as needed to fit.
The rule for conversion from an integral type to a signed integral type is simpler still:
Preserve the value; if it is out of range, the result is implementation-defined (or an implementation-defined signal is raised).
Most modern implementations (all on the Windows platform) define it to add/subtract the same constant as for the corresponding unsigned type until it fits, though.
Using those rules, the conversions on return from fb() and in the assignment in the first example are well defined:
-3 -> 256 - 3 = 253 -> 253
In the second example the conversion on return is also well defined, but converting the result back to int in the assignment is out of range and therefore implementation-defined, though normally it wraps back:
-3 -> 2^(CHAR_BIT * sizeof(int)) - 3 -> -3
Bonus facts:
Passing a variadic function a signed/unsigned type where the corresponding other type was expected is allowed; the raw value is not converted and is simply interpreted as the indicated type.
CHAR_BIT is a preprocessor constant giving the number of bits in a byte.
sizeof is an operator giving the number of bytes in a type / variable.
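A quick way to see the constants involved on your own machine (a standalone snippet; assumes nothing beyond <limits.h>):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    printf("CHAR_BIT    = %d\n", CHAR_BIT);              /* usually 8 */
    printf("sizeof(int) = %zu\n", sizeof(int));          /* usually 4 */
    printf("UINT_MAX    = %u\n", UINT_MAX);              /* 2^(CHAR_BIT*sizeof(int)) - 1 on typical systems */
    printf("(unsigned int)-3 = %u\n", (unsigned int)-3); /* UINT_MAX + 1 - 3 */
    return 0;
}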

Related

Why the value of signed char is greater than the unsigned int [duplicate]

#include <stdio.h>
int main() {
    unsigned int i = 23;
    signed char c = -23;
    if (i < c)
        puts("TRUE");
    return 0;
}
Why is the output of this program TRUE, even though I have used a signed char, which can store values from -128 to 127?
Your i < c comparison has operands of two different types, so the operand with the smaller type (lower rank) is converted to the type of the other. In this case, since -23 cannot be represented as an unsigned int, the "usual arithmetic conversions" leave it with a value considerably greater than +23 (actually, UINT_MAX + 1 - 23), according to paragraph 2 of the following excerpt from the (draft) C11 standard:
6.3.1.3 Signed and unsigned integers
1    When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2    Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
3    Otherwise, the new type is signed and the
value cannot be represented in it; either the result is
implementation-defined or an implementation-defined signal is
raised.
You can see this conversion in action by assigning c to a separate unsigned int and printing its value, as in the following code. Assuming a 32-bit unsigned int, you will likely see a value of 4294967273 (which is, indeed, greater than 23).
#include <stdio.h>
int main()
{
    unsigned int i = 23;
    signed char c = -23;
    unsigned int uc = c;  // Add a diagnostic line ...
    printf("%u\n", uc);   // ... and show the value being used in the comparison
    if (i < c) {
        printf("TRUE");
    }
    return 0;
}
Note: The relative size of the two variables (i and c) here is something of a red herring; even if you declare c as signed int c = -23;, you will still get the same result, due to the Usual Arithmetic Conversions (link courtesy of Jonathan Leffler's comment) – note that a signed int cannot represent all possible values of an unsigned int, so the very last of the 'hollow' bullets will come into play.
In order to make the comparison, the smaller variable (char, 8-bit) must be converted to the size and signedness of the larger one (unsigned int, 32-bit). But in so doing, -23 becomes 4294967273, which is much more than 23.
If you compare c with a signed integer, the sign promotion causes no change and the test behaves as expected.
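A short illustration of that last point (my own snippet, assuming a 32-bit unsigned int):
#include <stdio.h>

int main(void)
{
    unsigned int i = 23;
    signed char c = -23;
    int s = 23;

    if (i < c)      /* c is converted to unsigned int (4294967273), so this is true */
        puts("unsigned comparison: i < c is true");

    if (s < c)      /* both promoted to int: 23 < -23 is false */
        puts("signed comparison: s < c is true");
    else
        puts("signed comparison: s < c is false, as expected");
    return 0;
}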

cast float to unsigned int in C with gcc

I am using gcc to test some simple casts between float and unsigned int.
The following piece of code gives the result 0.
const float maxFloat = 4294967295.0;
unsigned int a = (unsigned int) maxFloat;
printf("%u\n", a);
0 is printed (which I believe is very strange).
On the other hand the following piece of code:
const float maxFloat = 4294967295.0;
unsigned int a = (unsigned int) (signed int) maxFloat;
printf("%u\n", a);
prints 2147483648, which I believe is the correct result.
Why do I get two different results?
If you first do this:
printf("%f\n", maxFloat);
The output you'll get is this:
4294967296.000000
Assuming a float is implemented as an IEEE 754 single precision floating point type, the value 4294967295.0 cannot be represented exactly by this type because there aren't enough bits of precision. The closest value it can store is 4294967296.0.
Assuming an int (and likewise unsigned int) is 32 bits, the value 4294967296.0 is outside the range of both of these types. Converting a floating point type to an integer type when the value cannot be represented in the given integer type invokes undefined behavior.
This is detailed in section 6.3.1.4 of the C standard which dictates conversion from floating point types to integer types:
1 When a finite value of real floating type is converted to an integer type other than _Bool, the fractional part is discarded (i.e.,
the value is truncated toward zero). If the value of the integral part
cannot be represented by the integer type, the behavior is undefined.61)
...
61) The remaindering operation performed when a value of integer type
is converted to unsigned type need not be performed when a value of
real floating type is converted to unsigned type. Thus, the range of
portable real floating values is (−1, Utype_MAX+1).
The footnote in the above passage is referencing section 6.3.1.3, which details integer to integer conversions:
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new
type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an
implementation-defined signal is raised.
The behavior you see in the first code snippet is consistent with an out-of-range conversion to an unsigned type when the value in question is an integer, however because the value being converted has a floating point type it is undefined behavior.
Just because one implementation does this doesn't mean that all will. In fact, gcc gives a different result if you change the optimization settings.
For example, on my machine using gcc 5.4.0, given this code:
float n = 4294967296;
printf("n=%f\n", n);
unsigned int a = (unsigned int) n;
int b = (signed int) n;
unsigned int c = (unsigned int) (signed int) n;
printf("a=%u\n", a);
printf("b=%d\n", b);
printf("c=%u\n", c);
I get the following results with -O0:
n=4294967296.000000
a=0
b=-2147483648
c=2147483648
And this with -O1:
n=4294967296.000000
a=4294967295
b=2147483647
c=2147483647
If on the other hand n is defined as long or long long, you would always get this output:
n=4294967296
a=0
b=0
c=0
The conversion to unsigned is well defined by the C standard as cited above, and the conversion to signed is implementation-defined, which gcc defines as follows:
The result of, or the signal raised by, converting an integer to a signed integer type when the value cannot be represented in an object
of that type (C90 6.2.1.2, C99 and C11 6.3.1.3).
For conversion to a type of width N, the value is reduced modulo 2^N
to be within range of the type; no signal is raised.
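If you need a conversion that is well defined for every input, you can check the range yourself before casting; a minimal sketch (my own, assuming a 32-bit unsigned int and IEEE 754 float):
#include <stdio.h>
#include <limits.h>

/* Convert a float to unsigned int, saturating instead of invoking undefined
   behavior for negative or out-of-range values. */
static unsigned int float_to_uint_saturate(float f)
{
    if (f <= 0.0f)
        return 0;
    if (f >= 4294967296.0f)   /* 2^32 is exactly representable as a float */
        return UINT_MAX;
    return (unsigned int)f;
}

int main(void)
{
    printf("%u\n", float_to_uint_saturate(4294967295.0f)); /* rounds up to 2^32, clamps to UINT_MAX */
    printf("%u\n", float_to_uint_saturate(-1.0f));         /* 0 */
    printf("%u\n", float_to_uint_saturate(123.9f));        /* 123 */
    return 0;
}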
Assuming IEEE 754 floating point numbers, the number 4294967295.0 can't be stored exactly in a float. It will be stored as 4294967296.0 instead (which is 2^32).
Further assuming your unsigned int has 32 value bits, this is just one too large to fit in an unsigned int, so the result of the conversion is undefined according to the C standard; 0 is a "reasonable" outcome.
In your second case, you have undefined behavior as well, and I have no theory about what's happening here at the representation level. The fact is, the number is much too large for a 32-bit signed int (still assuming that is what your machine uses).
From this remark in your question:
prints 2147483648 which I belive is the correct results.
I assume you wanted to see the representation of your float in memory. Casting will convert the value, so that's not the way to see the representation. The following code would do:
#include <stdio.h>

int main(void) {
    const float maxFloat = 4294967295.0;
    const unsigned char *floatBytes = (const unsigned char *)&maxFloat;
    for (size_t i = 0; i < sizeof maxFloat; ++i)
    {
        printf("0x%02x ", floatBytes[i]);
    }
    puts("");
}
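An equivalent approach that avoids pointer tricks is to memcpy the bytes into a same-sized integer and print that; a sketch of the idea (assuming float and uint32_t are both 4 bytes):
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    const float maxFloat = 4294967295.0f;
    uint32_t bits;
    memcpy(&bits, &maxFloat, sizeof bits);  /* copy the raw object representation */
    printf("0x%08" PRIX32 "\n", bits);      /* 0x4F800000 for IEEE 754 single precision */
    return 0;
}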

Different Data Types - Signed and Unsigned

I just executed the following code
#include <stdio.h>

int main()
{
    char a = 0xfb;
    unsigned char b = 0xfb;
    printf("a=%c,b=%c", a, b);
    if (a == b) {
        printf("\nSame");
    }
    else {
        printf("\nNot Same");
    }
    return 0;
}
For this code I got the answer as
a=? b=?
Different
Why don't I get Same, and what are the values of a and b?
The line if (a == b)... promotes the characters to integers before comparison, so the signedness of the character affects how that happens. The unsigned character 0xFB becomes the integer 251; the signed character 0xFB becomes the integer -5. Thus, they are unequal.
There are 2 cases to consider:
if the char type is unsigned by default, both a and b are assigned the value 251 and the program will print Same.
if the char type is signed by default, which is alas the most common case, the definition char a = 0xfb; has implementation defined behavior as 0xfb (251 in decimal) is probably out of range for the char type (typically -128 to 127). Most likely the value -5 will be stored into a and a == b evaluates to 0 as both arguments are promoted to int before the comparison, hence -5 == 251 will be false.
The behavior of printf("a=%c,b=%c", a, b); is also system-dependent, as the non-ASCII characters -5 and 251 may print in unexpected ways, if at all. Note however that both will print the same, as the %c format specifies that the argument is converted to unsigned char before printing. It would be safer and more explicit to try printf("a=%d, b=%d\n", a, b);
With gcc or clang, you can try recompiling your program with -funsigned-char to see how the behavior will differ.
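A small follow-up snippet (mine, not from the answer) showing the promoted values and how removing the signedness mismatch makes the comparison succeed:
#include <stdio.h>

int main(void)
{
    char a = 0xfb;            /* typically -5 where plain char is signed */
    unsigned char b = 0xfb;   /* 251 */

    printf("a=%d, b=%d\n", a, b);   /* shows the values after promotion to int, e.g. -5 and 251 */

    if ((unsigned char)a == b)      /* compare both as unsigned char: 251 == 251 */
        printf("Same\n");
    else
        printf("Not Same\n");
    return 0;
}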
According to the C Standard (6.5.9 Equality operators)
4 If both of the operands have arithmetic type, the usual arithmetic
conversions are performed....
The usual arithmetic conversions include the integer promotions.
From the C Standard (6.3.1.1 Boolean, characters, and integers)
2 The following may be used in an expression wherever an int or
unsigned int may be used:
...
If an int can represent all values of the original type (as restricted
by the width, for a bit-field), the value is converted to an int;
otherwise, it is converted to an unsigned int. These are called the
integer promotions.58) All other types are unchanged by the integer
promotions.
So in this equality expression
a == b
both operands are converted to the type int. The signed operand (provided that the type char behaves as the type signed char) is converted to the type int by propagating the sign bit.
As a result, the operands have different values due to the difference in the binary representation.
If the type char behaves as the type unsigned char (for example by setting a corresponding option of the compiler) then evidently the operands will be equal.
A (signed) char stores values from -128 to 127 and an unsigned char stores values from 0 to 255,
and 0xfb represents 251 in decimal, which is beyond the range of char a.
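Those limits are available from <limits.h>, so you can check what your implementation actually uses (a standalone snippet of mine):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* CHAR_MIN is 0 where plain char is unsigned, SCHAR_MIN where it is signed. */
    printf("CHAR_MIN=%d CHAR_MAX=%d\n", CHAR_MIN, CHAR_MAX);
    printf("SCHAR_MIN=%d SCHAR_MAX=%d UCHAR_MAX=%d\n", SCHAR_MIN, SCHAR_MAX, UCHAR_MAX);
    return 0;
}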

why the result is "2" of unsigned int (1) - unsigned int (0xFFFFFFFF)

Please look at the following codes:
#include <stdlib.h>
#include <stdio.h>
int main()
{
    unsigned int a = 1;
    unsigned int b = -1;
    printf("0x%X\n", (a - b));
    return 0;
}
The result is 0x2.
I think integer promotion should not happen because both a and b have type unsigned int, but the result beats me; I don't know the reason.
By the way, I know the arithmetic result should be 2 because 1 - (-1) = 2. But the type of b is unsigned int. When -1 is assigned to b, the value of b is actually 0xFFFFFFFF, the maximum value of unsigned int. When a small unsigned value has a larger value subtracted from it, the result is not what I expected.
From the answer below, I think the wraparound explanation is right.
I have now written another test program; it confirms that answer.
#include <stdlib.h>
#include <stdio.h>
int main()
{
    unsigned int c = 1;
    unsigned int d = -1;
    printf("0x%llx\n", (unsigned long long)c - (unsigned long long)d);
    return 0;
}
The result is 0xffffffff00000002, which is what I expected.
unsigned int a = 1;
This initializes a to 1. (Actually, since 1 is of type int, there's an implicit int-to-unsigned conversion, but it's a trivial conversion that doesn't change the value or representation.)
unsigned int b = -1;
This is more interesting. -1 is of type int, and the initialization implicitly converts it from int to unsigned int. Since the mathematical value -1 cannot be represented as an unsigned int, the conversion is non-trivial. The rule in this case is (quoting section 6.3.1.3 of the C standard):
the value is converted by repeatedly adding or subtracting one more
than the maximum value that can be represented in the new type until
the value is in the range of the new type.
Of course it doesn't actually have to do it that way, as long as the result is the same. Adding UINT_MAX+1 ("one more than the maximum value that can be represented in the new type") to -1 yields UINT_MAX. (That addition is defined mathematically; it's not itself subject to any type conversions.)
In fact, assigning -1 to an object of any unsigned type is a good way to get the maximum value of that type without having to refer to the *_MAX macros defined in <limits.h>.
So, assuming a typical 32-bit system, we have a == 1 and b == 0xFFFFFFFF.
printf("0x%X\n", (a-b));
The mathematical result of the subtraction is -0xFFFFFFFE, but that's obviously outside the range of unsigned int. The rules for unsigned arithmetic are similar to the rules for unsigned conversion; the result is 2.
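The trick mentioned above, that assigning -1 to an object of any unsigned type yields that type's maximum, is easy to verify (a small snippet of mine):
#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char uc = -1;
    unsigned short us = -1;
    unsigned int ui = -1;
    unsigned long long ull = -1;

    /* Each comparison prints 1: the stored value equals the type's *_MAX. */
    printf("%d %d %d %d\n",
           uc == UCHAR_MAX, us == USHRT_MAX,
           ui == UINT_MAX, ull == ULLONG_MAX);
    return 0;
}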
Who says you're suffering integer promotion? Let's pretend that your integers are two's complement (likely, though not mandated by the standard) and they're only four bits wide (not possible according to the standard, I'm just using this to simplify things).
                          int   unsigned-int   bit-pattern
                          ---   ------------   -----------
                            1              1          0001
                           -1             15          1111
                                                     ------
   (subtract with borrow)                           1 0010
   (only have four bits)                              0010 -> 2
You can see here that you can get the result you see without any promotion to signed or wider types.
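You can simulate that four-bit arithmetic by masking with 0xF (an illustrative snippet, mine):
#include <stdio.h>

int main(void)
{
    unsigned int a = 1;
    unsigned int b = -1 & 0xF;          /* -1 reduced to 4 bits: 15 (binary 1111) */
    unsigned int diff = (a - b) & 0xF;  /* keep only the low 4 bits of the result */
    printf("%u\n", diff);               /* 2 (binary 0010), matching the table above */
    return 0;
}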
There should be a compiler warning that you probably ignored or turned off, but it is still possible to store -1 in an unsigned integer. Internally, on a 32-bit machine, -1 is stored as 0xffffffff. So if you subtract 0xffffffff from 1, you end up with -0xfffffffe, which wraps around to 2 in unsigned arithmetic. (There are no negative unsigned numbers; a negative value -n is represented as the maximum value plus one, minus n.)
Bottom line: signed or unsigned doesn't matter much for the arithmetic itself; it mainly comes into play when you compare values.
And mathematically speaking, 1 - (-1) = 1+1.
If you subtract a negative number, it is the equivalent of adding a positive number.
a = 1
b = -1
(a-b) = ?
((1)-(-1)) = ?
(1-(-1)) = ?
(1+1) = ?
2 = ?
At first you might think that this isn't allowed, since you specified an unsigned int; however, the signed int constant -1 is converted to unsigned int, so you are effectively storing the same bit pattern (0xFFFFFFFF) into the unsigned int.
Then, in the expression, subtracting that value wraps around in unsigned arithmetic, so the result is the same as if -1 had kept its sign. In effect, you are circumventing the unsigned declaration at every step.

Comparison operation on unsigned and signed integers

See this code snippet
#include <stdio.h>

int main()
{
    unsigned int a = 1000;
    int b = -1;
    if (a > b) printf("A is BIG! %d\n", a - b);
    else printf("a is SMALL! %d\n", a - b);
    return 0;
}
This gives the output: a is SMALL! 1001
I don't understand what's happening here. How does the > operator work here? Why is "a" smaller than "b"? If it is indeed smaller, why do I get a positive number (1001) as the difference?
Binary operations between different integral types are performed within a "common" type defined by so called usual arithmetic conversions (see the language specification, 6.3.1.8). In your case the "common" type is unsigned int. This means that int operand (your b) will get converted to unsigned int before the comparison, as well as for the purpose of performing subtraction.
When -1 is converted to unsigned int the result is the maximal possible unsigned int value (same as UINT_MAX). Needless to say, it is going to be greater than your unsigned 1000 value, meaning that a > b is indeed false and a is indeed small compared to (unsigned) b. The if in your code should resolve to else branch, which is what you observed in your experiment.
The same conversion rules apply to subtraction. Your a-b is really interpreted as a - (unsigned) b and the result has type unsigned int. Such value cannot be printed with %d format specifier, since %d only works with signed values. Your attempt to print it with %d results in undefined behavior, so the value that you see printed (even though it has a logical deterministic explanation in practice) is completely meaningless from the point of view of C language.
Edit: Actually, I could be wrong about the undefined behavior part. According to C language specification, the common part of the range of the corresponding signed and unsigned integer type shall have identical representation (implying, according to the footnote 31, "interchangeability as arguments to functions"). So, the result of a - b expression is unsigned 1001 as described above, and unless I'm missing something, it is legal to print this specific unsigned value with %d specifier, since it falls within the positive range of int. Printing (unsigned) INT_MAX + 1 with %d would be undefined, but 1001u is fine.
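One way to see both points in code, using %u for the unsigned result and an explicit cast where a signed comparison is intended (my own variation on the question's program):
#include <stdio.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    /* Mixed comparison: b is converted to unsigned int (UINT_MAX), so a > b is false. */
    printf("a > b      : %s\n", (a > b) ? "true" : "false");

    /* Signed comparison: cast a to int first (safe here, since 1000 fits in int). */
    printf("(int)a > b : %s\n", ((int)a > b) ? "true" : "false");

    /* The subtraction is also performed in unsigned int; print it with %u. */
    printf("a - b      : %u\n", a - b);   /* 1001 */
    return 0;
}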
On a typical implementation where int is 32-bit, -1 when converted to an unsigned int is 4,294,967,295 which is indeed ≥ 1000.
Even if you do the subtraction in the unsigned world, 1000 - 4,294,967,295 = -4,294,966,295, which wraps modulo 2^32 to 1,001, which is what you get.
That's why gcc will spit a warning when you compare unsigned with signed. (If you don't see a warning, pass the -Wsign-compare flag.)
You are doing unsigned comparison, i.e. comparing 1000 to 2^32 - 1.
The output is signed because of %d in printf.
N.B. sometimes the behavior when you mix signed and unsigned operands is compiler-specific. I think it's best to avoid them and do casts when in doubt.
#include <stdio.h>

int main()
{
    int a = 1000;
    signed int b = -1, c = -2;
    printf("%u\n", (unsigned int)b);   /* 4294967295 on a 32-bit system */
    printf("%u\n", (unsigned int)c);   /* 4294967294 */
    printf("%u\n", (unsigned int)a);   /* 1000 */
    if (1000 > -1) {
        printf("\ntrue");
    }
    else
        printf("\nfalse");
    return 0;
}
For this you need to understand how the usual arithmetic conversions work for relational operators.
When a comparison such as a > b mixes an unsigned int and an int, the int operand is converted to unsigned int (when the ranks are equal, the signed operand is converted to the unsigned type), because the range of unsigned int is greater than what a signed int can hold.
So the -1 is changed into an unsigned number; it becomes a very big number (UINT_MAX). Note that if (1000 > -1) on its own compares two plain ints and is simply true.
An easy way to compare, which may be useful when you cannot get rid of the unsigned declaration (for example, [NSArray count]), is to force the unsigned int to an int.
Please correct me if I am wrong.
if (((int)a)>b) {
....
}
The hardware is designed to compare signed to signed and unsigned to unsigned.
If you want the arithmetic result, convert the unsigned value to a larger signed type first. Otherwise the compiler will assume that the comparison is really between unsigned values.
And -1 is represented as 1111...1111, so it is a very big quantity, the biggest possible, when interpreted as unsigned.
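A sketch of that suggestion, promoting both operands to a wider signed type before comparing (assumes long long is wider than unsigned int, which holds on common platforms):
#include <stdio.h>

int main(void)
{
    unsigned int a = 1000;
    int b = -1;

    /* Both values are exactly representable in long long, so this comparison
       and subtraction have their ordinary mathematical meaning. */
    if ((long long)a > (long long)b)
        printf("A is BIG! %lld\n", (long long)a - b);   /* prints 1001 */
    else
        printf("a is SMALL! %lld\n", (long long)a - b);
    return 0;
}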
While comparing a > b, where a has unsigned int type and b has int type, b is converted to unsigned int, so the signed int value -1 is converted to the maximum value of unsigned int (range: 0 to (2^32) - 1), i.e. 4294967295.
Thus a > b, i.e. (1000 > 4294967295), is false. Hence the else branch, printf("a is SMALL! %d\n", a-b);, executes.

Resources