I am confused about how type promotion happens in the case of printf and in general. I tried the following code:
unsigned char uc = 255;
signed char sc = -128;
printf("unsigned char value = %d \n", uc);
printf("signed char value = %d \n", sc);
This gives the following output:
unsigned char value = 255
signed char value = -128
This has left me wondering how promotion actually takes place and whether sign extension happens or not. If sign extension is done, then 255 should be printed as a negative value (with -128 staying the same); and if no sign extension is done, then -128 should have been printed as a positive value (with 255 staying the same). Please explain.
If a sign extension is done then the value 255 should be printed as negative value
This is where you're wrong - all values of type unsigned char, including 255, can be represented in an int, so the promotion from unsigned char to int just happens without any funny business.
Problems occur when a signed value must be converted to an unsigned value (conversion is a different thing from promotion, and occurs to create a common type for operands). If that signed operand has a negative value, then the conversion to an unsigned type will change the value.
In summary, integer promotion preserves value (including sign), conversion can change value.
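A minimal sketch of the difference (my own variable names; the printed values assume a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned char uc = 255;
    int promoted = uc;                 /* promotion: value preserved, still 255 */

    int negative = -1;
    unsigned int converted = negative; /* conversion: value becomes UINT_MAX */

    printf("%d %u\n", promoted, converted); /* prints: 255 4294967295 */
    return 0;
}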
A variadic function has no information on the expected types for the ... part. Therefore the promotion rules for functions declared without a prototype apply. This means that all types narrower than int are promoted directly to int or unsigned int. So your printf function never sees an (un)signed char.
Sign extension is done. But you can't sign extend an unsigned char because it has no sign bit. The whole point of sign extension is to keep the value the same. Or, if you prefer to think of it this way, every unsigned variable has an implied zero sign bit. So when it's sign-extended to a larger signed type, the sign bit should be zero in the larger type.
Both are promoted to ints - hence keeping the sign.
Sign extension is done.
But in the case of uc there is no sign bit, as it is an unsigned char, so the value is left positive.
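A minimal sketch of both promotions from the question (assuming two's complement, as on virtually all modern systems):

#include <stdio.h>

int main(void)
{
    unsigned char uc = 255;
    signed char sc = -128;

    int from_unsigned = uc; /* zero bits shifted in: still 255 */
    int from_signed = sc;   /* sign-extended: still -128 */

    printf("%d %d\n", from_unsigned, from_signed); /* prints: 255 -128 */
    return 0;
}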
Related
I was trying to print the minimum of int, char, short, and long without using the header file <limits.h>, so bitwise operations seemed like a good choice. But something strange happened.
The statement
printf("The minimum of short: %d\n", ~(((unsigned short)~0) >> 1));
gives me
The minimum of short: -32768
But the statement
printf("The minimum of short: %d\n", ~((~(unsigned short)0) >> 1));
gives me
The minimum of short: 0
This phenomenon also occurs with char, but it does not occur with long or int. Why does this happen?
And it is worth mentioning that I use VS Code as my editor. When I hovered over (unsigned char)0 in the statement
printf("The minimum of char: %d\n", (short)~((~(unsigned char)0) >> 1));
it gives me a hint of (int) 0 instead of the (unsigned char) 0 I expected. Why does this happen?
First of all, none of your code is reliable, and it won't do what you expect.
printf and all other variable-argument functions have a dysfunctional "feature" called the default argument promotions. This means that the actual types of the arguments passed undergo silent promotion. Small integer types (such as char and short) get promoted to int, which is signed. (And float gets promoted to double.) Tl;dr: printf is a nuts function.
Therefore you can cast between various small integer types all you want; there will still be a promotion to int in the end. This is no problem if you use the correct format specifier for the intended type, but you don't: you use %d, which is for int.
In addition, the ~ operator, like most operators in C, performs implicit integer promotion of its operand. See Implicit type promotion rules.
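A quick way to observe this promotion (a sketch; the second size is typically 4 on systems with 32-bit int):

#include <stdio.h>

int main(void)
{
    unsigned char x = 0;
    /* ~x is promoted to int before the complement, so its size is sizeof(int) */
    printf("%zu %zu\n", sizeof x, sizeof ~x); /* typically prints: 1 4 */
    return 0;
}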
That being said, this line ~((~(unsigned short)0) >> 1) does the following:
1. Take the literal 0, which is of type int, and convert it to unsigned short.
2. Implicitly promote that unsigned short back to int through implicit integer promotion.
3. Calculate the bitwise complement of the int value 0. This is 0xFF...FF hex, -1 dec, assuming two's complement.
4. Right-shift this int by 1. Here you invoke implementation-defined behavior by shifting a negative integer: C allows this to be either a logical shift (shift in zeroes) or an arithmetic shift (shift in copies of the sign bit). The result differs from compiler to compiler and is non-portable.
5. You get either 0x7F...FF in the case of a logical shift or 0xFF...FF in the case of an arithmetic shift. Here it seems to be the latter, meaning you still have decimal -1 after the shift.
6. Take the bitwise complement of 0xFF...FF = -1 and get 0.
7. Cast this to short. Still 0.
8. Default argument promotion converts it to int. Still 0.
%d expects an int and therefore prints accordingly. An unsigned short is printed with %hu and a short with %hd. Using the correct format specifier undoes the effect of the default argument promotion.
Advice: study implicit type promotion and avoid using bitwise operators on operands that have signed type.
To simply display the lowest 2's complement value of various signed types, you have to do some trickery with unsigned types, since bitwise operations on their signed version are unreliable. Example:
int shift = sizeof(short)*8 - 1; // 15 on sane systems (16-bit short, 8-bit bytes)
short s = (short) (1u << shift);
printf("%hd\n", s);
This shifts the unsigned int 1u left by 15 bits, then converts the result to short in some "implementation-defined way", meaning that on two's complement systems you'll end up converting 0x8000 to -32768.
Then give printf the right format specifier and you'll get the expected result from there.
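The same approach extends to the other types; a sketch under the same assumptions (two's complement, 8-bit bytes, and the usual implementation-defined conversion behavior):

#include <stdio.h>

int main(void)
{
    signed char cmin = (signed char)(1u << (sizeof(char) * 8 - 1));
    short smin = (short)(1u << (sizeof(short) * 8 - 1));
    int imin = (int)(1u << (sizeof(int) * 8 - 1));

    printf("%hhd %hd %d\n", cmin, smin, imin); /* e.g. -128 -32768 -2147483648 */
    return 0;
}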
In the below snippet, shouldn't the output be 1? Why am I getting output as -1 and 4294967295?
What I understand is, the variable, c, here is of signed type, so shouldn't its value be 1?
char c=0xff;
printf("%d %u",c,c);
c is of signed type. A char is 8 bits, so you have an 8-bit signed quantity with all bits 1. On a two's complement machine, that evaluates to -1.
Some compilers will warn you when you do that sort of thing. If you're using gcc/clang, switch on all the warnings.
Pedant note: On some machines it could have the value 255, should the compiler treat 'char' as unsigned.
You're getting the correct answer.
The %u format specifier indicates that the value will be an unsigned int. The compiler automatically promotes your 8-bit char to a 32-bit int. However, you have to remember that char is a signed type here, so the value 0xff is in fact -1.
When the conversion from char to int occurs, the value is still -1, but it is now the 32-bit representation, which in binary is 11111111 11111111 11111111 11111111, or in hex 0xffffffff.
When that is interpreted as an unsigned integer, all of the bits are obviously preserved because the length is the same, but now it's handled as an unsigned quantity.
0xffffffff = 4294967295 (unsigned)
0xffffffff = -1 (signed)
There are three character types in C, char, signed char, and unsigned char. Plain char has the same representation as either signed char or unsigned char; the choice is implementation-defined. It appears that plain char is signed in your implementation.
All three types have the same size, which is probably 8 bits (CHAR_BIT, defined in <limits.h>, specifies the number of bits in a byte). I'll assume 8 bits.
char c=0xff;
Assuming plain char is signed, the value 0xff (255) is outside the range of type char. Since you can't store the value 255 in a char object, the value is implicitly converted. The result of this conversion is implementation-defined, but is very likely to be -1.
Keep this carefully in mind: 0xff is simply another way to write 255, and 0xff and -1 are two distinct values. You cannot store the value 255 in a char object; its value is -1. Integer constants, whether they're decimal, hexadecimal, or octal, specify values, not representations.
If you really want a one-byte object with the value 0xff, define it as an unsigned char, not as a char.
printf("%d %u",c,c);
When a value of an integer type narrower than int is passed to printf (or to any variadic function), it's promoted to int if int can represent the original type's entire range of values, or to unsigned int if it can't. For type char, it's almost certainly promoted to int. So this call is equivalent to:
printf("%d %u", -1, -1);
The output for the "%d" format is obvious. The output for "%u" is less obvious. "%u" tells printf that the corresponding argument is of type unsigned int, but you've passed it a value of type int. What probably happens is that the representation of the int value is treated as if it were of type unsigned int, most likely yielding UINT_MAX, which happens to be 4294967295 on your system. If you really want to do that, you should convert the value to type unsigned int. This:
printf("%d %u", -1, (unsigned int)-1);
is well defined.
Your two lines of code are playing a lot of games with various types, treating values of one type as if they were of another type, and doing implicit conversions that might yield results that are implementation-defined and/or depend on the choices your compiler happens to make.
Whatever you're trying to do, there's undoubtedly a cleaner way to do it (unless you're just trying to see what your implementation does with this particular code).
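For instance, if the goal was simply to print a byte's value both ways, a cleaner sketch would be:

#include <stdio.h>

int main(void)
{
    unsigned char c = 0xff;                /* holds 255, no conversion surprises */
    printf("%d %u\n", c, (unsigned int)c); /* prints: 255 255 */
    return 0;
}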
Let us start from OP's assumption that "c, here is of signed type".
char c=0xff; // Implementation defined behavior.
0xff is a hexadecimal constant with the value of 255 and type of int.
... the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised. §6.3.1.3 ¶3
So right off, the value of c is implementation defined (ID). Let us assume the common ID behavior of 8-bit wrap-around, so c --> -1.
A signed char is promoted to int when passed as a variadic argument, so printf("%d %u", c, c); is the same as printf("%d %u", -1, -1);. Printing -1 with "%d" is not an issue, and "-1" is printed.
Printing an int -1 with "%u" is undefined behavior (UB), as it is a mismatched specifier/type and does not fall under the exception of being representable in both types. The common behavior is to print the value as if it had been converted to unsigned before being passed. When UINT_MAX == 4294967295 (4 bytes), that prints the value -1 + (UINT_MAX + 1), or "4294967295".
So with ID behavior and UB, you get a result, but robust code would be rewritten to depend on neither.
Suppose I have the following C code.
unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
What implicit conversions are going on here, and is this code safe for all values of u and i? (Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.)
Short Answer
Your i will be converted to an unsigned integer by adding UINT_MAX + 1, then the addition will be carried out with the unsigned values, resulting in a large result (depending on the values of u and i).
Long Answer
According to the C99 Standard:
6.3.1.8 Usual arithmetic conversions
1. If both operands have the same type, then no further conversion is needed.
2. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
3. Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type.
4. Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type.
5. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
In your case, we have one unsigned int (u) and one signed int (i). Referring to (3) above, since both operands have the same rank, your i will need to be converted to an unsigned integer.
6.3.1.3 Signed and unsigned integers
1. When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
3. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
Now we need to refer to (2) above. Your i will be converted to an unsigned value by adding UINT_MAX + 1. So the result will depend on how UINT_MAX is defined on your implementation. It will be large, but it will not overflow, because:
6.2.5 (9)
A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
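A small demonstration of that modulo reduction (a sketch):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned int x = UINT_MAX;
    x = x + 1u;        /* reduced modulo UINT_MAX + 1: wraps to 0, never overflows */
    printf("%u\n", x); /* prints: 0 */
    return 0;
}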
Bonus: Arithmetic Conversion Semi-WTF
#include <stdio.h>

int main(void)
{
    unsigned int plus_one = 1;
    int minus_one = -1;

    /* minus_one is converted to unsigned int (UINT_MAX), so 1 < UINT_MAX holds */
    if (plus_one < minus_one)
        printf("1 < -1");
    else
        printf("boring");

    return 0;
}
You can use this link to try this online: https://repl.it/repls/QuickWhimsicalBytes
Bonus: Arithmetic Conversion Side Effect
Arithmetic conversion rules can be used to get the value of UINT_MAX by initializing an unsigned value to -1, i.e.:
unsigned int umax = -1; // umax set to UINT_MAX
This is guaranteed to be portable regardless of the signed number representation of the system because of the conversion rules described above. See this SO question for more information: Is it safe to use -1 to set all bits to true?
Conversion from signed to unsigned does not necessarily just copy or reinterpret the representation of the signed value. Quoting the C standard (C99 6.3.1.3):
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
For the two's complement representation that's nearly universal these days, the rules do correspond to reinterpreting the bits. But for other representations (sign-and-magnitude or ones' complement), the C implementation must still arrange for the same result, which means that the conversion can't just copy the bits. For example, (unsigned)-1 == UINT_MAX, regardless of the representation.
In general, conversions in C are defined to operate on values, not on representations.
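A sketch of this value-based guarantee (it holds on any conforming implementation, whatever the representation):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* the conversion operates on the value -1, not on its bit pattern */
    printf("%d\n", (unsigned int)-1 == UINT_MAX); /* prints: 1 */
    return 0;
}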
To answer the original question:
unsigned int u = 1234;
int i = -5678;
unsigned int result = u + i;
The value of i is converted to unsigned int, yielding UINT_MAX + 1 - 5678. This value is then added to the unsigned value 1234, yielding UINT_MAX + 1 - 4444.
(Unlike unsigned overflow, signed overflow invokes undefined behavior. Wraparound is common, but is not guaranteed by the C standard -- and compiler optimizations can wreak havoc on code that makes unwarranted assumptions.)
Referring to The C Programming Language, Second Edition (ISBN 0131103628),
Your addition operation causes the int to be converted to an unsigned int.
Assuming two's complement representation and equally sized types, the bit pattern does not change.
Conversion from unsigned int to signed int is implementation dependent. (But it probably works the way you expect on most platforms these days.)
The rules are a little more complicated in the case of combining signed and unsigned of differing sizes.
When converting from signed to unsigned there are two possibilities. Numbers that were originally positive remain (or are interpreted as) the same value. Numbers that were originally negative will now be interpreted as larger positive numbers.
When one unsigned and one signed variable are added (or any binary operation) both are implicitly converted to unsigned, which would in this case result in a huge result.
So it is safe in the sense that the result might be huge and wrong, but it will never crash.
As was previously answered, you can cast back and forth between signed and unsigned without a problem. The border case for signed integers is -1 (0xFFFFFFFF). Try adding and subtracting from that and you'll find that you can cast back and have it be correct.
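For example (a sketch; the conversion back to int is implementation-defined, but on common two's complement systems it round-trips):

#include <stdio.h>

int main(void)
{
    unsigned int u = (unsigned int)-1; /* UINT_MAX, the border case */
    u = u - 5u;                        /* wraps modulo UINT_MAX + 1 */
    int back = (int)u;                 /* commonly -6 on two's complement systems */
    printf("%d\n", back);
    return 0;
}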
However, if you are going to be casting back and forth, I would strongly advise naming your variables such that it is clear what type they are, e.g.:
int iValue, iResult;
unsigned int uValue, uResult;
It is far too easy to get distracted by more important issues and forget which variable is what type if they are named without a hint. You don't want to cast to an unsigned and then use that as an array index.
What implicit conversions are going on here,
i will be converted to an unsigned integer.
and is this code safe for all values of u and i?
Safe in the sense of being well-defined, yes (see https://stackoverflow.com/a/50632/5083516).
The rules are written in typically hard-to-read standards-speak, but essentially, whatever representation was used in the signed integer, the unsigned integer will contain a two's complement representation of the number.
Addition, subtraction and multiplication will work correctly on these numbers, resulting in another unsigned integer containing a two's complement number representing the "real result".
Division and casting to larger unsigned integer types will have well-defined results, but those results will not be two's complement representations of the "real result".
(Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.)
While conversions from signed to unsigned are defined by the standard, the reverse is implementation-defined. Both gcc and msvc define the conversion such that you will get the "real result" when converting a two's complement number stored in an unsigned integer back to a signed integer. I expect you will only find other behaviour on obscure systems that don't use two's complement for signed integers.
https://gcc.gnu.org/onlinedocs/gcc/Integers-implementation.html#Integers-implementation
https://msdn.microsoft.com/en-us/library/0eex498h.aspx
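Putting that together for the original snippet (a sketch; the final conversion back to int is implementation-defined, commonly yielding the "real result"):

#include <stdio.h>

int main(void)
{
    unsigned int u = 1234;
    int i = -5678;
    unsigned int result = u + i; /* well-defined: UINT_MAX + 1 - 4444 */
    int real = (int)result;      /* implementation-defined; commonly -4444 */
    printf("%u %d\n", result, real);
    return 0;
}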
Horrible Answers Galore
Ozgur Ozcitak
When you cast from signed to unsigned (and vice versa) the internal representation of the number does not change. What changes is how the compiler interprets the sign bit.
This is completely wrong.
Mats Fredriksson
When one unsigned and one signed variable are added (or any binary operation) both are implicitly converted to unsigned, which would in this case result in a huge result.
This is also wrong. Unsigned ints may be promoted to ints should they have equal precision due to padding bits in the unsigned type.
smh
Your addition operation causes the int to be converted to an unsigned int.
Wrong. Maybe it does and maybe it doesn't.
Conversion from unsigned int to signed int is implementation dependent. (But it probably works the way you expect on most platforms these days.)
Wrong. It is either undefined behavior if it causes overflow or the value is preserved.
Anonymous
The value of i is converted to unsigned int ...
Wrong. Depends on the precision of an int relative to an unsigned int.
Taylor Price
As was previously answered, you can cast back and forth between signed and unsigned without a problem.
Wrong. Trying to store a value outside the range of a signed integer results in undefined behavior.
Now I can finally answer the question.
Should the precision of int be equal to that of unsigned int, u will be promoted to a signed int and you will get the value -4444 from the expression (u+i). Should u and i have other values, you may get overflow and undefined behavior, but with those exact numbers you will get -4444 [1]. This value has type int. But you are trying to store that value into an unsigned int, so it will be converted to unsigned int, and the resulting value will be (UINT_MAX+1) - 4444.
Should the precision of unsigned int be greater than that of int, the signed int will be promoted to an unsigned int, yielding the value (UINT_MAX+1) - 5678, which will be added to the other unsigned int 1234. Should u and i have other values which make the expression fall outside the range {0..UINT_MAX}, the value (UINT_MAX+1) will either be added or subtracted until the result DOES fall inside the range {0..UINT_MAX}, and no undefined behavior will occur.
What is precision?
Integers may have padding bits, a sign bit, and value bits. Unsigned integers obviously do not have a sign bit. unsigned char is further guaranteed not to have padding bits. The number of value bits an integer has is how much precision it has.
[Gotchas]
The sizeof operator alone cannot be used to determine the precision of an integer if padding bits are present. And the size of a byte does not have to be an octet (eight bits), as defined by C99.
[1] The overflow may occur at one of two points: either before the addition (during promotion), when you have an unsigned int too large to fit inside an int; or after the addition, since even if the unsigned int was within the range of an int, the sum may still overflow.
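A sketch of counting value bits at run time, which sidesteps the sizeof gotcha above (padding bits never contribute to the value, so they are never counted):

#include <stdio.h>

int main(void)
{
    unsigned int max = -1; /* converts to UINT_MAX: all value bits set */
    int bits = 0;
    while (max != 0) {
        max >>= 1;
        ++bits;
    }
    printf("unsigned int has %d value bits\n", bits); /* typically 32 */
    return 0;
}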
As far as I know, the range of unsigned char in C is 0-255, but when I executed the code below it printed 256 as output. How is this possible? I got this code from the "Test Your C Skills" book, which says the char size is one byte.
#include <stdio.h>

int main(void)
{
    unsigned char i = 0x80;
    printf("\n %d", i << 1);
    return 0;
}
Because the operands to <<* undergo integer promotion. It's effectively equivalent to (int)i << 1.
* This is true for most operators in C.
Several things are happening.
First, the expression i << 1 has type int, not char: the operands of << undergo integer promotion, so i is "promoted" to int, and 0x100 is well within the range of a signed int.
Secondly, the %d conversion specifier expects its corresponding argument to have type int. So the argument is being interpreted as an integer.
If you want to print the numeric value of a signed char, use the conversion specifier %hhd. If you want to print the numeric value of an unsigned char, use %hhu.
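For example (a minimal sketch):

#include <stdio.h>

int main(void)
{
    signed char sc = -1;
    unsigned char uc = 255;
    /* %hhd and %hhu tell printf to convert the promoted int back to char width */
    printf("%hhd %hhu\n", sc, uc); /* prints: -1 255 */
    return 0;
}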
For arithmetic operations, char is promoted to int before the operation is performed. See the standard for details. Simplified: the "smaller" type is first brought to the "larger" type before the operation is performed. For the shift operators, the resulting type is that of the promoted left operand, while for e.g. + and other "combining" operators it is the larger of both, but at least int. The latter means that char and short (and their unsigned counterparts) are always promoted to int, with the result being int, too. (Simplified; for details please read the standard.)
Note also that %d takes an int argument, not a char.
Additional notes:
unsigned char does not necessarily have the range 0..255. Check limits.h; you will find UCHAR_MAX there.
char and "byte" are used synonymously in the standard, but neither is necessarily 8 bits wide (just very likely on modern general-purpose CPUs).
As others have already explained, the statement printf("\n %d", i << 1); performs integer promotion, so left-shifting the integer value 128 by one results in 256. You can try the following code to print the maximum value of unsigned char. The maximum value of unsigned char has all bits set, so a bitwise NOT operation using ~ gives you the maximum value of 255.
#include <stdio.h>

int main(void)
{
    unsigned char ch = ~0;   /* int -1 converted to unsigned char: UCHAR_MAX */
    printf("ch = %d\n", ch);
    return 0;
}
Output:
M-40UT:Desktop$ ./a.out
ch = 255