Different Data Types - Signed and Unsigned - c

I just executed the following code
#include <stdio.h>

int main(void)
{
char a = 0xfb;
unsigned char b = 0xfb;
printf("a=%c,b=%c", a, b);
if (a == b) {
printf("\nSame");
}
else {
printf("\nNot Same");
}
return 0;
}
For this code I got the output
a=?,b=?
Not Same
Why don't I get Same, and what are the values of a and b?

The line if (a == b)... promotes the characters to integers before comparison, so the signedness of the character affects how that happens. The unsigned character 0xFB becomes the integer 251; the signed character 0xFB becomes the integer -5. Thus, they are unequal.

There are 2 cases to consider:
if the char type is unsigned by default, both a and b are assigned the value 251 and the program will print Same.
if the char type is signed by default, which is alas the most common case, the definition char a = 0xfb; has implementation-defined behavior: 0xfb (251 in decimal) is out of range for an 8-bit signed char type (typically -128 to 127). Most likely the value -5 will be stored into a, and a == b evaluates to 0 because both arguments are promoted to int before the comparison, so -5 == 251 is false.
The behavior of printf("a=%c,b=%c", a, b); is also system dependent, as the non-ASCII character codes -5 and 251 may print in unexpected ways, if at all. Note however that both will print the same, as the %c format specifies that the argument is converted to unsigned char before printing. It would be safer and more explicit to try printf("a=%d, b=%d\n", a, b);
With gcc or clang, you can try recompiling your program with -funsigned-char to see how the behavior will differ.

According to the C Standard (6.5.9 Equality operators)
4 If both of the operands have arithmetic type, the usual arithmetic
conversions are performed....
The usual arithmetic conversions include the integer promotions.
From the C Standard (6.3.1.1 Boolean, characters, and integers)
2 The following may be used in an expression wherever an int or
unsigned int may be used:
...
If an int can represent all values of the original type (as restricted
by the width, for a bit-field), the value is converted to an int;
otherwise, it is converted to an unsigned int. These are called the
integer promotions.58) All other types are unchanged by the integer
promotions.
So in this equality expression
a == b
both operands are converted to the type int. The signed operand (provided that the type char behaves as the type signed char) is converted to the type int by propagating the sign bit.
As a result the operands have different values due to the difference in the binary representation.
If the type char behaves as the type unsigned char (for example by setting a corresponding option of the compiler) then evidently the operands will be equal.

char typically stores values from -128 to 127 and unsigned char stores values from 0 to 255,
and 0xfb represents 251 in decimal, which is beyond the limit of char a (when char is signed).

Related

c: type casting char values into unsigned short

Starting with a code snippet:
char a = 0x80;
unsigned short b;
b = (unsigned short)a;
printf ("0x%04x\r\n", b); // => 0xff80
To my current understanding, char is by definition neither a signed char nor an unsigned char but sort of a third type of signedness.
How does it come about that a is first sign-extended from (maybe platform-dependent) 8 bits of storage to (again maybe platform-specific) 16 bits of a signed short, and then converted to an unsigned short?
Is there a C standard that determines the order of expansion?
Does this standard give any guidance on how to deal with the third type of signedness of a "pure" char (I once called it an X-char, X for undetermined signedness) so that results are at least deterministic?
PS: if I insert an (unsigned char) cast in front of the a in the assignment line, then the result in the printing line is indeed changed to 0x0080. Thus only two type casts in a row provide what might be the intended result.
The type char is not a "third" signedness. It is either signed char or unsigned char, and which one it is is implementation defined.
This is dictated by section 6.2.5p15 of the C standard:
The three types char , signed char , and unsigned char are
collectively called the character types. The implementation
shall define char to have the same range, representation, and
behavior as either signed char or unsigned char.
It appears that on your implementation, char is the same as signed char, so because the value is negative and because the destination type is unsigned it must be converted.
Section 6.3.1.3 dictates how conversion between integer types occur:
1 When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
2 Otherwise, if the new type is unsigned, the value is
converted by repeatedly adding or subtracting one more than
the maximum value that can be represented in the new type
until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be
represented in it; either the result is implementation-defined or
an implementation-defined signal is raised.
Since the value 0x80 == -128 cannot be represented in an unsigned short the conversion in paragraph 2 occurs.
char has implementation-defined signedness. It is either signed or unsigned, depending on compiler. It is true, in a way, that char is a third character type, see this. char has an indeterministic (non-portable) signedness and therefore should never be used for storing raw numbers.
But that doesn't matter in this case.
On your compiler, char is signed.
char a = 0x80; forces a conversion from the type of 0x80, which is int, to char, in a compiler-specific manner. Normally on 2's complement systems, that will mean that the char gets the value -128, as seems to be the case here.
b = (unsigned short)a; forces a conversion from char to unsigned short 1). C17 6.3.1.3 Signed and unsigned integers then says:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
One more than the maximum value would be 65536. So you can think of this as -128 + 65536 = 65408.
The unsigned hex representation of 65408 is 0xFF80. No sign extension takes place anywhere!
1) The cast is not needed. When both operands of = are arithmetic types, as in this case, the right operand is implicitly converted to the type of the left operand (C17 6.5.16.1 §2).

C typecasting from a signed char to int type

In the below snippet, shouldn't the output be 1? Why am I getting output as -1 and 4294967295?
What I understand is, the variable, c, here is of signed type, so shouldn't its value be 1?
char c=0xff;
printf("%d %u",c,c);
c is of signed type. A char is 8 bits on virtually all modern machines. So you have an 8-bit signed quantity, with all bits 1. On a two's complement machine, that evaluates to -1.
Some compilers will warn you when you do that sort of thing. If you're using gcc/clang, switch on all the warnings.
Pedant note: On some machines it could have the value 255, should the compiler treat 'char' as unsigned.
You're getting the correct answer.
The %u format specifier indicates that the value will be an unsigned int. The compiler automatically promotes your 8-bit char to a 32-bit int. However, you have to remember that char is a signed type on your platform, so a value of 0xff is in fact -1.
When the conversion from char to int occurs, the value is still -1, but it's now the 32-bit representation, which in binary is 11111111 11111111 11111111 11111111, or in hex 0xffffffff
When that is interpreted as an unsigned integer, all of the bits are obviously preserved because the length is the same, but now it's handled as an unsigned quantity.
0xffffffff = 4294967295 (unsigned)
0xffffffff = -1 (signed)
There are three character types in C, char, signed char, and unsigned char. Plain char has the same representation as either signed char or unsigned char; the choice is implementation-defined. It appears that plain char is signed in your implementation.
All three types have the same size, which is probably 8 bits (CHAR_BIT, defined in <limits.h>, specifies the number of bits in a byte). I'll assume 8 bits.
char c=0xff;
Assuming plain char is signed, the value 0xff (255) is outside the range of type char. Since you can't store the value 255 in a char object, the value is implicitly converted. The result of this conversion is implementation-defined, but is very likely to be -1.
Keep this carefully in mind: 0xff is simply another way to write 255, and 0xff and -1 are two distinct values. You cannot store the value 255 in a char object; its value is -1. Integer constants, whether they're decimal, hexadecimal, or octal, specify values, not representations.
If you really want a one-byte object with the value 0xff, define it as an unsigned char, not as a char.
printf("%d %u",c,c);
When a value of an integer type narrower than int is passed to printf (or to any variadic function), it's promoted to int if that type can hold the type's entire range of values, or to unsigned int if it can't. For type char, it's almost certainly promoted to int. So this call is equivalent to:
printf("%d %u", -1, -1);
The output for the "%d" format is obvious. The output for "%u" is less obvious. "%u" tells printf that the corresponding argument is of type unsigned int, but you've passed it a value of type int. What probably happens is that the representation of the int value is treated as if it were of type unsigned int, most likely yielding UINT_MAX, which happens to be 4294967295 on your system. If you really want to do that, you should convert the value to type unsigned int. This:
printf("%d %u", -1, (unsigned int)-1);
is well defined.
Your two lines of code are playing a lot of games with various types, treating values of one type as if they were of another type, and doing implicit conversions that might yield results that are implementation-defined and/or depend on the choices your compiler happens to make.
Whatever you're trying to do, there's undoubtedly a cleaner way to do it (unless you're just trying to see what your implementation does with this particular code).
Let us start with the assumption using OP's "c, here is of signed type"
char c=0xff; // Implementation defined behavior.
0xff is a hexadecimal constant with the value of 255 and type of int.
... the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised. §6.3.1.3 ¶3
So right off, the value of c is implementation defined (ID). Let us assume the common ID behavior of 8-bit wrap-around, so c --> -1.
A signed char will be promoted to int as part of being a variadic argument, so printf("%d %u",c,c); is the same as printf("%d %u",-1,-1);. Printing the -1 with "%d" is not an issue and "-1" is printed.
Printing an int -1 with "%u" is undefined behavior (UB), as it is a mismatched specifier/type and does not fall under the exception of being representable in both types. The common behavior is to print the value as if it had been converted to unsigned before being passed. When UINT_MAX == 4294967295 (4 bytes), that prints the value -1 + (UINT_MAX + 1), or "4294967295".
So with ID and UB, you get a result, but robust code would be re-written to depend on neither.

Range of unsigned char in C language

As per my knowledge the range of unsigned char in C is 0-255, but when I executed the below code it printed 256 as output. How is this possible? I got this code from the "test your C skill" book, which says char size is one byte.
#include <stdio.h>

int main(void)
{
unsigned char i = 0x80;
printf("\n %d", i << 1);
return 0;
}
Because the operands to <<* undergo integer promotion. It's effectively equivalent to (int)i << 1.
* This is true for most operators in C.
Several things are happening.
First, the expression i << 1 has type int, not char; the operands of << undergo the integer promotions, so i is "promoted" to int, and 0x100 is well within the range of a signed int.
Secondly, the %d conversion specifier expects its corresponding argument to have type int. So the argument is being interpreted as an integer.
If you want to print the numeric value of a signed char, use the conversion specifier %hhd. If you want to print the numeric value of an unsigned char, use %hhu.
For arithmetic operations, char is promoted to int before the operation is performed (see the standard for details). Simplified: the "smaller" type is first brought to the "larger" type before the operation is performed. For the shift operators, the resulting type is that of the promoted left operand, while for e.g. + and other "combining" operators it is the larger of both, but at least int. The latter means that char and short (and their unsigned counterparts) are always promoted to int, with the result being int, too. (Simplified; for details please read the standard.)
Note also that %d takes an int argument, not a char.
Additional notes:
unsigned char has not necessarily the range 0..255. Check limits.h, you will find UCHAR_MAX there.
char and "byte" are synonymously used in the standard, but neither are necessarily 8 bits wide (just very likely for modern general purpose CPUs).
As others have already explained, the statement printf("\n %d", i << 1); does integer promotion, so left-shifting the integer value 128 by one results in 256. You could try the following code to print the maximum value of unsigned char. The maximum value of unsigned char has all bits set, so a bitwise NOT operation using ~ gives the maximum value of 255.
#include <stdio.h>

int main()
{
unsigned char ch = ~0;
printf("ch = %d\n", ch);
return 0;
}
Output:-
M-40UT:Desktop$ ./a.out
ch = 255

C cast and char signedness

So lately I read about an issue regarding the three distinct character types in C: char, unsigned char, and signed char. The problem is not something I have experienced up till now; my program works correctly on all tested computers and only targets little-endian (basically all modern desktops and servers using Windows/Linux, right?). I frequently reuse a char array I defined for holding a "string" (not a real string of course) as temporary variables. E.g. instead of adding another char to the stack I just reuse one of the members, like array[0]. However, I based this tactic on the assumption that a char would always be signed, until I read today that it actually depends on the implementation. What will happen if I now have a char and I assign a negative value to it?
char unknownsignedness = -1;
If I wrote
unsigned char A = -1;
I think that the C-style cast will simply reinterpret the bits and the value that A represents as an unsigned type becomes different. Am I right that these C-Style casts are simply reinterpretation of bits? I am now referring to signed <-> unsigned conversions.
So if an implementation has char as unsigned, would my program stop working as intended? Take the last variable, if I now do
if (A == -1)
I am now comparing a unsigned char to a signed char value, so will this simply compare the bits not caring about the signedness or will this return false because obviously A cannot be -1? I am confused what happens in this case. This is also my greatest concern, as I use chars like this frequently.
The following code prints No:
#include <stdio.h>
int
main()
{
unsigned char a;
a = -1;
if(a == -1)
printf("Yes\n");
else
printf("No\n");
return 0;
}
The code a = -1 assigns an implementation-defined value to a; on most machines, a will be 255. The test a == -1 compares an unsigned char to an int, so the usual promotion rules apply; hence, it is interpreted as
(int)a == -1
Since a is 255, (int)a is still 255, and the test yields false.
unsigned char a = -1;
ISO/IEC 9899:1999 says in 6.3.1.3/2:
if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type
We add (UCHAR_MAX+1) to -1 once, and the result is UCHAR_MAX, which is obviously in range for unsigned char.
if (a == -1)
There's a long passage in 6.3.1.8/1:
If both operands have the same type, then no further conversion is needed.
Otherwise, if both operands have signed integer types or both have unsigned
integer types, the operand with the type of lesser integer conversion rank is
converted to the type of the operand with greater rank.
Otherwise, if the operand that has unsigned integer type has rank greater or
equal to the rank of the type of the other operand, then the operand with
signed integer type is converted to the type of the operand with unsigned
integer type.
Otherwise, if the type of the operand with signed integer type can represent
all of the values of the type of the operand with unsigned integer type, then
the operand with unsigned integer type is converted to the type of the
operand with signed integer type.
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
The rank of unsigned char is less than that of int.
If int can represent all the values that unsigned char can (which is usually the case), then both operands are converted to int, and the comparison returns false.
If int cannot represent all values in unsigned char, which can happen on rare machines with sizeof(int)==sizeof(char), then both are converted to unsigned int, -1 gets converted to UINT_MAX which happens to be the same as UCHAR_MAX, and the comparison returns true.
unsigned char A = -1;
results in 255. There is no reinterpretation upon assignment or initialization. A -1 is just a bunch of 1 bits in two's complement notation and 8 of them are copied verbatim.
Comparisons are a bit different, as the literal -1 is of int type.
if (A == -1)
will do a promotion (implicit cast) (int)A before comparison, so you end up comparing 255 with -1. Not equal.
And yes, you have to be careful with plain char.
I think this question is best answered by a quick example (warning: C++, but see explanation for my reasoning):
char c = -1;
unsigned char u = -1;
signed char s = -1;
if (c == u)
printf("c == u\n");
if (s == u)
printf("s == u\n");
if (s == c)
printf("s == c\n");
if (static_cast<unsigned char>(s) == u)
printf("(unsigned char)s == u\n");
if (c == static_cast<char>(u))
printf("c == (char)u\n");
The output:
s == c
(unsigned char)s == u
c == (char)u
C is treating the values differently when used as-is, but you are correct in that casting will just reinterpret the bits. I used a C++ static_cast here instead to show that the compiler is okay with doing this casting. In C, you would just cast by prefixing the type in parenthesis. There is no compiler checking to ensure that the cast is safe in C.

Regarding type safety when storing an unsigned char value in char variable

I have a char array holding several characters. I want to compare one of these characters with an unsigned char variable. For example:
char myarr[] = { 20, 14, 5, 6, 42 };
const unsigned char foobar = 133;
myarr[2] = foobar;
if(myarr[2] == foobar){
printf("You win a shmoo!\n");
}
Is this comparison type safe?
I know from the C99 standard that char, signed char, and unsigned char are three different types (section 6.2.5 paragraph 14).
Nevertheless, can I safely convert between unsigned char and char, and back, without losing precision and without risking undefined (or implementation-defined) behavior?
In section 6.2.5 paragraph 15:
The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
I'm afraid that if char is defined as a signed char, then myarr[2] = foobar could result in an implementation-defined value that will not be converted correctly back to the original unsigned char value; for example, an implementation may always result in the value 42 regardless of the unsigned value involved.
Does this mean that it is not safe to store an unsigned value in a signed variable of the same type?
Also what is an implementation-defined signal; does this mean an implementation could simply end the program in this case?
In section 6.3.1.1 paragraph 1:
-- The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
-- The rank of any unsigned integer type shall equal the rank of the corresponding
signed integer type, if any.
In section 6.2.5 paragraph 8:
For any two integer types with the same signedness and different integer conversion rank
(see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a
subrange of the values of the other type.
In section 6.3.1 paragraph 2:
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int.
In section 6.3.1.8 paragraph 1:
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
The range of char is guaranteed to be the same range as that of signed char or unsigned char, which are both subranges of int and unsigned int respectively as a result of their smaller integer conversion rank.
Since, integer promotions rules dictate that char, signed char, and unsigned char be promoted to at least int before being evaluated, does this mean that char could maintain its "signedness" throughout the comparision?
For example:
signed char foo = -1;
unsigned char bar = 255;
if(foo == bar){
printf("same\n");
}
Does foo == bar evaluate to a false value, even if -1 is equivalent to 255 when an explicit (unsigned char) cast is used?
UPDATE:
In section J.3.5 paragraph 1 regarding which cases result in implementation-defined values and behavior:
-- The result of, or the signal raised by, converting an integer to a signed integer type
when the value cannot be represented in an object of that type (6.3.1.3).
Does this mean that not even an explicit conversion is safe?
For example, could the following code result in implementation-defined behavior since char could be defined as a signed integer type:
char blah = (char)255;
My original post is rather broad and consists of many specific questions of which I should have given each its own page. However, I address and answer each question here so future visitors can grok the answers more easily.
Answer 1
Question:
Is this comparison type safe?
The comparison between myarr[2] and foobar in this particular case is safe since both variables hold unsigned values. In general, however, this is not true.
For example, suppose an implementation defines char to have the same behavior as signed char, and int is able to represent all values representable by unsigned char and signed char.
char foo = -25;
unsigned char bar = foo;
if(foo == bar){
printf("This line of text will not be printed.\n");
}
Although bar is set equal to foo, and the C99 standard guarantees that there is no loss of precision when converting from signed char to unsigned char (see Answer 2), the foo == bar conditional expression will evaluate false.
This is due to the nature of integer promotion as required by section 6.3.1 paragraph 2 of the C99 standard:
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int.
Since in this implementation int can represent all values of both signed char and unsigned char, the values of both foo and bar are converted to type int before being evaluated. Thus the resulting conditional expression is -25 == 231 which evaluates to false.
Answer 2
Question:
Nevertheless, can I safely convert between unsigned char and char, and back, without losing precision and without risking undefined (or implementation-defined) behavior?
You can safely convert from char to unsigned char without losing precision (nor width nor information), but converting in the other direction -- unsigned char to char -- can lead to implementation-defined behavior.
The C99 standard makes certain guarantees which enable us to convert safely from char to unsigned char.
In section 6.2.5 paragraph 15:
The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.
Here, we are guaranteed that char will have the same range, representation, and behavior as signed char or unsigned char. If the implementation chooses the unsigned char option, then the conversion from char to unsigned char is essentially that of unsigned char to unsigned char -- thus no width nor information is lost and there are no issues.
The conversion for the signed char option is not as intuitive, but is implicitly guaranteed to preserve precision.
In section 6.2.5 paragraph 6:
For each of the signed integer types, there is a corresponding (but different) unsigned
integer type (designated with the keyword unsigned) that uses the same amount of
storage (including sign information) and has the same alignment requirements.
In 6.2.6.1 paragraph 3:
Values stored in unsigned bit-fields and objects of type unsigned char shall be
represented using a pure binary notation.
In section 6.2.6.2 paragraph 2:
For signed integer types, the bits of the object representation shall be divided into three
groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as
the same bit in the object representation of the corresponding unsigned type (if there are
M value bits in the signed type and N in the unsigned type, then M <= N).
First, signed char is guaranteed to occupy the same amount of storage as an unsigned char, as are all signed integers in respect to their unsigned counterparts.
Second, unsigned char is guaranteed to have a pure binary representation (i.e. no padding bits and no sign bit).
signed char is required to have exactly one sign bit, and no more than the same number of value bits as unsigned char.
Given these three facts, it follows that the signed char type has at most one fewer value bit than the unsigned char type. Consequently, signed char can safely be converted to unsigned char with not only no loss of precision, but no loss of width or information as well:
unsigned char has storage size of N bits.
signed char must have the same storage size of N bits.
unsigned char has no padding or sign bits and therefore has N value bits
signed char can have at most N non-padding bits, and must allocate exactly one bit as the sign bit.
signed char can have at most N-1 value bits and exactly one sign bit
All signed char bits therefore match up one-to-one to the respective unsigned char value bits; in other words, for any given signed char value, there is a unique unsigned char representation.
/* binary representation prefix: 0b */
(signed char)(-25) = 0b11100111
(unsigned char)(231) = 0b11100111
Unfortunately, converting from unsigned char to char can lead to implementation-defined behavior. For example, if char is defined by the implementation to behave as signed char, then an unsigned char variable may hold a value that is outside the range of values representable by a signed char. In such cases, either the result is implementation-defined or an implementation-defined signal is raised.
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
Answer 3
Question:
Does this mean that it is not safe to store an unsigned value in a signed variable of the same type?
Trying to convert an unsigned type value to a signed type value can result in implementation-defined behavior if the unsigned type value cannot be represented in the new signed type.
unsigned foo = UINT_MAX;
signed bar = foo; /* possible implementation-defined behavior */
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
An implementation-defined result would be any value within the range of values representable by the new signed type. An implementation could theoretically return the same value consistently (e.g. 42) for these cases, and thus loss of information occurs -- i.e. there is no guarantee that converting from unsigned to signed and back to unsigned will result in the same original unsigned value.
An implementation-defined signal is that which conforms to the rules laid out in section 7.14 of the C99 standard; an implementation is permitted to define additional conforming signals which are not explicitly enumerated by the C99 standard.
In this particular case, an implementation could theoretically raise the SIGTERM signal which requests the termination of the program. Thus, attempting to convert an unsigned type value to signed type could result in a program termination.
Answer 4
Question:
Does foo == bar evaluate to a false value, even if -1 is equivalent to 255 when an explicit (unsigned char) cast is used?
Consider the following code:
signed char foo = -1;
unsigned char bar = 255;
if((unsigned char)foo == bar){
printf("same\n");
}
Although signed char and unsigned char values are promoted to at least int before the evaluation of a conditional expression, the explicit unsigned char cast will convert the signed char value to unsigned char before the integer promotions occur. Furthermore, converting to an unsigned value is well-defined in the C99 standard and does not lead to implementation-defined behavior.
In section 6.3.1.3 paragraph 2:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Thus the conditional expression essentially becomes 255 == 255, which evaluates to true.
Answer 5
Questions:
Does this mean that not even an explicit conversion is safe?
In general, an explicit cast to char for a value outside the range of values representable by signed char can lead to implementation-defined behavior (see Answer 3). A conversion need not be implicit for section 6.3.1.3 paragraph 3 of the C99 standard to apply.
"does this mean that char could maintain its 'signedness' throughout the comparison?" yes; -1 as a signed char will be promoted to a signed int, which will retain its -1 value. As for the unsigned char, it will also keep its 255 value when being promoted, so yes, the comparison will be false. If you want it to evaluate to true, you will need an explicit cast.
It has to do with how the memory for the chars is stored: in an unsigned char, all 8 bits are used to represent the value, while a signed char uses only 7 bits for the number and the 8th bit to represent the sign.
For an example, lets take a simpler 3 bit value (I will call this new value type tinychar):
bits unsigned signed
000 0 0
001 1 1
010 2 2
011 3 3
100 4 -4
101 5 -3
110 6 -2
111 7 -1
By looking at this chart, you can see the difference in value between a signed and an unsigned tinychar based on how the bits are arranged. Up until you start getting into the negative range, the values are identical for both types. However, once the left-most bit changes to 1, the value suddenly becomes negative for the signed type. The way this works: if you reach the maximum positive value (3) and then add one more, you wrap around to the maximum negative value (-4); and if you subtract one from 0, you underflow and the signed tinychar becomes -1 while an unsigned tinychar becomes 7. You can also see the bit-level equivalence between an unsigned 7 and a signed -1 tinychar, because the bits are the same (111) for both.
Now if you expand this to have a total of 8 bits, you should see similar results.
I've tested your code, and it does not compare (signed char)-1 and (unsigned char)255 as equal.
You should convert the signed char into an unsigned char first, so that the sign bit is not treated specially in the comparison.
I have had bad experience with using the signed char type for buffer operations; things like your problem then happen. Be sure you have turned on all warnings during compilation and try to fix them.
