Byte to signed int in C

I have a binary value stored in a char in C, and I want to transform this byte into a signed int in C.
Currently I have something like this:
char a = 0xff;
int b = a;
printf("value of b: %d\n", b);
The result on standard output is "255"; the desired output is "-1".

According to the C99 standard,
6.3.1.3 Signed and unsigned integers
When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
You need to cast your char to signed char before assigning to int, because any value a char can hold is directly representable as an int, so the plain assignment never changes the value.
#include <stdio.h>
int main(void) {
char a = 0xff;
int b = (signed char) a;
printf("value of b: %d\n", b);
return 0;
}
Quickly testing shows it works here:
C:\dev\scrap>gcc -std=c99 -oprint-b print-b.c
C:\dev\scrap>print-b
value of b: -1
Be wary that the C99 standard does not specify whether char is treated as signed or unsigned; that choice is implementation-defined.
6.2.5 Types
An object declared as type char is large enough to store any member of the basic
execution character set. If a member of the basic execution character set is stored in a
char object, its value is guaranteed to be positive. If any other character is stored in a char object, the resulting value is implementation-defined but shall be within the range of values that can be represented in that type.
...
The three types char, signed char, and unsigned char are collectively called
the character types. The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.

Replace:
char a = 0xff;
with
signed char a = 0xff; // or, more explicitly: = -1
to have printf print -1.
If you don't want to change the type of a, then, as @veer noted in the comments, you can simply cast a to (signed char) before assigning its value to b.
Note that in both cases this integer conversion is implementation-defined, but wrapping 0xff to -1 is the behavior you will see on essentially every (two's complement) implementation.

You are already wrong from the start:
char a = 0xff;
If char is signed, which you seem to assume, you already have an out-of-range value here: 0xFF is an unsigned quantity with value 255. If you want to treat chars as small signed numbers, use signed char and assign -1. If you want to treat them as bit patterns, use unsigned char and assign 0xFF. Your initialization of the int will then do what you expect.
char, signed char, and unsigned char are, by definition of the standard, three different types. Reserve char itself for characters and printing human-readable text.
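To illustrate that advice, here is a minimal sketch (assuming an ordinary two's complement platform for the signed case):

#include <stdio.h>

int main(void)
{
    signed char as_number = -1;    /* I want a small signed number */
    unsigned char as_bits = 0xFF;  /* I want a raw bit pattern (255) */

    int b1 = as_number;  /* value-preserving conversion: b1 == -1 */
    int b2 = as_bits;    /* value-preserving conversion: b2 == 255 */

    printf("b1 = %d, b2 = %d\n", b1, b2);
    return 0;
}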

Related

typecasting unsigned char and signed char to int in C

#include <stdio.h>

int main(void)
{
    char ch1 = 128;
    unsigned char ch2 = 128;
    printf("%d\n", (int)ch1);
    printf("%d\n", (int)ch2);
    return 0;
}
The first printf statement outputs -128 and the second 128. As I understand it, both ch1 and ch2 have the same binary representation of the stored number: 10000000. So when I cast both values to int, how do they end up as different values?
First of all, a char can be signed or unsigned; that depends on the compiler implementation. Since you got -128 for ch1, your compiler treats char as signed.
A signed char can only hold values from -128 to 127. So, a value of 128 for signed char overflows to -128.
But an unsigned char can hold values from 0 to 255. So, a value of 128 remains the same.
An unsigned char can have a value of 0 to 255. A signed char can have a value of -128 to 127. Setting a signed char to 128 in your compiler probably wrapped around to the lowest possible value, which is -128.
Your fundamental error here is a misunderstanding of what a cast (or any conversion) does in C. It does not reinterpret bits. It's purely an operation on values.
Assuming plain char is signed, ch1 has value -128 and ch2 has value 128. Both -128 and 128 are representable in int, and therefore the cast does not change their value. (Moreover, writing it is redundant since the default promotions automatically convert variadic arguments of types lower-rank than int up to int.) Conversions can only change the value of an expression when the original value is not representable in the destination type.
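To make the value-versus-bits distinction concrete, here is a small sketch (not the answerer's code; the memcpy route is just one common way to reinterpret a byte, and the 128 result assumes a two's complement machine with 8-bit chars):

#include <stdio.h>
#include <string.h>

int main(void)
{
    signed char sc = -128;

    /* A conversion operates on the value: -128 fits in int, so it stays -128. */
    int as_value = (int)sc;

    /* Reinterpreting the bits means reading the same byte as unsigned. */
    unsigned char raw;
    memcpy(&raw, &sc, 1);
    int as_bits = raw;  /* 128 on a two's complement machine */

    printf("as_value = %d, as_bits = %d\n", as_value, as_bits);
    return 0;
}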
For starters these castings
printf("%d\n", (int)ch1);
printf("%d\n", (int)ch2);
are redundant. You could just write
printf("%d\n", ch1);
printf("%d\n", ch2);
because, due to the default argument promotions, arguments of integer types whose rank is less than that of int are promoted to int (provided int can represent all values of the original type).
The type char can behave either as the type signed char or unsigned char depending on compiler options.
From the C Standard (5.2.4.2.1 Sizes of integer types <limits.h>)
2 If the value of an object of type char is treated as a signed
integer when used in an expression, the value of CHAR_MIN shall be the
same as that of SCHAR_MIN and the value of CHAR_MAX shall be the same
as that of SCHAR_MAX. Otherwise, the value of CHAR_MIN shall be 0 and
the value of CHAR_MAX shall be the same as that of UCHAR_MAX. The
value UCHAR_MAX shall equal 2^CHAR_BIT − 1.
So it seems that, by default, the compiler used treats the type char as signed char.
As a result in the first declaration
char ch1 = 128;
unsigned char ch2 = 128;
the internal representation 0x80 of the value 128 was interpreted as a signed value because the sign bit is set. And this value is equal to -128.
So you got that the first call of printf outputted the value -128
printf("%d\n", (int)ch1);
while the second call of printf where there is used an object of the type unsigned char
printf("%d\n", (int)ch2);
outputted the value 128.
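If you want to check which choice your own compiler made, a small sketch using only <limits.h> will tell you (this is an illustration, not part of the original answer):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    if (CHAR_MIN < 0)
        printf("char is signed here (CHAR_MIN = %d)\n", CHAR_MIN);
    else
        printf("char is unsigned here (CHAR_MAX = %d)\n", CHAR_MAX);
    return 0;
}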

c: type casting char values into unsigned short

Starting with a pseudo-code snippet:
char a = 0x80;
unsigned short b;
b = (unsigned short)a;
printf ("0x%04x\r\n", b); // => 0xff80
To my current understanding, char is by definition neither a signed char nor an unsigned char, but a sort of third kind of signedness.
How does it come about that a is first sign-extended from its (possibly platform-dependent) 8 bits of storage to the (again possibly platform-specific) 16 bits of a signed short, and then converted to an unsigned short?
Is there a C standard that determines the order of expansion?
Does that standard give any guidance on how to deal with this third kind of signedness of a "pure" char (I once called it an X-char, X for undetermined signedness), so that results are at least deterministic?
PS: if I insert an (unsigned char) cast in front of the a in the assignment line, the result of the printing line does indeed change to 0x0080. Thus only two casts in a row give what might be the intended result.
The type char is not a "third" signedness. It is either signed char or unsigned char, and which one it is is implementation defined.
This is dictated by section 6.2.5p15 of the C standard:
The three types char, signed char, and unsigned char are
collectively called the character types. The implementation
shall define char to have the same range, representation, and
behavior as either signed char or unsigned char.
It appears that on your implementation, char is the same as signed char, so because the value is negative and because the destination type is unsigned it must be converted.
Section 6.3.1.3 dictates how conversion between integer types occur:
1 When a value with integer type is converted to another integer type
other than _Bool, if the value can be represented by the new type, it
is unchanged.
2 Otherwise, if the new type is unsigned, the value is
converted by repeatedly adding or subtracting one more than
the maximum value that can be represented in the new type
until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be
represented in it; either the result is implementation-defined or
an implementation-defined signal is raised.
Since the stored value -128 (bit pattern 0x80) cannot be represented in an unsigned short, the conversion in paragraph 2 occurs.
char has implementation-defined signedness. It is either signed or unsigned, depending on the compiler. It is true, in a way, that char is a distinct, third character type, but it has the same range and behavior as one of the other two. Because its signedness is non-portable, char should never be used for storing raw numbers.
But that doesn't matter in this case.
On your compiler, char is signed.
char a = 0x80; forces a conversion from the type of 0x80, which is int, to char, in a compiler-specific manner. Normally on 2's complement systems, that will mean that the char gets the value -128, as seems to be the case here.
b = (unsigned short)a; forces a conversion from char to unsigned short 1). C17 6.3.1.3 Signed and unsigned integers then says:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
One more than the maximum value would be 65536. So you can think of this as -128 + 65536 = 65408.
The unsigned hex representation of 65408 is 0xFF80. No sign extension takes place anywhere!
1) The cast is not needed. When both operands of = are arithmetic types, as in this case, the right operand is implicitly converted to the type of the left operand (C17 6.5.16.1 §2).
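Putting both observations together, here is a small sketch showing the two results side by side; it assumes, as in the question, a platform where plain char is signed and unsigned short is 16 bits:

#include <stdio.h>

int main(void)
{
    char a = 0x80;  /* -128 where plain char is signed (implementation-defined) */

    unsigned short direct = a;                 /* -128 + 65536 = 65408 = 0xff80 */
    unsigned short via_uc = (unsigned char)a;  /* 128           = 0x0080 */

    printf("0x%04x 0x%04x\n", direct, via_uc);
    return 0;
}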

Range of unsigned char in C language

As far as I know, the range of unsigned char in C is 0-255, but when I executed the code below it printed 256. How is this possible? I got this code from the "Test Your C Skills" book, which says char is one byte in size.
#include <stdio.h>

int main(void)
{
    unsigned char i = 0x80;
    printf("\n %d", i << 1);
    return 0;
}
Because the operands to <<* undergo integer promotion. It's effectively equivalent to (int)i << 1.
* This is true for most operators in C.
Several things are happening.
First, the expression i << 1 has type int, not char: the operands of << undergo the integer promotions, so i is promoted to int, and 0x100 is well within the range of a signed int.
Secondly, the %d conversion specifier expects its corresponding argument to have type int. So the argument is being interpreted as an integer.
If you want to print the numeric value of a signed char, use the conversion specifier %hhd. If you want to print the numeric value of an unsigned char, use %hhu.
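A quick sketch of those conversion specifiers (the 0x80 value assumes 8-bit chars):

#include <stdio.h>

int main(void)
{
    signed char sc = -1;
    unsigned char uc = 0x80;

    printf("%hhd\n", sc);  /* prints -1  */
    printf("%hhu\n", uc);  /* prints 128 */
    return 0;
}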
For arithmetic operations, char is promoted to int before the operation is performed (see the standard for details). Simplified: the "smaller" type is first brought to the "larger" type before the operation is performed. For the shift operators, the resulting type is that of the left operand, while for e.g. + and other "combining" operators it is the larger of the two, but at least int. The latter means that char and short (and their unsigned counterparts) are always promoted to int, with the result being int, too. (Simplified; for details please read the standard.)
Note also that %d takes an int argument, not a char.
Additional notes (see the sketch below):
unsigned char does not necessarily have the range 0..255. Check limits.h; you will find UCHAR_MAX there.
char and "byte" are used synonymously in the standard, but neither is necessarily 8 bits wide (it is just very likely on modern general-purpose CPUs).
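As a sketch, you can print those limits yourself; the exact numbers are of course implementation-specific:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);  /* 255 only when CHAR_BIT is 8 */
    return 0;
}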
As others have already explained, the statement printf("\n %d", i << 1); performs integer promotion, so shifting the integer value 128 left by one results in 256. You could try the following code to print the maximum value of unsigned char. The maximum value of unsigned char has all bits set, so a bitwise NOT operation using ~ should give you the maximum value of 255.
#include <stdio.h>

int main()
{
    unsigned char ch = ~0;
    printf("ch = %d\n", ch);
    return 0;
}
Output:-
M-40UT:Desktop$ ./a.out
ch = 255

Char data type arithmetic expression

#include <stdio.h>

int main()
{
    char a = 'P';
    char b = 0x80;
    printf("a>b %s\n", a > b ? "true" : "false");
    return 0;
}
Why does it evaluates to true?
On your system, char is signed. It is also eight bits, so 0x80 overflows what a signed 8-bit integer can represent. The resulting value is -128. Since P is some positive value, it is greater than -128.
C permits the char type to be signed or unsigned. This is a special (annoying) property, unlike other integer types such as int. It is often advisable to explicitly declare character types with unsigned char so that the behavior is more determined rather than implementation-dependent.
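As a sketch of that advice, declaring b as unsigned char makes the outcome independent of how plain char is defined (the value 80 for 'P' assumes ASCII):

#include <stdio.h>

int main(void)
{
    char a = 'P';           /* 80 in ASCII */
    unsigned char b = 0x80; /* unambiguously 128 */

    /* Both operands are promoted to int, so this compares 80 > 128,
       which is false regardless of the signedness of plain char. */
    printf("a>b %s\n", a > b ? "true" : "false");
    return 0;
}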

Regarding type safety when storing an unsigned char value in char variable

I have a char array holding several characters. I want to compare one of these characters with an unsigned char variable. For example:
char myarr[] = { 20, 14, 5, 6, 42 };
const unsigned char foobar = 133;
myarr[2] = foobar;
if(myarr[2] == foobar){
printf("You win a shmoo!\n");
}
Is this comparison type safe?
I know from the C99 standard that char, signed char, and unsigned char are three different types (section 6.2.5 paragraph 14).
Nevertheless, can I safely convert between unsigned char and char, and back, without losing precision and without risking undefined (or implementation-defined) behavior?
In section 6.2.5 paragraph 15:
The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
I'm afraid that if char is defined as a signed char, then myarr[2] = foobar could result in an implementation-defined value that will not be converted correctly back to the original unsigned char value; for example, an implementation may always result in the value 42 regardless of the unsigned value involved.
Does this mean that it is not safe to store an unsigned value in a signed variable of the same type?
Also what is an implementation-defined signal; does this mean an implementation could simply end the program in this case?
In section 6.3.1.1 paragraph 1:
-- The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
-- The rank of any unsigned integer type shall equal the rank of the corresponding
signed integer type, if any.
In section 6.2.5 paragraph 8:
For any two integer types with the same signedness and different integer conversion rank
(see 6.3.1.1), the range of values of the type with smaller integer conversion rank is a
subrange of the values of the other type.
In section 6.3.1 paragraph 2:
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int.
In section 6.3.1.8 paragraph 1:
Otherwise, both operands are converted to the unsigned integer type
corresponding to the type of the operand with signed integer type.
The range of char is guaranteed to be the same range as that of signed char or unsigned char, which are both subranges of int and unsigned int respectively as a result of their smaller integer conversion rank.
Since the integer promotion rules dictate that char, signed char, and unsigned char be promoted to at least int before being evaluated, does this mean that char could maintain its "signedness" throughout the comparison?
For example:
signed char foo = -1;
unsigned char bar = 255;
if(foo == bar){
printf("same\n");
}
Does foo == bar evaluate to a false value, even if -1 is equivalent to 255 when an explicit (unsigned char) cast is used?
UPDATE:
In section J.3.5 paragraph 1 regarding which cases result in implementation-defined values and behavior:
-- The result of, or the signal raised by, converting an integer to a signed integer type
when the value cannot be represented in an object of that type (6.3.1.3).
Does this mean that not even an explicit conversion is safe?
For example, could the following code result in implementation-defined behavior since char could be defined as a signed integer type:
char blah = (char)255;
My original post is rather broad and consists of many specific questions, each of which I should have given its own post. However, I address and answer each question here so future visitors can grok the answers more easily.
Answer 1
Question:
Is this comparison type safe?
The comparison between myarr[2] and foobar in this particular case is safe since both variables hold unsigned values. In general, however, this is not true.
For example, suppose an implementation defines char to have the same behavior as signed char, and int is able to represent all values representable by unsigned char and signed char.
char foo = -25;
unsigned char bar = foo;
if(foo == bar){
printf("This line of text will not be printed.\n");
}
Although bar is set equal to foo, and the C99 standard guarantees that there is no loss of precision when converting from signed char to unsigned char (see Answer 2), the foo == bar conditional expression will evaluate false.
This is due to the nature of integer promotion as required by section 6.3.1 paragraph 2 of the C99 standard:
If an int can represent all values of the original type, the value is converted to an int; otherwise, it is converted to an unsigned int.
Since in this implementation int can represent all values of both signed char and unsigned char, the values of both foo and bar are converted to type int before being evaluated. Thus the resulting conditional expression is -25 == 231 which evaluates to false.
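A sketch that simply prints the promoted values makes this visible (it assumes the same implementation choices as above: plain char signed, CHAR_BIT of 8, int wider than char):

#include <stdio.h>

int main(void)
{
    char foo = -25;
    unsigned char bar = foo;  /* well-defined: -25 + 256 = 231 */

    /* Both operands of == are promoted to int, so these are the compared values: */
    printf("promoted foo = %d, promoted bar = %d\n", foo, bar);   /* -25 and 231 */
    printf("foo == bar is %s\n", foo == bar ? "true" : "false");  /* false */
    return 0;
}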
Answer 2
Question:
Nevertheless, can I safely convert between unsigned char and char, and back, without losing precision and without risking undefined (or implementation-defined) behavior?
You can safely convert from char to unsigned char without losing precision (nor width nor information), but converting in the other direction -- unsigned char to char -- can lead to implementation-defined behavior.
The C99 standard makes certain guarantees which enable us to convert safely from char to unsigned char.
In section 6.2.5 paragraph 15:
The implementation shall define char to have the same range,
representation, and behavior as either signed char or unsigned char.
Here, we are guaranteed that char will have the same range, representation, and behavior as signed char or unsigned char. If the implementation chooses the unsigned char option, then the conversion from char to unsigned char is essentially that of unsigned char to unsigned char -- thus no width nor information is lost and there are no issues.
The conversion for the signed char option is not as intuitive, but is implicitly guaranteed to preserve precision.
In section 6.2.5 paragraph 6:
For each of the signed integer types, there is a corresponding (but different) unsigned
integer type (designated with the keyword unsigned) that uses the same amount of
storage (including sign information) and has the same alignment requirements.
In 6.2.6.1 paragraph 3:
Values stored in unsigned bit-fields and objects of type unsigned char shall be
represented using a pure binary notation.
In section 6.2.6.2 paragraph 2:
For signed integer types, the bits of the object representation shall be divided into three
groups: value bits, padding bits, and the sign bit. There need not be any padding bits; there shall be exactly one sign bit. Each bit that is a value bit shall have the same value as
the same bit in the object representation of the corresponding unsigned type (if there are
M value bits in the signed type and N in the unsigned type, then M <= N).
First, signed char is guaranteed to occupy the same amount of storage as an unsigned char, as are all signed integers in respect to their unsigned counterparts.
Second, unsigned char is guaranteed to have a pure binary representation (i.e. no padding bits and no sign bit).
signed char is required to have exactly one sign bit, and no more than the same number of value bits as unsigned char.
Given these three facts, we can show via the pigeonhole principle that the signed char type has at most one fewer value bit than the unsigned char type, and that signed char can safely be converted to unsigned char with no loss of precision, width, or information:
unsigned char has storage size of N bits.
signed char must have the same storage size of N bits.
unsigned char has no padding or sign bits and therefore has N value bits
signed char can have at most N non-padding bits, and must allocate exactly one bit as the sign bit.
signed char can have at most N-1 value bits and exactly one sign bit
All signed char bits therefore match up one-to-one to the respective unsigned char value bits; in other words, for any given signed char value, there is a unique unsigned char representation.
/* binary representation prefix: 0b */
(signed char)(-25) = 0b11100111
(unsigned char)(231) = 0b11100111
Unfortunately, converting from unsigned char to char can lead to implementation-defined behavior. For example, if char is defined by the implementation to behave as signed char, then an unsigned char variable may hold a value that is outside the range of values representable by a signed char. In such cases, either the result is implementation-defined or an implementation-defined signal is raised.
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
Answer 3
Question:
Does this mean that it is not safe to store an unsigned value in a signed variable of the same type?
Trying to convert an unsigned type value to a signed type value can result in implementation-defined behavior if the unsigned type value cannot be represented in the new signed type.
unsigned foo = UINT_MAX;
signed bar = foo; /* possible implementation-defined behavior */
In section 6.3.1.3 paragraph 3:
Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
An implementation-defined result would be any value within the range of values representable by the new signed type. An implementation could theoretically return the same value consistently (e.g. 42) for these cases, and information would thus be lost; i.e., there is no guarantee that converting from unsigned to signed and back to unsigned will result in the same original unsigned value.
An implementation-defined signal is that which conforms to the rules laid out in section 7.14 of the C99 standard; an implementation is permitted to define additional conforming signals which are not explicitly enumerated by the C99 standard.
In this particular case, an implementation could theoretically raise the SIGTERM signal which requests the termination of the program. Thus, attempting to convert an unsigned type value to signed type could result in a program termination.
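A minimal sketch of the round trip; the value of bar is implementation-defined (commonly -1 on two's complement targets), while the conversion back to unsigned is fully defined:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int foo = UINT_MAX;
    int bar = foo;            /* implementation-defined result, or a signal is raised */
    unsigned int back = bar;  /* well-defined, but only recovers UINT_MAX if bar came out as -1 */

    printf("foo = %u, bar = %d, back = %u\n", foo, bar, back);
    return 0;
}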
Answer 4
Question:
Does foo == bar evaluate to a false value, even if -1 is equivalent to 255 when an explicit (unsigned char) cast is used?
Consider the following code:
signed char foo = -1;
unsigned char bar = 255;
if((unsigned char)foo == bar){
printf("same\n");
}
Although signed char and unsigned char values are promoted to at least int before the evaluation of a conditional expression, the explicit unsigned char cast will convert the signed char value to unsigned char before the integer promotions occur. Furthermore, converting to an unsigned value is well-defined in the C99 standard and does not lead to implementation-defined behavior.
In section 6.3.1.3 paragraph 2:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Thus the conditional expression essentially becomes 255 == 255, which evaluates to true.
Answer 5
Questions:
Does this mean that not even an explicit conversion is safe?
In general, an explicit cast to char for a value outside the range of values representable by signed char can lead to implementation-defined behavior (see Answer 3). A conversion need not be implicit for section 6.3.1.3 paragraph 3 of the C99 standard to apply.
"does this mean that char could maintain its 'signedness' throughout the comparison?" yes; -1 as a signed char will be promoted to a signed int, which will retain its -1 value. As for the unsigned char, it will also keep its 255 value when being promoted, so yes, the comparison will be false. If you want it to evaluate to true, you will need an explicit cast.
It has to do with how the memory for the chars is stored: in an unsigned char, all 8 bits are used to represent the value, while a signed char uses 7 bits for the number and the 8th bit to represent the sign.
For an example, let's take a simpler 3-bit value (I will call this new type tinychar):
bits unsigned signed
000 0 0
001 1 1
010 2 2
011 3 3
100 4 -4
101 5 -3
110 6 -2
111 7 -1
By looking at this chart, you can see the difference in value between a signed and an unsigned tinychar based on how the bits are arranged. Up until you reach the negative range, the values are identical for both types. However, once the left-most bit becomes 1, the value suddenly turns negative for the signed type. The way this works: if you are at the maximum positive value (3) and add one more, you wrap around to the most negative value (-4); if you subtract one from 0, the signed tinychar wraps to -1 while an unsigned tinychar becomes 7. You can also see the equivalence (==) between an unsigned 7 and a signed -1 tinychar, because the bits (111) are the same for both.
Now if you expand this to have a total of 8 bits, you should see similar results.
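Extending the table to 8 bits in code gives a sketch like the following (the -1 result for the signed case is the usual two's complement outcome; strictly, that conversion is implementation-defined):

#include <stdio.h>

int main(void)
{
    unsigned char u = 0xFF;             /* all bits set: value 255 */
    signed char s = (signed char)0xFF;  /* typically -1 (implementation-defined) */

    printf("unsigned: %d, signed: %d\n", u, s);                    /* 255 and -1 */
    printf("equal after promotion? %s\n", u == s ? "yes" : "no");  /* no: 255 != -1 */
    return 0;
}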
I've tested your code, and it does not treat (signed char)-1 and (unsigned char)255 as equal.
You should convert the signed char to unsigned char first, so that the most significant bit is treated as a value bit rather than a sign bit in the operation.
I have had bad experience using the signed char type for buffer operations; problems like yours tend to happen. Also make sure you have all warnings turned on during compilation, and try to fix them.
