Conversion to short from char, odd results? - c

So my code has in it the following:
unsigned short num=0;
num=*(cra+3);
printf("> char %u\n",num);
cra is a char*
The problem is that I'm getting odd output, sometimes numbers such as 65501 (clearly not within the range of a char). Any ideas?
Thanks in advance!

Apparently *(cra+3) is a char of value '\xdd'. Since char is signed on your platform, this actually means -35 (0xdd in two's complement), i.e. 0xFFFFFFDD when sign-extended to a 32-bit int. Restricting this to 16 bits gives 0xFFDD, i.e. 65501.
You need to make it an unsigned char so it gives a number in the range 0–255:
num = (unsigned char)cra[3];
Note:
1. the signedness of char is implementation defined, but usually (e.g. in OP's case) it is signed.
2. the ranges of signed char, unsigned char and unsigned short are implementation defined, but again commonly they are -128–127, 0–255 and 0–65535 respectively.
3. the conversion from signed char to unsigned short is actually -35 + 65536 = 65501.
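A minimal sketch of both behaviours (the buffer contents are made up for illustration; it assumes a signed 8-bit char, as in OP's case):

#include <stdio.h>

int main(void)
{
    char buf[] = "abc\xdd";         /* buf[3] holds the byte 0xDD */
    char *cra = buf;
    unsigned short num;

    num = *(cra + 3);               /* char is signed here: -35 -> 65501 */
    printf("> char %u\n", num);

    num = (unsigned char)cra[3];    /* the fix: read the byte as 0..255 */
    printf("> char %u\n", num);     /* prints 221 (0xDD) */

    return 0;
}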

char is allowed to be either signed or unsigned - apparently, on your platform, it is signed.
This means that it can hold values like -35. Such a value is not within the range representable by unsigned short. When a number out of range is converted to an unsigned type, it is brought into range by repeatedly adding or subtracting one more than the maximum value representable in that type.
In this case, your unsigned short can represent values up to 65535, so -35 is brought into range by adding 65536, which gives 65501.
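A quick demonstration of that rule (a sketch; the outputs assume the common 8-bit and 16-bit ranges):

#include <stdio.h>

int main(void)
{
    int v = -35;
    unsigned char  uc = v;    /* -35 + 256   = 221   */
    unsigned short us = v;    /* -35 + 65536 = 65501 */
    printf("%u %u\n", (unsigned)uc, (unsigned)us);
    return 0;
}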

unsigned short has a range of (at least) 0 .. 65535, and the %u format specifier prints an unsigned int with a range of (commonly) 0 .. 4294967295. Thus, depending on the byte at cra+3, the output is completely sensible.
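As an aside, %hu is the conversion specifier that matches unsigned short exactly, though the value is promoted when passed to printf, so %u also works for values this small; a tiny sketch:

#include <stdio.h>

int main(void)
{
    unsigned short num = 65501;
    printf("%hu %u\n", num, num);   /* both print 65501 */
    return 0;
}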

cra is just a pointer.
It hasn't been allocated any space by way of malloc or calloc, so its contents are undefined. *(cra + 3) will evaluate to the contents of the location 3 bytes past cra (a char occupies exactly 1 byte); I believe those contents are also undefined.
unsigned short takes up 2 bytes, at least on my system, so it can hold values from 0 to 65535. Your output is within that range.

Related

C and ASCII codes: how could 128 represents a char if it's range is from -128 to 127?

I'm referring to this question because I can't understand how ASCII characters from 0 to 255 can be represented with a signed char if its range is from -128 to 127.
Since sizeof(char) is 1 byte, it seems reasonable to think that it could easily represent values up to a maximum of 255.
So why is there nothing wrong with the assignment char a = 128, and why shouldn't I use unsigned char for it?
Thank you in advance!
char c = 128; by itself is correct in C. The standard says that a char contains CHAR_BIT bits, which can be greater than 8. Also, a char can be signed or unsigned, implementation defined, and an unsigned char has to contain at least the range [0, 255].
So on an implementation where char is bigger than 8 bits, or where char is unsigned by default, this line is valid and meaningful.
Even in a common 8 bit signed char implementation, the expression is still well-defined in how it will convert the 128 to fit in a char, so there is no problem.
In real cases, the compiler will often issue a warning for these; clang, for example:
warning: implicit conversion from 'int' to 'char' changes value from 128 to -128 [-Wconstant-conversion].
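A small sketch of what that conversion typically does (assuming a signed 8-bit char):

#include <stdio.h>

int main(void)
{
    char a = 128;             /* implementation-defined result where char is signed */
    printf("%d\n", (int)a);   /* typically prints -128: 128 wraps modulo 256 */
    return 0;
}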
Signed or unsigned, it takes 8 bits, and 8 bits can hold 256 distinct values. It is just a question of how we use them.

Char multiplication in C

I have a code like this:
#include <stdio.h>
int main()
{
    char a = 20, b = 30;
    char c = a * b;
    printf("%c\n", c);
    return 0;
}
The output of this program is X.
How is this output possible if a*b = 600, which overflows, since char values lie between -128 and 127?
Whether char is signed or unsigned is implementation defined. Either way, it is an integer type.
Anyway, the multiplication is done as int due to integer promotions and the result is converted to char.
If the value does not fit into the "smaller" type, it is implementation defined for a signed char how this is done. By far most (if not all) implementations simply cut off the upper bits.
For an unsigned char, the standard actually requires (briefly put) cutting off the upper bits.
So:
(int)20 * (int)30 -> (int)600 -> (char)(600 % 256) -> 88 == 'X'
(Assuming an 8-bit char.)
See the link and its surrounding paragraphs for more details.
Note: If you enable compiler warnings (as always recommended), you should get a truncation warning for the assignment. This can be avoided by an explicit cast (only if you are really sure about all implications). The gcc option is -Wconversion.
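A sketch making the promotion and the truncation explicit (assuming an 8-bit char):

#include <stdio.h>

int main(void)
{
    char a = 20, b = 30;
    int wide = a * b;               /* operands promoted to int: 600 */
    char c = (char)wide;            /* implementation-defined truncation: 600 % 256 = 88 */
    printf("%d '%c'\n", wide, c);   /* prints: 600 'X' */
    return 0;
}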
First off, the behavior is implementation-defined here. A char may be either unsigned char or signed char, so it may be able to hold 0 to 255 or -128 to 127, assuming CHAR_BIT == 8.
600 in decimal is 0x258. What happens is that the least significant eight bits are stored, so the value is 0x58, a.k.a. X in ASCII.
This code will cause undefined behavior if char is signed.
I thought overflow of signed integer is undefined behavior, but conversion to smaller type is implementation-defined.
quote from N1256 6.3.1.3 Signed and unsigned integers:
3 Otherwise, the new type is signed and the value cannot be represented in it; either the
result is implementation-defined or an implementation-defined signal is raised.
If the value is simply truncated to 8 bits, (20 * 30) & 0xff == 0x58, and 0x58 is the ASCII code for X. So, if your system does this and uses ASCII, the output will be X.
First, it looks like you have an unsigned char with a range from 0 to 255.
You're right about the overflow.
600 - 256 - 256 = 88
This is just the ASCII code of 'X'.

What does (int)(unsigned char)(x) do in C?

In ctype.h, line 20, __ismask is defined as:
#define __ismask(x) (_ctype[(int)(unsigned char)(x)])
What does (int)(unsigned char)(x) do? I guess it casts x to unsigned char (to retrieve the first byte only regardless of x), but then why is it cast to an int at the end?
(unsigned char)(x) effectively computes an unsigned char with the value of x % (UCHAR_MAX + 1). This has the effect of giving a positive value (between 0 and UCHAR_MAX). With most implementations UCHAR_MAX has a value of 255 (although the standard permits an unsigned char to support a larger range, such implementations are uncommon).
Since the result of (unsigned char)(x) is guaranteed to be in the range supported by an int, the conversion to int will not change value.
Net effect is the least significant byte, with a positive value.
Some compilers give a warning when using a char (signed or not) type as an array index. The conversion to int shuts the compiler up.
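A sketch of the pattern with a stand-in lookup table (table here is hypothetical; the real _ctype array lives in the kernel):

#include <limits.h>
#include <stdio.h>

static const unsigned char table[UCHAR_MAX + 1];   /* stand-in for _ctype, zero-initialized */

#define LOOKUP(x) (table[(int)(unsigned char)(x)])

int main(void)
{
    char c = '\xdd';                          /* -35 where char is signed */
    printf("%d\n", (int)(unsigned char)c);    /* 221: always a valid, non-negative index */
    printf("%u\n", (unsigned)LOOKUP(c));
    return 0;
}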
The unsigned char cast is there to make sure the value is within the range 0..255; the resulting value is then used as an index into the _ctype array, which is 256 bytes large, see ctype.h in Linux.
A cast to unsigned char safely extracts the least significant CHAR_BIT bits of x, due to the wraparound properties of an unsigned type. (A cast to char would not be portable if char is a signed type on the platform: an out-of-range conversion to a signed type is not fully specified in C.) CHAR_BIT is usually 8.
The cast to int then converts the unsigned char. The standard guarantees that an int can always hold any value that unsigned char can take.
A better alternative, if you wanted to extract the 8 least significant bits would be to apply & 0xFF and cast that result to an unsigned type.
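That alternative might look like this (a sketch):

#include <stdio.h>

int main(void)
{
    int x = -35;
    unsigned low = (unsigned)(x & 0xFF);   /* the 8 least significant bits: 221 */
    printf("%u\n", low);
    return 0;
}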
Whether plain char is signed or unsigned is implementation dependent. So you need to be explicit by writing unsigned char, in order not to end up with a negative number; then cast to int.

Unsigned char. C

So, where can unsigned char be useful?
If I understood right, unsigned char can represent numbers from -128 to 127. But every encoding table uses positive numbers. So, unsigned char can't be used for representing characters. Am I right?
No, unsigned char is 0 to 255.
It can be useful in representing binary data (a single byte), although, like any primitive data type, the possibilities are endless.
First of all, what you are describing is signed char; unsigned char ranges from 0 to 255.
To answer your question about negative-valued characters, you are right that character encoding is done using positive values.
On a different view, just think of signed and unsigned char as integer representations.
Unsigned char is used to represent bytes. If you need just one byte of memory in a variable, you use unsigned char and assign an integer to it.
For example, uint8_t is used to represent bytes, but it is nothing more than that (an unsigned char under another name).
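For instance (a sketch; uint8_t comes from <stdint.h> on platforms that provide an exact 8-bit type):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t byte = 200;               /* exactly one byte, range 0..255 */
    printf("%u\n", (unsigned)byte);
    return 0;
}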
A signed char can represent numbers from -128 to +127,
and an unsigned char from 0 to 255.
Although unsigned is more convenient in many use cases,
everything binary-related can be done with signed too:
0=0, 1=1 ... 127=127, -128=128, -127=129, -126=130 ... -1=255
Such conversions happen automatically (or, better said, it's just a different interpretation of the same bits).
("binary-related" means that a mathematical -2 * 2 would be possible with unsigned too, but would make even less sense)
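The reinterpretation in that table can be seen directly (a sketch):

#include <stdio.h>

int main(void)
{
    signed char   s = -1;
    unsigned char u = (unsigned char)s;   /* same bit pattern, 0xFF */
    printf("%d %u\n", s, u);              /* prints: -1 255 */
    return 0;
}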
Regarding "So, where can unsigned char be useful?", here is one possibility: a very simple example that tests for an ASCII digit.
/* BOOL, TRUE and FALSE are assumed defined, e.g. typedef int BOOL; they are not standard C */
BOOL isDigit(unsigned char c)
{
    if ((c >= '0') && (c <= '9')) return TRUE;
    return FALSE;
}
The argument type unsigned char guarantees the input will be a single character code (there are 128 encoded ASCII possibilities; with extended ASCII, there are 256). So, in this function, all that remains is to test the input value against specific criteria (in this case, is it a digit?); there is no requirement for the function to test for negative numbers. A regular char (i.e. possibly signed) cannot contain the entire range of extended ASCII characters. The sizeof unsigned char is also significant in that it is only 1 byte, as opposed to (typically, but not always) 4 bytes for, say, an int.
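Usage is then straightforward (a self-contained sketch, with the assumed BOOL/TRUE/FALSE definitions spelled out):

#include <stdio.h>

typedef int BOOL;                 /* assumed definitions; not standard C */
#define TRUE  1
#define FALSE 0

BOOL isDigit(unsigned char c)
{
    if ((c >= '0') && (c <= '9')) return TRUE;
    return FALSE;
}

int main(void)
{
    printf("%d %d\n", isDigit('7'), isDigit('x'));   /* prints: 1 0 */
    return 0;
}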

Difference between signed / unsigned char [duplicate]

This question already has answers here:
What is an unsigned char?
(16 answers)
char!=(signed char), char!=(unsigned char)
(4 answers)
Closed 5 years ago.
So I know that the difference between a signed int and unsigned int is that a bit is used to signify whether the number is positive or negative, but how does this apply to a char? How can a character be positive or negative?
There's no dedicated "character type" in C language. char is an integer type, same (in that regard) as int, short and other integer types. char just happens to be the smallest integer type. So, just like any other integer type, it can be signed or unsigned.
It is true that (as the name suggests) char is mostly intended to be used to represent characters. But characters in C are represented by their integer "codes", so there's nothing unusual in the fact that an integer type char is used to serve that purpose.
The only general difference between char and other integer types is that plain char is not synonymous with signed char, while with other integer types the signed modifier is optional/implied.
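The three character types really are distinct; a C11 sketch using _Generic makes this visible:

#include <stdio.h>

#define TYPENAME(x) _Generic((x),          \
    char:          "char",                 \
    signed char:   "signed char",          \
    unsigned char: "unsigned char",        \
    default:       "other")

int main(void)
{
    char c = 0; signed char sc = 0; unsigned char uc = 0;
    /* three distinct types, even where plain char happens to be signed */
    printf("%s / %s / %s\n", TYPENAME(c), TYPENAME(sc), TYPENAME(uc));
    return 0;
}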
I slightly disagree with the answers above. unsigned char simply means: use the most significant bit as part of the value instead of treating it as a +/- sign flag when performing arithmetic operations.
It makes a difference if you use char as a number, for instance:
typedef char BYTE1;
typedef unsigned char BYTE2;
BYTE1 a;
BYTE2 b;
For variable a, only 7 bits are available for the magnitude; its range is -128 to 127 on two's complement systems (the standard guarantees at least -127 to 127, i.e. ±(2^7 - 1)).
For variable b all 8 bits are available and the range is 0 to 255 (2^8 -1).
If you use a char as a character, "unsigned" is completely ignored by the compiler, just as comments are removed from your program.
There are three char types: (plain) char, signed char and unsigned char. Any char is usually an 8-bit integer* and in that sense, a signed and unsigned char have a useful meaning (generally equivalent to uint8_t and int8_t). When used as a character in the sense of text, use a char (also referred to as a plain char). This is typically a signed char but can be implemented either way by the compiler.
* Technically, a char can be any size as long as sizeof(char) is 1, but it is usually an 8-bit integer.
Representation is the same; the meaning is different. E.g. 0xFF is represented by the same bits either way. Treated as a signed char it is the negative number -1; as unsigned it is 255. When it comes to bit shifting there is a big difference, since the sign bit is not shifted out: shifting 255 right by 1 bit gives 127, while shifting -1 right has no effect (on the common implementations that use an arithmetic shift).
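A sketch of that shifting difference (note the result of right-shifting a negative value is implementation-defined; most compilers use an arithmetic shift):

#include <stdio.h>

int main(void)
{
    unsigned char u = 0xFF;     /* 255 */
    signed char   s = -1;       /* same bit pattern on two's complement */
    printf("%d\n", u >> 1);     /* 127: zeros shifted in */
    printf("%d\n", s >> 1);     /* commonly -1: the sign bit is replicated */
    return 0;
}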
A signed char is a signed value which is typically smaller than, and is guaranteed not to be bigger than, a short. An unsigned char is an unsigned value which is typically smaller than, and is guaranteed not to be bigger than, a short. A type char without a signed or unsigned qualifier may behave as either a signed or unsigned char; this is usually implementation-defined, but there are a couple of cases where it is not:
If, in the target platform's character set, any of the characters required by standard C would map to a code higher than the maximum `signed char`, then `char` must be unsigned.
If `char` and `short` are the same size, then `char` must be signed.
Part of the reason there are two dialects of "C" (those where char is signed, and those where it is unsigned) is that there are some implementations where char must be unsigned, and others where it must be signed.
The same way -- e.g. if you have an 8-bit char, 7 bits can be used for magnitude and 1 for sign. So an unsigned char might range from 0 to 255, whilst a signed char might range from -128 to 127 (for example).
This is because a char is stored, for all intents and purposes, as an 8-bit number. Speaking about a negative or positive char doesn't make sense if you consider it an ASCII code (which fits even when char is signed*), but it makes sense if you use that char to store a number, which could be in the range 0..255 or -128..127 according to the two's complement representation.
*: it can also be unsigned; it actually depends on the implementation, I think. In that case you will have access to the extended ASCII charset provided by the encoding used.
The same way how an int can be positive or negative. There is no difference. Actually on many platforms unqualified char is signed.
