Char and int16 array element both shown as 32bit hex? - c

In the example below:
#include <stdio.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
    int16_t array1[] = {0xffff, 0xffff, 0xffff, 0xffff};
    char array2[] = {0xff, 0xff, 0xff, 0xff};
    printf("Char size: %zu \nint16_t size: %zu \n", sizeof(char), sizeof(int16_t));
    if (*array1 == *array2)
        printf("They are the same \n");
    if (array1[0] == array2[0])
        printf("They are the same \n");
    printf("%x \n", array1[0]);
    printf("%x \n", *array1);
    printf("%x \n", array2[0]);
    printf("%x \n", *array2);
}
Output:
Char size: 1
int16_t size: 2
They are the same
They are the same
ffffffff
ffffffff
ffffffff
ffffffff
Why are 32-bit values printed for both char and int16_t, and why can they be compared and considered the same?

They're the same because they're all different representations of -1.
They print as 32 bits' worth of ff because you're on a machine with 32-bit int, you used %x, and the default argument promotions took place (basically, everything smaller than int gets promoted to int). Try using %hx. (That'll probably get you ffff; I don't know of a way to get ff here other than by using unsigned char, or masking with & 0xff: printf("%x \n", array2[0] & 0xff) .)
Expanding on "They're the same because they're all different representations of -1":
int16_t is a signed 16-bit type. It can contain values in the range -32768 to +32767.
char is an 8-bit type, and on your machine it's evidently signed also. So it can contain values in the range -128 to +127.
0xff is decimal 255, a value which can't be represented in a signed char. If you assign 0xff to a signed char, that bit pattern ends up getting interpreted not as 255, but rather as -1. (Similarly, if you assigned 0xfe, that would be interpreted not as 254, but rather as -2.)
0xffff is decimal 65535, a value which can't be represented in an int16_t. If you assign 0xffff to a int16_t, that bit pattern ends up getting interpreted not as 65535, but rather as -1. (Similarly, if you assigned 0xfffe, that would be interpreted not as 65534, but rather as -2.)
So when you said
int16_t array1[] = {0xffff,0xffff,0xffff,0xffff};
it was basically just as if you'd said
int16_t array1[] = {-1,-1,-1,-1};
And when you said
char array2[] = {0xff,0xff,0xff,0xff};
it was just as if you'd said
char array2[] = {-1,-1,-1,-1};
So that's why *array1 == *array2, and array1[0] == array2[0].
Also, it's worth noting that all of this is very much because of the types of array1 and array2. If you instead said
uint16_t array3[] = {0xffff,0xffff,0xffff,0xffff};
unsigned char array4[] = {0xff,0xff,0xff,0xff};
you would see different values printed (ffff and ff), and the values from array3 and array4 would not compare the same.
Another answer stated that "there is no type information in C at runtime". That's true but misleading in this case. When the compiler generates code to manipulate values from array1, array2, array3, and array4, the code it generates (which of course is significant at runtime!) will be based on their types. In particular, when generating code to fetch values from array1 and array2 (but not array3 and array4), the compiler will use instructions which perform sign extension when assigning to objects of larger type (e.g. 32 bits). That's how 0xff and 0xffff got changed into 0xffffffff.

Because there is no type information in C at runtime, by using a plain %x for printing you tell printf that you passed an unsigned int. The poor library function just trusts you ... see Length modifier in printf(3) for how to give printf the information it needs.

Using %x to print negative values causes undefined behaviour so you should not assume that there is anything sensible about what you are seeing.
The correct format specifier for char is %hhd, and for int16_t it is "%" PRId16. You will need #include <inttypes.h> to get the latter macro.
Because of the default argument promotions, it is also correct to use %d with char and int16_t [1]. If you change your code to use %d instead of %x, it will no longer exhibit undefined behaviour, and the results will make sense.
[1] The C standard doesn't actually say that, but it's assumed that that was the intent of the writers.

Related

Clarifications about unsigned type in C

Hi, I'm currently learning C and there's something that I don't quite understand.
First of all I was told that if I did this:
unsigned int c2 = -1;
printf("c2 = %u\n", c2);
It would output 255, according to this table:
But I get a weird result: c2 = 4294967295
Now what's weirder is that this works:
unsigned char c2 = -1;
printf("c2 = %d\n", c2);
But I don't understand: since a char is, well, a char, why does it even print anything? The specifier here is %d, not %u as it should be for unsigned types.
The following code:
unsigned int c2 = -1;
printf("c2 = %u\n", c2);
Will never print 255. The table you are looking at refers to an unsigned integer of 8 bits. An int in C must be at least 16 bits wide in order to comply with the C standard (UINT_MAX must be at least 2^16-1; see paragraph §5.2.4.2.1, page 22 here). Therefore the value you see is going to be a much larger number than 255. The most common implementations use 32 bits for an int, and in that case you'll see 4294967295 (2^32 - 1).
You can check how many bits are used for any type or variable on your system with sizeof(type_or_variable) * CHAR_BIT (CHAR_BIT is defined in limits.h and gives the number of bits per byte, which is almost always 8).
The correct code to obtain 255 as output is:
unsigned char c = -1;
printf("c = %hhu\n", c);
Where the hh prefix specifier means (from man 3 printf):
hh: A following integer conversion corresponds to a signed char or unsigned char argument, or a following n conversion corresponds to a pointer to a signed char argument.
Anything else is just implementation defined or even worse undefined behavior.
In this declaration
unsigned char c2 = -1;
the internal representation of -1 is truncated to one byte and interpreted as unsigned char. That is, all bits of the object c2 are set.
In this call
printf("c2 = %d\n", c2);
the argument that has the type unsigned char is promoted to the type int preserving its value that is 255. This value is outputted as an integer.
In this declaration
unsigned int c2 = -1;
there is no truncation. The integer value -1 that usually occupies 4 bytes (according to the size of the type int) is interpreted as an unsigned value with all bits set.
So in this call
printf("c2 = %u\n", c2);
there is outputted the maximum value of the type unsigned int. It is the maximum value because all bits in the internal representation are set. The conversion from a signed integer type to an unsigned integer type of the same or greater width effectively propagates the sign bit across the full width of the unsigned object.
In C, integers can have multiple representations, and hence multiple storage sizes and value ranges;
refer to the table below for more details.

Converting unsigned char array to signed short in C

I want to convert an array of unsigned char to a signed int!
I've already made some attempts, and the conversion works for single elements like this:
unsigned char byte[2];
signed char *valueS[2];
byte[0] = 0b11110111;
byte[1] = 0b00001001;
//Conversion
for(int i = 0; i < 2; i++)
{
    valueS[i] = (signed char*)&byte[i];
}
//Result
printf("Val 0 -> %d \n", *valueS[0]); // prints -9 correctly
printf("Val 1 -> %d \n", *valueS[1]); // prints 9 correctly
//But when I try to print a 16 bit signed
printf("Int %d \n", *(signed short*)valueS); //It doesn't work! I expected -2295
How can I get the 16-bit signed int from that unsigned char array? Thank you in advance!
How can i get the 16 bit signed int from that unsigned char array?
Supposing you mean you want to obtain the int16_t whose representation is byte-for-byte identical to the contents of an arbitrary array of two unsigned char, the only conforming approach is to declare an int16_t object and copy the array elements to its representation. You could use the memcpy() function to do the copying, or you could do it manually.
For example,
#include <stdint.h>
// ...
unsigned char byte[2] = { 0xF7, 0x05 };
int16_t my_int;
unsigned char *ip = (unsigned char *) &my_int;
ip[0] = byte[0];
ip[1] = byte[1];
printf("Int %d \n", my_int);
You might see a recommendation to use a pointer aliasing trick to try to reinterpret the bytes of the array directly as the representation of an integer. That would take a form similar to your code example, but such an approach is non-conforming, and formally it yields undefined behavior. You may access the representation of an object of any type via a pointer to [unsigned] char, as the code in this answer does, but, generally, you may not otherwise access an object via a pointer to a type incompatible with that object's.
Note also that the printf above is a bit sloppy. In the event that int16_t is a different type from int, such as short int, the corresponding printf directive for it will have a length modifier in it -- likely %hd. But because of details of the way printf is declared, it is the result of promoting my_int to int that will be passed to printf. That rescues the mismatch, and in practice, the printed result will be the same as if you used the correct directive.

Why hex encoded characters greater than x7F displays different in printf function?

I expected the code below show two equal lines:
#include <stdio.h>

int main(void) {
    //printf("%x %x %x\n", '\x7F', (unsigned char)'\x8A', (unsigned char)'\x8B');
    printf("%x %x %x\n", '\x7F', '\x8A', '\x8B');
    printf("%x %x %x\n", 0x7F, 0x8A, 0x8B);
    return 0;
}
My output:
7f ffffff8a ffffff8b
7f 8a 8b
I know this may be an overflow case. But why the ffffff8a (4 bytes)...?
'\x8A' is, according to cppreference,
a single-byte integer character constant, e.g. 'a' or '\n' or '\13'.
What is particularly interesting is the following.
Such constant has type int and a value equal to the representation of c-char in the execution character set as a value of type char mapped to int.
This means that the conversion of '\x8A' to an unsigned int is implementation-defined, because char can be signed or unsigned, depending on the system. If char is signed, as it seems to be the case for you (and is very common), then the value of '\x8A' (which is negative) as a 32-bit int is 0xFFFFFF8A (also negative). However, if char is unsigned, then it becomes 0x0000008A (which is why the commented line in your code works as you'd think it should).
The printf format specifier %x is used to convert an unsigned integer into hexadecimal representation. printf expects an unsigned int and you give it an int; even though the standard says that passing an incorrect type to printf is (generally) undefined behavior, it isn't in your case. This is because the conversion from int to unsigned int is well-defined, even though the opposite isn't.

Since characters from -128 to -1 are same as from +128 to +255, then what is the point of using unsigned char?

#include <stdio.h>
#include <conio.h>
int main()
{
    char a = -128;
    while (a <= -1)
    {
        printf("%c\n", a);
        a++;
    }
    getch();
    return 0;
}
The output of the above code is same as the output of the code below
#include <stdio.h>
#include <conio.h>
int main()
{
    unsigned char a = +128;
    while (a <= +254)
    {
        printf("%c\n", a);
        a++;
    }
    getch();
    return 0;
}
Then why do we use unsigned char and signed char?
K & R, chapter and verse, p. 43 and 44:
There is one subtle point about the conversion of characters to
integers. The language does not specify whether variables of type char
are signed or unsigned quantities. When a char is converted to an int,
can it ever produce a negative integer? The answer varies from machine
to machine, reflecting differences in architecture. On some machines,
a char whose leftmost bit is 1 will be converted to a negative integer
("sign extension"). On others, a char is promoted to an int by adding
zeros at the left end, and thus is always positive. [...] Arbitrary
bit patterns stored in character variables may appear to be negative
on some machines, yet positive on others. For portability, specify
signed or unsigned if non-character data is to be stored in char
variables.
With printing characters - no difference:
The function printf() uses "%c" and takes the int argument and converts it to unsigned char and then prints it.
char a;
printf("%c\n",a); // a is converted to int, then passed to printf()
unsigned char ua;
printf("%c\n",ua); // ua is converted to int, then passed to printf()
With printing values (numbers) - difference when system uses a char that is signed:
char a = -1;
printf("%d\n",a); // --> -1
unsigned char ua = -1;
printf("%d\n",ua); // --> 255 (Assume 8-bit unsigned char)
Note: Rare machines will have int the same size as char and other concerns apply.
So if code uses a as a number rather than a character, the printing differences are significant.
The bit representation of a number is what the computer stores, but it doesn't mean anything without someone (or something) imposing a pattern onto it.
The difference between the unsigned char and signed char patterns is how we interpret the set bits. In one case we decide that zero is the smallest number and we can add bits until we get to 0xFF or binary 11111111. In the other case we decide that 0x80 is the smallest number and we can add bits until we get to 0x7F.
The reason we have the funny way of representing signed numbers (the latter pattern) is because it places zero 0x00 roughly in the middle of the sequence, and because 0xFF (which is -1, right before zero) plus 0x01 (which is 1, right after zero) add together to carry until all the bits carry off the high end, leaving 0x00 (-1 + 1 = 0). Likewise -5 + 5 = 0 by the same mechanism.
For fun, there are a lot of bit patterns that mean different things. For example 0x2a might be what we call a "number" or it might be a * character. It depends on the context we choose to impose on the bit patterns.
Because unsigned char is used for one-byte integers in C89.
Note there are three distinct char-related types in C89: char, signed char, unsigned char.
For character data, char is used.
unsigned char and signed char are used for one-byte integers, like short is used for two-byte integers. You should not really use signed char or unsigned char for characters. Neither should you rely on the ordering of those values.
Different types are created to tell the compiler how to "understand" the bit representation of one or more bytes. For example, say I have a byte which contains 0xFF. If it's interpreted as a signed char, it's -1; if it's interpreted as an unsigned char, it's 255.
In your case, a, no matter whether signed or unsigned, is promoted to int and passed to printf(), which then implicitly converts it to unsigned char before printing it out as a character.
But let's consider another case:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char a = -1;
    unsigned char b;
    memmove(&b, &a, 1);
    printf("%d %u", a, b);
}
memmove() is used just to reinterpret the byte as an unsigned char without undefined behaviour.
Its output on my machine is:
-1 255
Also, think about this ridiculous question:
Suppose sizeof (int) == 4; since arrays of characters (unsigned
char[]){0, 0, 0, 0} through (unsigned char[]){UCHAR_MAX, UCHAR_MAX,
UCHAR_MAX, UCHAR_MAX} are the same as unsigned ints from 0 to
UINT_MAX, then what is the point of using unsigned int?

Why does printing char sometimes print 4 bytes number in C

Why does printing a hex representation of char to the screen using printf sometimes prints a 4 byte number?
This is the code I have written
#include <stdio.h>
#include <stdint.h>

int main(void) {
    char testStream[8] = {'a', 'b', 'c', 'd', 0x3f, 0x9d, 0xf3, 0xb6};
    int i;
    for (i = 0; i < 8; i++) {
        printf("%c = 0x%X, ", testStream[i], testStream[i]);
    }
    return 0;
}
And following is the output:
a = 0x61, b = 0x62, c = 0x63, d = 0x64, ? = 0x3F, � = 0xFFFFFF9D, � = 0xFFFFFFF3, � = 0xFFFFFFB6
char appears to be signed on your system. With the standard "two's complement" representation of integers, having the most significant bit set means it is a negative number.
In order to pass a char to a vararg function like printf it has to be expanded to an int. To preserve its value the sign bit is copied to all the new bits (0x9D → 0xFFFFFF9D). Now the %X conversion expects and prints an unsigned int and you get to see all the set bits in the negative number rather than a minus sign.
If you don't want this, you have to either use unsigned char or cast it to unsigned char when passing it to printf. An unsigned char has no extra bits compared to a signed char and therefore the same bit pattern. When the unsigned value gets extended, the new bits will be zeros and you get what you expected in the first place.
From the C standard's description of %X (C11 7.21.6.1/8):
The unsigned int argument is converted to unsigned octal (o), unsigned
decimal (u), or unsigned hexadecimal notation (x or X) in the style dddd; the
letters abcdef are used for x conversion and the letters ABCDEF for X
conversion.
You did not provide an unsigned int as argument1, therefore your code causes undefined behaviour.
In this case the undefined behaviour manifests itself as the implementation of printf writing its code for %X to behave as if you only ever pass unsigned int. What you are seeing is the unsigned int value which has the same bit-pattern as the negative integer value you gave as argument.
There's another issue too, with:
char testStream[8] = {'a', 'b', 'c', 'd', 0x3f, 0x9d, 0xf3, 0xb6};
On your system the range of char is -128 to +127. However 0x9d, which is 157, is out of range for char. This causes implementation-defined behaviour (and may raise a signal); the most common implementation definition here is that the char with the same bit-pattern as (unsigned char)0x9d will be selected.
1 Although it says unsigned int, this section is usually interpreted to mean that a signed int, or any argument of lower rank, with a non-negative value is permitted too.
On your machine, char is signed by default. Change the type to unsigned char and you'll get the results you are expecting.
A Quick explanation on why this is
In computer systems, the MSB (Most Significant Bit) is the bit with the highest value (the leftmost bit). The MSB of a number is used to determine if the number is positive or negative. Even though a char type is 8 bits long, a signed char can only use 7 of them for the magnitude, because the 8th bit determines whether it is positive or negative. Here is an example:
Data Type: signed char
Decimal:   25
Binary:    00011001
           ^
           |
           +-- Sign flag. 0 indicates a positive number, 1 a negative number.
Because a signed char uses the 8th bit as a sign flag, the number of bits it can actually use to store a number is 7. The largest value you can store in 7 bits is 127 (7F in hex).
In order to convert a number from positive to negative, computers use something called two's complement. How it works is that all the bits are inverted, then 1 is added to the value. Here's an example:
Decimal: 25
Binary: 00011001
Decimal: -25
Binary: 11100111
When you declared char testStream[8], the compiler assumed you wanted signed chars. When you assigned a value of 0x9D or 0xF3, those numbers were bigger than 0x7F, the biggest number that can fit into 7 bits of a signed char. Therefore, when you tried to printf the value to the screen, it was sign-extended to an int and filled with FFs.
I hope this explanation clears things up!
char is signed on your platform: the initializer 0x9d for the 6th character is larger than CHAR_MAX (157 > 127), so it is converted to char as a negative value -99 (157 - 256 = -99) stored at offset 5 in testStream.
When you pass testStream[5] as an argument to printf, it is first promoted to int, with a value of -99. printf actually expects an unsigned int for the "%X" format specifier.
On your architecture, int is 32 bits with 2's complement representation of negative values, hence the value -99 passed as int is interpreted as 4294967197 (2^32-99), whose hexadecimal representation is 0xFFFFFF9D. On a different architecture, it could be something else: on 16-bit DOS, you would get 0xFF9D, on a 64-bit Cray you might get 0xFFFFFFFFFFFFFF9D.
To avoid this confusion, you should cast the operands of printf as (unsigned char). Try replacing your printf with this:
printf("%c = 0x%2X, ", (unsigned char)testStream[i], (unsigned char)testStream[i]);
What seems to happen here is an implicit char -> int -> unsigned int conversion. When a positive char is converted to int nothing bad happens. But negative chars such as 0x9d, 0xf3, 0xb6 stay negative when converted to int, and therefore become 0xffffff9d, 0xfffffff3, 0xffffffb6. Note that the actual value is not changed: 0xffffff9d and 0x9d both represent -99 (as int and as signed char, respectively).
To print them properly you can change your code to
printf("%c = 0x%X, ", testStream[i] & 0xff, testStream[i] & 0xff);
