I want to convert an array of unsigned char to a signed int!
I've already tried a few things, and the conversion works for a single element like this:
unsigned char byte[2];
signed char *valueS[2];
byte[0] = 0b11110111;
byte[1] = 0b00001001;
//Conversion
for(int i = 0; i < 2; i++)
{
valueS[i] = (signed char*)&byte[i];
}
//Result
printf("Val 0 -> %d \n", *valueS[0]); // print -9 Correctly
printf("Val 1 -> %d \n", *valueS[1]); // print 9 Correctly
//But when i try to print a 16 bit signed
printf("Int %d \n", *(signed short*)valueS); //It doesn't work! I expected -2295
How can i get the 16 bit signed int from that unsigned char array? Thank you in advance!
How can I get the 16-bit signed int from that unsigned char array?
Supposing you mean you want to obtain the int16_t whose representation is byte-for-byte identical to the contents of an arbitrary array of two unsigned char, the only conforming approach is to declare an int16_t object and copy the array elements to its representation. You could use the memcpy() function to do the copying, or you could do it manually.
For example,
#include <stdint.h>
#include <stdio.h>
// ...
unsigned char byte[2] = { 0xF7, 0x05 };
int16_t my_int;
unsigned char *ip = (unsigned char *) &my_int;

// Copy the array into the representation of my_int, byte by byte.
ip[0] = byte[0];
ip[1] = byte[1];

printf("Int %d \n", my_int);
You might see a recommendation to use a pointer aliasing trick to try to reinterpret the bytes of the array directly as the representation of an integer. That would take a form similar to your code example, but such an approach is non-conforming, and formally it yields undefined behavior. You may access the representation of an object of any type via a pointer to [unsigned] char, as the code in this answer does, but, generally, you may not otherwise access an object via a pointer to a type incompatible with that object's.
Note also that the printf above is a bit sloppy. If int16_t is a different type from int, such as short int, the matching printf directive for it has a length modifier -- likely %hd. But because of the way printf is declared, it is the result of promoting my_int to int that is actually passed to printf. That rescues the mismatch, and in practice the printed result will be the same as if you had used the correct directive.
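For example, the explicitly matched call would look like this (a minimal sketch, assuming int16_t is defined as short int on your implementation):

printf("Int %hd \n", my_int);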
Related
Hi, I'm currently learning C and there's something that I don't quite understand.
First of all, I was told that if I did this:
unsigned int c2 = -1;
printf("c2 = %u\n", c2);
It would output 255, according to a table of 8-bit unsigned values I was shown.
But I get a weird result: c2 = 4294967295
Now what's weirder is that this works:
unsigned char c2 = -1;
printf("c2 = %d\n", c2);
But I don't understand: since a char is, well, a char, why does it even print anything? The specifier here is %d, not %u as it should be for unsigned types.
The following code:
unsigned int c2 = -1;
printf("c2 = %u\n", c2);
Will never print 255. The table you were looking at refers to an unsigned integer of 8 bits. An int in C must be at least 16 bits wide in order to comply with the C standard (UINT_MAX is required to be at least 2^16 - 1; see §5.2.4.2.1, page 22 of the draft standard). Therefore the value you will see is going to be a much larger number than 255. The most common implementations use 32 bits for an int, and in that case you'll see 4294967295 (2^32 - 1).
You can check how many bits any type or variable uses on your system with sizeof(type_or_variable) * CHAR_BIT (CHAR_BIT is defined in limits.h and is the number of bits per byte, which is almost always 8).
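For example, a minimal sketch of that check (the two types used here are just examples):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    // Number of bits each type occupies on this implementation.
    printf("unsigned char: %zu bits\n", sizeof(unsigned char) * CHAR_BIT);
    printf("unsigned int:  %zu bits\n", sizeof(unsigned int) * CHAR_BIT);
    return 0;
}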
The correct code to obtain 255 as output is:
unsigned char c = -1;
printf("c = %hhu\n", c);
Where the hh prefix specifier means (from man 3 printf):
hh: A following integer conversion corresponds to a signed char or unsigned char argument, or a following n conversion corresponds to a pointer to a signed char argument.
Anything else is implementation-defined or, even worse, undefined behavior.
In this declaration
unsigned char c2 = -1;
the internal representation of -1 is, in effect, truncated to one byte and reinterpreted as an unsigned char. That is, all bits of the object c2 are set.
In this call
printf("c2 = %d\n", c2);
the argument, which has the type unsigned char, is promoted to the type int, preserving its value, which is 255. This value is output as an integer.
In this declaration
unsigned int c2 = -1;
there is no truncation. The integer value -1, which usually occupies 4 bytes (the size of the type int), is converted to an unsigned value with all bits set.
So in this call
printf("c2 = %u\n", c2);
the maximum value of the type unsigned int is output. It is the maximum value because all bits in the internal representation are set: the conversion from a signed integer type to an unsigned integer type is performed modulo 2^N (N being the width of the unsigned type), so -1 always maps to the largest value the unsigned object can hold.
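A minimal sketch showing both conversions side by side (the exact unsigned int value depends on the width of int on your machine):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned char uc = -1;   // converted modulo UCHAR_MAX + 1 -> 255
    unsigned int  ui = -1;   // converted modulo UINT_MAX + 1  -> UINT_MAX

    printf("uc = %d\n", uc);             // uc is promoted to int; prints 255
    printf("ui = %u\n", ui);             // prints UINT_MAX, e.g. 4294967295
    printf("UINT_MAX = %u\n", UINT_MAX);
    return 0;
}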
In C, an integer type can have different representations, and therefore different storage sizes and value ranges, depending on the implementation; the exact limits for your platform are given by the macros in <limits.h>.
I know that when using %x with printf() we are printing 4 bytes (an int in hexadecimal) from the stack. But I would like to print only 1 byte. Is there a way to do this?
Assumption: you want to print the value of a variable of 1-byte width, i.e., a char.
If you have a char variable, say char x = 0;, and want to print its value, use the %hhx format specifier with printf().
Something like
printf("%hhx", x);
Otherwise, due to default argument promotion, a statement like
printf("%x", x);
would also work (at least for non-negative values): printf() does not blindly read sizeof(unsigned int) bytes from the stack; the value of x is read according to its type and then promoted to int before being passed to printf() anyway.
You need to be careful how you do this to avoid any undefined behaviour.
The C standard allows you to take a pointer to the int, cast it to unsigned char *, and then print the byte you want using pointer arithmetic:
#include <stdio.h>

int main(void)
{
    int foo = 2;
    unsigned char* p = (unsigned char*)&foo;
    printf("%x", p[0]); // outputs the first byte of `foo`
    printf("%x", p[1]); // outputs the second byte of `foo`
    return 0;
}
Note that p[0] and p[1] are promoted to the wider type int before the output is displayed.
You can use the following solution to print one byte with printf:
unsigned char c = 255;
printf("Unsigned char: %hhu\n", c);
If you want to print a single byte that is present in a larger value type, you can mask and shift out the required byte (e.g. int x = 0x12345678; (x & 0x00FF0000) >> 16 -- note the parentheses, since >> binds tighter than &). Or just retrieve the required byte by casting the address to an (unsigned) char pointer and using an offset.
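A minimal sketch of both approaches (which pointer index holds a given byte depends on the machine's byte order):

#include <stdio.h>

int main(void)
{
    unsigned int x = 0x12345678;

    // Mask first, then shift; the parentheses matter because >> binds tighter than &.
    unsigned int byte2 = (x & 0x00FF0000u) >> 16;
    printf("%x\n", byte2);   // 34

    // The same byte through an unsigned char pointer.
    unsigned char *p = (unsigned char *)&x;
    printf("%hhx\n", p[2]);  // 34 on a little-endian machine
    return 0;
}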
I want to print the ASCII code of a char in hex; for example, for
char a = 0xA5;
I want to print A5 on the console. Here is what I have tried:
char a = 0xA5;
printf("%02X", a);
but I get FFFFFFA5. How could I solve this?
Cast the value to unsigned char, then cast again to unsigned int to be printed via %X.
char a = 0xA5;
printf("%02X", (unsigned int)(unsigned char)a);
Note that conversion to a signed integer type that cannot represent the original value is implementation-defined, but conversion to an unsigned integer type is well-defined, according to N1256 §6.3.1.3.
The "problem" is called sign extension -- parameteres are passed as int by default so a char would be converted to int and in the process the sign extension would means that the extra f are added -- make the char unsigned like this
unsigned char a = 0xA5;
printf("%02X", a);
and the compiler will understand how to treat your data.
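A minimal side-by-side sketch of the two fixes (assuming plain char is signed on your platform, as the FFFFFFA5 output suggests):

#include <stdio.h>

int main(void)
{
    char          s = 0xA5;   // implementation-defined when plain char is signed; typically -91
    unsigned char u = 0xA5;   // always 165

    printf("%02X\n", (unsigned int)(unsigned char)s); // A5 (the double cast from the first answer)
    printf("%02X\n", u);                              // A5 (the unsigned declaration from the second answer)
    return 0;
}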
In the example below:
#include <stdint.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
int16_t array1[] = {0xffff,0xffff,0xffff,0xffff};
char array2[] = {0xff,0xff,0xff,0xff};
printf("Char size: %d \nint16_t size: %d \n", sizeof(char), sizeof(int16_t));
if (*array1 == *array2)
printf("They are the same \n");
if (array1[0] == array2[0])
printf("They are the same \n");
printf("%x \n", array1[0]);
printf("%x \n", *array1);
printf("%x \n", array2[0]);
printf("%x \n", *array2);
}
Output:
Char size: 1
int16_t size: 2
They are the same
They are the same
ffffffff
ffffffff
ffffffff
ffffffff
Why are 32-bit values printed for both char and int16_t, and why can they be compared and considered the same?
They're the same because they're all different representations of -1.
They print as 32 bits' worth of ff because you're on a machine where int is 32 bits, you used %x, and the default argument promotions took place (basically, everything smaller gets promoted to int). Try using %hx. (That'll probably get you ffff; I don't know of a way to get ff here other than by using unsigned char, or masking with & 0xff: printf("%x \n", array2[0] & 0xff).)
Expanding on "They're the same because they're all different representations of -1":
int16_t is a signed 16-bit type. It can contain values in the range -32768 to +32767.
char is an 8-bit type, and on your machine it's evidently signed also. So it can contain values in the range -128 to +127.
0xff is decimal 255, a value which can't be represented in a signed char. If you assign 0xff to a signed char, that bit pattern ends up getting interpreted not as 255, but rather as -1. (Similarly, if you assigned 0xfe, that would be interpreted not as 254, but rather as -2.)
0xffff is decimal 65535, a value which can't be represented in an int16_t. If you assign 0xffff to a int16_t, that bit pattern ends up getting interpreted not as 65535, but rather as -1. (Similarly, if you assigned 0xfffe, that would be interpreted not as 65534, but rather as -2.)
So when you said
int16_t array1[] = {0xffff,0xffff,0xffff,0xffff};
it was basically just as if you'd said
int16_t array1[] = {-1,-1,-1,-1};
And when you said
char array2[] = {0xff,0xff,0xff,0xff};
it was just as if you'd said
char array2[] = {-1,-1,-1,-1};
So that's why *array1 == *array2, and array1[0] == array2[0].
Also, it's worth noting that all of this is very much because of the types of array1 and array2. If you instead said
uint16_t array3[] = {0xffff,0xffff,0xffff,0xffff};
unsigned char array4[] = {0xff,0xff,0xff,0xff};
You would see different values printed (ffff and ff), and the values from array3 and array4 would not compare the same.
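A minimal sketch of those unsigned variants:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t array3[] = {0xffff, 0xffff, 0xffff, 0xffff};
    unsigned char array4[] = {0xff, 0xff, 0xff, 0xff};

    printf("%x \n", array3[0]);   // ffff: promoted to int, no sign extension
    printf("%x \n", array4[0]);   // ff

    if (array3[0] == array4[0])
        printf("They are the same \n");
    else
        printf("They are different \n");   // this branch runs: 65535 != 255
    return 0;
}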
Another answer stated that "there is no type information in C at runtime". That's true but misleading in this case. When the compiler generates code to manipulate values from array1, array2, array3, and array4, the code it generates (which of course is significant at runtime!) will be based on their types. In particular, when generating code to fetch values from array1 and array2 (but not array3 and array4), the compiler will use instructions which perform sign extension when assigning to objects of larger type (e.g. 32 bits). That's how 0xff and 0xffff got changed into 0xffffffff.
Because there is no type information in C at runtime, using a plain %x for printing tells printf that the argument is an unsigned int. The poor library function just trusts you ... see the length modifier section of printf(3) for how to give printf the information it needs.
Using %x to print negative values causes undefined behaviour so you should not assume that there is anything sensible about what you are seeing.
The correct format specifier for char is %hhd, and for int16_t it is "%" PRId16. You will need #include <inttypes.h> to get the latter macro.
Because of the default argument promotions, it is also correct to use %d with char and int16_t [1]. If you change your code to use %d instead of %x, it will no longer exhibit undefined behaviour, and the results will make sense.
[1] The C standard doesn't actually say that, but it's assumed that that was the intent of the writers.
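A minimal sketch of those matched specifiers, using standalone -1 values rather than the arrays from the question (and assuming plain char is signed, as it appears to be here):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int16_t v = -1;
    char    c = -1;

    printf("%" PRId16 "\n", v);   // -1
    printf("%hhd\n", c);          // -1
    printf("%d\n", v);            // also -1: v is promoted to int
    printf("%d\n", c);            // also -1: c is promoted to int
    return 0;
}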
I have these definitions:
int data = uartBaseAddress[UART_DATA_REGISTER / 4]; // data coming from UART RX port
char message[20]; // array of 20 chars
Now when I try to do this:
message[0] = (char) data;
printf("%x", message[0]);
It prints (for example): "ffffff9c".
Of course I want only the last 8 bits ("9c") and I don't understand how to properly do the conversion from int to char.
EDIT: I mean I have to populate the array like this:
data = 0xFFFFFF9c;
message[0] = data & 0xFF; // it has to contain only 9c
data = 0xFFFFFFde;
message[1] = data & 0xFF; // it has to contain only de
etc...
The conversion is correct. It's the printf that's the problem.
Apparently plain char is signed on your system (it can be either signed or unsigned).
I'm going to guess that the value printed was ffffff9c (8 digits), not ffffffff9c (10 digits); please verify that.
The value of data was probably -100. Converting that value from int to char would yield -100, since that value is within the range of type char (probably -128 .. +127).
But the %x format specifier requires an argument of type unsigned int, not int. The value of message[0] is promoted to int when it's passed to printf, but printf, because of the format string, assumes that the argument is of type unsigned int.
Strictly speaking, the behavior is undefined, but most likely printf will simply take the int value passed to it and treat it as if it were an unsigned int. (int)-100 and (unsigned int)0xffffff9c have the same representation.
There is no printf format specifier to print a signed value in hexadecimal. If you change the format from %x to %d, you'll at least see the correct value.
But you should step back and decide just what you're trying to accomplish. If you want to extract the low-order 8 bits of data, you can do so by masking it, as unwind's answer suggests. Or you can convert it to unsigned char rather than plain char to guarantee that you'll get an unsigned value.
An unsigned char value is still promoted to int when passed to printf, so to be fully correct you should explicitly convert it to unsigned int:
int data = ...;
unsigned char message[20]; // change to unsigned char
message[0] = data; // no cast needed, the value is implicitly converted
printf("%x", (unsigned int)message[0]);
Strictly speaking, the (unsigned int) isn't necessary. message[0] is an unsigned char, so it will be converted to a non-negative int value, and there's a special-case rule that says int and unsigned int arguments with the same value are interchangeable as function arguments. Still, my own preference is to use the cast because it's clearer, and because adding the cast is easier (particularly for anyone reading the code) than following the line of reasoning that says it's not necessary.
There's no need for message, just mask off the bits you don't want:
printf("%x\n", data & 0xff);
To convert a 32-bit integer to a 4-element array of 8-bit numbers, do:
#include <stdint.h>
// ...
uint32_t data = ...;   // not const, since the loop shifts it in place
uint8_t message[20];
for(int i = 0; i < 4; ++i)
{
    message[i] = data & 0xff;  // keep only the low byte
    data >>= 8;                // bring the next byte into the low position
}
The above stores the bytes in little-endian order (least significant byte first), regardless of the machine's own byte order. If data is 0x12345678, then message will begin 0x78, 0x56, 0x34, 0x12.
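Put together as a complete, runnable sketch (with a made-up value in place of the UART data, and the bytes printed back with %02x to check the result):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t data = 0x12345678;   // stand-in for the UART value
    uint8_t message[20];

    for (int i = 0; i < 4; ++i)
    {
        message[i] = data & 0xff;  // low byte first
        data >>= 8;
    }

    for (int i = 0; i < 4; ++i)
        printf("%02x ", message[i]);   // prints: 78 56 34 12
    printf("\n");
    return 0;
}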