How to print unsigned char as 2-digit hex value in C?

I am trying to print out an unsigned char value as a 2-digit hex value, but I always get a 4-digit hex value, and I'm not sure what's wrong with my code.
// unsigned char declaration
unsigned char status = 0x00;
// printing out the value
printf("status = (0x%02X)\n\r", (status |= 0xC0));
I am expecting a 2-digit hex result, 0xC0, but I always get 0xC0FF.
Also, when I tried to print the same variable (status) as an unsigned char with the %bu format specifier, I got 255 as the output.
How do you get just the two hex characters as output?

As far as I know, the Keil C compiler doesn't fully conform to the C standard. In particular, it likely doesn't follow the standard promotion rules for passing char values to variadic functions; on an 8-bit CPU, there are performance advantages in not automatically expanding 8-bit values to 16 bits or more.
As a workaround, you can explicitly truncate the high-order bits before passing the argument to printf. Try this:
#include <stdio.h>

int main(void) {
    unsigned char status = 0x00;
    status |= 0xC0;
    printf("status = 0x%02X\n", (unsigned int)(status & 0xFF));
    return 0;
}
Doing a bitwise "and" with 0xFF clears all but the bottom 8 bits; the cast to unsigned int shouldn't be necessary, but it guarantees that the argument actually has the type printf expects for a "%02X" format.
You should also consult your implementation's documentation regarding any non-standard behavior for type promotions and printf.
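If your compiler supports C99, the hh length modifier is another option; this is a sketch for standard C, and an older Keil compiler may well not support it:

#include <stdio.h>

int main(void) {
    unsigned char status = 0x00;
    status |= 0xC0;
    /* "hh" tells printf the promoted int argument originated as an
       unsigned char, so only the low byte is printed: 0xC0 */
    printf("status = 0x%02hhX\n", status);
    return 0;
}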

You are sending a char to a format string that expects an int. The printf function is grabbing another byte off the stack to fill it out. Try
printf("%02X",(int)(status|0xC0));

Looking at all the answers, I think probably we are missing another way of doing this.
const unsigned char chararr[] = "abceXYZ";
for (int i = 0; i < 7; ++i) {
    printf("%#04X %d %c\n", chararr[i], chararr[i], chararr[i]);
}
0X61 97 a
0X62 98 b
0X63 99 c
0X65 101 e
0X58 88 X
0X59 89 Y
0X5A 90 Z
If you use %#04x (lowercase x), the output will have a lowercase 0x prefix. The # flag tells the function to print the 0x prefix, and 04 sets the minimum number of output characters, counting the prefix: with it, an input of 0xa prints as 0x0a; without the 04, it prints as 0xa.
On my computer (a Dell workstation), the output is exactly what the question expects:
unsigned char status = 0x00;
printf("status = (0x%02X)\n\r", (status |= 0xC0));
// output:
// status = (0xC0)
// which is exactly what the original question expects.
Better illustrated by examples:
printf("status = (%#02x)\n", (status |= 0xC0));
printf("status = (%#04x)\n", (status |= 0xC0));
printf("status = (%#04x)\n", 0x0f);
printf("status = (%#02x)\n", 0x0f);
status = (0xc0)
status = (0xc0)
status = (0x0f)
status = (0xf)

Cast it to unsigned char:
printf("status = (0x%02X)\n\r", (unsigned char)(status |= 0xC0));

Related

Char automatically converts to int (I guess)

I have the following code:
char temp[] = { 0xAE, 0xFF };
printf("%X\n", temp[0]);
Why is the output FFFFFFAE, not just AE?
I tried
printf("%X\n", 0b10101110);
And the output is correct: AE.
Suggestions?
The answer you're getting, FFFFFFAE, is a result of the char data type being signed. If you check the value, you'll notice that it's equal to -82, where -82 + 256 = 174, or 0xAE in hexadecimal.
The reason you get the correct output when you print 0b10101110 or even 174 is that you're using the literal values directly, whereas in your example you're first putting the 0xAE value into a signed char, where it wraps around modulo 256 into the signed range, if you want to think of it that way.
So in other words:
0 = 0 = 0x00
127 = 127 = 0x7F
128 = -128 = 0xFFFFFF80
129 = -127 = 0xFFFFFF81
174 = -82 = 0xFFFFFFAE
255 = -1 = 0xFFFFFFFF
256 = 0 = 0x00
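As a quick check, here is a minimal sketch that reproduces the table above, assuming an 8-bit signed char with two's-complement representation (the out-of-range conversions are implementation-defined, but behave this way on typical machines):

#include <stdio.h>

int main(void) {
    int values[] = { 0, 127, 128, 129, 174, 255, 256 };
    for (int i = 0; i < 7; ++i) {
        char c = (char)values[i];  /* wraps into the signed char range */
        /* c is promoted to int for printf, sign-extending negative values */
        printf("%3d = %4d = 0x%02X\n", values[i], c, c);
    }
    return 0;
}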
To fix this "problem", you could declare the same array you initially did, just make sure to use an unsigned char type array and your values should print as you expect.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    unsigned char temp[] = { 0xAE, 0xFF };

    printf("%X\n", temp[0]);
    printf("%d\n\n", temp[0]);

    printf("%X\n", temp[1]);
    printf("%d\n\n", temp[1]);

    return EXIT_SUCCESS;
}
Output:
AE
174
FF
255
https://linux.die.net/man/3/printf
According to the man page, %x or %X accepts an unsigned integer. Thus it will read 4 bytes from the stack.
In any case, on most architectures you can't pass a parameter smaller than a word (i.e. an int or long), and in your case it will be converted to int.
In the first case, you're passing a char, so it is converted to int. Both are signed, so the conversion sign-extends, and you see the leading FFs.
In your second example, you're actually passing an int all the way, so no cast is performed.
If you'd try:
printf("%X\n", (char) 0b10101110);
You'd see that FFFFFFAE will be printed.
When you pass a data type smaller than int (as char is) to a variadic function (as printf(3) is), the parameter is converted to int if it is signed, or to unsigned int if it is unsigned. What you observe is sign extension: because the most significant bit of the char variable is set, it is replicated into the three bytes needed to complete an int.
To solve this and keep the data in 8 bits, you have two possibilities:
Let your signed char convert to an int (with sign extension), then mask off bits 8 and above.
printf("%X\n", (int) my_char & 0xff);
Declare your variable as unsigned, so it is promoted to an unsigned int.
unsigned char my_char;
...
printf("%X\n", my_char);
This code causes undefined behaviour. The argument to %X must have type unsigned int, but you supply char.
Undefined behaviour means that anything can happen; including, but not limited to, extra F's appearing in the output.

C and bit shifting in a char

I am new to C and having a hard time understanding why the code below prints ffffffff when binary 11111111 should equal hex ff.
int i;
char num[8] = "11111111";
unsigned char result = 0;
for ( i = 0; i < 8; ++i ) {
    result |= (num[i] == '1') << (7 - i);
}
printf("%X", bytedata);
You print bytedata which may be uninitialized.
Replace
printf("%X", bytedata);
with
printf("%X", result);
Your code then runs fine.
Although it is legal in C, for good practice you should change
char num[8] = "11111111";
to
char num[9] = "11111111";
because in C a null character ('\0') is always appended to a string literal. Also, as written it would not compile as a C++ file with g++.
EDIT
To answer your question
If I use char the result is FFFFFFFF but if I use unsigned char the result is FF.
Answer:
Case 1:
In C the size of char is 1 byte on most implementations. If it is unsigned, all 8 bits hold the value, so it can store at most 11111111 in binary, FF in hex (255 in decimal). When you print it with printf("%X", result);, the value is implicitly converted to unsigned int and prints as FF.
Case 2: But when you use (signed) char, the MSB is used as the sign bit, leaving only 7 bits for the magnitude, so the range is -128 to 127 in decimal. When you assign it FF (255 in decimal), the value doesn't fit; the result of that conversion is implementation-defined (on typical two's-complement machines it becomes -1), and that negative value sign-extends when promoted for printing.
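A minimal sketch contrasting the two cases (again assuming an 8-bit, two's-complement signed char, so storing 0xFF in it yields -1 on typical machines):

#include <stdio.h>

int main(void) {
    char s = (char)0xFF;     /* implementation-defined; typically -1 */
    unsigned char u = 0xFF;  /* always 255 */

    printf("%X\n", s);  /* sign-extended through int promotion: FFFFFFFF */
    printf("%X\n", u);  /* promoted to int 255: FF */
    return 0;
}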

C Printing Hexadecimal

Okay, so I am trying to print the hexadecimal values of a struct. My print function does the following:
int len = sizeof(someStruct);
unsigned char *buffer = (unsigned char*)&someStruct;
int count;
for (count = 0; count < len; count++) {
    fprintf(stderr, "%02x ", buffer[count]);
}
fprintf(stderr, "\n");
Here is the definition of the struct:
struct someStruct {
    unsigned char a;
    short myShort;
} __attribute__((packed)) someStruct;
The length of this struct, printed out, is as expected (console output):
sizeof(someStruct): 3 bytes
The issue I am encountering is the following. There is a short which I set to a value:
someStruct.myShort = 0x08;
Now this short is 2 bytes long. When it is printed to the console, however, it does not show the most significant 0x00. Here is the output I get:
stderr: 00 08
I would like the following output however (3 bytes long),
stderr: 00 00 08
If I fill the short with 0xFFFF, then I do get the 2-byte output; however, whenever there is a leading 0x00, it is not printed to the console.
Any ideas on what I am doing wrong? It is probably something small I am overlooking.
After you provided more info: your code is OK for me. It prints this output:
00 08 00
The first 00 is from unsigned char a; the following bytes 08 00 are from the short. They are swapped because of the platform-dependent byte order (endianness) used when storing data in memory.
If you want the bytes of the short in the other order, you can print the short directly:
fprintf(stderr, "%02x %02x", (someStruct.myShort >> 8) & 0xFF, someStruct.myShort & 0xFF)
I don't see a problem with your code. However, I get 08 00, which makes sense on my little-endian Intel machine.
The problem is in the format of the printf:
%02x
%02x means that the value is printed as hex (x), with a minimum width of 2 characters (2), padding with zeros (0).
Try with
fprintf(stderr, "%04x ", buffer[count]);
The width specifier in the format string (2 in your case) refers to the minimum number of characters in the text output, not the number of bytes to print. Try using "%04x " as your format string instead.
As for the digit grouping (00 08 as opposed to 0008): Plain old printf doesn't support that, but POSIX printf does. Info here: Digit grouping in C's printf
Take care not to shift in sign bits should the value be signed. Use "hh" to print only 1 byte worth of data; "hh" is available as of C99. See What is the purpose of the h and hh modifiers for printf?
fprintf(stderr, "%02hhx %02hhx", someStruct.myShort >> 8, someStruct.myShort);
[Edit: OP's latest edit wants to see 3 bytes.] This will print all fields' contents. Each field is in the endian order of the machine.
size_t len = sizeof(someStruct);
const unsigned char *buffer = (unsigned char*)&someStruct;
size_t count;
for (count = 0; count < len; count++) {
    fprintf(stderr, "%02x ", buffer[count]);
}
fprintf(stderr, "\n");

Why does C print my hex values incorrectly?

So I'm a bit of a newbie to C and I am curious to figure out why I am getting this unusual behavior.
I am reading a file 16 bits at a time and just printing them out as follows.
#include <stdio.h>

#define endian(hex) (((hex & 0x00ff) << 8) + ((hex & 0xff00) >> 8))

int main(int argc, char *argv[])
{
    const int SIZE = 2;
    const int NMEMB = 1;
    FILE *ifp; // input file pointer
    FILE *ofp; // output file pointer
    int i;
    short hex;

    for (i = 2; i < argc; i++)
    {
        // Reads the header and stores the bits
        ifp = fopen(argv[i], "r");
        if (!ifp) return 1;
        while (fread(&hex, SIZE, NMEMB, ifp))
        {
            printf("\n%x", hex);
            printf("\n%x", endian(hex)); // this prints what I expect
            printf("\n%x", hex);
            hex = endian(hex);
            printf("\n%x", hex);
        }
    }
}
The results look something like this:
ffffdeca
cade // expected
ffffdeca
ffffcade
0
0 // expected
0
0
600
6 // expected
600
6
Can anyone explain to me why the last line in each block doesn't print the same value as the second?
The placeholder %x in the format string interprets the corresponding parameter as unsigned int.
To print the parameter as short, add a length modifier h to the placeholder:
printf("%hx", hex);
http://en.wikipedia.org/wiki/Printf_format_string#Format_placeholders
This is due to integer type promotion.
Your shorts are implicitly promoted to int (which is 32 bits here), and since they are signed, the promotion sign-extends.
Therefore, your printf() prints the hexadecimal digits of the full 32-bit int.
When your short value is negative, the sign extension fills the top 16 bits with ones, so you get ffffcade rather than cade.
The reason this line:
printf("\n%x", endian(hex));
seems to work is that your macro implicitly discards the upper 16 bits.
You have implicitly declared hex as a signed value (to make it unsigned, write unsigned short hex), so any value above 0x7FFF is considered negative. When printf displays it as a 32-bit int value, it is sign-extended with ones, causing the leading Fs. When you print the return value of endian before truncating it by assigning it to hex, the full 32 bits are available and print correctly.
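To tie the answers together, here is a sketch of three equivalent fixes; each should print cade on a machine with a 16-bit short and 32-bit int (the initial conversion to short is implementation-defined but wraps as expected on two's-complement machines):

#include <stdio.h>

int main(void) {
    short hex = (short)0xCADE;  /* negative on a 16-bit two's-complement short */

    printf("%x\n", hex & 0xffff);          /* mask off the sign-extended bits */
    printf("%hx\n", hex);                  /* "h" converts the argument back to short */
    printf("%x\n", (unsigned short)hex);   /* convert before the promotion */
    return 0;
}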

Converting an int into a 4 byte char array (C)

Hey, I'm looking to convert an int entered by the user into 4 bytes that I am assigning to a character array. How can this be done?
Example:
Convert a user inputs of 175 to
00000000 00000000 00000000 10101111
An issue with all of the answers so far: converting 255 should result in 0 0 0 ff, although it prints out as 0 0 0 ffffffff.
unsigned int value = 255;
buffer[0] = (value >> 24) & 0xFF;
buffer[1] = (value >> 16) & 0xFF;
buffer[2] = (value >> 8) & 0xFF;
buffer[3] = value & 0xFF;
union {
    unsigned int integer;
    unsigned char byte[4];
} temp32bitint;
temp32bitint.integer = value;
buffer[8] = temp32bitint.byte[3];
buffer[9] = temp32bitint.byte[2];
buffer[10] = temp32bitint.byte[1];
buffer[11] = temp32bitint.byte[0];
Both result in 0 0 0 ffffffff instead of 0 0 0 ff.
Just another example: with 175 as the input, it prints 0, 0, 0, ffffffaf when it should be just 0, 0, 0, af.
The portable way to do this (ensuring that you get 0x00 0x00 0x00 0xaf everywhere) is to use shifts:
unsigned char bytes[4];
unsigned long n = 175;
bytes[0] = (n >> 24) & 0xFF;
bytes[1] = (n >> 16) & 0xFF;
bytes[2] = (n >> 8) & 0xFF;
bytes[3] = n & 0xFF;
The methods using unions and memcpy() will get a different result on different machines.
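For instance, a runnable sketch of the shift approach (variable names follow the snippet above; this should print 00 00 00 af on any machine):

#include <stdio.h>

int main(void) {
    unsigned char bytes[4];
    unsigned long n = 175;

    bytes[0] = (n >> 24) & 0xFF;
    bytes[1] = (n >> 16) & 0xFF;
    bytes[2] = (n >> 8) & 0xFF;
    bytes[3] = n & 0xFF;

    /* unsigned char promotes to a non-negative int, so %02x prints cleanly */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}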
The issue you are having is with the printing rather than the conversion. I presume you are using char rather than unsigned char, and you are using a line like this to print it:
printf("%x %x %x %x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
When any type narrower than int is passed to printf, it is promoted to int (or unsigned int, if int cannot hold all the values of the original type). If char is signed on your platform, then 0xff likely does not fit into the range of that type, and it is converted to -1 instead (which has the representation 0xff on a two's-complement machine).
-1 is promoted to an int, and has the representation 0xffffffff as an int on your machine, and that is what you see.
Your solution is to either actually use unsigned char, or else cast to unsigned char in the printf statement:
printf("%x %x %x %x\n", (unsigned char)bytes[0],
(unsigned char)bytes[1],
(unsigned char)bytes[2],
(unsigned char)bytes[3]);
Do you want to address the individual bytes of a 32-bit int? One possible method is a union:
union
{
    unsigned int integer;
    unsigned char byte[4];
} foo;

int main()
{
    foo.integer = 123456789;
    printf("%u %u %u %u\n", foo.byte[3], foo.byte[2], foo.byte[1], foo.byte[0]);
}
Note: corrected the printf to reflect unsigned values.
In your question, you stated that you want to convert a user input of 175 to
00000000 00000000 00000000 10101111, which is big endian byte ordering, also known as network byte order.
A mostly portable way to convert your unsigned integer to a big-endian unsigned char array, as your "175" example suggests, would be to use C's htonl() function (defined in the header <arpa/inet.h> on Linux systems) to convert your unsigned int to big-endian byte order, then use memcpy() (defined in the header <string.h> for C, <cstring> for C++) to copy the bytes into your char (or unsigned char) array.
The htonl() function takes in an unsigned 32-bit integer as an argument (in contrast to htons(), which takes in an unsigned 16-bit integer) and converts it to network byte order from the host byte order (hence the acronym, Host TO Network Long, versus Host TO Network Short for htons), returning the result as an unsigned 32-bit integer. The purpose of this family of functions is to ensure that all network communications occur in big endian byte order, so that all machines can communicate with each other over a socket without byte order issues. (As an aside, for big-endian machines, the htonl(), htons(), ntohl() and ntohs() functions are generally compiled to just be a 'no op', because the bytes do not need to be flipped around before they are sent over or received from a socket since they're already in the proper byte order)
Here's the code:
#include <stdio.h>
#include <arpa/inet.h>
#include <string.h>
int main() {
    unsigned int number = 175;
    unsigned int number2 = htonl(number);
    char numberStr[4];

    memcpy(numberStr, &number2, 4);
    printf("%x %x %x %x\n", numberStr[0], numberStr[1], numberStr[2], numberStr[3]);
    return 0;
}
Note that, as caf said, you have to print the characters as unsigned characters using printf's %x format specifier.
The above code prints 0 0 0 af on my machine (an x86_64 machine, which uses little endian byte ordering), which is hex for 175.
You can try:
void CopyInt(int value, char* buffer) {
    memcpy(buffer, (void*)&value, sizeof(int));  /* copy the bytes of value via its address */
}
Why would you need an intermediate cast to void * in C++?
Because C++ doesn't allow implicit conversion between unrelated object pointer types; you need reinterpret_cast, or casting through void* does the trick.
int a = 1;
char *c = (char *)(&a); // in C++, this should go through an intermediate cast to void*
The issue with the conversion (the reason it's giving you ffffff at the end) is that the value you're combining with the & binary operator is interpreted as signed. Cast it to an unsigned integer, and you'll be fine.
On most platforms, an int is 4 bytes (like int32_t) and a char is 1 byte (like int8_t), though the standard doesn't strictly guarantee this.
I'll show how I resolved client-server communication, sending the current time (4 bytes, a Unix epoch timestamp) in a byte array, and then rebuilding it on the other side. (Note: the protocol was to send 1024 bytes.)
Client side
uint8_t message[1024];
uint32_t t = time(NULL);
uint8_t watch[4] = { t & 255, (t >> 8) & 255, (t >> 16) & 255, (t >> 24) & 255 };
message[0] = watch[0];
message[1] = watch[1];
message[2] = watch[2];
message[3] = watch[3];
send(socket, message, 1024, 0);
Server side
uint8_t res[1024];
uint32_t date;
recv(socket, res, 1024, 0);
date = res[0] + (res[1] << 8) + (res[2] << 16) + (res[3] << 24);
printf("Received message from client %d sent at %d\n", socket, date);
Hope it helps.
You can simply use memcpy as follows:
unsigned int value = 255;
char bytes[4] = {0, 0, 0, 0};
memcpy(bytes, &value, 4);
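Keep in mind that memcpy copies the bytes in the machine's native order, so on a little-endian x86 the value 255 comes out as ff 00 00 00; a quick sketch to see this (assuming a 4-byte unsigned int):

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int value = 255;
    unsigned char bytes[4] = {0, 0, 0, 0};

    memcpy(bytes, &value, 4);
    /* on a little-endian machine this prints: ff 00 00 00 */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}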
The problem arises because char values are promoted to (4-byte) ints when passed to printf; it is not that unsigned char itself is 4 bytes (it is 1 byte). You can use
union {
    unsigned int integer;
    char byte[4];
} temp32bitint;
and cast while printing, to avoid the sign extension that the default promotion to int performs:
printf("%u, %u \n", (unsigned char)Buffer[0], (unsigned char)Buffer[1]);
