How to extract the lower byte from a hex value? - c

I am making a program to communicate with a serial device. Device is giving me the data in hex format. The hex value I am getting is FFFFFF84, but I am interested in extracting only the last two hex digits (one byte), that is, 84. How can I extract it?
while (1)
{
    int i;
    char receivebuffer[1];

    read(fd, receivebuffer, sizeof receivebuffer);
    for (i = 0; i < sizeof(receivebuffer); i++)
    {
        printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
    }
    return 0;
}
I am getting the data in receivebuffer. Please help thanks.

You want to extract the last byte (the last two hex digits)? You need the bitwise AND operator '&' to extract it:
FFFFFF84 -> 1111 1111 1111 1111 1111 1111 1000 0100
000000FF -> 0000 0000 0000 0000 0000 0000 1111 1111
---------------------------------------------------
after & -> 0000 0000 0000 0000 0000 0000 1000 0100
so the answer is this assignment:
last2 = input & 0xFF;
Hope this answer helps you understand bit operations.
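A minimal, self-contained sketch of the masking idea (the variable names are made up for illustration):

#include <stdio.h>

int main(void)
{
    unsigned int input = 0xFFFFFF84;     /* the value from the question */
    unsigned char last2 = input & 0xFF;  /* keep only the low byte */

    printf("low byte is %X\n", last2);   /* prints 84 */
    return 0;
}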

You're just confused because printf is printing your data as a sign-extended int (this means that on your system char is treated as signed - note that this is implementation-defined).
Change your printf to:
printf("value of buffer is %#X\n\n", (unsigned char)receivebuffer[i]);
or just make the type of receivebuffer unsigned:
unsigned char receivebuffer[1];
// ...
printf("value of buffer is %#X\n\n", receivebuffer[i]);

Device is giving me the data in hex format.
This contradicts your code. It seems the device gives you the data in binary (raw) format and you convert it to hex for printing. That is a huge difference.
If you do
printf("value of buffer is %X\n\n", (char)receivebuffer[i]);
the char (whose cast is unnecessary, as it is already a char) gets converted to int. As char is signed on your system, the resulting int is negative, hence the FFF... at the start.
You can do any of
printf("value of buffer is %X\n\n", receivebuffer[i] & 0xFF);
printf("value of buffer is %X\n\n", (unsigned char)receivebuffer[i]);
printf("value of buffer is %X\n\n", (uint8_t)receivebuffer[i]);

I know this is an old topic but I just want to add another option:
printf("value of buffer is %hhx\n\n", receivebuffer[i]);
%hhx is hex with the hh ("short short", i.e. char-sized) length modifier: the value is converted to unsigned char before printing, so you get an 8-bit hex value.

The device just returns bytes. It is printf which displays a byte in a certain format (decimal, hex, etc.). To show bytes in hex you should use the "0x%02x" format, which zero-pads each byte to two hex digits.
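For example, a small hex-dump helper built on that format (a sketch; buf and len stand in for your received data):

#include <stdio.h>

static void dump_hex(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        printf("0x%02x ", buf[i]);   /* two zero-padded hex digits per byte */
    printf("\n");
}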

Related

Why does casting short* to int* show an incorrect value?

To better learn how malloc and pointers work internally, I created an array of short. On my system, int is double the size of short, so I created another pointer q of type int* and set it to the cast value of p:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

int main() {
    short* p = (short*) malloc(2 * sizeof(short));
    int* q = (int*) p;
    assert(sizeof *q == 2 * sizeof *p);
    p[0] = 0;
    p[1] = 1;
    printf("%u\n", *q);
}
When I print *q it shows the number 65536 instead of 1 and I can't figure out why. If I understand correctly, p should be represented as the following (assuming short is 2 bytes and int is 4 bytes):
p[0] p[1]
0000 0000 0000 0000 | 0000 0000 0000 0001
So *q should read 4 bytes hence reading the value 1. Instead it shows 65536 which is represented as:
0000 0000 0000 0001 0000 0000 0000 0000
Most systems you're likely to interact with these days use little-endian byte ordering, which mean that the least significant byte comes first.
So the bytes starting at p[1] contain 0x01 0x00, not 0x00 0x01. This also means the bytes starting at p[0] are 0x00 0x00 0x01 0x00. If these bytes are then interpreted as a 4-byte int, it has the value 0x00010000, i.e. 65536 decimal.
Also, reinterpreting bytes in this fashion (i.e. taking a pointer to one type, casting it to another pointer type, and dereferencing), is an aliasing violation and triggers undefined behavior, so there is no guarantee this will always work in this way.
This is due to endianness (https://en.wikipedia.org/wiki/Endianness).
This determines which byte comes first in memory. Therefore, if you flip the bytes in your representation, you get exactly what you provided as the representation for 65536.
You seem to be on a little endian machine.
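A quick way to check which kind of machine you are on is to inspect the bytes of a known value (a minimal sketch; copying through memcpy also avoids the aliasing violation mentioned above):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char b[sizeof x];

    memcpy(b, &x, sizeof x);   /* examine x's bytes without an aliasing cast */
    printf("%s\n", b[0] == 1 ? "little-endian" : "big-endian");
    return 0;
}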

Printf in C prints ffffffe1 instead of e1

I am very much confused. I have a small program where I am printing the values at different address locations.
#include <stdio.h>

int main ()
{
    // unsigned int x = 0x15711056;
    unsigned int x = 0x15b11056;
    char *c = (char*) &x;

    printf("*c is: 0x%x\n", *c);
    printf("size of %zu\n", sizeof(x));
    printf("Value at first address %x\n", *(c+0));
    printf("Value at second address %x\n", *(c+1));
    printf("Value at third address %x\n", *(c+2));
    printf("Value at fourth address %x\n", *(c+3));
    return 0;
}
For the commented-out unsigned int x, the printf values are as expected, i.e.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= 71
printf("Value at fourth address %x\n", *(c+3))= 15
But for the un-commented x, why am I getting the result below for *(c+2)? It should be b1, not ffffffb1. Please help me understand this. I am running it on an online IDE (https://www.onlinegdb.com/online_c_compiler); my PC has an Intel i7.
printf("Value at first address %x\n", *(c+0)) = 56
printf("Value at second address %x\n", *(c+1))= 10
printf("Value at third address %x\n", *(c+2))= ffffffb1
printf("Value at fourth address %x\n", *(c+3))= 15
The value is being sign-extended: 0xB1 is 10110001 in binary, so as a signed char it is negative. You need to use an unsigned char pointer:
unsigned char *c = (unsigned char*) &x;
Your code would work as-is for any byte up to 0x7F.
c is a signed char. 0xB1 is 1011 0001; the most significant bit is 1, so as a signed char it is a negative number. When you pass *(c+2) to printf, it gets promoted to an int, which is signed: sign extension fills the rest of the bits with the same value as the most significant bit of your char, which is 1. At this point printf receives 1111 1111 1111 1111 1111 1111 1011 0001. %x prints it as an unsigned int, thus it prints 0xFFFFFFB1.
You have to declare your pointer as an unsigned char.
unsigned char *c = (unsigned char*) &x;
unsigned int x = 0x15b11056; /* let's say the starting address of x is 0x100 */
char *c = (char*) &x;        /* c is a char pointer: it fetches 1 byte at a time and points to 0x100 */
x looks like this in memory (least significant byte at the lowest address):
-------------------------------------------------
| 0001 0101 | 1011 0001 | 0001 0000 | 0101 0110 |
-------------------------------------------------
    0x103       0x102       0x101       0x100
                                         x, c
Next, when you do *(c+2), let's expand it:
*(c+2) = *(0x100 + 2*1)   /* c is incremented 1 byte at a time */
       = *(0x102)
       = 1011 0001 (in binary)
Notice here that the sign bit is 1, which means the sign bit is going to be copied into the remaining bytes.
Because c points to a signed byte, the value is promoted to int when it is passed to printf, and the sign bit is copied into the remaining bytes; %x then prints that int as unsigned.
For *(c+2) the input looks like:
0000 0000 | 0000 0000 | 0000 0000 | 1011 0001
                                    ^ sign bit
The sign bit is one, so it will be copied into the remaining bytes; the result looks like:
1111 1111 | 1111 1111 | 1111 1111 | 1011 0001
   f    f      f    f      f    f      b    1
I explained the particular part you had doubts about; I hope it helps.

How can I treat a short integer as an array of elements in C?

So I have an integer stored as a short. Let's say:
short i = 3000;
Which in binary is:
0011 0000 0000 0000
I was told I can treat it as an array of two elements where each element is a byte basically, so:
i[0] = 0011 0000
i[1] = 0000 0000
How can I accomplish this?
You could do it like this (assuming short is 2 bytes):
#include <string.h>   /* for memcpy */

short i = 3000;   /* 3000 in binary is: 00001011 10111000 */
unsigned char x[2] = {0};

memcpy(x, &i, 2);
Now x[0] will be 10111000 and x[1] will be 00001011 if this code runs on a little-endian machine, and the reverse holds on a big-endian machine.
Btw, your binary representation of 3000 looks wrong.
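A runnable version of the sketch above, printing both bytes:

#include <stdio.h>
#include <string.h>

int main(void)
{
    short i = 3000;   /* 0x0BB8 */
    unsigned char x[2] = {0};

    memcpy(x, &i, sizeof i);
    printf("x[0] = 0x%02x, x[1] = 0x%02x\n", x[0], x[1]);
    /* prints x[0] = 0xb8, x[1] = 0x0b on a little-endian machine */
    return 0;
}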

48 byte binary to 6 byte binary

I read a 17-character hex string from the command line, "13:22:45:33:99:cd", and want to convert it into a binary value 6 bytes long.
So far I have converted it into the binary digits 0001 0011 0010 0010 0100 0101 0011 0011 1001 1001 1100 1101. However, since I am storing one character per bit in a char array, the result is 48 bytes long. I want the resulting binary to be 6 bytes (48 bits), not 48 bytes. Please suggest how to accomplish this.
Split the string on the ':' character; each substring is then a string you can pass to e.g. strtoul and put in an unsigned char array of size 6.
#include <stdio.h>

const char *input = "13:22:45:33:99:cd";
unsigned int output[6];   /* sscanf's %x conversion expects unsigned int * */
unsigned char result[6];

if (6 == sscanf(input, "%02x:%02x:%02x:%02x:%02x:%02x",
                &output[0], &output[1], &output[2],
                &output[3], &output[4], &output[5])) {
    for (int i = 0; i < 6; i++)
        result[i] = output[i];
} else {
    whine("something's wrong with the input!");   /* placeholder for your error handling */
}
You pack them together, high nibble with low nibble: (hi << 4) | lo (assuming both are 4-bit values).
Although you may prefer to convert the string byte by byte, not digit by digit, in the first place.
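A sketch of that byte-by-byte approach without sscanf (hex_digit and parse_mac are made-up helper names):

/* value of one hex digit, or -1 if ch is not a hex digit */
static int hex_digit(char ch)
{
    if (ch >= '0' && ch <= '9') return ch - '0';
    if (ch >= 'a' && ch <= 'f') return ch - 'a' + 10;
    if (ch >= 'A' && ch <= 'F') return ch - 'A' + 10;
    return -1;
}

/* parse "13:22:45:33:99:cd" into out[6]; returns 0 on success, -1 on bad input */
static int parse_mac(const char *s, unsigned char out[6])
{
    for (int i = 0; i < 6; i++) {
        int hi = hex_digit(s[i * 3]);
        int lo = hex_digit(s[i * 3 + 1]);
        if (hi < 0 || lo < 0)
            return -1;
        out[i] = (unsigned char)((hi << 4) | lo);   /* pack two nibbles into one byte */
    }
    return 0;
}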

How to convert a 48-bit byte array into a 64-bit integer in C?

I have an unsigned char array whose size is 6. The content of the byte array is an integer (4096 * the number of seconds since Unix time). I know that the byte array is big-endian.
Is there a library function in C that I can use to convert this byte array into an int64_t, or do I have to do it manually?
Thanks!
PS: just in case you need more information, yes, I am trying to parse a Unix timestamp. Here is the format specification of the timestamp that I am dealing with.
A C99 implementation may offer uint64_t (it doesn't have to provide it if there is no native fixed-width integer that is exactly 64 bits), in which case, you could use:
#include <stdint.h>
unsigned char data[6] = {0};   /* fill with the bytes from somewhere */

uint64_t result = ((uint64_t)data[0] << 40) |
                  ((uint64_t)data[1] << 32) |
                  ((uint64_t)data[2] << 24) |
                  ((uint64_t)data[3] << 16) |
                  ((uint64_t)data[4] <<  8) |
                  ((uint64_t)data[5] <<  0);
If your C99 implementation doesn't provide uint64_t you can still use unsigned long long or (I think) uint_least64_t. This will work regardless of the native endianness of the host.
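Since the question says the stored value is 4096 times the number of seconds since the Unix epoch, recovering whole seconds afterwards is just a division (assuming that format):

uint64_t seconds = result / 4096;   /* equivalently, result >> 12 */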
Have you tried this?
unsigned char a[] = {0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff};
unsigned long long b = 0;

memcpy(&b, a, sizeof(a));   /* needs <string.h>; fills b's low bytes on a little-endian machine */
printf("%llx\n", b);        /* needs <stdio.h> */
Or you can do it by hand, which avoids some architecture-specific issues.
I would recommend using normal integer operations (sums and shifts) rather than trying to emulate memory block ordering, which is no better than the solution above in terms of compatibility.
I think the best way to do it is using a union.
union time_u {
    uint8_t data[6];
    uint64_t timestamp;
};
Then you can use that memory space as a byte array or as a uint64_t, by referencing
union time_u var_name;
var_name.data[i]
var_name.timestamp
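A runnable sketch of this approach; note that it reads the bytes in the host's byte order, so the result differs between little- and big-endian machines (the example bytes are made up):

#include <stdio.h>
#include <stdint.h>

union time_u {
    uint8_t data[6];
    uint64_t timestamp;
};

int main(void)
{
    union time_u t = { .timestamp = 0 };   /* zero all eight bytes first */

    t.data[0] = 0x46; t.data[1] = 0x57; t.data[2] = 0x4A;
    t.data[3] = 0x5D; t.data[4] = 0x00; t.data[5] = 0x00;

    /* prints 5d4a5746 on a little-endian machine */
    printf("%llx\n", (unsigned long long)t.timestamp);
    return 0;
}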
Here is a method to convert it to 64 bits:
uint64_t convert_48_to_64(uint8_t *val_ptr)
{
    uint64_t ret = 0;
    uint8_t *ret_ptr = (uint8_t *)&ret;

    for (int i = 0; i < 6; i++) {
        ret_ptr[5 - i] = val_ptr[i];   /* reverse the six input bytes */
    }
    return ret;
}
convert_48_to_64((uint8_t *)&temp);   /* temp holds the 48-bit value */
For example, num_in_48_bit = 77340723707904. This number in 48-bit binary is:
0100 0110 0101 0111 0100 1010 0101 1101 0000 0000 0000 0000
After conversion, in 64-bit binary it is:
0000 0000 0000 0000 0000 0000 0000 0000 0101 1101 0100 1010 0101 0111 0100 0110
Let's say val_ptr stores the base address of num_in_48_bit. Since the pointer is typecast to uint8_t *, incrementing val_ptr gives you the next byte. The loop copies the value byte by byte. Note that this also takes care of network-to-host byte order.
You can use the pack option
#pragma pack(1)
or
__attribute__((packed))
depending on the compiler:
typedef struct __attribute__((packed))
{
    uint64_t u48 : 48;
} uint48_t;

uint48_t data;
memcpy(&data, six_byte_array, 6);   /* copy the six bytes into the bit-field */
uint64_t result = data.u48;
See
_int64 bit field
How can I create a 48-bit uint for bit mask
If a 32-bit integer overflows, can we use a 40-bit structure instead of a 64-bit long one?
Which C datatype can represent a 40-bit binary number?
