I am receiving 2 bytes of binary information via Serial in C.
I am receiving them in chars.
So I need to join the two chars to make an int, but I'm unsure how to do that. First of all, the first byte is in binary format and not char format, so I'm not sure how I can convert it into a usable form for my program.
Just OR them together?
x = (b1 << 8) | b2;
Make sure they're unsigned or cast accordingly (shifting signed stuff is nasty).
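For example, a minimal sketch with made-up byte values (assuming b1 arrived first and is the most significant byte):
unsigned char b1 = 0x12, b2 = 0x34;             // the two received bytes
unsigned int x = ((unsigned int)b1 << 8) | b2;  // x == 0x1234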
You can use something like this:
int my_int = char1;
my_int <<= 8;
my_int |= char2;
This assumes char1 contains the most significant byte. Switch the 1 and 2 otherwise.
Use unsigned char to avoid sign-extension problems.
val16 = char1 * 256 + char2;
For a start, it would be better to receive them in an unsigned char just so you have no issues with sign extension and the like.
If you want to combine them, you can use something like:
int val = ch1; val = val << 8 | ch2;
or:
int val = ch2; val = val << 8 | ch1;
depending on which byte arrives first (most or least significant), and assuming your system has an eight-bit char type.
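To see why a plain (possibly signed) char can bite you, here is a small sketch with made-up values: if the low byte is 0x80 or above, it sign-extends when promoted to int and wipes out the high byte on platforms where char is signed.
char ch2 = (char)0x80;                         // the received byte 0x80
int bad  = (0x12 << 8) | ch2;                  // ch2 promotes to 0xFFFFFF80 if char is signed, so bad == 0xFFFFFF80
int good = (0x12 << 8) | (unsigned char)ch2;   // good == 0x1280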
if MSB (Most Significant Byte) comes first:
unsigned char array[2];
...
int bla;
bla = array[1] | (array[0] << 8);
if LSB (Least Significant Byte) comes first:
bla = array[0] | (array[1] << 8);
I am new in C and this is for a school project. I am implementing the Skinny Block Cipher in C.
My code:
unsigned char *bits[8]; // this array holds 1 byte of data.
... call in another func to convert hex to bit.
unsigned int four = bits[4] - '0'; // value 0
unsigned int seven = bits[7] - '0'; // value 1
unsigned int six = bits[6] - '0'; // value 1
four = four ^ ~(seven | six); // eq 1;
Now, my questions:
1. Do I have to convert the char to int every time to run the bit operation? What will happen if I do it using unsigned char?
2. If I store the value of eq 1 in an unsigned int, the value is fe, which is wrong (according to an online bit calculator); on the other hand, if I store the result in an unsigned char, the value is -2, which is correct. What's the difference? I am kind of lost here.
3. bits[8] is a pointer, and I tried to do eq 1 using indexes from the bits pointer, like bits[4], etc., but VSCode throws an error and I don't understand why. Obviously, I have some gaps in my knowledge. I am using my Python knowledge to get through this.
I don't know if I am giving all the information that's needed. Hit me up for extras!
TIA.
I updated the code
unsigned char bits[9];
It converts a3 into 010100011.
unsigned char *bits[8]; // this array holds 1 byte of data.
No, it is an array of 8 pointers to unsigned char.
unsigned int four = bits[4] - '0'; // value 0
This will not work: bits[4] is a pointer, so you are subtracting the integer '0' from a pointer, which is pointer arithmetic, not a digit conversion.
If you want to keep the string representation of the number in binary form, you need to define an array of 9 chars:
char bits[9] = "10010110";
Then you can do the operations as in your code.
Do I have to convert the char to int every time to run the bit operation? What will happen if I do it using unsigned char?
If you want to keep it as a string, then yes. Alternatively, you can keep the byte as a number and test individual bits directly:
unsigned char x = 0x96;                 // 1001 0110 in binary
unsigned int four = !!(x & (1 << 4));   // tests bit 4 (counting from the least significant bit): 1
unsigned int seven = !!(x & (1 << 7));  // bit 7: 1
unsigned int six = !!(x & (1 << 6));    // bit 6: 0
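As for the eq 1 result: ~ flips every bit of the whole unsigned int, so ~(1 | 1) is 0xFFFFFFFE on a 32-bit unsigned int; storing that in an unsigned char truncates it to 0xFE, which reads as -2 if interpreted as a signed 8-bit value. If you only care about a single bit, mask the result down to one bit, e.g. (a sketch):
unsigned int four = 0, seven = 1, six = 1;
four = (four ^ ~(seven | six)) & 1u;   // ~(1 | 1) == 0xFFFFFFFE; the mask keeps only bit 0, so four == 0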
I receive a port number as 2 bytes (least significant byte first) and I want to convert it into an integer so that I can work with it. I've made this:
char buf[2]; //Where the received bytes are
char port[2];
port[0]=buf[1];
port[1]=buf[0];
int number=0;
number = (*((int *)port));
However, there's something wrong because I don't get the correct port number. Any ideas?
I receive a port number as 2 bytes (least significant byte first)
You can then do this:
int number = buf[0] | buf[1] << 8;
If you make buf an unsigned char buf[2], you can simplify it to:
number = (buf[1] << 8) + buf[0];
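If buf has to stay a plain char array, a sketch that avoids sign-extension surprises by casting each byte first:
int number = (unsigned char)buf[0] | (unsigned char)buf[1] << 8;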
I appreciate this has already been answered reasonably. However, another technique is to define a macro in your code, e.g.:
// bytes_to_int_example.cpp
// Output: port = 514
// I am assuming that the bytes need to be treated as 0-255 and combined MSB -> LSB
// This creates a macro in your code that does the conversion and can be tweaked as necessary
#define bytes_to_u16(MSB,LSB) ((((unsigned int)(unsigned char)(MSB)) & 255) << 8 | (((unsigned char)(LSB)) & 255))
// Note: #define statements do not typically have semi-colons
#include <stdio.h>
int main()
{
    char buf[2];
    // Fill buf with example numbers
    buf[0]=2; // (Least significant byte)
    buf[1]=2; // (Most significant byte)
    // If endian is other way around swap bytes!
    unsigned int port=bytes_to_u16(buf[1],buf[0]);
    printf("port = %u \n",port);
    return 0;
}
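If you would rather avoid the usual macro pitfalls (missing parentheses, double evaluation), the same conversion can be written as a function; a sketch with a hypothetical name, assuming C99 or later for static inline and <stdint.h>:
#include <stdint.h>

// Hypothetical helper: combines two bytes MSB-first, same result as the macro above.
static inline uint16_t bytes_to_u16_fn(unsigned char msb, unsigned char lsb)
{
    return (uint16_t)(((unsigned int)msb << 8) | lsb);
}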
If the least significant byte comes first (as in this question):
int number = (uint8_t)buf[0] | (uint8_t)buf[1] << 8;  // uint8_t comes from <stdint.h>
If the most significant byte comes first:
int number = (uint8_t)buf[0] << 8 | (uint8_t)buf[1];
char buf[2]; //Where the received bytes are
short number;
number = *((short*)&buf[0]);
&buf[0] takes the address of the first byte in buf.
(short*) converts it to a pointer to a two-byte integer (reading a full int here would read past the end of the two-byte buffer on a typical 4-byte int).
The leftmost * reads that two-byte integer from that memory address. Note that this only gives the right value if the host byte order matches the order of the received bytes, and it assumes your platform tolerates this kind of type punning.
If you need to swap endianness:
char buf[2]; //Where the received bytes are
short number = 0;
*((char*)&number) = buf[1];
*((char*)&number+1) = buf[0];
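If you would rather not cast pointers at all, shifting works the same regardless of the host byte order; a sketch with example values, assuming the least significant byte arrives first as in the question:
unsigned char buf[2] = {0x50, 0x00};   // e.g. port 80, least significant byte first
int number = buf[0] | (buf[1] << 8);   // number == 80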
I want to convert an unsigned int and break it into 2 chars. For example: if the integer is 1, its binary representation would be 0000 0001. I want the 0000 part in one char variable and the 0001 part in another char variable. How do I achieve this in C?
If you insist that you have sizeof(int) == 2, then:
unsigned int x = (unsigned int)2; //or any other value it happens to be
unsigned char high = (unsigned char)(x>>8);
unsigned char low = x & 0xff;
If you have eight bits total (one byte) and you are breaking it into two 4-bit values:
unsigned char x=2;// or whatever
unsigned char high = (x>>4);
unsigned char low = x & 0xf;
Shift and mask off the part of the number you want. Unsigned ints are probably four bytes, and if you wanted all four bytes, you'd just shift by 16 and 24 for the higher order bytes.
unsigned char low = myuint & 0xff;
unsigned char high = (myuint >> 8) & 0xff;
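The same pattern extends to all four bytes of a 32-bit value, as mentioned above (a sketch, assuming a 32-bit unsigned int):
unsigned char b0 = myuint & 0xff;          // least significant byte
unsigned char b1 = (myuint >> 8) & 0xff;
unsigned char b2 = (myuint >> 16) & 0xff;
unsigned char b3 = (myuint >> 24) & 0xff;  // most significant byte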
This assumes 16-bit ints; check with sizeof! On my platform ints are 32-bit, so I will use a short in this code example. Mine wins the award for most disgusting in terms of pulling apart the pointer, but it is also the clearest for me to understand.
unsigned short number = 1;
unsigned char a;
a = *((unsigned char*)(&number)); // Grab char from first byte of the pointer to the int
unsigned char b;
b = *((unsigned char*)(&number) + 1); // Offset one byte from the pointer and grab second char
One method that works is as follows:
typedef union
{
unsigned char c[sizeof(int)];
int i;
} intchar__t;
intchar__t x;
x.i = 2;
Now x.c[] (an array) will reference the integer as a series of characters, although you will have byte endian issues. Those can be addressed with appropriate #define values for the platform you are programming on. This is similar to the answer that Justin Meiners provided, but a bit cleaner.
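For instance, here is a quick sketch that uses the same union to see which byte order your machine has (assumes <stdio.h> for the printout):
#include <stdio.h>

typedef union
{
    unsigned char c[sizeof(int)];
    int i;
} intchar__t;

int main(void)
{
    intchar__t x;
    x.i = 2;
    // Little-endian: the low-order byte comes first, so c[0] == 2.
    // Big-endian: c[0] == 0 and the 2 is in the last element.
    printf("c[0] = %u\n", (unsigned)x.c[0]);
    return 0;
}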
unsigned short s = 0xFFEE;
unsigned char b1 = (s >> 8)&0xFF;
unsigned char b2 = (((s << 8)>> 8) & 0xFF);
Simplest I could think of.
int i = 1; // 2-byte integer value 0x0001
unsigned char byteLow = (i & 0x00FF);
unsigned char byteHigh = ((i & 0xFF00) >> 8);
The value in byteLow is 0x01 and the value in byteHigh is 0x00.
I am stuck with a simple problem. Here is the code:
#include <stdio.h>

int main(int argc, char **argv)
{
    char buf[2] = {0xd, 0x1f};
    unsigned int a;
    a = ((buf[1] << 8) & 0xFF00) | buf[0];
    printf("%d\n",a);
    return 0;
}
The value I need in a is 0x1FD (509), but when I run the above program the output is 0x1F0D (7949).
How can I achieve this?
EDIT:
OK, let me clarify. I am doing a project where I receive the data as shown in the code snippet; to simplify, I have declared the bytes locally. The main thing is that I want the data to be interpreted as 0x1FD (509).
The program does what you asked it to do. The source of your confusion is in the 0xd constant, which is actually 0x0d, because char is eight bits. Packing it together with 0x1f as you do should produce 0x1f0d, and it does.
You need
char buf[2] = {0xfd, 0x01};
That is, you need to "pack" the bits from right to left.
This is clearer if you pad the desired value with zeroes, so it's written as a string of complete bytes:
0x1FD = 0x01FD = (0x01 << 8) | 0xFD
A char is 8 bits, hence it is represented by two hexadecimal digits.
If you want 0x1FD in your variable a, then you should initialize the array with 0xfd and 0x01.
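Putting that together, a sketch of the corrected snippet (using unsigned char for the buffer to stay clear of sign extension):
#include <stdio.h>

int main(void)
{
    unsigned char buf[2] = {0xfd, 0x01};                 // least significant byte first
    unsigned int a = ((buf[1] << 8) & 0xFF00) | buf[0];
    printf("0x%x (%u)\n", a, a);                         // prints 0x1fd (509)
    return 0;
}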
You need to remove the empty bits:
unsigned pack(char bytes[2])
{
    char shift = 8;
    if (bytes[0] < 32)
        shift = 4;
    return (unsigned) ((bytes[1] << shift) | bytes[0]);
}
I have a char array that is really used as a byte array and not for storing text. In the array, there are two specific bytes that represent a numeric value that I need to store into an unsigned int value. The code below explains the setup.
char bytes[2];
bytes[0] = 0x0C; // For the sake of this example, I'm
bytes[1] = 0x88; // assigning random values to the char array.
unsigned int val = ???; // This needs to be the actual numeric
// value of the two bytes in the char array.
// In other words, the value should equal 0x0C88;
I can not figure out how to do this. I would assume it would involve some casting and recasting of the pointers, but I can not get this to work. How can I accomplish my end goal?
UPDATE
Thank you Martin B for the quick response; however, this doesn't work. Specifically, in my case the two bytes are 0x00 and 0xbc. Obviously what I want is 0x000000bc, but what I'm getting in my unsigned int is 0xffffffbc.
The code that was posted by Martin was my actual, original code and works fine as long as all of the bytes are less than 128 (i.e. positive signed char values).
unsigned int val = (unsigned char)bytes[0] << CHAR_BIT | (unsigned char)bytes[1]; // CHAR_BIT is defined in <limits.h>
This works if sizeof(unsigned int) >= 2 * sizeof(unsigned char) (not something guaranteed by the C standard).
Now... the interesting thing here is the order of the operators (after many years I can still only remember the precedence of +, -, * and /... shame on me :-), so I always put in as many brackets as I can). [] binds tightest. Second is the (cast). Third is << and fourth is | (if you use + instead of |, remember that + binds more tightly than <<, so you'll need brackets).
We don't need to explicitly upcast the two (unsigned char) values to (unsigned int): integral promotion takes care of one of them, and the usual arithmetic conversions take care of the other.
I'll add that if you want fewer headaches:
unsigned int val = (unsigned char)bytes[0] << CHAR_BIT;
val |= (unsigned char)bytes[1];
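To illustrate the precedence note above with made-up variables: + binds more tightly than <<, so without brackets the addition happens before the shift (a sketch):
unsigned char hi = 0x0C, lo = 0x88;
// Without brackets, hi << 8 + lo parses as hi << (8 + lo), which is not what you want
// (and shifting by more than the width of int is undefined behaviour).
unsigned int val = ((unsigned int)hi << 8) + lo;   // val == 0x0C88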
unsigned int val = (unsigned char) bytes[0]<<8 | (unsigned char) bytes[1];
The byte ordering of such a two-byte read depends on the endianness of your processor. You can do this, which will work on both big- and little-endian machines (without the ntohs it would only work on big-endian):
unsigned int val = ntohs(*(uint16_t*)bytes); // ntohs is declared in <arpa/inet.h>; uint16_t in <stdint.h>
unsigned int val = ((unsigned char)bytes[0] << 8) + (unsigned char)bytes[1];
I think this is a better way to go about it than relying on pointer aliasing:
union {unsigned asInt; char asChars[2];} conversion;
conversion.asInt = 0;
conversion.asChars[0] = 0x0C;
conversion.asChars[1] = 0x88;
unsigned val = conversion.asInt;