Combining uint8_t, uint16_t and uint8_t - c

I have three values, a uint8_t, a uint16_t and another uint8_t, in that order. I am trying to combine them into one uint32_t without losing the order. I found a similar question here, but I got stuck with the uint16_t value in the middle.
For example:
uint8_t v1=0x01;
uint16_t v2=0x1001;
uint8_t v3=0x11;
uint32_t comb = 0x01100111;
I was thinking about splitting v2 into two separate uint8_t values, but realized there might be an easier way to solve it.
My try:
uint8_t a = v2 & 0xFF;   /* low byte of v2  */
uint8_t b = v2 >> 8;     /* high byte of v2 */
uint16_t first = ((uint16_t)v1 << 8) | b;
uint16_t end   = ((uint16_t)a << 8) | v3;
uint32_t comb  = ((uint32_t)first << 16) | end;

The transformation you describe can be written as a one-liner:
uint32_t comb = ((uint32_t)v1 << 24) | (((uint32_t)v2 << 8) | v3);
Basically, you have 8 | 16 | 8 bits building the 32-bit type. To put the first value at the head, you cast it to 32 bits and shift left by 24 (32 - 8). Then you OR in the remaining values at their offsets: v2 shifted left by 8, and v3 at the bottom.
You use OR because the shifted operands never overlap, so no information is lost.

C set 3 bits for a particular number

I am trying to understand the masking concept and want to set bits 24, 25 and 26 of a uint32_t number in C.
For example, I have:
uint32_t data = 0;
I am taking a uint8_t input from the user which can only be 3 or 4 (binary 011 or 100).
I want to set the value 011 or 100 in bits 24, 25 and 26 of the data variable without disturbing the other bits.
Thanks.
To set bits 24, 25, and 26 of an integer without modifying the other bits, you can use this pattern:
data = (data & ~((uint32_t)7 << 24)) | ((uint32_t)(newBitValues & 7) << 24);
The first & operation clears those three bits. The second & ensures the new value is in the range 0 to 7. We then shift it left by 24 bits and use | to merge it into the final result.
I have some uint32_t casts just to ensure that this code works properly on systems where int has fewer than 32 bits, but you probably won't need those unless you are programming embedded systems.
A more general approach: a macro and a function. Both are equally efficient, since optimizing compilers do a really good job here. The macro sets n bits of d at position s to nd; the function takes its parameters in the same order.
#define MASK(n) ((1ULL << (n)) - 1)
#define SMASK(n,s) (~(MASK(n) << (s)))
#define NEWDATA(d,n,s) (((d) & MASK(n)) << (s))
#define SETBITS(d,nd,n,s) (((d) & SMASK(n,s)) | NEWDATA(nd,n,s))
uint32_t setBits(uint32_t data, uint32_t newBitValues, unsigned nbits, unsigned startbit)
{
    uint32_t mask = (1UL << nbits) - 1;
    uint32_t smask = ~(mask << startbit);

    data = (data & smask) | ((newBitValues & mask) << startbit);
    return data;
}

Swap first and last 5 bits in a 16-bit number

I have some C/C++ code where I have a 16-bit number (uint16_t), and I need to swap the first 5 bits with the last 5 bits, keeping their left-to-right order within each block of 5 bits. The middle 6 bits need to remain intact. I am not great at bitwise maths/operations, so help would be appreciated!
Conceptually speaking, the switching of positions would look like:
ABCDEFGHIJKLMNOP becomes LMNOPFGHIJKABCDE
or more literally...
1010100000001010 becomes 0101000000010101.
Any help would be much appreciated!
First, you should check whether the library you use doesn't already have an RGB-BGR swap for R5G6B5 pixels.
Here is a literal translation of what you wrote in your question. It is probably too slow for real-time video:
uint16_t rgbswap(uint16_t in) {
    uint16_t r = (in >> 11) & 0b011111;
    uint16_t g = (in >> 5)  & 0b111111;
    uint16_t b = (in >> 0)  & 0b011111;
    return b << 11 | g << 5 | r << 0;
}
Instead of breaking the input into three separate R, G and B components, you can work on R and B in parallel by shifting them into the higher bits:
uint16_t rgb2bgr(uint16_t in)
{
    uint16_t r0b = in & 0b1111100000011111;
    /* Cast before shifting: r0b << 22 would overflow a 32-bit int. */
    uint16_t b0r = (((uint32_t)r0b << 22) | r0b) >> 11;
    return b0r | (in & 0b11111100000);
}
Another alternative is to use multiplication to swap R and B:
uint16_t rgb2bgr_2(uint16_t in)
{
    uint16_t r0b = in & 0b1111100000011111;
    /* Do the multiply in unsigned 32-bit arithmetic to avoid overflow. */
    uint16_t b0r = ((uint32_t)r0b * 0b1000000000000000000000100000) >> 16;
    return b0r | (in & 0b11111100000);
}
This is basically the multiply-and-shift technique, which is useful for extracting bits or moving them around.
You can check the compiled result on Godbolt to see that the multiplication method produces shorter output, but it is only a win if you have a fast multiplier.

Buffer operations with shift and or operations

I'm not very sure of my code and want to improve it.
I receive some data over SPI (8-bit communication) and store it into a buffer of 8-bit values. To use it, I want to build a 32-bit word. I know my first code will work, but I'm not sure about the second one; can anyone confirm it?
uint8_t *regData[5];
spi_xfer(fd, "\x24\xFF\xFF\xFF\xCC", 5, regData, 5);
uint32_t regVal;
regVal = (regData[0]);
regVal += (uint32_t)(regData[1]) << 8;
regVal += (uint32_t)(regData[2]) << 16;
regVal += (uint32_t)(regData[3]) << 24;
The second one:
uint8_t *regData[5];
spi_xfer(fd, "\x24\xFF\xFF\xFF\xCC", 5, regData, 5);
uint32_t regVal;
regVal = (regData[0]) | (uint32_t)(regData[1]) << 8 | (uint32_t)(regData[2]) << 16 | (uint32_t)(regData[3]) << 24;
Thanks a lot for your help !
Brieuc
uint8_t *regData[5];
regData[] is an array of pointers. If this is intended, then to retrieve the value stored at a pointer in the array you need to dereference it:
regVal = *(regData[0]);
Otherwise the operation assigns the address stored in regData[0] to regVal, rather than the value stored at that address. If you simply meant a buffer of bytes, declare it as uint8_t regData[5]; instead.

Changing endianness on 3 byte integer

I am receiving a 3-byte integer, which I'm storing in an array. For now, assume the array is unsigned char myarray[3]
Normally, I would convert this into a standard int using:
int mynum = ((myarray[2] << 16) | (myarray[1] << 8) | (myarray[0]));
However, before I can do this, I need to convert the data from network to host byte ordering.
So, I change the above to (it comes in 0-1-2, but it's n to h, so 0-2-1 is what I want):
int mynum = ((myarray[1] << 16) | (myarray[2] << 8) | (myarray[0]));
However, this does not seem to work. For the life of me, I can't figure this out. I've looked at it so much that at this point I think I'm fried and just confusing myself. Is what I am doing correct? Is there a better way? Would the following work?
int mynum = ((myarray[2] << 16) | (myarray[1] << 8) | (myarray[0]));
int correctnum = ntohl(mynum);
Here's an alternate idea: why not make it structured and explicit what you're doing? Some of the confusion you're having may be rooted in the "I'm storing in an array" premise. If instead you defined:
typedef struct {
    u8 highByte;
    u8 midByte;
    u8 lowByte;
} ThreeByteInt;
To turn it into an int, you just do:
u32 ThreeByteTo32(ThreeByteInt *bytes) {
    return (bytes->highByte << 16) + (bytes->midByte << 8) + (bytes->lowByte);
}
If you receive the value in network ordering (that is, big endian), you have this situation:
myarray[0] = most significant byte
myarray[1] = middle byte
myarray[2] = least significant byte
so this should work:
int result = (((int) myarray[0]) << 16) | (((int) myarray[1]) << 8) | ((int) myarray[2]);
Besides using structures / unions with byte-sized members, you have two other options:
Use ntohl / htonl and mask out the high byte of the 4-byte integer with a bitwise AND, before or after the conversion.
Do the bit-shift operations shown in the other answers.
In any case, you should not shift data beyond the width of its type. An unsigned char is promoted to int before the shift, so shifting by 16 is fine where int is 32 bits, but it is undefined behaviour where int is only 16 bits. So always cast before the bitwise operations to make the code work on any compiler / platform:
int result = (((int) myarray[0]) << 16) | (((int) myarray[1]) << 8) | ((int) myarray[2]);
Why not just receive into the top 3 bytes of a 4-byte buffer? After that you can use ntohl, which is just a byte-swap instruction on most architectures; at some optimization levels it will be faster than plain shifts and ORs.
union
{
    int32_t val;
    unsigned char myarray[4];
} data;

memcpy(&data, buffer, 3);  /* bytes land in the most significant positions */
data.myarray[3] = 0;
data.val = ntohl(data.val) >> 8;

Or, if you copy into the bottom 3 bytes instead, no shift is needed:
memcpy(&data.myarray[1], buffer, 3);
data.myarray[0] = 0;
data.val = ntohl(data.val);
unsigned char myarray[3] = { 1, 2, 3 };
# if LITTLE_ENDIAN // you figure out a way to express this on your platform
int mynum = (myarray[0] << 0) | (myarray[1] << 8) | (myarray[2] << 16);
# else
int mynum = (myarray[0] << 16) | (myarray[1] << 8) | (myarray[2] << 0);
# endif
printf("%x\n", mynum);
That prints 30201 which I think is what you want. The key is to realize that you have to shift the bytes differently per-platform: you can't easily use ntohl() because you don't know where to put the extra zero byte.

Store an integer in an array where the elements represent 1 byte of the value

I'm using AES to encrypt some data that I'm going to send in a packet. I need to store an integer in an array of 8 bit elements. To make this clear, my array is declared as:
uint8_t in[16] = {0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00};
I need to be able to store an integer in this array and then easily retrieve the data in the receiving client. Is there an easy way to accomplish this?
This is usually achieved via bit-shifting:
int i = 42;
in[0] = i & 0xff;
in[1] = (i >> 8) & 0xff;
in[2] = (i >> 16) & 0xff;
in[3] = (i >> 24) & 0xff;
Note that you cannot always be guaranteed that an int is four bytes. However, it's easy enough to turn the above code into a loop, based on sizeof i.
Retrieving the integer works as follows (the casts keep the top byte's shift within unsigned arithmetic, since in[3] << 24 can overflow a signed int):
int i = (int)((uint32_t)in[0] | ((uint32_t)in[1] << 8) | ((uint32_t)in[2] << 16) | ((uint32_t)in[3] << 24));
Of course, if you are about to encrypt this with AES, you need to give some thought to a sensible padding algorithm. Currently you look like you're heading towards zero-padding, which is far from optimal.
