Buffer operations with shift and OR operations - C

I'm not very sure of my code and want to improve it.
I receive some data over SPI (8-bit communication) and store it in a buffer of 8-bit values. To use it, I want to assemble a 32-bit word. I know my first snippet will work, but I'm not sure about the second one. Can anyone confirm it?
uint8_t *regData[5];
spi_xfer(fd, "\x24\xFF\xFF\xFF\xCC", 5, regData, 5);
uint32_t regVal;
regVal = (regData[0]);
regVal += (uint32_t)(regData[1]) << 8;
regVal += (uint32_t)(regData[2]) << 16;
regVal += (uint32_t)(regData[3]) << 24;
The second one:
uint8_t *regData[5];
spi_xfer(fd, "\x24\xFF\xFF\xFF\xCC", 5, regData, 5);
uint32_t regVal;
regVal = (regData[0]) | (uint32_t)(regData[1]) << 8 | (uint32_t)(regData[2]) << 16 | (uint32_t)(regData[3]) << 24;
Thanks a lot for your help!
Brieuc

uint8_t *regData[5];
regData[] is an array of pointers. If this is intended, then to retrieve the value stored at the address held in the array, you need to dereference the pointer:
regVal = *(regData[0]);
Otherwise the operation will assign the address stored at regData[0] to regVal, rather than the value stored at that address.
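If instead a plain receive buffer of bytes was intended (which is most likely what spi_xfer expects), a minimal sketch of the corrected version could look like this; the spi_xfer call is taken verbatim from the question:

#include <stdint.h>

uint8_t regData[5];   /* plain byte buffer, not an array of pointers */
spi_xfer(fd, "\x24\xFF\xFF\xFF\xCC", 5, regData, 5);

uint32_t regVal;
regVal  = (uint32_t)regData[0];          /* least significant byte */
regVal |= (uint32_t)regData[1] << 8;
regVal |= (uint32_t)regData[2] << 16;
regVal |= (uint32_t)regData[3] << 24;    /* most significant byte */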

Related

How does this bitwise hamming(31,26) encoder work in C? (Bitmasking)

I understand what the individual operations are (such as the bitwise ANDs and ORs), but I don't understand why they have been used where they have been.
Also, my understanding is that the first set of masks is used to compute parity bits. But I don't understand why the second set of masks has been chosen or what its purpose is. Can anyone help me get my head around this, please?
rawData is the input word that is to be encoded by the hamming.c function.
Doing the encoding of a [31,26] Hamming code, the function hammingEncode() encodes a message rawData consisting of 26 message bits by inserting 5 parity bits at positions 0, 1, 3, 7 and 15 (counting bits from 0).
The purpose of the part you are asking about:
unsigned int mask1 = 0b11111111111111100000000000;
unsigned int mask2 = 0b00000000000000011111110000;
unsigned int mask3 = 0b00000000000000000000001110;
unsigned int mask4 = 0b00000000000000000000000001;
encodedData |= (rawData & mask1) << 5;
encodedData |= (rawData & mask2) << 4;
encodedData |= (rawData & mask3) << 3;
encodedData |= (rawData & mask4) << 2;
is to move the 26 message bits into the correct positions: 16-30, 8-14, 4-6 and 2 using mask1, mask2, mask3 and mask4 respectively.
After that, the parity bits are calculated and inserted at their correct positions.
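The parity-insertion step isn't shown in the question, but a minimal sketch of how it could be done (assuming even parity; this is an illustration, not the original hamming.c) is:

#include <stdint.h>

/* Illustrative only: parity bit k sits at position 2^k - 1 and covers
   every position whose 1-based index has bit k set. */
static uint32_t insertParity(uint32_t encodedData)
{
    for (int k = 0; k < 5; k++) {
        unsigned parity = 0;
        for (int pos = 1; pos <= 31; pos++) {   /* 1-based positions */
            if (pos & (1u << k))
                parity ^= (encodedData >> (pos - 1)) & 1u;
        }
        /* the parity position itself is still 0, so this XOR equals the
           parity of the covered message bits */
        encodedData |= (uint32_t)parity << ((1u << k) - 1);
    }
    return encodedData;
}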

Increment 18 bits in C on the 8051

I have been programming the 8051 for about two months now and am somewhat of a newbie to the C language. I am currently working with flash memory in order to read, write, erase, and analyze it. I am working on the write phase at the moment and one of the tasks that I need to do is specify an address location and fill that location with data then increment to the next location and fill it with complementary data. So on and so forth until I reach the end.
My dilemma is that I have 18 address bits to play with and currently have three bytes allocated for those 18 bits. Is there any way that I could combine those 18 bits into an int or unsigned int and increment like that? Or is my only option to increment the first byte, then when that byte rolls over to 0x00 increment the next byte, and when that one rolls over, increment the next?
I currently have:
void inc_address(void)
{
    P6 = address_byte1;
    P7 = address_byte2;
    P2 = address_byte3;
    P5 = data_byte;
    while (1)
    {
        P6++;
        if (P6 == 0x00) { P7++; }
        else if (P7 == 0x00) { P2++; }
        else if (P2 < 0x94) { break; }   // hex 9 is for values dealing with flash chip
        P5 = ~data_byte;
    }
}
Where address is uint32_t:
void inc_address(void)
{
    // Increment address
    address = (address + 1) & 0x0003ffff;

    // Assert address A0 to A15
    P6 = address & 0xff;
    P7 = (address >> 8) & 0xff;

    // Set least significant two bits of P2 to A16,A17
    // without modifying other bits in P2
    P2 &= 0xFC;                     // xxxxxx00
    P2 |= (address >> 16) & 0x03;   // xxxxxxAA

    // Set data
    P5 = ~data_byte;
}
However, it is not clear why the function is called inc_address but also assigns P5 with ~data_byte, which presumably asserts the data bus? It is doing something more than incrementing an address, so it is poorly and confusingly named. I suggest also that the function should take the address and data as parameters rather than using global data, as sketched below.
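A sketch of that suggestion (hypothetical name and parameters, same port mapping as above):

#include <stdint.h>

// Hypothetical refactoring: the caller owns the address counter and data byte.
void write_location(uint32_t address, uint8_t data)
{
    P6 = address & 0xff;                           // A0..A7
    P7 = (address >> 8) & 0xff;                    // A8..A15
    P2 = (P2 & 0xFC) | ((address >> 16) & 0x03);   // A16,A17 in low bits of P2
    P5 = data;                                     // assert data bus
}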
Is there any way that I could combine those 18 bits into an int or unsigned int and increment like that?
Sure. Supposing that int and unsigned int are at least 18 bits wide on your system, you can do this:
unsigned int next_address = (hi_byte << 16) + (mid_byte << 8) + low_byte + 1;
hi_byte = next_address >> 16;
mid_byte = (next_address >> 8) & 0xff;
low_byte = next_address & 0xff;
The << and >> are bitwise shift operators, and the binary & is the bitwise "and" operator.
It would be a bit safer and more portable to not make assumptions about the sizes of your types, however. To avoid that, include stdint.h, and use type uint_least32_t instead of unsigned int:
uint_least32_t next_address = ((uint_least32_t) hi_byte << 16)
                            + ((uint_least32_t) mid_byte << 8)
                            + (uint_least32_t) low_byte
                            + 1;
// ...
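Presumably the elided write-back mirrors the first snippet:

hi_byte = next_address >> 16;
mid_byte = (next_address >> 8) & 0xff;
low_byte = next_address & 0xff;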

Combining uint8_t, uint16_t and uint8_t

I have three values, a uint8_t, a uint16_t and a uint8_t, in that order. I am trying to combine them into one uint32_t without losing the order. I found a similar question here, but I got stuck with the uint16_t value in the middle.
For example:
uint8_t v1=0x01;
uint16_t v2=0x1001;
uint8_t v3=0x11;
uint32_t comb = 0x01100111;
I was thinking about splitting v2 into two separate uint8_t values, but realized there might be some easier way to solve it.
My try:
uint8_t a = v2 & 0xFF;       /* low byte of v2  */
uint8_t b = v2 >> 8;         /* high byte of v2 */
uint16_t first = ((uint16_t)v1 << 8) | b;
uint16_t end = ((uint16_t)a << 8) | v3;
uint32_t comb = ((uint32_t)first << 16) | end;
Here is your transformation written as a one-liner:
uint32_t comb = ((uint32_t)v1 << 24) | (((uint32_t)v2 << 8) | v3);
Basically, you have 8 | 16 | 8 bits building the 32-bit type. To put the first value at the head, you cast it to 32 bits and shift by 24 (32 - 8). Then you OR in the next values, shifting each to its offset (the remaining bits are filled with zeros) and casting as needed.
You use OR because, with the zero-filled gaps, no information is lost.
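As a quick self-contained check of the one-liner:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t v1 = 0x01;
    uint16_t v2 = 0x1001;
    uint8_t v3 = 0x11;

    uint32_t comb = ((uint32_t)v1 << 24) | ((uint32_t)v2 << 8) | v3;
    printf("0x%08lX\n", (unsigned long)comb);   /* prints 0x01100111 */
    return 0;
}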

What does a[0] = addr & 0xff do?

I'm currently learning from the book "The Shellcoder's Handbook". I have a strong understanding of C, but recently I came across a piece of code that I can't grasp.
Here is the piece of code:
char a[4];
unsigned int addr = 0x0806d3b0;
a[0] = addr & 0xff;
a[1] = (addr & 0xff00) >> 8;
a[2] = (addr & 0xff0000) >> 16;
a[3] = (addr) >> 24;
So the question is: what does this code do? What is addr & 0xff (and the three lines below it), and what does >> 8 do to it (I know that it divides by 2 eight times)?
PS: don't hesitate to tell me if you have ideas for the tags that I should use.
The variable addr is 32 bits of data, while each element in the array a is 8 bits. What the code does is copy the 32 bits of addr into the array a, one byte at a time.
Let's take this line:
a[1] = (addr & 0xff00) >> 8;
And then do it step by step.
addr & 0xff00 This gets the bits 8 to 15 of the value in addr, the result after the operation is 0x0000d300.
>> 8 This shifts the bits to the right, so 0x0000d300 becomes 0x000000d3.
Assign the resulting value of the mask and shift to a[1].
The code is trying to enforce endianness on the data input. Specifically, it is trying to enforce little-endian behavior on the data. Here is the explanation:
a[0] = addr & 0xff; /* gets the LSB 0xb0 */
a[1] = (addr & 0xff00) >> 8; /* gets the 2nd LSB 0xd3 */
a[2] = (addr & 0xff0000) >> 16; /* gets 2nd MSB 0x06 */
a[3] = (addr) >> 24; /* gets the MSB 0x08 */
So basically, the code masks and separates out every byte of data and stores it in the array "a" in little-endian format.
unsigned char a[4]; /* I think using unsigned char is better in this case */
unsigned int addr = 0x0806d3b0;
a[0] = addr & 0xff; /* get the least significant byte 0xb0 */
a[1] = (addr & 0xff00) >> 8; /* get the second least significant byte 0xd3 */
a[2] = (addr & 0xff0000) >> 16; /* get the second most significant byte 0x06 */
a[3] = (addr) >> 24; /* get the most significant byte 0x08 */
Apparently, the code isolates the individual bytes from addr to store them in the array a so they can be indexed. The first line
a[0] = addr & 0xff;
masks out the lowest-value byte by using 0xff as a bit mask; the subsequent lines do the same, but in addition shift the result to the rightmost position. Finally, in the last line
a[3] = (addr) >> 24;
no masking is necessary anymore, as all unnecessary information has already been discarded by the shift.
The code is effectively storing a 32-bit address in an array of four chars. As you may know, a char is one byte (8 bits). The code copies the first byte of the address, then shifts, copies the second byte, then shifts, and so on. You get the gist.
It enforces endianness, and stores the integer in little-endian format in a.
See the illustration on wikipedia.
Also, why not visualize the bit-shifting results:
#include <stdio.h>

char a[4];
unsigned int addr = 0x0806d3b0;

a[0] = addr & 0xff;
a[1] = (addr & 0xff00) >> 8;
a[2] = (addr & 0xff0000) >> 16;
a[3] = (addr) >> 24;

for (int i = 0; i < 4; i++)
{
    printf("a[%d] = %02x\t", i, (unsigned char)a[i]);
}
printf("\n");
Output:
a[0] = b0 a[1] = d3 a[2] = 06 a[3] = 08
In addition to the multiple answers given, the code has some flaws that need to be fixed to make it portable. In particular, the char type is very dangerous to use for storing values, because of its implementation-defined signedness. A very classic C bug. If the code was taken from a book, then you should read that book sceptically.
While we are at it, we can also tidy up the code, make it overly explicit to avoid potential future maintenance bugs, remove some implicit type promotions of integer literals, etc.
#include <stdint.h>
uint8_t a[4];
uint32_t addr = 0x0806d3b0UL;
a[0] = addr & 0xFFu;
a[1] = (addr >> 8) & 0xFFu;
a[2] = (addr >> 16) & 0xFFu;
a[3] = (addr >> 24) & 0xFFu;
The masks & 0xFFu are strictly speaking not needed, but they might save you from some false-positive compiler warnings about wrong integer types. Alternatively, each shift result could be cast to uint8_t, which would have been fine too.
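That alternative would look like:

a[0] = (uint8_t)addr;
a[1] = (uint8_t)(addr >> 8);
a[2] = (uint8_t)(addr >> 16);
a[3] = (uint8_t)(addr >> 24);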

Changing endianness on 3 byte integer

I am receiving a 3-byte integer, which I'm storing in an array. For now, assume the array is unsigned char myarray[3].
Normally, I would convert this into a standard int using:
int mynum = ((myarray[2] << 16) | (myarray[1] << 8) | (myarray[0]));
However, before I can do this, I need to convert the data from network to host byte ordering.
So, I change the above to this (it comes in as 0-1-2, but it's network-to-host, so 0-2-1 is what I want):
int mynum = ((myarray[1] << 16) | (myarray[2] << 8) | (myarray[0]));
However, this does not seem to work. For the life of me I can't figure this out. I've looked at it so much that at this point I think I'm fried and just confusing myself. Is what I am doing correct? Is there a better way? Would the following work?
int mynum = ((myarray[2] << 16) | (myarray[1] << 8) | (myarray[0]));
int correctnum = ntohl(mynum);
Here's an alternate idea: why not make it structured, and make it explicit what you're doing? Some of the confusion you're having may be rooted in the "I'm storing it in an array" premise. If instead you defined:
/* u8 and u32 here are shorthand typedefs for uint8_t and uint32_t */
typedef struct {
    u8 highByte;
    u8 midByte;
    u8 lowByte;
} ThreeByteInt;
To turn it into an int, you just do
u32 ThreeByteTo32(ThreeByteInt *bytes) {
    return (bytes->highByte << 16) + (bytes->midByte << 8) + (bytes->lowByte);
}
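Hypothetical usage with the question's myarray, assuming the bytes arrive most significant byte first (network order):

ThreeByteInt in = { myarray[0], myarray[1], myarray[2] };
u32 value = ThreeByteTo32(&in);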
If you receive the value in network ordering (that is, big-endian), you have this situation:
myarray[0] = most significant byte
myarray[1] = middle byte
myarray[2] = least significant byte
so this should work:
int result = (((int) myarray[0]) << 16) | (((int) myarray[1]) << 8) | ((int) myarray[2]);
Besides using structures/unions with byte-sized members, you have two other options:
Using ntohl/htonl and masking out the high byte of the 4-byte integer, before or after the conversion, with a bitwise AND.
Doing the bit-shift operations shown in the other answers.
In any case, you should not rely on side effects or shift data beyond the size of its type. A shift by 16 is beyond the width of unsigned char and can cause problems depending on the compiler, flags, and platform endianness. So always do the proper cast before the bitwise operations to make it work on any compiler/platform:
int result = (((int) myarray[0]) << 16) | (((int) myarray[1]) << 8) | ((int) myarray[2]);
Why not just receive into the first 3 bytes of a 4-byte buffer? After that you can use ntohl(), which is just a byte-swap instruction on most architectures. At some optimization levels it will be faster than simple bit shifts and ORs.
#include <string.h>      /* memcpy */
#include <stdint.h>
#include <arpa/inet.h>   /* ntohl */

union
{
    uint32_t val;
    unsigned char myarray[4];
} data;

memcpy(&data, buffer, 3);          /* network-order bytes land in myarray[0..2] */
data.myarray[3] = 0;
data.val = ntohl(data.val) >> 8;   /* shift out the zero padding byte */
Or, if you copy it into the last 3 bytes instead, no shift is needed:
memcpy(&data.myarray[1], buffer, 3);
data.myarray[0] = 0;
data.val = ntohl(data.val);
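A self-contained check of the second variant (assuming the wire carries the value 0x010203, most significant byte first):

#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    const unsigned char buffer[3] = { 0x01, 0x02, 0x03 };   /* network order */
    union { uint32_t val; unsigned char myarray[4]; } data;

    memcpy(&data.myarray[1], buffer, 3);
    data.myarray[0] = 0;
    data.val = ntohl(data.val);

    printf("0x%06lX\n", (unsigned long)data.val);   /* prints 0x010203 */
    return 0;
}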
unsigned char myarray[3] = { 1, 2, 3 };

#if LITTLE_ENDIAN   // you figure out a way to express this on your platform
int mynum = (myarray[0] << 0) | (myarray[1] << 8) | (myarray[2] << 16);
#else
int mynum = (myarray[0] << 16) | (myarray[1] << 8) | (myarray[2] << 0);
#endif

printf("%x\n", mynum);
That prints 30201, which I think is what you want. The key is to realize that you have to shift the bytes differently per platform: you can't easily use ntohl() because you don't know where to put the extra zero byte.
