USB CRC5 11-bit check function - C

I have the following data from which I want to build the crc5.
address: 0x19, endp: 0x1, and crc 0x19.
value1 = convert_lsb(0x19) >> 1;
value2 = convert_lsb(0x1);
crc = crc5_11bit_usb(value1 << 4 | (value2 >> 4));
The crc5 result according to the tool, with addr 0x19 and endp 0x1, should be 0x19.
What is wrong with the code?
If I do the same with similar data, but the endp is 0xa, the result is correct.
value1 = convert_lsb(0x3a) >> 1;
value2 = (convert_lsb(0xa));
crc = crc5_11bit_usb(value1 << 4 | (value2 >> 4));
The data comes from a tool that analyses USB traffic, and I want to build the CRC5 manually.
Example code is at https://godbolt.org/z/Wqv1Mxnxj

The data are read from left to right, therefore the endpoint comes before the address and not at the end.
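For reference, the token CRC-5 defined by the USB spec (generator polynomial x^5 + x^2 + 1, shift register seeded with all ones, bits fed LSB first, result complemented) can be computed bit by bit. The following is a minimal sketch, not the crc5_11bit_usb() from the Godbolt link; the function name usb_crc5_token is made up, and it assumes the 7 address bits sit in the low bits of the 11-bit input with the 4 endpoint bits above them. For addr 0x19 and endp 0x1 it yields 0x19, matching the tool.

#include <stdint.h>
#include <stdio.h>

static uint8_t usb_crc5_token(uint8_t addr, uint8_t endp)
{
    uint16_t data = (addr & 0x7F) | ((uint16_t)(endp & 0x0F) << 7); /* 11 bits, address first */
    uint8_t crc = 0x1F;                        /* shift register seeded with all ones */

    for (int i = 0; i < 11; i++) {
        uint8_t bit = (data >> i) & 1;         /* feed LSB first, as on the wire */
        if (bit ^ (crc & 1))
            crc = (crc >> 1) ^ 0x14;           /* 0x14 = bit-reversed x^5 + x^2 + 1 */
        else
            crc >>= 1;
    }
    return (~crc) & 0x1F;                      /* USB sends the complement of the residual */
}

int main(void)
{
    printf("%#x\n", (unsigned)usb_crc5_token(0x19, 0x1));   /* prints 0x19 */
    return 0;
}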

Can anyone explain to me what this shifting means under this circumstance: uint16_t register0 = (instruction >> 9) & 0x7

I'm trying to write a VM (LC-3), and on this ADD instruction I encountered this statement. Basically "register0" is the DR register, but I don't really understand what is actually being shifted, why by 9, and what the AND with the 0x7 value does.
|15 14 13 12|11 10  9| 8  7  6| 5| 4  3| 2  1  0|
|  0 0 0 1  |   DR   |  SR1   | 0| 0 0 |  SR2   |
Could anyone please explain it to me in detail?
ADD {
    /* destination register (DR) */
    uint16_t r0 = (instr >> 9) & 0x7;
    /* first operand (SR1) */
    uint16_t r1 = (instr >> 6) & 0x7;
    /* whether we are in immediate mode */
    uint16_t imm_flag = (instr >> 5) & 0x1;

    if (imm_flag) {
        uint16_t imm5 = sign_extend(instr & 0x1F, 5);
        reg[r0] = reg[r1] + imm5;
    } else {
        uint16_t r2 = instr & 0x7;
        reg[r0] = reg[r1] + reg[r2];
    }

    update_flags(r0);
}
What it's doing is isolating the 3 bits that represent the DR register so they become a standalone number.
Let's say the entire sequence looks like this:
1110101101101011
    ^^^
    DR
Shifting 9 bits right gives this:
1110101
and & 0x7 (bitwise AND) isolates the 3 lowest bits:
101
Similar operations are performed to isolate the values of SR1 and the immediate mode flag. Depending on that flag, SR2 may also be required, but as it's already in the lowest 3 bits, no shifting is needed.
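To see those extractions in isolation, here is a tiny standalone sketch using the example word above (0xEB6B is just 1110101101101011 written in hex):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t instr = 0xEB6B;                 /* 1110101101101011 */

    uint16_t dr       = (instr >> 9) & 0x7;  /* bits 11..9 -> 101 = 5 */
    uint16_t sr1      = (instr >> 6) & 0x7;  /* bits 8..6             */
    uint16_t imm_flag = (instr >> 5) & 0x1;  /* bit 5                 */

    printf("DR=%u SR1=%u imm_flag=%u\n",
           (unsigned)dr, (unsigned)sr1, (unsigned)imm_flag);
    return 0;
}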

Adding bits at specific indexes for a uint8_t block

So I have a pointer to a uint8_t array in the form of:
uint8_t* array; // 64-bit array
I need to modify this array by shifting bits to the right and inserting a bit of 0 or 1 at indexes that are powers of 2, thereby generating a 72-bit array.
uint8_t newArr[9];
What is the best way to modify the array so I can add the bits at the specific places I computed? I thought of converting the array to a char array and then adding the bits one by one. However, is there a faster and easier method than this?
So if I have a pointer to a bit array in the form of uint8_t like:
000100001 11001111 01101101 11000001 11100000 00101111 11111001 10010010
I would need to modify it into a uint8_t[9] array, inserting the bits I have specified at positions 0, 1, 2, 4, 8, 16, 32, 64 of the new array, so that it looks like this (the expected result shown below is wrong):
00000000 11001111 11001111 11001111 11001111 01101101 01101101 10010010 00100001
But I don't know how to shift a particular bit to the right without shifting all the bits, for example shifting all bits starting at index 2 to the right by one, and then all bits starting from index 4 to the right by one.
Say you start with the following:
 src[0]   src[1]   src[2]   src[3]   src[4]   src[5]   src[6]   src[7]
-------- -------- -------- -------- -------- -------- -------- --------
77777777 66666666 55555555 44444444 33333333 22222222 11111111
76543210 76543210 76543210 76543210 76543210 76543210 76543210 76543210
aaaaaaaa bbbbbbbb cccccccc dddddddd eeeeeeee ffffffff gggggggg hhhhhhhh
(The numeric rows give each bit's position in octal, counted from the right: the upper row is the high octal digit, the lower row the low digit.)
You want to insert at the following positions (octal):
0, 1, 2, 4, 10, 20, 40, 100
That means you want the following:
 dst[0]   dst[1]   dst[2]   dst[3]   dst[4]   dst[5]   dst[6]   dst[7]   dst[8]
-------- -------- -------- -------- -------- -------- -------- -------- --------
11111111
00000000 77777777 66666666 55555555 44444444 33333333 22222222 11111111
76543210 76543210 76543210 76543210 76543210 76543210 76543210 76543210 76543210
aaaaaaa0 abbbbbbb bccccccc cddddddd deeeeee0 eeffffff ffggggg0 ggghhhh0 hhh0h000
So,
dst[0] = src[0] & 0xFE;
dst[1] = ((src[0]       ) << 7) | ((src[1]       ) >> 1);
dst[2] = ((src[1]       ) << 7) | ((src[2]       ) >> 1);
dst[3] = ((src[2]       ) << 7) | ((src[3]       ) >> 1);
dst[4] = ((src[3]       ) << 7) | ((src[4] & 0xFC) >> 1);
dst[5] = ((src[4]       ) << 6) | ((src[5]       ) >> 2);
dst[6] = ((src[5]       ) << 6) | ((src[6] & 0xF8) >> 2);
dst[7] = ((src[6]       ) << 5) | ((src[7] & 0xF0) >> 3);
dst[8] = ((src[7] & 0x0E) << 4) | ((src[7] & 0x01) << 3);
(Useless masks omitted.)
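Wrapped in a function with a small test driver (the function name insert_zero_bits and the sample byte values are made up for illustration), the assignments above could be used like this:

#include <stdint.h>
#include <stdio.h>

/* Copy src[0..7] into dst[0..8], inserting a 0 bit at positions
   0, 1, 2, 4, 8, 16, 32 and 64 (counted from the right), as derived above. */
static void insert_zero_bits(const uint8_t src[8], uint8_t dst[9])
{
    dst[0] = src[0] & 0xFE;
    dst[1] = (uint8_t)((src[0] << 7) | (src[1] >> 1));
    dst[2] = (uint8_t)((src[1] << 7) | (src[2] >> 1));
    dst[3] = (uint8_t)((src[2] << 7) | (src[3] >> 1));
    dst[4] = (uint8_t)((src[3] << 7) | ((src[4] & 0xFC) >> 1));
    dst[5] = (uint8_t)((src[4] << 6) | (src[5] >> 2));
    dst[6] = (uint8_t)((src[5] << 6) | ((src[6] & 0xF8) >> 2));
    dst[7] = (uint8_t)((src[6] << 5) | ((src[7] & 0xF0) >> 3));
    dst[8] = (uint8_t)(((src[7] & 0x0E) << 4) | ((src[7] & 0x01) << 3));
}

int main(void)
{
    const uint8_t src[8] = { 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF, 0x11, 0x22 };
    uint8_t dst[9];

    insert_zero_bits(src, dst);
    for (int i = 0; i < 9; i++)
        printf("%02X ", dst[i]);
    printf("\n");
    return 0;
}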
You can treat the input array as a 64-bit int to make it faster. Suppose the input and output values are like this
array = aaaaaaaa bbbbbbbb cccccccc dddddddd eeeeeeee ffffffff gggggggg hhhhhhhh
newArr = aaaaaaa0 abbbbbbb bccccccc cddddddd deeeeee0 eeffffff ffggggg0 ggghhhh0 hhh0h000
Then you can get the desired result this way:
uint64_t src = htobe64(*(uint64_t*)array); // operate on big endian
uint64_t* dst = (uint64_t*)newArr;
*dst = (src & 0xFE00'0000'0000'0000) >> 0; // aaaaaaa
*dst |= (src & 0x01FF'FFFF'FC00'0000) >> 1; // abbbbbbbbccccccccddddddddeeeeee
*dst |= (src & 0x0000'0000'03FF'F800) >> 2; // eeffffffffggggg
*dst |= (src & 0x0000'0000'0000'07F0) >> 3; // ggghhhh
*dst = be64toh(*dst); // convert back to the native endian
newArr[8] = ((array[7] & 0x0E) << 4) | ((array[7] & 0x01) << 3); // hhh0h000
To avoid strict-aliasing problems you can use memcpy to copy from array into src and from dst into newArr.
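For example, a memcpy-based variant of the snippet above (same masks, no pointer casts; on glibc the htobe64/be64toh helpers come from <endian.h>) might look like this:

#include <stdint.h>
#include <string.h>
#include <endian.h>                        /* htobe64 / be64toh on glibc */

uint64_t src, tmp;

memcpy(&src, array, sizeof src);           /* read the 8 input bytes without pointer casts */
src = htobe64(src);                        /* operate on big endian */

tmp  = (src & 0xFE00000000000000) >> 0;    /* aaaaaaa                         */
tmp |= (src & 0x01FFFFFFFC000000) >> 1;    /* abbbbbbbbccccccccddddddddeeeeee */
tmp |= (src & 0x0000000003FFF800) >> 2;    /* eeffffffffggggg                 */
tmp |= (src & 0x00000000000007F0) >> 3;    /* ggghhhh                         */
tmp  = be64toh(tmp);                       /* back to the native endian       */

memcpy(newArr, &tmp, sizeof tmp);          /* write the first 8 output bytes  */
newArr[8] = ((array[7] & 0x0E) << 4) | ((array[7] & 0x01) << 3);  /* hhh0h000 */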
Of course, you may want to align the input and output arrays for better performance; on some architectures aligned access is also a hard requirement:
#ifdef _MSC_VER
__declspec(align(8)) uint8_t array[8]; // 64-bit array
__declspec(align(8)) uint8_t newArr[9];
#else
uint8_t array[8] __attribute__((aligned(8))); // 64-bit array
uint8_t newArr[9] __attribute__((aligned(8)));
#endif
On modern x86 with BMI2 you can use the bit-deposit instruction (PDEP) to build the first 8 bytes directly:
#include <immintrin.h>   // _pdep_u64 / _pdep_u32; compile with -mbmi2 on GCC/Clang

uint64_t src, dst;
memcpy(&src, array, sizeof src);   // avoid strict aliasing
src = htobe64(src);                // operate on big endian

dst = _pdep_u64(src >> 4,
    // aaaaaaa0 abbbbbbb bccccccc cddddddd deeeeee0 eeffffff ffggggg0 ggghhhh0
    0b11111110'11111111'11111111'11111111'11111110'11111111'11111110'11111110);
// hhh0h000
newArr[8] = _pdep_u32((uint32_t)src, 0b11101000);

dst = be64toh(dst);                // convert back to the native endian
memcpy(newArr, &dst, sizeof dst);  // avoid strict aliasing

C weird function variable assignment

I found this code while I was learning how to make a Virtual Machine. But I haven't got a clue what this function does. Do any of you know what this function is doing?
void decode( int instr )
{
    instrNum = (instr & 0xF000) >> 12;
    reg1     = (instr & 0xF00 ) >>  8;
    reg2     = (instr & 0xF0  ) >>  4;
    reg3     = (instr & 0xF   );
    imm      = (instr & 0xFF  );
}
The variable instr = 1.
The function is saving specific sets of 4 bits (called nibbles) from the variable instr into other variables instrNum, reg1, etc. (these other variables must have global scope, as they're not defined here).
Consider, for example, if instr was 0x1234:

instrNum = (0x1234 & 0xF000) >> 12
         = (0x1000) >> 12
         = 1

reg1     = (0x1234 & 0xF00) >> 8
         = (0x0200) >> 8
         = 2

reg2     = (0x1234 & 0xF0) >> 4
         = (0x0030) >> 4
         = 3

reg3     = (0x1234 & 0xF)
         = (0x0004)
         = 4

imm      = (0x1234 & 0xFF)
         = (0x0034)
         = 52
So it's taking each nibble of the variable instr and saving it into a separate variable; the last variable, imm, gets the whole low byte. & and >> are bitwise operators: AND for separating out bits, and the right-shift operator for moving them down.
Why it's saving these is anyone's guess; we would need to know what type those variables are and what they're used for, but that's what is happening anyway.
Those are bit operations, which are often used to compactly store some flags within a single integer. This function "reads" bits from the argument instr and writes the results to other fields.
This function seems to decode an instruction instr into a 4-bit instruction code (instrNum) and up to three 4-bit register codes (reg1 to reg3). In your virtual machine there also seems to be an encoding for immediate 8-bit operands (imm). Here is an illustration of my guess at the VM's 16-bit instruction format:
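Reconstructed from the shifts and masks in decode(), that layout would be:

|15 14 13 12|11 10  9  8| 7  6  5  4| 3  2  1  0|
| instrNum  |   reg1    |   reg2    |   reg3    |
|           |           |       imm (8 bits)    |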

Understanding MSB and LSB

I am working on converting a program that runs on a specific microcontroller and adapting it to run on the Raspberry Pi. I have successfully been able to pull values from the sensor I have been working with, but now I have run into a problem, and I think it is caused by a few lines of code I am having trouble understanding. I have read up on what they are but am still scratching my head. I believe the code below is supposed to modify the number that gets stored in the x, y, z variables; however, I don't think this is happening in my current program. Also, I had to change byte to an int to get the program to compile without errors. This is the unmodified code from the original program I have converted. Can someone tell me if this is even modifying the number in any way?
void getGyroValues () {
    byte MSB, LSB;

    MSB = readI2C(0x29);
    LSB = readI2C(0x28);
    x = ((MSB << 8) | LSB);

    MSB = readI2C(0x2B);
    LSB = readI2C(0x2A);
    y = ((MSB << 8) | LSB);

    MSB = readI2C(0x2D);
    LSB = readI2C(0x2C);
    z = ((MSB << 8) | LSB);
}
Here is the original readI2C function:
int readI2C (byte regAddr) {
    Wire.beginTransmission(Addr);
    Wire.write(regAddr);           // Register address to read
    Wire.endTransmission();        // Terminate request
    Wire.requestFrom(Addr, 1);     // Read a byte
    while(!Wire.available()) { };  // Wait for receipt
    return(Wire.read());           // Get result
}
I2C is a 2-wire protocol used to talk to low-speed peripherals.
Your sensor should be connected over the I2C bus to your CPU. And you're reading 3 values from the sensor - x, y and z. The values for these are accessible from the sensor as 6 x 8-bit registers.
x - Addresses 0x28, 0x29
y - Addresses 0x2A, 0x2B
z - Addresses 0x2C, 0x2D
readI2C(), as the function name implies, reads a byte of data from a given register address on your sensor and returns it. The code in readI2C() depends on how your device's I2C controller is set up.
A byte is 8 bits of data. The MSB (most significant byte) and LSB (least significant byte) here each denote 8 bits read over I2C.
It looks like you're interested in 16-bit data (for x, y and z). To construct the 16-bit value from the two 8-bit pieces, you shift the MSB left by 8 bits and then perform a bitwise OR with the LSB.
For example:
Let us assume: MSB = 0x45 LSB = 0x89
MSB << 8 = 0x4500
(MSB << 8) | LSB = 0x4589
Look at my comments inline as well:
void getGyroValues () {
byte MSB, LSB;
MSB = readI2C(0x29);
LSB = readI2C(0x28);
// Shift the value in MSB left by 8 bits and OR with the 8-bits of LSB
// And store this result in x
x = ((MSB << 8) | LSB);
MSB = readI2C(0x2B);
LSB = readI2C(0x2A);
// Do the same as above, but store the value in y
y = ((MSB << 8) | LSB);
MSB = readI2C(0x2D);
LSB = readI2C(0x2C);
// Do the same as above, but store the value in z
z = ((MSB << 8) | LSB);
}
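Outside the Arduino environment (for example on the Raspberry Pi) the byte type does not exist, which is why the code would not compile unchanged. A sketch of the same function using <stdint.h> types, assuming readI2C() has already been ported and that x, y and z are signed 16-bit globals (an assumption; check your sensor's datasheet), might look like this:

#include <stdint.h>

extern int readI2C(uint8_t regAddr);  /* your ported I2C read, returns one byte */

int16_t x, y, z;                      /* assumed: the axes are 16-bit two's-complement values */

void getGyroValues(void)
{
    uint8_t MSB, LSB;

    MSB = readI2C(0x29);
    LSB = readI2C(0x28);
    x = (int16_t)(((uint16_t)MSB << 8) | LSB);

    MSB = readI2C(0x2B);
    LSB = readI2C(0x2A);
    y = (int16_t)(((uint16_t)MSB << 8) | LSB);

    MSB = readI2C(0x2D);
    LSB = readI2C(0x2C);
    z = (int16_t)(((uint16_t)MSB << 8) | LSB);
}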

Convert Bytes to Int / uint in C

I have an unsigned char array[248]; filled with bytes, like 2F AF FF 00 EB AB CD EF ...
This array is my byte stream, which I use as a buffer for the data I receive from the UART (RS232).
Now I want to convert the bytes back to my uint16's and int32's.
In C# I used the BitConverter class to do this, e.g.:
byte[] bytes = { 0x0A, 0xAB, 0xCD, 0x25, /* ... */ };
int myint1 = BitConverter.ToInt32(bytes, 0);
int myint2 = BitConverter.ToInt32(bytes, 4);
int myint3 = BitConverter.ToInt32(bytes, 8);
int myint4 = BitConverter.ToInt32(bytes, 12);
//...
Console.WriteLine("int: {0}", myint1); // output the data...
Is there a similar function in C? (No .NET; I use the Keil compiler because the code runs on a microcontroller.)
With Regards
Sam
There's no standard function to do it for you in C. You'll have to assemble the bytes back into your 16- and 32-bit integers yourself. Be careful about endianness!
Here's a simple little-endian example:
extern uint8_t *bytes;
uint32_t myInt1 = bytes[0] + ((uint32_t)bytes[1] << 8) + ((uint32_t)bytes[2] << 16) + ((uint32_t)bytes[3] << 24);
For a big-endian system, it's just the opposite order:
uint32_t myInt1 = ((uint32_t)bytes[0] << 24) + ((uint32_t)bytes[1] << 16) + ((uint32_t)bytes[2] << 8) + bytes[3];
You might be able to get away with the following, if you're careful about alignment issues:
uint32_t myInt1 = *(uint32_t *)bytes;
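A memcpy into a local variable avoids both the alignment and the strict-aliasing concerns, although the byte order is then still whatever the buffer was filled with relative to the host. A small sketch (the helper name read_u32_host is made up):

#include <stdint.h>
#include <string.h>

uint32_t read_u32_host(const uint8_t *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof v);   /* bytes are taken in host byte order */
    return v;
}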
Yes there is. Assume your bytes are in:
uint8_t bytes[N] = { /* whatever */ };
We know that a 16-bit integer is just two 8-bit integers concatenated, i.e., the high byte is multiplied by 256, or equivalently shifted left by 8:
uint16_t sixteen[N/2];

for (size_t i = 0; i < N; i += 2)
    sixteen[i/2] = bytes[i] | (uint16_t)bytes[i+1] << 8;
// assuming you have read your bytes little-endian
Similarly for 32 bits:
uint32_t thirty_two[N/4];

for (size_t i = 0; i < N; i += 4)
    thirty_two[i/4] = bytes[i] | (uint32_t)bytes[i+1] << 8
                    | (uint32_t)bytes[i+2] << 16 | (uint32_t)bytes[i+3] << 24;
// same assumption
If the bytes are read big-endian, of course you reverse the order:
bytes[i+1] | (uint16_t)bytes[i] << 8
and
bytes[i+3] | (uint32_t)bytes[i+2] << 8
| (uint32_t)bytes[i+1] << 16 | (uint32_t)bytes[i] << 24
Note that there's a difference between the endian-ness in the stored integer and the endian-ness of the running architecture. The endian-ness referred to in this answer is of the stored integer, i.e., the contents of bytes. The solutions are independent of the endian-ness of the running architecture since endian-ness is taken care of when shifting.
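As a tiny illustration of that distinction (the byte values are arbitrary): the same two buffer bytes decode to different numbers depending on the convention the sender used, yet the expressions behave identically whether the host CPU is little- or big-endian:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t bytes[2] = { 0x34, 0x12 };

    /* value was stored little-endian (low byte first) */
    uint16_t le = bytes[0] | (uint16_t)bytes[1] << 8;   /* 0x1234 */

    /* value was stored big-endian (high byte first) */
    uint16_t be = bytes[1] | (uint16_t)bytes[0] << 8;   /* 0x3412 */

    printf("le = 0x%04X, be = 0x%04X\n", (unsigned)le, (unsigned)be);
    return 0;
}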
In the case of little-endian data, can't you just use memcpy?
memcpy((char*)&myint1, &aesData.inputData[startindex], length);
char letter = 'A';
uint64_t filter = (unsigned char)letter;

filter = (filter << 8)  | filter;
filter = (filter << 16) | filter;
filter = (filter << 32) | filter;
printf("filter: %#" PRIx64 " \n", filter);  // PRIx64 from <inttypes.h>
result: "filter: 0x4141414141414141"
