Calculate CRC with words (16-bit) as a base variable - c

Is there a simple CRC algorithm based on a lookup table, but with 16-bit words entering the algorithm instead of bytes?
For example, this algorithm works with bytes:
#include <stdint.h>
const uint16_t wTable_CRC16_Modbus[256] = {
0x0000, 0xC0C1, 0xC181, 0x0140, 0xC301, 0x03C0, 0x0280, 0xC241,
0xC601, 0x06C0, 0x0780, 0xC741, 0x0500, 0xC5C1, 0xC481, 0x0440,
0xCC01, 0x0CC0, 0x0D80, 0xCD41, 0x0F00, 0xCFC1, 0xCE81, 0x0E40,
0x0A00, 0xCAC1, 0xCB81, 0x0B40, 0xC901, 0x09C0, 0x0880, 0xC841,
0xD801, 0x18C0, 0x1980, 0xD941, 0x1B00, 0xDBC1, 0xDA81, 0x1A40,
0x1E00, 0xDEC1, 0xDF81, 0x1F40, 0xDD01, 0x1DC0, 0x1C80, 0xDC41,
0x1400, 0xD4C1, 0xD581, 0x1540, 0xD701, 0x17C0, 0x1680, 0xD641,
0xD201, 0x12C0, 0x1380, 0xD341, 0x1100, 0xD1C1, 0xD081, 0x1040,
0xF001, 0x30C0, 0x3180, 0xF141, 0x3300, 0xF3C1, 0xF281, 0x3240,
0x3600, 0xF6C1, 0xF781, 0x3740, 0xF501, 0x35C0, 0x3480, 0xF441,
0x3C00, 0xFCC1, 0xFD81, 0x3D40, 0xFF01, 0x3FC0, 0x3E80, 0xFE41,
0xFA01, 0x3AC0, 0x3B80, 0xFB41, 0x3900, 0xF9C1, 0xF881, 0x3840,
0x2800, 0xE8C1, 0xE981, 0x2940, 0xEB01, 0x2BC0, 0x2A80, 0xEA41,
0xEE01, 0x2EC0, 0x2F80, 0xEF41, 0x2D00, 0xEDC1, 0xEC81, 0x2C40,
0xE401, 0x24C0, 0x2580, 0xE541, 0x2700, 0xE7C1, 0xE681, 0x2640,
0x2200, 0xE2C1, 0xE381, 0x2340, 0xE101, 0x21C0, 0x2080, 0xE041,
0xA001, 0x60C0, 0x6180, 0xA141, 0x6300, 0xA3C1, 0xA281, 0x6240,
0x6600, 0xA6C1, 0xA781, 0x6740, 0xA501, 0x65C0, 0x6480, 0xA441,
0x6C00, 0xACC1, 0xAD81, 0x6D40, 0xAF01, 0x6FC0, 0x6E80, 0xAE41,
0xAA01, 0x6AC0, 0x6B80, 0xAB41, 0x6900, 0xA9C1, 0xA881, 0x6840,
0x7800, 0xB8C1, 0xB981, 0x7940, 0xBB01, 0x7BC0, 0x7A80, 0xBA41,
0xBE01, 0x7EC0, 0x7F80, 0xBF41, 0x7D00, 0xBDC1, 0xBC81, 0x7C40,
0xB401, 0x74C0, 0x7580, 0xB541, 0x7700, 0xB7C1, 0xB681, 0x7640,
0x7200, 0xB2C1, 0xB381, 0x7340, 0xB101, 0x71C0, 0x7080, 0xB041,
0x5000, 0x90C1, 0x9181, 0x5140, 0x9301, 0x53C0, 0x5280, 0x9241,
0x9601, 0x56C0, 0x5780, 0x9741, 0x5500, 0x95C1, 0x9481, 0x5440,
0x9C01, 0x5CC0, 0x5D80, 0x9D41, 0x5F00, 0x9FC1, 0x9E81, 0x5E40,
0x5A00, 0x9AC1, 0x9B81, 0x5B40, 0x9901, 0x59C0, 0x5880, 0x9841,
0x8801, 0x48C0, 0x4980, 0x8941, 0x4B00, 0x8BC1, 0x8A81, 0x4A40,
0x4E00, 0x8EC1, 0x8F81, 0x4F40, 0x8D01, 0x4DC0, 0x4C80, 0x8C41,
0x4400, 0x84C1, 0x8581, 0x4540, 0x8701, 0x47C0, 0x4680, 0x8641,
0x8201, 0x42C0, 0x4380, 0x8341, 0x4100, 0x81C1, 0x8081, 0x4040
};
uint16_t Modbus_Calculate_CRC16(uint8_t *frame, uint8_t size) {
    // Initialize CRC16 word
    uint16_t crc16 = 0xFFFF;
    // Calculate CRC16 word
    uint16_t ind;
    while (size--) {
        ind = (crc16 ^ *frame++) & 0x00FF;
        crc16 >>= 8;
        crc16 ^= wTable_CRC16_Modbus[ind];
    }
    // Swap low and high bytes
    crc16 = (crc16 << 8) | (crc16 >> 8);
    // Return the CRC16 word with
    // swapped low and high bytes
    return crc16;
}
Since I'm always going to send words, with the above algorithm I would need to break each word into LSB and MSB and repeat the code in the loop body twice, first for the LSB and then for the MSB. Instead of doing that, I would like to update the CRC in one step, with a word as the input to the loop body.

If and only if you read words from memory in little-endian order (least significant byte first), then you can read a 16-bit word at a time. You would still need to use the table twice to process the word. The loop would be:
size >>= 1;                                       // better have been even!
uint16_t const *words = (uint16_t const *)frame;  // better be little-endian!
while (size--) {
    crc16 ^= *words++;
    crc16 = (crc16 >> 8) ^ wTable_CRC16_Modbus[crc16 & 0xff];
    crc16 = (crc16 >> 8) ^ wTable_CRC16_Modbus[crc16 & 0xff];
}
Depending on your processor, you may also need to make sure that frame points to an even address. Note that if size is odd, this won't process the last byte; if size can be odd, you can add the processing of the last byte after the loop, as sketched below.
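For instance, the odd-length case could be handled like this (a sketch that restates the loop above so size is kept intact; the nwords name is mine, and it still assumes a little-endian machine and a suitably aligned frame):
uint8_t nwords = size >> 1;                       // number of whole 16-bit words
uint16_t const *words = (uint16_t const *)frame;  // better be little-endian!
while (nwords--) {
    crc16 ^= *words++;
    crc16 = (crc16 >> 8) ^ wTable_CRC16_Modbus[crc16 & 0xff];
    crc16 = (crc16 >> 8) ^ wTable_CRC16_Modbus[crc16 & 0xff];
}
if (size & 1) {                                   // odd length: one trailing byte
    crc16 ^= frame[size - 1];
    crc16 = (crc16 >> 8) ^ wTable_CRC16_Modbus[crc16 & 0xff];
}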
If you would like a single table lookup per word, then you'll need a much bigger table. The existing table is simply the CRC of the single-byte values 0..255 (where the initial CRC is zero, not 0xffff). The bigger table would be the same thing, but for the two-byte values 0..65535, with the least significant byte processed first. You can make that table using the existing table:
void modbus_bigtable(uint16_t *table) {
    for (uint16_t lo = 0; lo < 256; lo++) {
        uint16_t crclo = wTable_CRC16_Modbus[lo];
        uint16_t crchi = crclo >> 8;
        crclo &= 0xff;
        uint16_t *nxt = table;
        for (uint16_t hi = 0; hi < 256; hi++) {
            *nxt = crchi ^ wTable_CRC16_Modbus[hi ^ crclo];
            nxt += 256;
        }
        table++;
    }
}
Then the loop becomes:
size >>= 1;                                       // better have been even!
uint16_t const *words = (uint16_t const *)frame;  // better be little-endian!
while (size--)
    crc16 = table[crc16 ^ *words++];
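For completeness, a word-at-a-time wrapper using that table might look like the following sketch (the allocation helper and function names here are my own; it assumes a little-endian machine and an even-length, suitably aligned buffer, and it keeps the byte swap from the original routine):
#include <stdlib.h>

static uint16_t *big_table;  /* 65536 entries, filled once at startup */

int modbus_init_bigtable(void) {
    big_table = malloc(65536 * sizeof *big_table);
    if (big_table == NULL)
        return -1;           /* out of memory */
    modbus_bigtable(big_table);
    return 0;
}

uint16_t Modbus_Calculate_CRC16_Words(const uint8_t *frame, uint8_t size) {
    uint16_t crc16 = 0xFFFF;
    uint16_t const *words = (uint16_t const *)frame;  /* little-endian only */
    uint8_t n = size >> 1;
    while (n--)
        crc16 = big_table[crc16 ^ *words++];
    /* swap low and high bytes, as in the original routine */
    return (uint16_t)((crc16 << 8) | (crc16 >> 8));
}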

Related

How to continuously append a 16 bit integer to a fixed 8 bit array

I'm trying to store ADC values of type uint16_t in a 32 KB uint8_t buffer, as I'd like to write the data to an SD card. Currently I'm struggling with how to copy my converted 16-bit integer into my buffer.
My code looks like this:
#define SD_WRITE_BUF_SIZE 32768 //32KB
uint16_t adc_value; // holds the adc Converted value
uint8_t adc_conv_value[2]; // array to store the 8bit conversion of adc_value;
UINT bytesWritten;
int adc_callback_counter = 0;
uint8_t sd_buf_write[SD_WRITE_BUF_SIZE];
int main(void)
{
    Mount_SD_CARD();            // Mount SD card
    Open_SD_CARD_write();       // Create file and open for writing
    Start_ADC_Conversion();     // Start ADC conversion
    while (1)
    {
        if (adc_callback_counter >= (32768 / 2))
        {
            write_res = f_write(&myFile, sd_buf_write, SD_WRITE_BUF_SIZE, &bytesWritten);
            if (write_res == FR_OK)
            {
                // Stop ADC
                ADC_Stop();
                // Reset counter
                adc_callback_counter = 0;
                // Close file
                f_close(&myFile);
            }
        }
    }
}

void ADC_Conversion_Callback()
{
    // Convert to two 8-bit values
    adc_conv_value[0] = (adc_value & 0xFF);
    adc_conv_value[1] = (adc_value >> 8) & 0xFF;
    // How do I use memcpy to continuously copy adc_conv_value to sd_buf_write?
    memcpy(sd_buf_write, ???, sizeof(??));
    // Increment counter
    adc_callback_counter++;
}
In the ADC_Conversion_Callback() function, how do you continuously append adc_conv_value to sd_buf_write until the latter array is full?
// How do I use memcpy to continuously copy adc_conv_value to sd_buf_write?
memcpy(sd_buf_write, ???, sizeof(??));
You could use the adc_callback_counter variable, which tells you how many times memcpy has been called:
memcpy((char *)sd_buf_write + (sizeof (adc_conv_value) * adc_callback_counter), adc_conv_value, sizeof(adc_conv_value));
The offset is (sizeof(adc_conv_value) * adc_callback_counter) because each call copies sizeof(adc_conv_value) bytes, so the next free position is sd_buf_write + (sizeof(adc_conv_value) * adc_callback_counter).
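Put into the callback, that could look roughly like this (a sketch; the bounds check is my own addition so the copy never runs past the end of the buffer, and it assumes string.h is included for memcpy):
void ADC_Conversion_Callback(void)
{
    if (adc_callback_counter < SD_WRITE_BUF_SIZE / sizeof adc_conv_value) {
        // split the 16-bit sample into two bytes, LSB first
        adc_conv_value[0] = adc_value & 0xFF;
        adc_conv_value[1] = (adc_value >> 8) & 0xFF;
        // append at the next free position in the buffer
        memcpy(sd_buf_write + (sizeof adc_conv_value) * adc_callback_counter,
               adc_conv_value, sizeof adc_conv_value);
        adc_callback_counter++;
    }
}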

byte order using GCC struct bit packing

I am using GCC struct bit fields in an attempt to interpret 8-byte CAN message data. I wrote a small program as an example of one possible message layout. The code and the comments should describe my problem. I assigned the 8 bytes so that all 5 signals should equal 1. As the output on an Intel PC shows, that is hardly the case. All the CAN data I deal with is big-endian, and the fact that the signals are almost never packed 8-bit aligned makes htonl() and friends useless in this case. Does anyone know of a solution?
#include <stdio.h>
#include <netinet/in.h>
typedef union
{
    unsigned char data[8];
    struct {
        unsigned int signal1 : 32;
        unsigned int signal2 : 6;
        unsigned int signal3 : 16;
        unsigned int signal4 : 8;
        unsigned int signal5 : 2;
    } __attribute__((__packed__));
} _message1;

int main()
{
    _message1 message1;
    unsigned char incoming_data[8]; // This is how this message would come in from a CAN bus for all signals == 1
    incoming_data[0] = 0x00;
    incoming_data[1] = 0x00;
    incoming_data[2] = 0x00;
    incoming_data[3] = 0x01; // bit 1 of signal 1
    incoming_data[4] = 0x04; // bit 1 of signal 2
    incoming_data[5] = 0x00;
    incoming_data[6] = 0x04; // bit 1 of signal 3
    incoming_data[7] = 0x05; // bit 1 of signal 4 and signal 5
    for (int i = 0; i < 8; ++i) {
        message1.data[i] = incoming_data[i];
    }
    printf("signal1 = %x\n", message1.signal1);
    printf("signal2 = %x\n", message1.signal2);
    printf("signal3 = %x\n", message1.signal3);
    printf("signal4 = %x\n", message1.signal4);
    printf("signal5 = %x\n", message1.signal5);
}
Because bit-field packing order varies between compilers and architectures, the best option is to use a helper function to pack/unpack the binary data instead.
For example:
static inline void message1_unpack(uint32_t *fields,
                                   const unsigned char *buffer)
{
    const uint64_t data = (((uint64_t)buffer[0]) << 56)
                        | (((uint64_t)buffer[1]) << 48)
                        | (((uint64_t)buffer[2]) << 40)
                        | (((uint64_t)buffer[3]) << 32)
                        | (((uint64_t)buffer[4]) << 24)
                        | (((uint64_t)buffer[5]) << 16)
                        | (((uint64_t)buffer[6]) << 8)
                        |  ((uint64_t)buffer[7]);

    fields[0] =  data >> 32;           /* Bits 32..63 */
    fields[1] = (data >> 26) & 0x3F;   /* Bits 26..31 */
    fields[2] = (data >> 10) & 0xFFFF; /* Bits 10..25 */
    fields[3] = (data >> 2) & 0xFF;    /* Bits 2..9 */
    fields[4] =  data & 0x03;          /* Bits 0..1 */
}
Note that because the consecutive bytes are interpreted as a single unsigned integer (in big-endian byte order), the above will be perfectly portable.
Instead of an array of fields, you could use a structure, of course; but it does not need to have any resemblance to the on-the-wire structure at all. However, if you have several different structures to unpack, an array of (maximum-width) fields usually turns out to be easier and more robust.
All sane compilers will optimize the above code just fine. In particular, GCC with -O2 does a very good job.
The inverse, packing those same fields to a buffer, is very similar:
static inline void message1_pack(unsigned char *buffer,
                                 const uint32_t *fields)
{
    const uint64_t data = (((uint64_t)(fields[0]         )) << 32)
                        | (((uint64_t)(fields[1] & 0x3F  )) << 26)
                        | (((uint64_t)(fields[2] & 0xFFFF)) << 10)
                        | (((uint64_t)(fields[3] & 0xFF  )) << 2)
                        |  ((uint64_t)(fields[4] & 0x03  ));

    buffer[0] = data >> 56;
    buffer[1] = data >> 48;
    buffer[2] = data >> 40;
    buffer[3] = data >> 32;
    buffer[4] = data >> 24;
    buffer[5] = data >> 16;
    buffer[6] = data >> 8;
    buffer[7] = data;
}
Note that the masks define the field lengths (0x03 = 0b11 (2 bits), 0x3F = 0b111111 (6 bits), 0xFF = 0b11111111 (8 bits), 0xFFFF = 0b1111111111111111 (16 bits)), and the shift amounts depend on the bit position of the least significant bit of each field.
To verify that such functions work, pack, unpack, repack, and re-unpack data where one field is all ones and the rest are all zeros, and check that the values survive both round trips. That is usually enough to catch the typical bugs (wrong shift amounts, typos in masks).
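A sketch of such a round-trip check (the test values and the assert-based harness are my own, not part of the original answer):
#include <assert.h>
#include <string.h>

static void message1_roundtrip_test(void)
{
    /* one field all ones, the rest zero */
    static const uint32_t patterns[5][5] = {
        { 0xFFFFFFFFu, 0, 0, 0, 0 },
        { 0, 0x3Fu, 0, 0, 0 },
        { 0, 0, 0xFFFFu, 0, 0 },
        { 0, 0, 0, 0xFFu, 0 },
        { 0, 0, 0, 0, 0x03u },
    };

    for (int i = 0; i < 5; i++) {
        unsigned char buf[8];
        uint32_t fields[5];

        message1_pack(buf, patterns[i]);   /* pack ... */
        message1_unpack(fields, buf);      /* ... unpack ... */
        assert(memcmp(fields, patterns[i], sizeof fields) == 0);

        message1_pack(buf, fields);        /* ... repack ... */
        message1_unpack(fields, buf);      /* ... and re-unpack */
        assert(memcmp(fields, patterns[i], sizeof fields) == 0);
    }
}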
Note that documentation will be key to ensure the code remains maintainable. I'd personally add comment blocks before each of the above functions, similar to
/* message1_unpack(): Unpack 8-byte message to 5 fields:
field[0]: Foobar. Bits 32..63.
field[1]: Buzz. Bits 26..31.
field[2]: Wahwah. Bits 10..25.
field[3]: Cheez. Bits 2..9.
field[4]: Blop. Bits 0..1.
*/
with the field "names" reflecting their names in documentation.

Parallel Verilog CRC algorithm from C-like reference

I have a set of C-like snippets that describe a CRC algorithm, and an article that explains how to transform a serial implementation into a parallel one, which I need to implement in Verilog.
I tried using multiple online code generators, both serial and parallel (although a serial implementation would not work in the final solution), and also tried working through the article, but got nothing resembling what these snippets generate.
I should say I'm more or less exclusively a hardware engineer and my understanding of C is rudimentary. I have also never worked with CRCs beyond a straightforward shift-register implementation. I can see the polynomial and the initial value from what I have, but that is more or less it.
The serial implementation uses an augmented message. Should I also make the parallel one work on a message 6 bits wider and append zeros to it?
I do not understand too well how the final value crc6 is generated. crcValue is computed with the CalcCrc function for the final zeros of the augmented message, then its top bit is written into its place in crc6 and removed before feeding it to the function again. Why is that? When working through the algorithm to get the matrices for the parallel implementation, should I take crc6 as my final result rather than the last value of crcValue?
Regardless of how crc6 is obtained, the snippet for the CRC check just runs everything through the function. How does that work?
Here are the code snippets:
const unsigned crc6Polynom = 0x03;  // x**6 + x + 1
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
    unsigned m = crcValue & crc6Polynom;
    while (m > 0) {
        thisbit ^= (m & 1);
        m >>= 1;
        return (((thisbit << 6) | crcValue) >> 1);
    }
}

// obtain CRC6 for sending (6 bit)
unsigned GetCrc(unsigned crcValue) {
    unsigned crc6 = 0;
    for (i = 0; i < 6; i++) {
        crcValue = CalcCrc(crcValue, 0);
        crc6 |= (crcValue & 0x20) | (crc6 >> 1);
        crcValue &= 0x1F; // remove output bit
    }
    return (crc6);
}

// Calculate CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits; i++) { // Startbit excluded
    unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
    crcValue = CalcCrc(crcValue, thisBit);
}
/* now send telegram + GetCrc(crcValue) */

// Check CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits + 6; i++) { // No startbit, but with CRC
    unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
    crcValue = CalcCrc(crcValue, thisBit);
}
if (crcValue != 0) { /* put error handler here */ }
Thanks in advance for any advice, I'm really stuck there.
XORing bits of the data stream can be done in parallel because only the least significant bit is used for feedback (in this case), and the order of the data-stream bit XOR operations doesn't affect the result.
Whether the hardware would need a parallel version depends on how a data stream is handled. The hardware could calculate the CRC one bit at a time during transmission or reception. If the hardware is staged to work with 6 bit characters, then a parallel version would make sense.
Since the snippets use a right shift for the CRC, it would seem that data for each 6 bit character is transmitted and received least significant bit first, to allow for hardware that could calculate CRC 1 bit at a time as it's transmitted or received. After all 6 bit data characters are transmitted, then the 6 bit CRC is transmitted (also least significant bit first).
The snippets seem wrong. My guess at what they should be:
/* calculate crc6 1 bit at a time */
const unsigned crc6Polynom = 0x43;  /* x**6 + x + 1 */
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
    crcValue ^= thisbit;
    if (crcValue & 1)
        crcValue ^= crc6Polynom;
    crcValue >>= 1;
    return crcValue;
}
Example of passing 6 bits at a time. A 64-entry, 6-bit-wide table lookup could be used to replace the for loop (a sketch of building such a table follows the example below).
/* calculate 6 bits at a time */
unsigned CalcCrc6(unsigned crcValue, unsigned sixbits) {
    int i;
    crcValue ^= sixbits;
    for (i = 0; i < 6; i++) {
        if (crcValue & 1)
            crcValue ^= crc6Polynom;
        crcValue >>= 1;
    }
    return crcValue;
}
Assume that telegram contains 31 bits, 1 start bit + 30 data bits (five 6 bit characters):
/* code to calculate crc 6 bits at a time: */
unsigned crcValue = 0x3F;
int i;
telegram >>= 1;  /* skip start bit */
for (i = 0; i < 5; i++) {
    crcValue = CalcCrc6(crcValue, telegram & 0x3f);
    telegram >>= 6;
}
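If you want the table lookup mentioned above instead of the inner loop, the 64-entry table can be generated once from CalcCrc6 (a sketch; the table and builder names are my own):
/* 64-entry table such that crc6Table[v] == CalcCrc6(0, v) */
static unsigned char crc6Table[64];

void BuildCrc6Table(void) {
    unsigned v;
    for (v = 0; v < 64; v++)
        crc6Table[v] = (unsigned char)CalcCrc6(0, v);
}

/* the per-character update then becomes: */
/* crcValue = crc6Table[(crcValue ^ (telegram & 0x3f)) & 0x3f]; */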

_mm_crc32_u8 gives different result than reference code

I've been struggling with the intrinsics. In particular, I don't get the same results using the standard CRC calculation and the supposedly equivalent Intel intrinsics. I'd like to move to using _mm_crc32_u16 and _mm_crc32_u32, but if I can't get the 8-bit operation to work there's no point.
static UINT32 g_ui32CRC32Table[256] =
{
0x00000000L, 0x77073096L, 0xEE0E612CL, 0x990951BAL,
0x076DC419L, 0x706AF48FL, 0xE963A535L, 0x9E6495A3L,
0x0EDB8832L, 0x79DCB8A4L, 0xE0D5E91EL, 0x97D2D988L,
....
// Your basic 32-bit CRC calculator
// NOTE: this code cannot be changed
UINT32 CalcCRC32(unsigned char *pucBuff, int iLen)
{
    UINT32 crc = 0xFFFFFFFF;
    for (int x = 0; x < iLen; x++)
    {
        crc = g_ui32CRC32Table[(crc ^ *pucBuff++) & 0xFFL] ^ (crc >> 8);
    }
    return crc ^ 0xFFFFFFFF;
}

UINT32 CalcCRC32_Intrinsic(unsigned char *pucBuff, int iLen)
{
    UINT32 crc = 0xFFFFFFFF;
    for (int x = 0; x < iLen; x++)
    {
        crc = _mm_crc32_u8(crc, *pucBuff++);
    }
    return crc ^ 0xFFFFFFFF;
}
That table is for a different CRC polynomial than the one used by the Intel instruction. The table is for the Ethernet/ZIP/etc. CRC, often referred to as CRC-32. The Intel instruction uses the iSCSI (Castagnoli) polynomial, for the CRC often referred to as CRC-32C.
This short example code can calculate either, by uncommenting the desired polynomial:
#include <stddef.h>
#include <stdint.h>
/* CRC-32 (Ethernet, ZIP, etc.) polynomial in reversed bit order. */
#define POLY 0xedb88320
/* CRC-32C (iSCSI) polynomial in reversed bit order. */
/* #define POLY 0x82f63b78 */
/* Compute CRC of buf[0..len-1] with initial CRC crc. This permits the
computation of a CRC by feeding this routine a chunk of the input data at a
time. The value of crc for the first chunk should be zero. */
uint32_t crc32c(uint32_t crc, const unsigned char *buf, size_t len)
{
    int k;

    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
    }
    return ~crc;
}
You can use this code to generate a replacement table for your code from the one-byte messages 0, 1, 2, ..., 255. Note, however, that the entries the table-driven loop expects are the raw per-byte remainders, computed without the initial 0xFFFFFFFF and without the final inversion, so use only the bit-wise inner loop when building the table.
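As a sketch of building that table (the function name is my own; it reuses only the bit-wise division from the routine above, with no initial or final inversion, which is what the table-driven loop expects):
#include <stdint.h>

#define POLY_CRC32C 0x82f63b78u  /* CRC-32C (iSCSI) polynomial, reversed bit order */

/* Fill a 256-entry table usable by the table-driven loop in CalcCRC32(). */
void make_crc32c_table(uint32_t table[256])
{
    for (uint32_t i = 0; i < 256; i++) {
        uint32_t crc = i;
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY_CRC32C : crc >> 1;
        table[i] = crc;
    }
}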
FWIW, I've obtained SW code that demonstrably matches the Intel crc32c instruction, but it uses a different polynomial: 0x82f63b78. The function definitely doesn't match any of the iSCSI test examples here: https://www.rfc-editor.org/rfc/rfc3720#appendix-B.4
What's frustrating in all this is that every implementation I've tried for CRC-32C comes out with a different hash from all the others. Is there a true piece of reference code out there?

How to set bits in a byte variable (Arduino)

My question is Arduino-specific, although if you know how to do it in C it will be similar in the Arduino IDE too.
So I have 5 integer variables:
r1, r2, r3, r4, r5
Their values are either 0 (off) or 1 (on).
I would like to store these in a byte variable, let's call it relays, not by adding them but by setting individual bits to 1 or 0 depending on whether each variable is 0 or 1.
For example:
1, 1, 0, 0, 1
I would like to have those exact bits in my relays byte variable, not
r1+r2+r3+r4+r5, which in this case would be decimal 3, binary 11.
Thanks!
I recommend using a union with a structure of bit fields. It adds clarity and makes the code readily portable. You can specify single bits or any number of adjacent bits, and quickly rearrange them.
union {
  uint8_t BAR;
  struct {
    uint8_t r1 : 1; // bit position 0
    uint8_t r2 : 2; // bit positions 1..2
    uint8_t r3 : 3; // bit positions 3..5
    uint8_t r4 : 2; // bit positions 6..7
    // total # of bits just needs to add up to the uint8_t size
  } bar;
} foo;

void setup() {
  Serial.begin(9600);

  foo.bar.r1 = 1;
  foo.bar.r2 = 2;
  foo.bar.r3 = 2;
  foo.bar.r4 = 1;

  Serial.print(F("foo.bar.r1 = 0x"));
  Serial.println(foo.bar.r1, HEX);
  Serial.print(F("foo.bar.r2 = 0x"));
  Serial.println(foo.bar.r2, HEX);
  Serial.print(F("foo.bar.r3 = 0x"));
  Serial.println(foo.bar.r3, HEX);
  Serial.print(F("foo.bar.r4 = 0x"));
  Serial.println(foo.bar.r4, HEX);
  Serial.print(F("foo.BAR = 0x"));
  Serial.println(foo.BAR, HEX);
}
You can expand this union to be larger than a byte.
Note that uint8_t is the same as byte.
You can even expand the union to an array of bytes and then send the bytes over the serial port, or clock them out individually as one long word, etc.; see a more extensive example.
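As a rough sketch of that kind of expansion (the names and field layout here are assumptions, not taken from the original example):
union {
  uint8_t bytes[2];       // raw byte view, e.g. Serial.write(flags.bytes, 2);
  uint16_t word;          // whole 16-bit view
  struct {
    uint16_t r1 : 1;
    uint16_t r2 : 1;
    uint16_t r3 : 1;
    uint16_t r4 : 1;
    uint16_t r5 : 1;
    uint16_t spare : 11;  // pad the struct out to 16 bits
  } bits;
} flags;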
How about:
byte relays = (r1 << 4) | (r2 << 3) | (r3 << 2) | (r4 << 1) | r5;
Or the other way around:
byte relays = r1 | (r2 << 1) | (r3 << 2) | (r4 << 3) | (r5 << 4);
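And to read the individual flags back out of relays later, shift and mask (a short sketch, assuming the second bit ordering above):
int r1_out = relays & 1;         // bit 0 holds r1
int r3_out = (relays >> 2) & 1;  // bit 2 holds r3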
