Compute CRC lookup table of poly CRC-32K/6.2 - c

I need the properties of the polynomial (0x992c1a4c; 0x132583499) from the CRC Zoo.
I have read Wikipedia and Ross N. Williams thoroughly but I can't make the final connection. Also I don't know how to check the generated table for correctness.
Do I need to take into account the endianness of the system where it is implemented? Can I use the reflected algorithm regardless of that? Which initial and XorOut values should I pick? How do I check my results?

Trivia - the polynomial 0x132583499 is the product of 3 prime factors (carryless multiply):
0x3 * 0x3 * 0x5A12A42D = 0x132583499
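That factorization can be verified with a small shift-and-xor carryless multiply; a minimal sketch (the clmul helper here is just illustrative, not part of the table problem):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

static uint64_t clmul(uint64_t a, uint64_t b)   /* GF(2) multiply: xor instead of add */
{
    uint64_t r = 0;
    while (b) {
        if (b & 1)
            r ^= a;
        a <<= 1;
        b >>= 1;
    }
    return r;
}

int main(void)
{
    printf("%" PRIx64 "\n", clmul(clmul(0x3, 0x3), 0x5A12A42D));  /* expect 132583499 */
    return 0;
}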
This depends on whether the crc is left shifted or right shifted. Assuming the table is for processing one byte at a time, a 256-entry table of 32-bit values is used. For a left shifting crc, the most significant bit of the polynomial is masked off: 0x132583499 -> 0x32583499:
#include <stdint.h>

static uint32_t crctbl[256];

void gentbl(void)
{
    uint32_t crc;
    uint32_t b;
    uint32_t c;
    uint32_t i;
    for (c = 0; c < 0x100; c++) {
        crc = c << 24;                      /* byte goes into the high bits */
        for (i = 0; i < 8; i++) {
            b = crc >> 31;                  /* top bit before the shift */
            crc <<= 1;
            crc ^= (0 - b) & 0x32583499;    /* xor polynomial if top bit was set */
        }
        crctbl[c] = crc;
    }
}
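For reference, a minimal sketch of how such a left-shift table is typically consumed one byte at a time (the choice of initial value and xorout is left to the specific crc definition):
#include <stddef.h>

uint32_t crc32_left(const unsigned char *p, size_t n, uint32_t crc)
{
    while (n--)
        crc = (crc << 8) ^ crctbl[((crc >> 24) ^ *p++) & 0xFF];
    return crc;
}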
For a right shifting crc, the polynomial is reversed and right shifted 1 bit; this polynomial happens to be palindromic, so 0x132583499 reversed is still 0x132583499, and shifted right 1 bit it becomes 0x992c1a4c:
void gentbl(void)
{
    uint32_t crc;
    uint32_t b;
    uint32_t c;
    uint32_t i;
    for (c = 0; c < 0x100; c++) {
        crc = c;                            /* byte goes into the low bits */
        for (i = 0; i < 8; i++) {
            b = crc & 1;                    /* bottom bit before the shift */
            crc >>= 1;
            crc ^= (0 - b) & 0x992c1a4c;    /* xor polynomial if bottom bit was set */
        }
        crctbl[c] = crc;
    }
}
Do I need to take into account the endianness of the system where it is implemented?
Only if the code loads or stores more than a byte at a time. This may be required if the symbol size is not the same as a byte.
Can I use the reflected algorithm regardless of that (endianness)?
Yes, the endianness only affects the loading and storing of data. The reflected algorithm is used for right shifting crc, non-reflected for left shifting crc.
Which initial and XorOut values should I pick?
This is arbitrary and depends on the specific crc. Initial value is typically all zero bits or all one bits with a few exceptions. XorOut is most often 0, but sometimes all one bits to post complement a crc.
How do I check my results?
Use the code with the same initial value, xorout, and polynomial as a crc supported by an online calculator, and verify a few crc values. Note that endianness affects the output shown on some online calculators. The string size for an online calculator is limited, but even a few bytes should be enough to check the crc. If the table is created using code similar to the examples above, it's unlikely to have a mix of good and bad entries.
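Another way to check the table without an online calculator is to compare a table-driven crc against the bit-at-a-time computation; a minimal sketch, assuming the right-shifting gentbl/crctbl above and arbitrarily picking init = 0 and xorout = 0 (the two functions must agree on every input if the table is correct):
#include <stdint.h>
#include <stddef.h>

uint32_t crc_table(const unsigned char *p, size_t n, uint32_t crc)
{
    while (n--)
        crc = (crc >> 8) ^ crctbl[(crc ^ *p++) & 0xFF];
    return crc;
}

uint32_t crc_bitwise(const unsigned char *p, size_t n, uint32_t crc)
{
    while (n--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x992c1a4c : crc >> 1;
    }
    return crc;
}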
A 32 bit crc is the remainder produced by treating the message as a long n bit dividend and dividing it by a 33 bit polynomial, resulting in a 32 bit remainder, which is the crc. The remainder is appended to the message, resulting in an encoded string of n+32 bits that is an exact multiple of the crc polynomial. If there are no errors and the crc is generated for the n+32 bit encoded string of bits, the crc will always be some constant, such as 0 if xorout == 0.
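To see that property concretely, a small sketch reusing crc_bitwise from the check above (reflected/right-shifting convention, init = 0, xorout = 0, so the crc is appended low byte first):
int check_residual(unsigned char *msg, size_t n)   /* msg must have room for n+4 */
{
    uint32_t crc = crc_bitwise(msg, n, 0);
    for (int i = 0; i < 4; i++)
        msg[n + i] = (unsigned char)(crc >> (8 * i));  /* low byte first */
    return crc_bitwise(msg, n + 4, 0) == 0;            /* expect true */
}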
The crc zoo table contains additional information, such as a list of maximum numbers of data bits (before the 32 bit crc is appended) versus Hamming distance (HD), starting with HD=3, which means that every valid encoded string differs by at least 3 bits from any other valid encoded string, and therefore any 2 bit error can be detected if the message length is not too long. You can click on the lengths to see an expanded list including failure examples, showing the indexes of the leading bits and the last 32 bits of the message (somewhat confusing; I converted some of these to show all as indexes). There are 12 lengths shown; I added a 3rd row showing the lengths including the 32 bit crc:
HD = {    3,    4,    5,    6,   7,   8,  9, 10, 11, 12, 13, 14}
     {65506,65506,32738,32738, 134, 134, 26, 26, 16, 16,  3,  3}
     {65538,65538,32770,32770, 166, 166, 58, 58, 48, 48, 35, 35}  +32 for crc
The web site includes examples of failures with crc poly 0x132583499 when the message length is too long for a given Hamming distance:
For HD=3 or 4: an encoded message of 65539 bits (65538+1), all zero bits except bit[0] and bit[65538] = 1, passes a crc check even though 2 bits are in error.
For HD=5 or 6: encoded message length 32771, all zero bits except bit[{0, 1, 32769, 32770}] = 1, passes a crc check with a 4 bit error.
For HD=7 or 8: encoded message length 167, all zero bits except bit[{0, 43, 44, 122, 123, 166}] = 1, passes a crc check with a 6 bit error.
For HD=9 or 10: encoded message length 59, all zero bits except bit[{0, 5, 21, 25, 33, 37, 53, 58}] = 1, passes a crc check with an 8 bit error.

Related

How to efficiently count leading zeros in a 24 bit unsigned integer?

Most software implementations of clz() are optimized for 32 bit unsigned integers.
How to efficiently count leading zeros in a 24 bit unsigned integer?
UPD. Target's characteristics:
CHAR_BIT 24
sizeof(int) 1
sizeof(long int) 2
sizeof(long long int) 3
TL;DR: See point 4 below for the C program.
Assuming your hypothetical target machine is capable of correctly implementing unsigned 24-bit multiplication (which must return the low-order 24 bits of the product), you can use the same trick as is shown in the answer you link. (But you might not want to. See [Note 1].) It's worth trying to understand what's going on in the linked answer.
The input is reduced to a small set of values, where all integers with the same number of leading zeros map to the same value. The simple way of doing that is to flood every bit to cover all the bit positions to the right of it:
x |= x>>1;
x |= x>>2;
x |= x>>4;
x |= x>>8;
x |= x>>16;
That will work for 17 up to 32 bits; if your target datatype has 9 to 16 bits, you could leave off the last shift-and-or because there is no bit position 16 bits to the right of any bit. And so on. But with 24 bits, you'll want all five shift-and-or.
With that, you've turned x into one of 25 values (for 24-bit ints):
       x  clz         x  clz         x  clz         x  clz         x  clz
--------  ---  --------  ---  --------  ---  --------  ---  --------  ---
0x000000   24  0x00001f   19  0x0003ff   14  0x007fff    9  0x0fffff    4
0x000001   23  0x00003f   18  0x0007ff   13  0x00ffff    8  0x1fffff    3
0x000003   22  0x00007f   17  0x000fff   12  0x01ffff    7  0x3fffff    2
0x000007   21  0x0000ff   16  0x001fff   11  0x03ffff    6  0x7fffff    1
0x00000f   20  0x0001ff   15  0x003fff   10  0x07ffff    5  0xffffff    0
Now, to turn x into clz, we need a good hash function. We don't necessarily expect that hash(x)==clz, but we want the 25 possible x values to hash to different numbers, ideally in a small range. As with the link you provide, the hash function we'll choose is to multiply by a carefully-chosen multiplicand and then mask off a few bits. Using a mask means that we need to choose five bits; in theory, we could use a 5-bit mask anywhere in the 24-bit word, but in order to not have to think too much, I just chose the five high-order bits, the same as the 32-bit solution. Unlike the 32-bit solution, I didn't bother adding 1, and I expect to get distinct values for all 25 possible inputs. The equivalent isn't possible with a five-bit mask and 33 possible clz values (as in the 32-bit case), so they have to jump through an additional hoop if the original input was 0.
Since the hash function doesn't directly produce the clz value, but rather a number between 0 and 31, we need to translate the result to a clz value, which uses a 32-byte lookup table, called debruijn in the 32-bit algorithm for reasons I'm not going to get into.
An interesting question is how to select a multiplier with the desired characteristics. One possibility would be to do a bunch of number theory to elegantly discover a solution. That's how it was done decades ago, but these days I can just write a quick-and-dirty Python program to do a brute force search over all the possible multipliers. After all, in the 24-bit case there are only about 16 million possibilities and lots of them work. The actual Python code I used is:
# Compute the 25 target values
targ = [2**i - 1 for i in range(25)]
# For each possible multiplier, compute all 25 hashes, and see if they
# are all different (that is, the set of results has size 25):
next(i for i in range(2**19, 2**24)
       if len(targ) == len(set(((i * t) >> 19) & 0x1f
                               for t in targ)))
Calling next on a generator expression returns the first generated value, which in this case is 0x8CB4F, or 576335. Since the search starts at 0x80000 (which is the smallest multiplier for which hash(1) is not 0), the result printed instantly. I then spent a few more milliseconds to generate all the possible multipliers between 2**19 and 2**20, of which there are 90, and selected 0xCAE8F (831119) for purely personal aesthetic reasons.
The last step is to create the lookup table from the computed hash function. (Not saying this is good Python. I just took it from my command history; I might come back and clean it up later. But I included it for completeness.):
lut = dict((i, -1) for i in range(32))
lut.update((((v * 0xcae8f) >> 19) & 0x1f, 24 - i)
           for i, v in enumerate(targ))
print("    static const char lut[] = {\n      " +
      ",\n      ".join(', '.join(f"{lut[i]:2}" for i in range(j, j+8))
                       for j in range(0, 32, 8)) +
      "\n    };\n")
# The result is pasted into the C code below.
So then it's just a question of assembling the C code:
// Assumes that `unsigned int` has 24 value bits.
int clz(unsigned x) {
    static const char lut[] = {
        24, 23,  7, 18, 22,  6, -1,  9,
        -1, 17, 15, 21, 13,  5,  1, -1,
         8, 19, 10, -1, 16, 14,  2, 20,
        11, -1,  3, 12,  4, -1,  0, -1
    };
    x |= x>>1;
    x |= x>>2;
    x |= x>>4;
    x |= x>>8;
    x |= x>>16;
    return lut[((x * 0xcae8f) >> 19) & 0x1f];
}
The test code calls clz on every 24-bit integer in turn. Since I don't have a 24-bit machine handy, I just assume that the arithmetic will work the same on the hypothetical 24-bit machine in the OP.
#include <stdio.h>

/* For each 24-bit integer in turn (from 0 to 2**24-1), if
 * clz(i) is different from clz(i-1), print clz(i) and i.
 *
 * Expected output is 0 and the powers of 2 up to 2**23, with
 * descending clz values from 24 to 0.
 */
int main(void) {
    int prev = -1;
    for (unsigned i = 0; i < 1<<24; ++i) {
        int pfxlen = clz(i);
        if (pfxlen != prev) {
            printf("%2d 0x%06X\n", pfxlen, i);
            prev = pfxlen;
        }
    }
    return 0;
}
Notes:
If the target machine does not implement 24-bit unsigned multiply in hardware --i.e., it depends on a software emulation-- then it's almost certainly faster to do the clz by just looping over initial bits, particularly if you fold the loop by scanning several bits at a time with a lookup table. That might be faster even if the machine does do efficient hardware multiplies. For example, you can scan six bits at a time with a 32-entry table:
// Assumes that `unsigned int` has 24 value bits.
int clz(unsigned int x) {
    static const char lut[] = {
        5, 4, 3, 3, 2, 2, 2, 2,
        1, 1, 1, 1, 1, 1, 1, 1,
        0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0
    };
    /* Six bits at a time makes octal easier */
    if (x & 077000000u) return lut[x >> 19];
    if (x & 0770000u)   return lut[x >> 13] + 6;
    if (x & 07700u)     return lut[x >> 7] + 12;
    if (x)              return lut[x >> 1] + 18;
    return 24;
}
That table could be reduced to 48 bits but the extra code would likely eat up the savings.
A couple of clarifications seem to be in order here. First, although we're scanning six bits at a time, we only use five of them to index the table. That's because we've previously verified that the six bits in question are not all zero; in that case, the low-order bit is either irrelevant (if some other bit is set) or it's 1. Also, we get the table index by shifting without masking; the masking is unnecessary because we know from the masked tests that all the higher order bits are 0. (This will, however, fail miserably if x has more than 24 bits.)
Convert the 24 bit integer into a 32 bit one (either by type punning or explicitly shuffling the bits around), then apply the 32 bit clz, and subtract 8.
Why do it that way? Because in this day and age you'll be hard pressed to find a machine that deals with 24 bit types, natively, in the first place.
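A minimal sketch of that approach, assuming a host where unsigned int is 32 bits and GCC/Clang's __builtin_clz is available:
#include <stdint.h>

int clz24_via32(uint32_t x)       /* x holds the 24-bit value in its low bits */
{
    if (x == 0)
        return 24;                /* __builtin_clz(0) is undefined */
    return __builtin_clz(x) - 8;  /* discount the 8 always-zero high bits */
}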
I would look for the builtin function or intrinsic available for your platform and compiler. Those functions usually implement the most efficient way of finding the most significant bit number. For example, gcc has __builtin_clz function.
If the 24 bit integer is stored in a byte array (for example, received from a sensor):
#include <limits.h>
#include <string.h>

#define BITS(x) (CHAR_BIT * sizeof(x) - 24)

int unaligned24clz(const void * restrict val)
{
    unsigned u = 0;
    memcpy(&u, val, 3);   /* copy the 3 bytes of the 24-bit value */
#if defined(__GNUC__)
    return __builtin_clz(u) - BITS(u);
#elif defined(__ICCARM__)
    return __CLZ(u) - BITS(u);
#elif defined(__arm__)
    return __clz(u) - BITS(u);
#else
    return clz(u) - BITS(u); // portable version using standard C features
#endif
}
If it is stored in a valid integer:
int clz24(const unsigned u)
{
#if defined(__GNUC__)
    return __builtin_clz(u) - BITS(u);
#elif defined(__ICCARM__)
    return __CLZ(u) - BITS(u);
#elif defined(__arm__)
    return __clz(u) - BITS(u);
#else
    return clz(u) - BITS(u); // portable version using standard C features
#endif
}
https://godbolt.org/z/z6n1rKjba
You can add more compilers support if you need.
Remember that if the value is 0, the result of __builtin_clz is undefined, so you will need to add another check.
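For example, a guard along these lines (clz24 as defined above; returning 24 for a zero input is an arbitrary choice):
int clz24_checked(unsigned u)
{
    return u ? clz24(u) : 24;  /* define the zero case explicitly */
}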

CRC-15 giving wrong values

I am trying to create a CRC-15 check in C and the output is never correct for each line of the file. I am trying to output the CRC for each line cumulatively next to each line. I use #define POLYNOMIAL 0xA053 for the divisor and the text as the dividend. I need to represent numbers as 32-bit unsigned integers. I have tried printing out the hex values to keep track and flipping different shifts around, but I just can't seem to figure it out! I have a feeling it has something to do with the way I am padding things. Is there a flaw in my logic?
The CRC is to be represented as four hexadecimal digits, and that sequence will have four leading 0's. For example, it will look like 0000xxxx where the x's are the hexadecimal digits. The polynomial I use is 0xA053.
I thought about using a temp variable and doing 4 16-bit chunks per line for every XOR; however, I'm not quite sure how I could use shifts to accomplish this, so I settled for a checksum of the letters on the line and then XORing that to try to calculate the CRC code.
I am testing my code using the following input, padding with . until the string is of length 504, because that is the pad character given in the requirements:
"This is the lesson: never give in, never give in, never, never, never, never - in nothing, great or small, large or petty - never give in except to convictions of honor and good sense. Never yield to force; never yield to the apparently overwhelming might of the enemy."
The CRC of the first 64 char line ("This is the lesson: never give in, never give in, never, never,") is supposed to be 000015fa and I am getting bfe6ec00.
My logic:
In crcCalculation I add each character to a 32-bit unsigned integer, and after 64 characters (the length of one line) I send it into the XOR function.
If the top bit is not 1, I shift the number to the left by one, causing 0s to pad the right, and loop around again.
If the top bit is 1, I XOR the dividend with the divisor and then shift the dividend to the left by one.
After all calculations are done, I return the dividend shifted to the left by four (to add four zeros to the front) to the calculation function.
Add the result to the running total of the result.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <ctype.h>

#define POLYNOMIAL 0xA053

uint32_t XOR(uint32_t divisor, uint32_t dividend);

void crcCalculation(char *text, int length)
{
    int i;
    uint32_t dividend = atoi(text);
    uint32_t result;
    uint32_t sumText = 0;

    // Calculate CRC
    printf("\nCRC 15 calculation progress:\n");
    i = length;

    // padding
    if (i < 504)
    {
        for (; i != 504; i++)
        {
            // printf("i is %d\n", i);
            text[i] = '.';
        }
    }

    // Try calculating the first line of the crc by summing the values, then add in the next line
    for (i = 0; i < 504; i++)
    {
        if (i % 64 == 0 && i != 0)
        {
            result = XOR(POLYNOMIAL, sumText);
            printf(" - %x\n", result);
        }
        sumText += (uint32_t)text[i];
        printf("%c", text[i]);
    }
    printf("\n\nCRC15 result : %x\n", result);
}

uint32_t XOR(uint32_t divisor, uint32_t dividend)
{
    uint32_t divRemainder = dividend;
    uint32_t currentBit;

    // Note: 4 16 bit chunks
    for (currentBit = 32; currentBit > 0; --currentBit)
    {
        // if topbit is 1
        if (divRemainder & 0x80)
        {
            //divRemainder = (divRemainder << 1) ^ divisor;
            divRemainder ^= divisor;
            printf("%x %x\n", divRemainder, divisor);
        }
        // else
        //     divisor = divisor >> 1;
        divRemainder = (divRemainder << 1);
    }
    //return divRemainder; have tried shifting right and left; want to add 4 zeros to the front
    //return divRemainder >> 4;
    return divRemainder >> 4;
}
uint32_t XOR(uint32_t divisor, uint32_t dividend)
{
uint32_t divRemainder = dividend;
uint32_t currentBit;
// Note: 4 16 bit chunks
for(currentBit = 32; currentBit > 0; --currentBit)
{
// if topbit is 1
if(divRemainder & 0x80)
{
//divRemainder = (divRemainder << 1) ^ divisor;
divRemainder ^= divisor;
printf("%x %x\n", divRemainder, divisor);
}
// else
// divisor = divisor >> 1;
divRemainder = (divRemainder << 1);
}
//return divRemainder; , have tried shifting to right and left, want to add 4 zeros to front so >>
//return divRemainder >> 4;
return divRemainder >> 4;
}
The first issue I see is the top bit check, it should be:
if(divRemainder & 0x8000)
The question doesn't state if the CRC is bit reflected (xor data into low order bits of CRC, right shift for cycle) or not (xor data into high order bits of CRC, left shift for cycle), so I can't offer help for the rest of the code.
The question doesn't state the initial value of CRC (0x0000 or 0x7fff), or if the CRC is post complemented.
The logic for a conventional CRC is:
xor a byte of data into the CRC (upper or lower bits)
cycle the CRC 8 times (or do a table lookup)
After generating the CRC for an entire message, the CRC can be appended to the message. If a CRC is generated for a message with the appended CRC and there are no errors, the CRC will be zero (or a constant value if the CRC is post complemented).
Here is a typical CRC16, extracted from: <www8.cs.umu.se/~isak/snippets/crc-16.c>
#define POLY 0x8408
/*
 * This is the CCITT CRC 16 polynomial X^16 + X^12 + X^5 + 1.
 * This works out to be 0x1021, but the way the algorithm works
 * lets us use 0x8408 (the reverse of the bit pattern). The high
 * bit is always assumed to be set, thus we only use 16 bits to
 * represent the 17 bit value.
 */
unsigned short crc16(char *data_p, unsigned short length)
{
    unsigned char i;
    unsigned int data;
    unsigned int crc = 0xffff;

    if (length == 0)
        return (~crc);

    do
    {
        for (i = 0, data = (unsigned int)0xff & *data_p++;
             i < 8;
             i++, data >>= 1)
        {
            if ((crc & 0x0001) ^ (data & 0x0001))
                crc = (crc >> 1) ^ POLY;
            else
                crc >>= 1;
        }
    } while (--length);

    crc = ~crc;
    data = crc;
    crc = (crc << 8) | (data >> 8 & 0xff);

    return (crc);
}
Since you want to calculate a CRC15 rather than a CRC16, the logic will be more complex, as it cannot work with whole bytes; there will be a lot of bit shifting and ANDing to extract the desired 15 bits.
Note: the OP did not mention if the initial value of the CRC is 0x0000 or 0x7FFF, nor if the result is to be complemented, nor certain other criteria, so this posted code can only be a guide.
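For a concrete starting point, here is a minimal left-shifting CRC-15 sketch; the initial value of 0, the absence of reflection and of a final xor, and the truncation of the question's 0xA053 to its low 15 bits are all assumptions, since the assignment's exact parameters aren't stated:
#include <stdint.h>
#include <stddef.h>

#define POLY15 (0xA053u & 0x7FFFu)  /* assumed: question's poly truncated to 15 bits */

uint16_t crc15(const unsigned char *data, size_t len)
{
    uint16_t crc = 0;                        /* assumed initial value */
    while (len--) {
        crc ^= (uint16_t)(*data++) << 7;     /* align byte with top of 15-bit register */
        for (int k = 0; k < 8; k++) {
            if (crc & 0x4000)                /* top bit of the 15-bit register */
                crc = (uint16_t)((crc << 1) ^ POLY15);
            else
                crc <<= 1;
            crc &= 0x7FFF;                   /* keep the register at 15 bits */
        }
    }
    return crc;
}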

Get list of bits set in BitMap

In C, is there an optimized way of retrieving the list of set bit positions without scanning each bit?
Consider the following example:
int bitmap[4];
So, there are 4 * 32 bit positions. The values are the following:
bitmap = { 0x1, 0x0, 0x0, 0x0010001 }
I want to retrieve the position of each set bit instead of scanning all 4 * 32 positions.
First of all, one cannot really use int for a bitmap in C, because shifting a bit left into the sign bit has undefined behaviour; C doesn't guarantee that the representation is two's complement, or that there are 32 bits in an int. That being said, the easiest way to avoid these pitfalls is to use uint32_t from <stdint.h> instead. Thus
#include <stdint.h>
uint32_t bitmap[4];
So consider that you number these bits 0 ... 127 across indexes 0 ... 3, with bits 0 ... 31 within each index; you can then get the index into the array and the bit number within that value as follows:
int bit_number = /* a value from 0 ... 127 */;
int index = bit_number / 32;        // which array element (equivalently bit_number >> 5)
int bit_in_value = bit_number % 32; // the bit within that element (equivalently bit_number & 31)
Now you can index the integer by:
bitmap[index];
and the bit mask for the desired value is
uint32_t mask = (uint32_t)1 << bit_in_value;
so you can check if the bit is set by doing
bit_is_set = !!(bitmap[index] & mask);
Now to speed things up, you can skip any index for which bitmap[index] is 0, because it doesn't contain any set bits; likewise, within each index you can shift the uint32_t right by 1 bit at a time, masking with 1, and break out of the loop when the value becomes 0:
for (int index = 0; index <= 3; index ++) {
uint32_t entry = bitmap[index];
if (! entry) {
continue;
}
int bit_number = 32 * index;
while (entry) {
if (entry & 1) {
printf("bit number %d is set\n", bit_number);
}
entry >>= 1;
bit_number ++;
}
}
Other than that, there is not much to speed up besides lookup tables or compiler intrinsics, such as ones that return the lowest set bit, but you'd still have to use some loop anyway.
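For instance, a minimal sketch of the intrinsic route, assuming GCC/Clang's __builtin_ctz is available; clearing the lowest set bit on each pass makes the loop run once per set bit rather than once per bit position:
#include <stdint.h>
#include <stdio.h>

static void print_set_bits(const uint32_t bitmap[4])
{
    for (int index = 0; index < 4; index++) {
        uint32_t entry = bitmap[index];
        while (entry) {
            int bit = __builtin_ctz(entry);         /* position of lowest set bit */
            printf("bit number %d is set\n", 32 * index + bit);
            entry &= entry - 1;                     /* clear that bit */
        }
    }
}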
An optimal solution which runs in O(k), where k = the total number of set bits in your entire list, can be achieved by using a lookup table. For example, you can use a table of 256 entries to describe the bit positions of every set bit in that byte. The index would be the actual value of the Byte.
For each entry you could use the following structure.
struct entry
{
    int numberOfSetBits;
    char *list; // use malloc and allocate the list according to numberOfSetBits
};
You can then iterate across the list member of each structure, and the number of iterations equals the number of set bits for that byte. For a 32-bit integer you will have to iterate through 4 of these structs, one per byte. To determine which entry you need to check, you mask off a byte and shift by 8 bits. Note that the bit positions are relative to that byte, so you may have to add an offset of either 24, 16, or 8 depending on the byte you are iterating through (assuming a 32 bit integer); see the sketch after the note below.
Note: if additional memory usage is not a problem for you, you could build a 64K Table of 16-bit entries and you will decrease the number of your structs by half.
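A sketch of that per-byte walk; it assumes a populated entries[256] array of the struct shown above, with each list[] holding positions 0..7 relative to that byte's least significant bit:
#include <stdint.h>
#include <stdio.h>

void printSetBits32(uint32_t v, const struct entry entries[256])
{
    for (int byte = 0; byte < 4; byte++) {
        const struct entry *e = &entries[(v >> (8 * byte)) & 0xFF];
        for (int j = 0; j < e->numberOfSetBits; j++)
            printf("bit %d is set\n", 8 * byte + e->list[j]);
    }
}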
Related to this question, see What is the fastest way to return the positions of all set bits in a 64-bit integer?
A simple solution, but perhaps not the fastest, depending on the speed of the log and pow functions:
#include <math.h>
#include <stdio.h>

void getSetBits(unsigned int num, int offset) {
    int bit;
    while (num) {
        bit = log2(num);
        num -= pow(2, bit);
        printf("%i\n", offset + bit); // use bit number
    }
}

int main() {
    int i, bitmap[4] = {0x1, 0x0, 0x0, 0x0010001};
    for (i = 0; i < 4; i++)
        getSetBits(bitmap[i], i * 32);
}
Complexity O(D), where D is the number of set bits.

Setting bits in a bit stream

I have encountered the following C function while working on legacy code and I am completely baffled by the way the code is organized. I can see that the function is trying to set bits at a given position in a bit stream, but I can't get my head around the individual statements and expressions. Can somebody please explain why the developer used division by 8 (/8) and modulus 8 (%8) expressions here and there? Is there an easy way to read these kinds of bit manipulation functions in C?
static void setBits(U8 *input, U16 *bPos, U8 len, U8 val)
{
    U16 pos;

    if (bPos == 0)
    {
        pos = 0;
    }
    else
    {
        pos = *bPos;
        *bPos += len;
    }

    input[pos/8] = (input[pos/8]&(0xFF-((0xFF>>(pos%8))&(0xFF<<(pos%8+len>=8?0:8-(pos+len)%8)))))
                   |((((0xFF>>(8-len)) & val)<<(8-len))>>(pos%8));

    if ((pos/8 == (pos+len)/8)|(!((pos+len)%8)))
        return;

    input[(pos+len)/8] = (input[(pos+len)/8]
                          &(0xFF-(0xFF<<(8-(pos+len)%8))))
                         |((0xFF>>(8-len)) & val)<<(8-(pos+len)%8);
}
Please explain why the developer used division by 8 (/8) and modulus 8 (%8) expressions here and there.
First of all, note that the individual bits of a byte are numbered 0 to 7, where bit 0 is the least significant one. There are 8 bits in a byte, hence the "magic number" 8.
Generally speaking: if you have any raw data, it consists of n bytes and can therefore always be treated as an array of bytes uint8_t data[n]. To access bit x in that byte array, you can for example do like this:
Given x = 17, bit x is then found in byte number 17/8 = 2. Note that integer division "floors" the value, instead of 2.125 you get 2.
The remainder of the integer division gives you the bit position in that byte, 17%8 = 1.
So bit number 17 is located in byte 2, bit 1. data[2] gives the byte.
To mask out a bit from a byte in C, the bitwise AND operator & is used. And in order to use that, a bit mask is needed. Such bit masks are best obtained by shifting the value 1 by the desired number of bits. Bit masks are perhaps most clearly expressed in hex, and the possible bit masks for a byte will be (1<<0) == 0x01, (1<<1) == 0x02, (1<<2) == 0x04, (1<<3) == 0x08 and so on.
In this case (1<<1) == 0x02.
C code:
uint8_t data[n];
...
size_t byte_index = x / 8;
size_t bit_index  = x % 8;
bool is_bit_set;
is_bit_set = ( data[byte_index] & (1 << bit_index) ) != 0;
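The same byte/bit decomposition covers writing bits, which is what the legacy setBits function generalizes; for a single bit x, a sketch continuing the snippet above:
data[x / 8] |= (uint8_t)(1u << (x % 8));    /* set bit x */
data[x / 8] &= (uint8_t)~(1u << (x % 8));   /* clear bit x */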

CRC bit-order confusion

I'm calculating a CCITT CRC-16 bit by bit. I do it that way because it's a prototype that will later be ported to VHDL and end up in hardware to check a serial bit-stream.
On the net I found code for a single-bit CRC-16 update step. I wrote a test program, and it works. Except for one strange thing: I have to feed the bits of a byte from lowest to highest bit. If I do it this way, I get correct results.
In the CCITT definition of CRC-16, though, the bits should be fed highest bit to lowest. The data-stream that I want to calculate the CRC from comes in this format as well, so my current code is kind of useless for me.
I'm confused. I would not have expected that feeding the bits the wrong way around could work at all.
Question: Why is it possible that a CRC can be written to take the data in two different bit-orders, and how do I transform my single bit update code so that it accepts the data MSB first?
For reference, here is the relevant code. Initialization and the final check have been removed to keep the example short:
typedef unsigned char bit;

void update_crc_single_bit(bit *crc, bit data)
{
    // update CRC for a single bit:
    bit temp[16];
    int i;

    temp[0]  = data ^ crc[15];
    temp[1]  = crc[0];
    temp[2]  = crc[1];
    temp[3]  = crc[2];
    temp[4]  = crc[3];
    temp[5]  = data ^ crc[4] ^ crc[15];
    temp[6]  = crc[5];
    temp[7]  = crc[6];
    temp[8]  = crc[7];
    temp[9]  = crc[8];
    temp[10] = crc[9];
    temp[11] = crc[10];
    temp[12] = data ^ crc[11] ^ crc[15];
    temp[13] = crc[12];
    temp[14] = crc[13];
    temp[15] = crc[14];

    for (i = 0; i < 16; i++)
        crc[i] = temp[i];
}

void update_crc_byte(bit *crc, unsigned char data)
{
    int j;
    // calculate CRC lowest bit first
    for (j = 0; j < 8; j++)
    {
        bit b = (data >> j) & 1;
        update_crc_single_bit(crc, b);
    }
}
Edit: Since there is some confusion here: I have to compute the CRC bit by bit, and for each byte MSB first. I can't simply store the bits because the code shown above is a prototype for something that will end up in hardware (without memory).
The code shown above generates the correct result if I feed in a bit-stream in the following order (shown is the index of the received bit. Each byte gets transmitted MSB first):
|- first byte -|- second byte -|- third byte
7,6,5,4,3,2,1,0,15,14,13,12,11,10,9,8,....
I need the single update loop to be transformed that it generates the same CRC using natural order (e.g. as received):
|- first byte -|- second byte -|- third byte
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,....
If you look at the RevEng 16-bit CRC Catalogue, you see that there are two different CRCs called "CCITT", one of which is labeled there "CCITT-False". Somewhere along the way someone got confused about what the CCITT 16-bit CRC was, and that confusion was propagated widely. The two CRCs are described thusly, with the first one (KERMIT) being the true CCITT CRC:
KERMIT
width=16 poly=0x1021 init=0x0000 refin=true refout=true xorout=0x0000 check=0x2189 name="KERMIT"
and
CRC-16/CCITT-FALSE
width=16 poly=0x1021 init=0xffff refin=false refout=false xorout=0x0000 check=0x29b1 name="CRC-16/CCITT-FALSE"
You will note that the real one is reflected, and the false one is not, and there is another difference in the initialization. In reflected CRCs, the lowest bit of the data is processed first, so it appears that you are trying to compute the true CCITT CRC.
When the CRC is reflected, so is the order of the bits in the polynomial that is exclusive-ored into the register, so 0x1021 becomes 0x8408. Here is a simple C implementation that you can check against:
#include <stddef.h>

#define POLY 0x8408

unsigned crc16_ccitt(unsigned crc, unsigned char *buf, size_t len)
{
    int k;

    while (len--) {
        crc ^= *buf++;
        for (k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
    }
    return crc;
}
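As a quick sanity check, a sketch that runs the routine above on the standard check string; with init = 0 and no final xor it should print the KERMIT check value 0x2189 quoted earlier:
#include <stdio.h>

int main(void)
{
    unsigned char msg[] = "123456789";
    printf("%04x\n", crc16_ccitt(0, msg, 9));  /* expect 2189 */
    return 0;
}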
I don't know what you mean by "In the CCITT definition of CRC-16 the bits should be fed highest bit to lowest". What definition are you referring to?
In this Altera document, you can see the shift register implementation of the CRC for a hardware implementation (the diagram itself is not reproduced here).
For your code, you need to reverse your register, temp[], indices. temp[0] is temp[15] and so on.
Update - If you look at:
RevEng 16-bit CRC Catalogue
there's a link to:
Online CRC calculator
The first three labeled as CRC-CCITT operate on data sent or received MSB to LSB using the polynomial 0x11021. The only difference is the starting value:
CRC-CCITT (XModem) - crc initialized to 0x0000, same as prefixing by 0x0000.
CRC-CCITT (0xFFFF) - crc initialized to 0xFFFF, same as prefixing by 0x84CF.
CRC-CCITT (0x1D0F) - crc initialized to 0x1D0F, same as prefixing by 0xFFFF.
So my guess is that you want to use one of these three.
Normally, bits are transferred on the line least significant bit first. So, in case you have an array of bytes, the first bit is the least significant bit of the first byte, then comes the next-to-least significant bit, and so on up to the most significant bit of the first byte, followed by the least significant bit of the next byte. This is the order of the bits (coefficients) in the polynomial division you are making. Try my routines at https://github.com/mojadita/crc.git (you have there a table for CRC16-CCITT).
