uint8 data to uint32 bits in embedded C

I am getting input data from my microcontroller as uint8 values, transmitted as 0xff, 0x2a, .... The two nibbles of each byte are subsequently the high and low values.
I need to convert this to a uint32_t variable, where I can use memcpy.
Example:
1C 1D 1E 1F 20 21 22 23
If these are the values that are transferred, how can I get them all into a single uint32 variable? With the approach below I only get the last two nibbles, which is just 23, and not the entire data.
void Func(const uint8_t * data){
    uint32_t msgh = 0;
    uint32_t msgl = 0;
    uint32_t datah = 0;
    uint32_t datal = 0;
    for(int i = 0; i < dlc; i++){   // dlc: number of data bytes
        msgh = *data >> 4;
        msgl = *data & 0x0f;
        // printf("DATA %x%x", msgh, msgl);
        memcpy(&datah, &msgh, 4);
        memcpy(&datal, &msgl, 4);
        // printf("msgl=%x\n", msgl);
        data++;
    }
    printf("DATA%x%x", datah, datal);
}

No need for memcpying all this stuff – extracting the high and low nibbles of the bytes can be done as follows:
uint32_t high = 0;
uint32_t low = 0;
unsigned int offset = 0;
for(int i = 0; i < 8; ++i)
{
    high |= ((uint32_t)(*data) >> 4) << offset;
    low |= ((uint32_t)(*data) & 0x0f) << offset;
    offset += 4;
    ++data;
}
Note how you need to shift by the offsets such that the nibbles end up at their right positions within their respective 32-bit values.
Note, too, that this assumes little-endian byte order of the transferred data (the byte order must be known and fixed as part of the protocol definition for communication!).
Big endian order slightly differs:
uint32_t high = 0;
uint32_t low = 0;
unsigned int offset = 32;
for(int i = 0; i < 8; ++i)
{
    offset -= 4;
    high |= (uint32_t)(*data >> 4) << offset;
    low |= (uint32_t)(*data & 0x0f) << offset;
    ++data;
}
(Well, I could have started with 28 as well and subtracted 4 after nibble extraction, analogously to the LE variant – this variant just reflects my personal preference for power-of-two constants; the number of operations doesn't differ anyway…)
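To see the nibbles land where they should, here is a minimal self-contained check of the little-endian variant with the example bytes from the question (the expected values follow from walking the loop by hand):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    const uint8_t bytes[8] = { 0x1C, 0x1D, 0x1E, 0x1F, 0x20, 0x21, 0x22, 0x23 };
    const uint8_t *data = bytes;
    uint32_t high = 0;
    uint32_t low = 0;
    unsigned int offset = 0;
    for(int i = 0; i < 8; ++i)
    {
        high |= ((uint32_t)(*data) >> 4) << offset;   /* high nibble */
        low |= ((uint32_t)(*data) & 0x0f) << offset;  /* low nibble */
        offset += 4;
        ++data;
    }
    printf("high = %08" PRIX32 ", low = %08" PRIX32 "\n", high, low);
    /* prints: high = 22221111, low = 3210FEDC */
    return 0;
}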
Finally, note that you might prefer to split conversion and output into separate functions to achieve better reusability, e.g. like:
void extract(uint8_t const* data, uint32_t* high, uint32_t* low)
{
    // assign to *high and *low analogously to above
}

// assuming C code; if C++, prefer references:
void extract(uint8_t const* data, uint32_t& high, uint32_t& low);

// or alternatively:
typedef struct
{
    uint32_t high;
    uint32_t low;
} NibblesCombined;

NibblesCombined extract(uint8_t const* data)
{
    // assign to the members of the struct analogously to above
}
// again assuming C; C++ could optionally return a std::pair instead!
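For completeness, a minimal sketch of the struct-returning C variant, filling in the "analogously to above" part with the little-endian loop:
NibblesCombined extract(uint8_t const* data)
{
    NibblesCombined result = { 0, 0 };
    unsigned int offset = 0;
    for(int i = 0; i < 8; ++i)
    {
        result.high |= ((uint32_t)(*data) >> 4) << offset;
        result.low |= ((uint32_t)(*data) & 0x0f) << offset;
        offset += 4;
        ++data;
    }
    return result;
}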

Related

Bitshift on structures

I'm unsure if this is possible due to structure padding and alignment but, assuming you take care of that by aligning your structures to 4/8 bytes, is it possible to bit-shift a structure as if it were a single variable?
What I'd like to do is take a string (max 8 bytes) and shift it into the high-order bits of a 64-bit variable.
Like if I do this:
#include <stdint.h>
#include <string.h>
void shiftstr(uint64_t* t, char* c, size_t len){
    memcpy(t, c, len);
    //now *t==0x000000617369616b
    *t <<= (sizeof(uint64_t) - len) * 8;
    //now *t==0x617369616b000000
}
int main(){
    uint64_t k = 0;
    char n[] = "kaisa";
    shiftstr(&k, n, strlen(n));
    return 0;
}
This works just fine, but what if I had, instead of a uint64_t, two uint32_t, either as individual variables or a structure.
#include <stdint.h>
#include <string.h>
struct U64{
    uint32_t x;
    uint32_t y;
};
void shiftstrstruct(struct U64* t, char* c, size_t len){
    memcpy(t, c, len);
    /*
    At this point I think
    x == 0x7369616b
    y == 0x00000061
    But I could be wrong
    */
    //but how can I perform the bit shift?
    //Where
    //x==0x0000006b
    //y==0x61697361
}
int main(){
    char n[] = "kaisa";
    struct U64 m = {0};
    shiftstrstruct(&m, n, strlen(n));
    return 0;
}
Up to the memcpy part, it should be the same as if I were performing it on a single variable. I believe the values of x and y are correct in such situations. But if that's the case, it means the values need to be shifted away from x towards y.
I know I can cast, but what if I wanted to deal with a 16-byte string that needed to be shifted into two 64-bit variables, or even larger?
Is shifting structures like this possible? Is there a better alternative?
Is shifting structures like this possible?
No, not really. Even if the x and y members are in adjacent memory locations, bit-shift operations on either are performed as integer operations on the individual variables. So, you can't shift bits "out of" one and "into" the other: bits that "fall off" during the shift will be lost.
Is there a better alternative?
You would have to implement such a multi-component bit-shift yourself – making copies of the bits that would otherwise be lost and somehow masking those back into the result, after shifting other bits internally to each 'component' variable. Exactly how to do this would largely depend on the use case.
Here's one possible implementation of a right-shift function for a structure comprising two uint64_t members (I have not added any error-checking for the count, and I assume that uint64_t is exactly 64 bits wide):
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint64_t hi;
    uint64_t lo;
} ui128;

void Rshift(ui128* data, int count)
{
    uint64_t mask = (1uLL << count) - 1;   // Set low "count" bits to 1
    uint64_t save = data->hi & mask;       // Save bits that fall off hi
    data->hi >>= count;                    // Shift the hi component
    data->lo >>= count;                    // Shift the lo component
    data->lo |= save << (64 - count);      // Mask in the bits from hi
    return;
}

int main()
{
    ui128 test = { 0xF001F002F003F004, 0xF005F006F007F008 };
    printf("%016llx%016llx\n", test.hi, test.lo);
    Rshift(&test, 16);
    printf("%016llx%016llx\n", test.hi, test.lo);
    return 0;
}
A similar logic could be used for a left-shift function, but you would then need to save the relevant upper (most significant) bits from the lo member and mask them into the shifted hi value:
void Lshift(ui128* data, int count)
{
    uint64_t mask = ((1uLL << count) - 1) << (64 - count);
    uint64_t save = data->lo & mask;
    data->hi <<= count;
    data->lo <<= count;
    data->hi |= save >> (64 - count);
    return;
}
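A quick check of the left shift, reusing the ui128 type and the test values from the right-shift example above (the expected second line follows from shifting the 128-bit pattern left by 16):
int main()
{
    ui128 test = { 0xF001F002F003F004, 0xF005F006F007F008 };
    printf("%016llx%016llx\n", test.hi, test.lo);
    Lshift(&test, 16);
    printf("%016llx%016llx\n", test.hi, test.lo);
    /* expected second line: f002f003f004f005f006f007f0080000 */
    return 0;
}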
A union is your friend; this is what you want:
#include <stdint.h>
#include <stdio.h>

typedef union _shift_u64{
    struct _u64{
        uint32_t x;
        uint32_t y;
    } __attribute__((__packed__)) U64;
    uint64_t x_and_y;
} SHIFT_U64;

int main(int argc, char* argv[]){
    SHIFT_U64 test;
    test.U64.x = 4;
    test.U64.y = 8;
    printf("test.U64.x=%u, test.U64.y=%u, test.x_and_y=%lu\n", test.U64.x, test.U64.y, test.x_and_y);
    test.x_and_y <<= 1;
    printf("test.U64.x=%u, test.U64.y=%u, test.x_and_y=%lu\n", test.U64.x, test.U64.y, test.x_and_y);
    test.x_and_y >>= 1;
    printf("test.U64.x=%u, test.U64.y=%u, test.x_and_y=%lu\n", test.U64.x, test.U64.y, test.x_and_y);
    return 0;
}
EDIT: This simple program illustrates how to do it the other way, but you have to check for the carry-over bit and for shift overflow/underflow yourself. The union doesn't care about the data; you just have to make sure that the data makes sense. After compiling, redirect the output of the program to a file or hex editor and read the errorlevel of the program.
Linux example: ./a.out > a.out.bin; echo "errorlevel=$?"; xxd a.out.bin
#include <stdio.h>

typedef union _shift_it{
    struct _data{
        unsigned long x : 64;
        unsigned long y : 64;
    } __attribute__((__packed__)) DATA;
    unsigned char x_and_y[16];
} __attribute__((__packed__)) SHIFT_IT;

int main(int argc, char* argv[]){
    SHIFT_IT test;
    int errorlevel = 0;
    //bitmask for shift operation
    static const unsigned long LEFT_SHIFTMASK64 = 0x8000000000000000;
    static const unsigned long RIGHT_SHIFTMASK64 = 0x0000000000000001;
    //test data
    test.DATA.x = 0x2468246824682468; //high bits
    test.DATA.y = 0x1357135713571357; //low bits
    //binary output to stdout
    for(int i=0; i<16; i++) putchar(test.x_and_y[i]);
    //left shift
    if(test.DATA.x & LEFT_SHIFTMASK64) errorlevel += 1;
    test.DATA.x <<= 1;
    if(test.DATA.y & LEFT_SHIFTMASK64) errorlevel += 2;
    test.DATA.y <<= 1;
    //binary output to stdout
    for(int i=0; i<16; i++) putchar(test.x_and_y[i]);
    //right shift
    if(test.DATA.y & RIGHT_SHIFTMASK64) errorlevel += 4;
    test.DATA.y >>= 1;
    if(test.DATA.x & RIGHT_SHIFTMASK64) errorlevel += 8;
    test.DATA.x >>= 1;
    //binary output to stdout
    for(int i=0; i<16; i++) putchar(test.x_and_y[i]);
    //right shift
    if(test.DATA.y & RIGHT_SHIFTMASK64) errorlevel += 16;
    test.DATA.y >>= 1;
    if(test.DATA.x & RIGHT_SHIFTMASK64) errorlevel += 32;
    test.DATA.x >>= 1;
    //binary output to stdout
    for(int i=0; i<16; i++) putchar(test.x_and_y[i]);
    //left shift
    if(test.DATA.x & LEFT_SHIFTMASK64) errorlevel += 64;
    test.DATA.x <<= 1;
    if(test.DATA.y & LEFT_SHIFTMASK64) errorlevel += 128;
    test.DATA.y <<= 1;
    //binary output to stdout
    for(int i=0; i<16; i++) putchar(test.x_and_y[i]);
    return errorlevel;
}

How can I convert an array of bytes (uint8_t) into words (uint16_t)?

I want to combine the Signal[8] values two at a time into uint16_t words, so I can send them via the SPI port (shift register).
I tried the following, but it doesn't work.
The code was like this, so you can compile it:
void senddata(void){
    uint8_t NZero = 0;
    uint16_t timeout = 0;
    uint8_t value;
    volatile uint8_t Signal[8] = {RGB_NC_0, RGB_1, RGB_2, RGB_3, RGB_4, RGB_5, RGB_6, RGB_NC_7}; // to be set by the state machine
    volatile uint8_t SPIData[16] = {0};
    for(int i = 0; i < 8; i++){
        NZero |= Signal[i];
    }
    int i, j;
    //Set LATCH low
    GPIO_WriteBit(LED_LATCH_PORT, LED_LATCH, Bit_RESET);
    //Set BLANK high
    GPIO_WriteBit(LED_BLANK_PORT, LED_BLANK, Bit_SET);
    //Enable SPI
    SPI_Cmd(LED_SPI, ENABLE);
    //iterate through the registers
    for(i = 2 - 1; i >= 0; i--){
        //iterate through the bits in each register
        for(j = 8 - 1; j >= 0; j--){
            value = Signal[i] & (1 << j);
            SPI_I2S_SendData(LED_SPI, value);
            while(SPI_I2S_GetFlagStatus(LED_SPI, SPI_I2S_FLAG_TXE) == 0 && timeout < 0xFFFF) // wait until TXE=1
            { timeout++; }
            if(timeout == 0xFFFF){ break; }
        }
    }
    SPI_Cmd(LED_SPI, DISABLE); /*!< SPI disable */
    GPIO_WriteBit(LED_LATCH_PORT, LED_LATCH, Bit_SET); //Set LATCH high
    if(NZero){
        GPIO_WriteBit(LED_BLANK_PORT, LED_BLANK, Bit_RESET); //Set BLANK low
    }
    else{
        GPIO_WriteBit(LED_BLANK_PORT, LED_BLANK, Bit_SET); //Set BLANK high
    }
}
You can combine every two subsequent bytes into the SPI port register as follows:
for(size_t i = 0; i < sizeof(signal) / sizeof(*signal); i += 2)
{
    spiPortRegister = (uint16_t)signal[i + 0] << 0
                    | (uint16_t)signal[i + 1] << 8;
    // send via SPI here!
}
// a *totally* generic implementation might add special handling for
// odd arrays; in your specific case you can omit that...
Analogously, you split the words back up on the receiver side:
for(size_t i = 0; i < sizeof(signal) / sizeof(*signal); i += 2)
{
    // receive via SPI here
    signal[i + 0] = (uint8_t)(spiPortRegister >> 0);
    signal[i + 1] = (uint8_t)(spiPortRegister >> 8);
}
Note: Additions or shifts by 0 are unnecessary and only added for code consistency; they will be optimised away by the compiler anyway, but you can omit them if you prefer. The same applies to the casts in the second case, but these additionally silence compiler warnings about precision loss.
Note, though, that even though promotion to int occurs in the first case, int might only be 16 bits wide – and as you apparently operate on an MCU, the chances for that are high – in which case the shift could provoke overflow, thus undefined behaviour, so the cast should be applied in any case!
Endianness-independent:
uint16_t get16(volatile uint8_t *table)
{
    return *table | ((uint16_t)*(table + 1) << 8);
}
or, depending on endianness:
uint16_t get16(volatile uint8_t *table)
{
    uint16_t result;
    memcpy(&result, (const uint8_t *)table, sizeof(result));
    return result;
}
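As a rough usage sketch for the sending direction: combine the Signal bytes with the endianness-independent get16() above and hand each word to the SPI driver. Here send_spi16() is a hypothetical placeholder for your actual transmit routine (e.g. SPI_I2S_SendData plus the TXE wait):
#include <stdint.h>
#include <stddef.h>

extern void send_spi16(uint16_t word); /* hypothetical stand-in for the real SPI call */

/* assumes the endianness-independent get16() from above */
void send_signal(volatile uint8_t *signal, size_t count)
{
    for(size_t i = 0; i + 1 < count; i += 2)
    {
        uint16_t word = get16(&signal[i]); /* combines signal[i] (low byte) and signal[i+1] (high byte) */
        send_spi16(word);
    }
}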

CRC32 calculation with CRC hash at the beginning of the message in C

I need to calculate the CRC of a message and put it at the beginning of that message, so that the final CRC of the message with the 'prepended' patch bytes equals 0. I was able to do this very easily with the help of a few articles, but not for my specific parameters. The thing is that I have to use a given CRC32 algorithm which calculates the CRC of a memory block, but I don't have the 'reverse' algorithm that calculates those 4 patch bytes / 'kind of CRC'. The parameters of the given CRC32 algorithm are:
Polynomial: 0x04C11DB7
Endianness: big-endian
Initial value: 0xFFFFFFFF
Reflected: false
XOR out with: 0L
Test stream: 0x0123, 0x4567, 0x89AB, 0xCDEF results in CRC = 0x612793C3
The code to calculate the CRC (half-byte, table-driven; I hope the data type definitions are self-explanatory):
uint32 crc32tab(uint16* data, uint32 len, uint32 crc)
{
    uint8 nibble;
    int i;
    while(len--)
    {
        for(i = 3; i >= 0; i--)
        {
            nibble = (*data >> i*4) & 0x0F;
            crc = ((crc << 4) | nibble) ^ tab[crc >> 28];
        }
        data++;
    }
    return crc;
}
The table needed is below (I thought the short [16] table should contain every 16th element from the large [256] table, but this table actually contains the first 16 elements – that's how it was provided to me):
static const uint32 tab[16]=
{
0x00000000, 0x04C11DB7, 0x09823B6E, 0x0D4326D9,
0x130476DC, 0x17C56B6B, 0x1A864DB2, 0x1E475005,
0x2608EDB8, 0x22C9F00F, 0x2F8AD6D6, 0x2B4BCB61,
0x350C9B64, 0x31CD86D3, 0x3C8EA00A, 0x384FBDBD
};
I modified the code so it's not so long, but the functionality stays the same. The problem is that this forward CRC calculation looks more like a backward/reverse CRC calculation.
I've spent almost a week trying to find the correct polynomial/algorithm/table combination, but with no luck. If it helps, I came up with a bit-wise algorithm that corresponds to the table-driven code above, although that was not so hard after all:
uint32 crc32(uint16* data, uint32 len, uint32 crc)
{
    uint32 i;
    while(len--)
    {
        for(i = 0; i < 16; i++)
        {
            // #define POLY 0x04C11DB7
            crc = (crc << 1) ^ (((crc ^ *data) & 0x80000000) ? POLY : 0);
        }
        crc ^= *data++;
    }
    return crc;
}
Here are the expected results – the first two 16-bit words form the needed unknown CRC, and the rest is the known data itself (feeding these examples to the provided algorithm, the result is 0):
{0x3288, 0xD244, 0xCDEF, 0x89AB, 0x4567, 0x0123}
{0xC704, 0xDD7B, 0x0000} - append as many zeros as you like, the result is the same
{0xCEBD, 0x1ADD, 0xFFFF}
{0x81AB, 0xB932, 0xFFFF, 0xFFFF}
{0x0857, 0x0465, 0x0000, 0x0123}
{0x1583, 0xD959, 0x0123}
(the first two words in each line are the unknown words that I need to calculate)
I think testing this on 0xFFFF or 0x0000 words is convenient because the direction of calculation and the endianness are not important (I hope :D). So be careful when using other test bytes, because the direction of calculation is quite devious :D. Also you can see that by feeding only zeros to the algorithm (both forward and backward), the result is the so-called residue (0xC704DD7B), which may be helpful.
So... I wrote at least 10 different functions (bit-wise, tables, combinations of polynomials etc.) trying to solve this, but with no luck. I give you here the function in which I put my hopes. It's the 'reversed' algorithm of the table-driven one above, with a different table of course. The problem is that the only correct CRC I get from it is for the all-zeros message, and that's not so unexpected. I have also written a reversed implementation of the bit-wise algorithm (reversed shifts, etc.), but that one returns only the first byte correctly.
Here is the table-driven one. The data pointer should point to the last element of the message, and the crc input should be the requested CRC (zeros for the whole message – or you can take another approach, namely that the last 4 bytes of the message are the CRC you are looking for: Calculating CRC initial value instead of appending the CRC to payload):
uint32 crc32tabrev(uint16* data, uint32 len, uint32 crc)
{
    uint8 nibble;
    int i;
    while(len--)
    {
        for(i = 0; i < 4; i++)
        {
            nibble = (*data >> i*4) & 0x0F;
            crc = (crc >> 4) ^ revtab[(crc ^ nibble) & 0x0F];
        }
        data--;
    }
    return reverse(crc); //reverse() flips all bits around the center (MSB <-> LSB ...)
}
The table, which I hope is 'the chosen one':
static const uint32 revtab[16]=
{
0x00000000, 0x1DB71064, 0x3B6E20C8, 0x26D930AC,
0x76DC4190, 0x6B6B51F4, 0x4DB26158, 0x5005713C,
0xEDB88320, 0xF00F9344, 0xD6D6A3E8, 0xCB61B38C,
0x9B64C2B0, 0x86D3D2D4, 0xA00AE278, 0xBDBDF21C
};
As you can see, this algorithm has some quirks which make me run in circles, and I think I'm maybe on the right track but missing something. I hope an extra pair of eyes will see what I cannot. I'm sorry for the long post (no potato :D), but I think all of that explanation was necessary. Thank you in advance for any insight or advice.
I will answer for your CRC specification, that of a CRC-32/MPEG-2. I will have to ignore your attempts at calculating that CRC, since they are incorrect.
Anyway, to answer your question, I happen to have written a program that solves this problem. It is called spoof.c. It very rapidly computes which bits to change in a message to get a desired CRC. It does this in O(log(n)) time, where n is the length of the message. Here is an example:
Let's take the nine-byte message 123456789 (those digits represented in ASCII). We will prepend it with four zero bytes, which we will change to get the desired CRC at the end. The message in hex is then: 00 00 00 00 31 32 33 34 35 36 37 38 39. Now we compute the CRC-32/MPEG-2 for that message. We get 373c5870.
Now we run spoof with this input, which is the CRC length in bits, the fact that it is not reflected, the polynomial, the CRC we just computed, the length of the message in bytes, and all 32 bit locations in the first four bytes (which is what we are allowing spoof to change):
32 0 04C11DB7
373c5870 13
0 0 1 2 3 4 5 6 7
1 0 1 2 3 4 5 6 7
2 0 1 2 3 4 5 6 7
3 0 1 2 3 4 5 6 7
It gives this output with what bits in those first four bytes to set:
invert these bits in the sequence:
offset bit
0 1
0 2
0 4
0 5
0 6
1 0
1 2
1 5
1 7
2 0
2 2
2 5
2 6
2 7
3 0
3 1
3 2
3 4
3 5
3 7
We then set the first four bytes to: 76 a5 e5 b7. We then test by computing the CRC-32/MPEG-2 of the message 76 a5 e5 b7 31 32 33 34 35 36 37 38 39 and we get 00000000, the desired result.
You can adapt spoof.c to your application.
Here is an example that correctly computes the CRC-32/MPEG-2 on a stream of bytes using a bit-wise algorithm:
uint32_t crc32m(uint32_t crc, const unsigned char *buf, size_t len)
{
    int k;

    while (len--) {
        crc ^= (uint32_t)(*buf++) << 24;
        for (k = 0; k < 8; k++)
            crc = crc & 0x80000000 ? (crc << 1) ^ 0x04c11db7 : crc << 1;
    }
    return crc;
}
and with a nybble-wise algorithm using the table in the question (which is correct):
uint32_t crc_table[] = {
    0x00000000, 0x04C11DB7, 0x09823B6E, 0x0D4326D9,
    0x130476DC, 0x17C56B6B, 0x1A864DB2, 0x1E475005,
    0x2608EDB8, 0x22C9F00F, 0x2F8AD6D6, 0x2B4BCB61,
    0x350C9B64, 0x31CD86D3, 0x3C8EA00A, 0x384FBDBD
};

uint32_t crc32m_nyb(uint32_t crc, const unsigned char *buf, size_t len)
{
    while (len--) {
        crc ^= (uint32_t)(*buf++) << 24;
        crc = (crc << 4) ^ crc_table[crc >> 28];
        crc = (crc << 4) ^ crc_table[crc >> 28];
    }
    return crc;
}
In both cases, the initial CRC must be 0xffffffff.
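As a quick sanity check, you can reuse the 13-byte message from the spoof walkthrough above (00 00 00 00 31 32 33 34 35 36 37 38 39), whose CRC-32/MPEG-2 was stated there to be 373c5870. A minimal sketch, assuming crc32m() from above is in scope:
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* crc32m() as defined above */

int main(void)
{
    const unsigned char msg[13] = {
        0x00, 0x00, 0x00, 0x00,
        0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39
    };
    uint32_t crc = crc32m(0xffffffff, msg, sizeof msg);
    printf("%08x\n", crc); /* expected: 373c5870, per the example above */
    return 0;
}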
Alternate approach. This assumes xorout = 0; if not, then after calculating the normal CRC, apply crc ^= xorout to remove it. The method here multiplies the normal CRC by ((1/2) mod (crc polynomial)) raised to the power (message size in bits), modulo the CRC polynomial, which is equivalent to cycling it backwards. If the message size is fixed, then the mapping is fixed and the time complexity is O(1). Otherwise, it's O(log(n)).
This example code uses Visual Studio and an intrinsic for carryless multiply (PCLMULQDQ), which uses XMM (128-bit) registers. Visual Studio uses the __m128i type to represent integer XMM values.
#include <stdio.h>
#include <stdlib.h>
#include <intrin.h>

typedef unsigned char uint8_t;
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;

#define POLY  (0x104c11db7ull)
#define POLYM ( 0x04c11db7u)

static uint32_t crctbl[256];

static __m128i poly;    /* poly */
static __m128i invpoly; /* 2^64 / POLY */

void GenMPoly(void) /* generate __m128i poly info */
{
    uint64_t N = 0x100000000ull;
    uint64_t Q = 0;
    for(size_t i = 0; i < 33; i++){
        Q <<= 1;
        if(N & 0x100000000ull){
            Q |= 1;
            N ^= POLY;
        }
        N <<= 1;
    }
    poly.m128i_u64[0] = POLY;
    invpoly.m128i_u64[0] = Q;
}

void GenTbl(void) /* generate crc table */
{
    uint32_t crc;
    uint32_t c;
    uint32_t i;
    for(c = 0; c < 0x100; c++){
        crc = c << 24;
        for(i = 0; i < 8; i++)
            /* assumes twos complement */
            crc = (crc << 1) ^ ((0 - (crc >> 31)) & POLYM);
        crctbl[c] = crc;
    }
}

uint32_t GenCrc(uint8_t * bfr, size_t size) /* generate crc */
{
    uint32_t crc = 0xffffffffu;
    while(size--)
        crc = (crc << 8) ^ crctbl[(crc >> 24) ^ *bfr++];
    return(crc);
}

/* carryless multiply modulo poly */
uint32_t MpyModPoly(uint32_t a, uint32_t b) /* (a*b)%poly */
{
    __m128i ma, mb, mp, mt;
    ma.m128i_u64[0] = a;
    mb.m128i_u64[0] = b;
    mp = _mm_clmulepi64_si128(ma, mb, 0x00);      /* p[0] = a*b */
    mt = _mm_clmulepi64_si128(mp, invpoly, 0x00); /* t[1] = (p[0]*((2^64)/POLY))>>64 */
    mt = _mm_clmulepi64_si128(mt, poly, 0x01);    /* t[0] = t[1]*POLY */
    return mp.m128i_u32[0] ^ mt.m128i_u32[0];     /* ret = p[0] ^ t[0] */
}

/* exponentiate by repeated squaring modulo poly */
uint32_t PowModPoly(uint32_t a, uint32_t b) /* pow(a,b)%poly */
{
    uint32_t prd = 0x1u; /* current product */
    uint32_t sqr = a;    /* current square */
    while(b){
        if(b & 1)
            prd = MpyModPoly(prd, sqr);
        sqr = MpyModPoly(sqr, sqr);
        b >>= 1;
    }
    return prd;
}

int main()
{
    uint32_t inv; /* 1/2 % poly, constant */
    uint32_t fix; /* fix value, constant if msg size fixed */
    uint32_t crc; /* crc at end of msg */
    uint32_t pre; /* prefix for msg */
    uint8_t msg[13] = {0x00,0x00,0x00,0x00,0x31,0x32,0x33,0x34,0x35,0x36,0x37,0x38,0x39};

    GenMPoly();                           /* generate __m128i polys */
    GenTbl();                             /* generate crc table */
    inv = PowModPoly(2, 0xfffffffeu);     /* inv = 2^(2^32-2) % Poly = 1/2 % poly */
    fix = PowModPoly(inv, 8*sizeof(msg)); /* fix value */
    crc = GenCrc(msg, sizeof(msg));       /* calculate normal crc */
    pre = MpyModPoly(fix, crc);           /* convert to prefix */
    printf("crc = %08x pre = %08x ", crc, pre);
    msg[0] = (uint8_t)(pre >> 24);        /* store prefix in msg */
    msg[1] = (uint8_t)(pre >> 16);
    msg[2] = (uint8_t)(pre >>  8);
    msg[3] = (uint8_t)(pre >>  0);
    crc = GenCrc(msg, sizeof(msg));       /* check result */
    if(crc == 0)
        printf("passed\n");
    else
        printf("failed\n");
    return 0;
}
Well, a few hours after my question, someone whose name I don't remember posted an answer which turned out to be correct. Somehow that answer got completely deleted; I don't know why or who did it, but I'd like to thank this person, and in case you see this, please post your answer again and I'll delete this one. For other users, here's the answer that worked for me (thank you again, mysterious one – unfortunately, I can't replicate his notes and suggestions well enough, just the code itself):
Edit: The original answer came from user samgak, so this stays here until he posts his answer again.
The reverse CRC algorithm:
uint32 revcrc32(uint16* data, uint32 len, uint32 crc)
{
    uint32 i;
    data += len - 1;
    while(len--)
    {
        crc ^= *data--;
        for(i = 0; i < 16; i++)
        {
            uint32 crc1 = ((crc ^ POLY) >> 1) | 0x80000000;
            uint32 crc2 = crc >> 1;
            if(((crc1 << 1) ^ (((crc1 ^ *data) & 0x80000000) ? POLY : 0)) == crc)
                crc = crc1;
            else if(((crc2 << 1) ^ (((crc2 ^ *data) & 0x80000000) ? POLY : 0)) == crc)
                crc = crc2;
        }
    }
    return crc;
}
Find patch bytes:
#define CRC_OF_ZERO 0xb7647d
void bruteforcecrc32(uint32 targetcrc)
{
    // compute prefixes:
    uint16 j;
    for(j = 0; j <= 0xffff; j++)
    {
        uint32 crc = revcrc32(&j, 1, targetcrc);
        if((crc >> 16) == (CRC_OF_ZERO >> 16))
        {
            printf("prefixes: %04lX %04lX\n", (crc ^ CRC_OF_ZERO) & 0xffff, (uint32)j);
            return;
        }
    }
}
Usage:
uint16 test[] = {0x0123, 0x4567, 0x89AB, 0xCDEF}; // prefix should be 0x0CD8236A
bruteforcecrc32(revcrc32(test, 4, 0L));

How can I create a 48-bit uint for bit masking?

I am trying to create a 48-bit integer value. I understand it may be possible to use a char array or struct, but I want to be able to do bit masking/manipulation and I'm not sure how that can be done.
Currently the program uses a 16-bit uint and I need to change it to 48. It is a bytecode interpreter and I want to expand the memory addressing to 4GB. I could just use 64-bit, but that would waste a lot of space.
Here is a sample of the code:
unsigned int program[] = { 0x1064, 0x11C8, 0x2201, 0x0000 };

void decode( )
{
    instrNum = (program[i] & 0xF000) >> 12; //the instruction
    reg1     = (program[i] & 0xF00 ) >> 8;  //registers
    reg2     = (program[i] & 0xF0  ) >> 4;
    reg3     = (program[i] & 0xF   );
    imm      = (program[i] & 0xFF  );       //pointer to data
}
full program: http://en.wikibooks.org/wiki/Creating_a_Virtual_Machine/Register_VM_in_C
You can use bit fields, which are often used to represent integral types of known, fixed bit-width. A well-known usage of bit fields is to represent a set of bits, and/or series of bits, known as flags. You can apply bit operations on them.
#include <stdio.h>
#include <stdint.h>

struct uint48 {
    uint64_t x : 48;
} __attribute__((packed));
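A small usage sketch (assuming the struct uint48 definition above; the packed attribute is GCC/Clang-specific) showing that masking and shifting work on the 48-bit member, with assignments wrapping modulo 2^48:
#include <stdio.h>
#include <inttypes.h>

/* assumes the struct uint48 definition above */
int main(void)
{
    struct uint48 addr = { 0 };
    addr.x = 0x1FFFFFFFFFFFFULL;               /* 49-bit value: silently truncated to 48 bits */
    printf("%" PRIx64 "\n", (uint64_t)addr.x); /* ffffffffffff */
    addr.x &= ~0xFFFULL;                       /* bit masking works like on any unsigned value */
    addr.x >>= 12;                             /* so does shifting */
    printf("%" PRIx64 "\n", (uint64_t)addr.x); /* fffffffff */
    return 0;
}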
Use a structure or a uint16_t array with special functions for an array of uint48.
For individual instances, use uint64_t or unsigned long long. uint64_t will work fine for an individual uint48, but you may want to mask off the results of operations like * or << to keep the upper bits cleared. The space-saving routines are only needed for arrays.
#include <stdlib.h>
#include <stdint.h>

typedef uint64_t uint48;
const uint48 uint48mask = 0xFFFFFFFFFFFFull;  /* low 48 bits set */

uint48 uint48_get(const uint48 *a48, size_t index) {
    const uint16_t *a16 = (const uint16_t *) a48;
    index *= 3;
    return a16[index] | (uint32_t) a16[index + 1] << 16
                      | (uint64_t) a16[index + 2] << 32;
}

void uint48_set(uint48 *a48, size_t index, uint48 value) {
    uint16_t *a16 = (uint16_t *) a48;
    index *= 3;
    a16[index]   = (uint16_t) value;
    a16[++index] = (uint16_t) (value >> 16);
    a16[++index] = (uint16_t) (value >> 32);
}

uint48 *uint48_new(size_t n) {
    size_t size = n * 3 * sizeof(uint16_t);
    // Ensure the size allocated is a multiple of `sizeof(uint64_t)`.
    // Not fully certain this is needed - but doesn't hurt.
    if (size % sizeof(uint64_t)) {
        size += sizeof(uint64_t) - size % sizeof(uint64_t);
    }
    return malloc(size);
}
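A brief usage sketch of these array routines (a minimal sketch; malloc error handling kept to the bare minimum):
#include <stdio.h>
#include <stdlib.h>
#include <inttypes.h>

/* assumes the uint48 typedef and the helper functions above */
int main(void)
{
    size_t n = 4;
    uint48 *a = uint48_new(n);                 /* packed storage: 6 bytes per element */
    if (a == NULL) return 1;
    for (size_t i = 0; i < n; i++)
        uint48_set(a, i, 0xFEDCBA987654ULL + i);
    for (size_t i = 0; i < n; i++)
        printf("%" PRIx64 "\n", (uint64_t)uint48_get(a, i));
    free(a);
    return 0;
}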

Strip parity bits in C from 8 bits of data followed by 1 parity bit

I have a buffer of bits with 8 bits of data followed by 1 parity bit. This pattern repeats itself. The buffer is currently stored as an array of octets.
Example (p are parity bits):
0001 0001 p000 0100 0p00 0001 00p01 1100 ...
should become
0001 0001 0000 1000 0000 0100 0111 00 ...
Basically, I need to strip off every ninth bit to just obtain the data bits. How can I achieve this?
This is related to another question asked here sometime back.
This is on a 32-bit machine, so the solution to the related question may not be applicable. The maximum possible number of bits is 45, i.e. 5 data octets.
This is what I have tried so far. I have created a "boolean" array and added the bits into the array based on the bits set in each octet. I then look at every ninth index of the array, throw it away, and move the remaining array down one index. Then I've got only the data bits left. I was thinking there may be better ways of doing this.
Your idea of having an array of bits is good. Just implement the array of bits by a 32-bit number (buffer).
To remove a bit from the middle of the buffer:
#include <stdint.h>
#include <assert.h>

void remove_bit(uint32_t* buffer, int* occupancy, int pos)
{
    assert(*occupancy > 0);
    uint32_t high_half = *buffer >> pos >> 1;
    uint32_t low_half = *buffer << (32 - pos) >> (32 - pos);
    *buffer = (high_half << pos) | low_half;   // re-join the halves with the bit at 'pos' removed
    --*occupancy;
}
To add a byte to the buffer:
void add_byte(uint32_t* buffer, int* occupancy, uint8_t byte)
{
    assert(*occupancy <= 24);
    *buffer = (*buffer << 8) | byte;
    *occupancy += 8;
}
To remove a byte from the buffer:
uint8_t remove_byte(uint32_t* buffer, int* occupancy)
{
    assert(*occupancy >= 8);
    uint8_t result = *buffer >> (*occupancy - 8);
    *occupancy -= 8;
    return result;
}
You will have to arrange the calls so that the buffer never overflows. For example:
uint32_t buffer = 0;
int occupancy = 0;
add_byte(&buffer, &occupancy, *input++);
add_byte(&buffer, &occupancy, *input++);
remove_bit(&buffer, &occupancy, 7);
*output++ = remove_byte(&buffer, &occupancy);
add_byte(&buffer, &occupancy, *input++);
remove_bit(&buffer, &occupancy, 6);
*output++ = remove_byte(&buffer, &occupancy);
... (there are only 6 input bytes, so this should be easy)
In pseudo-code (since you're not providing any proof you've tried something), I would probably do it like this, for simplicity:
View the data (with parity bits included) as a stream of bits
While there are bits left to read:
Read the next 8 bits
Write to the output
Read one more bit, and discard it
This "lifts you up" from worrying about reading bytes, which no longer is a useful operation since your bytes are interleaved with bits you want to discard.
I have written helper functions to read unaligned bit buffers (this was for AVC streams, see the original source here). The code itself is GPL; I'm pasting the interesting (modified) bits here.
typedef struct bit_buffer_ {
    uint8_t * start;
    size_t size;
    uint8_t * current;
    uint8_t read_bits;
} bit_buffer;

/* reads one bit and returns its value as an 8-bit integer */
uint8_t get_bit(bit_buffer * bb) {
    uint8_t ret;
    ret = (*(bb->current) >> (7 - bb->read_bits)) & 0x1;
    if (bb->read_bits == 7) {
        bb->read_bits = 0;
        bb->current++;
    }
    else {
        bb->read_bits++;
    }
    return ret;
}

/* reads up to 32 bits and returns the value as a 32-bit integer */
uint32_t get_bits(bit_buffer * bb, size_t nbits) {
    uint32_t i, ret;
    ret = 0;
    for (i = 0; i < nbits; i++) {
        ret = (ret << 1) + get_bit(bb);
    }
    return ret;
}
You can use the structure like this:
uint8_t * buffer;
size_t buffer_size;
/* assumes buffer points to your data */
bit_buffer bb;
bb.start = buffer;
bb.size = buffer_size;
bb.current = buffer;
bb.read_bits = 0;
uint32_t value = get_bits(&bb, 8);
uint8_t parity = get_bit(&bb);
uint32_t value2 = get_bits(&bb, 8);
uint8_t parity2 = get_bit(&bb);
/* etc */
I must stress that this code is quite perfectible; proper bounds checking must be implemented, but it works fine in my use case.
I leave it as an exercise to you to implement a proper bit buffer reader using this for inspiration.
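For the parity-stripping task itself, a minimal sketch on top of these helpers (not the hardened reader left as an exercise) might look like this, assuming the total bit count is a multiple of 9:
/* Strip every ninth (parity) bit: read 8 data bits, emit them, discard 1 bit.
   Assumes the bit_buffer helpers above. */
size_t strip_parity(bit_buffer * bb, size_t nbits, uint8_t * out)
{
    size_t written = 0;
    for (size_t consumed = 0; consumed + 9 <= nbits; consumed += 9)
    {
        out[written++] = (uint8_t)get_bits(bb, 8); /* 8 data bits */
        (void)get_bit(bb);                         /* discard the parity bit */
    }
    return written;
}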
This also works
void RemoveParity(unsigned char buffer[], int size)
{
    int offset = 0;
    int j = 0;
    for(int i = 1; i + j < size; i++)
    {
        if (offset == 0)
        {
            printf("%u\n", buffer[i + j - 1]);
        }
        else
        {
            unsigned char left = buffer[i + j - 1] << offset;
            unsigned char right = buffer[i + j] >> (8 - offset);
            printf("%u\n", (unsigned char)(left | right));
        }
        offset++;
        if (offset == 8)
        {
            offset = 0;
            j++; // advance buffer (8 parity bits consumed)
        }
    }
}
