I wrote this function with the help of this page on bit twiddling:
uint16_t *decode(uint64_t instr) {
    // decode instr (this is new to me lol)
    uint16_t icode = (instr >> 48) & ((1 << 16) - 1);
    uint16_t p1 = (instr >> 32) & ((1 << 16) - 1);
    uint16_t p2 = (instr >> 16) & ((1 << 16) - 1);
    uint16_t p3 = (instr >> 00) & ((1 << 16) - 1);
    return (uint16_t[]){icode, p1, p2, p3};
}
I have this to test it:
uint16_t *arr = decode(number);
for (int i = 0; i < 4; i++) {
    printf("%d\n", arr[i]);
}
However, this prints 0 four times no matter what number is. I also haven't solved the first part of the question: how to encode the four uint16_t's in the first place.
how to encode the four uint16_t's in the first place
This isn't hard. All you have to do is load each uint16_t into a uint64_t one by one, and then return that uint64_t:
uint64_t encode(uint16_t uints[]) {
    uint64_t master = 0;
    for (uint8_t index = 0; index <= 3; ++index) {
        master <<= 16;          // Shift master left by 16 bits to create space for the next uint16
        master |= uints[index]; // Load uints[index] into the lower 16 bits of master
    }                           // Do this four times
    return master;
}
To load the uint16_ts in reverse order you might be tempted to write uint8_t index = 3; index >= 0; --index, but since uint8_t is unsigned, index >= 0 is always true and that loop never terminates. Use a signed counter instead: int index = 3; index >= 0; --index.
Your best bet is actually to use memcpy. Most modern compilers will optimize this into the necessary bit shifts and such for you.
uint64_t pack(const uint16_t arr[static 4]) {
    uint64_t res;
    memcpy(&res, arr, 8);
    return res;
}

void unpack(uint64_t v, uint16_t arr[static 4]) {
    memcpy(arr, &v, 8);
}
Note that the result is endian-dependent, appropriate for packing and unpacking on the same machine. Note too that I'm using the static array specifier to check that the caller passes at least 4 elements (when such checking is possible); if that gives your compiler grief, just remove the static specifier.
First, you can't return an array from a function the way you currently have it: the compound literal has automatic storage duration, so the pointer you return is dangling by the time the caller reads it (which is why you're seeing 0s; strictly, it's undefined behavior). You'll need to write into a caller-supplied buffer or use static storage.
However, since you're dealing with two known bit widths, you can use a mask and shift:
out[0] = val & 0x000000000000FFFF; // 1st word
out[1] = (val & 0x00000000FFFF0000) >> 16; // 2nd word
out[2] = (val & 0x0000FFFF00000000) >> 32; // 3rd word
out[3] = (val & 0xFFFF000000000000) >> 48; // 4th word
You could put this in a function or macro:
#define MACRO_DECODE(val, arr) do { \
        (arr)[0] = (val) & 0x000000000000FFFF; \
        (arr)[1] = ((val) & 0x00000000FFFF0000) >> 16; \
        (arr)[2] = ((val) & 0x0000FFFF00000000) >> 32; \
        (arr)[3] = ((val) & 0xFFFF000000000000) >> 48; \
    } while (0)
void decode(uint64_t val, uint16_t *out)
{
    out[0] = val & 0x000000000000FFFF;
    out[1] = (val & 0x00000000FFFF0000) >> 16;
    out[2] = (val & 0x0000FFFF00000000) >> 32;
    out[3] = (val & 0xFFFF000000000000) >> 48;
}
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    int i;
    uint16_t arr[] = { 0, 0, 0, 0 };
    for (i = 0; i < 4; ++i) {
        printf("%#06x = %d\n", arr[i], arr[i]);
    }
    // as a function
    decode(0xAAAABBBBCCCCDDDD, arr);
    for (i = 0; i < 4; ++i) {
        printf("%#06x = %d\n", arr[i], arr[i]);
    }
    // as a macro
    MACRO_DECODE(0xDDDDCCCCBBBBAAAA, arr);
    for (i = 0; i < 4; ++i) {
        printf("%#06x = %d\n", arr[i], arr[i]);
    }
    return 0;
}
Additionally, you could use memcpy:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(int argc, char **argv)
{
    int i;
    uint16_t arr[] = { 0, 0, 0, 0 };
    uint64_t src = 0xAAAABBBBCCCCDDDD;
    for (i = 0; i < 4; ++i) {
        printf("%#06x = %d\n", arr[i], arr[i]);
    }
    // memcpy
    memcpy(arr, &src, sizeof(arr));
    for (i = 0; i < 4; ++i) {
        printf("%#06x = %d\n", arr[i], arr[i]);
    }
    return 0;
}
Hope that can help.
I am supposed to write a function where a 32-bit decimal is converted to hexadecimal. But my function keeps outputting zero instead of the correct hexadecimal. I don't know what I'm doing wrong. Below is the code for my function.
input: 66
what the output should be: 42
what my code outputs: 0
uint32_t packed_bcd(uint32_t value) {
    uint32_t ones = 15;
    uint32_t mask = (ones >> 28);
    uint32_t numbers[8];
    numbers[0] = value & mask;
    for (int i = 1; i <= 7; i++) {
        uint32_t mask_temp = (mask << (4 * i));
        numbers[i] = mask_temp & value;
    }
    for (int i = 7; i >= 0; i--) {
        numbers[i] = numbers[i] * pow(10, i);
    }
    int sum = 0;
    for (int i = 0; i < 8; i++) {
        sum = sum + numbers[i];
    }
    return sum;
}
BCD is not hex or normal binary: packed BCD stores one decimal digit per 4-bit nibble, so the value 66 (hex 0x42) read as packed BCD is the two digits 4 and 2, i.e. 42.
uint32_t bcdtobin(uint32_t bcd)
{
    uint32_t result = 0;
    uint32_t mask = 1;
    while (bcd)
    {
        result += (bcd & 0x0f) * mask;
        bcd >>= 4;
        mask *= 10;
    }
    return result;
}
uint32_t bintobcd(uint32_t bin)
{
    uint32_t result = 0;
    size_t remaining = 32;
    if (bin == 0)        /* guard: result >>= 32 below would be undefined */
        return 0;
    if (bin <= 99999999)
    {
        while (bin)
        {
            result >>= 4;
            result |= (bin % 10) << 28;
            bin /= 10;
            remaining -= 4;
        }
        result >>= remaining;
    }
    return result;
}
Using your example number:
int main(void)
{
    printf("%u\n", bintobcd(42));
    printf("%u\n", bcdtobin(66));
}
https://godbolt.org/z/9K48GEosv
I'm working on a project where there is this function that computes a CRC16 checksum.
uint16_t x_crc(uint16_t CrcVal, uint8_t DataIn) {
    CrcVal = (unsigned char)(CrcVal >> 8) | (CrcVal << 8);
    CrcVal ^= DataIn;
    CrcVal ^= (unsigned char)(CrcVal & 0xff) >> 4;
    CrcVal ^= (CrcVal << 8) << 4;
    CrcVal ^= ((CrcVal & 0xff) << 4) << 1;
    return CrcVal & 0xFFFF;
}
And it is used like this:
uint8_t x[] = {1,2,3,4,5,6,7,8,9,0};
uint16_t CrcResult = 0;
for (size_t i = 0; i < 10; i++) {
    CrcResult = x_crc(CrcResult, *(x + i));
}
printf("\n\n\nCRC1 = 0x%04X\n", CrcResult);
Due to performance issues I need to convert to a lookup table. How can I do that, using the above function to generate the entries?
Thanks.
Because of the way the shifting is written, it is not obvious that this is a left-shifting CRC with polynomial 0x11021. Below is example code including a 256-entry, 16-bit table-driven version. I expect compiler optimization to inline z_crc, which is the table-driven one. If not, change the function to take three parameters: the CRC value, a buffer pointer, and the number of bytes.
#include <stdio.h>
#include <stdint.h>

uint16_t crctbl[256];

uint16_t x_crc(uint16_t CrcVal, uint8_t DataIn) {
    CrcVal = (unsigned char)(CrcVal>>8)|(CrcVal<<8); /* rotate left 8 bits */
                                                     /* crc ^= (byte*0x10000)>>16 */
    CrcVal ^= DataIn;                                /* crc ^= (byte*0x0001) */
    CrcVal ^= (unsigned char)(CrcVal&0xff)>>4;       /* crc ^= ((crc&0xf0)*0x1000)>>16 */
    CrcVal ^= (CrcVal<<8)<<4;                        /* crc ^= ((crc&0x0f)*0x1000) */
    CrcVal ^= ((CrcVal&0xff)<<4)<<1;                 /* crc ^= ((crc&0xff)*0x0020) */
    return CrcVal;                                   /* 0x1021 */
}

uint16_t y_crc(uint16_t CrcVal, uint8_t DataIn) {
    CrcVal ^= ((uint16_t)DataIn) << 8;
    for (uint16_t i = 0; i < 8; i++)
        CrcVal = (CrcVal & 0x8000) ? (CrcVal << 1) ^ 0x1021 : (CrcVal << 1);
    return CrcVal;
}

void inittbl(void)
{
    for (uint16_t j = 0; j < 256; j++)
        crctbl[j] = x_crc(0, (uint8_t)j);
}

uint16_t z_crc(uint16_t CrcVal, uint8_t DataIn) {
    CrcVal = crctbl[(CrcVal >> 8) ^ DataIn] ^ (CrcVal << 8);
    return CrcVal;
}

int main(void)
{
    uint16_t crcx = 0;
    uint16_t crcy = 0;
    uint16_t crcz = 0;
    uint8_t x[] = {1,2,3,4,5,6,7,8,9,0};
    inittbl();
    for (size_t i = 0; i < 10; i++)
        crcx = x_crc(crcx, *(x + i));
    for (size_t i = 0; i < 10; i++)
        crcy = y_crc(crcy, *(x + i));
    for (size_t i = 0; i < 10; i++)
        crcz = z_crc(crcz, *(x + i));
    if (crcx == crcy && crcx == crcz)
        printf("match\n");
    return 0;
}
Simply modifying the 256 in the loop to 65536 just repeats the same 256 values over and over again. How to generate 65536 different values?
#define CRC64_ECMA182_POLY 0x42F0E1EBA9EA3693ULL

static uint64_t crc64_table[256] = {0};

static void generate_crc64_table(void)
{
    uint64_t i, j, c, crc;
    for (i = 0; i < 256; i++) {
        crc = 0;
        c = i << 56;
        for (j = 0; j < 8; j++) {
            if ((crc ^ c) & 0x8000000000000000ULL)
                crc = (crc << 1) ^ CRC64_ECMA182_POLY;
            else
                crc <<= 1;
            c <<= 1;
        }
        crc64_table[i] = crc;
    }
}
If you want 65536 values, presumably you want a 16-bit-indexed table, so upgrade the bit loop to 16 as well (and size crc64_table to 65536 entries).
static void generate_crc64_table(void)
{
    uint64_t i, j, c, crc;
    for (i = 0; i < 65536; i++) {           // 65536 was 256
        crc = 0;
        c = i << 48;                        // 48 was 56: puts the 16 data bits at the top
        for (j = 0; j < 16; j++) {          // 16 was 8
            if ((crc ^ c) & 0x8000000000000000ULL)
                crc = (crc << 1) ^ CRC64_ECMA182_POLY;
            else
                crc <<= 1;
            c <<= 1;
        }
        crc64_table[i] = crc;
    }
}
No guarantees that this will produce a useful table, but the values should at least all be different.
If the generated CRC is supposed to match bit or byte oriented CRC on a little endian processor, such as X86, you need to swap upper/lower bytes of each 2 byte == 16 bit pair. Example code, not sure if this could be cleaned up. Note that len in the generate function is # shorts == # 2 byte elements == # 16 bit elements.
#define CRC64_ECMA182_POLY 0x42F0E1EBA9EA3693ULL

static uint64_t crc64_table[65536] = {0};

static void generate_crc64_table(void)
{
    uint64_t i, j, crc;
    for (i = 0; i < 65536; i++) {
        crc = i << 48;
        for (j = 0; j < 16; j++)
            // assumes two's complement math
            crc = (crc << 1) ^ ((0ull - (crc >> 63)) & CRC64_ECMA182_POLY);
        // swap byte pairs on index and values for table lookup
        crc64_table[((i & 0xff00) >> 8) | ((i & 0x00ff) << 8)] =
            ((crc & 0xff00ff00ff00ff00ull) >> 8) | ((crc & 0x00ff00ff00ff00ffull) << 8);
    }
}

static uint64_t generate_crc64(uint16_t *bfr, int len)
{
    uint64_t crc = 0;
    int i;
    for (i = 0; i < len; i++)
        // generates crc with byte pairs swapped
        crc = crc64_table[(crc >> 48) ^ *bfr++] ^ (crc << 16);
    // unswap byte pairs for return
    return ((crc & 0xff00ff00ff00ff00) >> 8) | ((crc & 0x00ff00ff00ff00ff) << 8);
}
My aim is to send a datagram over a network that starts with 64 bit unsigned integer in network byte order. So first I use macros to transform the number into big-endian:
#define htonll(x) ((1==htonl(1)) ? (x) : ((uint64_t)htonl((x) & 0xFFFFFFFF) << 32) | htonl((x) >> 32))
#define ntohll(x) ((1==ntohl(1)) ? (x) : ((uint64_t)ntohl((x) & 0xFFFFFFFF) << 32) | ntohl((x) >> 32))
Then I serialize it into a buffer:
unsigned char *serialize_uint64(unsigned char *buffer, uint64_t value) {
    printf("**** seriializing PRIu64 value = %"PRIu64"\n", value);
    int i;
    for (i = 0; i < 8; i++)
        buffer[i] = (value >> (56 - 8 * i)) & 0xFF;
    for (i = 0; i < 8; i++)
        printf("bufer[%d] = %x\n", i, buffer[i]);
    return buffer + 8;
}
Then I deserializes it with
uint64_t deserialize_uint64(unsigned char *buffer) {
    uint64_t res = 0;
    printf("*** deserializing buffer:\n");
    int i;
    for (i = 0; i < 8; i++)
        printf("bufer[%d] = %x\n", i, buffer[i]);
    for (i = 0; i < 8; i++)
        res |= buffer[i] << (56 - 8 * i);
    return res;
}
It seems to work for small integers but the following test code is not working properly:
uint64_t a = (uint64_t) time(NULL);
printf("PRIu64: a =%"PRIu64"\n", a);
uint64_t z = htonll(a);
uint64_t zz = ntohll(z);
printf("z = %"PRIu64" ==> zz = %"PRIu64" \n", z, zz);
unsigned char buffer[1024];
serialize_uint64(buffer, z);
uint64_t b = deserialize_uint64(buffer);
uint64_t c = ntohll(g);
as I get
a = 1494157850
htonll(a) = 1876329069679738880 ==> ntohll(htonll(a)) = 1494157850
**** seriializing PRIu64 value = 1876329069679738880
bufer[0] = 1a
bufer[1] = a
bufer[2] = f
bufer[3] = 59
bufer[4] = 0
bufer[5] = 0
bufer[6] = 0
bufer[7] = 0
*********
*** deserializing buffer:
bufer[0] = 1a
bufer[1] = a
bufer[2] = f
bufer[3] = 59
bufer[4] = 0
bufer[5] = 0
bufer[6] = 0
bufer[7] = 0
===> res = 436866905
c = 6417359100811673600
It seems like the buffer is not capturing the bigger number ...
Your serializer is essentially
unsigned char *serialize_u64(unsigned char *buffer, uint64_t value)
{
    buffer[7] = value & 0xFF;
    value >>= 8;
    buffer[6] = value & 0xFF;
    value >>= 8;
    buffer[5] = value & 0xFF;
    value >>= 8;
    buffer[4] = value & 0xFF;
    value >>= 8;
    buffer[3] = value & 0xFF;
    value >>= 8;
    buffer[2] = value & 0xFF;
    value >>= 8;
    buffer[1] = value & 0xFF;
    value >>= 8;
    buffer[0] = value & 0xFF;
    return buffer + 8;
}
and it serializes value from native byte order to network byte order; no macro is needed.
So, it looks like OP's serialize_uint64() should work just fine. It's just that no byte order macro should be used at all.
OP's deserialize_uint64() should cast buffer[i] to (uint64_t) before shifting, to ensure the shifted result is 64-bit. Personally, I prefer to write the deserializer as
unsigned char *deserialize_u64(unsigned char *buffer, uint64_t *valueptr)
{
    uint64_t value = buffer[0];
    value <<= 8;
    value |= buffer[1];
    value <<= 8;
    value |= buffer[2];
    value <<= 8;
    value |= buffer[3];
    value <<= 8;
    value |= buffer[4];
    value <<= 8;
    value |= buffer[5];
    value <<= 8;
    value |= buffer[6];
    value <<= 8;
    value |= buffer[7];
    *valueptr = value;
    return buffer + 8;
}
which does the equivalent operation as OP's, if OP used res |= ((uint64_t)buffer[i]) << (56 - 8 * i); instead.
Again, both serializer and deserializer already convert the data to/from network byte order from/to native byte order; no byte order macros should be used at all.
I am trying to store 8 bytes in a byte array to store the value of a pointer.
int main() {
    unsigned long a = 0;
    char buf[8];
    int i = 0;
    int *p = &i;
    a = (unsigned long)p;
    while (i < 8)
    {
        buf[i] = (a >> (8 * i)) & 0xFF;
        i++;
    }
    a = 0;
    i = 0;
    while (i < 8)
    {
        a = ?
        i++;
    }
    p = (int *)a;
}
The first loop stores successive bytes of p (cast into an unsigned long in a), but I don't know how to retrieve the value in the second loop. Does somebody have a clue?
This is the inverse code to yours:
a = 0;
while (i < 8)
{
    a |= ((unsigned long)buf[i] & 0xff) << (8 * i);
    i++;
}