Calculating a checksum (16 bit) in C

I am being asked to do a checksum on this text with the bit size being 16:
"AAAAAAAAAA\nX"
At first the description seemed like it wanted the Fletcher-16 checksum. But the output of Fletcher's checksum performed on the above text yielded 8aee in hex. The example file says that the modular sum algorithm (minus the two's complement) should output 509d in hex.
The only other info is the standard "every two characters should be added to the checksum."
Besides using the generic Fletcher-16 checksum provided on the corresponding Wikipedia page, I have tried using this solution found here: calculating-a-16-bit-checksum to no avail. This code produced the hex value of 4f27.

Simply summing the data, treating it as an array of big-endian 16-bit integers, produces the expected result of 509d:
#include <stdio.h>

int main(void) {
    char data[] = "AAAAAAAAAA\nX";
    int sum = 0;
    int i;
    for (i = 0; data[i] != '\0' && data[i + 1] != '\0'; i += 2) {
        int value = ((unsigned char)data[i] << 8) | (unsigned char)data[i + 1];
        sum = (sum + value) & 0xffff;
    }
    printf("%04x\n", sum);
    return 0;
}
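The question notes that 509d is the modular sum before the two's complement step. If the assignment's final checksum does include that step, it is a single negation at the end of main (a sketch of mine; whether it is required depends on the example file's spec):
/* after the loop in main: two's complement of the modular sum */
int checksum = (~sum + 1) & 0xffff;  /* for sum = 0x509d this is 0xaf63 */
printf("%04x\n", checksum);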

Related

Create a 128 byte random number

If the rand() function creates a random number that is 4 bytes in length, and I wanted to create a random number that is 1024 bits in length (128 bytes), is the easiest method to get this by concatenating the rand() function 256 times or is there an alternative method?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char data[128];
    memset(data, 0x36, sizeof data);
    printf("%.128s\n", data);  /* bounded print: the array is not 0-terminated */
    puts("");
    printf("%zu\n", sizeof(data) / sizeof(data[0]));
    puts("");
    int i = 0;
    unsigned long rez = 0;
    for (i = 0; i < 20; i++) {
        unsigned int num = rand();
        rez = rez + num;
        printf("%lx\n", rez);
    }
    printf("%lx\n", rez);
    return 0;
}
is the easiest method to get this by concatenating the rand() function 256 times or is there an alternative method?
Each rand() returns a value in the [0...RAND_MAX] range. RAND_MAX is limited to 32767 <= RAND_MAX <= INT_MAX.
Very commonly RAND_MAX is a Mersenne number of the form 2^n − 1. Code can take advantage of this very common implementation-dependent value. Each rand() call then provides RAND_MAX_BITS random bits, not the 32 bits suggested by the OP for a 4-byte int. (Credit: @Matteo Italia.)
[See the update further below.]
#include <stdint.h>
#include <stdlib.h>
#if RAND_MAX == 0x7FFF
#define RAND_MAX_BITS 15
#elif RAND_MAX == 0x7FFFFFFF
#define RAND_MAX_BITS 31
#else
#error TBD code
#endif
Call rand() ⌈size * 8 / RAND_MAX_BITS⌉ times. This reduces the number of rand() calls needed from size (one per byte) to roughly size * 8 / RAND_MAX_BITS.
void rand_byte(uint8_t *dest, size_t size) {
    int r_queue = 0;
    int r_bit_count = 0;
    for (size_t i = 0; i < size; i++) {
        int r = 0;
        //printf("%3zu %2d %8x\n", i, r_bit_count, r_queue);
        if (r_bit_count < 8) {
            int need = 8 - r_bit_count;
            r = r_queue << need;
            r_queue = rand();
            r ^= r_queue;  // OK to flip bits already saved in `r`
            r_queue >>= need;
            r_bit_count = RAND_MAX_BITS - need;
        } else {
            r = r_queue;
            r_queue >>= 8;
            r_bit_count -= 8;
        }
        dest[i] = r;
    }
}
int main(void) {
    uint8_t buf[128];
    rand_byte(buf, sizeof buf);
    ...
    return 0;
}
If you want the easiest, if slightly less efficient, code, simply call rand() once for each byte, as answered by @dbush below.
[Update 2021]
@Anonymous Question Guy posted a nifty macro that returns the bit width of a Mersenne number, more generally than the #if RAND_MAX == 0x7FFF approach above.
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
_Static_assert((RAND_MAX & 1 && (RAND_MAX/2 + 1) & (RAND_MAX/2)) == 0,
"RAND_MAX is not a Mersenne number");
#define RAND_MAX_BITS IMAX_BITS(RAND_MAX)
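As a quick sanity check (my own addition; _Static_assert at file scope needs C11), the macro yields the expected widths for the two Mersenne values the #if ladder above handled:
_Static_assert(IMAX_BITS(0x7FFF) == 15, "expected 15 bits");
_Static_assert(IMAX_BITS(0x7FFFFFFF) == 31, "expected 31 bits");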
The C standard states that RAND_MAX has a minimum value of 32767 (0x7fff), so it's best to work under that assumption.
Because the function will only return 15 random bits, using all the bits in one call will involve some bit shifting and masking to get the results in the proper place. The simplest way to do this would be to call rand 128 times, take the low order byte of each result, and write it to your byte array:
unsigned char rand_val[128];
for (int i = 0; i < 128; i++) {
    rand_val[i] = rand() & 0xff;
}
Don't forget to call srand exactly once somewhere before this in your code.
Using strcat as you mentioned in your comment won't work because this function works on null terminated strings, and a byte containing 0 is a valid random number.
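A tiny illustration of that point (my own sketch, not from the original answer):
#include <stdio.h>
#include <string.h>

int main(void) {
    /* 0x00 is a perfectly good random byte, but it ends a C string early */
    unsigned char bytes[] = { 0x41, 0x00, 0x42, 0x43 };
    printf("%zu\n", strlen((char *)bytes));  /* prints 1, not 4 */
    return 0;
}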
If you plan on using these random values for anything involving cryptography, you're better off using a secure random number generator. If you have OpenSSL available, use RAND_bytes for this purpose:
#include <openssl/rand.h>

unsigned char rand_val[128];
if (RAND_bytes(rand_val, sizeof(rand_val)) != 1) { /* handle error */ }
On most POSIX (Unix-like) systems, you can also read 128 bytes from /dev/urandom, which you open and read like a regular binary file, even though POSIX itself does not specify the device.
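A minimal sketch of that approach (the helper name is my own, assuming plain stdio is acceptable):
#include <stdio.h>

/* Fill buf with n bytes from /dev/urandom; returns 0 on success, -1 on failure. */
int urandom_bytes(unsigned char *buf, size_t n) {
    FILE *f = fopen("/dev/urandom", "rb");
    if (f == NULL) return -1;
    size_t got = fread(buf, 1, n, f);
    fclose(f);
    return got == n ? 0 : -1;
}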
The properties of C rand() are only vaguely specified by the standard; as said in a comment, the number of actually usable bits depends on the implementation, and their quality has historically been plagued by sub-par implementations. Also, rand() affects the global state of the program and on many implementations is not thread safe.
Given that there are good, well-known and simple PRNGs such as the ones from the xorshift family, I would just use one of them.
#include <stdint.h>
#include <string.h>  /* for memcpy */

/* The state must be seeded so that it is not all zero */
uint64_t s[2];

uint64_t xorshift128plus(void) {
    uint64_t x = s[0];
    uint64_t const y = s[1];
    s[0] = y;
    x ^= x << 23;
    s[1] = x ^ y ^ (x >> 17) ^ (y >> 26);
    return s[1] + y;
}

void next128bits(unsigned char ch[16]) {
    uint64_t t = xorshift128plus();
    memcpy(ch, &t, sizeof(t));
    t = xorshift128plus();
    memcpy(ch + 8, &t, sizeof(t));
}
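Filling the 128-byte buffer then looks something like this (my own usage sketch; the constant seeds are placeholders, seed from a real entropy source in practice):
int main(void) {
    unsigned char buf[128];
    s[0] = 0x0123456789abcdefULL;  /* placeholder seeds; must not both be zero */
    s[1] = 0xfedcba9876543210ULL;
    for (int i = 0; i < 128; i += 16)
        next128bits(buf + i);
    return 0;
}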

Converting a checksum algorithm from Python to C

There is a checksum algorithm for the networks in some Honda vehicles that computes an integer between 0 and 15 for the provided data. I'm trying to convert it to plain C, but I think I'm missing something, as I get different results in my implementation.
While the Python algorithm computes 6 for "ABC", mine computes -10, which is weird. Am I messing something up with the bit shifting?
The Python algorithm:
def can_cksum(mm):
    s = 0
    for c in mm:
        c = ord(c)
        s += (c >> 4)
        s += c & 0xF
    s = 8 - s
    s %= 0x10
    return s
My version, in C:
int can_cksum(unsigned char *data, unsigned int len) {
    int result = 0;
    for (int i = 0; i < len; i++) {
        result += data[i] >> 4;
        result += data[i] & 0xF;
    }
    result = 8 - result;
    result %= 0x10;
    return result;
}
No, the problem is not the bit shifting; it is the modulus. Python's % follows the sign of the right operand, and C's follows the sign of the left. Mask with 0x0f instead to avoid this:
result = 8 - result;
result &= 0x0f;
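As a quick check (my own test harness, not part of the original answer), the masked version reproduces the Python result of 6 for "ABC":
#include <stdio.h>

int can_cksum(unsigned char *data, unsigned int len) {
    int result = 0;
    for (unsigned int i = 0; i < len; i++) {
        result += data[i] >> 4;
        result += data[i] & 0xF;
    }
    result = 8 - result;
    result &= 0x0f;  /* keeps the value in 0..15 even when 8 - result was negative */
    return result;
}

int main(void) {
    printf("%d\n", can_cksum((unsigned char *)"ABC", 3));  /* prints 6 */
    return 0;
}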

Decimal to BCD to ASCII

Perhaps this task is a bit more complicated than what I've written below, but the code that follows is my take on decimal to BCD. The task is to take in a decimal number, convert it to BCD and then to ASCII so that it can be displayed on a microcontroller. As far as I'm aware the code works sufficiently for the basic operation of converting to BCD; however, I'm stuck when it comes to converting this into ASCII. The overall output is ASCII so that an incremented value can be displayed on an LCD.
My code so far:
int dec2bin(int a) {  // decimal to binary function
    int bin = 0;
    int i = 1;
    while (a != 0) {
        bin += (a % 2) * i;
        i *= 10;
        a /= 2;
    }
    return bin;
}

unsigned int ConverttoBCD(int val) {
    unsigned int unit = 0;
    unsigned int ten = 0;
    unsigned int hundred = 0;
    hundred = (val / 100);
    ten = ((val - hundred * 100) / 10);
    unit = (val - (hundred * 100 + ten * 10));
    uint8_t ret1 = dec2bin(unit);
    uint8_t ret2 = dec2bin((ten) << 4);
    uint8_t ret3 = dec2bin((hundred) << 8);
    return (ret3 + ret2 + ret1);
}
The idea to convert to BCD for an ASCII representation of a number is actually the "correct one". Given BCD, you only need to add '0' to each digit for getting the corresponding ASCII value.
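For instance (a tiny illustration of mine), with one BCD digit per char:
char digit = 7;            /* a single BCD digit stored in its own char */
char ascii = digit + '0';  /* yields '7' */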
But your code has several problems. The most important one is that you try to stuff a value shifted left by 8 bits into an 8-bit type. This can never work: the 8 bits that remain are all zero, think about it! Then I absolutely do not understand what your dec2bin() function is supposed to do.
So I'll present one possible correct solution to your problem. The key idea is to use a char for each individual BCD digit. Of course, a BCD digit only needs 4 bits and a char has at least 8 of them -- but you need char anyway for your ASCII representation, and when your BCD digits are already in individual chars, all you have to do is indeed add '0' to each.
While at it: Converting to BCD by dividing and multiplying is a waste of resources. There's a nice algorithm called Double dabble for converting to BCD only using bit shifting and additions. I'm using it in the following example code:
#include <stdio.h>
#include <string.h>

// for determining the number of value bits in an integer type,
// see https://stackoverflow.com/a/4589384/2371524 for this nice trick:
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
        + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

// number of bits in unsigned int:
#define UNSIGNEDINT_BITS IMAX_BITS((unsigned)-1)

// convert to ASCII using BCD, return the number of digits:
int toAscii(char *buf, int bufsize, unsigned val)
{
    // sanity check, a buffer smaller than one digit is pointless
    if (bufsize < 1) return -1;

    // initialize output buffer to zero
    // if you don't have memset, use a loop here
    memset(buf, 0, bufsize);

    int scanstart = bufsize - 1;
    int i;

    // mask for single bits in value, start at most significant bit
    unsigned mask = 1U << (UNSIGNEDINT_BITS - 1);

    while (mask)
    {
        // extract single bit
        int bit = !!(val & mask);

        for (i = scanstart; i < bufsize; ++i)
        {
            // this is the "double dabble" trick -- in each iteration,
            // add 3 to each element that is greater than 4. This will
            // generate the correct overflowing bits while shifting for
            // BCD
            if (buf[i] > 4) buf[i] += 3;
        }

        // if we have filled the output buffer from the right far enough,
        // we have to scan one position earlier in the next iteration
        if (buf[scanstart] > 7) --scanstart;

        // check for overflow of our buffer:
        if (scanstart < 0) return -1;

        // now just shift the bits in the BCD digits:
        for (i = scanstart; i < bufsize - 1; ++i)
        {
            buf[i] <<= 1;
            buf[i] &= 0xf;
            buf[i] |= (buf[i+1] > 7);
        }

        // shift in the new bit from our value:
        buf[bufsize-1] <<= 1;
        buf[bufsize-1] &= 0xf;
        buf[bufsize-1] |= bit;

        // next bit:
        mask >>= 1;
    }

    // find first non-zero digit:
    for (i = 0; i < bufsize - 1; ++i) if (buf[i]) break;
    int digits = bufsize - i;

    // eliminate leading zero digits
    // (again, use a loop if you don't have memmove)
    // (or, if you're converting to a fixed number of digits and *want*
    // the leading zeros, just skip this step entirely, including the
    // loop above)
    memmove(buf, buf + i, digits);

    // convert to ascii:
    for (i = 0; i < digits; ++i) buf[i] += '0';

    return digits;
}

int main(void)
{
    // some simple test code:
    char buf[10];
    int digits = toAscii(buf, 10, 471142);
    for (int i = 0; i < digits; ++i)
    {
        putchar(buf[i]);
    }
    puts("");
}
You won't need this IMAX_BITS() "magic macro" if you actually know your target platform and how many bits there are in the integer type you want to convert.

How to represent binary data in 8 bits in C

#include <stdio.h>

int main()
{
    long int decimalNumber, remainder, quotient;
    int binaryNumber[100], i = 1, j;

    printf("Enter any decimal number: ");
    scanf("%ld", &decimalNumber);

    quotient = decimalNumber;
    while (quotient != 0)
    {
        binaryNumber[i++] = quotient % 2;
        quotient = quotient / 2;
    }

    printf("Equivalent binary value of decimal number %ld: ", decimalNumber);
    for (j = i - 1; j > 0; j--)
        printf("%d", binaryNumber[j]);
    return 0;
}
I want the output in 8-bit binary form, but the result is as shown below. Is there any operator in C which can convert 7-bit data to its equivalent 8-bit data? Thank you.
Sample output:
Enter any decimal number: 50
Equivalent binary value of decimal number 50: 110010
The required output is 00110010, which is 8 bits. How do I append a zero in the MSB position?
A very convenient way is to have a function return the binary representation in the form of a string. This allows the binary representation to be used within a normal printf format string rather than having the bits spit out at the current cursor position. To get an exact number of digits, you pad the binary string to the required number of places (e.g. 8, 16, 32...). The following makes use of a static variable to allow returning the buffer, but the same can easily be implemented by allocating space for the buffer dynamically. The preprocessor checks are not required, as you could simply hardwire the length of the buffer to 64 + 1, but for the sake of completeness a check for x86/x86_64 is included and BITS_PER_LONG is set accordingly.
#include <stdio.h>

#if defined(__LP64__) || defined(_LP64)
# define BUILD_64 1
#endif

#ifdef BUILD_64
# define BITS_PER_LONG 64
#else
# define BITS_PER_LONG 32
#endif

char *binstr (unsigned long n, size_t sz);

int main (void) {
    printf ("\n 50 (decimal) : %s (binary)\n\n", binstr (50, 8));
    return 0;
}

/* returns pointer to binary representation of 'n' zero padded to 'sz'. */
char *binstr (unsigned long n, size_t sz)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;
    register size_t i;

    if (!n) {
        *s = '0';
        return s;
    }

    for (i = 0; i < sz; i++)
        *(--p) = (n >> i & 1) ? '1' : '0';

    return p;
}
Output
$ ./bin/bincnv
50 (decimal) : 00110010 (binary)
Note: you cannot make repeated calls in the same printf statement due to the static buffer. If you allocate dynamically, you can call the function as many times as you like in the same printf statement.
Also, note, if you do not care about padding the binary return to any specific length and just want the binary representation to start with the most significant bit, the following simpler version can be used:
/* simple return of binary string */
char *binstr (unsigned long n)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;

    if (!n) {
        *s = '0';
        return s;
    }

    while (n) {
        *(--p) = (n & 1) ? '1' : '0';
        n >>= 1;
    }
    return p;
}
Modify your code as shown below:
        quotient = quotient / 2;
    }

    /* ---- Add the following code ---- */
    {
        int group_size = 8; /* or CHAR_BIT */
        int padding = group_size - ((i - 1) % group_size); /* i was initialized to 1 */
        if (padding != group_size) {
            /* add padding */
            while (padding-- != 0) binaryNumber[i++] = 0;
        }
    }
    /* ------- Modification ends -------- */

    printf("Equivalent binary value of decimal number %ld: ", decimalNumber);
This code calculates the number of padding bits required to print the number and fills the padding bits with 0.
If you want a 7-bit answer, change group_size to 7.
Use this for printing your result:
for (j = 8; j > 0; j--)
    printf("%d", binaryNumber[j]);
This always prints 8 binary digits (binaryNumber[8] down to binaryNumber[1], matching the code above, which starts storing at index 1).
Edit
The int array binaryNumber must be initialized with zeros beforehand to make this work:
for (int i = 1; i <= 8; i++) binaryNumber[i] = 0;

unsigned to hex digit

I got a problem that says: Form a character array based on an unsigned int. Array will represent that int in hexadecimal notation. Do this using bitwise operators.
So, my ideas is the following: I create a mask that has 1's for its 4 lowest value bits.
I shift the bits of the given int right by 4 and use & on that int and the mask. I repeat this while (int != 0). My question is: when I get the individual hex digits (packs of 4 bits), how do I convert them to a char? For example, I get:
x & mask = 1101(2) = 13(10) = D(16)
Is there a function to convert an int to hex representation, or do I have to use brute force with switch statement or whatever else?
I almost forgot, I am doing this in C :)
Here is what I mean:
#include <stdio.h>
#include <stdlib.h>
#define BLOCK 4

int main() {
    unsigned int x, y, i, mask;
    char a[4];
    printf("Enter a positive number: ");
    scanf("%u", &x);
    for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        y = x & mask;
        a[i] = FICTIVE_NUM_TO_HEX_DIGIT(y);
    }
    print_array(a);
    return EXIT_SUCCESS;
}
You are almost there. The simplest method to convert an integer in the range from 0 to 15 to a hexadecimal digit is to use a lookup table,
char hex_digits[] = "0123456789ABCDEF";
and index into that,
a[i] = hex_digits[y];
in your code.
Remarks:
char a[4];
is probably too small. One hexadecimal digit corresponds to four bits, so with CHAR_BIT == 8, you need up to 2*sizeof(unsigned) chars to represent the number, generally, (CHAR_BIT * sizeof(unsigned int) + 3) / 4. Depending on what print_array does, you may need to 0-terminate a.
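For example (my own illustration of that sizing rule):
#include <limits.h>
/* enough for every hex digit of an unsigned int, plus a 0-terminator */
char a[(CHAR_BIT * sizeof(unsigned int) + 3) / 4 + 1];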
for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK)
initialising i to sizeof(unsigned int) skips the most significant bits, i should be initialised to the last valid index into a (except for possibly the 0-terminator, then the penultimate valid index).
The mask can more simply be defined as mask = 0xF, which has the added benefit of not invoking undefined behaviour, which
mask = ~(~0 << 4)
probably does: 0 is an int, and thus ~0 is one too. On two's complement machines (that is, almost everything nowadays), its value is -1, and left-shifting negative integers is undefined behaviour.
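If you prefer to keep the expression form, a well-defined variant (my note, not from the original answer) simply works on an unsigned literal:
unsigned mask = ~(~0u << 4);  /* 0xF; ~0u is unsigned, so the shift is well-defined */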
char buffer[10] = {0};
int h = 17;
sprintf(buffer, "%02X", h);
Try something like this:
char hex_digits[] = "0123456789ABCDEF";

for (i = 0; i < ((sizeof(unsigned int) * CHAR_BIT + 3) / 4); i++) {
    digit = (x >> (sizeof(unsigned int) * CHAR_BIT - 4)) & 0x0F;
    x = x << 4;
    a[i] = hex_digits[digit];
}
Ok, this is where I got:
#include <stdio.h>
#include <stdlib.h>
#define BLOCK 4

void printArray(char*, int);

int main() {
    unsigned int x, mask;
    int size = sizeof(unsigned int) * 2, i;
    char a[size], hexDigits[] = "0123456789ABCDEF";
    for (i = 0; i < size; i++)
        a[i] = 0;
    printf("Enter a positive number: ");
    scanf("%u", &x);
    for (i = size - 1, mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        a[i] = hexDigits[x & mask];
    }
    printArray(a, size);
    return EXIT_SUCCESS;
}

void printArray(char a[], int n) {
    int i;
    for (i = 0; i < n; i++)
        printf("%c", a[i]);
    putchar('\n');
}
I have compiled it; it runs and does the job correctly. I don't know... Should I be worried that this problem was a bit hard for me? At faculty, during exams, we must write our code by hand, on a piece of paper... I don't imagine I would have done this right.
Is there a better (less complicated) way to do this problem? Thank you all for help :)
I would consider the impact of potential padding bits when shifting, as shifting by anything equal to or greater than the number of value bits that exist in an integer type is undefined behaviour.
Perhaps you could terminate the string first using: array[--size] = '\0';, write the smallest nibble (hex digit) using array[--size] = "0123456789ABCDEF"[value & 0x0f], move onto the next nibble using: value >>= 4, and repeat while value > 0. When you're done, return array + size or &array[size] so that the caller knows where the hex sequence begins.
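Putting that suggestion together might look like this (a sketch under the stated assumptions; the function name and the caller are mine):
#include <stdio.h>

/* Writes the hex digits of value into array (of the given size), building
   backwards from the end; returns a pointer to the first digit. */
char *to_hex(unsigned value, char *array, int size) {
    array[--size] = '\0';                                  /* terminate first */
    do {
        array[--size] = "0123456789ABCDEF"[value & 0x0f];  /* smallest nibble */
        value >>= 4;                                       /* next nibble */
    } while (value > 0 && size > 0);
    return array + size;   /* tell the caller where the hex sequence begins */
}

int main(void) {
    char buf[2 * sizeof(unsigned) + 1];
    printf("%s\n", to_hex(3735928559u, buf, (int)sizeof buf));  /* DEADBEEF */
    return 0;
}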
