Decimal to BCD to ASCII - c

Perhaps this task is a bit more complicated than what I've written below, but the code that follows is my take on decimal to BCD. The task is to take in a decimal number, convert it to BCD and then to ASCII so that it can be displayed on a microcontroller. As far as I'm aware, the code works sufficiently for the basic operation of converting to BCD; however, I'm stuck when it comes to converting this into ASCII. The overall output is ASCII so that an incremented value can be displayed on an LCD.
My code so far:
int dec2bin(int a){ //Decimal to binary function
    int bin;
    int i = 1;
    while (a != 0){
        bin += (a % 2) * i;
        i *= 10;
        a /= 2;
    }
    return bin;
}

unsigned int ConverttoBCD(int val){
    unsigned int unit = 0;
    unsigned int ten = 0;
    unsigned int hundred = 0;

    hundred = (val / 100);
    ten = ((val - hundred * 100) / 10);
    unit = (val - (hundred * 100 + ten * 10));

    uint8_t ret1 = dec2bin(unit);
    uint8_t ret2 = dec2bin((ten) << 4);
    uint8_t ret3 = dec2bin((hundred) << 8);
    return (ret3 + ret2 + ret1);
}

The idea of converting to BCD to get an ASCII representation of a number is actually the correct one. Given BCD, you only need to add '0' to each digit to get the corresponding ASCII character.
But your code has several problems. The most important one is that you try to stuff a value shifted left by 8 bits into an 8-bit type. This can never work; those 8 bits will always be zero, think about it! Beyond that, I absolutely do not understand what your dec2bin() function is supposed to do.
So I'll present you one possible correct solution to your problem. The key idea is to use a char for each individual BCD digit. Of course, a BCD digit only needs 4 bits and a char has at least 8 of them -- but you need chars anyway for your ASCII representation, and when your BCD digits are already in individual chars, all you have to do is indeed add '0' to each.
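To make that idea concrete, a naive divide-and-modulo sketch (the name toAsciiNaive is just a placeholder, and it assumes a value below 1000 like your hundreds/tens/units split) could look like this:

// naive sketch: one char per BCD digit, then add '0' to each
// assumes 0 <= val <= 999
void toAsciiNaive(unsigned val, char out[4])
{
    out[0] = val / 100;        // hundreds digit
    out[1] = (val / 10) % 10;  // tens digit
    out[2] = val % 10;         // units digit
    out[3] = '\0';
    for (int i = 0; i < 3; ++i) out[i] += '0'; // BCD digit -> ASCII
}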
While at it: Converting to BCD by dividing and multiplying is a waste of resources. There's a nice algorithm called Double dabble for converting to BCD only using bit shifting and additions. I'm using it in the following example code:
#include <stdio.h>
#include <string.h>

// for determining the number of value bits in an integer type,
// see https://stackoverflow.com/a/4589384/2371524 for this nice trick:
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
        + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

// number of bits in unsigned int:
#define UNSIGNEDINT_BITS IMAX_BITS((unsigned)-1)

// convert to ASCII using BCD, return the number of digits:
int toAscii(char *buf, int bufsize, unsigned val)
{
    // sanity check, a buffer smaller than one digit is pointless
    if (bufsize < 1) return -1;

    // initialize output buffer to zero
    // if you don't have memset, use a loop here
    memset(buf, 0, bufsize);

    int scanstart = bufsize - 1;
    int i;

    // mask for single bits in value, start at most significant bit
    unsigned mask = 1U << (UNSIGNEDINT_BITS - 1);

    while (mask)
    {
        // extract single bit
        int bit = !!(val & mask);

        for (i = scanstart; i < bufsize; ++i)
        {
            // this is the "double dabble" trick -- in each iteration,
            // add 3 to each element that is greater than 4. This will
            // generate the correct overflowing bits while shifting for
            // BCD
            if (buf[i] > 4) buf[i] += 3;
        }

        // if we have filled the output buffer from the right far enough,
        // we have to scan one position earlier in the next iteration
        if (buf[scanstart] > 7) --scanstart;

        // check for overflow of our buffer:
        if (scanstart < 0) return -1;

        // now just shift the bits in the BCD digits:
        for (i = scanstart; i < bufsize - 1; ++i)
        {
            buf[i] <<= 1;
            buf[i] &= 0xf;
            buf[i] |= (buf[i+1] > 7);
        }

        // shift in the new bit from our value:
        buf[bufsize-1] <<= 1;
        buf[bufsize-1] &= 0xf;
        buf[bufsize-1] |= bit;

        // next bit:
        mask >>= 1;
    }

    // find first non-zero digit:
    for (i = 0; i < bufsize - 1; ++i) if (buf[i]) break;
    int digits = bufsize - i;

    // eliminate leading zero digits
    // (again, use a loop if you don't have memmove)
    // (or, if you're converting to a fixed number of digits and *want*
    // the leading zeros, just skip this step entirely, including the
    // loop above)
    memmove(buf, buf + i, digits);

    // convert to ascii:
    for (i = 0; i < digits; ++i) buf[i] += '0';

    return digits;
}

int main(void)
{
    // some simple test code:
    char buf[10];
    int digits = toAscii(buf, 10, 471142);

    for (int i = 0; i < digits; ++i)
    {
        putchar(buf[i]);
    }
    puts("");
}
You won't need this IMAX_BITS() "magic macro" if you actually know your target platform and how many bits there are in the integer type you want to convert.
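For instance, if unsigned int is known to be 32 bits wide on the target, the mask initialisation in toAscii() could simply be written as:

unsigned mask = 1U << 31; // most significant bit of a 32-bit unsigned int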

Related

Why does left-shifting an integer by 24 bits yield the wrong result?

I tried left-shifting a 32-bit integer by 24:
#include <stdio.h>

char *int_to_bin(int num) {
    int i = 0;
    static char bin[64];

    while (num != 0) {
        bin[i] = num % 2 + 48;
        num /= 2;
        i++;
    }
    bin[i] = '\0';
    return (bin);
}

int main() {
    int number = 255;

    printf("number: %s\n", int_to_bin(number));
    printf("shifted number: %s\n", int_to_bin(number << 24));
    return 0;
}
OUTPUT:
number: 11111111
shifted number: 000000000000000000000000/
And if I left-shift by 23 bits it yields this result:
0000000000000000000000011111111
Why is it like that, and what's the matter with the '/' at the end of the wrong result?
Two things:
If number has the value 255 then number << 24 has the numerical value 4278190080, which overflows a 32-bit signed integer whose largest possible value is 2147483647. Signed integer overflow is undefined behavior in C, so the result could be anything at all.
What probably happens in this case is that the result of the shift is negative. When num is negative then num % 2 may take the value -1, so you store character 47 in the string, which is /.
Bit shifting math is usually better to do with unsigned types, where overflow is well-defined (it wraps around and bits just shift off the left and vanish) and num % 2 can only be 0 or 1. (Or write num & 1 instead.)
Your int_to_bin routine puts the least-significant bits at the beginning of the string (on the left), so the result is backwards from the way people usually write numbers (with the least-significant bits on the right). You may want to rewrite it.
The shift works fine; you simply print it from the wrong direction.
#include <stdio.h>
#include <limits.h>

char *int_to_bin(char *buff, int num)
{
    unsigned mask = 1U << (CHAR_BIT * sizeof(num) - 1);
    char *wrk = buff;

    for (; mask; mask >>= 1)
    {
        *wrk++ = '0' + !!((unsigned)num & mask);
    }
    *wrk = 0;
    return buff;
}

int main()
{
    char buff[CHAR_BIT * sizeof(int) + 1];
    int number = 255;

    printf("number: %s\n", int_to_bin(buff, number));
    printf("shifted number: %s\n", int_to_bin(buff, number << 24));
    return 0;
}
Shifting signed integers left is OK, but the right shift of a negative value is implementation-defined. Many systems use an arithmetic shift right, and the result is not the same as the logical shift you get with an unsigned type:
https://godbolt.org/z/e7f3shxd4
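A small sketch of the difference (the exact signed result is implementation-defined; -4 is simply what most two's-complement compilers produce):

#include <stdio.h>

int main(void)
{
    int si = -8;                   // bit pattern 0xFFFFFFF8 on a 32-bit two's complement machine
    unsigned int ui = 0xFFFFFFF8u;

    // signed: typically an arithmetic shift, the sign bit is copied in
    printf("signed   -8 >> 1         = %d\n", si >> 1);        // commonly -4
    // unsigned: a logical shift, zeros are shifted in
    printf("unsigned 0xFFFFFFF8 >> 1 = 0x%X\n", ui >> 1);      // 0x7FFFFFFC
    return 0;
}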
There are several problems:
You are storing the digits backwards.
You are using a signed 32-bit int, and shifting 255 left by 24 bits gives a value that no longer fits in a 32-bit signed int; you should use a wider type such as unsigned long long int.
A signed type can also lead to wrong answers: once the sign bit is set the value is negative, num % 2 can then be -1, and that puts bad characters in the string.
Using unsigned long long int and storing the digits in the correct order produces the correct string.
You should try to rewrite the code on your own before looking at this improved version of your code:
#include <stdio.h>
#include <stdlib.h>

char *int_to_bin(unsigned long long int num) {
    int i = 0;
    static char bin[65];

    while (i != 64) {
        bin[63 - i] = num % 2 + 48;
        num /= 2;
        i++;
    }
    bin[64] = '\0';
    return (bin);
}

int main() {
    unsigned long long int number = 255;

    printf("number 1: %s\n", int_to_bin(number));
    printf("number 2: %s\n", int_to_bin(number << 24));
    return 0;
}

Calculating checksum (16 bit) in c

I am being asked to do a checksum on this text with the bit size being 16:
"AAAAAAAAAA\nX"
At first the description seemed like it wanted the Fletcher-16 checksum. But the output of Fletcher's checksum performed on the above text yielded 8aee in hex. The example file says that the modular sum algorithm (minus the two's complement) should output 509d in hex.
The only other info is the standard "every two characters should be added to the checksum."
Besides using the generic Fletcher-16 checksum provided on the corresponding Wikipedia page, I have tried using this solution found here: calculating-a-16-bit-checksum to no avail. This code produced the hex value of 4f27.
Simply adding the data, treating it as an array of big-endian 16-bit integers, produces the result 509d: the five "AA" words are 0x4141 each, which sum to 0x4645 modulo 0x10000, and adding the final "\nX" word 0x0A58 gives 0x509D.
#include <stdio.h>

int main(void) {
    char data[] = "AAAAAAAAAA\nX";
    int sum = 0;
    int i;

    for (i = 0; data[i] != '\0' && data[i + 1] != '\0'; i += 2) {
        int value = ((unsigned char)data[i] << 8) | (unsigned char)data[i + 1];
        sum = (sum + value) & 0xffff;
    }
    printf("%04x\n", sum);
    return 0;
}

cast the memory stored in a uint32_t as a float in C

Having a hex value represented as a string in C, e.g. char* text = "0xffff", I manage to hold the data in a uint32_t with the following function:
for (unsigned int i = 0; i < line_length && count < WORD_SIZE; i++) {
    char c[2]; //represent the digit as a string
    c[0] = line[i];
    c[1] = '\0';

    if (isxdigit(c[0])) { //we've found a relevant char.
        res_out <<= 4; // shift left by 4 for the next 4 bits.
        res_out += (int32_t)strtol(c, NULL, 16); //set the last 4 bits to the relevant value
        //res_out <<= 4; // shift left by 4 for the next 4 bits.
        count += 4;
    }
}
Now, having the 32 bits, the uint32_t sometimes represents a single-precision floating-point number, and I would like to parse it as such.
Using float f = (float)num of course converts the integer value to float (not the needed operation), and I have no other ideas for how to tell the compiler that the memory actually holds a floating-point number.
Just for future reference, as @melpomene suggested:

uint32_t x = /* some single precision float value dumped into a uint32_t */;
float float_placeholder = 0;

memcpy(&float_placeholder, &x, sizeof(float_placeholder));

float_placeholder now holds the true floating point number.
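A self-contained version of the same trick, for reference (the constant 0x40490FDB is just an example bit pattern, the IEEE-754 single-precision encoding of pi):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t x = 0x40490FDBu;  // IEEE-754 single-precision bit pattern of pi
    float f;

    // copy the raw bytes instead of converting the numeric value
    memcpy(&f, &x, sizeof f);

    printf("%f\n", f);         // prints 3.141593 on IEEE-754 platforms
    return 0;
}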

How to represent binary data in 8 bits in C

#include <stdio.h>

int main()
{
    long int decimalNumber, remainder, quotient;
    int binaryNumber[100], i = 1, j;

    printf("Enter any decimal number: ");
    scanf("%ld", &decimalNumber);

    quotient = decimalNumber;
    while (quotient != 0)
    {
        binaryNumber[i++] = quotient % 2;
        quotient = quotient / 2;
    }

    printf("Equivalent binary value of decimal number %ld: ", decimalNumber);
    for (j = i - 1; j > 0; j--)
        printf("%d", binaryNumber[j]);

    return 0;
}
I want the output in 8-bit binary form, but the result is as shown below. Is there any operator in C which can convert 7-bit data to its equivalent 8-bit data? Thank you.
Sample output:
Enter any decimal number: 50
Equivalent binary value of decimal number 50: 110010
Required output is 00110010, which is 8 bits. How do I append a zero in the MSB position?
A very convenient way is to have a function return the binary representation in the form of a string. This allows the binary representation to be used within a normal printf format string rather than having the bits spit out at the current cursor position. To get an exact number of digits, you pad the binary string to the required number of places (e.g. 8, 16, 32, ...). The following makes use of a static variable to allow returning the buffer, but the same can easily be implemented by allocating space for the buffer dynamically. The preprocessor checks are not required, as you could simply hardwire the length of the buffer to 64 + 1, but for the sake of completeness a check for x86/x86_64 is included and BITS_PER_LONG is set accordingly.
#include <stdio.h>

#if defined(__LP64__) || defined(_LP64)
# define BUILD_64 1
#endif

#ifdef BUILD_64
# define BITS_PER_LONG 64
#else
# define BITS_PER_LONG 32
#endif

char *binstr (unsigned long n, size_t sz);

int main (void) {

    printf ("\n 50 (decimal) : %s (binary)\n\n", binstr (50, 8));

    return 0;
}

/* returns pointer to binary representation of 'n' zero padded to 'sz'. */
char *binstr (unsigned long n, size_t sz)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;
    register size_t i;

    if (!n) {
        *s = '0';
        return s;
    }

    for (i = 0; i < sz; i++)
        *(--p) = (n >> i & 1) ? '1' : '0';

    return p;
}
Output
$ ./bin/bincnv
50 (decimal) : 00110010 (binary)
Note: you cannot make repeated calls in the same printf statement due to the static buffer. If you allocate dynamically, you can call the function as many times as you like in the same printf statement.
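A possible dynamically allocating variant could look like the sketch below (binstr_alloc is only an illustrative name; it additionally needs <stdlib.h> and <string.h>, sz must not exceed BITS_PER_LONG, and the caller must free the returned buffer):

/* sketch of a dynamically allocating variant; caller must free the result */
char *binstr_alloc (unsigned long n, size_t sz)
{
    char *s = calloc (BITS_PER_LONG + 1, 1);
    char *p;
    size_t i;

    if (!s) return NULL;
    p = s + BITS_PER_LONG;

    if (!n) {
        *s = '0';
        return s;
    }

    for (i = 0; i < sz; i++)
        *(--p) = (n >> i & 1) ? '1' : '0';

    /* move the digits to the start of the block so free() gets the original pointer */
    memmove (s, p, sz + 1);

    return s;
}

With that, two conversions can appear in the same printf statement, each result being freed afterwards.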
Also note, if you do not care about padding the binary return value to any specific length and just want the binary representation to start with the most significant bit, the following simpler version can be used:
/* simple return of binary string */
char *binstr (unsigned long n)
{
    static char s[BITS_PER_LONG + 1] = {0};
    char *p = s + BITS_PER_LONG;

    if (!n) {
        *s = '0';
        return s;
    }

    while (n) {
        *(--p) = (n & 1) ? '1' : '0';
        n >>= 1;
    }

    return p;
}
Modify your code as shown below:
        quotient = quotient / 2;
    }

    /* ---- Add the following code ---- */
    {
        int group_size = 8; /* Or CHAR_BIT */
        int padding = group_size - ((i - 1) % group_size); /* i was inited with 1 */

        if (padding != group_size) {
            /* Add padding */
            while (padding-- != 0) binaryNumber[i++] = 0;
        }
    }
    /* ------- Modification ends -------- */

    printf("Equivalent binary value of decimal number %ld: ", decimalNumber);
This code calculates the number of padding bits required to print the number and fills the padding bits with 0.
If you want a 7-bit answer, change group_size to 7.
Use this for printing your result:

for (j = 8; j > 0; j--)
    printf("%d", binaryNumber[j]);

This always prints 8 binary digits (the conversion loop stores the least significant bit at index 1, so indices 8 down to 1 cover 8 bits).
Edit
The int array binaryNumber must be initialized with zeros, before the conversion loop, to make this work:

for (j = 1; j <= 8; j++) binaryNumber[j] = 0;
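For comparison, a minimal standalone way to print exactly 8 binary digits is to test each of the 8 bits directly, most significant bit first (this sketch assumes the value fits in 8 bits):

#include <stdio.h>

int main(void)
{
    unsigned int value = 50;

    /* print exactly 8 binary digits, most significant bit first */
    for (int bit = 7; bit >= 0; bit--)
        putchar((value >> bit & 1) ? '1' : '0');
    putchar('\n');  /* prints 00110010 for 50 */

    return 0;
}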

unsigned to hex digit

I got a problem that says: Form a character array based on an unsigned int. The array will represent that int in hexadecimal notation. Do this using bitwise operators.
So, my idea is the following: I create a mask that has 1's for its 4 lowest value bits.
I shift the bits of the given int right by 4 and use & on that int and the mask. I repeat this until (int != 0). My question is: when I get the individual hex digits (packs of 4 bits), how do I convert them to a char? For example, I get:
x & mask = 1101(2) = 13(10) = D(16)
Is there a function to convert an int to hex representation, or do I have to use brute force with switch statement or whatever else?
I almost forgot, I am doing this in C :)
Here is what I mean:
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 4

int main() {
    unsigned int x, y, i, mask;
    char a[4];

    printf("Enter a positive number: ");
    scanf("%u", &x);

    for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        y = x & mask;
        a[i] = FICTIVE_NUM_TO_HEX_DIGIT(y);
    }

    print_array(a);

    return EXIT_SUCCESS;
}
You are almost there. The simplest method to convert an integer in the range from 0 to 15 to a hexadecimal digit is to use a lookup table,
char hex_digits[] = "0123456789ABCDEF";
and index into that,
a[i] = hex_digits[y];
in your code.
Remarks:
char a[4];
is probably too small. One hexadecimal digit corresponds to four bits, so with CHAR_BIT == 8, you need up to 2*sizeof(unsigned) chars to represent the number, generally, (CHAR_BIT * sizeof(unsigned int) + 3) / 4. Depending on what print_array does, you may need to 0-terminate a.
for (i = sizeof(unsigned int), mask = ~(~0 << 4); x; i--, x >>= BLOCK)
initialising i to sizeof(unsigned int) skips the most significant bits, i should be initialised to the last valid index into a (except for possibly the 0-terminator, then the penultimate valid index).
The mask can more simply be defined as mask = 0xF, that has the added benefit of not invoking undefined behaviour, which
mask = ~(~0 << 4)
probably does. 0 is an int, and thus ~0 is one too. On two's complement machines (that is almost everything nowadays), the value is -1, and shifting negative integers left is undefined behaviour.
char buffer[10] = {0};
int h = 17;
sprintf(buffer, "%02X", h);
Try something like this:
char hex_digits[] = "0123456789ABCDEF";
for (i = 0; i < ((sizeof(unsigned int) * CHAR_BIT + 3) / 4); i++) {
    digit = (x >> (sizeof(unsigned int) * CHAR_BIT - 4)) & 0x0F;
    x = x << 4;
    a[i] = hex_digits[digit];
}
Ok, this is where I got:
#include <stdio.h>
#include <stdlib.h>

#define BLOCK 4

void printArray(char*, int);

int main() {
    unsigned int x, mask;
    int size = sizeof(unsigned int) * 2, i;
    char a[size], hexDigits[] = "0123456789ABCDEF";

    for (i = 0; i < size; i++)
        a[i] = 0;

    printf("Enter a positive number: ");
    scanf("%u", &x);

    for (i = size - 1, mask = ~(~0 << 4); x; i--, x >>= BLOCK) {
        a[i] = hexDigits[x & mask];
    }

    printArray(a, size);

    return EXIT_SUCCESS;
}

void printArray(char a[], int n) {
    int i;
    for (i = 0; i < n; i++)
        printf("%c", a[i]);
    putchar('\n');
}
I have compiled it; it runs and it does the job correctly. I don't know... Should I be worried that this problem was a bit hard for me? At my faculty, during exams, we must write our code by hand, on a piece of paper... I don't imagine I would have done this right.
Is there a better (less complicated) way to do this problem? Thank you all for your help :)
I would consider the impact of potential padding bits when shifting, as shifting by anything equal to or greater than the number of value bits that exist in an integer type is undefined behaviour.
Perhaps you could terminate the string first using: array[--size] = '\0';, write the smallest nibble (hex digit) using array[--size] = "0123456789ABCDEF"[value & 0x0f], move onto the next nibble using: value >>= 4, and repeat while value > 0. When you're done, return array + size or &array[size] so that the caller knows where the hex sequence begins.
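A sketch of that suggestion (the function name to_hex and the main() driver are only illustrative):

#include <stdio.h>
#include <limits.h>

/* fill the buffer from the end; returns a pointer to the first hex digit */
static char *to_hex(unsigned int value, char *array, int size)
{
    array[--size] = '\0';                                  /* terminate first */
    do {
        array[--size] = "0123456789ABCDEF"[value & 0x0f];  /* smallest nibble */
        value >>= 4;                                       /* next nibble */
    } while (value > 0);
    return array + size;                                   /* start of the digits */
}

int main(void)
{
    char buf[(CHAR_BIT * sizeof(unsigned int) + 3) / 4 + 1];

    printf("%s\n", to_hex(3735928559u, buf, (int) sizeof buf)); /* prints DEADBEEF */
    return 0;
}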
