Decimal to binary in the C library - c

I want to know if there is a function in the C library that converts a decimal number to binary and saves it digit by digit in an array.
For example: 2 -> 10 -> array[0] = 0, array[1] = 1.
Thanks.

Here you go:

#include <stdio.h>

void dec2bin(int c)
{
    int i;
    for (i = 31; i >= 0; i--) {   /* assumes a 32-bit int */
        if ((c & (1 << i)) != 0) {
            printf("1");
        } else {
            printf("0");
        }
    }
}
But this only prints the value of an integer in binary format. All data is represented in binary format internally anyway.
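If the goal is to store the digits instead of printing them, which is what the question asks, a minimal variation of the same loop could look like this (dec2bin_array is a name made up here; the caller supplies an array of at least 32 ints):

#include <limits.h>

/* Store the bits of c least significant first, matching the question's
 * example: 2 -> bits[0] = 0, bits[1] = 1.
 * bits[] needs sizeof(int) * CHAR_BIT elements. */
void dec2bin_array(int c, int bits[])
{
    int i, n = (int)(sizeof(int) * CHAR_BIT);

    for (i = 0; i < n; i++)
        bits[i] = (c >> i) & 1;
}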

You did not define what a decimal number means to you. I am guessing it is the character representation (e.g. in ASCII) of that number.
Notice that numbers are just numbers. Binary or decimal numbers do not exist as separate things, but a given number may have a binary, and a decimal, representation. Numbers are not made of digits!
Then you probably want sscanf(3), strtol(3), or atoi to convert a string to an integer (e.g. an int or a long), and snprintf(3) to convert an integer to a string.
If you want to convert a number to a binary string (with only 0 or 1 chars in it) you need to code that conversion yourself. To convert a binary string to some long, use strtol with base 2.
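For instance, a round trip with those functions might look like this (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    long v = strtol("101010", NULL, 2);    /* binary string -> number */
    char dec[32];

    snprintf(dec, sizeof dec, "%ld", v);   /* number -> decimal string */
    printf("%s\n", dec);                   /* prints 42 */
    return 0;
}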

There is no such function in the C standard library. Anyway, you can write your own:

void get_bin(int *dst, intmax_t x);

Where dst is the resulting array (of 1s and 0s) and x is the number to convert.
For example:
C89 version:
#include <limits.h>

void get_bin(int *dst, int x)
{
    int i;
    for (i = sizeof x * CHAR_BIT - 1; i >= 0; --i)
        *dst++ = x >> i & 1;
}
C99 version:
#include <limits.h>
#include <stdint.h>

void get_bin(int *dst, intmax_t x)
{
    for (intmax_t i = sizeof x * CHAR_BIT - 1; i >= 0; --i)
        *dst++ = x >> i & 1;
}
It works as follows: we walk through the binary representation of x from left to right. The expression sizeof x * CHAR_BIT - 1 gives the index of the most significant bit (the number of bits in x, minus 1). Then we take the value of each bit (*dst++ = x >> i & 1) and push it into the array.
Example of use:
#include <stdio.h>
#include <limits.h>

void get_bin(int *dst, int x)
{
    int i;
    for (i = sizeof x * CHAR_BIT - 1; i >= 0; --i)
        *dst++ = x >> i & 1;
}

int main(void)
{
    int buf[128];   /* binary digits */
    int n = 42;     /* number to convert */
    unsigned int i;

    get_bin(buf, n);
    for (i = 0; i < sizeof n * CHAR_BIT; ++i)
        printf("%d", buf[i]);
    return 0;
}

Here is a version that explicitly uses a string buffer:

#include <string.h>

const char *str2bin(int num, char buffer[], const int BUFLEN)
{
    memset(buffer, '\0', BUFLEN);

    int i = BUFLEN - 1;       /* index into buffer, running backwards */
    int r;                    /* remainder */
    char *p = &buffer[i - 1]; /* buffer[i] holds the string terminator '\0' */

    while ((i > 0) && (num > 0)) {   /* i > 0, or p would run off the front */
        r = num % 2;
        num = num / 2;
        *p = r + '0';
        i--;
        p--;
    }
    return p + 1;   /* note: yields an empty string for num <= 0 */
}

Use char *itoa(int value, char *str, int base);
Note that itoa is widely available but not part of the C standard.

the function could go like this, passing the running state as a parameter (the original used static variables, which give wrong results on any call after the first):

static int dec2bin_rec(int n, int osn)
{
    if (n == 0)
        return 0;
    return (n % 2) * osn + dec2bin_rec(n / 2, osn * 10);
}

int dec2bin(int n)
{
    return dec2bin_rec(n, 1);
}

As far as I know there is no such function in any C library. But here's a recursive function that returns the binary representation of a decimal number as an int:
int dec2bin(int n)
{
    if (n == 0)
        return 0;
    return n % 2 + 10 * dec2bin(n / 2);
}
The largest number it can represent this way is 1023 (1111111111 in binary) because of the int range limit (at most ten decimal digits), but you can replace int with long long to increase the range. Then you can store the return value in an array like this:

int array[100], i = 0;
int n = dec2bin(some_number);
do {
    array[i] = n % 10;   /* least significant binary digit first */
    n /= 10;
    i++;
} while (n > 0);         /* n > 10 would drop the most significant digit */

I know this is an old post, but I hope this will still help somebody!

If C++ is an option, you can convert any integer to binary with a bitset, for example:

#include <iostream>
#include <bitset>
using namespace std;

int main(){
    int decimal = 20;
    bitset<5> binary20(decimal);
    cout << binary20 << endl;
    return 0;
}

So you get an output like 10100. Bitsets also have a to_string() method for any purpose. (Note this is C++, not C.)

Related

How do I fix my `itoa` implementation so it doesn't print reversed output?

I want to convert an integer into a string of numeric characters in C.
I've tried using itoa, but it's non-standard and not provided by my C library.
I tried to implement my own itoa, but it's not working properly:
#include <stdlib.h>
#include <stdio.h>

char *itoa(int val, char *buf, int base)
{
    size_t ctr = 0;
    for ( ; val; val /= base )
    {
        buf[ctr++] = '0' + (val % base);
    }
    buf[ctr] = 0;
    return buf;
}

int main(void)
{
    unsigned char c = 201;
    char *buf = malloc(sizeof(c)*8+1);

    itoa(c, buf, 2);
    puts(buf);
    free(buf);
}
It gives reversed output.
For example, if c is 'A' and base is 2, the output is this: 0101101
The output I want it to be is this: 1011010
How do I fix this issue?
Similar questions
I've already seen this question: Is there a printf converter to print in binary format?
I do not want a printf format specifier to print an integer as binary, I want to convert the binary to a string.
I've already seen this question: Print an int in binary representation using C
Although the answer does convert an integer into a string of binary digits, that's the only thing it can do.
Restrictions
I want itoa to be able to work with other bases, such as 10, 8, etc. and print correctly (i.e. 12345 translates to "12345" and not to "11000000111001").
I do not want to use printf or sprintf to do this.
I do not care about the length of the string as long as the result is correct.
I do not want to convert the integer into ASCII characters other than numeric ones, with the exception of bases greater than 10, in which case the characters may be alphanumeric.
The answer must fit this prototype exactly:
char *itoa(int val, char *buf, int base);
There may be a function called nitoa that has this prototype and returns the number of characters required to hold the result of itoa:
size_t nitoa(int val, int base);
Rather than reversing the string, form it right-to-left, as @user3386109 suggested in the comments.
I recommend the helper function also receive a size.
#include <limits.h>
#include <stdio.h>   // fprintf
#include <string.h>  // memcpy

char* itostr(char *dest, size_t size, int a, int base) {
    // Max text needs occur with itostr(dest, size, INT_MIN, 2)
    char buffer[sizeof a * CHAR_BIT + 1 + 1];
    static const char digits[36] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    if (base < 2 || base > 36) {
        fprintf(stderr, "Invalid base");
        return NULL;
    }

    // Start filling from the end
    char* p = &buffer[sizeof buffer - 1];
    *p = '\0';

    // Work with negative `int` so INT_MIN does not overflow on negation
    int an = a < 0 ? a : -a;
    do {
        *(--p) = digits[-(an % base)];
        an /= base;
    } while (an);

    if (a < 0) {
        *(--p) = '-';
    }

    size_t size_used = &buffer[sizeof(buffer)] - p;
    if (size_used > size) {
        fprintf(stderr, "Scant buffer %zu > %zu", size_used, size);
        return NULL;
    }
    return memcpy(dest, p, size_used);
}
Then to provide memory, use a compound literal.

// compound literal, C99 or later
#define INT_STR_SIZE (sizeof(int)*CHAR_BIT + 2)
#define MY_ITOA(x, base) itostr((char [INT_STR_SIZE]){""}, INT_STR_SIZE, (x), (base))

Now you can call it multiple times.

int main(void) {
    printf("%s %s %s %s\n", MY_ITOA(INT_MIN, 10), MY_ITOA(-1, 10), MY_ITOA(0, 10), MY_ITOA(INT_MAX, 10));
    printf("%s %s\n", MY_ITOA(INT_MIN, 2), MY_ITOA(INT_MIN, 36));
    return 0;
}
Output
-2147483648 -1 0 2147483647
-10000000000000000000000000000000 -ZIK0ZK
Note: the question's sizeof(c)*8+1 sizing, applied to an int, is one char too small for INT_MIN in base 2: 32 digits plus a sign plus the terminator needs sizeof(int)*CHAR_BIT + 2.
This solution works for me (note that strrev() is not part of standard C, though it is common on Windows toolchains; see the fallback below):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#define itoa lltoa
#define utoa ulltoa
#define ltoa lltoa
#define ultoa ulltoa
#define nitoa nlltoa
#define nutoa nulltoa
#define nltoa nlltoa
#define nultoa nulltoa

#define BASE_BIN 2
#define BASE_OCT 8
#define BASE_DEC 10
#define BASE_HEX 16
#define BASE_02Z 36

__extension__
char *ulltoa(unsigned long long val, char *buf, int base)
{
    int remainder;
    char c, *tmp = buf;

    if (base < BASE_BIN) {
        errno = EINVAL;
        return NULL;
    }
    do {
        remainder = val % base;
        if (remainder >= BASE_DEC)
            c = 'a' - BASE_DEC;   /* letters for digits >= 10 */
        else
            c = '0';
        *tmp++ = remainder + c;
        val /= base;
    } while (val);
    *tmp = 0;
    return strrev(buf);   /* digits were produced in reverse order */
}
__extension__
size_t nulltoa(unsigned long long val, int base)
{
    size_t size = 0;

    if (base < BASE_BIN) {
        errno = EINVAL;
        return 0;
    }
    if (!val) size++;
    for ( ; val; val /= base, size++ );
    return size;
}
__extension__
char *lltoa(long long val, char *buf, int base)
{
    char *start = buf;   /* remember the start so the sign is not skipped */

    if (val < 0 && base > BASE_BIN) {
        val = -val;
        *buf++ = '-';
    }
    if (ulltoa(val, buf, base) == NULL)
        return NULL;
    return start;
}
__extension__
size_t nlltoa(long long val, int base)
{
    size_t size = 0;

    if (val < 0 && base > BASE_BIN) {
        val = -val;
        size++;
    }
    return size + nulltoa(val, base);
}
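Since strrev() is not in the C standard library, here is a minimal fallback (place it above ulltoa() if your library lacks it), plus a usage sketch; the buffer size here is arbitrary:

#include <stdio.h>
#include <string.h>

/* fallback for libraries without the non-standard strrev() */
char *strrev(char *s)
{
    size_t i = 0, j = strlen(s);

    while (i + 1 < j) {   /* swap the outer characters, moving inward */
        char t = s[i];
        s[i++] = s[--j];
        s[j] = t;
    }
    return s;
}

int main(void)
{
    char buf[64];

    printf("%s\n", lltoa(-42, buf, BASE_DEC));  /* prints -42 */
    printf("%s\n", utoa(255u, buf, BASE_HEX));  /* prints ff */
    printf("%zu\n", nitoa(-42, BASE_DEC));      /* prints 3 */
    return 0;
}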

Converting int to char in C

Right now I am trying to convert an int to a char in C. After doing research, I found that I should be able to do it like this:

int value = 10;
char result = (char) value;

What I would like is for this to return 'A' (and for 0-9 to return '0'-'9'), but this returns a newline character, I think.
My whole function looks like this:
char int2char (int radix, int value) {
    if (value < 0 || value >= radix) {
        return '?';
    }
    char result = (char) value;
    return result;
}
To convert an int to a char you do not have to do anything:

char x;
int y;
/* do something */
x = y;

If you want one int converted to the printable (usually ASCII) digit, as in your example:

const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

int inttochar(int val, int base)
{
    return digits[val % base];
}

If you want to convert to a string (char *), then you need to use the standard sprintf, the common but non-standard itoa, ltoa, utoa, ultoa..., or write one yourself:
#include <stdlib.h>   /* abs */
#include <string.h>   /* strlen */

char *reverse(char *str);   /* assumed: reverses str in place */
const char digits[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

char *convert(int number, char *buff, int base)
{
    char *result = (buff == NULL || base > strlen(digits) || base < 2) ? NULL : buff;
    char sign = 0;

    if (number < 0)
    {
        sign = '-';
    }
    if (result != NULL)
    {
        do
        {
            *buff++ = digits[abs(number % base)];
            number /= base;
        } while (number);
        if (sign) *buff++ = sign;
        *buff = 0;
        reverse(result);
    }
    return result;
}
A portable way of doing this would be to define a

const char* foo = "0123456789ABC...";

where ... are the rest of the characters that you want to consider. Then foo[value] will evaluate to a particular char. For example foo[0] will be '0', and foo[10] will be 'A'.
If you assume a particular encoding (such as the common but by no means ubiquitous ASCII) then your code is not strictly portable.
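For example, using the full 36-character table that appears in the other answers (a minimal sketch):

#include <stdio.h>

int main(void)
{
    const char *foo = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    printf("%c %c\n", foo[0], foo[10]);   /* prints: 0 A */
    return 0;
}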
Characters use an encoding (typically ASCII) to map numbers to a particular character. The codes for the characters '0' to '9' are consecutive, so for values less than 10 you add the value to the character constant '0'. For values 10 or more, you add the value minus 10 to the character constant 'A':
char result;
if (value >= 10) {
    result = 'A' + value - 10;
} else {
    result = '0' + value;
}
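Folded into the question's int2char, keeping its bounds check, that becomes (a sketch):

char int2char (int radix, int value) {
    if (value < 0 || value >= radix) {
        return '?';
    }
    if (value >= 10) {
        return 'A' + value - 10;   /* letters for 10 and up */
    }
    return '0' + value;            /* digits 0-9 */
}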
Converting Int to Char
I take it that OP wants more than just a 1-digit conversion, as a radix was supplied.
To convert an int into a string (not just 1 char) there is the sprintf(buf, "%d", value) approach.
To do so in any radix, string management becomes an issue, as well as dealing with the corner case of INT_MIN.
The following C99 solution returns a char* whose lifetime is valid to the end of the block. It does so by providing a compound literal via the macro.
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

// Maximum buffer size needed
#define ITOA_BASE_N (sizeof(unsigned)*CHAR_BIT + 2)

char *itoa_base(char *s, int x, int base) {
    s += ITOA_BASE_N - 1;
    *s = '\0';
    if (base >= 2 && base <= 36) {
        int x0 = x;
        do {
            *(--s) = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"[abs(x % base)];
            x /= base;
        } while (x);
        if (x0 < 0) {
            *(--s) = '-';
        }
    }
    return s;
}

#define TO_BASE(x,b) itoa_base((char [ITOA_BASE_N]){0} , (x), (b))
Sample usage and tests
void test(int x) {
    printf("base10:% 11d base2:%35s base36:%7s ", x, TO_BASE(x, 2), TO_BASE(x, 36));
    printf("%ld\n", strtol(TO_BASE(x, 36), NULL, 36));
}

int main(void) {
    test(0);
    test(-1);
    test(42);
    test(INT_MAX);
    test(-INT_MAX);
    test(INT_MIN);
}
Output
base10: 0 base2: 0 base36: 0 0
base10: -1 base2: -1 base36: -1 -1
base10: 42 base2: 101010 base36: 16 42
base10: 2147483647 base2: 1111111111111111111111111111111 base36: ZIK0ZJ 2147483647
base10:-2147483647 base2: -1111111111111111111111111111111 base36:-ZIK0ZJ -2147483647
base10:-2147483648 base2: -10000000000000000000000000000000 base36:-ZIK0ZK -2147483648
Ref How to use compound literals to fprintf() multiple formatted numbers with arbitrary bases?
Check out the ASCII table.
The values stored in a char are interpreted as the characters corresponding to that table. The value 10 is a newline.
Characters in C use an encoding (typically ASCII) that maps numbers to characters, so under the hood 'A' is actually the number 65 (stored in binary, like everything else). A char in C is just a small integer type wide enough to hold every ASCII character. To turn an int digit into the character for that digit, you therefore need a little arithmetic, as in the function you've written; plain assignment only narrows the value (and the compiler may warn, since char is smaller than int). Thus,

if (value < 10) return '0' + value;
return 'A' + value - 10;

will be what you want to return from your function. Keep your bounds checks with radix; imho that is good practice in C.
1. Converting int to char by type casting
Source file charConvertByCasting.c:

#include <stdio.h>

int main(){
    int i = 66;
    printf("%c", (char) i);   // type casting syntax: (type_name) expression
    return 0;
}
Executable charConvertByCasting.exe command line output:
C:\Users\boqsc\Desktop\tcc>tcc -run charconvert.c
B
Additional resources:
https://www.tutorialspoint.com/cprogramming/c_type_casting.htm
https://www.tutorialspoint.com/cprogramming/c_data_types.htm
2. Convert int to char by assignment
Source file charConvertByAssignment.c:

#include <stdio.h>

int main(){
    int i = 66;
    char c = i;
    printf("%c", c);
    return 0;
}
Executable charConvertByAssignment.exe command line output:
C:\Users\boqsc\Desktop\tcc>tcc -run charconvert.c
B
You can do

char a;
a = '0' + 5;

and you will get the character representation of that number.
Borrowing the idea from the existing answers, i.e. making use of an array index.
Here is a "just works" simple demo for integer-to-char[] conversion in base 10, without using any of <stdio.h>'s printf family for the conversion itself.
Test:
$ cc -o testint2str testint2str.c && ./testint2str
Result: 234789
Code:
#include <stdio.h>
#include <string.h>

static char digits[] = "0123456789";

void int2str (char *buf, size_t sz, int num);

/*
  Test:
  cc -o testint2str testint2str.c && ./testint2str
*/
int
main ()
{
  int num = 234789;
  char buf[1024] = { 0 };

  int2str (buf, sizeof buf, num);
  printf ("Result: %s\n", buf);
}

void
int2str (char *buf, size_t sz, int num)
{
  /*
    Convert integer type to char*, in base-10 form.
  */
  char *bufp = buf;
  int i = 0;

  /* NOTE-1 */
  void __reverse (char *__buf, int __start, int __end)
  {
    char __bufclone[__end - __start];
    int j;
    int __nchars = sizeof __bufclone;

    for (j = 0; j < __nchars; j++)
      __bufclone[j] = __buf[__end - 1 - j];
    memmove (__buf, __bufclone, __nchars);
  }

  while (num > 0)
    {
      bufp[i++] = digits[num % 10]; /* NOTE-2 */
      num /= 10;
    }
  __reverse (buf, 0, i);
  /* NOTE-3 */
  bufp[i] = '\0';
}

/*
  NOTE-1: "Nested function" is a GNU C extension. Move it outside if not
  compiling with GCC.
  NOTE-2: 10 can be replaced by any radix, like 16 for hexadecimal output.
  NOTE-3: Make sure the trailing null terminator is inserted after all
  things are done.
*/

Need to convert int to string using C

Hi, I am pretty new to coding and I really need help.
Basically I have a decimal value and I converted it to a binary value,
using this method:
long decimalToBinary(long n)
{
    int remainder;
    long binary = 0, i = 1;

    while (n != 0)
    {
        remainder = n % 2;
        n = n / 2;
        binary = binary + (remainder * i);
        i = i * 10;
    }
    return binary;
}
And I want to give each character of the binary its own space inside an array. However, I can't seem to save the digits from the return value in my string array. I think it has something to do with converting the long to a string, but I could be wrong! Here is what I have so far.
I do not want to use sprintf(); I do not wish to print the value, I just want the value stored inside it so that the if conditions can read it. Any help would be appreciated!
int decimalG = 24;
long binaryG = decimalToBinary(decimalG);
char myStringG[8] = {binaryG};

for (int i = 0; i < 8; i++)
{
    if (myStringG[i] == '1')
    {
        T1();
    }
    else
    {
        T0();
    }
}
In this case, since the decimal is 24, the binary would be 11000, therefore it should execute the function T1() 2 times and T0() 6 times. But it doesn't do that, and I can't seem to find a way to store the saved values in the array.
PS: the itoa() function is also not an option. Thanks in advance! :)
As the post is tagged arm, using malloc() might not be the best approach, although it is the simplest. If you insist on using arrays:
#include <stdio.h>
#include <stdlib.h>

int decimalToBinary(long n, char out[], int len)
{
    long remainder;
    // C arrays are zero based
    len--;
    // TODO: check if the input is reasonable
    while (n != 0) {
        // pick a bit
        remainder = n % 2;
        // shift n one bit to the right
        // It is the same as n = n/2 but
        // is more telling of what you are doing:
        // shifting the whole thing to the right
        // and dropping the least significant bit
        n >>= 1;
        // Check boundaries! Always!
        if (len < 0) {
            // return zero for "Fail"
            return 0;
        }
        // doing the following four things at once:
        // cast remainder to char
        // add the numerical value of the digit "0"
        // put it into the array at place len
        // decrement len
        out[len--] = (char) remainder + '0';
    }
    // pad the front with leading zeros so the
    // whole array is initialized
    while (len >= 0) {
        out[len--] = '0';
    }
    // return non-zero value for "All OK"
    return 1;
}

// I don't know what you do here, but it
// doesn't matter at all for this example
void T0()
{
    fputc('0', stdout);
}

void T1()
{
    fputc('1', stdout);
}

int main()
{
    // your input
    int decimalG = 24;
    // an array able to hold 8 (eight) elements of type char
    char myStringG[8];

    // call decimalToBinary with the number, the array and
    // the length of that array
    if (!decimalToBinary(decimalG, myStringG, 8)) {
        fprintf(stderr, "decimalToBinary failed\n");
        exit(EXIT_FAILURE);
    }
    // Print the whole array
    // How to get rid of the leading zeros is left to you
    for (int i = 0; i < 8; i++) {
        if (myStringG[i] == '1') {
            T1();
        } else {
            T0();
        }
    }
    // just for the optics
    fputc('\n', stdout);
    exit(EXIT_SUCCESS);
}
Computing the length needed is tricky, but if you know the size of the long your micro uses (8, 16, 32, or even 64 bits these days) you can take that as the maximum size for the array. It leaves the leading zeros in, but that should not be a problem, or is it?
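For instance, sizing the array from the type itself could look like this (BIN_LEN is a name made up here):

#include <limits.h>
#include <stdio.h>

/* one char per bit of a long: 32 or 64 depending on the platform */
#define BIN_LEN (sizeof(long) * CHAR_BIT)

int main(void)
{
    char myStringG[BIN_LEN];   /* big enough for any long, leading zeros included */

    printf("%zu chars available\n", sizeof myStringG);
    return 0;
}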
To achieve your goal, you don't have to convert a decimal value to binary:
unsigned decimalG = 24;  // Assumed positive; negative values
                         // have implementation-defined representation
for (; decimalG; decimalG >>= 1) {
    if (decimalG & 1) {
        // Do something
    } else {
        // Do something else
    }
}
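For instance, wiring the question's T1()/T0() calls into that loop (note the bits come out least significant first, so 24 gives three T0 calls and then two T1 calls):

#include <stdio.h>

void T1(void) { puts("T1 called"); }
void T0(void) { puts("T0 called"); }

int main(void)
{
    unsigned decimalG = 24;   /* 11000 in binary */

    for (; decimalG; decimalG >>= 1) {
        if (decimalG & 1)
            T1();
        else
            T0();
    }
    return 0;
}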
Or you can use a union, but I'm not sure whether this approach is well defined by the standard.
If you stick to writing decimalToBinary, note that you'll have to use an array:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

char *decimalToBinary(unsigned n);

int
main(void) {
    int decimalG = 15;
    char *binary = decimalToBinary(decimalG);

    puts(binary);
    free(binary);
}

char *
decimalToBinary(unsigned n) {
    // Don't forget to free() it after use!!!
    char *binary = malloc(sizeof(unsigned) * CHAR_BIT + 1);
    if (!binary) return 0;

    size_t i;
    for (i = 0; i < sizeof(unsigned) * CHAR_BIT; i++) {
        binary[i] = '0' + ((n >> i) & 1); // in reverse order (LSB first)
    }
    binary[i] = 0;
    return binary;
}
Use the itoa (integer-to-ascii) function.
http://www.cplusplus.com/reference/cstdlib/itoa/
EDIT: Correction:
Don't be an idiot, use the itoa (integer-to-ascii) function.
http://www.cplusplus.com/reference/cstdlib/itoa/
EDIT:
Maybe I wasn't clear enough. I saw the line that said:
*Ps the Itoa(); function is also not an option.
This is completely unreasonable. You want to reinvent the wheel, but you want someone else to do it? What do you possibly have against itoa? It has been around for decades and ships with most toolchains (strictly speaking it is not in the ISO C standard, which is presumably why your library lacks it, but equivalents are easy to find).
I want to give each character of the binary its own space
inside an array. However, I can't seem to save digits
from the return values in my string array.

There are a number of ways to approach this, if I understand what you are asking. First, there is no need to actually store the binary representation of your number in an array in order to call T1() or T0() based on the value of any given bit that makes up the number.
Take your example 24 (binary 11000). If I read your post correctly you state:
In this case since the decimal is 24, the binary
would be 11000 therefore it should execute the the
function T1() 2 times and T0() 6 times.
(I'm not sure where you get 6 times, it looks like you intended that T0() would be called 3 times)
If you have T0 and T1 defined, for example, to simply let you know when they are called, e.g.:
void T1 (void) { puts ("T1 called"); }
void T0 (void) { puts ("T0 called"); }
You can write a function (say named callt) to call T1 for each 1-bit and T0 for each 0-bit in a number simply as follows:
void callt (const unsigned long v)
{
    if (!v) { putchar ('0'); return; }

    size_t sz = sizeof v * CHAR_BIT;
    unsigned long rem = 0;

    while (sz--)
        if ((rem = v >> sz)) {   /* skip leading zero bits */
            if (rem & 1)
                T1();
            else
                T0();
        }
}
So for example, if you passed 24 to the function, callt (24), the output would be:
$ ./bin/dec2bincallt
T1 called
T1 called
T0 called
T0 called
T0 called
(full example provided at the end of answer)
On the other hand, if you really do want to give each character of the binary its own space inside an array, then you would simply need to pass an array to capture the bit values (either the ASCII character representations of '0' and '1', or just 0 and 1) instead of calling T0 and T1. (You would also add a few lines to handle v = 0, and the nul-terminating character if you will use the array as a string.) For example:
/** copy 'sz' bits of the binary representation of 'v' to 's'.
 *  returns pointer to 's' on success, empty string otherwise.
 *  's' must be adequately sized to hold 'sz + 1' bytes.
 */
char *bincpy (char *s, unsigned long v, unsigned sz)
{
    if (!s) return s;   /* nothing to do without a buffer */
    if (!sz) {
        *s = 0;
        return s;
    }
    if (!v) {
        *s = '0';
        *(s + 1) = 0;
        return s;
    }

    unsigned i;
    for (i = 0; i < sz; i++)
        s[i] = (v >> (sz - 1 - i)) & 1 ? '1' : '0';
    s[sz] = 0;

    return s;
}
Let me know if you have any additional questions. Below are two example programs. Both take as their first argument the number to convert (or to process) as binary (default: 24 if no argument is given). The first simply calls T1 for each 1-bit and T0 for each 0-bit:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h> /* for CHAR_BIT */

void callt (const unsigned long v);
void T1 (void) { puts ("T1 called"); }
void T0 (void) { puts ("T0 called"); }

int main (int argc, char **argv) {

    unsigned long v = argc > 1 ? strtoul (argv[1], NULL, 10) : 24;

    callt (v);

    return 0;
}

void callt (const unsigned long v)
{
    if (!v) { putchar ('0'); return; }

    size_t sz = sizeof v * CHAR_BIT;
    unsigned long rem = 0;

    while (sz--)
        if ((rem = v >> sz)) {
            if (rem & 1) T1(); else T0();
        }
}
Example Use/Output
$ ./bin/dec2bincallt
T1 called
T1 called
T0 called
T0 called
T0 called
$ ./bin/dec2bincallt 11
T1 called
T0 called
T1 called
T1 called
The second stores each bit of the binary representation for the value as a nul-terminated string and prints the result:
#include <stdio.h>
#include <stdlib.h>

#define BITS_PER_LONG 64 /* define as needed */

char *bincpy (char *s, unsigned long v, unsigned sz);

int main (int argc, char **argv) {

    unsigned long v = argc > 1 ? strtoul (argv[1], NULL, 10) : 24;
    char array[BITS_PER_LONG + 1] = "";

    printf (" values in array: %s\n", bincpy (array, v, 16));

    return 0;
}

/** copy 'sz' bits of the binary representation of 'v' to 's'.
 *  returns pointer to 's' on success, empty string otherwise.
 *  's' must be adequately sized to hold 'sz + 1' bytes.
 */
char *bincpy (char *s, unsigned long v, unsigned sz)
{
    if (!s) return s;   /* nothing to do without a buffer */
    if (!sz) {
        *s = 0;
        return s;
    }
    if (!v) {
        *s = '0';
        *(s + 1) = 0;
        return s;
    }

    unsigned i;
    for (i = 0; i < sz; i++)
        s[i] = (v >> (sz - 1 - i)) & 1 ? '1' : '0';
    s[sz] = 0;

    return s;
}
Example Use/Output
(padding to 16 bits)
$ ./bin/dec2binarray
values in array: 0000000000011000
$ ./bin/dec2binarray 11
values in array: 0000000000001011

Binary to Decimal Conversion in C - Input Size Issue

I have to write a C program for one of my classes that converts a given binary number to decimal. My program works for smaller inputs, but not for larger ones. I believe this may be due to the conversion specifier I am using for scanf() but I am not positive. My code is below
#include <stdio.h>
#include <math.h>

int main(void)
{
    unsigned long inputNum = 0;
    int currentBinary = 0;
    int count = 0;
    float decimalNumber = 0;

    printf("Input a binary number: ");
    scanf("%lu", &inputNum);

    while (inputNum != 0)
    {
        currentBinary = inputNum % 10;
        inputNum = inputNum / 10;
        printf("%d\t%d\n", currentBinary, inputNum);
        decimalNumber += currentBinary * pow(2, count);
        ++count;
    }
    printf("Decimal conversion: %.0f", decimalNumber);
    return 0;
}
Running with a small binary number:
Input a binary number: 1011
1 101
1 10
0 1
1 0
Decimal conversion: 11
Running with a larger binary number:
Input a binary number: 1000100011111000
2 399133551
1 39913355
5 3991335
5 399133
3 39913
3 3991
1 399
9 39
9 3
3 0
Decimal conversion: 5264
"1000100011111000" is a 20 digit number. Certainly unsigned long is too small on your platform.
unsigned long is good - up to at least 10 digits. [1]
unsigned long long is better - up to at least 20 digits. [1]
To get past that:
Below is an any-size conversion that reads 1 char at a time and forms an unbounded decimal string.
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

// Double the decimal form of string: "512" --> "1024"
char *sdouble(char *s, size_t *len, int carry) {
    size_t i = *len;

    while (i > 0) {
        i--;
        int sum = (s[i] - '0') * 2 + carry;
        s[i] = sum % 10 + '0';
        carry = sum / 10;
    }
    if (carry) {
        (*len)++;
        s = realloc(s, *len + 1); // TBD OOM check
        memmove(&s[1], s, *len);
        s[0] = carry + '0';
    }
    return s;
}

int main(void) {
    int ch;
    size_t len = 1;
    char *s = malloc(len + 1); // TBD OOM check

    strcpy(s, "0");
    while ((ch = fgetc(stdin)) >= '0' && ch <= '1') {
        s = sdouble(s, &len, ch - '0');
    }
    puts(s);
    free(s);
    return 0;
}
100 digits
1111111111000000000011111111110000000000111111111100000000001111111111000000000011111111110000000000
1266413867935323811836706421760
[1] When the lead digit is 0 or 1.
When you do this for a large number inputNum

currentBinary = inputNum % 10;

its top portion gets "sliced off" on the conversion to int. If you would like to stay within the bounds of an unsigned long, switch currentBinary to unsigned long as well, and use an unsigned long format specifier in printf. Moreover, unsigned long may not be sufficiently large on many platforms, so you need to use unsigned long long.
Better yet, switch to reading the input into a string, validating that it contains only zeros and ones (you have to do that anyway), and do the conversion in a cleaner character-by-character way, as sketched below. That would let you go beyond the 19 binary digits that fit a 64-bit integer read as a decimal number, up to a full-scale input.
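For example, such a character-by-character conversion might look like this (a sketch; validation kept minimal):

#include <stdio.h>

int main(void)
{
    char buf[128];
    unsigned long long value = 0;

    printf("Input a binary number: ");
    if (scanf("%127s", buf) != 1)
        return 1;
    for (const char *p = buf; *p == '0' || *p == '1'; p++)
        value = value * 2 + (unsigned)(*p - '0');   /* shift in one bit per digit */
    printf("Decimal conversion: %llu\n", value);
    return 0;
}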
unsigned long holds a maximum value of 4294967295 on your platform, which means that in the process of scanf( "%lu", &inputNum ); you have already truncated the 16-digit input 1000100011111000.
I think reading inputNum into a string would help a lot. In the while loop, check whether the string is empty; in the loop body, take the last char of the string, check whether it is a '1' or a '0', and then update the result accordingly.
I was tasked with writing a binary-to-decimal converter that takes larger binary inputs, using embedded C, where we are not allowed to use library functions such as strlen. I found a simple way to write this conversion tool in C with either strlen or sizeof, as shown in the code below. Hope this helps. strlen is commented out, but either approach works fine; sizeof just has to account for the terminating 0 element in the array, which is why sizeof(number) - 1 is used. Cheers!
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

const char number[] = "100111111111111111111111";

int binToDec(const char *);

int main()
{
    printf("Output: %d", binToDec(number));   /* pass the array itself, not &number */
}

int binToDec(const char *n)
{
    const char *num = n;
    int decimal_value = 0;
    int base = 1;
    int i;
    int len = sizeof(number) - 1;
    //int len = strlen(number);

    for (i = len - 1; i >= 0; i--)
    {
        if (num[i] == '1')
            decimal_value += base;
        base = base * 2;
    }
    return decimal_value;
}

How to convert epoch decimal time to 64 bit binary and back to decimal in C

I am trying to create a program that converts decimals to binary at a specified size and then reverses the process, meaning from binary back to decimal. So far the code, from my (beginner's) point of view, looks correct: I can convert all numbers from decimal to binary and vice versa. I am stuck on the last part, where I am trying to convert the epoch time from a 64-bit binary number back to decimal. I cannot understand where I am going wrong, since the rest of the numbers seem to be recovered correctly. The sources where I found the scripts I am using are Binary to Decimal and Decimal to Binary.
Update: modified code to a short version:
I have modified the code to simply demonstrate the problem. The code works fine up to 32-bit binary conversion, but since I need to go up to 64 bits I do not know how to do that. I noticed that because I used int before, I hit the 32-bit limit, so I changed it to long long int to reach 64 bits.
I have provided a sample that converts the decimal 1 in 32-bit and in 64-bit format, which demonstrates the problem. The epoch time is the desired output, but I need to verify that the code works before I attempt that conversion.
Sample of code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <inttypes.h>

#define MAX_CHARACTERS 65

typedef struct rec {
    char transmit[MAX_CHARACTERS];
    char receive[MAX_CHARACTERS];
} RECORD;

char *decimal_to_binary(int n, int num); /* Define function */

char *decimal_to_binary(int n, int num) {
    long long int c, d, count;
    char *pointer;

    count = 0;
    pointer = (char*) malloc( num + 1 );
    if ( pointer == NULL )
        exit(EXIT_FAILURE);
    for ( c = num - 1; c >= 0; c-- ) {
        d = n >> c;
        if ( d & 1 )
            *( pointer + count ) = 1 + '0';
        else
            *( pointer + count ) = 0 + '0';
        count++;
    }
    *( pointer + count ) = '\0';
    return pointer;
}

int binary_decimal(long long int n); /* Define function */

int binary_decimal(long long int n) { /* Function to convert binary to decimal.*/
    int decimal = 0, i = 0, rem;

    while (n != 0) {
        rem = n % 10;
        n /= 10;
        decimal += rem * pow(2, i);
        ++i;
    }
    return decimal;
}

int main(void) {
    RECORD *ptr_record;
    ptr_record = (RECORD *) malloc (sizeof(RECORD));
    if (ptr_record == NULL) {
        printf("Out of memmory!\nExit!\n");
        exit(0);
    }

    int LI_d = 1;
    char *LI_b = decimal_to_binary(LI_d, 32);

    memset( (*ptr_record).transmit, '\0', sizeof((*ptr_record).transmit) );
    strncat( (*ptr_record).transmit, LI_b, strlen(LI_b) );
    printf("LI: %s\n", (*ptr_record).transmit);

    //transmit and receive
    memset( (*ptr_record).receive, '\0', sizeof((*ptr_record).receive) );
    strncpy( (*ptr_record).receive, (*ptr_record).transmit, strlen((*ptr_record).transmit) );

    char *LI_rcv_b = strndup( (*ptr_record).receive, 64 );
    int LI_rcv_i = atoi(LI_rcv_b);
    int final_LI = binary_decimal(LI_rcv_i);
    printf("Final_LI: %i\n", final_LI);

    free( ptr_record );
    return 0;
}
Sample of output for 32 bit conversion:
LI: 00000000000000000000000000000001
Final_LI: 1
Sample of output for 64 bit conversion:
LI: 0000000000000000000000000000000100000000000000000000000000000001
Final_LI: -1
decimal_to_binary(int n, ...): better to use unsigned math
//char *decimal_to_binary(int n, int num) {
char *decimal_to_binary(unsigned long long n, int num) {
    // long long int c, d, count;
    unsigned long long int d;
    int c, count;
    char *pointer;

    count = 0;
    pointer = malloc(num + 1); // drop cast
    if (pointer == NULL)
        exit(EXIT_FAILURE);
    for (c = num - 1; c >= 0; c--) {
        d = n >> c;
        if (d & 1)
            *(pointer + count) = 1 + '0';
        else
            *(pointer + count) = 0 + '0';
        count++;
    }
    *(pointer + count) = '\0';
    return pointer;
}
Simplify binary_decimal(). Again use unsigned math and drop pow(); the digits arrive least significant first, so weight them with a running power of two.

/* Function to convert binary (stored as decimal digits) to its value. */
unsigned long long binary_decimal(unsigned long long n) {
    unsigned long long decimal = 0;
    unsigned long long bit = 1;

    while (n != 0) {
        decimal += n % 10 * bit;   /* least significant digit first */
        bit *= 2;
        n /= 10;
    }
    return decimal;
}
main() has lots of issues
int main(void) {
    RECORD *ptr_record;
    ptr_record = malloc(sizeof(RECORD)); // drop cast
    if (ptr_record == NULL) {
        printf("Out of memory!\nExit!\n"); // spelling fix
        exit(0);
    }

    // use unsigned long long
    unsigned long long LI_d = 1;
    LI_d = (unsigned long long) -1;
    char *LI_b = decimal_to_binary(LI_d, 32);

    memset((*ptr_record).transmit, '\0', sizeof((*ptr_record).transmit));
    // strncat((*ptr_record).transmit, LI_b, strlen(LI_b));
    strncat((*ptr_record).transmit, LI_b, sizeof((*ptr_record).transmit) - 1);
    printf("LI: %s\n", (*ptr_record).transmit);

    //transmit and receive
    memset((*ptr_record).receive, '\0', sizeof((*ptr_record).receive));
    // strncpy((*ptr_record).receive, (*ptr_record).transmit, strlen((*ptr_record).transmit));
    strncpy((*ptr_record).receive, (*ptr_record).transmit, sizeof((*ptr_record).transmit) - 1);

    // char *LI_rcv_b = strndup((*ptr_record).receive, 64);
    char *LI_rcv_b = strndup((*ptr_record).receive, MAX_CHARACTERS);

    // At this point, the approach is in error:
    // cannot take a 64-decimal-digit string and convert it to a typical long long.
    // int LI_rcv_i = atoi(LI_rcv_b);
    // int final_LI = binary_decimal(LI_rcv_i);
    // printf("Final_LI: %i\n", final_LI);

    // Suspect you want to convert a 64-binary-digit string to a 64-bit integer,
    // maybe by somehow using binary_decimal - suggest re-write of that function
    unsigned long long LI_rcv_i = strtoull(LI_rcv_b, NULL, 2);
    printf("Final_LI: %llu\n", LI_rcv_i);

    free(ptr_record);
    return 0;
}
Output
LI: 11111111111111111111111111111111
Final_LI: 4294967295
