I have to convert a given binary input (e.g. 1101) to decimal, but the input isn't a string array or an integer (the passed argument is const char *binstr). How am I supposed to access each individual digit of the binary number so I can do pow(x,y) on each and add them together to get the decimal number?
const char * usually refers to a C string. You can just use strtol(3):
int x = strtol(binstr, NULL, 2);
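A minimal, self-contained sketch of that approach (the helper name binToDec is just an assumption, not from the question; strtol needs <stdlib.h>):
#include <stdio.h>
#include <stdlib.h>

/* hypothetical helper: parse a string of '0'/'1' characters as base 2 */
int binToDec(const char *binstr) {
    return (int)strtol(binstr, NULL, 2);
}

int main(void) {
    printf("%d\n", binToDec("1101"));   /* prints 13 */
    return 0;
}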
You could try this program, which converts from binary to decimal:
const char *binstr = "1011011";
int num = 0, sum = 0, ctr = 0;

ctr = strlen(binstr) - 1;              /* strlen needs <string.h>; start at the last digit */
do {
    /* '0' is 0x30 and '1' is 0x31, so the low bit of the character is the digit itself */
    sum += ((binstr[ctr] & 0x1) << num);
    ctr--;
    num++;
} while (ctr >= 0);
binstr[0];
binstr[1];
binstr[2];
...and so on.
Or you can do it through a pointer:
const char *s = binstr;
unsigned long x = 0;
while (*s) { x = x << 1; x |= (*s == '1' ? 1 : 0); s++; }   /* shift in one bit per character */
printf("the decimal of %s is %lu\n", binstr, x);
You've been given a C string, and you can access each character much like an array element:
input[i]
Here's an example of splitting the binary string into individual bits (characters) and printing them out: http://cfiddle.net/wYtKJv
You can use a loop:
int i = 0;
while (i < 100) {
    if (binstr[i] == '\0') {
        break;
    }
    printf("Bit %d: %c\n", i, binstr[i]);
    i++;
}
Since C strings are null terminated, you can check whether the character you have reached is '\0' and break out of the loop.
In the loop you can also convert the chars to ints and store them someplace (array probably) where you can access them for calculations.
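A rough sketch of that idea, accumulating the decimal value as you walk the string instead of calling pow() (variable names are mine):
#include <stdio.h>

int main(void) {
    const char *binstr = "1101";
    int value = 0;

    for (int i = 0; binstr[i] != '\0'; i++) {
        int digit = binstr[i] - '0';   /* convert the char '0'/'1' to the int 0/1 */
        value = value * 2 + digit;     /* shift previous digits left, add the new one */
    }

    printf("%d\n", value);             /* prints 13 */
    return 0;
}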
How can I convert integer value to ASCII characters in C language?
I want to assign characters to an array of chars.
char buff[10];
Let's say we have:
an int of 93 (hex: 5D) -> the result should be buff = {']'}
an int of 13398 (hex: 3456) -> the result should be buff = {'4', 'V'}
Similar to what is done here.
I don't need to care about non printable characters. There will be always printable characters.
Just use bit-shifting to get the individual bytes.
Assuming an architecture on which the size of int is 4:
int someInt = ...
uint8_t first = (someInt >> 24);
uint8_t second = (someInt >> 16);
uint8_t third = (someInt >> 8);
uint8_t fourth = someInt;
Now you can just put the resulting bytes into your array. Check first, second and third to make sure they're not 0, and skip them if they are, so you don't emit leading zero bytes. Make sure to end your array with a null terminator, as required by C strings.
This answer assumes big-endian ordering, since that's what you indicated in your example. If you want little-endian, just reverse the order of the bytes when you put them in the array.
Note that this will turn 5DC into 05 and DC. If you want 5D instead, you should check to see whether the first digit in the original int is 0. You can do this using the & operator, testing the int against 0xf0000000, 0x00f00000, etc. If you find the first digit to be 0, shift the int to the right by 4 bits before extracting the bytes from it.
So, something like this:
void ExtractBytes(int anInt, uint8_t *buf, size_t bufSize) {
    // passing an empty buffer to this function would be stupid,
    // but hey, doesn't hurt to be idiot-proof
    if (bufSize == 0) { return; }
    // Get our sizes
    const int intSize = sizeof(anInt);
    const int digitCount = intSize * 2;
    // find first non-zero hex digit (the unsigned constant keeps the
    // shift into the top nibble well defined)
    int firstNonZero = -1;
    for (int i = 0; i < digitCount; i++) {
        if ((anInt & (0xfu << ((digitCount - 1 - i) * 4))) != 0) {
            firstNonZero = i;
            break;
        }
    }
    if (firstNonZero < 0) {
        // empty string; just bail out.
        buf[0] = 0;
        return;
    }
    // check whether the first non-zero digit is even or odd;
    // shift if it's odd
    int intToUse = (firstNonZero % 2 != 0) ? (anInt >> 4) : anInt;
    // now, just extract our bytes to the buffer
    int bufPtr = 0;
    for (int i = intSize - 1; i >= 0; i--) {
        // shift over the appropriate amount; the uint8_t conversion keeps
        // only the low byte (equivalent to masking with 0xff)
        uint8_t byte = (intToUse >> (i * 8));
        // If the byte is 0, we can just skip it
        if (byte == 0) {
            continue;
        }
        // always check to make sure we don't overflow our buffer.
        // if we're on the last byte, make it a null terminator and bail.
        if (bufPtr == bufSize - 1) {
            buf[bufPtr] = 0;
            return;
        }
        // Copy our byte into the buffer
        buf[bufPtr++] = byte;
    }
    // Now, just terminate our string.
    // We can be sure that bufPtr will be less than bufSize,
    // since we checked for that in the loop. So:
    buf[bufPtr] = 0;
    // Aaaaaand we're done
}
Now let's take it for a spin:
uint8_t buf[10];
ExtractBytes(0x41424344, buf, 10);
printf("%s\n", buf);
ExtractBytes(0x4142434, buf, 10);
printf("%s\n", buf);
and the output:
ABCD
ABC
convert integer value to ASCII characters in C language?...
Referring to an ASCII table, the value of ']' in C will always be interpreted as 0x5D, or decimal value 93, while the value of "]" in C will always be interpreted as a NULL-terminated char array, i.e., a string representation comprised of the values:
|93|\0|
(As illustrated in This Answer, similar interpretations are valid for all ASCII characters.)
To convert any of the integer (char) values to something that looks like a "]", you can use a string function to convert the char value to a string representation. For example all of these variations will perform that conversion:
char strChar[2] = {0};
sprintf(strChar, "%c", ']');
sprintf(strChar, "%c", 0x5D);
sprintf(strChar, "%c", 93);
and each produce the identical C string: "]".
I want to assign characters to array of chars...
An example of how to create an array of char, terminated with a NULL char, such as "ABC...Z":
int i;
char strArray[27] = {0};
for (i = 0; i < 26; i++)
{
    strArray[i] = i + 'A';
}
strArray[i] = 0;
printf("Null terminated array of char: %s\n", strArray);
unsigned u = ...;
if (0x10 > u)
    exit(EXIT_FAILURE);
while (0x10000 < u) u /= 2;
while (0x1000 > u) u *= 2;
char c[2] = {u / 0x100, u % 0x100};
I am working on an application in C where I need to show Unicode UTF-8 characters. I am getting the values as a binary byte stream, e.g. 11010000 10100100, in a character array; that sequence is the UTF-8 encoding of the Unicode character "Ф".
I want to store and display the character. I tried to convert the binary to a hexadecimal character array, but printing with the following code
void binaryToHex(char *bData) {
    char hexaDecimal[MAX];
    int temp;
    long int i = 0, j = 0;
    while (bData[i]) {
        bData[i] = bData[i] - 48;
        ++i;
    }
    --i;
    while (i - 2 >= 0) {
        temp = bData[i - 3] * 8 + bData[i - 2] * 4 + bData[i - 1] * 2 + bData[i];
        if (temp > 9)
            hexaDecimal[j++] = temp + 55;
        else
            hexaDecimal[j++] = temp + 48;
        i = i - 4;
    }
    if (i == 1)
        hexaDecimal[j] = bData[i - 1] * 2 + bData[i] + 48;
    else if (i == 0)
        hexaDecimal[j] = bData[i] + 48;
    else
        --j;
    printf("Equivalent hexadecimal value: ");
    char hexVal[MAX];
    // size_t len = j+1;
    int k = 0;
    while (j >= 0) {
        char *ch = (char*)hexaDecimal[j--];
        if (j % 2 == 0) {
            hexVal[k] = '\\';
            k++;
            hexVal[k] = 'x';
            k++;
        }
        printf("\nkk++Length %d ...J= %d.. ", k, j);
        hexVal[k] = ch;
        k++;
        printf("%c", ch);
    }
    printf("KKKK+=== %d", k);
    hexVal[k] = NULL;
    // printf("\nkk++Length %d",strlen(hexVal));
    printf("\nMM+-+MM %s===\n ..>>>>", hexVal);
}
only shows the value as the text \xD0\xA4 (I did string manipulation for that).
But when writing it as
char s[]= "\xD0\xA4";
OR
char *s= "\xD0\xA4";
printf("\n %s",s);
it produces the desired result, printing the character "Ф". How can I get the correct string dynamically? Is there any library for this in C?
The code is from http://www.cquestions.com/2011/07/binary-to-hexadecimal-conversion-in.html.
Is there a way to print it directly from binary or from a hex value, or is there an alternative approach?
Escape codes such as \xD0 are interpreted by the compiler when encountered in the value of a character or string literal. The compiler replaces them with the corresponding byte (or byte sequence in some cases). They are not meaningful to C at runtime.
You are therefore not only making it harder on yourself but doing altogether the wrong thing by constructing and printing the text of such escape sequences at runtime. What you get is exactly what you should expect. Just print the literal byte sequence you decode from the program input, without any dress-up.
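For illustration, a rough sketch of that advice (assuming the input is space-separated groups of eight '0'/'1' characters, as in the question, and a UTF-8 terminal; this is not the asker's code):
#include <stdio.h>

int main(void) {
    const char *input = "11010000 10100100";   /* the example byte stream */
    unsigned char bytes[16];
    size_t count = 0;
    unsigned char current = 0;
    int bits = 0;

    for (size_t i = 0; input[i] != '\0'; i++) {
        if (input[i] != '0' && input[i] != '1')
            continue;                           /* skip separators */
        current = (unsigned char)((current << 1) | (input[i] - '0'));
        if (++bits == 8) {
            if (count < sizeof bytes)
                bytes[count++] = current;       /* collected a full byte */
            current = 0;
            bits = 0;
        }
    }

    fwrite(bytes, 1, count, stdout);            /* prints Ф on a UTF-8 terminal */
    putchar('\n');
    return 0;
}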
In the end, converting the UTF-8 binary char array to the actual binary code point, i.e. converting
11010000 10100100 to 10000 100100, then converting that to decimal and then to the Unicode character, solved my problem for now. Below is the link I used to convert to UTF-8 from decimal.
C++ Windows decimal to UTF-8 Character Conversion
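As a small illustration of the bit manipulation described above (hard-coded for a two-byte UTF-8 sequence only; real code would have to inspect the leading byte first):
#include <stdio.h>

int main(void) {
    unsigned char b1 = 0xD0;   /* 11010000 */
    unsigned char b2 = 0xA4;   /* 10100100 */

    /* for a two-byte sequence 110xxxxx 10yyyyyy the code point is xxxxxyyyyyy */
    unsigned int codepoint = ((b1 & 0x1Fu) << 6) | (b2 & 0x3Fu);

    printf("U+%04X\n", codepoint);   /* prints U+0424, which is Ф */
    return 0;
}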
Resources I used:
https://www.youtube.com/watch?v=vLBtrd9Ar28
http://www.zehnet.de/2005/02/12/unicode-utf-8-tutorial/
Here I have created a string and I am storing the binary value of a number in it. I want to store the value of the variable num in the string.
i contains the length of the binary representation of the given decimal number. Suppose the given number is A = 6; then i contains 3 and I need a string result containing '110', which is the binary value of 6.
char* result = (char *)malloc((i) * sizeof(char));
i--;
while (A >= 1)
{
    num = A % 2;
    result[i] = num; // here I need to store the value of num in the string
    A = A / 2;
    i--;
}
It appears from the code you've posted that what you are trying to do is print a number in binary with a fixed precision. Assuming that's what you want, something like
unsigned int mask = 1 << (i - 1);
unsigned int pos = 0;
while (mask != 0) {
    result[pos] = (A & mask) == 0 ? '0' : '1';
    ++pos;
    mask >>= 1;
}
result[pos] = 0; // If you need a null terminated string
Edge cases are left as an exercise for the reader.
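A self-contained version of that sketch, using the names A, i and result from the question (the fixed width i = 3 is just an example):
#include <stdio.h>

int main(void) {
    unsigned int A = 6;
    unsigned int i = 3;                  /* number of binary digits wanted */
    char result[33];                     /* enough for a 32-bit value plus '\0' */

    unsigned int mask = 1u << (i - 1);
    unsigned int pos = 0;
    while (mask != 0) {
        result[pos] = (A & mask) == 0 ? '0' : '1';
        ++pos;
        mask >>= 1;
    }
    result[pos] = 0;

    printf("%s\n", result);              /* prints 110 */
    return 0;
}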
I'm not sure specifically what you are asking for. Do you mean the binary representation (i.e. 00001000) of a number written into a string or converting the variable to a string (i.e. 8)? I'll assume you mean the first.
The easiest way to do this is to repeatedly test the least significant bit and shift the value to the right (>>). We can do this in a for loop. However, you will need to know how many bits to read; you can get that from sizeof multiplied by CHAR_BIT (the number of bits in a byte, from <limits.h>).
int i = 15;
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1);
    i >>= 1;
}
So how do we turn this iteration into a string? We need to construct the string in reverse. We know how many bits are needed, so we can create a string buffer accordingly, with an extra byte for NULL termination.
char *buffer = calloc(sizeof(i) * CHAR_BIT + 1, sizeof(char));
This allocates memory that is sizeof(i) * CHAR_BIT + 1 elements long, where each element is sizeof(char), and then zeroes each element. Now let's put the bits into the string.
for (size_t b = 0; b < sizeof(i) * CHAR_BIT; ++b) {
    uint8_t bit_value = (i & 0x1);
    size_t offset = sizeof(i) * CHAR_BIT - 1 - b;
    buffer[offset] = '0' + bit_value;
    i >>= 1;
}
So what's happening here? In each pass we're calculating the offset in the buffer that we should be writing a value to, and then we're adding the ASCII value of 0 to bit_value as we write it into the buffer.
This code is untested and may have some issues, but that is left as an exercise to the reader. If you have any questions, let me know!
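Pulling those pieces together, a compilable sketch of the same idea might look like this (CHAR_BIT from <limits.h> gives the bits per byte; the printed width depends on your int size):
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

int main(void) {
    int i = 15;
    size_t nbits = sizeof(i) * CHAR_BIT;

    /* one char per bit plus the terminating null, all zeroed by calloc */
    char *buffer = calloc(nbits + 1, sizeof(char));
    if (buffer == NULL)
        return 1;

    for (size_t b = 0; b < nbits; ++b) {
        buffer[nbits - 1 - b] = '0' + (i & 0x1);
        i >>= 1;
    }

    printf("%s\n", buffer);   /* 00000000000000000000000000001111 for a 32-bit int */
    free(buffer);
    return 0;
}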
Here is the whole code. Note that the digit-counting loop must work on a copy of A, otherwise A is already 0 by the time the conversion loop runs.
int i = 0;
int A;   // supposed to be entered by the user
int tmp;

// calculate i (the number of binary digits) on a copy of A,
// because this loop consumes the value
tmp = A;
while (tmp != 0)
{
    tmp = tmp / 2;
    i++;
}

char* result = (char *)malloc(sizeof(char) * (i + 1));  // +1 for the terminating '\0'
result[i] = '\0';
i--;
while (A != 0)
{
    result[i] = '0' + (A % 2);
    A = A / 2;
    i--;
}
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
char *numToBinStr(int num){
    static char bin[sizeof(int) * CHAR_BIT + 1];
    char *p = &bin[sizeof(int) * CHAR_BIT]; // p points to the end (the '\0')
    unsigned A = (unsigned)num;
    do {
        *--p = '0' + (A & 1);
        A >>= 1;
    } while(A > 0); // do-while handles the case where A is 0
    return p;
}
int main(void){
    printf("%s\n", numToBinStr(6));
    // To duplicate, if necessary
    // char *bin = strdup(numToBinStr(6));
    char *result = numToBinStr(6);
    char *bin = malloc(strlen(result) + 1);
    strcpy(bin, result);
    printf("%s\n", bin);
    free(bin);
    return 0;
}
You could use one of these functions:
itoa() (non-standard, but often declared in <stdlib.h>) or sprintf() (declared in <stdio.h>)
The second link has some examples as well.
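For instance, a small sketch with sprintf, which is the portable option since itoa is not part of standard C:
#include <stdio.h>

int main(void) {
    char buff[12];               /* roomy enough for a 32-bit int plus '\0' */

    sprintf(buff, "%d", 93);     /* buff now holds "93" (the decimal digits) */
    printf("%s\n", buff);

    sprintf(buff, "%c", 93);     /* buff now holds "]" (the ASCII character) */
    printf("%s\n", buff);
    return 0;
}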
How do you put an int into a char array?
int x = 21, i = 3;
char length[4];

while (i >= 0) {
    length[i] = (char) (x % 10);
    x /= 10;
    i--;
}
printf("%s\n", length);
// length should now be "0021"
The string comes out blank instead.
Note: This is not a duplicate of "How do I convert from int to chars in C++?" because I also need padding. i.e. "0021" not "21"
You're not getting the character code of the digit, you're using the digit as if it were its own character code. It should be:
length[i] = '0' + (x % 10);
You also need to add an extra element to the length array for the terminating null character:
char length[5];
length[4] = 0;
The problem in your code is basically that 1 != '1', i.e. the integer is not the character. You could check an ASCII table to see what code represents the character '1', but you don't really need to know the number; you can just use '1' (note the single quotes).
You also didn't nul-terminate your string; you need to add a '\0' at the end of the string, so
int x = 21, i = 3;
char length[5];
length[4] = '\0';

while (i >= 0)
{
    length[i--] = x % 10 + '0';
    x /= 10;
}
printf("%s\n", length);
should work, but is unnecessary; you can just write
snprintf(length, sizeof(length), "%0*d", padding, x);
/* ^ this is how many characters you want */
Notice that sizeof works here because length is a char array; do not confuse that with the length of a string.
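For example, with the values from the question (padding is just a variable holding the desired width):
#include <stdio.h>

int main(void) {
    int x = 21;
    int padding = 4;                                   /* desired total width */
    char length[5];

    snprintf(length, sizeof(length), "%0*d", padding, x);
    printf("%s\n", length);                            /* prints 0021 */
    return 0;
}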
I am trying to convert a character to its binary representation (so character --> ascii hex --> binary).
I know to do that I need to shift and AND. However, my code is not working for some reason.
Here is what I have. temp points to a position in a C string.
char c;
int j;
for (j = i-1; j >= ptrPos; j--) {
    char x = *temp;
    c = (x >> i) & 1;
    printf("%d\n", c);
    temp--;
}
Here are two functions that print a SINGLE character in binary.
void printbinchar(char character)
{
    char output[9];
    itoa(character, output, 2);
    printf("%s\n", output);
}
printbinchar(10) will write into the console
1010
itoa is a (non-standard) library function that converts a single integer value to a string in the specified base.
For example... itoa(1341, output, 10) will write in output string "1341".
And of course itoa(9, output, 2) will write in the output string "1001".
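Since itoa is not part of standard C, it may be missing from your toolchain. A rough replacement for non-negative values (the name my_itoa is mine, not a standard function) could look like:
#include <stdio.h>

/* Minimal itoa-like helper: converts a non-negative value to a string in
   the given base (2..16) and returns the buffer. Not a drop-in replacement. */
char *my_itoa(unsigned int value, char *output, int base)
{
    const char digits[] = "0123456789ABCDEF";
    char tmp[sizeof(unsigned int) * 8 + 1];
    int i = 0, j = 0;

    do {
        tmp[i++] = digits[value % base];
        value /= base;
    } while (value > 0);

    while (i > 0)                 /* digits were produced in reverse order */
        output[j++] = tmp[--i];
    output[j] = '\0';
    return output;
}

int main(void) {
    char output[33];
    printf("%s\n", my_itoa(10, output, 2));    /* prints 1010 */
    return 0;
}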
The next function will print into the standard output the full binary representation of a character, that is, it will print all 8 bits, also if the higher bits are zero.
void printbincharpad(char c)
{
    for (int i = 7; i >= 0; --i)
    {
        putchar( (c & (1 << i)) ? '1' : '0' );
    }
    putchar('\n');
}
printbincharpad(10) will write into the console
00001010
Now I present a function that prints out an entire string (without the final null character).
void printstringasbinary(char* s)
{
    // A small 9 character buffer we use to perform the conversion
    char output[9];

    // Loop until the character pointed to by s is the null character
    // that indicates end of string...
    while (*s)
    {
        // Convert the first character of the string to binary using itoa.
        // Characters in C are just 8 bit integers, at least on today's computers.
        itoa(*s, output, 2);

        // print out our string and write a new line.
        puts(output);

        // we advance our string by one character;
        // if our original string was "ABC" we are now pointing at "BC".
        ++s;
    }
}
Consider however that itoa doesn't add padding zeroes, so printstringasbinary("AB1") will print something like:
1000001
1000010
110001
unsigned char c;
for (int i = 7; i >= 0; i--) {
    printf( "%d", ( c >> i ) & 1 ? 1 : 0 );
}
printf("\n");
Explanation:
With every iteration, the most significant remaining bit is read from the byte by shifting it and ANDing with 1.
For example, let's assume that input value is 128, what binary translates to 1000 0000.
Shifting it by 7 will give 0000 0001, so it concludes that the most significant bit was 1. 0000 0001 & 1 = 1. That's the first bit to print in the console. Next iterations will result in 0 ... 0.
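Wrapped into a compilable sketch, with the value 128 from the explanation as the test input:
#include <stdio.h>

int main(void) {
    unsigned char c = 128;              /* 1000 0000 in binary */

    for (int i = 7; i >= 0; i--) {
        printf("%d", (c >> i) & 1);
    }
    printf("\n");                       /* prints 10000000 */
    return 0;
}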
Your code is very vague and not understandable, but I can provide you with an alternative.
First of all, if you want temp to go through the whole string, you can do something like this:
char *temp;
for (temp = your_string; *temp; ++temp)
/* do something with *temp */
The term *temp as the for condition simply checks whether you have reached the end of the string or not. If you have, *temp will be '\0' (NUL) and the for ends.
Now, inside the for, you want to find the bits that compose *temp. Let's say we print the bits:
for (as above)
{
    int bit_index;
    for (bit_index = 7; bit_index >= 0; --bit_index)
    {
        int bit = *temp >> bit_index & 1;
        printf("%d", bit);
    }
    printf("\n");
}
To make it a bit more generic, that is, to convert any type to bits, you can change bit_index = 7 to bit_index = sizeof(*temp)*8 - 1 (or better, use CHAR_BIT from <limits.h> instead of a hard-coded 8).
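A sketch of that generic form, using CHAR_BIT (the helper name print_bits is mine):
#include <stdio.h>
#include <limits.h>

/* Print the bits of every character in a string, most significant bit first. */
void print_bits(const char *s)
{
    for (const char *temp = s; *temp; ++temp) {
        for (int bit_index = (int)(sizeof(*temp) * CHAR_BIT) - 1; bit_index >= 0; --bit_index) {
            printf("%d", (*temp >> bit_index) & 1);
        }
        printf("\n");
    }
}

int main(void) {
    print_bits("AB1");
    return 0;
}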