How is this code in C working?

I got some code in C which does the job of the atoi function, but I don't know how it's working:
#include <stdio.h>
#include <conio.h>   /* non-standard header, provides getch() */

int myatoi(const char *string);

int main(int argc, char* argv[])
{
    printf("\n%d\n", myatoi("1998"));
    getch();
    return(0);
}

int myatoi(const char *string)
{
    int i;
    i = 0;
    while (*string)
    {
        i = (i << 3) + (i << 1) + (*string - '0');
        string++;
    }
    return(i);
}
In the above code i is not getting incremented and will always be zero, so how will (i<<3) + (i<<1) affect the code?

(i<<3) + (i<<1) (for positive numbers at least) is equivalent to multiplying by 10, because i<<3 shifts the integer by 3 bits to the left (i.e. multiplying by 8) and i<<1 shifts the integer by 1 bit to the left (i.e. multiplying by 2).
Each time you encounter a new digit, it multiplies the current number by 10 and adds the new digit (i.e. if your current number is 199 and you encounter the digit 8, then your new number should be 1998 = 10 * 199 + 8).
The reason for subtracting '0' is that if your characters are encoded in ASCII, you need to convert the ASCII codes back to numbers.
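To see the equivalence concretely, here is a small standalone sketch (mine, not part of the answer) that traces how i grows digit by digit for "1998", using the fact that (i<<3) + (i<<1) is 8*i + 2*i, i.e. i*10:
#include <stdio.h>

int main(void)
{
    const char *string = "1998";
    int i = 0;

    while (*string) {
        /* (i << 3) + (i << 1) == 8*i + 2*i == 10*i */
        i = (i << 3) + (i << 1) + (*string - '0');
        printf("digit '%c' -> i = %d\n", *string, i);
        string++;
    }
    /* prints i = 1, 19, 199, 1998 */
    return 0;
}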

To understand this code, you need to understand:
(i<<3), i.e. the bit-shift operators,
and
*string, string++, i.e. string manipulation and, more generally, operations on pointers.
You also need to know how strings are represented in C and how numbers are represented in ASCII.
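As a quick illustration of those pieces (my own sketch, not from the answer), this walks a string with a pointer and prints each character together with its ASCII code and its digit value:
#include <stdio.h>

int main(void)
{
    const char *string = "1998";

    /* A C string is a sequence of chars ending in '\0';
       *string reads the current char, string++ moves the pointer forward. */
    while (*string) {
        printf("'%c' has ASCII code %d, digit value %d\n",
               *string, *string, *string - '0');
        string++;
    }
    return 0;
}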

Related

Issue with turning a character into an integer in C

I am having issues with converting character variables into integer variables. This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char string[] = "A2";
    char letter = string[0];
    char number = string[1];
    char numbers[] = "12345678";
    char letters[] = "ABCDEFGH";
    int row;
    int column;

    for (int i = 0; i < 8; i++) {
        if (number == numbers[i]) {
            row = number;
        }
    }
}
When I try to set the variable row to the integer value of the variable number, instead of 2 I get 50. The goal so far is to give row the numeric value of the character variable number, which in this case is 2. I'm a little confused as to why row is 50 and not 2. Can anyone explain to me why it is not converting accurately?
'2' != 2. The '2' character, in ASCII, is 50 in decimal (0x32 in hex). See http://www.asciitable.com/
If you're sure they're really numbers you can just use (numbers[i] - '0') to get the value you're looking for.
'2' in your case is a character, and that character's value is 50 because that's the decimal version of the byte value that represents the character '2' in ASCII. Remember, C is very low-level and characters are essentially the same thing as any other value: a sequence of bytes. Just as letters are represented as bytes, so are the characters that represent digits in our base-10 system. It might seem that '2' should be represented with the value 2, but it isn't.
If you use the atoi function, it will look at the string and compute the decimal value represented by the characters in your string.
However, if you're only converting one character to the decimal value it represents, you can take a shortcut: subtract the value of '0' from the digit character. Though the digits are not represented by the base-10 value they have for us humans, they are ordered sequentially in the ASCII code. And since in C characters are simply byte values, the difference between a digit character '0'-'9' and '0' is the numeric value of the character.
char c = '2';
int i = c - '0';
If you understand why that would work, you get what I'm saying.
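If the goal is to turn "A2" into numeric board-style coordinates, a minimal sketch along those lines (my own guess at the intent; the row/column naming is just illustrative) could be:
#include <stdio.h>

int main(void)
{
    char string[] = "A2";
    char letter = string[0];
    char number = string[1];

    /* '2' - '0' == 2 and 'A' - 'A' == 0: digit characters (and the letters
       A-H in ASCII) are contiguous, so subtraction gives the index. */
    int row = number - '0';     /* 2 */
    int column = letter - 'A';  /* 0 */

    printf("row = %d, column = %d\n", row, column);
    return 0;
}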

8 Byte Number as Hex in C

I am given a number, for example n = 10, and I want to calculate its length in hex with big endian and save it in an 8 byte char pointer. In this example I would like to get the following string:
"\x00\x00\x00\x00\x00\x00\x00\x50".
How do I do that automatically in C with for example sprintf?
I am not even able to get "\x50" in a char pointer:
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
sprintf(tmp, "\\x%x", 50); // version 2
Version 1 and 2 don't work.
I am given a number, for example n = 10, and I want to calculate its length in hex
Repeatedly divide by 16 to find the number of hexadecimal digits. A do ... while ensures the result is 1 when n==0.
int hex_length = 0;
do {
    hex_length++;
} while (n /= 16);
save it in an 8 byte char pointer.
C cannot force your system to use an 8-byte pointer. So if your system uses a 4-byte char pointer, we are out of luck. Let us assume OP's system uses 8-byte pointers. Yet integers may be assigned to pointers. This may or may not result in a valid pointer.
assert(sizeof (char*) == 8);
char *char_pointer = (char *) n;   /* implementation-defined conversion */
printf("%p\n", (void *) char_pointer);
In this example I would like to get the following string: "\x00\x00\x00\x00\x00\x00\x00\x50".
In C, a string includes the various characters up to and including a null character. "\x00\x00\x00\x00\x00\x00\x00\x50" is not a valid C string, yet it is a valid string literal. Code cannot construct string literals at run time; they are a part of the source code. Further, the relationship between n==10 and "\x00...\x00\x50" is unclear. Instead, perhaps the goal is to store n into an 8-byte array (big endian).
char buf[8];
for (int i = 7; i >= 0; i--) {
    buf[i] = (char) n;
    n /= 256;
}
OP's code certainly will fail as it attempts to store a string into a buffer which is too small. Further, "\x%x" is not valid code, as \x begins an invalid escape sequence.
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
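If what is actually wanted is the printable text "\x00...\x50" rather than the raw bytes, here is a sketch of one way to do it (my own, building on the buf[] idea above; the buffer sizes are assumptions):
#include <stdio.h>

int main(void)
{
    unsigned long long n = 0x50;   /* example value */
    unsigned char buf[8];
    char text[8 * 4 + 1];          /* "\xNN" is 4 characters per byte */

    /* store n big endian, most significant byte first */
    for (int i = 7; i >= 0; i--) {
        buf[i] = (unsigned char) (n & 0xFF);
        n >>= 8;
    }

    /* render each byte as \xNN */
    for (int i = 0; i < 8; i++) {
        snprintf(&text[i * 4], 5, "\\x%02x", (unsigned) buf[i]);
    }
    printf("%s\n", text);          /* \x00\x00\x00\x00\x00\x00\x00\x50 */
    return 0;
}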
Just do:
#include <math.h>

int i;
...
int length = (int) floor(log(i) / log(16)) + 1;
This will give you (in length) the number of hexadecimal digits needed to represent i (for i >= 1, without 0x of course).
log(i) / log(base) is the log-base of i. The log16 of i gives you the exponent.
To make clear what we're doing here: when raising 16 to the power of the found exponent, we get back i: 16^log16(i) = i.
Taking the integer part of this exponent with floor() and adding 1 gives you the number of digits (rounding up with ceil() alone would fail for exact powers of 16 such as i == 16).
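A quick check of that formula (my own sketch, assuming i >= 1; link with -lm on most systems):
#include <math.h>
#include <stdio.h>

int main(void)
{
    int values[] = { 1, 10, 255, 3000 };
    for (int k = 0; k < 4; k++) {
        int i = values[k];
        int length = (int) floor(log(i) / log(16)) + 1;
        printf("%d needs %d hex digit(s)\n", i, length);  /* 1, 1, 2, 3 */
    }
    return 0;
}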

Get bits from number string

If I have a number string (char array), one digit is one char, which means the space needed for a four digit number is 5 bytes, including the null termination.
unsigned char num[] = "1024";
printf("%zu", sizeof(num)); // 5
However, 1024 can be written as
unsigned char binaryNum[2];
binaryNum[0] = 0b00000100;
binaryNum[1] = 0b00000000;
How can the conversion from string to binary be made effectively?
In my program I would work with ≈30 digit numbers, so the space gain would be big.
My goal is to create datapackets to be sent over UDP/TCP.
I would prefer not to use libraries for this task, since the available space the code can take up is small.
EDIT:
Thanks for quick response.
char num = 0b00000100; // "4"
--------------------------
char num = 0b00011000; // "24"
-----------------------------
char num[2];
num[0] = 0b00000100;
num[1] = 0b00000000;
// num now contains 1024
I would need ≈ 10 bytes to contain my number in binary form. So, if I, as suggested, parse the digits one by one, starting from the back, how would that build up to the final big binary number?
In general, converting a number in string representation to its numeric value is easy because each character can be parsed separately. E.g. to convert "1024" to 1024 you can just look at the '1', convert it to 1, multiply by 10, then convert the '0' and add it, multiply by 10 again, and so on until you have parsed the whole string.
For binary it is not so easy, e.g. you can convert 4 to 100 and 2 to 010, but 42 is not 100 010 or 110 or something like that. So, your best bet is to convert the whole thing to a number and then convert that number to binary using mathematical operations (bit shifts and such). This will work fine for numbers that fit in one of the built-in C number types, but if you want to handle arbitrarily large numbers you will need a BigInteger type, which seems to be a problem for you since the code has to be small.
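For numbers that do fit in a built-in integer type, a minimal sketch of that approach (my own illustration, reusing the question's 1024 example) might be:
#include <stdio.h>

int main(void)
{
    const char *num = "1024";
    unsigned int value = 0;

    /* parse the decimal string front to back */
    while (*num) {
        value = value * 10 + (*num - '0');
        num++;
    }

    /* split into two bytes, most significant first */
    unsigned char binaryNum[2];
    binaryNum[0] = (unsigned char) (value >> 8);    /* 0b00000100 */
    binaryNum[1] = (unsigned char) (value & 0xFF);  /* 0b00000000 */

    printf("%u -> { %u, %u }\n", value, binaryNum[0], binaryNum[1]);
    return 0;
}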
From your question I gather that you want to compress the string representation in order to transmit the number over a network, so I am offering a solution that does not strictly convert to binary but will still use fewer bytes than the string representation and is easy to use. It is based on the fact that you can store a number 0..9 in 4 bits, and so you can fit two of those numbers in a byte. Hence you can store an n-digit number in n/2 bytes. The algorithm could be as follows:
Take the last character, '4'
Subtract '0' to get 4 (i.e. an int with value 4).
Strip the last character.
Repeat to get 2.
Concatenate into a single byte: digits[0] = (4 << 4) + 2.
Do the same for the next two numbers: digits[1] = (0 << 4) + 1.
Your representation in memory will now look like
   4    2    0    1
0100 0010 0000 0001
digits[0]  digits[1]
i.e.
digits = { 66, 1 }
This is not quite the binary representation of 1024, but it is shorter and it allows you to easily recover the original number by reversing the algorithm.
You even have 6 values left that you don't use for storing digits (i.e. everything from 1010 up), which you can use for other things like storing the sign, decimal point, byte order or an end-of-number delimiter.
I trust that you will be able to implement this, should you choose to use it.
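Here is a minimal sketch of that packing scheme (my own implementation of the idea, not code from the answer; pack_digits and the 0xF end marker are my choices):
#include <stdio.h>
#include <string.h>

/* Pack a decimal digit string into nibbles, last digit first,
   leaving 0xF in any unused nibble as an end-of-number marker. */
int pack_digits(const char *str, unsigned char *out, int out_size)
{
    int len = (int) strlen(str);
    memset(out, 0xFF, (size_t) out_size);       /* fill with end markers */

    for (int i = 0; i < len; i++) {
        int digit = str[len - 1 - i] - '0';     /* last digit first */
        int byte = i / 2;
        if (byte >= out_size)
            return -1;                          /* number does not fit */
        if (i % 2 == 0)
            out[byte] = (unsigned char) ((digit << 4) | 0x0F);
        else
            out[byte] = (unsigned char) ((out[byte] & 0xF0) | digit);
    }
    return (len + 1) / 2;                       /* bytes used */
}

int main(void)
{
    unsigned char digits[2];
    pack_digits("1024", digits, sizeof digits);
    printf("digits = { %u, %u }\n", digits[0], digits[1]);  /* { 66, 1 } */
    return 0;
}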
If I understand your question correctly, you would want to do this:
Convert your string representation into an integer.
Convert the integer into binary representation.
For step 1:
You could loop through the string
Subtract '0' from the char
Multiply by 10^n (depending on the position) and add to a sum.
For step 2 (for int x), in general:
x%2 gives you the least-significant-bit (LSB).
x /= 2 "removes" the LSB.
For example, take x = 6.
x%2 = 0 (LSB), x /= 2 -> x becomes 3
x%2 = 1, x /= 2 -> x becomes 1
x%2 = 1 (MSB), x /= 2 -> x becomes 0.
So we see that (6)decimal == (110)bin.
On to the implementation (for N=2, where N is the maximum number of bytes):
#define N 2   // maximum number of bytes

int x = 1024;
int n = -1, p = 0, p_ = 0, i = 0, ex = 1; // you can use smaller int types here if you are strict on memory usage
unsigned char num[N] = {0};
for (p = 0; p < (N * 8); p++, p_++) {
    if (p % 8 == 0) { n++; p_ = 0; } // every 8 bits: 1) store the result in the next element of the array, 2) reset the bit position (start at 2^0 again)
    for (i = 0; i < p_; i++) ex *= 2; // ex = pow(2, p_); without using the math.h library
    num[n] += ex * (x % 2);           // add (2^p_ * LSB) to num[n]
    x /= 2;                           // "remove" the last bit to check the next one
    ex = 1;                           // reset the exponent
}
We can check the result for x = 1024:
for (i = 0; i < N; i++)
    printf("num[%d] = %d\n", i, num[i]); // num[0] = 0 (0b00000000), num[1] = 4 (0b00000100)
To convert an up-to-30-digit decimal number, represented as a string, into a series of bytes (effectively a base-256 representation) takes up to 13 bytes (the ceiling of 30/log10(256)).
Simple algorithm
dest = 0
for each digit of the string (starting with most significant)
dest *= 10
dest += digit
As C code
#include <ctype.h>
#include <string.h>

#define STR_DEC_TO_BIN_N 13

unsigned char *str_dec_to_bin(unsigned char dest[STR_DEC_TO_BIN_N], const char *src) {
    // dest[] = 0
    memset(dest, 0, STR_DEC_TO_BIN_N);
    // for each digit ...
    while (isdigit((unsigned char) *src)) {
        // dest[] = 10*dest[] + *src
        // with dest[0] as the most significant digit
        int sum = *src - '0';
        for (int i = STR_DEC_TO_BIN_N - 1; i >= 0; i--) {
            sum += dest[i] * 10;
            dest[i] = sum % 256;
            sum /= 256;
        }
        // If sum is non-zero, it means dest[] overflowed
        if (sum) {
            return NULL;
        }
        src++; // advance to the next digit
    }
    // If stopped on something other than the null character ....
    if (*src) {
        return NULL;
    }
    return dest;
}
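A small usage example (my own, assuming the function and macro above are in the same file):
#include <stdio.h>

int main(void)
{
    unsigned char bytes[STR_DEC_TO_BIN_N];

    if (str_dec_to_bin(bytes, "1024")) {
        /* 1024 == 0x0400, so only the last two bytes are non-zero */
        for (int i = 0; i < STR_DEC_TO_BIN_N; i++) {
            printf("%02x ", bytes[i]);
        }
        printf("\n");   /* 00 00 00 00 00 00 00 00 00 00 00 04 00 */
    }
    return 0;
}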

Decimal to Binary conversion

I need to convert a 20-digit decimal number to binary using C programming. What buffer should I create? It will be very large; even the calculator can't convert a 20-digit number to binary.
Here is a sample of what I intend to accomplish:
Let us assume this code:
I type this value via my keyboard into the buffer. Meanwhile, I am using an AT85C55WD MCU.
unsigned char idata token[20] = {2,3,5,6,3,3,4,4,3,2,4,4,6,7,4,3,4,5,3,3};
I want a result:
type-define variable convrt_token = 23562244324467434533;
Is this possible using C? If it is, then please, how do I go about it?
Each digit in decimal has 10 values, 0..9, so you'll need between 3 and 4 digits in binary; let's say 4 to keep it simple.
So for a 20-digit decimal you'll need an 80-digit number in binary.
You can do it all with chars to keep it simple as well. I assume this is for fun.
char decnum[20];
char binnum[80];
good luck.
Here is an elegant solution... :)
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>
#include <unistd.h>
#include <assert.h>
#include <stdbool.h>

#define max 10000
#define RLC(num, pos) ((num << pos) | (num >> (32 - pos)))
#define RRC(num, pos) ((num >> pos) | (num << (32 - pos)))

void tobinstr(int value, int bitsCount, char *output)
{
    int i;
    output[bitsCount] = '\0';
    for (i = bitsCount - 1; i >= 0; --i, value >>= 1)
    {
        output[i] = (value & 1) + '0';
    }
}

int main()
{
    char s[50];
    tobinstr(65536, 32, s);
    printf("%s\n", s);
    return 0;
}
Start off as Gidon mentions. Then you only need to define a function 'decToBin' or something like that, which you can use like this:
char decnum[20];
char binnum[80];
// ... get the values, do zero initialization etc...
decToBin( decnum, binnum );
And you're ready to go.
The decToBin function could easily be implemented as a common long division algorithm. That shouldn't be too hard.
Have fun ;)
EDIT: Detailed description:
Look at your last digit. Is it divisible by 2?
--> yes: your binary digit will be 0
--> no: your binary digit will be 1
Divide the entire decimal number by 2 (long division preferably) and truncate if necessary.
Start over; you will get the binary digits in reverse order. (Repeat until your decimal number is 0; see the sketch below.)
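Here is a minimal sketch of that long-division approach (my own illustration; decToBin, decnum and binnum follow the naming suggested above, with digits kept as characters, most significant first):
#include <stdio.h>
#include <string.h>

/* Long division of a decimal digit string by 2; returns the remainder (0 or 1). */
static int divide_by_2(char *dec, int len)
{
    int remainder = 0;
    for (int i = 0; i < len; i++) {
        int current = remainder * 10 + (dec[i] - '0');
        dec[i] = (char) ('0' + current / 2);
        remainder = current % 2;
    }
    return remainder;
}

static int is_zero(const char *dec, int len)
{
    for (int i = 0; i < len; i++)
        if (dec[i] != '0')
            return 0;
    return 1;
}

/* Convert a decimal digit string into a binary digit string (both as text).
   Note: decnum is modified (divided down to zero) in the process. */
static void decToBin(char *decnum, char *binnum, int binsize)
{
    int len = (int) strlen(decnum);
    int pos = binsize - 1;
    binnum[pos--] = '\0';

    while (!is_zero(decnum, len) && pos >= 0) {
        /* the remainder of dividing by 2 is the next binary digit, least significant first */
        binnum[pos--] = (char) ('0' + divide_by_2(decnum, len));
    }
    /* move the result to the start of binnum */
    memmove(binnum, binnum + pos + 1, (size_t) (binsize - pos - 1));
}

int main(void)
{
    char decnum[] = "23562244324467434533";
    char binnum[80];

    decToBin(decnum, binnum, (int) sizeof binnum);
    printf("%s\n", binnum);
    return 0;
}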

ASCII and printf

I have a little (big, dumb?) question about ints and chars in C. I remember from my studies that "chars are little integers and vice versa," and that's okay to me. If I need to use small numbers, the best way is to use a char type.
But in a code like this:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int i = atoi(argv[1]);
    printf("%d -> %c\n", i, i);
    return 0;
}
I can pass any number I want as the argument. So with 0-127 I obtain the expected results (the standard ASCII table), but even with bigger or negative numbers it seems to work...
Here is some example:
-181 -> K
-182 -> J
300 -> ,
301 -> -
Why? It seems to me that it's cycling around the ASCII table, but I don't understand how.
When you pass an int corresponding to the "%c" conversion specifier, the int is converted to an unsigned char and then written.
The values you pass are being converted to different values when they are outside the range of an unsigned char (0 to UCHAR_MAX). The system you are working on probably has UCHAR_MAX == 255.
When converting an int to an unsigned char:
If the value is larger than UCHAR_MAX, (UCHAR_MAX+1) is subtracted from the value as many times as needed to bring it into the range 0 to UCHAR_MAX. Likewise, if the value is less than zero, (UCHAR_MAX+1) is added to the value as many times as needed to bring it into the range 0 to UCHAR_MAX.
Therefore:
(unsigned char)-181 == (-181 + (255+1)) == 75 == 'K'
(unsigned char)-182 == (-182 + (255+1)) == 74 == 'J'
(unsigned char)300 == (300 - (255+1)) == 44 == ','
(unsigned char)301 == (301 - (255+1)) == 45 == '-'
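A short sketch (mine) that reproduces those examples by making the conversion explicit:
#include <stdio.h>

int main(void)
{
    int values[] = { -181, -182, 300, 301 };

    for (int i = 0; i < 4; i++) {
        /* the cast mirrors what printf's %c conversion does internally */
        printf("%d -> %c\n", values[i], (unsigned char) values[i]);
    }
    return 0;
}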
The %c format parameter interprets the corresponding value as a character, not as an integer. However, when you lie to printf and pass an int in what you tell it is a char, its internal manipulation of the value (to get a char back, as a char is normally passed as an int anyway, with varargs) happens to yield the values you see.
My guess is that %c takes the first byte of the value provided and formats that as a character. On a little-endian system such as a PC running Windows, that byte would represent the least-significant byte of any value passed in, so consecutive numbers would always be shown as different characters.
You told it the number is a char, so it's going to try every way it can to treat it as one, despite being far too big.
Looking at what you got, since J and K are in that order, I'd say it's using the integer % 128 to make sure it fits in the legal range.
Edit: Please disregard this "answer".
Because you are on a little-endian machine :)
Seriously, this is undefined behavior. Try changing the code to printf("%d -> %c, %c\n",i,i,'4'); and see what happens then...
When we use %c in a printf statement, it can access only the first byte of the integer.
Hence anything that does not fit in one byte is treated as n % 256.
For example
input 321 yields output A (since 321 % 256 == 65, which is 'A')
What atoi does is convert the string to a numerical value, so that "1234" becomes 1234 and not just a sequence of the ordinal values of its characters.
Example:
char *x = "1234"; // x[0] = 49, x[1] = 50, x[2] = 51, x[3] = 52 (see the ASCII table)
int y = atoi(x); // y = 1234
int z = (int)x[0]; // z = 49 which is not what one would want
