atoi (happens with strtol as well): why add '0' to the array values?

I'm trying to study code that has:
array[0] = digitalRead(pin1);
array[1] = digitalRead(pin2);
array[2] = digitalRead(pin3);
array[3] = digitalRead(pin3);
array[4] = digitalRead(pin4);
array[5] = digitalRead(pin5);
array[6] = digitalRead(pin6);
array[7] = digitalRead(pin7);
for(i=0; i<8 ; i++){
    data[i] = array[i] + '0';
}
input = atoi(data);
I'm curious why they added the '0'. When I try to run the code without the '0', it returns 0, which I assume means the string can't be converted.

Short answer: '0' is added to convert integer values to ASCII character values.
Explanation:
It's important to know that integer values like 0, 1, 2, ... are not the same as characters like '0', '1', '2', ... Characters do have integer values, defined in ASCII tables (see https://en.wikipedia.org/wiki/ASCII), but those values differ from the numeric values of the digits. For instance, the character '0' has the integer value 48. So to convert between an integer value (less than 10) and the corresponding character, some "conversion" is needed - see below.
For your code:
digitalRead(pin1) returns an integer value being either 0 or 1
The purpose of the for loop is to generate a string that represents the value of the 8 pins. For instance like "10010110".
And finally the atoi call is to convert the string to an integer value. For instance converting the string "10010110" to the integer value 10010110 (decimal).
In order to construct the string from integer values that are 0 or 1, you need to calculate the integer value that represents the characters '0' and '1'. If you look up ASCII values, e.g. https://en.wikipedia.org/wiki/ASCII#Printable_characters, you can see that the character '0' has the decimal integer value 48 and the character '1' has the decimal integer value 49. So to go from the integer value 0 to the character '0' you need to add 48; likewise, to go from the integer value 1 to the character '1' you need to add 48. So the code could be:
data[i] = array[i] + 48;
However, in C a character constant is just an integer value. So instead of writing 48, C allows you to simply write the character that has the ASCII value 48. In other words:
data[i] = array[i] + 48;
is the same as
data[i] = array[i] + '0';
The compiler will automatically convert + '0' to + 48.
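A quick illustration of the arithmetic (a minimal sketch, assuming an ASCII execution character set):

#include <stdio.h>

int main(void) {
    int bit = 1;               /* value returned by digitalRead(), 0 or 1 */
    char c = bit + '0';        /* 1 + 48 = 49, which is the character '1' */
    printf("%c %d\n", c, c);   /* prints: 1 49 */
    return 0;
}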
BTW: Make sure that data is defined as (at least) a 9-character array and that data[8] is already zero, e.g. char data[9] = {0};
That said... if array and data aren't used in other places, it seems a strange and complex way to calculate input. An alternative could be:
input = 0;
input = 10 * input + digitalRead(pin1);
input = 10 * input + digitalRead(pin2);
input = 10 * input + digitalRead(pin3);
input = 10 * input + digitalRead(pin3); // pin3 twice in OPs code. typo??
input = 10 * input + digitalRead(pin4);
input = 10 * input + digitalRead(pin5);
input = 10 * input + digitalRead(pin6);
input = 10 * input + digitalRead(pin7);
and if the pins could be placed in an array, the above could be reduced to a simple and short for loop, as sketched below.
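A minimal sketch of that idea (assuming the pin numbers can be collected into an array; pin1 ... pin7 are the names from the question, with pin3 repeated as in the original code):

int pins[8] = { pin1, pin2, pin3, pin3, pin4, pin5, pin6, pin7 };
int input = 0;
for (int i = 0; i < 8; i++) {
    input = 10 * input + digitalRead(pins[i]);
}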

The code shown is silly. Why store (presumed) 0/1 values in a not-null-terminated array (NOT a string) and then pass it to a conversion function, when you can do this instead:
unsigned char input = 0;
input = (input << 1) + digitalRead(pin1);
input = (input << 1) + digitalRead(pin2);
input = (input << 1) + digitalRead(pin3); // << THIS IS ORIGINAL OP CODE
input = (input << 1) + digitalRead(pin3); // << THIS IS ORIGINAL OP CODE
input = (input << 1) + digitalRead(pin4);
input = (input << 1) + digitalRead(pin5);
input = (input << 1) + digitalRead(pin6);
input = (input << 1) + digitalRead(pin7);
/* input's value is now 0-255 (binary 00000000-11111111) as an integer value. */
/* User assumes responsibility for LSB <=> MSB ordering of pins */


How to move uint32_t number to char[]?

I have to copy a uint32_t number into the middle of a char[] buffer.
The situation is like this:
char buf[100];
uint8_t position = 52; // position in buffer to which I want to copy my uint32_t number
uint32_t seconds = 23456; // the actual number
I tried to use memcpy like this:
memcpy(&buf[position], &seconds, sizeof(seconds));
But in the buffer I'm getting some strange characters, not the number I want.
I also tried bit-shifting:
int shiftby = 32;
for (int i = 0; i < 8; i++)
{
    buf[position++] = (seconds >> (shiftby -= 4)) & 0xF;
}
Is there any other option how to solve this problem?
What you're doing in your memcpy code is putting the value 23456 in buf, starting at byte 52 (so bytes 52-55, since the size of seconds is 4 bytes). What you want to do (if I understand you correctly) is to put the string "23456" in buf, starting at byte 52. In this second case, each character takes one byte, and each byte holds the ASCII value of its character.
Probably the best way to do that is to use snprintf:
int snprintf(char *buffer, size_t n, const char *format, ...);
In your example:
snprintf(&buf[position], 6, "%u", (unsigned)seconds)
Note that the n argument is the size of the destination area, including the terminating null character that snprintf appends - so one byte per digit plus one for the '\0'. Be aware that this null character is also written into buf, right after the digits.
Obviously you should calculate the number of digits in seconds rather than hard-code it if it can change, and you should also check the return value of snprintf to see whether the operation was performed successfully.
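For example, a minimal sketch of that advice (using snprintf's return value to count the digits first; variable names follow the question):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    char buf[100] = {0};
    uint8_t position = 52;
    uint32_t seconds = 23456;

    /* how many characters does the number need? (not counting the '\0') */
    int ndigits = snprintf(NULL, 0, "%u", (unsigned)seconds);
    if (ndigits > 0 && position + ndigits < (int)sizeof buf) {
        /* n = ndigits + 1 so the digits are not truncated; note this also
           writes a terminating '\0' at buf[position + ndigits] */
        snprintf(&buf[position], ndigits + 1, "%u", (unsigned)seconds);
    }
    printf("%s\n", &buf[position]);   /* prints: 23456 */
    return 0;
}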
It is unclear how you are intending to represent this uint32_t, but your code fragment suggests that you are expecting hexadecimal (or perhaps BCD). In that case:
for( int shiftby = 28; shiftby >= 0 ; shiftby -= 4 )
{
    char hexdigit = (seconds >> shiftby) & 0xF ;
    buf[position++] = hexdigit < 10 ? hexdigit + '0' : hexdigit + 'A' - 10 ;
}
Note that the only real difference between this and your code is the conversion to hex-digit characters by conditionally adding either '0' or 'A' - 10. The use of shiftby as the loop control variable is just a simplification of your algorithm.
The issue with your code is that it inserted the integer values 0 to 15 into buf, and the characters associated with these values are all ASCII control characters, nominally non-printing. How or whether they render as a glyph on any particular display depends on what you are using to present them. In the Windows console, for example, printing characters 0 to 15 results in the following:
00 = <no glyph>
01 = '☺'
02 = '☻'
03 = '♥'
04 = '♦'
05 = '♣'
06 = '♠'
07 = <bell> (emits a sound, no glyph)
08 = <backspace>
09 = <tab>
10 = <linefeed>
11 = '♂'
12 = '♀'
13 = <carriage return>
14 = '♫'
15 = '☼'
The change above transforms the values 0 to 15 to ASCII '0'-'9' or 'A'-'F'.
If a hexadecimal presentation is not what you were intending then you need to clarify the question.
Note that if the encoding is BCD (Binary Coded Decimal) where each decimal digit is coded into a 4 bit nibble, then the conversion can be simplified because the range of values is reduced to 0 to 9:
char bcddigit = (seconds >> shiftby) & 0xF ;
buf[position++] = bcddigit + '0' ;
but the hex conversion will work for BCD also.
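For completeness, a runnable version of the hex loop above (a sketch writing at the start of buf for simplicity rather than at offset 52; for seconds = 23456 it produces "00005BA0"):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    char buf[100] = {0};
    uint32_t seconds = 23456;
    int position = 0;   /* 0 here; 52 in the question */

    for (int shiftby = 28; shiftby >= 0; shiftby -= 4) {
        char hexdigit = (seconds >> shiftby) & 0xF;
        buf[position++] = hexdigit < 10 ? hexdigit + '0' : hexdigit + 'A' - 10;
    }
    printf("%s\n", buf);   /* prints: 00005BA0 */
    return 0;
}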

Array contains garbage values, not input values

I am writing a basic program to compute the binary equivalent of a decimal value. I'm storing the individual bits (0 and 1 values) in an array so I can eventually reverse the array and print the accurate binary representation. However, when I print the array contents to check whether the array has been properly filled, I see garbage values, or 0 if I initialize it with arr[] = {0}.
My code:
int main() {
    int i = 0, j = 0, k, decimal, binary = 0, remainder, divider;
    int bin[10];
    printf("Enter decimal value");
    scanf("%d", &decimal);
    while ((decimal != 0) && (i < decimal)) {
        remainder = decimal % 2;
        decimal = decimal / 2;
        bin[i] = remainder;
        j++;
        printf("%d", bin[i]);
    }
    printf("\n%d", j);
    printf("\n%d", bin[0]);
    printf("\n%d", bin[1]);
    printf("\n%d", bin[2]);
    printf("\n%d", bin[3]);
    printf("%d", bin);
    return 0;
}
If you are still having problems with the conversion, it may be helpful to consider a couple of points. First, you are over-thinking the conversion from decimal to binary. For any given integer value, the value is already stored in memory in binary.
For example, when you have the integer 10, the computer stores it as 1010 in memory. So for all practical purposes, all you need to do is read the value's memory and set your array values to 1 for each bit that is 1 and 0 for each bit that is 0. You can even go one better: since what you are most likely after is the binary representation of the number, there is no need to store the 1s and 0s as full 4-byte integer values in bin. Why not make bin a character array and store the characters '1' or '0' in it, which (when nul-terminated) allows simple printing of the binary representation as a string?
This provides several benefits. Rather than converting from base 10 to base 2 with the divisions and modulo calls required for the base conversion, you can simply shift decimal to the right by one and check whether the least-significant bit is 0 or 1, storing the character '0' or '1' based on the result of a simple bitwise AND operation.
For example, in your case with an integer, you can determine the number of bits required to represent any integer value in binary with sizeof (int) * CHAR_BIT (where CHAR_BIT is a constant provided in limits.h and specifies the number of bits in a character, i.e. a byte). For an integer you could use:
#include <stdio.h>
#include <limits.h> /* for CHAR_BIT */
#define NBITS sizeof(int) * CHAR_BIT /* constant for bits in int */
To store the character representations of the binary number (or you could store the integers 1, 0 if desired), you can simply declare a character array:
char bin[NBITS + 1] = ""; /* declare storage for NBITS + 1 char */
char *p = bin + NBITS; /* initialize to the nul-terminating char */
(The array is initialized to all zeros, and the +1 leaves room for the nul-terminating character so the array can be treated as a string when filled.)
Next, as you have discovered, whether you perform the base conversion or the shift-and-AND, the resulting order of the individual bit values will be reversed. To handle that, you can simply declare a pointer to the last character in your array and fill the array with 1s and 0s from the back toward the front.
Here too the character array/string representation makes things easier. Having initialized your array to all zeros, you can start writing at the next-to-last character; working from the end toward the beginning ensures you have a nul-terminated string when done. Further, regardless of the number of bits that make up decimal, you are always left with a pointer to the start of the binary representation.
Depending on how you loop over each bit in decimal, you may need to handle the case where decimal = 0; separately. (since you loop while there are bits in decimal, the loop won't execute if decimal = 0;) A simple if can handle the case and your else can simply loop over all bits in decimal:
if (decimal == 0)   /* handle decimal == 0 separately */
    *--p = '0';
else                /* loop shifting decimal right by one until 0 */
    for (; decimal && p > bin; decimal >>= 1)
        *--p = (decimal & 1) ? '1' : '0';   /* decrement p and set
                                             * char to '1' or '0' */
(note: since p was pointing to the nul-terminating character, you must decrement p with the pre-decrement operator (e.g. --p) before dereferencing and assigning the character or value)
All that remains is outputting your binary representation, and if done as above, it is a simple printf ("%s\n", p);. Putting all the pieces together, you could do something like the following:
#include <stdio.h>
#include <limits.h> /* for CHAR_BIT */
#define NBITS sizeof(int) * CHAR_BIT /* constant for bits in int */
int main (void) {

    int decimal = 0;
    char bin[NBITS + 1] = "";   /* declare storage for NBITS + 1 char */
    char *p = bin + NBITS;      /* initialize to the nul-terminating char */

    printf ("enter a integer value: ");   /* prompt for input */
    if (scanf ("%d", &decimal) != 1) {    /* validate ALL user input */
        fputs ("error: invalid input.\n", stderr);
        return 1;
    }

    if (decimal == 0)   /* handle decimal == 0 separately */
        *--p = '0';
    else                /* loop shifting decimal right by one until 0 */
        for (; decimal && p > bin; decimal >>= 1)
            *--p = (decimal & 1) ? '1' : '0';   /* decrement p and set
                                                 * char to '1' or '0' */

    printf ("binary: %s\n", p);   /* output the binary string */

    return 0;
}
(Note the emphasis on validating ALL user input - especially when using the scanf family of functions. Otherwise you can easily stray off into undefined behavior on an accidental entry of something that doesn't begin with a digit.)
Example Use/Output
$ ./bin/int2bin
enter a integer value: 0
binary: 0
$ ./bin/int2bin
enter a integer value: 2
binary: 10
$ ./bin/int2bin
enter a integer value: 15
binary: 1111
Two's-complement of negative values:
$ ./bin/int2bin
enter a integer value: -15
binary: 11111111111111111111111111110001
Look things over and let me know if you have any questions, or if you really need bin to be an array of int. Having an integer array holding the individual bit values doesn't make a whole lot of sense, but if that is what you have to do, I'm happy to help.
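For reference, a minimal sketch of the int-array variant (same shift-and-AND idea, storing 0/1 as int values, most significant bit first; a rough outline rather than a drop-in replacement):

#include <stdio.h>
#include <limits.h>

#define NBITS (sizeof(int) * CHAR_BIT)

int main (void) {
    int decimal = 10;
    int bin[NBITS];                 /* one int per bit */
    size_t n = 0;

    if (decimal == 0)
        bin[n++] = 0;               /* handle 0 separately */
    else {
        unsigned u = (unsigned)decimal;   /* well-defined shifts */
        int started = 0;
        for (int i = (int)NBITS - 1; i >= 0; i--) {
            int bit = (u >> i) & 1;
            if (bit)
                started = 1;        /* skip leading zeros */
            if (started)
                bin[n++] = bit;
        }
    }
    for (size_t i = 0; i < n; i++)
        printf ("%d", bin[i]);      /* prints: 1010 */
    putchar ('\n');
    return 0;
}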

Get bits from number string

If I have a number string (char array), one digit is one char, so the space needed for a four-digit number is 5 bytes, including the null terminator.
unsigned char num[] ="1024";
printf("%d", sizeof(num)); // 5
However, 1024 can be written as
unsigned char binaryNum[2];
binaryNum[0] = 0b00000100;
binaryNum[1] = 0b00000000;
How can the conversion from string to binary be made efficiently?
In my program I would work with ≈30-digit numbers, so the space gain would be big.
My goal is to create datapackets to be sent over UDP/TCP.
I would prefer not to use libraries for this task, since the available space the code can take up is small.
EDIT:
Thanks for quick response.
char num = 0b00000100; // "4"

char num = 0b00011000; // "24"

char num[2];
num[0] = 0b00000100;
num[1] = 0b00000000;
// num now contains 1024
I would need ≈10 bytes to contain my number in binary form. So if, as suggested, I parse the digits one by one, starting from the back, how does that build up to the final big binary number?
In general, converting a number in string representation to an integer is easy because each character can be parsed separately. E.g. to convert "1024" to 1024 you can start at the '1', convert it to 1, then for each following character multiply the running result by 10 and add the digit's value: 1, 10, 102, 1024.
For binary it is not so easy: e.g. you can convert 4 to 100 and 2 to 010, but 42 is not 100 010 or 110 or anything like that. So your best bet is to convert the whole thing to a number and then convert that number to binary using mathematical operations (bit shifts and such). This will work fine for numbers that fit in one of the built-in integer types, but if you want to handle arbitrarily large numbers you will need a big-integer type, which seems to be a problem for you since the code has to be small.
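For values that do fit in a built-in type, a minimal sketch of that approach might look like this (assuming an unsigned 32-bit value and most-significant-byte-first order in the output array):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void) {
    const char *num = "1024";
    uint32_t value = (uint32_t)strtoul(num, NULL, 10);   /* string -> number */

    unsigned char bytes[4];
    for (int i = 0; i < 4; i++)                          /* number -> raw bytes, */
        bytes[i] = (value >> (8 * (3 - i))) & 0xFF;      /* most significant first */

    for (int i = 0; i < 4; i++)
        printf("%02X ", bytes[i]);                       /* prints: 00 00 04 00 */
    putchar('\n');
    return 0;
}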
From your question I gather that you want to compress the string representation in order to transmit the number over a network, so I am offering a solution that does not strictly convert to binary but will still use fewer bytes than the string representation and is easy to use. It is based on the fact that you can store a number 0..9 in 4 bits, and so you can fit two of those numbers in a byte. Hence you can store an n-digit number in n/2 bytes. The algorithm could be as follows:
Take the last character, '4'
Subtract '0' to get 4 (i.e. an int with value 4).
Strip the last character.
Repeat to get 2, then 0, then 1.
Concatenate two digits into a single byte: digits[0] = (4 << 4) + 2.
Do the same for the next two digits: digits[1] = (0 << 4) + 1.
Your representation in memory will now look like
4 2 0 1
0100 0010 0000 0001
digits[0] digits[1]
i.e.
digits = { 66, 1 }
This is not quite the binary representation of 1024, but it is shorter and it allows you to easily recover the original number by reversing the algorithm.
You even have six values left that you don't use for storing digits (everything from 1010 up), which you can use for other things such as storing the sign, a decimal point, the byte order or an end-of-number delimiter.
I trust that you will be able to implement this, should you choose to use it.
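Should you want a starting point anyway, here is a rough sketch of the packing step (a hypothetical helper, assuming ASCII digits; the unused low nibble of the last byte is padded with 0xF, one of the spare values mentioned above):

#include <string.h>

/* Pack a decimal string two digits per byte, starting from the last digit.
 * Returns the number of bytes written to out. */
static int pack_digits(const char *num, unsigned char *out) {
    int len = (int)strlen(num);
    int nbytes = 0;
    for (int i = len - 1; i >= 0; i -= 2) {
        unsigned char hi = (unsigned char)(num[i] - '0');   /* later digit */
        unsigned char lo = (i - 1 >= 0)
                             ? (unsigned char)(num[i - 1] - '0')
                             : 0xF;                         /* pad marker */
        out[nbytes++] = (unsigned char)((hi << 4) | lo);
    }
    return nbytes;
}

/* pack_digits("1024", buf) fills buf with { 0x42, 0x01 }, i.e. { 66, 1 },
 * matching the layout shown above; reversing the steps recovers "1024". */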
If I understand your question correctly, you would want to do this:
Convert your string representation into an integer.
Convert the integer into binary representation.
For step 1:
You could loop through the string
Subtract '0' from the char
Multiply by 10^n (depending on the position) and add to a sum.
For step 2 (for int x), in general:
x%2 gives you the least-significant-bit (LSB).
x /= 2 "removes" the LSB.
For example, take x = 6.
x%2 = 0 (LSB), x /= 2 -> x becomes 3
x%2 = 1, x /= 2 -> x becomes 1
x%2 = 1 (MSB), x /= 2 -> x becomes 0.
So we see that (6)decimal == (110)bin.
On to the implementation (for N=2, where N is the maximum number of bytes):
int x = 1024;
int n=-1, p=0, p_=0, i=0, ex=1; //you can use smaller types of int for this if you are strict on memory usage
unsigned char num[N] = {0};
for (p=0; p<(N*8); p++,p_++) {
    if (p%8 == 0) { n++; p_=0; }   // for every 8 bits: 1) store the result in the next element of the array, 2) reset the placing (start at 2^0 again)
    for (i=0; i<p_; i++) ex *= 2;  // ex = pow(2,p_); without using the math.h library
    num[n] += ex * (x%2);          // add (2^p_ x LSB) to num[n]
    x /= 2;                        // "remove" the last bit to check for the next.
    ex = 1;                        // reset the exponent
}
We can check the result for x = 1024:
for (i=0; i<N; i++)
printf("num[%d] = %d\n", i, num[i]); //num[0] = 0 (0b00000000), num[1] = 4 (0b00000100)
Converting an up-to-30-digit decimal number, represented as a string, into a series of bytes (effectively a base-256 representation) takes up to 13 bytes (the ceiling of 30/log10(256)).
Simple algorithm
dest = 0
for each digit of the string (starting with most significant)
dest *= 10
dest += digit
As C code
#include <ctype.h>
#include <string.h>

#define STR_DEC_TO_BIN_N 13

unsigned char *str_dec_to_bin(unsigned char dest[STR_DEC_TO_BIN_N], const char *src) {
    // dest[] = 0
    memset(dest, 0, STR_DEC_TO_BIN_N);
    // for each digit ...
    while (isdigit((unsigned char) *src)) {
        // dest[] = 10*dest[] + *src
        // with dest[0] as the most significant byte
        int sum = *src - '0';
        for (int i = STR_DEC_TO_BIN_N - 1; i >= 0; i--) {
            sum += dest[i]*10;
            dest[i] = sum % 256;
            sum /= 256;
        }
        // If sum is non-zero, it means dest[] overflowed
        if (sum) {
            return NULL;
        }
        src++;   // advance to the next digit
    }
    // If we stopped on something other than the null character ....
    if (*src) {
        return NULL;
    }
    return dest;
}
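A brief usage sketch (assuming the function and constant above are in scope; the printing loop is just illustrative):

#include <stdio.h>

int main(void) {
    unsigned char dest[STR_DEC_TO_BIN_N];
    if (str_dec_to_bin(dest, "1024")) {
        /* dest[] holds the value in base 256, most significant byte first */
        for (int i = 0; i < STR_DEC_TO_BIN_N; i++)
            printf("%02X ", dest[i]);   /* ... 00 04 00 for "1024" */
        putchar('\n');
    }
    return 0;
}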

In C, how am I able to use the printf() function to 'store' a string?

I am attempting to build a 16-bit (bit16) representation of a floating-point number using unsigned integers. The fraction field here deviates from the usual 10 bits and is 8 bits, implying the exponent field is 7 bits and the sign is 1 bit.
The code I have is as follows:
bit16 float_16(bit16 sign, bit16 exp, bit16 frac) {
    //make the sign the number before the binary point, make the fraction binary.
    //concatenate the sign, then the exponent, then the fraction
    bit16 result;
    int theExponent;
    theExponent = exp + 63;   // bias = 2^(7-1) - 1 = 2^6 - 1 = 63
    //printf("%d",sign);
    int c, k;
    for(c = 6; c > 0; c--)
    {
        k = theExponent >> c;
        if( k & 1)
            printf("1");
        else
            printf("0");
    }
    for(c = 7; c >= 0; c--)
    {
        k = frac >> c;
        if( k & 1)
            printf("1");
        else
            printf("0");
    }
    //return result;
}
My idea is to 'recreate' the 16-bit sequence from these fields by concatenating them as above, but if I want to use the result in a further application I am unable to do so. Is there a way to store the final result (the 16-bit sequence) in a variable after everything has been printed, so it can then be treated as an unsigned integer? Or is there a more optimal way to do this procedure?
While printf will not work in this case (you can't 'store' its result), you can use sprintf.
int sprintf ( char * output_str, const char * format, ... );
sprintf writes formatted data to a string.
It composes a string with the same text that would be printed if format was used with printf, but instead of being printed (or displayed on the console), the content is stored as a C string in the buffer pointed to by output_str.
The size of the buffer should be large enough to contain the entire resulting string. See Buffer Overflow.
A terminating null character (\0) will automatically be appended at the end of your output_str.
From output_str to an integer variable
You can use the atoi function to do this. You can get your answer in an integer variable like this:
int i = atoi (output_str);
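For illustration, a minimal sketch of the buffer idea applied to the loops above (hypothetical stand-in values; note that if the buffer ends up holding binary digits, strtol with base 2, rather than atoi, recovers the numeric value of that bit pattern):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char bits[17] = {0};          /* room for up to 16 binary digits + '\0' */
    int  pos = 0;
    int  theExponent = 0x45;      /* stand-in values for the fields */
    int  frac = 0xA3;

    for (int c = 6; c >= 0; c--)  /* 7 exponent bits */
        pos += sprintf(&bits[pos], "%d", (theExponent >> c) & 1);
    for (int c = 7; c >= 0; c--)  /* 8 fraction bits */
        pos += sprintf(&bits[pos], "%d", (frac >> c) & 1);

    long value = strtol(bits, NULL, 2);   /* interpret the digits as base 2 */
    printf("%s -> %ld\n", bits, value);
    return 0;
}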

what does *string - '0' do (string is a char pointer)

What does this code do?
while(*string) {
i = (i << 3) + (i<<1) + (*string -'0');
string++;
}
In particular, the *string - '0' part: does it remove the character value or something?
This subtracts from the character to which string is pointing the ASCII code of the character '0'. So, '0' - '0' gives you 0 and so on and '9' - '0' gives you 9.
The entire loop is basically calculating "manually" the numerical value of the decimal integer in the string string points to.
That's because i << 3 is equivalent to i * 8 and i << 1 is equivalent to i * 2 and (i << 3) + (i<<1) is equivalent to i * 8 + i * 2 or i * 10.
Since the digits 0-9 are guaranteed to be stored contiguously in the character set, subtracting '0' gives the integer value of whichever character digit you have.
Let's say you're using ASCII:
char digit = '6'; //value of 54 in ASCII
int actual = digit - '0'; //'0' is 48 in ASCII, therefore `actual` is 6.
No matter which values the digits have in the character set, since they're contiguous, subtracting the beginning ('0') from the digit will give the digit you're looking for. Note that the same is NOT particularly true for the letters. Look at EBCDIC, for example.
It converts the ASCII values of the characters '0'-'9' to their numerical values.
The ASCII value of the character '0' is 48 and '1' is 49.
So to convert 48-57 ('0'-'9') to 0-9, you just need to subtract 48 from the ASCII value.
That is what your expression *string - '0' is doing.
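A small, self-contained demonstration of the loop (variable names chosen for the example):

#include <stdio.h>

int main(void) {
    const char *string = "1024";
    int i = 0;

    while (*string) {
        /* i*10 computed as i*8 + i*2, then add the current digit's value */
        i = (i << 3) + (i << 1) + (*string - '0');
        string++;
    }
    printf("%d\n", i);   /* prints: 1024 */
    return 0;
}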
