How to convert an int to a series of characters - c

I'm trying to break down an integer with C on an 8-bit microcontroller (a PIC) into its ASCII equivalent characters.
For example:
convert 982 to '9','8','2'
Everything I've come up with so far seems pretty brute force. This is the gist of what I'm doing right now:
if( (10 <= n) && (n < 100) ) {
    // isolate and update the first order of magnitude
    digit_0 = (n % 10);
    // isolate and update the second order of magnitude
    switch( n - (n % 10) ) {
        case 0:
            digit_1 = 0;
            break;
        case 10:
            digit_1 = 1;
            break;
        ...
And then I have another function to just add 0b00110000 (48 decimal) to each of my digits.
I've been having trouble finding any C function to do this for me or doing it well myself.
Thanks in advance for any help!

If sprintf isn't suitable for some reason, something simple like this would work:
char digits[MAX_DIGITS];
int count = 0;

do {
    digits[count++] = n % 10;
    n /= 10;
} while (n > 0);
There'll be count digits, with digits[0] being the least significant.
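If you then want the ASCII characters most-significant-first (as in '9','8','2'), a short follow-up loop over that array is enough (a sketch; putchar stands in for whatever output routine you use on the PIC):

/* Sketch: emit the collected digit values as ASCII, most significant first. */
for (int i = count - 1; i >= 0; i--)
    putchar(digits[i] + '0');   /* 0..9 -> '0'..'9' */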

To do it yourself, you need to perform the operations demonstrated in the sample code below:
#include <stdio.h>

int main (void)
{
    unsigned int x = 512;
    int base_val = 10, digit, i = 0, n = 0;
    char x_str[32], t;

    printf ("\nEnter an unsigned number: ");
    scanf ("%u", &x);
    printf ("\nEnter base: ");
    scanf ("%d", &base_val);

    /* Chop the digits in reverse order and store them in `x_str`;
     * the digits are interpreted in the base denoted by `base_val`
     */
    while (x)
    {
        digit = x % base_val;
        x /= base_val;
        if (digit < 10)
            x_str[n++] = digit + '0';
        else
            x_str[n++] = digit + 'A' - 10; /* handle digits above 9 (base > 10) */
    }
    if (n == 0)
        x_str[n++] = '0';  /* the loop above emits nothing when x == 0 */
    /* Terminate string */
    x_str[n] = '\0';

    /* Reverse string */
    for (i = 0; i < n/2; i++)
    {
        t = x_str[i];
        x_str[i] = x_str[n-i-1];
        x_str[n-i-1] = t;
    }

    printf ("\n%s\n", x_str);
    return 0;
}
The while loop chops the digits out of the integer in the given base and stores them in the array in reverse order. The inner if-else handles bases greater than 10, placing uppercase letters when a digit value is 10 or more. The for loop then reverses the string so the chopped number reads in forward order.
You need to adjust the size of the x_str array to the maximum length you expect; define a macro for it. Note that the above code handles only unsigned integers. For signed integers, first check whether the value is below 0, then put a '-' sign in x_str and convert the magnitude, i.e. apply the above code to -x. This can also be done by checking the sign bit with a mask, but that makes the process dependent on how the integer is stored.
The base_val is the base in which you want to interpret the numbers.
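For the signed case described above, one possible shape (a minimal sketch, base 10 only and untested on a PIC):

#include <stdio.h>

/* Sketch: signed conversion; negation is done in unsigned arithmetic
 * so that INT_MIN does not overflow. */
static void int_to_str(int v, char *out)
{
    char tmp[12];                       /* enough digits for a 32-bit int */
    int n = 0, i = 0;
    unsigned int mag = (v < 0) ? 0u - (unsigned int)v : (unsigned int)v;

    if (v < 0)
        out[i++] = '-';
    do {                                /* do-while so 0 still produces "0" */
        tmp[n++] = (char)('0' + mag % 10u);
        mag /= 10u;
    } while (mag);
    while (n)                           /* copy the digits back in forward order */
        out[i++] = tmp[--n];
    out[i] = '\0';
}

int main(void)
{
    char buf[12];
    int_to_str(-982, buf);
    printf("%s\n", buf);                /* prints -982 */
    return 0;
}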

I answered a question like this a long time ago.
Implementing ftoa
I hope this helps. This applies to both integers and floating point numbers. The concept is simple.
Determine the number of digits (for base 10, use the base-10 logarithm)
Grab the most significant digit by taking the floor of (num / digit_weight)
Subtract digit * weight from the number and divide the weight by the base
Do it again until number == 0 for integers, or until num drops below some tolerance for fp numbers
Since the algorithm processes one digit per iteration, it runs in O(log n) time, so it's pretty reasonable.
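For the integer case, those steps might look roughly like this (a sketch; putchar stands in for whatever output you need):

#include <stdio.h>

/* Sketch of the MSB-first approach described above (integers, base 10). */
static void print_digits_msb_first(unsigned int num)
{
    unsigned int weight = 1;

    /* find the weight of the most significant digit */
    while (num / weight >= 10)
        weight *= 10;

    /* peel off one digit per iteration, highest first */
    while (weight > 0) {
        unsigned int digit = num / weight;   /* floor(num / digit_weight) */
        putchar('0' + (int)digit);
        num -= digit * weight;               /* subtract digit * weight   */
        weight /= 10;                        /* decrease the weight       */
    }
    putchar('\n');
}

int main(void)
{
    print_digits_msb_first(982);             /* prints 982 */
    return 0;
}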

If you are comfortable with PIC16 RISC assembler, then this is very simple, fast, short and effective. Here is a 16-bit unsigned division-by-10 routine to show how to do it.
To get the first (lowest) character of the number into WREG, put the 16-bit unsigned number into RegA1 (high byte) and RegA2 (low byte) and call the routine. To get the next character, call the routine again, and so on until RegA1 and RegA2 are 0.
RegAE res 1
RegA0 res 1
RegA1 res 1
RegA2 res 1
divR16by_c10
;{
;//Input: 16 bit unsigned number as RegA1,2 (RegA2 low byte, RegA1 High byte)
;//Division result in RegA1 and RegA2, and remainder as char in WREG
clrf RegA0
movlw 16 ;//init loop counter
movwf RegAE
lslf RegA2, f
divI16by_c10_
rlf RegA1, f
rlf RegA0, f
movlw 10
subwf RegA0, f
btfsc Carry
bra divI16by_c10_OK
addwfc RegA0, f
bcf Carry
divI16by_c10_OK
rlf RegA2, f
decfsz RegAE, f
bra divI16by_c10_
;//result= W from 0..9
addlw 0x30 ;//convert to char
return
;}
EDIT:
If you want to convert a signed 16-bit value, then check bit 15 first to determine the sign. If negative, write a '-' sign and negate the number in RegA1,2. After that the procedure is the same as for a positive number.
To negate the number you can use the following asm routine:
comf RegA2, f
comf RegA1, f
movlw 0
bsf STATUS, 0 ;//set carry flag
addwfc RegA2, f
addwfc RegA1, f
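For reference, here is roughly what one call to the division routine computes, expressed in C (a sketch only, not PIC-optimized assembly):

#include <stdio.h>

/* Sketch: each call divides the 16-bit value by 10 in place and returns
 * the remainder already converted to an ASCII character, least
 * significant digit first. */
static char next_digit_char(unsigned short *value)
{
    char c = (char)('0' + *value % 10u);
    *value /= 10u;
    return c;
}

int main(void)
{
    unsigned short v = 982;
    do {
        putchar(next_digit_char(&v));   /* prints 2, 8, 9 (LSD first) */
    } while (v != 0);
    putchar('\n');
    return 0;
}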

Related

Array contains garbage values not input values

I am writing a basic program to compute the binary equivalent of a decimal value. I'm storing the individual bits (0 and 1 values) in an array so I can eventually reverse the array and print the binary representation. However, when I print the array contents to check whether the array has been filled properly, I see garbage values, or 0 if the array is initialized as arr[] = {0}.
My code
int main() {
    int i = 0, j = 0, k, decimal, binary = 0, remainder, divider;
    int bin[10];

    printf("Enter decimal value");
    scanf("%d", &decimal);

    while ((decimal != 0) && (i < decimal)) {
        remainder = decimal % 2;
        decimal = decimal / 2;
        bin[i] = remainder;
        j++;
        printf("%d", bin[i]);
    }

    printf("\n%d", j);
    printf("\n%d", bin[0]);
    printf("\n%d", bin[1]);
    printf("\n%d", bin[2]);
    printf("\n%d", bin[3]);
    printf("%d", bin);
    return 0;
}
If you are still having problems with the conversion, it may be helpful to consider a couple of points. First, you are over-thinking the conversion from decimal to binary. For any given integer value, the value is already stored in memory in binary.
For example, when you have the integer 10, the computer stores it as 1010 in memory. So for all practical purposes, all you need to do is read that memory and set your array values to 1 for each bit that is 1 and 0 for each bit that is 0. You can even go one better: since what you are most likely after is the binary representation of the number, there is no need to store the 1s and 0s as full 4-byte integer values in bin. Why not make bin a character array and store the characters '1' or '0' in it, which (when nul-terminated) allows the binary representation to be printed as a simple string?
This provides several benefits. Rather than converting from base 10 to base 2 with the division and modulo calls that requires, you can simply shift decimal to the right by one, check whether the least significant bit is 0 or 1 with a bitwise AND, and store the character '0' or '1' based on the result.
For example, in your case with an integer, you can determine the number of bits required to represent any integer value in binary with sizeof (int) * CHAR_BIT (where CHAR_BIT is a constant provided in limits.h and specifies the number of bits in a character, i.e. a byte). For an integer you could use:
#include <stdio.h>
#include <limits.h> /* for CHAR_BIT */
#define NBITS sizeof(int) * CHAR_BIT /* constant for bits in int */
To store the character representations of the binary number (or you could store the integers 1, 0 if desired), you can simply declare a character array:
char bin[NBITS + 1] = ""; /* declare storage for NBITS + 1 char */
char *p = bin + NBITS; /* initialize to the nul-terminating char */
(the array is initialized to all zeros, and the +1 leaves room for the nul-terminating character so the array can be treated as a string when filled)
Next, as you have discovered, whether you perform the base conversion or the shift-and-AND, the individual bit values come out in reverse order. To handle that, you can simply declare a pointer to the last character in your array and fill the array with 1s and 0s from the back toward the front.
Here too the character array/string representation makes things easier. Having initialized your array to all zeros, you can start writing at the next-to-last character; working from the end toward the beginning ensures you have a nul-terminated string when done. Further, regardless of the number of bits that make up decimal, you are always left with a pointer to the start of the binary representation.
Depending on how you loop over each bit in decimal, you may need to handle the case where decimal = 0; separately. (since you loop while there are bits in decimal, the loop won't execute if decimal = 0;) A simple if can handle the case and your else can simply loop over all bits in decimal:
if (decimal == 0)   /* handle decimal == 0 separately */
    *--p = '0';
else                /* loop shifting decimal right by one until 0 */
    for (; decimal && p > bin; decimal >>= 1)
        *--p = (decimal & 1) ? '1' : '0';   /* decrement p and set
                                             * char to '1' or '0' */
(note: since p was pointing to the nul-terminating character, you must decrement p with the pre-decrement operator (e.g. --p) before dereferencing and assigning the character or value)
All that remains is outputting your binary representation, and if done as above, it is a simple printf ("%s\n", p);. Putting all the pieces together, you could do something like the following:
#include <stdio.h>
#include <limits.h>     /* for CHAR_BIT */

#define NBITS sizeof(int) * CHAR_BIT    /* constant for bits in int */

int main (void) {

    int decimal = 0;
    char bin[NBITS + 1] = "";   /* declare storage for NBITS + 1 char */
    char *p = bin + NBITS;      /* initialize to the nul-terminating char */

    printf ("enter a integer value: ");     /* prompt for input */
    if (scanf ("%d", &decimal) != 1) {      /* validate ALL user input */
        fputs ("error: invalid input.\n", stderr);
        return 1;
    }

    if (decimal == 0)   /* handle decimal == 0 separately */
        *--p = '0';
    else                /* loop shifting decimal right by one until 0 */
        for (; decimal && p > bin; decimal >>= 1)
            *--p = (decimal & 1) ? '1' : '0';   /* decrement p and set
                                                 * char to '1' or '0' */

    printf ("binary: %s\n", p);     /* output the binary string */

    return 0;
}
(note: the comment on validating ALL user input -- especially when using the scanf family of functions. Otherwise you can easily stray off into Undefined Behavior on an accidental entry of something that doesn't begin with a digit)
Example Use/Output
$ ./bin/int2bin
enter a integer value: 0
binary: 0
$ ./bin/int2bin
enter a integer value: 2
binary: 10
$ ./bin/int2bin
enter a integer value: 15
binary: 1111
Two's-complement of negative values:
$ ./bin/int2bin
enter a integer value: -15
binary: 11111111111111111111111111110001
Look things over and let me know if you have any questions, or if you really need bin to be an array of int. Having an integer array holding the individual bit values doesn't make a whole lot of sense, but if that is what you have to do, I'm happy to help.
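If you really do need bin to be an array of int, a minimal variant of the same idea might look like this (a sketch; it stores one 0/1 per element, most significant bit first, and skips leading zero bits):

#include <stdio.h>
#include <limits.h>     /* CHAR_BIT */

#define NBITS (sizeof(int) * CHAR_BIT)

int main (void) {
    int decimal = 10;
    int bin[NBITS];                     /* one int per bit */
    size_t n = 0;

    if (decimal == 0)
        bin[n++] = 0;                   /* special-case zero */
    else {
        unsigned int u = (unsigned int)decimal;
        unsigned int mask = 1u << (NBITS - 1);
        while (mask && !(u & mask))     /* skip leading zero bits */
            mask >>= 1;
        for (; mask; mask >>= 1)        /* store 1/0 for each remaining bit */
            bin[n++] = (u & mask) ? 1 : 0;
    }
    for (size_t i = 0; i < n; i++)
        printf("%d", bin[i]);
    putchar('\n');                      /* 10 -> 1010 */
    return 0;
}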

Decimal to binary using Bitwise operator

#include <stdio.h>

int main()
{
    int decimal_num, c, result;

    printf("Enter an integer in decimal number system\n");
    scanf("%d", &decimal_num);

    for (c = 31; c >= 0; c--)
    {
        result = decimal_num >> c;

        if (result & 1)
            printf("1");
        else
            printf("0");
    }

    printf("\n");
    return 0;
}
This code takes a decimal number and converts it into binary using bitwise operators. I am having a hard time understanding the logic inside the for loop, result = decimal_num >> c, and why it iterates as for (c = 31; c >= 0; c--). I understand the basics of bitwise AND, OR, XOR and NOT, and I know that when an odd number is ANDed with 1 the result is 1, else 0 (because the least significant bit of every odd number is 1).
Here's an explanation of the code:
The program scans the bit representation of the number from left to right, working on each bit. The number is assumed to be 32 bits, hence the for loop runs 32 times.
The first time, the value of c is 31.
Assuming the bit representation of decimal_num initially is
x............................... ( . represents any digit )
decimal_num >> 31 shifts all bits rightwards 31 times, so that the first bit ends up at the right-most position. The result is 0000000000000000000000000000000x. Note that as digits are shifted, 0s are prepended at the left end.
The result is then ANDed with 1 to check whether that bit was 0 or 1, and it is printed accordingly.
0000000000000000000000000000000x & 00000000000000000000000000000001 = 1 if x is one
0000000000000000000000000000000x & 00000000000000000000000000000001 = 0 if x is zero.
Moving on, and checking the second bit when c is 30:
.Y..............................
decimal_num >> 30 results in
000000000000000000000000000000.Y
000000000000000000000000000000.Y & 00000000000000000000000000000001 = 1 if Y is one
000000000000000000000000000000.Y & 00000000000000000000000000000001 = 0 if Y is zero.
We go on printing the results until the last digit.
Hope this helps you understand.
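An equivalent way to write the loop body, which some people find easier to read, is to build a mask with only bit c set instead of shifting the number down (a sketch, with the same 32-bit assumption as the code above):

#include <stdio.h>

int main(void)
{
    int decimal_num, c;

    printf("Enter an integer in decimal number system\n");
    if (scanf("%d", &decimal_num) != 1)
        return 1;

    for (c = 31; c >= 0; c--) {
        unsigned int mask = 1u << c;              /* only bit c set */
        putchar((decimal_num & mask) ? '1' : '0');
    }
    putchar('\n');
    return 0;
}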

sprintf - producing char array from an int in C

I'm doing an assignment for school to swap the bytes in an unsigned long, and return the swapped unsigned long. ex. 0x12345678 -> 0x34127856.
I figured I'd make a char array, use sprintf to insert the long into the char array, and then do the swapping by stepping through the array. I'm pretty familiar with C++, but C seems a little more low-level. I researched a few topics on sprintf and tried to make an array, but I'm not sure why it's not working.
unsigned long swap_bytes(unsigned long n) {
    char new[64];
    sprintf(new, "%l", n);
    printf("Char array is now: %s\n", new);
}
TLDR; The correct approach is at the bottom
Preamble
Issues with what you're doing
First off using sprintf for byte swapping is the wrong approach because
it is a MUCH MUCH slower process than using the mathematical properties of bit operations to perform the byte swapping.
A byte is not a digit in a number. (a wrong assumption that you've made in your approach)
It's even more painful when you don't know the size of your integer (is it 32-bits, 64 bits or what)
The correct approach
Use bit manipulation to swap the bytes (see way way below)
The absolutely incorrect implementation with wrong output (because we're ignoring issue #2 above)
There are many technical reasons why sprintf is much slower, but suffice it to say that moving the contents of memory around is a slow operation, and of course the more data you move around, the slower it gets:
In your case, by changing a number (which sits in one manipulatable 'word' (think of it as a cell)) into its human readable string-equivalence you are doing two things:
You are converting (let's assume a 64-bit CPU) a single number represented by 8 bytes in a single CPU cell (officially, a register) into its human-readable string equivalent and putting it in RAM (memory). Each character in the string takes up at least a byte, so a 16-digit number takes up 16 bytes (rather than 8)
You are then moving these characters around using memory operations (which are slow compared to doing something directly on the CPU, by a factor of about 1000)
Then you're converting the characters back to integers, which is a long and tedious operation
However, since that's the solution that you came up with let's first look at it.
The really wrong code with a really wrong answer
Starting (somewhat) with your code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
unsigned long swap_bytes(unsigned long n) {
int i, l;
char new[64]; /* the fact that you used 64 here told me that you made assumption 2 */
sprintf(new, "%lu", n); /* you forgot the `u` here */
printf("The number is: %s\n", new); /* well it shows up :) */
l = strlen(new);
for(i = 0; i < l; i+=4) {
char tmp[2];
tmp[0] = new[i+2]; /* get next two characters */
tmp[1] = new[i+3];
new[i+2] = new[i];
new[i+3] = new[i+1];
new[i] = tmp[0];
new[i+1] = tmp[1];
}
return strtoul(new, NULL, 10); /* convert new back */
}
/* testing swap byte */
int main() {
/* seems to work: */
printf("Swapping 12345678: %lu\n", swap_bytes(12345678));
/* how about 432? (err not) */
printf("Swapping 432: %lu\n", swap_bytes(432));
}
As you can see, the above is not really byte swapping but character swapping. And any attempt to try and "fix" the above code is nonsensical. For example, how do we deal with an odd number of digits?
Well, I suppose we can pad odd digit counts with a zero:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
unsigned long swap_bytes(unsigned long n) {
int i, l;
char new[64]; /* the fact that you used 64 here told me that you made assumption 2 */
sprintf(new, "%lu", n); /* you forgot the `u` here */
printf("The number is: %s\n", new); /* well it shows up :) */
l = strlen(new);
if(l % 2 == 1) { /* check if l is odd */
printf("adding a pad to make n even digit count");
sprintf(new, "0%lu", n);
l++; /* length has increased */
}
for(i = 0; i < l; i+=4) {
char tmp[2];
tmp[0] = new[i+2]; /* get next two characters */
tmp[1] = new[i+3];
new[i+2] = new[i];
new[i+3] = new[i+1];
new[i] = tmp[0];
new[i+1] = tmp[1];
}
return strtoul(new, NULL, 10); /* convert new back */
}
/* testing swap byte */
int main() {
/* seems to work: */
printf("Swapping 12345678: %lu\n", swap_bytes(12345678));
printf("Swapping 432: %lu\n", swap_bytes(432));
/* how about 432516? (err not) */
printf("Swapping 432516: %lu\n", swap_bytes(432516));
}
Now we run into an issue with numbers whose digit count is not divisible by 4... Do we pad them with zeros on the right, the left, or the middle? err NOT REALLY.
In any event this entire approach is wrong because we're not swapping bytes anyhow, we're swapping characters.
Now what?
So you may be asking
what the heck is my assignment talking about?
Well numbers are represented as bytes in memory, and what the assignment is asking for is for you to get that representation and swap it.
So for example, if we took a number like 12345678, it's actually stored as some sequence of bytes (1 byte == 8 bits). So let's look at the usual math way of representing 12345678 (base 10) in bits (base 2) and then in bytes (via hex, base 16):
(12345678)₁₀ = (101111000110000101001110)₂
Splitting the binary bits into groups of 4 for visual ease gives:
(12345678)₁₀ = (1011 1100 0110 0001 0100 1110)₂
But 4 bits are equal to 1 hex digit (0, 1, 2, 3... 9, A, B...F), so we can convert the bits into nibbles (4-bit hex digits) easily:
(12345678)₁₀ = 1011 | 1100 | 0110 | 0001 | 0100 | 1110
(12345678)₁₀ = B | C | 6 | 1 | 4 | E
But each byte (8 bits) is two nibbles (4 bits), so if we squish this a bit:
(12345678)₁₀ = (BC 61 4E)₁₆
So 12345678 is actually representable in 3 bytes.
However, CPUs have specific sizes for integers, usually powers of two (16-bit, 32-bit, 64-bit, 128-bit, etc.), for a variety of reasons that are beyond the scope of this discussion. And most often a CPU of a particular bit-size (say a 64-bit CPU) will be able to manipulate unsigned integers representable in that bit-size directly, without having to store parts of the number in RAM.
Slight Digression
So let's say we have a 32-bit CPU, and the number is stored somewhere at byte address α in RAM. The CPU could store the number 12345678 as:
>  00    BC    61    4E
>  ↑α    ↑α+1  ↑α+2  ↑α+3
(Figure 1)
Here the most significant part of the number is sitting at the lowest memory address, α.
Or the CPU could store it differently, where the least significant part of the number sits at the lowest memory address:
>  4E    61    BC    00
>  ↑α    ↑α+1  ↑α+2  ↑α+3
(Figure 2)
The byte order in which a CPU stores a number is called its endianness. If the most significant part is at the lowest address, it's called a big-endian CPU (Figure 1), or little-endian if it stores it as in Figure 2.
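If you want to see which of the two layouts your own machine uses, one quick check is to look at the integer's bytes through an unsigned char pointer (a minimal sketch, assuming a 32-bit unsigned int):

#include <stdio.h>

int main(void)
{
    unsigned int n = 12345678;               /* 0x00BC614E */
    unsigned char *p = (unsigned char *)&n;  /* view the same memory as bytes */
    size_t i;

    for (i = 0; i < sizeof n; i++)
        printf("%02X ", p[i]);               /* on a little-endian machine: 4E 61 BC 00 */
    putchar('\n');
    return 0;
}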
Getting the correct answer (the wrong way)
Now that we have an idea of how things may be stored, let's try and pull this out still using sprintf.
We're going to use a couple of tricks here:
we'll convert the number to hexadecimal and zero-pad the string to a fixed width
we'll use printf's (and therefore sprintf's) ability to take the field width from an argument: put a * after the % sign, like so:
printf("%*d", width, num);
If we set our format string to %0*x we get a hex number that's zero padded in output automatically, so:
sprintf(new, "%0*llx", sizeof(n), n);
Our program then becomes:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
unsigned long swap_bytes(unsigned long n) {
int i, l;
char new[64] = "";
sprintf(new, "%0*llx", sizeof(n), n);
printf("The number is: %s\n", new);
l = strlen(new);
for(i = 0; i < l; i+=4) {
char tmp[2];
tmp[0] = new[i+2]; /* get next two characters */
tmp[1] = new[i+3];
new[i+2] = new[i];
new[i+3] = new[i+1];
new[i] = tmp[0];
new[i+1] = tmp[1];
}
return strtoul(new, NULL, 16); /* convert new back */
}
/* testing swap byte */
int main() {
printf("size of unsigned long is %ld\n", sizeof(unsigned long));
printf("Swapping 12345678: %llx\n", swap_bytes(12345678));
/* how about 123456? */
printf("Swapping 123456: %llx\n", swap_bytes(123456));
printf("Swapping 123456: %llx\n", swap_bytes(98899));
}
The output would look something like:
size of unsigned long is 8
The number is: 00bc614e
Swapping 12345678: bc004e61
The number is: 0001e240
Swapping 123456: 10040e2
The number is: 00018253
Swapping 123456: 1005382
Obviously we can change our outputs by using %ld and print the base 10 versions of the numbers, rather than base 16 as is happening above. I'll leave that to you.
Now let's do it the right way
This is however rather terrible, since byte swapping can be done much faster without ever doing the integer to string and string to integer conversion.
Let's see how that's done:
The rather explicit way
Before we go on, just a bit on bit shifting in C:
If I have a number, say 6 (= 110₂), and I shift all the bits to the left by 1, I get 12 (1100₂) (we simply shifted everything to the left, adding zeros on the right as needed)
This is written in C as 6 << 1.
A right shift is similar and can be expressed in C with >>, so if I have a number, say 240 = (11110000)₂, and I right-shift it 4 times I get 15 = (1111)₂; this is expressed as 240 >> 4
Now we have unsigned long integers which are (in my case at least) 64 bits long, or 8 bytes long.
Let's say my number is 12345678, which is (00 00 00 00 00 bc 61 4e)₁₆ in hex at 8 bytes long. If I want to get the value of byte number 2 (counting from 0 at the least significant end, so the byte holding bc), I can extract it by taking the number 0xFF (1111 1111, all bits of a byte set to 1), left shifting it until it lines up with byte 2 (so left shift 2*8 = 16 times), performing a bitwise AND with the number, and then right shifting the result to get rid of the zeros. This is what it looks like:
0xFF << (2 * 8) = 0xFF0000, and 0xFF0000 & 0000 0000 00bc 614e = 0000 0000 00bc 0000
Now right shift:
0000 0000 00bc 0000 >> (2 * 8) = bc
Another (better) way to do it would be to right shift first and then perform a bitwise AND with 0xFF to drop all higher bits:
0000 0000 00bc 614e >> 16 = 0000 0000 0000 00bc, and 0000 0000 0000 00bc & 0xFF = bc
We will use the second way and make a macro using #define. Now we can put the bytes back in swapped positions by left shifting the k-th byte into position k+1 and the (k+1)-st byte into position k.
Here is a sample implementation of this:
#define GET_BYTE(N, B) ((N >> (8 * (B))) & 0xFFUL)
unsigned long swap_bytes(unsigned long n)
{
unsigned long long rv = 0ULL;
int k;
printf("number is %016llx\n", n);
for(k =0 ; k < sizeof(n); k+=2) {
printf("swapping bytes %d[%016lx] and %d[%016lx]\n", k, GET_BYTE(n, k),
k+1, GET_BYTE(n, k+1));
rv += GET_BYTE(n, k) << 8*(k+1);
rv += GET_BYTE(n, k+1) << 8*k;
}
return rv;
}
/* testing swap byte */
int main() {
printf("size of unsigned long is: %ld\n", sizeof(unsigned long));
printf("Swapping 12345678: %llx\n", swap_bytes(12345678));
/* how about 123456? */
printf("Swapping 123456: %llx\n", swap_bytes(123456));
printf("Swapping 123456: %llx\n", swap_bytes(98899));
}
But this can be done so much more efficiently. I leave it here for now. We'll come back to using bit blitting and xor swapping later.
Update with GET_BYTE as a function instead of a macro:
#define GET_BYTE(N, B) ((N >> (8 * (B))) & 0xFFUL)
Just for fun we also use a shift operator for multiplying by 8. You can note that left shifting a number by 1 is like multiplying it by 2 (this makes sense since in binary 2 is 10, and multiplying by 10 adds a zero to the end, which is the same as shifting left by one place). So multiplying by 8 (1000₂) is like shifting three places over, basically tacking on 3 zeros (overflows notwithstanding):
unsigned long __inline__ get_byte(const unsigned long n, const unsigned char idx) {
return ((n >> (idx << 3)) & 0xFFUL);
}
Now the really really fun and correct way to do this
Okay so a fast way to swap integers around is to realize that if we have two integers x, and y we can use properties of xor function to swap their values. The basic algorithm is this:
X := X XOR Y
Y := Y XOR X
X := X XOR Y
Now we know that a char is one byte in C. So we can force the compiler to treat the 8 byte integer as a sequence of 1-byte chars (hehe it's a bit of a mind bender considering everything I said about not doing it in sprintf) but this is different. You have to just think about it a bit.
We'll take the memory address of our integer, cast it to a char pointer (char *) and treat the result as an array of chars. Then we'll use the xor function property above to swap the two consecutive array values.
To do this I am going to use a macro (although we could use a function) but using a function will make the code uglier.
One thing you'll note is that there is the use of ?: in XORSWAP below. That's like an if-then-else in C but with expressions rather than statements, so basically (conditional_expression) ? (value_if_true) : (value_if_false) means if conditional_expression is non-zero the result will be value_if_true, otherwise it will be value_if_false. AND it's important not to xor a value with itself because you will always get 0 as a result and clobber the content. So we use the conditional to check if the addresses of the values we are changing are DIFFERENT from each other. If the addresses are the same (&a == &b) we simply return the value at the address (&a == &b) ? a : (otherwise_do_xor)
So let's do it:
#include <stdio.h>

/* this macro swaps any two non floating C values that are at
 * DIFFERENT memory addresses. That's the entire &a == &b ? a : ... business
 */
#define XORSWAP(a, b) ((&(a) == &(b)) ? (a) : ((a)^=(b),(b)^=(a),(a)^=(b)))

unsigned long swap_bytes(const unsigned long n) {
    unsigned long rv = n; /* we are not messing with original value */
    int k;
    for (k = 0; k < sizeof(rv); k += 2) {
        /* swap k'th byte with k+1st byte */
        XORSWAP(((char *)&rv)[k], ((char *)&rv)[k+1]);
    }
    return rv;
}

int main()
{
    printf("swapped: %lx", swap_bytes(12345678));
    return 0;
}
Here endeth the lesson. I hope that you will go through all the examples. If you have any more questions just ask in comments and I'll try to elaborate.
unsigned long swap_bytes(unsigned long n) {
    char new[64];
    sprintf(new, "%lu", n);
    printf("Char array is now: %s\n", new);
}
You need to use %lu (long unsigned) as the format in sprintf(); the compiler should also have given you a warning about the mismatched conversion specifier because of this.
To get it to print you need to use %lu (for unsigned long).
It doesn't seem like you attempted the swap; could I see your try?

Get bits from number string

If I have a number string (char array), one digit is one char, so the space needed for a four-digit number is 5 bytes, including the null termination.
unsigned char num[] ="1024";
printf("%d", sizeof(num)); // 5
However, 1024 can be written as
unsigned char binaryNum[2];
binaryNum[0] = 0b00000100;
binaryNum[1] = 0b00000000;
How can the conversion from string to binary be made effectively?
In my program I would work with ≈30 digit numbers, so the space gain would be big.
My goal is to create datapackets to be sent over UDP/TCP.
I would prefer not to use libraries for this task, since the available space the code can take up is small.
EDIT:
Thanks for quick response.
char num = 0b00000100;  // "4"
--------------------------
char num = 0b00011000;  // "24"
-----------------------------
char num[2];
num[0] = 0b00000100;
num[1] = 0b00000000;
// num now contains 1024
I would need ≈10 bytes to contain my number in binary form. So, if I, as suggested, parse the digits one by one, starting from the back, how does that build up to the final big binary number?
In general, converting a number in string representation to an integer is easy because each character can be parsed separately. E.g. to convert "1024" to 1024 you can just look at the '1', convert it to 1, multiply the running total by 10, then convert the '0' and add it, multiply by 10 again, and so on until you have parsed the whole string.
For binary it is not so easy, e.g. you can convert 4 to 100 and 2 to 010, but 42 is not 100 010 or 110 or something like that. So your best bet is to convert the whole thing to a number and then convert that number to binary using mathematical operations (bit shifts and such). This will work fine for numbers that fit in one of the built-in integer types, but if you want to handle arbitrarily large numbers you will need a BigInteger class, which seems to be a problem for you since the code has to be small.
From your question I gather that you want to compress the string representation in order to transmit the number over a network, so I am offering a solution that does not strictly convert to binary but will still use fewer bytes than the string representation and is easy to use. It is based on the fact that you can store a number 0..9 in 4 bits, and so you can fit two of those numbers in a byte. Hence you can store an n-digit number in n/2 bytes. The algorithm could be as follows:
Take the last character, '4'
Subtract '0' to get 4 (i.e. an int with value 4).
Strip the last character.
Repeat to get 2
Concatenate into a single byte: digits[0] = (4 << 4) + 2.
Do the same for the next two digits: digits[1] = (0 << 4) + 1.
Your representation in memory will now look like
   4    2    0    1
0100 0010 0000 0001
 digits[0]  digits[1]
i.e.
digits = { 66, 1 }
This is not quite the binary representation of 1024, but it is shorter and it allows you to easily recover the original number by reversing the algorithm.
You even have six values left that you don't use for storing digits (1010 through 1111), which you can use for other things like storing the sign, a decimal point, byte order or an end-of-number delimiter.
I trust that you will be able to implement this, should you choose to use it.
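One possible shape of that two-digits-per-byte packing in C (a sketch; pack_digits is a made-up name, and it packs the digits starting from the last, least significant one, exactly as described above):

#include <stdio.h>
#include <string.h>

/* Sketch: pack two decimal digits per byte, last digit first.
 * Returns the number of bytes written to out. */
static size_t pack_digits(const char *num, unsigned char *out)
{
    size_t len = strlen(num);
    size_t nbytes = 0;

    while (len > 0) {
        unsigned char hi = (unsigned char)(num[--len] - '0');                   /* last remaining digit */
        unsigned char lo = (len > 0) ? (unsigned char)(num[--len] - '0') : 0;   /* the one before it    */
        out[nbytes++] = (unsigned char)((hi << 4) | lo);
    }
    return nbytes;
}

int main(void)
{
    unsigned char packed[16];
    size_t n = pack_digits("1024", packed);

    for (size_t i = 0; i < n; i++)
        printf("%d ", packed[i]);     /* prints: 66 1 */
    putchar('\n');
    return 0;
}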
If I understand your question correctly, you would want to do this:
Convert your string representation into an integer.
Convert the integer into binary representation.
For step 1:
You could loop through the string
Subtract '0' from the char
Multiply by 10^n (depending on the position) and add to a sum.
For step 2 (for int x), in general:
x%2 gives you the least-significant-bit (LSB).
x /= 2 "removes" the LSB.
For example, take x = 6.
x%2 = 0 (LSB), x /= 2 -> x becomes 3
x%2 = 1, x /= 2 -> x becomes 1
x%2 = 1 (MSB), x /= 2 -> x becomes 0.
So we see that (6)decimal == (110)binary.
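Step 1 (string to integer) might look like this, as a sketch that assumes the string holds only decimal digits:

#include <stdio.h>

/* Sketch: convert a decimal digit string to an unsigned int (step 1 above).
 * Walking left to right and multiplying the running total by 10 for each
 * new digit is the same as the 10^n weighting described above. */
static unsigned int str_to_uint(const char *s)
{
    unsigned int x = 0;
    while (*s >= '0' && *s <= '9') {
        x = x * 10u + (unsigned int)(*s - '0');
        s++;
    }
    return x;
}

int main(void)
{
    printf("%u\n", str_to_uint("1024"));   /* prints 1024 */
    return 0;
}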
On to the implementation (for N=2, where N is maximum number of bytes):
int x = 1024;
int n = -1, p = 0, p_ = 0, i = 0, ex = 1; // you can use smaller int types for these if you are strict on memory usage
unsigned char num[N] = {0};

for (p = 0; p < (N*8); p++, p_++) {
    if (p % 8 == 0) { n++; p_ = 0; }  // every 8 bits: 1) store results in the next element of the array, 2) reset the placing (start at 2^0 again)
    for (i = 0; i < p_; i++) ex *= 2; // ex = pow(2, p_); without using the math.h library
    num[n] += ex * (x % 2);           // add (2^p_ x LSB) to num[n]
    x /= 2;                           // "remove" the last bit to check the next one
    ex = 1;                           // reset the exponent
}
We can check the result for x = 1024:
for (i = 0; i < N; i++)
    printf("num[%d] = %d\n", i, num[i]); // num[0] = 0 (0b00000000), num[1] = 4 (0b00000100)
To convert an up-to-30-digit decimal number, represented as a string, into a series of bytes (effectively a base-256 representation) takes up to 13 bytes (the ceiling of 30/log10(256)).
Simple algorithm
dest = 0
for each digit of the string (starting with most significant)
    dest *= 10
    dest += digit
As C code
#include <ctype.h>   /* isdigit */
#include <string.h>  /* memset  */

#define STR_DEC_TO_BIN_N 13

unsigned char *str_dec_to_bin(unsigned char dest[STR_DEC_TO_BIN_N], const char *src) {
    // dest[] = 0
    memset(dest, 0, STR_DEC_TO_BIN_N);

    // for each digit ...
    while (isdigit((unsigned char) *src)) {
        // dest[] = 10*dest[] + *src
        // with dest[0] as the most significant digit
        int sum = *src++ - '0';   // value of this digit; advance to the next character
        for (int i = STR_DEC_TO_BIN_N - 1; i >= 0; i--) {
            sum += dest[i]*10;
            dest[i] = sum % 256;
            sum /= 256;
        }
        // If sum is non-zero, it means dest[] overflowed
        if (sum) {
            return NULL;
        }
    }

    // If stopped on something other than the null character ....
    if (*src) {
        return NULL;
    }
    return dest;
}
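A small way to exercise it might look like this (a sketch that just prints the resulting bytes in hex, compiled together with the function above):

#include <stdio.h>

int main(void)
{
    unsigned char buf[STR_DEC_TO_BIN_N];

    if (str_dec_to_bin(buf, "1024") != NULL) {
        /* 1024 = 0x0400, so this prints eleven 00 bytes followed by 04 00 */
        for (int i = 0; i < STR_DEC_TO_BIN_N; i++)
            printf("%02X ", buf[i]);
        putchar('\n');
    }
    return 0;
}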

Picking good first estimates for Goldschmidt division

I'm calculating fixedpoint reciprocals in Q22.10 with Goldschmidt division for use in my software rasterizer on ARM.
This is done by just setting the numerator to 1, i.e the numerator becomes the scalar on the first iteration. To be honest, I'm kind of following the wikipedia algorithm blindly here. The article says that if the denominator is scaled in the half-open range (0.5, 1.0], a good first estimate can be based on the denominator alone: Let F be the estimated scalar and D be the denominator, then F = 2 - D.
But when doing this, I lose a lot of precision. Say if I want to find the reciprocal of 512.00002f. In order to scale the number down, I lose 10 bits of precision in the fraction part, which is shifted out. So, my questions are:
Is there a way to pick a better estimate which does not require normalization? Why? Why not? A mathematical proof of why this is or is not possible would be great.
Also, is it possible to pre-calculate the first estimates so the series converges faster? Right now, it converges after the 4th iteration on average. On ARM this is about ~50 cycles worst case, and that's not taking emulation of clz/bsr into account, nor memory lookups. If it's possible, I'd like to know if doing so increases the error, and by how much.
Here is my testcase. Note: The software implementation of clz on line 13 is from my post here. You can replace it with an intrinsic if you want. clz should return the number of leading zeros, and 32 for the value 0.
#include <stdio.h>
#include <stdint.h>
const unsigned int BASE = 22ULL;
static unsigned int divfp(unsigned int val, int* iter)
{
/* Numerator, denominator, estimate scalar and previous denominator */
unsigned long long N,D,F, DPREV;
int bitpos;
*iter = 1;
D = val;
/* Get the shift amount + is right-shift, - is left-shift. */
bitpos = 31 - clz(val) - BASE;
/* Normalize into the half-range (0.5, 1.0] */
if(0 < bitpos)
D >>= bitpos;
else
D <<= (-bitpos);
/* (FNi / FDi) == (FN(i+1) / FD(i+1)) */
/* F = 2 - D */
F = (2ULL<<BASE) - D;
/* N = F for the first iteration, because the numerator is simply 1.
So don't waste a 64-bit UMULL on a multiply with 1 */
N = F;
D = ((unsigned long long)D*F)>>BASE;
while(1){
DPREV = D;
F = (2<<(BASE)) - D;
D = ((unsigned long long)D*F)>>BASE;
/* Bail when we get the same value for two denominators in a row.
This means that the error is too small to make any further progress. */
if(D == DPREV)
break;
N = ((unsigned long long)N*F)>>BASE;
*iter = *iter + 1;
}
if(0 < bitpos)
N >>= bitpos;
else
N <<= (-bitpos);
return N;
}
int main(int argc, char* argv[])
{
double fv, fa;
int iter;
unsigned int D, result;
sscanf(argv[1], "%lf", &fv);
D = fv*(double)(1<<BASE);
result = divfp(D, &iter);
fa = (double)result / (double)(1UL << BASE);
printf("Value: %8.8lf 1/value: %8.8lf FP value: 0x%.8X\n", fv, fa, result);
printf("iteration: %d\n",iter);
return 0;
}
I could not resist spending an hour on your problem...
This algorithm is described in section 5.5.2 of "Arithmétique des ordinateurs" by Jean-Michel Muller (in French). It is actually a special case of Newton iteration with 1 as the starting point. The book gives a simple formulation of the algorithm to compute N/D, with D normalized in the range [1/2, 1[:
e = 1 - D
Q = N
repeat K times:
    Q = Q * (1 + e)
    e = e * e
The number of correct bits doubles at each iteration. In the case of 32 bits, 4 iterations will be enough. You can also iterate until e becomes too small to modify Q.
Normalization is used because it provides the max number of significant bits in the result. It is also easier to compute the error and number of iterations needed when the inputs are in a known range.
Once your input value is normalized, you don't need to bother with the value of BASE until you have the inverse. You simply have a 32-bit number X normalized in range 0x80000000 to 0xFFFFFFFF, and compute an approximation of Y=2^64/X (Y is at most 2^33).
This simplified algorithm may be implemented for your Q22.10 representation as follows:
// Fixed point inversion
// EB Apr 2010
#include <math.h>
#include <stdio.h>
// Number X is represented by integer I: X = I/2^BASE.
// We have (32-BASE) bits in integral part, and BASE bits in fractional part
#define BASE 22
typedef unsigned int uint32;
typedef unsigned long long int uint64;
// Convert FP to/from double (debug)
double toDouble(uint32 fp) { return fp/(double)(1<<BASE); }
uint32 toFP(double x) { return (int)floor(0.5+x*(1<<BASE)); }
// Return inverse of FP
uint32 inverse(uint32 fp)
{
if (fp == 0) return (uint32)-1; // invalid
// Shift FP to have the most significant bit set
int shl = 0; // normalization shift
uint32 nfp = fp; // normalized FP
while ( (nfp & 0x80000000) == 0 ) { nfp <<= 1; shl++; } // use "clz" instead
uint64 q = 0x100000000ULL; // 2^32
uint64 e = 0x100000000ULL - (uint64)nfp; // 2^32-NFP
int i;
for (i=0;i<4;i++) // iterate
{
// Both multiplications are actually
// 32x32 bits truncated to the 32 high bits
q += (q*e)>>(uint64)32;
e = (e*e)>>(uint64)32;
printf("Q=0x%llx E=0x%llx\n",q,e);
}
// Here, (Q/2^32) is the inverse of (NFP/2^32).
// We have 2^31<=NFP<2^32 and 2^32<Q<=2^33
return (uint32)(q>>(64-2*BASE-shl));
}
int main()
{
double x = 1.234567;
uint32 xx = toFP(x);
uint32 yy = inverse(xx);
double y = toDouble(yy);
printf("X=%f Y=%f X*Y=%f\n",x,y,x*y);
printf("XX=0x%08x YY=0x%08x XX*YY=0x%016llx\n",xx,yy,(uint64)xx*(uint64)yy);
}
As noted in the code, the multiplications are not full 32x32->64 bits. E will become smaller and smaller and fits initially on 32 bits. Q will always be on 34 bits. We take only the high 32 bits of the products.
The derivation of 64-2*BASE-shl is left as an exercise for the reader :-). If it becomes 0 or negative, the result is not representable (the input value is too small).
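(Sketching that derivation, in case it helps: the input represents x = fp / 2^BASE, and after normalization NFP = fp * 2^shl. The loop computes Q ≈ 2^64 / NFP = 2^64 / (fp * 2^shl) = (1/x) * 2^(64-BASE-shl). The fixed-point result we want is (1/x) * 2^BASE, so Q must be divided by 2^(64-BASE-shl-BASE), i.e. shifted right by 64-2*BASE-shl.)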
EDIT. As a follow-up to my comment, here is a second version with an implicit 32-th bit on Q. Both E and Q are now stored on 32 bits:
uint32 inverse2(uint32 fp)
{
if (fp == 0) return (uint32)-1; // invalid
// Shift FP to have the most significant bit set
int shl = 0; // normalization shift for FP
uint32 nfp = fp; // normalized FP
while ( (nfp & 0x80000000) == 0 ) { nfp <<= 1; shl++; } // use "clz" instead
int shr = 64-2*BASE-shl; // normalization shift for Q
if (shr <= 0) return (uint32)-1; // overflow
uint64 e = 1 + (0xFFFFFFFF ^ nfp); // 2^32-NFP, max value is 2^31
uint64 q = e; // 2^32 implicit bit, and implicit first iteration
int i;
for (i=0;i<3;i++) // iterate
{
e = (e*e)>>(uint64)32;
q += e + ((q*e)>>(uint64)32);
}
return (uint32)(q>>shr) + (1<<(32-shr)); // insert implicit bit
}
A couple of ideas for you, though none that solve your problem directly as stated.
Why this algorithm for division? Most divides I've seen on ARM use some variant of
adcs hi, den, hi, lsl #1
subcc hi, hi, den
adcs lo, lo, lo
repeated n bits times with a binary search off of the clz to determine where to start. That's pretty dang fast.
If precision is a big problem, you are not limited to 32/64 bits for your fixed point representation. It'll be a bit slower, but you can do add/adc or sub/sbc to move values across registers. mul/mla are also designed for this kind of work.
Again, not direct answers for you, but possibly a few ideas to go forward with. Seeing the actual ARM code would probably help me a bit as well.
Mads, you are not losing any precision at all. When you divide 512.00002f by 2^10, you merely decrease the exponent of your floating point number by 10. The mantissa remains the same. Of course, unless the exponent hits its minimum value, but that shouldn't happen since you're scaling to (0.5, 1].
EDIT: OK, so you're using a fixed binary point. In that case you should allow a different representation of the denominator in your algorithm. The value of D stays in (0.5, 1] not only at the beginning but throughout the whole calculation (it's easy to prove that x * (2 - x) < 1 for x < 1). So you should represent the denominator with the binary point at base = 32. This way you will have 32 bits of precision all the time.
EDIT: To implement this you'll have to change the following lines of your code:
//bitpos = 31 - clz(val) - BASE;
bitpos = 31 - clz(val) - 31;
...
//F = (2ULL<<BASE) - D;
//N = F;
//D = ((unsigned long long)D*F)>>BASE;
F = -D;
N = F >> (31 - BASE);
D = ((unsigned long long)D*F)>>31;
...
//F = (2<<(BASE)) - D;
//D = ((unsigned long long)D*F)>>BASE;
F = -D;
D = ((unsigned long long)D*F)>>31;
...
//N = ((unsigned long long)N*F)>>BASE;
N = ((unsigned long long)N*F)>>31;
Also in the end you'll have to shift N not by bitpos but some different value which I'm too lazy to figure out right now :).
