C: Converting a very large integer into binary

I am trying to convert decimal numbers into binary. The code works pretty fine (Windows 7, 32-bit, MS VS2010):
#include <stdio.h>
#include <math.h>

int main()
{
    int k, n;
    int binary[100];

    printf("Enter the value in decimal \n ");
    scanf("%d", &k);
    n = (log(k * 1.0) / log(2 * 1.0)) + 1; // total number of binary bits in this decimal
    for (int i = n; i > 0; i--)
    {
        binary[i] = k % 2;
        k /= 2;
    }
    return 0;
}
But the limitation is that it works only for int-sized values, i.e. 32 bits. I want to modify this code so that it works for 2048 bits (decimal numbers containing 617 digits, actually). I am not allowed to use any library.
Can someone give me some pointers on how to proceed to tackle this?
Can someone give an example code snippet, say for 64 bits? Then I can use this to extend to higher values.
Update
1 - As per the suggestions I am trying to use strings. But I am not able to understand how to convert a string into a large int (I can't use stoi() as this will convert to a 32-bit int, right?).
2 - Secondly, I have to find:
log(222121212213212313133123413131313131311313154515441315413451315641314563154134156313461316413415635154613415645156451434)
Is the library function log capable of finding this? If not, what is the solution?

Since you said that you just need some pointers and not the actual answer, here goes:
I am not able to understand how to convert a string into a large int
That's because you can't. If you want to convert a number that huge to a numerical type, you first need a type that can hold numbers that big. The language doesn't provide you anything wider than long long, which is usually 64 bits (and that only if you can use C99; otherwise you have just long, which is usually narrower than long long). Since your tutor told you not to use any external library, it's a clear sign that s/he wants you to code the solution using only what's available in the language, and perhaps additionally the standard library.
Is the library function log capable of finding this
No. You can't use stoi or log, since both expect arguments of some arithmetic type, and none of the built-in types is big enough to hold numbers this huge. So you have to work entirely with strings (either static or dynamic char buffers).
I understand that you want to use log to deduce the number of digits the binary output would need; but there's another option, which is to not know the number of digits beforehand and to allocate the buffers with some upper bound, so that you needn't re-allocate them further.
Let's take an example.
1. Allocate 3 char buffers: in and out (length of input) and bin (length of input * 4).
2. Copy input to in.
3. While in is not "0" or "1" do; else goto 12.
4. For each element ch in in do; else goto 10.
5. Convert ch to integer i.
6. If is_odd = 1 then i += 10.
7. quot = i / 2.
8. Append quot to out.
9. is_odd = quot % 2; goto 4.
10. If is_odd = 1 append '1' else '0' to bin.
11. Copy out to in, reset out and goto 3.
12. Append in to bin.
13. Print bin in reverse.
When you integer-divide a number by 2, the number of digits of the quotient is always less than or equal to the number of digits of the dividend. So you can allocate in and out with the same size as the input and use them for all iterations. For the bin buffer, the knowledge that each decimal digit won't take more than 4 bits (9 takes a nibble, 1001) helps. So if the input is 10 digits, then 10 * 4 = 40 bytes is the upper limit needed for the bin buffer, and 10 bytes each would be needed for the in and out buffers.
This is a vague write-up of the algorithm; I hope it conveys the idea. I find writing code easier than writing algorithms out properly.
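To make the idea concrete, here is a minimal sketch in C of the same scheme, using plain long division of the digit string by 2 rather than the is_odd bookkeeping above (the buffer sizes and the halve helper are mine, purely illustrative):
#include <stdio.h>
#include <string.h>

#define MAXDIGITS 620   /* enough room for a 617-digit input */

/* Divide the decimal numeral in 'in' by 2, writing the quotient to 'out'.
   Returns the remainder, which is the next binary digit (low bit first). */
static int halve(const char *in, char *out)
{
    int rem = 0, j = 0;
    for (int i = 0; in[i] != '\0'; i++) {
        int d = rem * 10 + (in[i] - '0');
        rem = d % 2;
        if (j > 0 || d / 2 != 0)    /* drop the single possible leading zero */
            out[j++] = '0' + d / 2;
    }
    if (j == 0)                     /* quotient was zero */
        out[j++] = '0';
    out[j] = '\0';
    return rem;
}

int main(void)
{
    char in[MAXDIGITS], out[MAXDIGITS], bin[4 * MAXDIGITS];
    int n = 0;

    printf("Enter the value in decimal\n");
    scanf("%619s", in);
    while (strcmp(in, "0") != 0) {
        bin[n++] = '0' + halve(in, out);  /* collect bits low-to-high */
        strcpy(in, out);
    }
    if (n == 0)
        putchar('0');
    while (n--)                           /* print in reverse, high-to-low */
        putchar(bin[n]);
    putchar('\n');
    return 0;
}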

I'm afraid there are no standard types in C that will allow you to store such a big value with 2048 bits... You can try to read the string from the console (not converting it into an int), and then parse the string into "010101..." on your own.
The approach would be like this:
You should go for "dividing" the string by 2 in each step (for each division by 2 you need to divide every digit of the string by 2, and handle special cases like 11 / 2 => 5). For each step, if the value cannot be divided evenly by 2, you put "1" as the next binary digit, otherwise you put "0". This way you gather the digits '0', '1', '0', '1', etc. one by one. Then finally you need to reverse the order of the digits. A similar approach implemented in C# can be found here: Decimal to binary conversion in c#

Regarding the update:
Grinding it through WolframAlpha gives:
log(222121212213212313133123413131313131311313154515441315413451315641314563154134156313461316413415635154613415645156451434)
is roughly
274.8056791141317511022806994521207149274321589939103691837589..
Test:
Putting it into exp gives:
2.2212121221321231313312341313131313131131315451544131541.. × 10^119
This raises the question about the precision you need.
Note: I assumed you mean the natural logarithm (base e).
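If you need that logarithm inside a program rather than from WolframAlpha: for an n-digit numeral, ln(value) = ln(d1.d2d3...) + (n - 1) * ln(10), so a double-precision approximation can be computed straight from the digit string with no big-integer arithmetic at all. A minimal sketch (the function name is mine):
#include <math.h>
#include <stdio.h>
#include <string.h>

/* Natural log of a huge decimal numeral, straight from its digits:
   ln(d1.d2d3... * 10^(n-1)) = ln(d1.d2d3...) + (n-1) * ln(10). */
static double log_of_decimal_string(const char *s)
{
    size_t n = strlen(s);
    size_t lead = n < 15 ? n : 15;   /* 15 digits fit a double exactly */
    double mantissa = 0.0;
    for (size_t i = 0; i < lead; i++)
        mantissa = mantissa * 10.0 + (s[i] - '0');
    mantissa /= pow(10.0, (double)(lead - 1));  /* scale to d.ddd... */
    return log(mantissa) + (double)(n - 1) * log(10.0);
}

int main(void)
{
    printf("%.15g\n", log_of_decimal_string(
        "222121212213212313133123413131313131311313154515441315413451315641314563154134156313461316413415635154613415645156451434"));
    /* prints roughly 274.805679114132, matching the value above */
}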

Related

Making a function for finding number of digits

I am currently working on a CS50 course and I am trying to make a function that can give me the number of digits in a number that I put in. For example, the number 10323 will be 5 digits. I wrote code for this but it seems like it doesn't work for cases above 10 digits. Can I know what is wrong with this code?
P.S.: CS50 uses a modified C language for beginners. The language may look a little different, but I think it's the math that is the problem here, so there should be not much difficulty in looking at my code.
int digit(int x) //function gives digit of a number
{
    if (x == 0)
    {
        return 0;
    }
    else
    {
        int dig = 0;
        int n = 1;
        int y;
        do
        {
            y = x / n;
            dig++;
            n = expo(10, dig);
        }
        while (y < 0 || y >= 10);
        return dig;
    }
}
You didn't supply a definition for the function expo(), so it's not possible to say why the digit() function isn't working.
However, you're working with int variables. The specification of the size of the int type is implementation-dependent. Different compilers can have different sized ints. And even a given compiler can have different sizes depending on compilation options.
If the particular compiler your CS50 class is using has 16-bit ints (not likely these days but theoretically possible), those values will go from 0 (0x0000) up to 32767 (0x7FFF), and then wrap around to -32768 (0x8000) and up to -1 (0xFFFF). So in that case, your digit function would only handle part of the range up to 5 decimal digits.
If your compiler uses 32-bit ints, then your ints would go from 0 (0x00000000) up to 2147483647 (0x7FFFFFFF), then wrap around to -2147483648 (0x80000000) and up to -1 (0xFFFFFFFF), thus limited to part of the 10-digit range.
I'm going to go out on a limb and guess that you have 32-bit ints.
You can get an extra bit by using the type unsigned int everywhere that you are saying int. But basically you're going to be limited by the compiler and the implementation.
If you want to get the number of decimal digits in much larger values, you would be well advised to use a string input rather than a numeric input. Then you would just look at the length of the string. For extra credit, you might also strip off leading 0's, maybe drop a leading plus sign, maybe drop commas in the string. And it would be nice to recognize invalid strings with unexpected non-numeric characters. But basically all of this depends on learning those string functions.
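For example, here is a minimal sketch of the string approach in plain C (CS50's get_string would serve equally well; the sign and comma handling mentioned above is omitted):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[128];
    if (scanf("%127s", buf) != 1)
        return 1;
    char *p = buf;
    while (*p == '0' && p[1] != '\0')   /* strip leading zeros */
        p++;
    printf("%zu\n", strlen(p));         /* number of digits */
    return 0;
}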
"while(input>0)
{
input=input/10;
variable++;
}
printf("%i\n",variable);"
link an input to this.

How does a float get converted to scientific notation for storage?

http://www.cs.yale.edu/homes/aspnes/pinewiki/C(2f)FloatingPoint.html
I was looking into why there are sometimes rounding issues when storing a float. I read the above link, and see that floats are converted to scientific notation.
https://babbage.cs.qc.cuny.edu/IEEE-754/index.xhtml
Base is always 2. So, 8 is stored as 1 * 2^3. 9 is stored as 1.001 * 2^3.
What is the math algorithm to determine the mantissa/significand and exponent?
Here is C++ code to convert a decimal string to a binary floating-point value. Although the question is tagged C, I presume the question is more about the algorithm and calculations than the programming language.
The DecimalToFloat class is constructed with a string that contains solely decimal digits and a decimal point (a period, at most one). In its constructor, it shows how to use elementary school multiplication and long division to convert the number from decimal to binary. This demonstrates the fundamental concepts using elementary arithmetic. Real implementations of decimal-to-floating-point conversion in commercial software use algorithms that are faster and more complicated. They involve prepared tables, analysis, and proofs, and are the subjects of academic papers. A significant problem of quality implementations of decimal-to-binary-floating-point conversion is getting the rounding correct. The disparate nature of powers of ten and powers of two (both positive and negative powers) makes it tricky to correctly determine when some values are above or below a point where rounding changes. Normally, when we are parsing something like 123e300, we want to figure out the binary floating-point result without actually calculating 10^300. That is a much more extensive subject.
The GetValue routine finishes the preparation of the number, taking the information prepared by the constructor and rounding it to the final floating-point form.
Negative numbers and exponential (scientific) notation are not handled. Handling negative numbers is of course easy. Exponential notation could be accommodated by shifting the input—moving the decimal point right for positive exponents or left for negative exponents. Again, this is not the fastest way to perform the conversion, but it demonstrates fundamental ideas.
/* This code demonstrates conversion of decimal numerals to binary
floating-point values using the round-to-nearest-ties-to-even rule.
Infinities and subnormal values are supported.
The basic idea is to convert the decimal numeral to binary using methods
taught in elementary school. The integer digits are repeatedly divided by
two to extract a string of bits in low-to-high position-value order. Then
sub-integer digits are repeatedly multiplied by two to continue extracting
a string of bits in high-to-low position-value order. Once we have enough
bits to determine the rounding direction or the processing exhausts the
input, the final value is computed.
This code is not (and will not be) designed to be efficient. It
demonstrates the fundamental mathematics and rounding decisions.
*/
#include <algorithm>
#include <limits>
#include <cmath>
#include <cstring>
template<typename Float> class DecimalToFloat
{
private:
    static_assert(std::numeric_limits<Float>::radix == 2,
        "This code requires the floating-point radix to be two.");

    // Abbreviations for parameters describing the floating-point format.
    static const int Digits          = std::numeric_limits<Float>::digits;
    static const int MaximumExponent = std::numeric_limits<Float>::max_exponent;
    static const int MinimumExponent = std::numeric_limits<Float>::min_exponent;

    /* For any rounding rule supported by IEEE 754 for binary floating-point,
       the direction in which a floating-point result should be rounded is
       completely determined by the bit in the position of the least
       significant bit (LSB) of the significand and whether the value of the
       trailing bits is zero, between zero and 1/2 the value of the LSB,
       exactly 1/2 the LSB, or between 1/2 the LSB and 1.

       In particular, for round-to-nearest, ties-to-even, the decision is:

           LSB   Trailing Bits   Direction
           0     0               Down
           0     In (0, 1/2)     Down
           0     1/2             Down
           0     In (1/2, 1)     Up
           1     0               Down
           1     In (0, 1/2)     Down
           1     1/2             Up
           1     In (1/2, 1)     Up

       To determine whether the value of the trailing bits is 0, in (0, 1/2),
       1/2, or in (1/2, 1), it suffices to know the first of the trailing bits
       and whether the remaining bits are zeros or not:

           First   Remaining       Value of Trailing Bits
           0       All zeros       0
           0       Not all zeros   In (0, 1/2)
           1       All zeros       1/2
           1       Not all zeros   In (1/2, 1)

       To capture that information, we maintain two bits in addition to the
       bits in the significand. The first is called the Round bit. It is the
       first bit after the position of the least significant bit in the
       significand. The second is called the Sticky bit. It is set if any
       trailing bit after the first is set.

       The bits for the significand are kept in an array along with the Round
       bit and the Sticky bit. The constants below provide array indices for
       locating the LSB, the Round bit, and the Sticky bit in that array.
    */
    static const int LowBit = Digits-1; // Array index for LSB in significand.
    static const int Round  = Digits;   // Array index for rounding bit.
    static const int Sticky = Digits+1; // Array index for sticky bit.

    char *Decimal;        // Work space for the incoming decimal numeral.
    int N;                // Number of bits incorporated so far.
    char Bits[Digits+2];  // Bits for significand plus two for rounding.
    int Exponent;         // Exponent adjustment needed.
    /* PushBitHigh inserts a new bit into the high end of the bits we are
       accumulating for the significand of a floating-point number.

       First, the Round bit is shifted down by incorporating it into the
       Sticky bit, using an OR so that the Sticky bit is set iff any bit
       pushed below the Round bit is set.

       Then all bits from the significand are shifted down one position,
       which moves the least significant bit into the Round position and
       frees up the most significant bit.

       Then the new bit is put into the most significant bit.
    */
    void PushBitHigh(char Bit)
    {
        Bits[Sticky] |= Bits[Round];
        std::memmove(Bits+1, Bits, Digits * sizeof *Bits);
        Bits[0] = Bit;

        ++N;        // Count the number of bits we have put in the significand.
        ++Exponent; // Track the absolute position of the leading bit.
    }

    /* PushBitLow inserts a new bit into the low end of the bits we are
       accumulating for the significand of a floating-point number.

       If we have no previous bits and the new bit is zero, we are just
       processing leading zeros in a number less than 1. These zeros are not
       significant. They tell us the magnitude of the number. We use them
       only to track the exponent that records the position of the leading
       significant bit. (However, the exponent is only allowed to get as
       small as MinimumExponent, after which we must put further bits into
       the significand, forming a subnormal value.)

       If the bit is significant, we record it. If we have not yet filled the
       regular significand and the Round bit, the new bit is recorded in the
       next space. Otherwise, the new bit is incorporated into the Sticky bit
       using an OR so that the Sticky bit is set iff any bit below the Round
       bit is set.
    */
    void PushBitLow(char Bit)
    {
        if (N == 0 && Bit == 0 && MinimumExponent < Exponent)
            --Exponent;
        else if (N < Sticky)
            Bits[N++] = Bit;
        else
            Bits[Sticky] |= Bit;
    }
    /* Determined tells us whether the final value to be produced can be
       determined without any more low bits. This is true if and only if:

           we have all the bits to fill the significand, and

           we have at least one more bit to help determine the rounding, and

           either we know we will round down because the Round bit is 0 or we
           know we will round up because the Round bit is 1 and at least one
           further bit is 1 or the least significant bit is 1.
    */
    bool Determined() const
    {
        if (Digits < N)
            if (Bits[Round])
                return Bits[LowBit] || Bits[Sticky];
            else
                return 1;
        else
            return 0;
    }

    // Get the floating-point value that was parsed from the source numeral.
    Float GetValue() const
    {
        // Decide whether to round up or not.
        bool RoundUp = Bits[Round] && (Bits[LowBit] || Bits[Sticky]);

        /* Now we prepare a floating-point number that contains a significand
           with the bits we received plus, if we are rounding up, one added to
           the least significant bit.
        */

        // Start with the adjustment to the LSB for rounding.
        Float x = RoundUp;

        // Add the significand bits we received.
        for (int i = Digits-1; 0 <= i; --i)
            x = (x + Bits[i]) / 2;

        /* If we rounded up, the addition may have carried out of the
           initial significand. In this case, adjust the scale.
        */
        int e = Exponent;
        if (1 <= x)
        {
            x /= 2;
            ++e;
        }

        // Apply the exponent and return the value.
        return MaximumExponent < e ? INFINITY : std::scalbn(x, e);
    }
public:
    /* Constructor.

       Note that this constructor allocates work space. It is bad form to
       allocate in a constructor, but this code is just to demonstrate the
       mathematics, not to provide a conversion for use in production
       software.
    */
    DecimalToFloat(const char *Source) : N(), Bits(), Exponent()
    {
        // Skip leading zeros.
        while (*Source == '0')
            ++Source;

        size_t s = std::strlen(Source);

        /* Count the number of integer digits (digits before the decimal
           point if it is present or before the end of the string otherwise)
           and calculate the number of digits after the decimal point, if any.
        */
        size_t DigitsBefore = 0;
        while (Source[DigitsBefore] != '.' && Source[DigitsBefore] != 0)
            ++DigitsBefore;
        size_t DigitsAfter = Source[DigitsBefore] == '.' ? s-DigitsBefore-1 : 0;

        /* Allocate space for the integer digits or the sub-integer digits,
           whichever is more numerous.
        */
        Decimal = new char[std::max(DigitsBefore, DigitsAfter)];

        /* Copy the integer digits into our work space, converting them from
           digit characters ('0' to '9') to numbers (0 to 9).
        */
        for (size_t i = 0; i < DigitsBefore; ++i)
            Decimal[i] = Source[i] - '0';

        /* Convert the integer portion of the numeral to binary by repeatedly
           dividing it by two. The remainders form a bit string representing
           a binary numeral for the integer part of the number. They arrive
           in order from low position value to high position value.

           This conversion continues until the numeral is exhausted (High <
           Low is false) or we see it is so large the result overflows
           (Exponent <= MaximumExponent is false).

           Note that Exponent may exceed MaximumExponent while we have only
           produced 0 bits during the conversion. However, because we skipped
           leading zeros above, we know there is a 1 bit coming. That,
           combined with the excessive Exponent, guarantees the result will
           overflow.
        */
        for (char *High = Decimal, *Low = Decimal + DigitsBefore;
             High < Low && Exponent <= MaximumExponent;)
        {
            // Divide by two.
            char Remainder = 0;
            for (char *p = High; p < Low; ++p)
            {
                /* This is elementary school division: We bring in the
                   remainder from the higher digit position and divide by the
                   divisor. The remainder is kept for the next position, and
                   the quotient becomes the new digit in this position.
                */
                char n = *p + 10*Remainder;
                Remainder = n % 2;
                n /= 2;

                /* As the number becomes smaller, we discard leading zeros:
                   If the new digit is zero and is in the highest position,
                   we discard it and shorten the number we are working with.
                   Otherwise, we record the new digit.
                */
                if (n == 0 && p == High)
                    ++High;
                else
                    *p = n;
            }

            // Push the remainder into the high end of the bits we are accumulating.
            PushBitHigh(Remainder);
        }
        /* Copy the sub-integer digits into our work space, converting them
           from digit characters ('0' to '9') to numbers (0 to 9).

           Then convert the sub-integer portion of the numeral to binary by
           repeatedly multiplying it by two. The carry-outs continue the bit
           string. They arrive in order from high position value to low
           position value.
        */
        for (size_t i = 0; i < DigitsAfter; ++i)
            Decimal[i] = Source[DigitsBefore + 1 + i] - '0';

        for (char *High = Decimal, *Low = Decimal + DigitsAfter;
             High < Low && !Determined();)
        {
            // Multiply by two.
            char Carry = 0;
            for (char *p = Low; High < p--;)
            {
                /* This is elementary school multiplication: We multiply
                   the digit by the multiplicand and add the carry. The
                   result is separated into a single digit (n % 10) and a
                   carry (n / 10).
                */
                char n = *p * 2 + Carry;
                Carry = n / 10;
                n %= 10;

                /* Here we discard trailing zeros: If the new digit is zero
                   and is in the lowest position, we discard it and shorten
                   the numeral we are working with. Otherwise, we record the
                   new digit.
                */
                if (n == 0 && p == Low-1)
                    --Low;
                else
                    *p = n;
            }

            // Push the carry into the low end of the bits we are accumulating.
            PushBitLow(Carry);
        }

        delete [] Decimal;
    }

    // Conversion operator. Returns a Float converted from this object.
    operator Float() const { return GetValue(); }
};
#include <iostream>
#include <cstdio>
#include <cstdlib>

static void Test(const char *Source)
{
    std::cout << "Testing " << Source << ":\n";

    DecimalToFloat<float> x(Source);

    char *end;
    float e = std::strtof(Source, &end);
    float o = x;

    /* Note: The C printf is used here for the %a conversion, which shows the
       bits of floating-point values clearly. If your C++ implementation does
       not support this, this may be replaced by any display of floating-point
       values you desire, such as printing them with all the decimal digits
       needed to distinguish the values.
    */
    std::printf("\t%a, %a.\n", e, o);

    if (e != o)
    {
        std::cout << "\tError, results do not match.\n";
        std::exit(EXIT_FAILURE);
    }
}

int main(void)
{
    Test("0");
    Test("1");
    Test("2");
    Test("3");
    Test(".25");
    Test(".0625");
    Test(".1");
    Test(".2");
    Test(".3");
    Test("3.14");
    Test(".00000001");
    Test("9841234012398123");
    Test("340282346638528859811704183484516925440");
    Test("340282356779733661637539395458142568447");
    Test("340282356779733661637539395458142568448");
    Test(".00000000000000000000000000000000000000000000140129846432481707092372958328991613128026194187651577175706828388979108268586060148663818836212158203125");

    // This should round to the minimum positive (subnormal), as it is just above mid-way.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015626");

    // This should round to zero, as it is mid-way, and the even rule applies.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015625");

    // This should round to zero, as it is just below mid-way.
    Test(".000000000000000000000000000000000000000000000700649232162408535461864791644958065640130970938257885878534141944895541342930300743319094181060791015624");
}
One of the surprising things about a real, practical computer -- surprising to beginning programmers who have been tasked with writing artificial little binary-to-decimal conversion programs, anyway -- is how thoroughly ingrained the binary number system is in an actual computer, and how few and how diffuse any actual binary/decimal conversion routines actually are. In the C world, for example (and if we confine our attention to integers for the moment), there is basically one binary-to-decimal conversion routine, and it's buried inside printf, where the %d directive is processed. There are perhaps three decimal-to-binary converters: atof(), strtol(), and the %d conversion inside scanf. (There might be another one inside the C compiler, where it converts your decimal constants into binary, although the compiler might just call strtol() directly for those, too.)
I bring this all up for background. The question of "what's the actual algorithm for constructing floating-point numbers internally?" is a fair one, and I'd like to think I know the answer, but as I mentioned in the comments, I'm chagrined to discover that I don't, really: I can't describe a clear, crisp "algorithm". I can and will show you some code that gets the job done, but you'll probably find it unsatisfying, as if I'm cheating somehow -- because a number of the interesting details happen more or less automatically, as we'll see.
Basically, I'm going to write a version of the standard library function atof(). Here are my ground rules:
I'm going to assume that the input is a string of characters. (This isn't really an assumption at all; it's a restatement of the original problem, which is to write a version of atof.)
I'm going to assume that we can construct the floating-point number "0.0". (In IEEE 754 and most other formats, it's all-bits-0, so that's not too hard.)
I'm going to assume that we can convert the integers 0-9 to their corresponding floating-point equivalents.
I'm going to assume that we can add and multiply any floating-point numbers we want to. (This is the biggie, although I'll describe those algorithms later.) But on any modern computer, there's almost certainly a floating-point unit that has built-in instructions for the basic floating-point operations like addition and multiplication, so this isn't an unreasonable assumption, either. (But it does end up hiding some of the interesting aspects of the algorithm, passing the buck to the hardware designer to have implemented the instructions correctly.)
I'm going to initially assume that we have access to the standard library functions atoi and pow. This is a pretty big assumption, but again, I'll describe later how we could write those from scratch if we wanted to. I'm also going to assume the existence of the character classification functions in <ctype.h>, especially isdigit().
But that's about it. With those prerequisites, it turns out we can write a fully-functional version of atof() all by ourselves. It might not be fast, and it almost certainly won't have all the right rounding behaviors out at the edges, but it will work pretty well. (I'm even going to handle negative numbers, and exponents.) Here's how it works:
skip leading whitespace
look for '-'
scan digit characters, converting each one to the corresponding digit by subtracting '0' (aka ASCII 48)
accumulate a floating-point number (with no fractional part yet) representing the integer implied by the digits -- the significand -- and this is the real math, multiplying the running accumulation by 10 and adding the next digit
if we see a decimal point, count the number of digits after it
when we're done scanning digits, see if there's an e/E and some more digits indicating an exponent
if necessary, multiply or divide our accumulated number by a power of 10, to take care of digits past the decimal, and/or the explicit exponent.
Here's the code:
#include <ctype.h>
#include <stdlib.h>     /* just for atoi() */
#include <math.h>       /* just for pow() */

#define TRUE 1
#define FALSE 0

double my_atof(const char *str)
{
    const char *p;
    double ret;
    int negflag = FALSE;
    int exp;
    int expflag;

    p = str;

    while(isspace(*p))
        p++;

    if(*p == '-')
    {
        negflag = TRUE;
        p++;
    }

    ret = 0.0;          /* assumption 2 */
    exp = 0;
    expflag = FALSE;

    while(TRUE)
    {
        if(*p == '.')
            expflag = TRUE;
        else if(isdigit(*p))
        {
            int idig = *p - '0';        /* assumption 1 */
            double fdig = idig;         /* assumption 3 */
            ret = 10. * ret + fdig;     /* assumption 4 */
            if(expflag)
                exp--;
        }
        else
            break;
        p++;
    }

    if(*p == 'e' || *p == 'E')
        exp += atoi(p+1);               /* assumption 5a */

    if(exp != 0)
        ret *= pow(10., exp);           /* assumption 5b */

    if(negflag)
        ret = -ret;

    return ret;
}
Before we go further, I encourage you to copy-and-paste this code into a nearby C compiler, and compile it, to convince yourself that I haven't cheated too badly. Here's a little main() to invoke it with:
#include <stdio.h>

int main(int argc, char *argv[])
{
    double d = my_atof(argv[1]);
    printf("%s -> %g\n", argv[1], d);
}
(If you or your IDE aren't comfortable with command-line invocations, you can use fgets or scanf to read the string to hand to my_atof, instead.)
But, I know, your question was "How does 9 get converted to 1.001 * 2^3 ?", and I still haven't really answered that, have I? So let's see if we can find where that happens.
First of all, that bit pattern 1001₂ for 9 came from... nowhere, or everywhere, or it was there all along, or something. The character 9 came in, probably with a bit pattern of 111001₂ (in ASCII). We subtracted 48 = 110000₂, and out popped 1001₂. (Even before doing the subtraction, you can see it hiding there at the end of 111001.)
But then what turned 1001 into 1.001 × 2^3? That was basically my "assumption 3", as embodied in the line
double fdig = idig;
It's easy to write that line in C, so we don't really have to know how it's done, and the compiler probably turns it into a 'convert integer to float' instruction, so the compiler writer doesn't have to know how to do it, either.
But, if we did have to implement that ourselves, at the lowest level, we could. We know we have a single-digit (decimal) number, occupying at most 4 bits. We could stuff those bits into the significand field of our floating-point format, with a fixed exponent (perhaps -3). We might have to deal with the peculiarities of an "implicit 1" bit, and if we didn't want to inadvertently create a denormalized number, we might have to do some more tinkering, but it would be straightforward enough, and relatively easy to get right, because there are only 10 cases to test. (Heck, if we found writing code to do the bit manipulations troublesome, we could even use a 10-entry lookup table.)
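For the curious, here is a hedged sketch of doing exactly that by hand for IEEE 754 single precision, assuming the usual 23-bit significand field and an exponent bias of 127 (the function name is mine, and only the digits 1 through 9 are handled, so the implicit-1 and denormal concerns above don't bite):
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build the IEEE 754 single-precision float for one decimal digit d (1-9)
   by placing its bits into the significand field directly. */
static float digit_to_float(int d)
{
    uint32_t bits = (uint32_t)d;
    int e = 0;
    while (bits >> (e + 1))                           /* find the leading 1 bit */
        e++;
    uint32_t frac = (bits << (23 - e)) & 0x007FFFFF;  /* implicit 1 dropped */
    uint32_t word = (uint32_t)(127 + e) << 23 | frac; /* biased exponent */
    float f;
    memcpy(&f, &word, sizeof f);                      /* reinterpret bit pattern */
    return f;
}

int main(void)
{
    printf("%g\n", digit_to_float(9));   /* prints 9 */
    return 0;
}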
Since 9 is a single-digit number, we're done. But for a multiple-digit number, our next concern is the arithmetic we have to do: multiplying the running sum by 10, and adding in the next digit. How does that work, exactly?
Again, if we're writing a C (or even an assembly language) program, we don't really need to know, because our machine's floating-point 'add' and 'multiply' instructions will do everything for us. But, also again, if we had to do it ourselves, we could. (This answer's getting way too long, so I'm not going to discuss floating-point addition and multiplication algorithms just yet. Maybe farther down.)
Finally, the code as presented so far "cheated" by calling the library functions atoi and pow. I won't have any trouble convincing you that we could have implemented atoi ourselves if we wanted/had to: it's basically just the same digit-accumulation code we already wrote. And pow isn't too hard, either, because in our case we don't need to implement it in full generality: we're always raising to integer powers, so it's straightforward repeated multiplication, and we've already assumed we know how to do multiplication.
(With that said, computing a large power of 10 as part of our decimal-to-binary algorithm is problematic. As @Eric Postpischil noted in his answer, "Normally we want to figure out the binary floating-point result without actually calculating 10^N." Me, since I don't know any better, I'll compute it anyway, but if I wrote my own pow() I'd use the binary exponentiation algorithm, since it's super easy to implement and quite nicely efficient.)
I said I'd discuss floating-point addition and multiplication routines. Suppose you want to add two floating-point numbers. If they happen to have the same exponent, it's easy: add the two significands (and keep the exponent the same), and that's your answer. (How do you add the significands? Well, I assume you have a way to add integers.) If the exponents are different, but relatively close to each other, you can pick the smaller one and add N to it to make it the same as the larger one, while simultaneously shifting the significand to the right by N bits. (You've just created a denormalized number.) Once the exponents are the same, you can add the significands, as before. After the addition, it may be important to renormalize the numbers, that is, to detect if one or more leading bits ended up as 0 and, if so, shift the significand left and decrement the exponent. Finally, if the exponents are too different, such that shifting one significand to the right by N bits would shift it all away, this means that one number is so much smaller than the other that all of it gets lost in the roundoff when adding them.
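Here is a toy sketch of that addition procedure on an unpacked format (a 64-bit significand with the binary point imagined after its top bit, plus a separate exponent); it handles only nonnegative values, and it "rounds" crudely by simply dropping the shifted-out bits:
#include <stdint.h>
#include <stdio.h>

/* Unpacked toy format: value = (sig / 2^63) * 2^exp, so the significand's
   top bit is the "1." bit. Nonnegative values only. */
typedef struct { uint64_t sig; int exp; } ufloat;

static ufloat fp_add(ufloat a, ufloat b)
{
    if (a.exp < b.exp) { ufloat t = a; a = b; b = t; } /* a gets larger exp */
    int shift = a.exp - b.exp;
    uint64_t bsig = shift < 64 ? b.sig >> shift : 0;   /* align; low bits lost */
    ufloat r = { a.sig + bsig, a.exp };
    if (r.sig < a.sig) {               /* carry out: renormalize downward */
        r.sig = (r.sig >> 1) | (1ULL << 63);
        r.exp++;
    }
    return r;
}

int main(void)
{
    ufloat a = { 3ULL << 62, 1 };      /* 1.1 binary * 2^1 = 3 */
    ufloat b = { 1ULL << 63, 0 };      /* 1.0 binary * 2^0 = 1 */
    ufloat r = fp_add(a, b);
    printf("%llx %d\n", (unsigned long long)r.sig, r.exp);
    /* prints 8000000000000000 2, i.e. 1.0 binary * 2^2 = 4 */
    return 0;
}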
Multiplication: Floating-point multiplication is actually somewhat easier than addition. You don't have to worry about matching up the exponents: the final product is basically a new number whose significand is the product of the two significands, and whose exponent is the sum of the two exponents. The only trick is that the product of the two M-bit significands is nominally 2M bits, and you may not have a multiplier that can do that. If the only multiplier you have available maxes out at an M-bit product, you can take your two M-bit significands and literally split them in half by bits:
signif1 = a × 2^(M/2) + b
signif2 = c × 2^(M/2) + d
So by ordinary algebra we have
signif1 × signif2 = ac × 2^M + ad × 2^(M/2) + bc × 2^(M/2) + bd
Each of those partial products ac, ad, etc. is an M-bit product. Multiplying by 2^(M/2) or 2^M is easy, because it's just a left shift. And adding the terms up is something we already know how to do. We actually only care about the upper M bits of the product, so since we're going to throw away the rest, I imagine we could cheat and skip the bd term, since it contributes nothing (although it might end up slightly influencing a properly-rounded result).
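For M = 64, on a machine whose widest multiply instruction yields only a 64-bit result, that splitting looks like this sketch (which keeps the bd term mentioned above, so the carry into the upper half is exact):
#include <stdint.h>
#include <stdio.h>

/* High 64 bits of a 64x64-bit product, from four 32x32 partial products. */
static uint64_t mulhi64(uint64_t s1, uint64_t s2)
{
    uint64_t a = s1 >> 32, b = s1 & 0xFFFFFFFF;   /* s1 = a*2^32 + b */
    uint64_t c = s2 >> 32, d = s2 & 0xFFFFFFFF;   /* s2 = c*2^32 + d */
    uint64_t ac = a * c, ad = a * d, bc = b * c, bd = b * d;
    /* middle column plus the carry out of the low half */
    uint64_t mid = (bd >> 32) + (ad & 0xFFFFFFFF) + (bc & 0xFFFFFFFF);
    return ac + (ad >> 32) + (bc >> 32) + (mid >> 32);
}

int main(void)
{
    /* (2^32+1)^2 = 2^64 + 2^33 + 1, so the high word is 1 */
    printf("%llu\n", (unsigned long long)mulhi64((1ULL << 32) + 1,
                                                 (1ULL << 32) + 1));
    return 0;
}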
But anyway, the details of the addition and multiplication algorithms, and the knowledge they contain about the floating-point representation we're using, end up forming the other half of the answer to the question of the decimal-to-binary "algorithm" you're looking for. If you convert, say, the number 5.703125 using the code I've shown, out will pop the binary floating-point number 1.01101101₂ × 2^2, but nowhere did we explicitly compute that significand 1.01101101 or that exponent 2 -- they both just fell out of all the digitwise multiplications and additions we did.
Finally, if you're still with me, here's a quick and easy integer-power-only pow function using binary exponentiation:
double my_pow(double a, unsigned int b)
{
    double ret = 1;
    double fac = a;

    while(1) {
        if(b & 1)
            ret *= fac;
        b >>= 1;
        if(b == 0)
            break;
        fac *= fac;
    }

    return ret;
}
This is a nifty little algorithm. If we ask it to compute, say, 10^21, it does not multiply 10 by itself 21 times. Instead, it repeatedly squares 10, leading to the exponential sequence 10^1, 10^2, 10^4, 10^8, or rather, 10, 100, 10000, 100000000... Then it looks at the binary representation of 21, namely 10101₂, and selects only the intermediate results 10^1, 10^4, and 10^16 to multiply into its final return value, yielding 10^(1+4+16), or 10^21, as desired. It therefore runs in time O(log2(N)), not O(N).
And, tune in tomorrow for our next exciting episode when we'll go in the opposite direction, writing a binary-to-decimal converter which will require us to do... (ominous chord)
floating point long division!
Here's a completely different answer, that tries to focus on the "algorithm" part of the question. I'll start with the example you asked about, converting the decimal integer 9 to the binary scientific notation number 1.001₂ × 2^3. The algorithm is in two parts: (1) convert the decimal integer 9 to the binary integer 1001₂, and (2) convert that binary integer into binary scientific notation.
Step 1. Convert a decimal integer to a binary integer. (You can skip over this part if you already know it. Also, although this part of the algorithm is going to look perfectly fine, it turns out it's not the sort of thing that's actually used anywhere on a practical binary computer.)
The algorithm is built around a number we're working on, n, and a binary number we're building up, b.
Set n initially to the number we're converting, 9.
Set b to 0.
Compute the remainder when dividing n by 2. In our example, the remainder of 9 ÷ 2 is 1.
The remainder is one bit of our binary number. Tack it on to b. In our example, b is now 1. Also, here we're going to be tacking bits on to b on the left.
Divide n by 2 (discarding the remainder). In our example, n is now 4.
If n is now 0, we're done.
Go back to step 3.
At the end of the first trip through the algorithm, n is 4 and b is 1.
The next trip through the loop will extract the bit 0 (because 4 divided by 2 is 2, remainder 0). So b goes to 01, and n goes to 2.
The next trip through the loop will extract the bit 0 (because 2 divided by 2 is 1, remainder 0). So b goes to 001, and n goes to 1.
The next trip through the loop will extract the bit 1 (because 1 divided by 2 is 0, remainder 1). So b goes to 1001, and n goes to 0.
And since n is now 0, we're done. Meanwhile, we've built up the binary number 1001 in b, as desired.
Here's that example again, in tabular form. At each step, we compute n divided by two (or in C, n/2), and the remainder when dividing n by 2, which in C is n%2. At the next step, n gets replaced by n/2, and the next bit (which is n%2) gets tacked on at the left of b.
step    n     b      n/2   n%2
  0     9     0       4     1
  1     4     1       2     0
  2     2     01      1     0
  3     1     001     0     1
  4     0     1001
Let's run through that again, for the number 25:
step    n     b       n/2   n%2
  0     25    0       12     1
  1     12    1        6     0
  2      6    01       3     0
  3      3    001      1     1
  4      1    1001     0     1
  5      0    11001
You can clearly see that the n column is driven by the n/2 column, because in step 5 of the algorithm as stated we divided n by 2. (In C this would be n = n / 2, or n /= 2.) You can clearly see the binary result appearing (in right-to-left order) in the n%2 column.
So that's one way to convert decimal integers to binary. (As I mentioned, though, it's likely not the way your computer does it. Among other things, the act of tacking a bit on to the left end of b turns out to be rather unorthodox.)
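Still, in C, a minimal sketch of this Step 1, with b kept as a string so that "tacking a bit on at the left" is literal:
#include <stdio.h>
#include <string.h>

int main(void)
{
    int n = 25;
    char b[40] = "";

    do {
        memmove(b + 1, b, strlen(b) + 1);  /* make room at the left */
        b[0] = '0' + n % 2;                /* tack the new bit on */
        n /= 2;
    } while (n != 0);

    printf("%s\n", b);                     /* prints 11001 */
    return 0;
}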
Step 2. Convert a binary integer to a binary number in scientific notation.
Before we begin with this half of the algorithm, it's important to realize that scientific (or "exponential") representations are typically not unique. Returning to decimal for a moment, let's think about the number "one thousand". Most often we'll represent that as 1 × 10^3. But we could also represent it as 10 × 10^2, or 100 × 10^1, or even crazier representations like 10000 × 10^-1, or 0.01 × 10^5.
So, in practice, when we're working in scientific notation, we'll usually set up an additional rule or guideline, stating that we'll try to keep the mantissa (also called the "significand") within a certain range. For base 10, usually the goal is either to keep it in the range 1 ≤ mantissa < 10, or 0.1 ≤ mantissa < 1. That is, we like numbers like 1 × 10^3 or 0.1 × 10^4, but we don't like numbers like 100 × 10^1 or 0.01 × 10^5.
How do we keep our representations in the range we like? What if we've got a number (perhaps the intermediate result of a calculation) that's in a form we don't like? The answer is simple, and it depends on a pattern you've probably already noticed: If you multiply the mantissa by 10, and if you simultaneously subtract 1 from the exponent, you haven't changed the value of the number. Similarly, you can divide the mantissa by 10 and increment the exponent, again without changing anything.
When we convert a scientific-notation number into the form we like, we say we're normalizing the number.
One more thing: since 10^0 is 1, we can preliminarily convert any integer to scientific notation by simply multiplying it by 10^0. That is, 9 is 9 × 10^0, and 25 is 25 × 10^0. If we do it that way we'll usually get a number that's in a form we "don't like" (that is, "nonnormalized"), but now we have an idea of how to fix that.
So let's return to base 2, and the rest of this second half of our algorithm. Everything we've said so far about decimal scientific notation is also true about binary scientific notation, as long as we make the obvious changes of "10" to "2".
To convert the binary integer 1001₂ to binary scientific notation, we first multiply it by 2^0, resulting in: 1001₂ × 2^0. So actually we're almost done, except that this number is nonnormalized.
What's our definition of a normalized base-two scientific notation number? We haven't said, but the requirement is usually that the mantissa is between 0 and 10₂ (that is, between 0 and 2₁₀), or stated another way, that the high-order bit of the mantissa is always 1 (unless the whole number is 0). That is, these mantissas are normalized: 1.001₂, 1.1₂, 1.0₂, 0.0₂. These mantissas are nonnormalized: 10.01₂, 0.001₂.
So to normalize a number, we may need to multiply or divide the mantissa by 2, while incrementing or decrementing the exponent.
Putting this all together in step-by-step form: to convert a binary integer to a binary scientific number:
Multiply the integer by 20: set the mantissa to the number we're converting, and the exponent to 0.
If the number is normalized (if the mantissa is 0, or if its leading bit is 1), we're done.
If the mantissa has more than one bit to the left of the decimal point (really the "radix point" or "binary point"), divide the mantissa by 2, and increment the exponent by 1. Return to step 2.
(This step will never be necessary if the number we started with was an integer.) If the mantissa is nonzero but the bit to the left of the radix point is 0, multiply the mantissa by 2, and decrement the exponent by 1. Return to step 2.
Running this algorithm in tabular form for our number 9, we have:
step    mantissa    exponent
  0     1001.          0
  1     100.1          1
  2     10.01          2
  3     1.001          3
So, if you're still with me, that's how we can convert the decimal integer 9 to the binary scientific notation (or floating-point) number 1.001₂ × 2^3.
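Here is a minimal sketch of this normalization loop in C, for an integer value already sitting in a double:
#include <stdio.h>

int main(void)
{
    double m = 9.0;       /* the binary integer 1001, i.e. 9 * 2^0 */
    int e = 0;

    while (m >= 2.0) {    /* normalize: divide mantissa, bump exponent */
        m /= 2.0;
        e++;
    }
    printf("%g x 2^%d\n", m, e);   /* prints 1.125 x 2^3; 1.125 = 1.001 binary */
    return 0;
}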
And, with all of that said, the algorithm as stated so far only works for decimal integers. What if we wanted to convert, say, the decimal number 1.25 to the binary number 1.01₂ × 2^0, or 34.125 to 1.00010001₂ × 2^5? That's a discussion that will have to wait for another day (or for this other answer), I guess.

Why won't these two functions convert decimals to binary (uint8_t)? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
I have two functions which supposedly can convert decimals to binary, however I can't seem to get them to work if the decimal is above 3 (I get weird negative numbers). I can't fully understand the code in the functions as I've only just started to learn C, however I was hoping someone could tell me if the functions work, or if I am just not doing something right, i.e. should I be using the int32_t type as the value to pass into the function?
uint8_t dec_to_bin(int decimal){
    int n = decimal, i, j, binno = 0, dn;
    dn = n;
    i = 1;
    for(j = n; j > 0; j = j/2)
    {
        binno = binno + (n%2)*i;
        i = i*10;
        n = n/2;
    }
    return binno;
}

uint8_t dec_to_bin2(int decimal){
    long long binaryNumber = 0;
    int remainder, i = 1;

    while (decimal != 0){
        remainder = decimal % 2;
        decimal /= 2;
        binaryNumber += remainder*i;
        i *= 10;
    }
    return binaryNumber;
}
Unfortunately I have no way to find out the values of the binary numbers as a uint8_t type, as I am doing this on a micro-controller and there is no way to debug or print values anywhere (I have posted numerous threads asking how to do this, with no luck). We have however been provided with another function:
int bin_to_dec(uint64_t binary) {
    int result = 0;
    for ( int i = 7; i >= 0; i-- ) {
        result = result * 10 + ((binary >> i) & 1);
    }
    return result;
}
This function converts the binary number back to an integer so I can display it on the screen with a library function (the library function can only display integers or strings). If I pass 10 into either of the decimal-to-binary converter functions, then pass the uint8_t value from either function to the binary-to-decimal converter and print to the LCD, I get -3110. This should just be 1010.
I'm sorry to say, but your dec_to_bin and dec_to_bin2 functions are meaningless. Throw them away. They might -- might -- have a tiny bit of value as a teaching exercise. But if you're trying to write actual code for a microcontroller to actually do something, you don't need these functions, and you don't want these functions. (Also you need to understand why you don't need these functions.)
The problem is not that they're implemented wrongly. They're fundamentally flawed in their very intent.
These functions seem to convert, for example, the integer 5 to the integer 101, and at first glance that might look like "decimal to binary conversion", but it's not. You've just converted the number five to the number one hundred and one.
Let's look at this a different way. If I say
int i = 17;
and if I then call
printf("%d\n", i);
I see the value "17" printed, as I expect. But I can also call
printf("%x\n", i);
and this prints i's value in hexadecimal, so I see "11". Did I just convert i from decimal to hexadecimal? No, I did not! I took the same number, "seventeen", and I printed it out in two different ways: in decimal, and in hexadecimal.
For all practical purposes, unless you're designing the actual hardware a program will run on, it really doesn't make sense to ask what base a number is stored in. A variable like int i is just a number, an integer. (Deep down inside, of course, on a conventional processor we know it's stored in binary all the time.)
The only time it makes sense to explicitly convert a number to binary is if you want to print it out in a human-readable text representation, in binary. In that case, you're converting from an integer to a string. (The string will consist of the characters '0' and '1'.)
So if you want to write a meaningful decimal-to-binary converter (which will actually be an anything-to-binary converter), either have it print the binary number out to the screen, or store it in a string (an array of characters, or char []).
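For example, a minimal sketch of such an anything-to-binary converter (the function name is mine):
#include <stdio.h>

/* Render n as a string of '0'/'1' characters, most significant bit first. */
static void to_binary_string(unsigned int n, char *buf)
{
    int started = 0, j = 0;
    for (int k = 31; k >= 0; k--) {
        int bit = (n >> k) & 1;
        if (bit)
            started = 1;          /* skip leading zeros */
        if (started)
            buf[j++] = '0' + bit;
    }
    if (j == 0)
        buf[j++] = '0';           /* n == 0 */
    buf[j] = '\0';
}

int main(void)
{
    char buf[33];
    to_binary_string(10, buf);
    puts(buf);                    /* prints 1010 */
    return 0;
}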
And if you're in a class where you're being asked to write uint8_t dec_to_bin(int decimal) (or where your instructor is giving you examples of such functions), I can't help you, and I'm afraid you're doomed. Half of what this instructor is teaching you is wrong or badly misleading, and will seriously handicap you from ever being a successful C programmer. Of course I can't tell you which half, because I don't know what other lies you're being taught. Good luck -- you'll need it! -- unlearning these false facts later.
I don't know why you're trying to store binary data as base 10 numbers instead of just printing or storing (as a char[]) the bits of an int (get kth bit of n as (n >> k) & 1 then print/store), but I'll assume it's necessary.
Your solution could be prone to overflowing the uint8_t, as mentioned in the comments. Even a uint64_t can only hold 19 bits of binary data in the format you're using, less than the 32 of typical ints. The return type of your second function is still uint8_t; this might just be a typo, but it means the long long value will be implicitly converted (truncated) on return.
I've written some functions based on yours, but with a little more bit manipulation that work for me.
uint64_t dec_to_bin (int n)
{
    uint64_t result = 0;
    int k, i;

    // 64 bits can only hold 19 (log10(2^64)) bits in this format
    for (i = 0, k = n; k && i < 19; i++) {
        // Because the loop is limited to 19 we shouldn't need to worry about overflowing
        result *= 10;
        result += (n >> i) & 1;
        k /= 2;
    }
    return result;
}

int bin_to_dec (uint64_t n)
{
    int result = 0;

    while (n) {
        result <<= 1;
        result += n % 2;
        n /= 10;
    }
    return result;
}
I tested your functions on the input 43 on my system and got
43 // Original Input
101011 // dec_to_bin2 with long long return type
10010011 // bin_to_dec
With an appropriately sized output, your dec_to_bin2 function does work as expected.
And my functions:
43 // Original Input
110101 // dec_to_bin
43 // bin_to_dec
The endianness may not be what you're expecting, but that can be changed if necessary.

Bit count for all numbers up to 1048576 is wrong [duplicate]

This question already has answers here:
Binary numbers with the same quantity of 0s and 1s
(6 answers)
Closed 8 years ago.
I want to convert all integers below 1048576 to binary and display all numbers which have the same number of bits set as unset. My program works fine when I use a table t of 20 integers, in which case cpt records the correct result.
However, when I use a table t of 40 integers (which means I want the numbers with 20 '1' bits and 20 '0' bits) the counter is set to 1. What is wrong?
#include <stdio.h>

int main(){
    long int a;
    int r, j, i;
    long int aux;
    int z, u;
    long int cpt;
    int t[40];

    for(int k = 0; k < 40; k++) t[k] = 0;
    cpt = 0;
    for(a = 0; a < 1048576; a++){
        j = 0; u = 0; z = 0;
        aux = a;
        do{
            r = aux % 2;
            switch(r){
                case 0 : t[j] = 0;
                         aux = (aux/2);
                         j++;
                         break;
                case 1 : t[j] = 1;
                         aux = ((aux-1)/2);
                         j++;
                         break;
            }
        }while(aux != 0);
        for(i = 0; i < 40; i++){
            if(t[i] == 0) z++;
            else u++;
        }
        if(z == u) cpt++;
    }
    printf("%ld", cpt);
    getchar();
}
Your loop only goes to 1048576, which is 2^20.
Don't you need to loop until 2^40?
Also, note that int may not be 40 bits wide.
Note:
The naive solution to check all numbers doesn't scale well. Perhaps you should consider a smarter solution?
Because only one number in the range [0, 1048576) has exactly as many bits 1 as 0, when counted in your 40 "bit" array.
The flaw in your logic is that you do not examine all numbers in a given range. For instance, when you want to examine all 40-bit integers, you need to iterate until 2^40 and not 2^20.
Lastly, this brute-force solution won't work very well for your problem. Instead, try to consider the pattern that appears when you examine the number of paths from the top-left node proceeding down or right in a small array on a piece of paper. Does one emerge? If you're math-inclined, you will instantly recognise it; otherwise, take a minute to look through the binomial coefficients.
As others have said, the main thing that is wrong is the algorithm you are using (exhaustive search); you would need to loop from 0 to 2^40 - 1 to iterate across the entire set of numbers to be tested (rather than to 2^20 - 1), which would be an impractical number of iterations, not least as there are faster ways.
Consider the maths of the problem: you want a 40 bit field, with 20 bits set to 1. So, you are choosing 20 things from 40. Think about the nCr (combination operator from permutations and combinations); that will give you the link to the binomial coefficients. Now think how you might write an algorithm to go through every combination.
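To compute the count itself: C(40, 20) fits comfortably in 64 bits, and a short sketch can build it exactly, because multiplying before dividing in the order below keeps every intermediate value an integer:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t c = 1;
    for (int i = 1; i <= 20; i++)
        c = c * (20 + i) / i;     /* c = C(20+i, i) after each step */
    printf("%llu\n", (unsigned long long)c);   /* prints 137846528820 */
    return 0;
}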
Your code would also be more comprehensible if it did not have single letter variable names, and had some comments explaining what it was meant to be doing.
If you are having difficulty remembering the bit width of integer types, I suggest you use
#include <stdint.h>
and use types like int64_t which is guaranteed to be 64 bits. See:
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/stdint.h.html
(Note that the OP apparently wanted a list of the numbers with 20 bits set, rather than just the count of such numbers, so merely looking at binomial coefficients is insufficient).
Try using a long long int:
unsigned long long int a;
To speed up and simplify a lot of the code, try also using popcount; it's a function that returns the number of 1-bits in x: http://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html.
// I'm not 100% sure if the popcount will work with long long ints, but you can try.
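For instance, here is a sketch of the counting loop using the builtin (__builtin_popcountll is a GCC/Clang extension, not standard C); run over the 2^20 range from the question, it reproduces the cpt = 1 result explained above:
#include <stdio.h>

int main(void)
{
    long cpt = 0;
    for (unsigned long long a = 0; a < 1048576ULL; a++) {
        int ones = __builtin_popcountll(a);  /* number of 1 bits */
        if (ones == 40 - ones)               /* 20 ones, 20 zeros in 40 bits */
            cpt++;
    }
    printf("%ld\n", cpt);   /* prints 1 */
    return 0;
}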
BTW I have a question about the same Euler problem: Binary numbers with the same quantity of 0s and 1s

Decimal to Binary conversion - I need Help [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Decimal to Binary conversion
I need to convert a 20-digit decimal to binary using C programming. How do I go about it? Really, I am finding it hard. What buffer will I create, because it will be very large; even the calculator can't compute converting 20 digits to binary.
I need suggestions, links and possibly sample codes.
Thanks.
Do you need to convert a decimal string to a binary string or to a value?
Rule of thumb: 10^3 ~= 2^10, therefore 10^20 ~= 2^70 > 64 bits (67 to be accurate).
==> A 64-bit integer will not be enough. You can use a structure with two 64-bit integers (long long in C), or even an 8-bit byte for the upper part and 64 bits for the lower part.
Make sure the lower part is unsigned.
You will need to write code that checks for overflow on the lower part and increases the upper part when that happens. You will also need to use the long division algorithm once you cross the 64-bit line.
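A hedged sketch of that two-word structure, with the "multiply by 10 and add a digit" step carrying overflow from the low word into the high word by hand (the type and function names are mine; the sample numeral is the 20-digit string that another answer below also uses):
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t hi, lo; } u128;

static u128 mul10_add(u128 x, unsigned digit)
{
    uint64_t lo8 = x.lo << 3, lo2 = x.lo << 1;  /* 10*x = 8*x + 2*x */
    u128 r;
    r.lo = lo8 + lo2;
    r.hi = (x.hi << 3) + (x.hi << 1)
         + (x.lo >> 61) + (x.lo >> 63)   /* bits shifted out of the low word */
         + (r.lo < lo8);                 /* carry from lo8 + lo2 */
    uint64_t before = r.lo;
    r.lo += digit;
    r.hi += (r.lo < before);             /* carry from adding the digit */
    return r;
}

int main(void)
{
    u128 v = {0, 0};
    for (const char *p = "23563344324467434533"; *p; p++)
        v = mul10_add(v, (unsigned)(*p - '0'));
    printf("%llu * 2^64 + %llu\n",
           (unsigned long long)v.hi, (unsigned long long)v.lo);
    return 0;
}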
What about using a library for extended precision arithmetic? Try taking a look at http://gmplib.org/
I don't know if you are trying to convert a string of numerical characters into a really big int, or a really big int into a string of 1s and 0s... but in general, you'll be doing something like this:
for (i = 0; i < digits; i++)
{
    bit[i] = (big_number >> i) & 1;
    // or, for the other way around
    // big_number |= (bit[i] << i);
}
the main problem is that there is no builtin type that can store "big_number". So you'll probably be doing it more like this...
uint8_t big_number[10];  // the big number is stored in 10 bytes.
                         // (uint8_t is just "unsigned char")

for (block = 0; block < 10; block++)
{
    for (i = 0; i < 8; i++)
    {
        bit[block*8 + i] = (big_number[block] >> i) & 1;
    }
}
[edit]
To read a string of numerical characters into an int (without using scanf, or atoi, etc.), I would do something like this:
// Supposing I have something like char token[] = "23563344324467434533";
int n = strlen(token);   // number of digits.

big_number = 0;
for (int i = 0; i < n; i++)
{
    big_number += (token[i] - '0') * pow(10, n-i-1);
}
That will work for reading the number, but again, the problem with this is that there is no built-in type to store big_number. You could use a float or a double, and that would get the magnitude of the number correct, but the last few decimal places would be rounded off. If you need perfect precision, you will have to use an arbitrary-precision integer. A google search turns up a few possible libraries to use for that; but I don't have much personal experience with those libraries, so I won't make a recommendation.
Really though, the data type you use depends on what you want to do with the data afterwards. Maybe you need an arbitrary-precision integer, or maybe a double would be exactly what you need; or maybe you can write your own basic data type using the technique I outlined with the blocks of uint8_t, or maybe you're better off just leaving it as a string!
